# Dynamical low-rank tensor approximations to high-dimensional parabolic problems: existence and convergence of spatial discretizations

Markus Bachmayr, Henrik Eisenmann, André Uschmajew

2023-08-31, arXiv:2308.16720v1, http://arxiv.org/abs/2308.16720v1
###### Abstract.

We consider dynamical low-rank approximations to parabolic problems on higher-order tensor manifolds in Hilbert spaces. In addition to existence of solutions and their stability with respect to perturbations to the problem data, we show convergence of spatial discretizations. Our framework accommodates various standard low-rank tensor formats for multivariate functions, including tensor train and hierarchical tensors.

2010 Mathematics Subject Classification: Primary 35K15, 35R01; Secondary 15A69, 65M12

## 1. Introduction

Dynamical low-rank approximation (DLRA) is a nonlinear model for the time evolution of high-dimensional functions on low-dimensional submanifolds. In its simplest form for matrices, as presented in the seminal work [20], one seeks approximate solutions of matrix-valued ODEs \[\dot{X}(t)=F(X(t),t),\quad X(t)\in\mathbb{R}^{M\times N}\] constrained to a low-rank model \[X(t)=U(t)V(t)^{\mathsf{T}},\] where \(U(t)\in\mathbb{R}^{M\times r}\) and \(V(t)\in\mathbb{R}^{N\times r}\). Hence \(X(t)\) has rank at most \(r\) for every time \(t\), which if \(r\ll M,N\) allows for an efficient numerical treatment. From a geometric perspective, by enforcing \(X(t)\) to have rank exactly \(r\), the problem can be formulated as an ODE on the manifold \(\mathcal{M}_{r}\) of matrices of fixed rank \(r\) by projecting the vector field \(F\) onto the tangent space \(T_{X(t)}\mathcal{M}_{r}\) at every time \(t\), \[\dot{X}(t)=P_{X(t)}F(X(t),t). \tag{1.1}\] In this way, assuming a starting value \(X_{0}\) on \(\mathcal{M}_{r}\), the time evolution is automatically constrained to the manifold. This constrained problem admits the time-dependent variational formulation \[\langle\dot{X}(t),Y\rangle=\langle F(X(t),t),Y\rangle\quad\text{for all }Y\in T_{X(t)}\mathcal{M}_{r}, \tag{1.2}\] which in physics is also called the Dirac-Frenkel principle [9, 15]. The state-of-the-art tool for the numerical solution of these equations is based on a splitting of the tangent space projector \(P_{X(t)}\), see [24]. The DLRA approach can also be applied to time-dependent problems where the solution \(X(t)\) is a tensor of size \(N_{1}\times\cdots\times N_{d}\). In this case, various low-rank models are possible, such as the Tucker format, the tensor train format, or hierarchical tensor representations.

As a model problem, we consider a parabolic initial-boundary value problem with a time-dependent coefficient \(B(t)\) on the product domain \(\Omega=(0,1)^{d}\) with homogeneous Dirichlet boundary conditions, \[\partial_{t}u(t)=\nabla\cdot(B(t)\nabla u(t))+f(t),\quad u(0)=u_{0}, \tag{1.4}\] in its weak formulation: find \(u\) with \(u(0)=u_{0}\) such that for almost all \(t\in(0,T)\), \[\langle u^{\prime}(t),v\rangle+a(u(t),v;t)=\langle f(t),v\rangle\quad\text{for all }v\in H^{1}_{0}(\Omega). \tag{1.5}\] Here, by \(\langle\cdot,\cdot\rangle\) we denote the dual pairing on \(H^{-1}(\Omega)\times H^{1}_{0}(\Omega)\), and \(a\colon H^{1}_{0}(\Omega)\times H^{1}_{0}(\Omega)\times[0,T]\to\mathbb{R}\) is the bounded, symmetric and coercive bilinear form \[a(u,v;t)=\int_{\Omega}(B(t)\nabla u(x))\cdot\nabla v(x)\,dx.\] Classical theory provides a unique solution to (1.5); see for example [38, Theorem 23.A] and [39, Theorem 30.A]. A DLRA formulation for (1.5) is obtained by selecting a low-rank model class \[\mathcal{M}\subset L_{2}(\Omega)\] based on the tensor product structure of \(L_{2}(\Omega)=L_{2}(0,1)\otimes\cdots\otimes L_{2}(0,1)\), such as the tensor train format in a Hilbert space setting to be made precise in Section 4, and restricting the test functions in (1.5) to the tangent space of \(\mathcal{M}\) at \(u(t)\).
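To make the projected evolution (1.1) concrete in the matrix case, the following minimal sketch (NumPy; the helper names `tangent_project` and `dlra_euler_step`, the explicit Euler step, and the truncated SVD used as retraction are our illustrative choices, not the projector-splitting integrator of [24]) uses the standard representation \(P_{X}Z=UU^{\mathsf{T}}Z+ZVV^{\mathsf{T}}-UU^{\mathsf{T}}ZVV^{\mathsf{T}}\) of the tangent space projection at \(X=USV^{\mathsf{T}}\in\mathcal{M}_{r}\).

```python
import numpy as np

def tangent_project(U, V, Z):
    # P_X Z = U U^T Z + Z V V^T - U U^T Z V V^T, where the columns of U and V
    # are orthonormal bases of the column and row space of X, respectively
    UtZ = U.T @ Z
    return U @ UtZ + (Z - U @ UtZ) @ (V @ V.T)

def dlra_euler_step(X, F, t, dt, r):
    # one explicit Euler step for X' = P_X F(X, t), cf. (1.1),
    # followed by a rank-r truncated SVD as retraction onto M_r
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    Y = X + dt * tangent_project(U[:, :r], Vt[:r, :].T, F(X, t))
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# toy usage: the vector field F(X, t) = -X contracts X towards zero on M_r
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
for k in range(100):
    X = dlra_euler_step(X, lambda X, t: -X, t=0.01 * k, dt=0.01, r=2)
```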
The resulting problem reads: given \(u_{0}\in\mathcal{M}\) and \(f\in L_{2}(0,T;H^{-1}(\Omega))\), find \[u\in W(0,T;H^{1}_{0}(\Omega),H^{-1}(\Omega))\coloneqq\{u\in L_{2}(0,T;H^{1}_{0}(\Omega))\colon u^{\prime}\in L_{2}(0,T;H^{-1}(\Omega))\}\] such that \(u(t)\in\mathcal{M}\) for all \(t\in(0,T)\) and \[\begin{split}\langle u^{\prime}(t),v\rangle+a(u(t),v;t)&=\langle f(t),v\rangle\quad\text{for all }v\in T_{u(t)}\mathcal{M}\cap H^{1}_{0}(\Omega),\\ u(0)&=u_{0}.\end{split} \tag{1.6}\]

For the matrix case \(d=2\), existence and uniqueness of DLRA solutions on a maximal time interval \((0,T^{*})\) have been shown in [3] under the additional regularity assumptions \(u_{0}\in H^{1}_{0}(\Omega)\) and \(f\in L_{2}(0,T;L_{2}(\Omega))\). The proof given there is carried out in a more general framework of parabolic problems on manifolds in Gelfand triplets and is based on a variational time stepping scheme in Hilbert space. As we will verify in Section 4, this framework also applies to the tensor train model of \(d\)-variate functions, and can analogously be applied to other tree-based low-rank tensor models. To this end, we establish a general concept of tensor manifolds in Hilbert space which allows us to deduce the required manifold properties, such as curvature estimates, from their finite-dimensional counterparts. As a result, we obtain a meaningful and rigorous notion of a continuous DLRA solution for parabolic problems for higher-order tensors based on the results from [3]. However, the existence proof in [3] does not provide the convergence of solutions of spatial semidiscretizations to the continuous solution, and we address this open problem in the present work as well.

Spatial discretizations of low-rank problems can be obtained in a very natural way via tensor products of discretization spaces. By considering finite-dimensional subspaces \(V^{\mu}_{h}\subset H^{1}_{0}(0,1)\) of dimension \(N_{\mu}\), one obtains a DLRA problem analogous to (1.6) by restricting to the tensor product space \[\mathcal{V}_{h}=V^{1}_{h}\otimes\cdots\otimes V^{d}_{h}.\] We hence seek a solution \(u_{h}(t)\in\mathcal{M}\cap\mathcal{V}_{h}\) satisfying \[\begin{split}\langle u^{\prime}_{h}(t),v_{h}\rangle+a(u_{h}(t),v_{h};t)&=\langle f(t),v_{h}\rangle\quad\text{for all }v_{h}\in T_{u_{h}(t)}\mathcal{M}\cap\mathcal{V}_{h},\\ u_{h}(0)&=u_{0,h},\end{split} \tag{1.7}\] where \(u_{0,h}\in\mathcal{M}\cap\mathcal{V}_{h}\) is an approximation of \(u_{0}\). By writing \(u_{h}(t)\) in a tensor product basis of \(\mathcal{V}_{h}\), \[u_{h}(t)=\sum_{i_{1}=1}^{N_{1}}\cdots\sum_{i_{d}=1}^{N_{d}}X(t;i_{1},\ldots,i_{d})\varphi_{i_{1}}^{1}\otimes\cdots\otimes\varphi_{i_{d}}^{d}\,,\] we obtain a DLRA problem for the coefficient tensor \(X(t)\in\mathbb{R}^{N_{1}\times\cdots\times N_{d}}\). The question is then whether for \(h\to 0\), the functions \(u_{h}\) converge to the (unique) solution \(u\) of (1.6).

### Contributions and outline

In this work, we extend the existence and uniqueness result for the DLRA evolution obtained in [3] from the bivariate to the multivariate case. The abstract framework from [3] is recalled in Section 2. This section also contains the new stability estimate Theorem 2.5, which complements the uniqueness result of [3]. The main result of Section 3 is the convergence result for spatial semidiscretizations (Theorem 3.2) in the abstract framework of Gelfand triplets as developed in [3]. In Section 4, we present a general notion of low-rank manifolds in Hilbert spaces and obtain new curvature estimates.
We then apply these results in Section 5 to dynamical low-rank approximations of the model problem (1.6) using the tensor train format. Based on the general setting of Section 4, analogous results can be obtained for other low-rank manifolds; in particular, our results apply directly also to the Tucker format.

## 2. Abstract formulation

For developing the essential aspects of the theory we investigate the DLRA problem in an abstract context as in [3]. We consider a Gelfand triplet \[\mathcal{V}\hookrightarrow\mathcal{H}\cong\mathcal{H}^{*}\hookrightarrow\mathcal{V}^{*}\] of Hilbert spaces, where \(\mathcal{V}\) is compactly embedded in \(\mathcal{H}\). This implies that the embedding is also continuous, that is, \[\|u\|_{\mathcal{H}}\lesssim\|u\|_{\mathcal{V}}. \tag{2.1}\] By \(\langle\cdot,\cdot\rangle:\mathcal{V}^{*}\times\mathcal{V}\to\mathbb{R}\) we denote the dual pairing of \(\mathcal{V}^{*}\) and \(\mathcal{V}\). Note that for \(u\in\mathcal{H}\subset\mathcal{V}^{*}\) and \(v\in\mathcal{V}\subset\mathcal{H}\), the dual pairing and the inner product on \(\mathcal{H}\) coincide, that is, \(\langle u,v\rangle_{\mathcal{H}}=\langle u,v\rangle\). We will frequently identify \(u\in\mathcal{V}\) with an element of \(\mathcal{H}\) and in turn also with an element of \(\mathcal{V}^{*}\).

For every \(t\in[0,T]\), let \(a(\cdot,\cdot;t):\mathcal{V}\times\mathcal{V}\to\mathbb{R}\) be a bilinear form which is assumed to be symmetric, \[a(u,v;t)=a(v,u;t)\quad\text{for all $u,v\in\mathcal{V}$ and $t\in[0,T]$},\] uniformly bounded, \[|a(u,v;t)|\leq\beta\|u\|_{\mathcal{V}}\|v\|_{\mathcal{V}}\quad\text{for all $u,v\in\mathcal{V}$ and $t\in[0,T]$}\] for some \(\beta>0\), and uniformly coercive, \[a(u,u;t)\geq\mu\|u\|_{\mathcal{V}}^{2}\quad\text{for all $u\in\mathcal{V}$ and $t\in[0,T]$}\] for some \(\mu>0\). Under these assumptions, \(a(\cdot,\cdot;t)\) is an inner product on \(\mathcal{V}\) defining an equivalent norm. Furthermore, it defines a bounded operator \[A(t):\mathcal{V}\to\mathcal{V}^{*} \tag{2.2}\] such that \[a(u,v;t)=\langle A(t)u,v\rangle\quad\text{for all $u,v\in\mathcal{V}$}.\] We also assume that \(a(u,v;t)\) is Lipschitz continuous with respect to \(t\). In other words, there exists an \(L\geq 0\) such that \[|a(u,v;t)-a(u,v;s)|\leq L\beta\|u\|_{\mathcal{V}}\|v\|_{\mathcal{V}}|t-s| \tag{2.3}\] for all \(u,v\in\mathcal{V}\) and \(s,t\in[0,T]\), which in the model problem corresponds to the Lipschitz continuity of the function \(t\mapsto B(t)\).

We deal with evolution equations on subsets \(\mathcal{M}\subset\mathcal{H}\) that are submanifolds in the following sense: for every point \(u\in\mathcal{M}\) there exists a closed subspace \(T_{u}\mathcal{M}\subset\mathcal{H}\) such that \(T_{u}\mathcal{M}\) contains all tangent vectors to \(\mathcal{M}\) at \(u\). Here a tangent vector is any \(v\in\mathcal{H}\) for which there exists a (strongly) differentiable curve \(\varphi\colon(-\varepsilon,\varepsilon)\to\mathcal{H}\) (for some \(\varepsilon>0\)) such that \(\varphi(t)\in\mathcal{M}\) for all \(t\) and \[\varphi(0)=u,\quad\varphi^{\prime}(0)=v.\] By \(P_{u}:\mathcal{H}\to T_{u}\mathcal{M}\) we denote the \(\mathcal{H}\)-orthogonal projection onto \(T_{u}\mathcal{M}\). We will also assume that \(\mathcal{M}\cap\mathcal{V}\) is nonempty and that \(T_{u}\mathcal{M}\cap\mathcal{V}\) is nonempty for \(u\in\mathcal{M}\cap\mathcal{V}\). By \(\overline{\mathcal{M}}^{w}\) we denote the weak closure of \(\mathcal{M}\) in \(\mathcal{H}\). The abstract problem takes the following form.
**Problem 2.1**.: In the above setting, given \(f\in L_{2}(0,T;\mathcal{V}^{*})\) and \(u_{0}\in\mathcal{M}\cap\mathcal{H}\), find \[u\in W(0,T;\mathcal{V},\mathcal{V}^{*})\coloneqq\{u\in L_{2}(0,T;\mathcal{V})\colon u^{\prime}\in L_{2}(0,T;\mathcal{V}^{*})\}\] such that for almost all \(t\in[0,T]\), \[u(t) \in\mathcal{M},\] \[\langle u^{\prime}(t),v\rangle+a(u(t),v;t) =\langle f(t),v\rangle\quad\text{for all }v\in T_{u(t)}\mathcal{M}\cap\mathcal{V}, \tag{2.4}\] \[u(0) =u_{0}.\]

### Basic assumptions and existence of solutions

The main challenge of the weak formulation in Problem 2.1 is that according to the Dirac-Frenkel principle, the test functions are from the tangent space only. The existence result in [3] requires several assumptions, including additional regularity of the data as in assumption **A0** below. The other assumptions **A1**-**A4** are abstractions of corresponding properties of the model problem and will be discussed for the tensor train format in Section 4; hence the main results of this paper apply to this setting. The assumptions are the following.

* **A0** (Regularity of data) We have \(f\in L_{2}(0,T;\mathcal{H})\) and \(u_{0}\in\mathcal{M}\cap\mathcal{V}\).
* **A1** (Cone property) \(\mathcal{M}\) is a cone, that is, \(u\in\mathcal{M}\) implies \(su\in\mathcal{M}\) for all \(s>0\).
* **A2** (Curvature bound) For every subset \(\mathcal{M}^{\prime}\) of \(\mathcal{M}\) that is weakly compact in \(\mathcal{H}\), there exists a constant \(\kappa=\kappa(\mathcal{M}^{\prime})\) such that \[\|P_{u}-P_{v}\|_{\mathcal{H}\to\mathcal{H}}\leq\kappa\|u-v\|_{\mathcal{H}}\] and \[\|(I-P_{u})(u-v)\|_{\mathcal{H}}\leq\kappa\|u-v\|_{\mathcal{H}}^{2}\] for all \(u,v\in\mathcal{M}^{\prime}\).
* **A3** (Compatibility of tangent spaces)
    (a) For \(u\in\mathcal{M}\cap\mathcal{V}\) and \(v\in T_{u}\mathcal{M}\cap\mathcal{V}\), an admissible curve with \(\varphi(0)=u\), \(\varphi^{\prime}(0)=v\) can be chosen such that \[\varphi(t)\in\mathcal{M}\cap\mathcal{V}\] for all \(|t|\) small enough.
    (b) If \(u\in\mathcal{M}\cap\mathcal{V}\) and \(v\in\mathcal{V}\) then \(P_{u}v\in T_{u}\mathcal{M}\cap\mathcal{V}\).
* **A4** (Operator splitting) The associated operator \(A(t)\) in (2.2) admits a splitting \[A(t)=A_{1}(t)+A_{2}(t)\] into two uniformly bounded operators \(\mathcal{V}\to\mathcal{V}^{*}\) such that for all \(t\in[0,T]\), all \(u\in\mathcal{M}\cap\mathcal{V}\) and all \(v\in\mathcal{V}\), the following holds:
    (a) "\(A_{1}(t)\) maps to the tangent space": \[\langle A_{1}(t)u,v\rangle=\langle A_{1}(t)u,P_{u}v\rangle.\]
    (b) "\(A_{2}(t)\) is locally bounded from \(\mathcal{M}\cap\mathcal{V}\) to \(\mathcal{H}\)": For every subset \(\mathcal{M}^{\prime}\) of \(\mathcal{M}\) that is weakly compact in \(\mathcal{H}\), there exists \(\gamma=\gamma(\mathcal{M}^{\prime})>0\) such that \[A_{2}(t)u\in\mathcal{H}\quad\text{and}\quad\|A_{2}(t)u\|_{\mathcal{H}}\leq\gamma\|u\|_{\mathcal{V}}^{\eta}\quad\text{for all $u\in\mathcal{M}^{\prime}$}\] with an \(\eta>0\) independent of \(\mathcal{M}^{\prime}\).

The following existence and uniqueness result has been obtained in [3, Theorem 4.3]. Here \(W(0,T;\mathcal{V},\mathcal{H})\) denotes the subspace of \(W(0,T;\mathcal{V},\mathcal{V}^{*})\) with \(u^{\prime}\in L_{2}(0,T;\mathcal{H})\).

**Theorem 2.2**.: _Let the Assumptions_ **A0**_-_**A4** _hold and let \(u_{0}\) have positive \(\mathcal{H}\)-distance from \(\overline{\mathcal{M}}^{w}\setminus\mathcal{M}\)._
_There exist \(T^{*}\in(0,T]\) and \(u\in W(0,T^{*};\mathcal{V},\mathcal{H})\cap L^{\infty}(0,T^{*};\mathcal{V})\) such that \(u\) solves Problem 2.1 on the time interval \([0,T^{*}]\), and its continuous representative \(u\in C(0,T^{*};\mathcal{H})\) satisfies \(u(t)\in\mathcal{M}\) for all \(t\in[0,T^{*})\). Here \(T^{*}\) is maximal for the evolution on \(\mathcal{M}\) in the sense that if \(T^{*}<T\), then_ \[\liminf_{t\to T^{*}}\ \inf_{v\in\overline{\mathcal{M}}^{w}\setminus\mathcal{M}}\|u(t)-v\|_{\mathcal{H}}=0.\] _In either case, \(u\) is the unique solution of Problem 2.1 in \(W(0,T^{*};\mathcal{V},\mathcal{H})\cap L_{\eta}(0,T^{*};\mathcal{V})\)._

_In particular, let \(\sigma=\operatorname{dist}_{\mathcal{H}}(u_{0},\overline{\mathcal{M}}^{w}\setminus\mathcal{M})\); then there exists a constant \(c>0\) such that \(T^{*}\geq\min(\sigma^{2}/c,T)\)._

_The solution satisfies the following estimates:_ \[\|u\|_{L_{2}(0,T^{*};\mathcal{V})}^{2} \leq\|u_{0}\|_{\mathcal{H}}^{2}+C_{1}\|f\|_{L_{2}(0,T^{*};\mathcal{H})}^{2}, \tag{2.5}\] \[\|u^{\prime}\|_{L_{2}(0,T^{*};\mathcal{H})}^{2} \leq C_{2}\left(\|u_{0}\|_{\mathcal{V}}^{2}+\|f\|_{L_{2}(0,T^{*};\mathcal{H})}^{2}\right),\] (2.6) \[\|u\|_{L^{\infty}(0,T^{*};\mathcal{V})}^{2} \leq C_{3}\left(\|u_{0}\|_{\mathcal{V}}^{2}+\|f\|_{L_{2}(0,T^{*};\mathcal{H})}^{2}\right), \tag{2.7}\] _where \(C_{1}\), \(C_{2}\), and \(C_{3}\) are the constants from [3, Lemma 4.4]._

Note that the energy estimates (2.5)-(2.7) are not explicitly stated in [3, Theorem 4.3], but immediately follow from its proof in combination with [3, Lemma 4.4].

**Remark 2.3**.: We can take \(c\) as the right-hand side of (2.6).

**Remark 2.4**.: Our Assumption **A2** is stronger than the one used in [3], which requires the curvature estimates to be valid only for \(\|u-v\|_{\mathcal{H}}\leq\varepsilon\) for some \(\varepsilon>0\) (the constant \(\kappa\) then may depend on \(\varepsilon\)). The stronger assumption made here is used in the proof of Theorem 2.5. As our new curvature estimates in Section 4.1.4 show, this stronger assumption is in fact satisfied for the low-rank manifolds under consideration.

### A stability estimate

As a first extension to the above existence and uniqueness theorem, we now provide a stability estimate that in particular ensures continuity of the solution with respect to the data. This result was obtained in [12]. The proof follows a similar idea as the uniqueness result in [3, Theorem 4.1].

**Theorem 2.5**.: _Let \(u,v\in W(0,T^{*};\mathcal{V},\mathcal{H})\) be two solutions of Problem 2.1 on a time interval \([0,T^{*}]\) corresponding to right-hand sides \(f,g\in L_{2}(0,T;\mathcal{H})\) and initial values \(u_{0},v_{0}\in\mathcal{M}\), respectively. Assume that the continuous representatives \(u,v\in C(0,T^{*};\mathcal{H})\) have values in a weakly compact subset \(\mathcal{M}^{\prime}\subset\mathcal{M}\) (in particular, their \(\mathcal{H}\)-distance to \(\overline{\mathcal{M}}^{w}\setminus\mathcal{M}\) remains bounded from below). Moreover, assume that \(u,v\in L_{\eta}(0,T^{*};\mathcal{V})\), where \(\eta\) is from Assumption_ **A4**_(b)._
_Then for any \(\varepsilon>0\),_ \[\|u(t)-v(t)\|_{\mathcal{H}}^{2}\leq\left(\|u_{0}-v_{0}\|_{\mathcal{H}}^{2}+\frac{1}{\varepsilon}\int_{0}^{t}\|f(s)-g(s)\|_{\mathcal{H}}^{2}\,ds\right)\exp(\Lambda(t)+\varepsilon t),\] _where_ \[\Lambda(t)\coloneqq 2\kappa\int_{0}^{t}\|u^{\prime}(s)\|_{\mathcal{H}}+\|v^{\prime}(s)\|_{\mathcal{H}}+\gamma\left(\|u(s)\|_{\mathcal{V}}^{\eta}+\|v(s)\|_{\mathcal{V}}^{\eta}\right)+\|f(s)\|_{\mathcal{H}}+\|g(s)\|_{\mathcal{H}}\,ds<\infty\] _with \(\kappa=\kappa(\mathcal{M}^{\prime})\) from Assumption_ **A2**_._

Proof.: We use integration by parts in the sense of [38, Proposition 23.23(iv)]. Together with the coercivity of \(A(t)\), and adding and subtracting \(\langle f(t)-g(t),u(t)-v(t)\rangle\), this results in \[\frac{1}{2}\frac{d}{dt}\|u(t)-v(t)\|_{\mathcal{H}}^{2}\leq\langle u^{\prime}(t)-v^{\prime}(t)+A(t)(u(t)-v(t))-f(t)+g(t)+f(t)-g(t),u(t)-v(t)\rangle\] for almost all \(t\in[0,T^{*}]\). We add and subtract (2.4) for the solutions \(u\) and \(v\), tested with \(w=P_{u(t)}(u(t)-v(t))\) and \(w=P_{v(t)}(u(t)-v(t))\), respectively. This results in \[\frac{1}{2}\frac{d}{dt}\|u(t)-v(t)\|_{\mathcal{H}}^{2}\\ \leq\langle f(t)-g(t),u(t)-v(t)\rangle+\langle u^{\prime}(t)+A(t)u(t)-f(t),(\operatorname{id}-P_{u(t)})(u(t)-v(t))\rangle\\ -\langle v^{\prime}(t)+A(t)v(t)-g(t),(\operatorname{id}-P_{v(t)})(u(t)-v(t))\rangle.\] We use Young's inequality to estimate \[\langle f(t)-g(t),u(t)-v(t)\rangle\leq\frac{1}{2\varepsilon}\|f(t)-g(t)\|_{\mathcal{H}}^{2}+\frac{\varepsilon}{2}\|u(t)-v(t)\|_{\mathcal{H}}^{2}\] and Assumption **A4** to get \[\frac{1}{2}\frac{d}{dt}\|u(t)-v(t)\|_{\mathcal{H}}^{2}\\ \leq\left(\|u^{\prime}(t)\|_{\mathcal{H}}+\gamma\|u(t)\|_{\mathcal{V}}^{\eta}+\|f(t)\|_{\mathcal{H}}\right)\|(\operatorname{id}-P_{u(t)})(u(t)-v(t))\|_{\mathcal{H}}\\ +\left(\|v^{\prime}(t)\|_{\mathcal{H}}+\gamma\|v(t)\|_{\mathcal{V}}^{\eta}+\|g(t)\|_{\mathcal{H}}\right)\|(\operatorname{id}-P_{v(t)})(u(t)-v(t))\|_{\mathcal{H}}\\ +\frac{1}{2\varepsilon}\|f(t)-g(t)\|_{\mathcal{H}}^{2}+\frac{\varepsilon}{2}\|u(t)-v(t)\|_{\mathcal{H}}^{2}.\] Finally, Assumption **A2** implies \[\frac{d}{dt}\|u(t)-v(t)\|_{\mathcal{H}}^{2}\leq\bigg{(}2\kappa\Big{(}\|u^{\prime}(t)\|_{\mathcal{H}}+\|v^{\prime}(t)\|_{\mathcal{H}}+\gamma\big{(}\|u(t)\|_{\mathcal{V}}^{\eta}+\|v(t)\|_{\mathcal{V}}^{\eta}\big{)}\\ +\|f(t)\|_{\mathcal{H}}+\|g(t)\|_{\mathcal{H}}\Big{)}+\varepsilon\bigg{)}\|u(t)-v(t)\|_{\mathcal{H}}^{2}+\frac{1}{\varepsilon}\|f(t)-g(t)\|_{\mathcal{H}}^{2}\] and the result follows from Gronwall's lemma; see for example [30, Lemma 2.7]. Here we take into account that \(L_{2}(0,T^{*};\mathcal{H})\subset L_{1}(0,T^{*};\mathcal{H})\).

## 3. Convergence of spatial discretizations

From the perspective of numerical analysis, an important question is whether the unique solution of Problem 2.1 can be obtained as the limit of solutions of spatially discretized problems. We now provide such a result under assumptions on the compatibility of the discrete spaces \(\mathcal{V}_{h}\subset\mathcal{V}\) with \(\mathcal{M}\). The spatially discretized problems are of the following form.
**Problem 3.1**.: Given \(f\in L_{2}(0,T;\mathcal{H})\) and \(u_{0,h}\in\mathcal{M}\cap\mathcal{V}_{h}\), find \(u_{h}\in W(0,T;\mathcal{V},\mathcal{H})\) such that for almost all \(t\in[0,T]\), \[u_{h}(t) \in\mathcal{M}\cap\mathcal{V}_{h},\] \[\langle u_{h}^{\prime}(t),v_{h}\rangle+a(u_{h}(t),v_{h};t) =\langle f(t),v_{h}\rangle\quad\text{for all }v_{h}\in T_{u_{h}(t)}\mathcal{M}\cap\mathcal{V}_{h}, \tag{3.1}\] \[u_{h}(0) =u_{0,h}.\]

We require that the discrete subspaces \(\mathcal{V}_{h}\subset\mathcal{V}\) have the following properties.

* **B1** (Approximation property)
    (a) For every \(v\in\mathcal{V}\), the \(\mathcal{V}\)-orthogonal projections \(v_{h}\in\mathcal{V}_{h}\) satisfy \(\|v-v_{h}\|_{\mathcal{V}}\to 0\) as \(h\searrow 0\).
    (b) For every \(u\in\mathcal{M}\cap\mathcal{V}\) there is a sequence \((u_{h})\) with \(u_{h}\in\mathcal{M}\cap\mathcal{V}_{h}\) such that \(u_{h}\) converges to \(u\) in \(\mathcal{V}\) as \(h\searrow 0\) and \(\|u_{h}\|_{\mathcal{V}}\leq\|u\|_{\mathcal{V}}\).
* **B2** (Compatibility of tangent spaces)
    (a) For \(u_{h}\in\mathcal{M}\cap\mathcal{V}_{h}\) and \(v_{h}\in T_{u_{h}}\mathcal{M}\cap\mathcal{V}_{h}\), a continuously differentiable curve with \(\varphi(0)=u_{h}\), \(\varphi^{\prime}(0)=v_{h}\) can be chosen such that \[\varphi(t)\in\mathcal{M}\cap\mathcal{V}_{h}\] for all \(|t|\) small enough.
    (b) If \(u_{h}\in\mathcal{M}\cap\mathcal{V}_{h}\) and \(v_{h}\in\mathcal{V}_{h}\) then \(P_{u_{h}}v_{h}\in T_{u_{h}}\mathcal{M}\cap\mathcal{V}_{h}\).

The model problem (1.5) allows for such a space discretization, as verified in Section 5.1.3 for tensor train manifolds. The following result was obtained in [12].

**Theorem 3.2**.: _Let the Assumptions_ **A0**_-_**A4** _and_ **B1**_-_**B2** _hold. Let \(u_{0,h}\in\mathcal{M}\cap\mathcal{V}_{h}\) define a sequence that converges to \(u_{0}\) in \(\mathcal{V}\) as \(h\searrow 0\) and let \(u_{0}\) have positive \(\mathcal{H}\)-distance \(\sigma\) to the relative boundary \(\overline{\mathcal{M}}^{\mathsf{w}}\setminus\mathcal{M}\). Then there exist a constant \(c>0\) independent of \(\sigma\) and a constant \(h_{0}>0\) such that for all \(h\leq h_{0}\), there is a unique \(u_{h}\) in \(W(0,T^{*};\mathcal{V},\mathcal{H})\cap L_{\eta}(0,T^{*};\mathcal{V})\) that solves Problem 3.1 on the time interval \([0,T^{*}]\) when \(T^{*}<\sigma^{2}/c\). Furthermore, \(u_{h}\) converges to the unique solution \(u\) of Problem 2.1 in \(W(0,T^{*};\mathcal{V},\mathcal{H})\cap L_{\eta}(0,T^{*};\mathcal{V})\) weakly in \(L_{2}(0,T^{*};\mathcal{V})\) and strongly in \(C(0,T^{*};\mathcal{H})\), while the weak derivatives \(u_{h}^{\prime}\) converge weakly to \(u^{\prime}\) in \(L_{2}(0,T^{*};\mathcal{H})\)._

Proof.: Since \(u_{0,h}\) converges to \(u_{0}\) in \(\mathcal{V}\), there is an \(h_{0}>0\) such that \(\|u_{0,h}-u_{0}\|_{\mathcal{V}}\leq\sigma/2\) and \(\|u_{0,h}-u_{0}\|_{\mathcal{H}}\leq\sigma/2\) for all \(h\leq h_{0}\) due to (2.1). Furthermore, we can choose \(h_{0}\) small enough such that \(\|u_{0,h}\|_{\mathcal{V}}^{2}\leq 2\|u_{0}\|_{\mathcal{V}}^{2}\) and \(\|u_{0,h}\|_{\mathcal{H}}^{2}\leq 2\|u_{0}\|_{\mathcal{H}}^{2}\). Therefore, the \(\mathcal{H}\)-distance of \(u_{0,h}\) from \(\overline{\mathcal{M}}^{\mathsf{w}}\setminus\mathcal{M}\) is at least \(\sigma/2\).
Hence, applying Theorem 2.2 with \(\mathcal{V}_{h}\) in place of \(\mathcal{V}\) provides us with solutions \(u_{h}\) to Problem 3.1 on a time interval \([0,T^{*}]\) with \(T^{*}<\sigma^{2}/(4c)\) for every \(h\leq h_{0}\), where \(c\) can be chosen as the right-hand side of the estimate (3.3) below. Theorem 2.2 provides us with the estimates \[\|u_{h}\|_{L_{2}(0,T^{*};\mathcal{V})}^{2} \leq 2\|u_{0}\|_{\mathcal{H}}^{2}+C_{1}\|f\|_{L_{2}(0,T^{*};\mathcal{H})}^{2}, \tag{3.2}\] \[\|u_{h}^{\prime}\|_{L_{2}(0,T^{*};\mathcal{H})}^{2} \leq C_{2}\left(2\|u_{0}\|_{\mathcal{V}}^{2}+\|f\|_{L_{2}(0,T^{*};\mathcal{H})}^{2}\right),\] (3.3) \[\|u_{h}\|_{L_{\infty}(0,T^{*};\mathcal{V})}^{2} \leq C_{3}\left(2\|u_{0}\|_{\mathcal{V}}^{2}+\|f\|_{L_{2}(0,T^{*};\mathcal{H})}^{2}\right). \tag{3.4}\] Note that by (3.3), we can assume that for \(h\) sufficiently small, \(\|u_{h}(t)-u_{0}\|_{\mathcal{H}}\leq\sigma-\delta\) for a \(\delta>0\). As a consequence, there is a subsequence \((u_{h})\) converging weakly to \(\tilde{u}\) in \(L_{2}(0,T^{*};\mathcal{V})\) and weakly\({}^{*}\) in \(L_{\infty}(0,T^{*};\mathcal{V})\), with the derivatives \((u_{h}^{\prime})\) converging weakly to \(\tilde{w}\) in \(L_{2}(0,T^{*};\mathcal{H})\).

We next show that \(\tilde{w}\) is the weak derivative of \(\tilde{u}\). For this, we need to verify that \[\int_{0}^{T^{*}}\langle\tilde{w}(t),v\rangle\,\phi(t)+\langle\tilde{u}(t),v\rangle\,\phi^{\prime}(t)\,dt=0\] for arbitrary \(v\in\mathcal{V}\) and \(\phi\in C_{0}^{\infty}(0,T^{*})\). For any \(v_{h}\in\mathcal{V}_{h}\) we may add and subtract the weak derivative of \(u_{h}\) to obtain \[\int_{0}^{T^{*}}\langle\tilde{w}(t),v_{h}\rangle\,\phi(t)+\langle\tilde{u}(t),v_{h}\rangle\,\phi^{\prime}(t)\,dt\\ =\int_{0}^{T^{*}}\langle\tilde{w}(t)-u_{h}^{\prime}(t),v_{h}\rangle\,\phi(t)+\langle\tilde{u}(t)-u_{h}(t),v_{h}\rangle\,\phi^{\prime}(t)\,dt.\] Now let \((v_{h})\) be a sequence converging to \(v\) in \(\mathcal{V}\). Then \[\int_{0}^{T^{*}}\langle\tilde{w}(t),v\rangle\,\phi(t)+\langle\tilde{u}(t),v\rangle\,\phi^{\prime}(t)\,dt=\lim_{h\searrow 0}\int_{0}^{T^{*}}\langle\tilde{w}(t),v_{h}\rangle\,\phi(t)+\langle\tilde{u}(t),v_{h}\rangle\,\phi^{\prime}(t)\,dt\\ =\lim_{h\searrow 0}\int_{0}^{T^{*}}\langle\tilde{w}(t)-u_{h}^{\prime}(t),v_{h}\rangle\,\phi(t)+\langle\tilde{u}(t)-u_{h}(t),v_{h}\rangle\,\phi^{\prime}(t)\,dt=0\] since \(v_{h}\phi\) converges strongly to \(v\phi\) in \(L_{2}(0,T^{*};\mathcal{V})\). Therefore, the sequence \((u_{h})\) converges weakly in \(W(0,T^{*};\mathcal{V},\mathcal{H})\) to \(\tilde{u}\). Due to the Aubin-Lions theorem, and by boundedness in \(L_{\infty}(0,T^{*};\mathcal{V})\), it also converges strongly in \(C(0,T^{*};\mathcal{H})\) to \(\tilde{u}\), and \(\tilde{u}(0)=\lim_{h\searrow 0}u_{h}(0)=\lim_{h\searrow 0}u_{0,h}=u_{0}\).

It remains to show that \(\tilde{u}\) satisfies (2.4) and therefore agrees with the unique solution \(u\) of Problem 2.1 in \(W(0,T^{*};\mathcal{V},\mathcal{H})\cap L_{\eta}(0,T^{*};\mathcal{V})\) provided by Theorem 2.2. By a subsequence-of-subsequence argument, it then follows that the entire sequence converges to \(\tilde{u}\). Let \(Q_{h}\) be the \(\mathcal{V}\)-orthogonal projection onto \(\mathcal{V}_{h}\). For \(v\in T_{\tilde{u}(t)}\mathcal{M}\cap\mathcal{V}\), let \(v_{h}=Q_{h}v\), which converges strongly to \(v\) in \(\mathcal{V}\) by Property **B1**(a). This also implies that the sequence is uniformly bounded in \(\mathcal{H}\).
By (3.1), we have \[\langle u_{h}^{\prime}(t),P_{u_{h}(t)}v_{h}\rangle+a(u_{h}(t),P_{u_{h}(t)}v_{h};t)=\langle f(t),P_{u_{h}(t)}v_{h}\rangle\] for almost every \(t\), since \(P_{u_{h}(t)}v_{h}\in T_{u_{h}(t)}\mathcal{M}\cap\mathcal{V}_{h}\) by Property **B2**(b). We have chosen the time interval such that the \(u_{h}(t)\) lie in a weakly compact subset \(\mathcal{M}^{\prime}\subset\mathcal{M}\) for all \(t\in[0,T^{*}]\). Hence, using Assumption **A2**, \[\|v-P_{u_{h}(t)}v_{h}\|_{\mathcal{H}}\leq\|v-P_{u_{h}(t)}v\|_{\mathcal{H}}+\|P_{u_{h}(t)}(v-v_{h})\|_{\mathcal{H}}\\ \leq\kappa(\mathcal{M}^{\prime})\|\tilde{u}(t)-u_{h}(t)\|_{\mathcal{H}}\|v\|_{\mathcal{H}}+\|v-v_{h}\|_{\mathcal{H}}, \tag{3.5}\] and thus \(P_{u_{h}(t)}v_{h}\) converges strongly to \(v\) in \(\mathcal{H}\). Using a similar argument as in the proof of Theorem 2.2 in [3], it suffices to show \[\int_{0}^{T^{*}}\langle\tilde{u}^{\prime}(t),v(t)\rangle+a(\tilde{u}(t),v(t);t)-\langle f(t),v(t)\rangle\,dt=0\] for all \(v\in L_{\infty}(0,T^{*};\mathcal{V})\) with \(v(t)\in T_{\tilde{u}(t)}\mathcal{M}\cap\mathcal{V}\) for almost every \(t\). Since \(P_{u_{h}(t)}Q_{h}v(t)\) converges to \(v(t)\) in \(\mathcal{H}\) for almost all \(t\in[0,T^{*}]\) and we have the square integrable bound (3.5), the sequence \(P_{u_{h}(t)}Q_{h}v(t)\) converges strongly to \(v\) in \(L_{2}(0,T^{*};\mathcal{H})\). This together with weak convergence of \((u_{h}^{\prime})\) in \(L_{2}(0,T^{*};\mathcal{H})\) implies \[\lim_{h\searrow 0}\int_{0}^{T^{*}}\langle u_{h}^{\prime}(t),P_{u_{h}(t)}Q_{h}v(t)\rangle-\langle f(t),P_{u_{h}(t)}Q_{h}v(t)\rangle\,dt=\int_{0}^{T^{*}}\langle\tilde{u}^{\prime}(t),v(t)\rangle-\langle f(t),v(t)\rangle\,dt.\] Finally, we use Assumption **A4**. We have \[a(u_{h}(t),P_{u_{h}(t)}Q_{h}v(t);t)-a(\tilde{u}(t),v(t);t)\\ =\langle A_{1}(t)u_{h}(t),P_{u_{h}(t)}Q_{h}v(t)\rangle-\langle A_{1}(t)\tilde{u}(t),v(t)\rangle\\ +\langle A_{2}(t)u_{h}(t),P_{u_{h}(t)}Q_{h}v(t)\rangle-\langle A_{2}(t)\tilde{u}(t),v(t)\rangle \tag{3.6}\] and due to Assumption **A4**(a) \[\langle A_{1}(t)u_{h}(t),P_{u_{h}(t)}Q_{h}v(t)\rangle=\langle A_{1}(t)u_{h}(t),Q_{h}v(t)\rangle.\] This implies \[\lim_{h\searrow 0}\int_{0}^{T^{*}}\left|\langle A_{1}(t)u_{h}(t),P_{u_{h}(t)}Q_{h}v(t)\rangle-\langle A_{1}(t)\tilde{u}(t),v(t)\rangle\right|\,dt=0\] as \(u_{h}\) converges weakly to \(\tilde{u}\) and \(Q_{h}v\) converges strongly to \(v\) in \(L_{2}(0,T^{*};\mathcal{V})\). For the second summand in (3.6) we have \[\langle A_{2}(t)u_{h}(t),P_{u_{h}(t)}Q_{h}v(t)\rangle-\langle A_{2}(t)\tilde{u}(t),v(t)\rangle\\ =\langle A_{2}(t)u_{h}(t),P_{u_{h}(t)}Q_{h}v(t)-v(t)\rangle+\langle A_{2}(t)(u_{h}(t)-\tilde{u}(t)),v(t)\rangle,\] where \[\left|\langle A_{2}(t)u_{h}(t),P_{u_{h}(t)}Q_{h}v(t)-v(t)\rangle\right|\leq\gamma\|u_{h}(t)\|_{\mathcal{V}}^{\eta}\|P_{u_{h}(t)}Q_{h}v(t)-v(t)\|_{\mathcal{H}}\] and \(\int_{0}^{T^{*}}\|u_{h}(t)\|_{\mathcal{V}}^{\eta}\|P_{u_{h}(t)}Q_{h}v(t)-v(t)\|_{\mathcal{H}}\,dt\to 0\). Moreover, \[\int_{0}^{T^{*}}\langle A_{2}(t)(u_{h}(t)-\tilde{u}(t)),v(t)\rangle\,dt\to 0\quad\text{ as }h\searrow 0,\] since \(u_{h}\) converges weakly to \(\tilde{u}\) in \(L_{2}(0,T^{*};\mathcal{V})\).
Taken together with (3.5) and the uniform bound of \(u_{h}\) in \(L_{\infty}(0,T^{*};\mathcal{V})\), we have \[\int_{0}^{T^{*}}a(u_{h}(t),P_{u_{h}(t)}Q_{h}v(t);t)-a(\tilde{u}(t),v(t);t)\,dt\to 0\quad\text{as }h\searrow 0\] and hence \[\int_{0}^{T^{*}}\langle\tilde{u}^{\prime}(t),v(t)\rangle+a(\tilde{u}(t),v(t);t)-\langle f(t),v(t)\rangle\,dt\\ =\lim_{h\searrow 0}\int_{0}^{T^{*}}\langle u^{\prime}_{h}(t),P_{u_{h}(t)}v_{h}\rangle+a(u_{h}(t),P_{u_{h}(t)}v_{h};t)-\langle f(t),P_{u_{h}(t)}v_{h}\rangle\,dt=0\] for all \(v\in L_{\infty}(0,T^{*};\mathcal{V})\) with \(v(t)\in T_{\tilde{u}(t)}\mathcal{M}\cap\mathcal{V}\) for almost every \(t\).

## 4. Properties of low-rank tensor manifolds in Hilbert space

In this section we return to our model problem (1.4) in its weak formulation (1.5) and apply the theory developed above to low-rank models of multivariate functions. In the model problem, \(\mathcal{H}=L_{2}(\Omega)\) and \(\mathcal{V}=H_{0}^{1}(\Omega)\); the compact embedding \(\mathcal{V}\hookrightarrow\mathcal{H}\) is due to the Rellich-Kondrachov theorem, and (2.1) is the Poincaré inequality.

### Low-rank tensor manifolds in function space

Let \(\Omega=\Omega_{1}\times\cdots\times\Omega_{d}\), where \(\Omega_{\mu}\) is a bounded domain in a Euclidean space for \(\mu=1,\ldots,d\). We write \(H^{\mu}=L_{2}(\Omega_{\mu})\) for abbreviation. The space \(\mathcal{H}=L_{2}(\Omega_{1}\times\cdots\times\Omega_{d})=H^{1}\otimes\cdots\otimes H^{d}\) is a tensor product Hilbert space. In DLRA, one considers low-rank manifolds \(\mathcal{M}\) in such spaces. We consider manifolds of the general form \[\mathcal{M}=\Big{\{}u=\sum_{k_{1}=1}^{r_{1}}\cdots\sum_{k_{d}=1}^{r_{d}}C(k_{1},\ldots,k_{d})\,u_{k_{1}}^{1}\otimes\cdots\otimes u_{k_{d}}^{d}\colon \tag{4.1}\] \[C\in\mathcal{M}_{\mathrm{c}}\subset\mathbb{R}_{*}^{r_{1}\times\cdots\times r_{d}},\ G(u^{\mu})\text{ is invertible}\Big{\}}.\] Here \(\mathbb{R}_{*}^{r_{1}\times\cdots\times r_{d}}\) denotes the dense and open subset of "regular" \(r_{1}\times\cdots\times r_{d}\) tensors with full multilinear rank \((r_{1},\ldots,r_{d})\), and \(G(u^{\mu})=[\langle u_{i}^{\mu},u_{j}^{\mu}\rangle]_{ij}\in\mathbb{R}^{r_{\mu}\times r_{\mu}}\) is the Gramian of the system \(\{u_{1}^{\mu},\ldots,u_{r_{\mu}}^{\mu}\}\). We assume that \(\mathcal{M}_{\mathrm{c}}\) is a smooth submanifold in \(\mathbb{R}_{*}^{r_{1}\times\cdots\times r_{d}}\) that we additionally assume to be invariant under changes of basis in the sense that for all \(C\in\mathcal{M}_{\mathrm{c}}\), \[C\times_{1}T^{1}\times_{2}\cdots\times_{d}T^{d}\in\mathcal{M}_{\mathrm{c}}\quad\text{for all invertible matrices $T^{1},\ldots,T^{d}$.} \tag{4.2}\] Here we use the notation \(\times_{\mu}\) for left multiplication of a matrix onto the \(\mu\)-th mode of a tensor [22]. The definition of \(\mathcal{M}\) results in a constrained version of the Tucker format (for which \(\mathcal{M}_{\mathrm{c}}=\mathbb{R}_{*}^{r_{1}\times\cdots\times r_{d}}\)), and covers continuous versions of general tree-based tensor formats, for example by letting \(\mathcal{M}_{\mathrm{c}}\) be a corresponding finite-dimensional low-rank tensor manifold in \(\mathbb{R}^{r_{1}\times\cdots\times r_{d}}\), such as manifolds of tensor trains [18, 35] or of hierarchical Tucker tensors [33, 4] with fixed ranks.
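For orientation, the multilinear operations used here are easy to state in the finite-dimensional (truncated) setting. The following sketch (NumPy; `mode_mult` is an illustrative helper name) implements the mode-\(\mu\) multiplication \(\times_{\mu}\) and assembles a Tucker tensor of the form (4.1) with orthonormal factors:

```python
import numpy as np

def mode_mult(C, T, mu):
    # left multiplication T x_mu C of the matrix T onto the mu-th mode of C
    Cm = np.moveaxis(C, mu, 0)
    out = (T @ Cm.reshape(Cm.shape[0], -1)).reshape((T.shape[0],) + Cm.shape[1:])
    return np.moveaxis(out, 0, mu)

# assemble X = C x_1 U^1 x_2 ... x_d U^d as in (4.1)
rng = np.random.default_rng(0)
ranks, dims = (2, 3, 4), (10, 12, 14)
C = rng.standard_normal(ranks)
Us = [np.linalg.qr(rng.standard_normal((n, r)))[0] for n, r in zip(dims, ranks)]
X = C
for mu, U in enumerate(Us):
    X = mode_mult(X, U, mu)
```

In this notation, the invariance (4.2) corresponds to the non-uniqueness of the representation: replacing \(C\) by \(C\times_{1}T^{1}\times_{2}\cdots\times_{d}T^{d}\) and each \(U^{\mu}\) by \(U^{\mu}(T^{\mu})^{-1}\) leaves \(X\) unchanged.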
Note that it follows from the assumed properties of \(\mathcal{M}_{\mathrm{c}}\) that this set is not closed in \(\mathbb{R}^{r_{1}\times\cdots\times r_{d}}\), and hence \(\overline{\mathcal{M}_{\mathrm{c}}}\setminus\mathcal{M}_{\mathrm{c}}\neq\emptyset\). To see this, let \(C\in\mathcal{M}_{\mathrm{c}}\) and let \(T^{1}_{n},\ldots,T^{d}_{n}\) be invertible matrices converging to \(T^{1}_{*},\ldots,T^{d}_{*}\) such that at least one of the limits is not invertible. Then \(C\times_{1}T^{1}_{n}\times_{2}\cdots\times_{d}T^{d}_{n}\in\mathcal{M}_{\mathrm{c}}\) for all \(n\) by (4.2), but the limit \(C\times_{1}T^{1}_{*}\times_{2}\cdots\times_{d}T^{d}_{*}\) does not have full multilinear rank, and hence is not in \(\mathcal{M}_{\mathrm{c}}\). This example also shows that the set \(\mathcal{M}\) is not closed in \(\mathcal{H}\), and in particular not weakly closed. Moreover, in Lemma 4.4 we will see that the closure and weak closure of \(\mathcal{M}\) coincide with the set described in (4.1) with \(\mathcal{M}_{\mathrm{c}}\) replaced by \(\overline{\mathcal{M}_{\mathrm{c}}}\).

For investigating the manifold properties of \(\mathcal{M}\), the concepts of matricizations and minimal subspaces play a crucial role. For every \(\mu=1,\ldots,d\), we can identify \(u\) with an element \(M^{\mu}_{u}\), called the \(\mu\)-th matricization of \(u\), in the space \(H^{\mu}\otimes H^{\neq\mu}\), where \(H^{\neq\mu}=\bigotimes_{\nu\neq\mu}H^{\nu}\). Assuming \(u\in\mathcal{M}\) as above and letting \[v^{\mu}_{k_{\mu}}=\sum_{k_{1}=1}^{r_{1}}\cdots\sum_{k_{\mu-1}=1}^{r_{\mu-1}}\sum_{k_{\mu+1}=1}^{r_{\mu+1}}\cdots\sum_{k_{d}=1}^{r_{d}}C(k_{1},\ldots,k_{d})\,u_{k_{1}}^{1}\otimes\cdots\otimes u_{k_{\mu-1}}^{\mu-1}\otimes u_{k_{\mu+1}}^{\mu+1}\otimes\cdots\otimes u_{k_{d}}^{d}, \tag{4.3}\] one has \[M^{\mu}_{u}=\sum_{k_{\mu}=1}^{r_{\mu}}u_{k_{\mu}}^{\mu}\otimes v_{k_{\mu}}^{\mu}. \tag{4.4}\] Since the core tensor \(C\) has full multilinear rank, one can show that the \(v^{\mu}_{k_{\mu}}\) are also linearly independent. Now define \[\mathcal{U}^{\mu}=\mathrm{span}\{u_{1}^{\mu},\ldots,u_{r_{\mu}}^{\mu}\}\] and \[\mathcal{V}^{\mu}=\mathrm{span}\{v_{1}^{\mu},\ldots,v_{r_{\mu}}^{\mu}\};\] then (4.4) expresses the fact that \(M^{\mu}_{u}\) is an element of the "matrix subspace" \(\mathcal{U}^{\mu}\otimes\mathcal{V}^{\mu}\) and \(\operatorname{rank}(M^{\mu}_{u})=r_{\mu}\). We call \(\mathcal{U}^{\mu}\) the \(\mu\)-th minimal subspace of \(u\in\mathcal{M}\).

Choosing an orthonormal basis for each space \(H^{\mu}=L_{2}(\Omega_{\mu})\), we obtain an isomorphism between \(H^{\mu}\) and \(\ell_{2}(\mathbb{N})\) for each \(\mu\). This in turn defines a tensor space isomorphism between \(\mathcal{H}\) and \(\ell_{2}(\mathbb{N})\otimes\cdots\otimes\ell_{2}(\mathbb{N})\). In what follows, in order to use a more common matrix and tensor notation, we can thus assume without loss of generality that \[\mathcal{H}=\ell_{2}(\mathbb{N}^{d})=\ell_{2}(\mathbb{N})\otimes\cdots\otimes\ell_{2}(\mathbb{N}),\] and thus consider \(\mathcal{M}\) as a set in the tensor product Hilbert space of square-summable infinite arrays. The definition of \(\mathcal{M}\) remains the same as in (4.1), only that now \(u^{\mu}_{k_{\mu}}\in\ell_{2}(\mathbb{N})\). We will, however, denote the elements of \(\ell_{2}(\mathbb{N}^{d})\) as \(X\) instead of \(u\), in order to clearly distinguish these sequences from functions. The corresponding matricizations are \(M^{\mu}_{X}\in\ell_{2}(\mathbb{N})\otimes\ell_{2}(\mathbb{N}^{d-1})\).
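In this discrete picture, the \(\mu\)-th matricization is simply a reshape. A minimal sketch (NumPy, for an array \(X\) with finitely many entries per mode, such as the one assembled above; `matricization` is an illustrative helper name):

```python
import numpy as np

def matricization(X, mu):
    # M^mu_X: mode mu indexes the rows, all remaining modes the columns
    return np.moveaxis(X, mu, 0).reshape(X.shape[mu], -1)

# the mu-th component of the multilinear rank of X is the rank of M^mu_X,
# and its singular values are the sigma^mu_k used in Section 4.1.3, e.g.:
# r_mu   = np.linalg.matrix_rank(matricization(X, mu))
# sigmas = np.linalg.svd(matricization(X, mu), compute_uv=False)
```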
The Tucker format (4.1) can then be written in the usual abbreviated form \[X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d},\] where \(U^{\mu}=[u^{\mu}_{1},\ldots,u^{\mu}_{r_{\mu}}]\in(\ell_{2}(\mathbb{N}))^{r_{\mu}}\) contains a basis for \(\mathcal{U}^{\mu}\). Here the multiplications \(\times_{\mu}\) are defined as for finite tensors.

#### 4.1.1. Manifold structure

Using the concept of manifolds in Banach space as presented in [37, Ch. 73], we can prove the following result.

**Theorem 4.1**.: _Let \(X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\) be in \(\mathcal{M}\) defined as in (4.1) satisfying (4.2). Then the following statements hold._

(i) _There exists an open neighborhood \(\mathcal{O}\) of \(X\) and a submersion \(g\) defined on \(\mathcal{O}\) such that \(\mathcal{M}\cap\mathcal{O}=g^{-1}(0)\). Consequently, \(\mathcal{M}\cap\mathcal{O}\) is a smooth submanifold in the Hilbert space \(\mathcal{H}\). The tangent space \(T_{X}\mathcal{M}\) at \(X\in\mathcal{M}\cap\mathcal{O}\) is the null space of \(g^{\prime}(X)\)._

(ii) _There exists a continuously Fréchet-differentiable homeomorphism \(\varphi\) from a neighborhood of zero in \(T_{X}\mathcal{M}\) to \(\mathcal{M}\cap\mathcal{O}\) satisfying \(\varphi(\xi)=X+\xi+o(\|\xi\|_{\mathcal{H}})\) for all \(\xi\) in that neighborhood. Moreover, \(\varphi\) is also an immersion and hence a local embedding for \(\mathcal{M}\)._

(iii) _The tangent space equals the subspace spanned by elements of the form_ \[\xi=\dot{C}\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}+C\times_{1}\dot{U}^{1}\times_{2}\cdots\times_{d}U^{d}+\cdots+C\times_{1}U^{1}\times_{2}\cdots\times_{d}\dot{U}^{d} \tag{4.5}\] _with \(\dot{C}\in T_{C}\mathcal{M}_{\rm c}\) and \((U^{\mu})^{\sf T}\dot{U}^{\mu}=0_{r_{\mu}\times r_{\mu}}\) for \(\mu=1,\ldots,d\) (that is, the columns of \(\dot{U}^{\mu}\) span a subspace orthogonal to \(\mathcal{U}^{\mu}\))._

The proof of this theorem is given in the appendix. For an alternative treatment of low-rank manifolds in Banach spaces, see [13, 14].

#### 4.1.2. Tangent space projection

We now consider the orthogonal projection onto the tangent space \(T_{X}\mathcal{M}\) at given \(X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\). By Theorem 4.1(iii), \(T_{X}\mathcal{M}\) is spanned by elements \(\xi=\xi_{0}+\xi_{1}+\cdots+\xi_{d}\) of the form (4.5). Here the elements \(\xi_{0}=\dot{C}\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\) with \(\dot{C}\in T_{C}\mathcal{M}_{\rm c}\) span a subspace of \(\mathcal{U}^{1}\otimes\cdots\otimes\mathcal{U}^{d}\) which we denote by \(\mathcal{S}_{X}\). For \(\mu=1,\ldots,d\), the elements \(\xi_{\mu}=C\times_{1}U^{1}\times_{2}\cdots\times_{\mu}\dot{U}^{\mu}\times_{\mu+1}\cdots\times_{d}U^{d}\) with \((U^{\mu})^{\sf T}\dot{U}^{\mu}=0\) are equivalently described via their matricization as \[M^{\mu}_{\xi_{\mu}}=\dot{U}^{\mu}(V^{\mu})^{\sf T}=\sum_{k_{\mu}=1}^{r_{\mu}}\dot{u}^{\mu}_{k_{\mu}}\otimes v^{\mu}_{k_{\mu}}\] due to the definition (4.3) of \(V^{\mu}\). Since in fact any element in the space \((\mathcal{U}^{\mu})^{\perp}\otimes\mathcal{V}^{\mu}\) can be written in this way, the \(M^{\mu}_{\xi_{\mu}}\) span this space.
Treating the \((\mathcal{U}^{\mu})^{\perp}\otimes\mathcal{V}^{\mu}\) as subspaces of \(\ell_{2}(\mathbb{N}^{d})\) (in a slight abuse of notation), we conclude that \[T_{X}\mathcal{M}=\mathcal{S}_{X}\oplus[(\mathcal{U}^{1})^{\perp}\otimes\mathcal{V}^{1}]\oplus\cdots\oplus[(\mathcal{U}^{d})^{\perp}\otimes\mathcal{V}^{d}], \tag{4.6}\] which indeed is an orthogonal decomposition, as can be seen from the fact that \(\mathcal{V}^{\mu}\subseteq\mathcal{U}^{1}\otimes\cdots\otimes\mathcal{U}^{\mu-1}\otimes\mathcal{U}^{\mu+1}\otimes\cdots\otimes\mathcal{U}^{d}\) for every \(\mu\). In the following proposition, we compute the tangent space projection under the assumption that the matrices \(U^{\mu}\) have orthonormal columns; see Remark 4.3 for the general formula.

**Proposition 4.2**.: _Let \(X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\in\mathcal{M}\), and assume \((U^{\mu})^{\mathsf{T}}U^{\mu}=\mathrm{id}\). The orthogonal projection onto the tangent space \(T_{X}\mathcal{M}\) is given as_ \[P_{X}=P_{X}^{0}+P_{X}^{1}+\cdots+P_{X}^{d} \tag{4.7}\] _with \(P_{X}^{1},\ldots,P_{X}^{d}\) being implicitly defined via their action on matricizations as_ \[M^{\mu}_{P_{X}^{\mu}(Z)}=(I-P_{\mathcal{U}^{\mu}})M^{\mu}_{Z}P_{\mathcal{V}^{\mu}},\quad\mu=1,\ldots,d. \tag{4.8}\] _The projector \(P_{X}^{0}\) is defined as_ \[P_{X}^{0}(Z)=P_{C}(C_{Z})\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}, \tag{4.9}\] _where \(P_{C}\) is the orthogonal tangent space projector onto \(T_{C}\mathcal{M}_{\mathrm{c}}\) in \(\mathbb{R}^{r_{1}\times\cdots\times r_{d}}\), and \(C_{Z}=Z\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}\)._

Proof.: By (4.6), the single terms \(\xi_{0},\ldots,\xi_{d}\) in the tangent vector representation (4.5) belong to mutually orthogonal subspaces. Therefore the orthogonal projection \(P_{X}(Z)=\xi_{0}+\xi_{1}+\cdots+\xi_{d}\) onto \(T_{X}\mathcal{M}\) can be decomposed accordingly as in (4.7). Here the \(P_{X}^{\mu}\) for \(\mu=1,\ldots,d\) are the projections onto \((\mathcal{U}^{\mu})^{\perp}\otimes\mathcal{V}^{\mu}\), which have the asserted form. We consider the projection \(\xi_{0}=P_{X}^{0}(Z)\) of a given \(Z\) onto the space \(\mathcal{S}_{X}\) in (4.6). We write \[\xi_{0}=K\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\] and need to determine \(K\in T_{C}\mathcal{M}_{\mathrm{c}}\). The orthogonality condition for the projection is \[0=\langle Z-\xi_{0},\dot{C}\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\rangle\] for all \(\dot{C}\in T_{C}\mathcal{M}_{\mathrm{c}}\). Using properties of tensor-matrix multiplication, we rewrite this as \[0 =\langle Z\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}-\xi_{0}\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}},\dot{C}\rangle\] \[=\langle Z\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}-K,\dot{C}\rangle=\langle C_{Z}-K,\dot{C}\rangle.\] Since this holds for all \(\dot{C}\in T_{C}\mathcal{M}_{\mathrm{c}}\), it follows that \(K\) equals the orthogonal projection of \(C_{Z}\) onto \(T_{C}\mathcal{M}_{\mathrm{c}}\).
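For the unconstrained Tucker case \(\mathcal{M}_{\mathrm{c}}=\mathbb{R}_{*}^{r_{1}\times\cdots\times r_{d}}\), where \(P_{C}\) is the identity, Proposition 4.2 can be turned into a short sketch (NumPy, reusing the helpers from the sketches above; `tucker_tangent_project` is an illustrative name, and the factors in `Us` are assumed to have orthonormal columns with \(X\) of full multilinear rank):

```python
import numpy as np

def mode_mult(C, T, mu):  # T x_mu C, as in the sketch above
    Cm = np.moveaxis(C, mu, 0)
    out = (T @ Cm.reshape(Cm.shape[0], -1)).reshape((T.shape[0],) + Cm.shape[1:])
    return np.moveaxis(out, 0, mu)

def matricization(X, mu):  # M^mu_X, as in the sketch above
    return np.moveaxis(X, mu, 0).reshape(X.shape[mu], -1)

def tucker_tangent_project(X, Us, Z):
    # P^0_X(Z): orthogonal projection onto U^1 x ... x U^d; for Tucker,
    # P_C = id in (4.9), so no further projection of the core is needed
    out = Z
    for mu, U in enumerate(Us):
        out = mode_mult(out, U @ U.T, mu)
    # P^mu_X(Z) via (4.8): (I - P_{U^mu}) M^mu_Z P_{V^mu} per matricization
    for mu, U in enumerate(Us):
        V = matricization(X, mu).T @ U   # columns span V^mu, as M^mu_X = U^mu (V^mu)^T
        MZPV = (V @ np.linalg.solve(V.T @ V, V.T @ matricization(Z, mu).T)).T
        term = MZPV - U @ (U.T @ MZPV)   # apply I - P_{U^mu}
        shape = (X.shape[mu],) + tuple(np.delete(X.shape, mu))
        out = out + np.moveaxis(term.reshape(shape), 0, mu)
    return out
```

Since the summands project onto the mutually orthogonal subspaces in (4.6), applying `tucker_tangent_project` twice to a random \(Z\) reproduces the first application up to rounding errors, which is a convenient numerical sanity check.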
**Remark 4.3**.: If the \(U^{\mu}\) are not orthonormal, then the formula for \(P_{X}^{0}(Z)\) needs to be adjusted to \[P_{X}^{0}(Z)=\Pi_{C}(C_{Z})\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d},\] where \(\Pi_{C}\) is the orthogonal projection in \(\mathbb{R}^{r_{1}\times\cdots\times r_{d}}\) onto \(T_{C}\mathcal{M}_{\mathrm{c}}\) with respect to the inner product induced by the operator \(\mathbf{A}=(U^{1})^{\mathsf{T}}U^{1}\otimes\cdots\otimes(U^{d})^{\mathsf{T}}U^{d}\), which is symmetric and positive definite on \(\mathbb{R}^{r_{1}\times\cdots\times r_{d}}\). This projection is given by \(\Pi_{C}=(P_{C}\mathbf{A}P_{C})^{-1}P_{C}\mathbf{A}\), where \((P_{C}\mathbf{A}P_{C})^{-1}\) denotes the inverse of \(P_{C}\mathbf{A}P_{C}\) on \(T_{C}\mathcal{M}_{\mathrm{c}}\).

#### 4.1.3. Distance to boundary

As we will see in Section 4.1.4, curvature estimates as in **A2** for low-rank tensor manifolds can be expressed in terms of inverses of smallest singular values of certain matricizations. In this subsection, we therefore estimate the smallest singular values of the matricizations of a tensor \(X\in\mathcal{M}\) from below by the distance of \(X\) to the boundary \(\overline{\mathcal{M}}^{w}\setminus\mathcal{M}\). This will have the effect that on every weakly compact subset \(\mathcal{M}^{\prime}\subseteq\mathcal{M}\) these singular values remain bounded from below. We first give a characterization of the weak closure of \(\mathcal{M}\). In what follows, by \(\|\cdot\|\) without further specification we denote the Frobenius norm of tensors.

**Lemma 4.4**.: _Let \(\mathcal{M}\) be of the form (4.1) with \(\mathcal{M}_{\mathrm{c}}\) satisfying (4.2). Then_ \[\overline{\mathcal{M}}^{w}=\overline{\mathcal{M}}=\Big{\{}X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\colon C\in\overline{\mathcal{M}_{\mathrm{c}}},\quad(U^{\mu})^{\mathsf{T}}U^{\mu}\in\mathrm{GL}_{r_{\mu}}\Big{\}},\] _that is, the weak closure and closure of \(\mathcal{M}\) coincide and are of the form (4.1) with \(\mathcal{M}_{\mathrm{c}}\) replaced by \(\overline{\mathcal{M}_{\mathrm{c}}}\)._

Proof.: Let \((X_{n})\subset\mathcal{M}\) be a sequence converging weakly to \(X\in\overline{\mathcal{M}}^{w}\). By [17, Thm. 6.29], there are \(r_{\mu}\)-dimensional subspaces \(\mathcal{U}^{\mu}\) such that \(X\in\mathcal{U}^{1}\otimes\cdots\otimes\mathcal{U}^{d}\). In particular, let \(U^{\mu}\in(\ell_{2}(\mathbb{N}))^{r_{\mu}}\) be orthonormal bases of the spaces \(\mathcal{U}^{\mu}\). Then \[X=\Big{(}X\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}\Big{)}\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\] with \(C\in\mathbb{R}^{r_{1}\times\cdots\times r_{d}}\). Moreover, since \(X_{n}\in\mathcal{M}\), we have \[X_{n}=C_{n}\times_{1}U^{1}_{n}\times_{2}\cdots\times_{d}U^{d}_{n}\] and by weak convergence, we have \[\lim_{n\to\infty}C_{n}\times_{1}(U^{1})^{\mathsf{T}}U^{1}_{n}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}U^{d}_{n}=C.\] By the invariance condition (4.2) for \(\mathcal{M}_{\mathrm{c}}\), it follows that \(C\in\overline{\mathcal{M}_{\mathrm{c}}}\). This shows that \(\overline{\mathcal{M}}^{w}\) is contained in the set \[\Big{\{}X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\colon C\in\overline{\mathcal{M}_{\mathrm{c}}},\quad(U^{\mu})^{\mathsf{T}}U^{\mu}\in\mathrm{GL}_{r_{\mu}}\Big{\}}.\] For the converse inclusion, let \(X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\) be an element of this set, where we may assume the \(U^{\mu}\) to be orthonormal by (4.2), and let \((\tilde{C}_{n})\subset\mathcal{M}_{\mathrm{c}}\) be a sequence converging to \(C\).
Then the sequence defined by \[\tilde{X}_{n}=\tilde{C}_{n}\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\] converges strongly to \(X\), as \(\|\tilde{X}_{n}-X\|_{\ell_{2}(\mathbb{N}^{d})}=\|\tilde{C}_{n}-C\|\). This proves the assertion.

For \(X\in\mathcal{M}\) and \(\mu=1,\ldots,d\), let \(\{u^{\mu}_{1},\ldots,u^{\mu}_{r_{\mu}}\}\) be the left singular vectors of the matricization \(M^{\mu}_{X}\). Then \[M^{\mu}_{X}=\sum_{k=1}^{r_{\mu}}u^{\mu}_{k}\otimes v^{\mu}_{k}\] with \(\{v^{\mu}_{1},\ldots,v^{\mu}_{r_{\mu}}\}\) orthogonal in \(\ell_{2}(\mathbb{N}^{d-1})\) such that \[\sigma^{\mu}_{k}=\sigma^{\mu}_{k}(X)=\|v^{\mu}_{k}\|_{\ell_{2}(\mathbb{N}^{d-1})},\quad k=1,\ldots,r_{\mu}, \tag{4.10}\] are the singular values of \(M^{\mu}_{X}\), for which we may assume \[\sigma^{\mu}_{1}\geq\sigma^{\mu}_{2}\geq\ldots\geq\sigma^{\mu}_{r_{\mu}}.\] Further, we define \[\sigma=\mathrm{dist}(X,\overline{\mathcal{M}}^{w}\setminus\mathcal{M}). \tag{4.11}\]

**Proposition 4.5**.: _Let \(\sigma^{\mu}_{k}\) for \(\mu=1,\ldots,d\) and \(k=1,\ldots,r_{\mu}\) be defined as above. Then_ \[\min_{\mu\in\{1,\ldots,d\}}\sigma^{\mu}_{r_{\mu}}\geq\sigma.\]

Proof.: Let \(\mu\in\{1,\ldots,d\}\) and let \(u_{1}^{\mu},\ldots,u_{r_{\mu}}^{\mu}\) be the left singular vectors of \(M_{X}^{\mu}\) associated to \(\sigma_{1}^{\mu},\ldots,\sigma_{r_{\mu}}^{\mu}\), respectively. Then for the tensor \(\tilde{X}\) defined by its matricization \[M_{\tilde{X}}^{\mu}=\sum_{k=1}^{r_{\mu}-1}u_{k}^{\mu}(u_{k}^{\mu})^{\top}M_{X}^{\mu},\] we have \(\|X-\tilde{X}\|_{\ell_{2}(\mathbb{N}^{d})}=\sigma_{r_{\mu}}^{\mu}\) and \(\tilde{X}\notin\mathcal{M}\). Furthermore, we have \[X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\] with \(U^{\mu}=[u_{1}^{\mu},\ldots,u_{r_{\mu}}^{\mu}]\) and \[\tilde{X}=C\times_{1}U^{1}\times_{2}\cdots\times_{\mu}U^{\mu}(\tilde{U}^{\mu})^{\top}U^{\mu}\times_{\mu+1}\cdots\times_{d}U^{d}=(C\times_{\mu}(\tilde{U}^{\mu})^{\top}U^{\mu})\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\] with \(\tilde{U}^{\mu}=[u_{1}^{\mu},\ldots,u_{r_{\mu}-1}^{\mu},0]\). It follows from Lemma 4.4 that \(\tilde{X}\in\overline{\mathcal{M}}\), and the claim is proven.

It is important to note that the distance \(\sigma\) defined in (4.11) can be expressed as the distance of the core tensor \(C\) to the relative boundary of \(\mathcal{M}_{\mathrm{c}}\).

**Proposition 4.6**.: _Let \(X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\) with \(C\in\mathcal{M}_{\mathrm{c}}\) and orthonormal \(U^{1},\ldots,U^{d}\). Then \(\mathrm{dist}(C,\overline{\mathcal{M}_{\mathrm{c}}}\setminus\mathcal{M}_{\mathrm{c}})=\mathrm{dist}(X,\overline{\mathcal{M}}^{w}\setminus\mathcal{M})\)._

Proof.: First, let \(Y\in\overline{\mathcal{M}}^{w}\setminus\mathcal{M}\) satisfy \(\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}=\mathrm{dist}(X,\overline{\mathcal{M}}^{w}\setminus\mathcal{M})\). Then the tensor \(D=Y\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}\) satisfies \(\|C-D\|\leq\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\) and by Lemma 4.4, we also have \(D\in\overline{\mathcal{M}_{\mathrm{c}}}\setminus\mathcal{M}_{\mathrm{c}}\), and hence \(\mathrm{dist}(C,\overline{\mathcal{M}_{\mathrm{c}}}\setminus\mathcal{M}_{\mathrm{c}})\leq\mathrm{dist}(X,\overline{\mathcal{M}}^{w}\setminus\mathcal{M})\). To show equality, we consider a \(D\in\overline{\mathcal{M}_{\mathrm{c}}}\setminus\mathcal{M}_{\mathrm{c}}\) with \(\|C-D\|=\mathrm{dist}(C,\overline{\mathcal{M}_{\mathrm{c}}}\setminus\mathcal{M}_{\mathrm{c}})\).
Set \(Y=D\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\in\overline{\mathcal{M}}^{w}\setminus\mathcal{M}\). Then \(\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}=\|C-D\|\) holds, and thus \(\mathrm{dist}(C,\overline{\mathcal{M}_{\mathrm{c}}}\setminus\mathcal{M}_{\mathrm{c}})\geq\mathrm{dist}(X,\overline{\mathcal{M}}^{w}\setminus\mathcal{M})\).

#### 4.1.4. Curvature estimates

We now turn to the curvature bounds in Assumption **A2**. We first derive an estimate for the difference \(\|P_{X}-P_{Y}\|_{\ell_{2}(\mathbb{N}^{d})\to\ell_{2}(\mathbb{N}^{d})}\) of two tangent space projections in operator norm, which can be regarded as a curvature estimate for the manifold \(\mathcal{M}\) and is required for the first bound in Assumption **A2**.

**Proposition 4.7**.: _Assume a curvature estimate_ \[\max_{\|Z\|=1}\|(P_{C}-P_{\tilde{C}})Z\|\leq\frac{c}{\hat{\sigma}}\|C-\tilde{C}\|\quad\text{for all $C,\tilde{C}\in\mathcal{M}_{\mathrm{c}}$,} \tag{4.12}\] _where \(\hat{\sigma}=\mathrm{dist}(C,\overline{\mathcal{M}_{\mathrm{c}}}\setminus\mathcal{M}_{\mathrm{c}})\) and where \(c>0\) is independent of \(C,\tilde{C}\). Let \(X,Y\in\mathcal{M}\) with corresponding tangent space projections \(P_{X}\) and \(P_{Y}\). Then_ \[\|P_{X}-P_{Y}\|_{\ell_{2}(\mathbb{N}^{d})\to\ell_{2}(\mathbb{N}^{d})}\leq\left(\frac{\sqrt{2}c}{\sigma}+2(\sqrt{2}+1)\sum_{\mu=1}^{d}\frac{1}{\sigma_{r_{\mu}}^{\mu}}\right)\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})},\] _where \(\sigma=\mathrm{dist}(X,\overline{\mathcal{M}}^{w}\setminus\mathcal{M})\) and \(\sigma_{r_{\mu}}^{\mu}\) is the smallest singular value of the \(\mu\)-th matricization of \(X\)._

Note that \(\sigma_{r_{\mu}}^{\mu}\geq\sigma\) for each \(\mu\) as a consequence of Proposition 4.5. Therefore, we have the simpler estimate \[\|P_{X}-P_{Y}\|_{\ell_{2}(\mathbb{N}^{d})\to\ell_{2}(\mathbb{N}^{d})}\leq\frac{\sqrt{2}c+2d(\sqrt{2}+1)}{\sigma}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}.\] Since on every weakly compact subset \(\mathcal{M}^{\prime}\subseteq\mathcal{M}\) the distance \(\sigma\) to the boundary is bounded from below (recall that \(\mathcal{M}\) itself is not weakly closed), we obtain the first curvature estimate in **A2**. In the proof of Proposition 4.7, we use the following lemma.

**Lemma 4.8**.: _Let \(U,V\in[\ell_{2}(\mathbb{N})]^{r}\) be orthonormal (that is, \(U^{\mathsf{T}}U=V^{\mathsf{T}}V=\mathrm{id}\)) such that the \(r\times r\) matrix \(U^{\mathsf{T}}V\) is symmetric and positive semidefinite._

(i) _The corresponding subspace projections \(P_{\mathcal{U}}=UU^{\mathsf{T}}\) and \(P_{\mathcal{V}}=VV^{\mathsf{T}}\) satisfy_ \[\|U-V\|_{\ell_{2}(\mathbb{N})\to\ell_{2}(\mathbb{N})}\leq\sqrt{2}\|P_{\mathcal{U}}-P_{\mathcal{V}}\|_{\ell_{2}(\mathbb{N})\to\ell_{2}(\mathbb{N})}.\]

(ii) _For all \(x,y\in\mathbb{R}^{r}\), \(\|x-y\|\leq\sqrt{2}\|Ux-Vy\|_{\ell_{2}(\mathbb{N})}\)._

Proof.: After an orthogonal change of basis, we may assume the matrix \(U^{\mathsf{T}}V=\Sigma\) to be diagonal with entries \(1\geq\sigma_{i}\geq 0\), that is, \(u_{j}^{\mathsf{T}}v_{i}=\sigma_{i}\delta_{ij}\).

Ad (i). We define the spaces \(W_{i}=\mathrm{span}\{u_{i},v_{i}\}\) for \(i=1,\ldots,r\). These are pairwise orthogonal. Furthermore, let \(W_{r+1}=(\bigoplus_{i=1}^{r}W_{i})^{\perp}\). Then the difference of projections is block-diagonal with respect to the spaces \(W_{i}\), that is, \((P_{\mathcal{U}}-P_{\mathcal{V}})(xu_{i}+yv_{i})=(x+\sigma_{i}y)u_{i}-(y+\sigma_{i}x)v_{i}\).
Therefore, the operator norm is given by \[\|P_{\mathcal{U}}-P_{\mathcal{V}}\|_{\ell_{2}(\mathbb{N})\to\ell_{2}(\mathbb{N})}=\max_{i}\max_{x\neq 0\neq y}\frac{1}{\|xu_{i}+yv_{i}\|_{\ell_{2}(\mathbb{N})}}\|(P_{\mathcal{U}}-P_{\mathcal{V}})(xu_{i}+yv_{i})\|_{\ell_{2}(\mathbb{N})}.\] We note the norm equality \(\|xu_{i}+yv_{i}\|_{\ell_{2}(\mathbb{N})}^{2}=x^{2}+y^{2}+2\sigma_{i}xy\). Then on the one hand, \[\|(P_{\mathcal{U}}-P_{\mathcal{V}})(xu_{i}+yv_{i})\|_{\ell_{2}(\mathbb{N})}^{2} =(x+\sigma_{i}y)^{2}+(y+\sigma_{i}x)^{2}-2\sigma_{i}(x+\sigma_{i}y)(y+\sigma_{i}x)\] \[=x^{2}+y^{2}+2\sigma_{i}xy-\sigma_{i}^{2}(x^{2}+y^{2}+2\sigma_{i}xy)\] \[=(1-\sigma_{i}^{2})\|xu_{i}+yv_{i}\|_{\ell_{2}(\mathbb{N})}^{2}\] \[\geq(1-\sigma_{i})\|xu_{i}+yv_{i}\|_{\ell_{2}(\mathbb{N})}^{2},\] that is, \(\|P_{\mathcal{U}}-P_{\mathcal{V}}\|_{\ell_{2}(\mathbb{N})\to\ell_{2}(\mathbb{N})}^{2}\geq 1-\sigma_{i}\). On the other hand, we have \[(U-V)^{\mathsf{T}}(U-V)=2(\mathrm{id}_{r}-\Sigma),\] and thus \(\max_{\|w\|=1}\|(U-V)w\|_{\ell_{2}(\mathbb{N})}^{2}=2\max_{i}(1-\sigma_{i})\), which leads to the desired inequality.

Ad (ii). Using the inequality \((a-b)^{2}\leq 2a^{2}+2b^{2}\) componentwise, we get \[\|x-y\|^{2} =(x-y)^{\mathsf{T}}\Sigma(x-y)+(x-y)^{\mathsf{T}}(\mathrm{id}_{r}-\Sigma)(x-y)\] \[\leq(x-y)^{\mathsf{T}}\Sigma(x-y)+2x^{\mathsf{T}}(\mathrm{id}_{r}-\Sigma)x+2y^{\mathsf{T}}(\mathrm{id}_{r}-\Sigma)y\] \[\leq 2\|x\|^{2}+2\|y\|^{2}-4x^{\mathsf{T}}\Sigma y\] \[=2\|Ux\|_{\ell_{2}(\mathbb{N})}^{2}+2\|Vy\|_{\ell_{2}(\mathbb{N})}^{2}-4(Ux)^{\mathsf{T}}(Vy)=2\|Ux-Vy\|_{\ell_{2}(\mathbb{N})}^{2},\] which is the claim.

Proof of Proposition 4.7.: Assume representations \[X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d},\quad Y=\tilde{C}\times_{1}\tilde{U}^{1}\times_{2}\cdots\times_{d}\tilde{U}^{d}\] as in Proposition 4.2. By using polar decompositions \((U^{\mu})^{\mathsf{T}}\tilde{U}^{\mu}=Q^{\mu}S^{\mu}\), where \(Q^{\mu}\) is orthogonal and \(S^{\mu}\) is positive semidefinite, we can replace the \(U^{\mu}\) with \(U^{\mu}Q^{\mu}\) and the core tensor \(C\) accordingly such that \((U^{\mu})^{\mathsf{T}}\tilde{U}^{\mu}\) is positive semidefinite, which we assume to be the case for all \(\mu=1,\ldots,d\). By Proposition 4.2, \[P_{X}-P_{Y}=P_{X}^{0}-P_{Y}^{0}+\sum_{\mu=1}^{d}P_{X}^{\mu}-P_{Y}^{\mu}.\] We will estimate the individual differences separately. Applying the triangle inequality will then prove the assertion.

We first consider any of the projector differences \(P_{X}^{\mu}-P_{Y}^{\mu}\) for \(\mu=1,\ldots,d\). By (4.8), they can be written in the \(\mu\)-th matricization space as \[P_{X}^{\mu}-P_{Y}^{\mu} =(\operatorname{id}-P_{\mathcal{U}^{\mu}})\otimes P_{\mathcal{V}^{\mu}}-(\operatorname{id}-P_{\tilde{\mathcal{U}}^{\mu}})\otimes P_{\tilde{\mathcal{V}}^{\mu}}\] \[=(\operatorname{id}-P_{\mathcal{U}^{\mu}})\otimes(P_{\mathcal{V}^{\mu}}-P_{\tilde{\mathcal{V}}^{\mu}})+(P_{\tilde{\mathcal{U}}^{\mu}}-P_{\mathcal{U}^{\mu}})\otimes P_{\tilde{\mathcal{V}}^{\mu}}.\] We have \[\|P_{\tilde{\mathcal{U}}^{\mu}}-P_{\mathcal{U}^{\mu}}\|_{\ell_{2}(\mathbb{N})\to\ell_{2}(\mathbb{N})}\leq\frac{1}{\sigma_{r_{\mu}}^{\mu}}\|M_{X}^{\mu}-M_{Y}^{\mu}\|_{\ell_{2}(\mathbb{N}^{d-1})\to\ell_{2}(\mathbb{N})}\leq\frac{1}{\sigma_{r_{\mu}}^{\mu}}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}, \tag{4.13}\] where again \(M_{X}^{\mu}\) and \(M_{Y}^{\mu}\) denote the matricizations of \(X\) and \(Y\), and \(\sigma_{r_{\mu}}^{\mu}=\sigma_{r_{\mu}}^{\mu}(X)\) denotes the smallest positive singular value of \(M_{X}^{\mu}\) as in (4.10).
Proof of Proposition 4.7.: Assume representations

\[X=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d},\quad Y=\tilde{C}\times_{1}\tilde{U}^{1}\times_{2}\cdots\times_{d}\tilde{U}^{d}\]

as in Proposition 4.2. By using polar decompositions \((U^{\mu})^{\mathsf{T}}\tilde{U}^{\mu}=Q^{\mu}S^{\mu}\), where \(Q^{\mu}\) is orthogonal and \(S^{\mu}\) is positive semidefinite, we can replace the \(U^{\mu}\) with \(U^{\mu}Q^{\mu}\) and the core tensor \(C\) accordingly such that \((U^{\mu})^{\mathsf{T}}\tilde{U}^{\mu}\) is positive semidefinite, which we assume to be the case for all \(\mu=1,\ldots,d\). By Proposition 4.2,

\[P_{X}-P_{Y}=P_{X}^{0}-P_{Y}^{0}+\sum_{\mu=1}^{d}P_{X}^{\mu}-P_{Y}^{\mu}.\]

We will estimate the single differences separately. Applying the triangle inequality will then prove the assertion.

We first consider any of the projector differences \(P_{X}^{\mu}-P_{Y}^{\mu}\) for \(\mu=1,\ldots,d\). By (4.8), they can be written in the \(\mu\)-th matricization space as

\[P_{X}^{\mu}-P_{Y}^{\mu} =(\operatorname{id}-P_{\mathcal{U}^{\mu}})\otimes P_{\mathcal{V}^{\mu}}-(\operatorname{id}-P_{\tilde{\mathcal{U}}^{\mu}})\otimes P_{\tilde{\mathcal{V}}^{\mu}}\]
\[=(\operatorname{id}-P_{\mathcal{U}^{\mu}})\otimes(P_{\mathcal{V}^{\mu}}-P_{\tilde{\mathcal{V}}^{\mu}})+(P_{\tilde{\mathcal{U}}^{\mu}}-P_{\mathcal{U}^{\mu}})\otimes P_{\tilde{\mathcal{V}}^{\mu}}.\]

We have

\[\|P_{\tilde{\mathcal{U}}^{\mu}}-P_{\mathcal{U}^{\mu}}\|_{\ell_{2}(\mathbb{N})\to\ell_{2}(\mathbb{N})}\leq\frac{1}{\sigma_{r_{\mu}}^{\mu}}\|M_{X}^{\mu}-M_{Y}^{\mu}\|_{\ell_{2}(\mathbb{N}^{d-1})\to\ell_{2}(\mathbb{N})}\leq\frac{1}{\sigma_{r_{\mu}}^{\mu}}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}, \tag{4.13}\]

where again \(M_{X}^{\mu}\) and \(M_{Y}^{\mu}\) denote the matricizations of \(X\) and \(Y\) and \(\sigma_{r_{\mu}}^{\mu}=\sigma_{r_{\mu}}^{\mu}(X)\) denotes the smallest positive singular value of \(M_{X}^{\mu}\) as in (4.10). For the first inequality see, for example, the proof of [3, Lemma A.2]; the second one is trivial. The same upper bound holds for \(\|P_{\tilde{\mathcal{V}}^{\mu}}-P_{\mathcal{V}^{\mu}}\|_{\ell_{2}(\mathbb{N}^{d-1})\to\ell_{2}(\mathbb{N}^{d-1})}\). Thus we conclude

\[\|P_{X}^{\mu}-P_{Y}^{\mu}\|_{\ell_{2}(\mathbb{N}^{d})\to\ell_{2}(\mathbb{N}^{d})}\leq\frac{2}{\sigma_{r_{\mu}}^{\mu}}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\,.\]

We now proceed with estimating the operator norm of the difference \(P_{X}^{0}-P_{Y}^{0}\). By (4.9),

\[(P_{X}^{0}-P_{Y}^{0})(Z) =P_{C}(C_{Z})\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}-P_{\tilde{C}}(\tilde{C}_{Z})\times_{1}\tilde{U}^{1}\times_{2}\cdots\times_{d}\tilde{U}^{d}\]
\[=[P_{C}(C_{Z})-P_{\tilde{C}}(\tilde{C}_{Z})]\times_{1}\tilde{U}^{1}\times_{2}\cdots\times_{d}\tilde{U}^{d}\]
\[\qquad+P_{C}(C_{Z})\times_{1}[U^{1}-\tilde{U}^{1}]\times_{2}\tilde{U}^{2}\times_{3}\cdots\times_{d}\tilde{U}^{d}\]
\[\qquad\vdots\]
\[\qquad+P_{C}(C_{Z})\times_{1}U^{1}\times_{2}\cdots\times_{d-1}U^{d-1}\times_{d}[U^{d}-\tilde{U}^{d}]\]

where \(C_{Z}=Z\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}\) and similarly for \(\tilde{C}_{Z}\). The first term on the right-hand side is bounded by

\[\|P_{C}(C_{Z})-P_{\tilde{C}}(\tilde{C}_{Z})\|\leq\|(P_{C}-P_{\tilde{C}})C_{Z}\|+\|C_{Z}-\tilde{C}_{Z}\|,\]

since \(U^{\mu}\) and \(\tilde{U}^{\mu}\) have orthonormal columns and hence spectral norm one. For the other terms we use Lemma 4.8(i) and (4.13), which leads to

\[\|P_{C}(C_{Z})\times_{1}[U^{1}-\tilde{U}^{1}]\times_{2}\tilde{U}^{2}\times_{3}\cdots\times_{d}\tilde{U}^{d}\|_{\ell_{2}(\mathbb{N}^{d})}\leq\sqrt{2}\|P_{\mathcal{U}^{1}}-P_{\tilde{\mathcal{U}}^{1}}\|_{\ell_{2}(\mathbb{N})\to\ell_{2}(\mathbb{N})}\|P_{C}(C_{Z})\|\]
\[\leq\frac{\sqrt{2}}{\sigma_{r_{1}}^{1}(X)}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\|C_{Z}\|\leq\frac{\sqrt{2}}{\sigma_{r_{1}}^{1}}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\|Z\|_{\ell_{2}(\mathbb{N}^{d})}\]

and we proceed similarly for the remaining modes. So far we have shown

\[\|(P_{X}^{0}-P_{Y}^{0})(Z)\|_{\ell_{2}(\mathbb{N}^{d})}\leq\|(P_{C}-P_{\tilde{C}})C_{Z}\|+\|C_{Z}-\tilde{C}_{Z}\|\\ +\sqrt{2}\left(\frac{1}{\sigma_{r_{1}}^{1}}+\cdots+\frac{1}{\sigma_{r_{d}}^{d}}\right)\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\|Z\|_{\ell_{2}(\mathbb{N}^{d})}.\]

It remains to estimate \(\|C_{Z}-\tilde{C}_{Z}\|\) and \(\|(P_{C}-P_{\tilde{C}})C_{Z}\|\). Using again a telescopic expansion of

\[C_{Z}-\tilde{C}_{Z}=Z\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}-Z\times_{1}(\tilde{U}^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(\tilde{U}^{d})^{\mathsf{T}},\]

one obtains in a similar way as above that

\[\|C_{Z}-\tilde{C}_{Z}\|\leq\sqrt{2}\left(\frac{1}{\sigma_{r_{1}}^{1}}+\cdots+\frac{1}{\sigma_{r_{d}}^{d}}\right)\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\|Z\|_{\ell_{2}(\mathbb{N}^{d})}.\]

We need to bound \(\|(P_{C}-P_{\tilde{C}})C_{Z}\|\) in terms of \(\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\). Note that

\[X-Y=C\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}-\tilde{C}\times_{1}\tilde{U}^{1}\times_{2}\cdots\times_{d}\tilde{U}^{d},\]

where \(U^{\mu}\) and \(\tilde{U}^{\mu}\) satisfy the assumptions in Lemma 4.8. It follows from Lemma 4.8(ii) that \(\|C-\tilde{C}\|\leq\sqrt{2}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\).
Hence by (4.12) and Proposition 4.6 we have

\[\|(P_{C}-P_{\tilde{C}})C_{Z}\|\leq\frac{c}{\sigma}\|C-\tilde{C}\|\|C_{Z}\|\leq\frac{\sqrt{2}c}{\sigma}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\|Z\|_{\ell_{2}(\mathbb{N}^{d})}.\]

In total, we have

\[\|(P_{X}^{0}-P_{Y}^{0})(Z)\|_{\ell_{2}(\mathbb{N}^{d})}\leq\sqrt{2}\left(\frac{c}{\sigma}+\frac{2}{\sigma_{r_{1}}^{1}}+\cdots+\frac{2}{\sigma_{r_{d}}^{d}}\right)\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\|Z\|_{\ell_{2}(\mathbb{N}^{d})}.\]

In summary, this allows us to conclude the asserted curvature estimate.

In Assumption **A2** we also need an estimate for the projection \(\operatorname{id}-P_{X}\).

**Proposition 4.9**.: _Assume a curvature estimate of the form_

\[\|(\operatorname{id}-P_{\tilde{C}})(C-\tilde{C})\|\leq\frac{c}{\hat{\sigma}}\|C-\tilde{C}\|^{2}\quad\text{for all $C\in\mathcal{M}_{\mathrm{c}}$ and $\tilde{C}\in\overline{\mathcal{M}_{\mathrm{c}}}$,}\]

_where \(\hat{\sigma}=\operatorname{dist}(C,\overline{\mathcal{M}_{\mathrm{c}}}\setminus\mathcal{M}_{\mathrm{c}})\) and where \(c>0\) is independent of \(C,\tilde{C}\). Let \(X,Y\in\mathcal{M}\) with corresponding tangent space projections \(P_{X}\) and \(P_{Y}\). Then_

\[\|(\operatorname{id}-P_{X})(X-Y)\|_{\ell_{2}(\mathbb{N}^{d})}\leq\sqrt{\frac{c^{2}}{\sigma^{2}}+\sum_{\mu=1}^{d}\frac{1}{(\sigma_{r_{\mu}}^{\mu})^{2}}}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}^{2}\leq\frac{\sqrt{d+c^{2}}}{\sigma}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}^{2}\]

_where \(\sigma=\operatorname{dist}(X,\overline{\mathcal{M}}^{w}\setminus\mathcal{M})\)._

Proof.: We use the same notation as in the proof of Proposition 4.7. We decompose the identity into

\[\operatorname{id}=P_{\mathcal{U}^{1}\otimes\cdots\otimes\mathcal{U}^{d}}+P_{(\mathcal{U}^{1})^{\perp}\otimes\cdots\otimes\mathcal{U}^{d}}+P_{\ell_{2}(\mathbb{N})\otimes(\mathcal{U}^{2})^{\perp}\otimes\cdots\otimes\mathcal{U}^{d}}+\ldots+P_{\ell_{2}(\mathbb{N}^{d-1})\otimes(\mathcal{U}^{d})^{\perp}}. \tag{4.14}\]

Then

\[(\operatorname{id}-P_{X})(X-Y)=(P_{\mathcal{U}^{1}\otimes\cdots\otimes\mathcal{U}^{d}}-P_{X}^{0})(X-Y)+(P_{(\mathcal{U}^{1})^{\perp}\otimes\cdots\otimes\mathcal{U}^{d}}-P_{X}^{1})(X-Y)\]
\[\qquad\qquad+(P_{\ell_{2}(\mathbb{N})\otimes(\mathcal{U}^{2})^{\perp}\otimes\cdots\otimes\mathcal{U}^{d}}-P_{X}^{2})(X-Y)+\ldots+(P_{\ell_{2}(\mathbb{N}^{d-1})\otimes(\mathcal{U}^{d})^{\perp}}-P_{X}^{d})(X-Y)\]

holds. For the first summand, we have

\[(P_{\mathcal{U}^{1}\otimes\cdots\otimes\mathcal{U}^{d}}-P_{X}^{0})(X-Y)=\Big((\operatorname{id}-P_{C})\big[(X-Y)\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}\big]\Big)\times_{1}U^{1}\times_{2}\cdots\times_{d}U^{d}\]

where \(\|(X-Y)\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}\|\leq\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\). Since \(\operatorname{dist}(X,\overline{\mathcal{M}}^{w}\setminus\mathcal{M})=\operatorname{dist}(C,\overline{\mathcal{M}_{\mathrm{c}}}\setminus\mathcal{M}_{\mathrm{c}})\) by Proposition 4.6, we have

\[\|(P_{\mathcal{U}^{1}\otimes\cdots\otimes\mathcal{U}^{d}}-P_{X}^{0})(X-Y)\|_{\ell_{2}(\mathbb{N}^{d})}\leq\frac{c}{\sigma}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}^{2},\]

where we use that \(Y\times_{1}(U^{1})^{\mathsf{T}}\times_{2}\cdots\times_{d}(U^{d})^{\mathsf{T}}\in\overline{\mathcal{M}_{\mathrm{c}}}\).

For the next summand, we obtain

\[(P_{(\mathcal{U}^{1})^{\perp}\otimes\cdots\otimes\mathcal{U}^{d}}-P_{X}^{1})(X-Y)=P_{(\mathcal{U}^{1})^{\perp}}\otimes(\operatorname{id}_{\ell_{2}(\mathbb{N}^{d-1})}-P_{\mathcal{V}^{1}})(X-\operatorname{id}_{\ell_{2}(\mathbb{N})}\otimes P_{\mathcal{U}^{2}\otimes\cdots\otimes\mathcal{U}^{d}}Y).\]

Since \(X=(\operatorname{id}_{\ell_{2}(\mathbb{N})}\otimes P_{\mathcal{U}^{2}\otimes\cdots\otimes\mathcal{U}^{d}})X\), we have \(\|X-\operatorname{id}_{\ell_{2}(\mathbb{N})}\otimes P_{\mathcal{U}^{2}\otimes\cdots\otimes\mathcal{U}^{d}}Y\|_{\ell_{2}(\mathbb{N}^{d})}\leq\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}\) and \(\operatorname{id}_{\ell_{2}(\mathbb{N})}\otimes P_{\mathcal{U}^{2}\otimes\cdots\otimes\mathcal{U}^{d}}Y\in\overline{\mathcal{M}}^{w}\). If \(\operatorname{id}_{\ell_{2}(\mathbb{N})}\otimes P_{\mathcal{U}^{2}\otimes\cdots\otimes\mathcal{U}^{d}}Y\in\mathcal{M}\), then we have corresponding spaces \(\tilde{\mathcal{U}}^{1}\) and \(\tilde{\mathcal{V}}^{1}\) and

\[\|P_{\mathcal{V}^{1}}-P_{\tilde{\mathcal{V}}^{1}}\|_{\ell_{2}(\mathbb{N}^{d-1})\to\ell_{2}(\mathbb{N}^{d-1})}\leq\frac{1}{\sigma_{r_{1}}^{1}}\|X-\operatorname{id}_{\ell_{2}(\mathbb{N})}\otimes P_{\mathcal{U}^{2}\otimes\cdots\otimes\mathcal{U}^{d}}Y\|_{\ell_{2}(\mathbb{N}^{d})}\leq\frac{1}{\sigma_{r_{1}}^{1}}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}.\]

As a consequence,

\[\|(P_{(\mathcal{U}^{1})^{\perp}\otimes\cdots\otimes\mathcal{U}^{d}}-P_{X}^{1})(X-Y)\|_{\ell_{2}(\mathbb{N}^{d})}\leq\frac{1}{\sigma_{r_{1}}^{1}}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}^{2}.\]

If \(\operatorname{id}_{\ell_{2}(\mathbb{N})}\otimes P_{\mathcal{U}^{2}\otimes\cdots\otimes\mathcal{U}^{d}}Y\notin\mathcal{M}\), the same estimate follows by continuity of the linear operator \(P_{(\mathcal{U}^{1})^{\perp}\otimes\cdots\otimes\mathcal{U}^{d}}-P_{X}^{1}\). Similar considerations show

\[\|(P_{\ell_{2}(\mathbb{N}^{\mu-1})\otimes(\mathcal{U}^{\mu})^{\perp}\otimes\cdots\otimes\mathcal{U}^{d}}-P_{X}^{\mu})(X-Y)\|_{\ell_{2}(\mathbb{N}^{d})}\leq\frac{1}{\sigma_{r_{\mu}}^{\mu}}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}^{2}\]

for \(\mu=2,\ldots,d\). Finally, we obtain

\[\|(\operatorname{id}-P_{X})(X-Y)\|_{\ell_{2}(\mathbb{N}^{d})}\leq\sqrt{\frac{c^{2}}{\sigma^{2}}+\sum_{\mu=1}^{d}\frac{1}{(\sigma_{r_{\mu}}^{\mu})^{2}}}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}^{2}\]

utilizing that the images of the operators appearing in (4.14) are orthogonal. The final inequality follows with Proposition 4.6.
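The second-order (tangency) nature of this bound is again easy to observe numerically for fixed-rank matrices. In the following sketch (our illustration, assuming NumPy; the truncated SVD serves as a retraction onto the manifold), the ratio \(\|(\operatorname{id}-P_{X})(X-Y)\|/\|X-Y\|^{2}\) stays bounded by a multiple of \(1/\sigma\) as \(Y\to X\):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 30, 25, 4

def truncate(A, r):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def project_tangent(X, r, Z):
    # orthogonal projection of Z onto the tangent space at a rank-r matrix X
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    PU = U[:, :r] @ U[:, :r].T
    PV = Vt[:r, :].T @ Vt[:r, :]
    return PU @ Z + Z @ PV - PU @ Z @ PV

X = truncate(rng.standard_normal((m, n)), r)
sigma = np.linalg.svd(X, compute_uv=False)[r - 1]   # distance to the boundary
E = rng.standard_normal((m, n))
for t in (1e-1, 1e-2, 1e-3):
    Y = truncate(X + t * E, r)                      # nearby rank-r point
    D = X - Y
    resid = np.linalg.norm(D - project_tangent(X, r, D))
    print(t, resid / np.linalg.norm(D) ** 2, "<~", 1.0 / sigma)
```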
### Application to tensor train manifolds

In order to give the above estimates a more concrete meaning, we now consider the popular example of the fixed-rank tensor-train (TT) format discussed in the introduction. Here \(\mathcal{M}_{\mathrm{c}}=\mathcal{M}_{\mathbf{k}}\) consists of all finite-dimensional tensors \(C\in\mathbb{R}^{r_{1}\times\cdots\times r_{d}}\) with the fixed TT rank \(\mathbf{k}\) of the form (1.3). We denote the resulting manifold \(\mathcal{M}\) in (4.1) by \(\mathcal{M}_{\mathbf{r},\mathbf{k}}\). It thus contains infinite tensors in \(\ell_{2}(\mathbb{N}^{d})\) of "outer" multilinear rank \(\mathbf{r}=(r_{1},\ldots,r_{d})\) and "inner" TT rank \(\mathbf{k}=(k_{1},\ldots,k_{d-1})\). This can be seen as a special case of the hierarchical tensor format with linear dimension tree, see [2, Rem. 2.27].
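To make the format concrete, the following NumPy sketch (a finite-dimensional illustration of ours; the dimensions and rank tuples are arbitrary choices) assembles an element of \(\mathcal{M}_{\mathbf{r},\mathbf{k}}\) for \(d=4\) from TT cores of the core tensor and orthonormal mode frames, and then reads off both kinds of ranks from matricizations; note that the first and last inner ranks coincide with \(r_{1}\) and \(r_{d}\), since the corresponding unfoldings agree.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6                       # finite surrogate size for each l2(N) factor
r = (2, 3, 3, 2)            # "outer" multilinear ranks
k = (2, 3, 2)               # "inner" TT ranks of the core (k1 = r1, k3 = r4)

# TT cores of the core tensor C, cf. (1.3)
G = [rng.standard_normal(s) for s in
     [(1, r[0], k[0]), (k[0], r[1], k[1]), (k[1], r[2], k[2]), (k[2], r[3], 1)]]
C = np.einsum('aib,bjc,ckd,dle->ijkl', *G)

# orthonormal mode frames U_mu and the lifted tensor X = C x_1 U1 ... x_4 U4
U = [np.linalg.qr(rng.standard_normal((N, ri)))[0] for ri in r]
X = np.einsum('ijkl,ai,bj,ck,dl->abcd', C, *U)

# mode-mu matricizations recover the outer ranks r = (2, 3, 3, 2) ...
print([np.linalg.matrix_rank(np.moveaxis(X, m, 0).reshape(N, -1)) for m in range(4)])
# ... and the sequential unfoldings recover the inner TT ranks k = (2, 3, 2)
print([np.linalg.matrix_rank(X.reshape(N ** m, -1)) for m in (1, 2, 3)])
```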
**Proposition 4.10**.: _Let \(X,Y\in\mathcal{M}_{\mathbf{r},\mathbf{k}}\) with corresponding tangent space projections \(P_{X}\) and \(P_{Y}\). Then_

\[\|P_{X}-P_{Y}\|_{\ell_{2}(\mathbb{N}^{d})\to\ell_{2}(\mathbb{N}^{d})} \leq\frac{2d(3\sqrt{2}+1)}{\sigma}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})},\]
\[\|(\operatorname{id}-P_{X})(X-Y)\|_{\ell_{2}(\mathbb{N}^{d})} \leq\frac{\sqrt{2d-1}}{\sigma}\|X-Y\|_{\ell_{2}(\mathbb{N}^{d})}^{2},\]

_where \(\sigma=\operatorname{dist}(X,\overline{\mathcal{M}_{\mathbf{r},\mathbf{k}}}^{w}\setminus\mathcal{M}_{\mathbf{r},\mathbf{k}})\)._

The result follows directly from Propositions 4.7 and 4.9 and the following refined curvature estimates for the finite-dimensional TT manifold \(\mathcal{M}_{\mathbf{k}}\), which under the given assumptions seem to be new.

**Proposition 4.11**.: _Let \(\mathcal{M}_{\mathbf{k}}\subset\mathbb{R}^{N_{1}\times\cdots\times N_{d}}\) be a finite-dimensional TT manifold of fixed TT rank \(\mathbf{k}\). Let \(X,Y\in\mathcal{M}_{\mathbf{k}}\) and \(\sigma=\operatorname{dist}(X,\overline{\mathcal{M}}_{\mathbf{k}}\setminus\mathcal{M}_{\mathbf{k}})\). Then_

\[\max_{\|Z\|=1}\|(P_{X}-P_{Y})Z\|\leq\frac{4d}{\sigma}\|X-Y\|\quad\text{and}\quad\|(\operatorname{id}-P_{X})(X-Y)\|\leq\frac{\sqrt{d-1}}{\sigma}\|X-Y\|^{2}.\]

_Furthermore, the last inequality holds more generally for \(Y\in\overline{\mathcal{M}}_{\mathbf{k}}\)._

The proof is given in the appendix.

## 5. Application to the model problem

We now return to the model problem (1.6) under the regularity assumption **A0**.

**Problem 5.1**.: Given \(f\in L_{2}(0,T;L_{2}(\Omega))\) and \(u_{0}\in\mathcal{M}\cap H^{1}_{0}(\Omega)\), find

\[u\in W(0,T;H^{1}_{0}(\Omega),L_{2}(\Omega))=\{u\in L_{2}(0,T;H^{1}_{0}(\Omega))\colon u^{\prime}\in L_{2}(0,T;H^{-1}(\Omega))\}\]

such that for almost all \(t\in[0,T]\),

\[u(t) \in\mathcal{M},\]
\[\langle u^{\prime}(t),v\rangle+a(u(t),v;t) =\langle f(t),v\rangle\quad\text{for all }v\in T_{u(t)}\mathcal{M}\cap H^{1}_{0}(\Omega), \tag{5.1}\]
\[u(0) =u_{0}.\]

Here \(\mathcal{M}\) is a manifold of functions in \(L_{2}(\Omega)=L_{2}(\Omega_{1}\times\cdots\times\Omega_{d})\) as described in Section 4 and

\[a(u,v;t)=\int_{\Omega}(B(t)\nabla u(x))\cdot\nabla v(x)\,dx\]

with a symmetric positive definite matrix \(B(t)\) that is entrywise Lipschitz continuous in \(t\).

### Discussion of main assumptions

Our goal is to apply Theorems 2.2, 2.5 and 3.2 to Problem 5.1. It suffices to verify the Assumptions **A1**-**A4**, **B1**, and **B2**. Assumption **A1** holds since \(\mathcal{M}_{\mathrm{c}}\) is a cone, which follows from the invariance assumption (4.2). We already proved **A2** in Proposition 4.7 and Proposition 4.9 assuming that corresponding curvature bounds for \(\mathcal{M}_{\mathrm{c}}\) are available. Indeed, in the special case where \(\mathcal{M}_{\mathrm{c}}\) is the manifold \(\mathcal{M}_{\mathbf{k}}\) of tensors with fixed TT rank, such curvature bounds are stated in Proposition 4.11.

We now consider the remaining Assumptions **A3** and **A4** as well as **B1** and **B2**. We make use of the following well-known technique for estimating norms of factors in low-rank representations, see for example [31].

**Lemma 5.2**.: _Let \(u\in H^{1}_{0}(\Omega)\) admit a singular value decomposition_

\[u(x)=\sum_{k=1}^{r_{\nu}}\sigma_{k}u_{1,k}(x_{\nu})u_{2,k}(x_{\{1,\ldots,d\}\setminus\{\nu\}})\]

_with respect to the \(\nu\)-th variable._
_Then the singular vectors satisfy \(u_{1,k}\in H^{1}_{0}(\Omega_{\nu})\) and \(u_{2,k}\in H^{1}_{0}(\bigtimes_{\mu\neq\nu}\Omega_{\mu})\) with_

\[\|u_{1,k}\|_{H^{1}_{0}(\Omega_{\nu})}\leq\frac{1}{\sigma_{k}}\|u\|_{H^{1}_{0}(\Omega)}\quad\text{and}\quad\|u_{2,k}\|_{H^{1}_{0}(\bigtimes_{\mu\neq\nu}\Omega_{\mu})}\leq\frac{1}{\sigma_{k}}\|u\|_{H^{1}_{0}(\Omega)}.\]

Proof.: We state the proof for \(\nu=1\). Then

\[\sigma_{k}u_{1,k}(x_{1})=\int_{\bigtimes_{\mu=2}^{d}\Omega_{\mu}}u(x_{1},x_{2},\ldots,x_{d})u_{2,k}(x_{2},\ldots,x_{d})\,d(x_{2},\ldots,x_{d})\]

and

\[\sigma_{k}u_{2,k}(x_{2},\ldots,x_{d})=\int_{\Omega_{1}}u(x_{1},x_{2},\ldots,x_{d})u_{1,k}(x_{1})\,dx_{1}.\]

By the Cauchy-Schwarz inequality, we have

\[\sigma_{k}^{2}\|u_{1,k}\|_{H^{1}_{0}(\Omega_{1})}^{2}=\sigma_{k}^{2}\int_{\Omega_{1}}\left|\nabla_{x_{1}}u_{1,k}\right|^{2}\,dx_{1}=\int_{\Omega_{1}}\left|\int_{\bigtimes_{\mu=2}^{d}\Omega_{\mu}}\nabla_{x_{1}}u\;u_{2,k}\,d(x_{2},\ldots,x_{d})\right|^{2}\,dx_{1}\]
\[\qquad\leq\|u_{2,k}\|_{L_{2}(\bigtimes_{\mu=2}^{d}\Omega_{\mu})}^{2}\int_{\Omega}\left|\nabla_{x_{1}}u\right|^{2}\,d(x_{1},\ldots,x_{d})=\|u\|_{H^{1}_{0}(\Omega_{1})\otimes L_{2}(\bigtimes_{\mu=2}^{d}\Omega_{\mu})}^{2}\leq\|u\|_{H^{1}_{0}(\Omega)}^{2}.\]

This proves the first estimate. The other one follows analogously.

#### 5.1.1. Assumption **A3**: Compatibility of tangent spaces

We now verify **A3**(a).

**Lemma 5.3**.: _Let \(u\in H^{1}_{0}(\Omega)\cap\mathcal{M}\) and \(v\in H^{1}_{0}(\Omega)\cap T_{u}\mathcal{M}\). Then there exists a curve \(\varphi\in C^{1}(-\epsilon,\epsilon,H^{1}_{0}(\Omega)\cap\mathcal{M})\) with \(\varphi(0)=u\) and \(\varphi^{\prime}(0)=v\)._

Proof.: Let \(v_{\nu}=P_{\nu}v\) for \(\nu=0,\dots,d\). Note that \(u+tv_{\nu}\in\mathcal{M}\) for sufficiently small \(|t|\) and \(\nu=1,\dots,d\). We can write

\[u+tv_{\nu}=\sum_{k_{1}=1}^{r_{1}}\cdots\sum_{k_{d}=1}^{r_{d}}C(k_{1},\dots,k_{d})(t)\,u^{1}_{k_{1}}\otimes\cdots\otimes u^{\nu-1}_{k_{\nu-1}}\otimes u^{\nu}_{k_{\nu}}(t)\otimes u^{\nu+1}_{k_{\nu+1}}\otimes\cdots\otimes u^{d}_{k_{d}},\]

where the \(u^{\mu}_{k_{\mu}}\) are \(L_{2}\)-orthogonal. Invoking Lemma 5.2, we conclude that \(u^{\nu}_{k_{\nu}}(t)\in H^{1}_{0}(\Omega_{\nu})\). Moreover, \(C(k_{1},\dots,k_{d})^{\prime}(0)=0\) and hence

\[v_{\nu}=\sum_{k_{1}=1}^{r_{1}}\cdots\sum_{k_{d}=1}^{r_{d}}C(k_{1},\dots,k_{d})(0)\,u^{1}_{k_{1}}\otimes\cdots\otimes u^{\nu-1}_{k_{\nu-1}}\otimes(u^{\nu}_{k_{\nu}})^{\prime}(0)\otimes u^{\nu+1}_{k_{\nu+1}}\otimes\cdots\otimes u^{d}_{k_{d}}.\]

Furthermore, there exists a curve \(D\in C^{1}(-\epsilon,\epsilon,\mathcal{M}_{\rm c})\) such that

\[\varphi_{0}(t)=\sum_{k_{1}=1}^{r_{1}}\cdots\sum_{k_{d}=1}^{r_{d}}D(k_{1},\dots,k_{d})(t)\,u^{1}_{k_{1}}(0)\otimes\cdots\otimes u^{d}_{k_{d}}(0)\]

satisfies \(\varphi_{0}(0)=u\) and \(\varphi^{\prime}_{0}(0)=v_{0}\). We now choose

\[\varphi(t)=\sum_{k_{1}=1}^{r_{1}}\cdots\sum_{k_{d}=1}^{r_{d}}D(k_{1},\dots,k_{d})(t)\,u^{1}_{k_{1}}(t)\otimes\cdots\otimes u^{d}_{k_{d}}(t),\]

which is a differentiable curve in \(\mathcal{M}\cap H^{1}_{0}(\Omega)\). By the product rule, it satisfies \(\varphi(0)=u\) and \(\varphi^{\prime}(0)=v\).

Assumption **A3**(b), which states that for \(u\in H^{1}_{0}(\Omega)\cap\mathcal{M}\) the \(L_{2}\)-orthogonal projection onto its tangent space is also a bounded operator with respect to the \(H^{1}_{0}\)-norm, follows with a similar technique as Lemma 5.2.

**Proposition 5.4**.: _Let \(\mathcal{M}\) be of the form (4.1), let \(u\in H^{1}_{0}(\Omega)\cap\mathcal{M}\) and \(v\in H^{1}_{0}(\Omega)\)._
_Let \(P^{0}_{u},\dots,P^{d}_{u}\) be the projections in (4.7). Then_

\[\|P^{\nu}_{u}v\|_{H^{1}_{0}(\Omega)}\leq C\|v\|_{H^{1}_{0}(\Omega)}\]

_for \(\nu=0,\dots,d\), where \(C\) only depends on \(\sigma=\operatorname{dist}(u,\overline{\mathcal{M}}^{w}\setminus\mathcal{M})\), \(\Omega\) and \(\|u\|_{H^{1}_{0}(\Omega)}\)._

Proof.: First, we consider \(P^{0}_{u}\). We note the norm bound

\[\|C_{Z}\|\leq\|Z\|_{\ell_{2}(\mathbb{N}^{d})}\]

in the definition of \(P^{0}_{X}\) in Proposition 4.2. In terms of the represented function,

\[\|C_{v}\|\leq\|v\|_{L_{2}(\Omega)}.\]

Furthermore, we have

\[\|P_{C}(C_{v})\|\leq\|C_{v}\|\]

since \(P_{C}\) is an \(\ell_{2}\)-orthogonal projection. We now consider the summands of

\[\|P^{0}_{u}v\|^{2}_{H^{1}_{0}(\Omega)}=\|P^{0}_{u}v\|^{2}_{H^{1}_{0}(\Omega_{1})\otimes L_{2}(\times_{\mu=2}^{d}\Omega_{\mu})}+\dots+\|P^{0}_{u}v\|^{2}_{L_{2}(\times_{\mu=1}^{d-1}\Omega_{\mu})\otimes H^{1}_{0}(\Omega_{d})}\]

independently. Let

\[u(x)=\sum_{k=1}^{r_{1}}\sum_{\ell=1}^{r_{1}}a_{k\ell}u_{1,k}(x_{1})u_{2,\ell}(x_{2},\ldots,x_{d}) \tag{5.2}\]

be a decomposition of \(u\) separating the first variable, where \(\{u_{1,k}\}\) and \(\{u_{2,\ell}\}\) are \(L_{2}\)-orthonormal (as, for example, in a singular value decomposition). After an orthogonal change of basis, we may assume the vectors \(u_{1,k}\) to be both \(L_{2}\)-orthonormal and \(H^{1}_{0}\)-orthogonal. Note that the basis vectors are given by

\[u_{1,k}=\int_{\times_{\mu=2}^{d}\,\Omega_{\mu}}u\;\sum_{\ell=1}^{r_{1}}b_{k\ell}u_{2,\ell}\,d(x_{2},\ldots,x_{d})\quad\text{and}\quad u_{2,\ell}=\int_{\Omega_{1}}u\;\sum_{k=1}^{r_{1}}b_{k\ell}u_{1,k}\,dx_{1}\]

where \(\sum_{\ell=1}^{r_{1}}a_{k_{1}\ell}b_{k_{2}\ell}=\delta_{k_{1},k_{2}}\). The singular values of the corresponding decomposition satisfy \(\sigma_{k}\geq\sigma\) for \(k=1,\ldots,r_{1}\), and hence the vectors in (5.2) satisfy

\[\begin{split}\sigma\|u_{1,k}\|_{H^{1}_{0}(\Omega_{1})}&\leq\|u\|_{H^{1}_{0}(\Omega_{1})\otimes L_{2}(\times_{\mu=2}^{d}\,\Omega_{\mu})},\\ \sigma\|u_{2,k}\|_{H^{1}_{0}(\times_{\mu=2}^{d}\,\Omega_{\mu})}&\leq\|u\|_{L_{2}(\Omega_{1})\otimes H^{1}_{0}(\times_{\mu=2}^{d}\,\Omega_{\mu})},\end{split} \tag{5.3}\]

since \(\sigma\|\sum_{k=1}^{r_{1}}b_{k\ell}u_{1,k}\|_{L_{2}(\Omega_{1})}\leq 1\), \(\sigma\|\sum_{\ell=1}^{r_{1}}b_{k\ell}u_{2,\ell}\|_{L_{2}(\times_{\mu=2}^{d}\,\Omega_{\mu})}\leq 1\) and by the last argument of the proof of Lemma 5.2.

By \(L_{2}\)- and \(H^{1}_{0}\)-orthogonality and (5.3), we obtain the estimate

\[\begin{split}\|P^{0}_{u}v\|_{H^{1}_{0}(\Omega_{1})\otimes L_{2}(\times_{\mu=2}^{d}\,\Omega_{\mu})}^{2}\\ &\qquad=\bigg\|\sum_{k_{1}=1}^{r_{1}}\cdots\sum_{k_{d}=1}^{r_{d}}P_{C}(C_{v})(k_{1},\ldots,k_{d})\,\partial_{x_{1}}u_{k_{1}}^{1}\otimes u_{k_{2}}^{2}\otimes\cdots\otimes u_{k_{d}}^{d}\bigg\|_{L_{2}(\Omega)}^{2}\\ &\qquad=\sum_{k_{1}=1}^{r_{1}}\bigg\|\sum_{k_{2}=1}^{r_{2}}\cdots\sum_{k_{d}=1}^{r_{d}}P_{C}(C_{v})(k_{1},\ldots,k_{d})\,u_{k_{2}}^{2}\otimes\cdots\otimes u_{k_{d}}^{d}\bigg\|_{L_{2}(\times_{\mu=2}^{d}\,\Omega_{\mu})}^{2}\|u_{k_{1}}^{1}\|_{H^{1}_{0}(\Omega_{1})}^{2}\\ &\qquad\leq\frac{1}{\sigma^{2}}\|P_{C}(C_{v})\|^{2}\|u\|_{H^{1}_{0}(\Omega_{1})\otimes L_{2}(\times_{\mu=2}^{d}\,\Omega_{\mu})}^{2}.\end{split}\]

Using the Poincaré inequality, we have

\[\|P_{C}(C_{v})\|\leq\|C_{v}\|=\|v\|_{L_{2}(\Omega)}\leq C_{\Omega}\|v\|_{H^{1}_{0}(\Omega)}\]

with a \(C_{\Omega}>0\) depending only on \(\Omega\).
We thus arrive at

\[\|P^{0}_{u}v\|_{H^{1}_{0}(\Omega_{1})\otimes L_{2}(\times_{\mu=2}^{d}\,\Omega_{\mu})}\leq\frac{C_{\Omega}}{\sigma}\|v\|_{H^{1}_{0}(\Omega)}\|u\|_{H^{1}_{0}(\Omega_{1})\otimes L_{2}(\times_{\mu=2}^{d}\,\Omega_{\mu})}.\]

Similarly,

\[\begin{split}\|P^{0}_{u}v\|_{L_{2}(\times_{\mu=1}^{\nu-1}\,\Omega_{\mu})\otimes H^{1}_{0}(\Omega_{\nu})\otimes L_{2}(\times_{\mu=\nu+1}^{d}\,\Omega_{\mu})}\\ &\qquad\qquad\leq\frac{C_{\Omega}}{\sigma}\|v\|_{H^{1}_{0}(\Omega)}\|u\|_{L_{2}(\times_{\mu=1}^{\nu-1}\,\Omega_{\mu})\otimes H^{1}_{0}(\Omega_{\nu})\otimes L_{2}(\times_{\mu=\nu+1}^{d}\,\Omega_{\mu})}.\end{split}\]

This yields

\[\|P^{0}_{u}v\|_{H^{1}_{0}(\Omega)}\leq\frac{C_{\Omega}}{\sigma}\|v\|_{H^{1}_{0}(\Omega)}\|u\|_{H^{1}_{0}(\Omega)}.\]

Next we consider \(P^{1}_{u}\); the estimates for the projections \(P^{2}_{u},\ldots,P^{d}_{u}\) follow in analogy. The action of \(P^{1}_{u}\) is given by the tensor product of \(L_{2}\)-orthogonal projections

\[P^{1}_{u}(v)=(\operatorname{id}-P_{1})\otimes P_{2}v,\]

where

\[(P_{1}w)(x_{1})=\sum_{k=1}^{r_{1}}u_{1,k}(x_{1})\int_{\Omega_{1}}u_{1,k}(y_{1})w(y_{1})\,dy_{1}\]

and

\[(P_{2}w)(x_{2},\ldots,x_{d})=\sum_{k=1}^{r_{1}}u_{2,k}(x_{2},\ldots,x_{d})\int_{\bigtimes_{\mu=2}^{d}\Omega_{\mu}}u_{2,k}(y_{2},\ldots,y_{d})\,w(y_{2},\ldots,y_{d})\,d(y_{2},\ldots,y_{d}).\]

Both \(P_{1}\) and \(P_{2}\) are \(L_{2}\)-orthogonal projections onto spaces spanned by functions whose \(H^{1}_{0}\)-norms are, by (5.3), bounded in terms of \(\frac{1}{\sigma}\|u\|_{H^{1}_{0}(\Omega)}\). Estimating the directional \(H^{1}_{0}\)-seminorms of \(P^{1}_{u}v=(\operatorname{id}-P_{1})\otimes P_{2}v\) one by one as in the case of \(P^{0}_{u}\), we again obtain \(\|P^{1}_{u}v\|_{H^{1}_{0}(\Omega)}\leq C\|v\|_{H^{1}_{0}(\Omega)}\) with \(C\) depending only on \(\sigma\), \(\Omega\) and \(\|u\|_{H^{1}_{0}(\Omega)}\), which completes the proof.
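The mechanism behind Lemma 5.2 and (5.3), namely that singular vectors of a function with \(H^{1}_{0}\)-regularity inherit \(H^{1}_{0}\)-bounds proportional to the inverse singular values, is easy to observe in a finite-difference discretization. The following sketch (grid size, difference operator, and the rank-two sample function are our illustrative choices) checks the discrete analogue for \(d=2\):

```python
import numpy as np

n = 200
x = np.linspace(0, 1, n + 2)[1:-1]                 # interior grid points
D = (np.eye(n, k=1) - np.eye(n)) * (n + 1)         # forward-difference d/dx

# a smooth rank-2 function u(x, y), sampled on the tensor grid
X, Y = np.meshgrid(x, x, indexing='ij')
u = np.sin(np.pi * X) * np.sin(np.pi * Y) \
    + 0.3 * np.sin(2 * np.pi * X) * np.sin(3 * np.pi * Y)

U, s, Vt = np.linalg.svd(u)                        # discrete SVD in x
dU = np.linalg.norm(D @ u, 'fro')                  # discrete || d/dx u ||
for k in range(2):                                  # u has rank 2
    lhs = np.linalg.norm(D @ U[:, k])
    print(lhs, "<=", dU / s[k])                     # discrete Lemma 5.2 bound
```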
#### 5.1.2. Assumption **A4**

**Lemma 5.5**.: _Let \(\mathcal{M}\) be of the form (4.1) and \(u\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\cap\mathcal{M}\). Then for matrices \(a_{\nu}\) of the corresponding size, we have \(\sum_{\nu=1}^{d}\nabla_{x_{\nu}}\cdot(a_{\nu}\nabla_{x_{\nu}}u)\in T_{u}\mathcal{M}\)._

Proof.: Since \(u\in\mathcal{M}\), we can write \(u\) as the sum

\[u=\sum_{k_{1}=1}^{r_{1}}\dots\sum_{k_{d}=1}^{r_{d}}C(k_{1},\dots,k_{d})u_{k_{1}}^{1}\otimes\dots\otimes u_{k_{d}}^{d}\]

and

\[\nabla_{x_{1}}\cdot(a_{1}\nabla_{x_{1}}u)=\sum_{k_{1}=1}^{r_{1}}\dots\sum_{k_{d}=1}^{r_{d}}C(k_{1},\dots,k_{d})\nabla_{x_{1}}\cdot(a_{1}\nabla_{x_{1}}u_{k_{1}}^{1})\otimes u_{k_{2}}^{2}\otimes\dots\otimes u_{k_{d}}^{d}.\]

We define \(\varphi(t)=u+t\nabla_{x_{1}}\cdot(a_{1}\nabla_{x_{1}}u)\). For sufficiently small \(|t|\), the Gramian of the system \(\{u_{1}^{1}+t\nabla_{x_{1}}\cdot(a_{1}\nabla_{x_{1}}u_{1}^{1}),u_{2}^{1}+t\nabla_{x_{1}}\cdot(a_{1}\nabla_{x_{1}}u_{2}^{1}),\dots,u_{r_{1}}^{1}+t\nabla_{x_{1}}\cdot(a_{1}\nabla_{x_{1}}u_{r_{1}}^{1})\}\) is invertible and hence \(\varphi(t)\in\mathcal{M}\) with \(\varphi^{\prime}(t)=\nabla_{x_{1}}\cdot(a_{1}\nabla_{x_{1}}u)\). Thus \(\nabla_{x_{1}}\cdot(a_{1}\nabla_{x_{1}}u)\in T_{u}\mathcal{M}\). Analogously, \(\nabla_{x_{\nu}}\cdot(a_{\nu}\nabla_{x_{\nu}}u)\in T_{u}\mathcal{M}\) for \(\nu=1,\dots,d\) and by linearity, \(\sum_{\nu=1}^{d}\nabla_{x_{\nu}}\cdot(a_{\nu}\nabla_{x_{\nu}}u)\in T_{u}\mathcal{M}\).

The assumption **A4**(a) now follows by a density argument. For a proof, choose a sequence \((u_{n})\subset\mathcal{M}\cap H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\) converging to \(u\) in \(H^{1}_{0}(\Omega)\)-norm. Then for \(v\in H^{1}_{0}(\Omega)\), we have

\[a_{1}(u_{n},v;t)=\langle A_{1}(t)u_{n},v\rangle=\langle A_{1}(t)u_{n},P_{u_{n}}v\rangle=a_{1}(u_{n},P_{u_{n}}v;t)\]

since \(A_{1}(t)u_{n}\in T_{u_{n}}\mathcal{M}\) by Lemma 5.5. Moreover,

\[a_{1}(u_{n},P_{u_{n}}v;t)=a_{1}(u,P_{u}v;t)+a_{1}(u,(P_{u_{n}}-P_{u})v;t)+a_{1}(u_{n}-u,P_{u_{n}}v;t).\]

We have \(P_{u_{n}}v\to P_{u}v\) strongly in \(L_{2}(\Omega)\) by Proposition 4.7, and Proposition 5.4 yields \(\limsup_{n}\|P_{u_{n}}v\|_{H^{1}_{0}}<\infty\). Since \(L_{2}(\Omega)\) is dense in \(H^{-1}(\Omega)\) it follows that \(P_{u_{n}}v\to P_{u}v\) weakly in \(H^{1}_{0}(\Omega)\) by a standard argument; see for example [38, Prop. 21.23(g)]. Consequently, \(a_{1}(u_{n},P_{u_{n}}v;t)\to a_{1}(u,P_{u}v;t)\). At the same time, \(a_{1}(u_{n},v;t)\to a_{1}(u,v;t)\), so we have verified Assumption **A4**(a).
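Lemma 5.5 has a transparent discrete counterpart for \(d=2\): a discretization of \(\sum_{\nu}\nabla_{x_{\nu}}\cdot(a_{\nu}\nabla_{x_{\nu}}\,\cdot\,)\) acts on a rank-\(r\) matrix \(X\) as \(AX+XB^{\mathsf{T}}\), which lies exactly in the tangent space at \(X\). A minimal NumPy check of ours (all choices, including the constant-coefficient Laplacians, are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 50, 3
# 1D discrete (Dirichlet) Laplacians, one per variable, up to scaling
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = A.copy()

# a rank-r "function" X and the tangent space projector at X
G, _ = np.linalg.qr(rng.standard_normal((n, r)))
H, _ = np.linalg.qr(rng.standard_normal((n, r)))
X = (G * rng.uniform(1, 2, r)) @ H.T

U, s, Vt = np.linalg.svd(X, full_matrices=False)
PU = U[:, :r] @ U[:, :r].T
PV = Vt[:r, :].T @ Vt[:r, :]
P = lambda Z: PU @ Z + Z @ PV - PU @ Z @ PV

L = A @ X + X @ B.T          # discrete analogue of sum_nu div(a_nu grad u)
print(np.linalg.norm(L - P(L)))   # ~ 0: L lies in the tangent space
```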
For Assumption **A4**(b), it suffices to verify mixed smoothness for \(u\in H^{1}_{0}(\Omega)\cap\mathcal{M}\).

**Lemma 5.6**.: _Let \(\mathcal{M}\) be of the form (4.1), \(u\in H^{1}_{0}(\Omega)\cap\mathcal{M}\), and \(\sigma=\operatorname{dist}_{L_{2}}(u,\overline{\mathcal{M}}^{w}\setminus\mathcal{M})\). Then for \(\nu\neq\mu\), we have \(\|\nabla_{x_{\nu}}\nabla_{x_{\mu}}u\|_{L_{2}(\Omega)}\leq\frac{1}{2\sigma}\|u\|_{H^{1}_{0}(\Omega)}^{2}\)._

Proof.: Let \(u(x)=\sum_{k=1}^{r_{\nu}}\sigma_{k}u_{1,k}(x_{\nu})u_{2,k}(x_{\{1,\dots,d\}\setminus\{\nu\}})\) be a singular value decomposition of \(u\) separating the \(\nu\)-th variable. Then \(\sigma_{k}\geq\sigma\) for \(k=1,\dots,r_{\nu}\). On the one hand, by the triangle inequality, we have

\[\|\nabla_{x_{\nu}}\nabla_{x_{\mu}}u\|_{L_{2}(\Omega)} \leq\sum_{k=1}^{r_{\nu}}\sigma_{k}\|\nabla_{x_{\nu}}\nabla_{x_{\mu}}u_{1,k}\otimes u_{2,k}\|_{L_{2}(\Omega)}\]
\[=\sum_{k=1}^{r_{\nu}}\sigma_{k}\|\nabla_{x_{\nu}}u_{1,k}\|_{L_{2}(\Omega_{\nu})}\|\nabla_{x_{\mu}}u_{2,k}\|_{L_{2}(\times_{\mu\neq\nu}\Omega_{\mu})}.\]

On the other hand, by \(L_{2}\)-orthogonality of the singular vectors, we have

\[\|u\|_{H^{1}_{0}(\Omega)}^{2}=\int_{\Omega}\sum_{\mu=1}^{d}\left|\nabla_{x_{\mu}}u(x)\right|^{2}\,dx=\sum_{k=1}^{r_{\nu}}\sigma_{k}^{2}\left(\|\nabla_{x_{\nu}}u_{1,k}\|_{L_{2}(\Omega_{\nu})}^{2}+\sum_{\mu\neq\nu}\|\nabla_{x_{\mu}}u_{2,k}\|_{L_{2}(\times_{\mu\neq\nu}\Omega_{\mu})}^{2}\right).\]

By Young's inequality and \(\sigma\leq\sigma_{k}\),

\[\sigma_{k}\|\nabla_{x_{\nu}}u_{1,k}\|_{L_{2}(\Omega_{\nu})}\|\nabla_{x_{\mu}}u_{2,k}\|_{L_{2}(\times_{\mu\neq\nu}\Omega_{\mu})}\leq\frac{\sigma_{k}^{2}}{2\sigma}\left(\|\nabla_{x_{\nu}}u_{1,k}\|_{L_{2}(\Omega_{\nu})}^{2}+\|\nabla_{x_{\mu}}u_{2,k}\|_{L_{2}(\times_{\mu\neq\nu}\Omega_{\mu})}^{2}\right)\]

and hence \(\|\nabla_{x_{\nu}}\nabla_{x_{\mu}}u\|_{L_{2}(\Omega)}\leq\frac{1}{2\sigma}\|u\|_{H^{1}_{0}(\Omega)}^{2}\) as asserted.

Assumption **A4**(b) now follows directly by integration by parts.

#### 5.1.3. Assumptions **B1** and **B2**: Spatial discretizations

We now exhibit space discretizations that can be used to achieve convergence to the infinite-dimensional solution as in Theorem 3.2. It turns out to be sufficient that the discretization is compatible with the format (4.1). This is a natural requirement, since otherwise one does not have the requisite product structure for using the low-rank approximation in practice. For short, let us denote \(H^{1}_{0}(\Omega)=\mathcal{V}=(V^{1}\otimes H^{2}\otimes\cdots\otimes H^{d})\cap\cdots\cap(H^{1}\otimes\cdots\otimes H^{d-1}\otimes V^{d})\), where \(V^{\nu}=H^{1}_{0}(\Omega_{\nu})\) and \(H^{\nu}=L_{2}(\Omega_{\nu})\). Then \(V^{1}\otimes\cdots\otimes V^{d}\) is a continuously and densely embedded subspace of \(\mathcal{V}\). As the finite-dimensional subspaces, we choose

\[\mathcal{V}_{h}=V^{1}_{h}\otimes\cdots\otimes V^{d}_{h}\quad\text{satisfying}\quad\|P_{V^{\mu}_{h}}v^{\mu}-v^{\mu}\|_{V^{\mu}}\to 0\quad\text{as}\quad h\to 0\quad\text{for all }v^{\mu}\in V^{\mu}. \tag{5.4}\]

This can be a finite element space, but also any other discretization suitable for \(\Omega_{\mu}\). As a consequence, for \(v\in V^{1}\otimes\cdots\otimes V^{d}\), we have

\[\|v-P_{V^{1}_{h}\otimes\cdots\otimes V^{d}_{h}}v\|_{\mathcal{V}}\leq C\|v-P_{V^{1}_{h}\otimes\cdots\otimes V^{d}_{h}}v\|_{V^{1}\otimes\cdots\otimes V^{d}}\to 0\quad\text{for}\quad h\to 0.\]

Since \(V^{1}\otimes\cdots\otimes V^{d}\) is a dense subspace of \(\mathcal{V}\), the \(\mathcal{V}\)-orthogonal projection onto \(V^{1}_{h}\otimes\cdots\otimes V^{d}_{h}\) satisfies **B1**(a).

For assumption **B1**(b), let \(u\in\mathcal{V}\cap\mathcal{M}\). Then \(u\in V^{1}\otimes\cdots\otimes V^{d}\) by Lemma 5.2. By (4.2), we have that \(P_{V^{1}_{h}\otimes\cdots\otimes V^{d}_{h}}u\in\overline{\mathcal{M}}^{w}\). Hence there exists \(u_{h}\in V^{1}_{h}\otimes\cdots\otimes V^{d}_{h}\cap\mathcal{M}\) such that \(\|u_{h}-P_{V^{1}_{h}\otimes\cdots\otimes V^{d}_{h}}u\|_{\mathcal{V}}\leq\epsilon\) for any \(\epsilon>0\). Thus \(\|u_{h}-u\|_{\mathcal{V}}\to 0\) as \(h\to 0\). Possibly after rescaling, we can thus construct a sequence \((u_{h})\) that converges to \(u\) in \(\mathcal{V}\) as \(h\searrow 0\) with \(\|u_{h}\|_{\mathcal{V}}\leq\|u\|_{\mathcal{V}}\).

Assumption **B2** follows immediately by noting that \(\mathcal{M}\cap\mathcal{V}_{h}\) is of the same form as \(\mathcal{M}\) and Theorem 4.1 can be applied.
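As a simple instance of (5.4), one may take in each variable the \(L_{2}(0,1)\)-orthonormal sine basis, for which the \(L_{2}\)-orthogonal projection onto the first \(m\) modes converges in \(H^{1}_{0}(0,1)\) for every function in \(H^{1}_{0}(0,1)\); tensor products of such spaces then realize \(\mathcal{V}_{h}\). The following sketch (basis choice, quadrature rule, and test function are illustrative assumptions of ours) demonstrates the convergence in one variable:

```python
import numpy as np

n = 2000                                   # quadrature grid for (0, 1)
x = np.linspace(0, 1, n + 2)[1:-1]
h = 1.0 / (n + 1)

def basis(m):
    # L2-orthonormal sine functions phi_j(x) = sqrt(2) sin(j pi x)
    return np.sqrt(2) * np.sin(np.pi * np.outer(np.arange(1, m + 1), x))

u = x * (1 - x) * np.exp(x)                # a function in H^1_0(0, 1)
for m in (2, 4, 8, 16):
    B = basis(m)
    coeff = h * B @ u                      # L2-projection coefficients
    uh = coeff @ B
    err_h1 = np.sqrt(h) * np.linalg.norm(np.gradient(u - uh, x))
    print(m, err_h1)                        # decays: the H^1_0 error -> 0
```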
### Main results

The main results for the model problem are the following specific versions of Theorem 2.2, Theorem 3.2, and Theorem 2.5. They follow directly by applying the results of Section 4 and Section 5.1.

**Theorem 5.7** (Existence and uniqueness of solutions).: _Let \(u_{0}\) have positive \(L_{2}\)-distance from \(\overline{\mathcal{M}}^{w}\setminus\mathcal{M}\). There exist \(T^{*}\in(0,T]\) and \(u\in W(0,T^{*};H^{1}_{0}(\Omega),L_{2}(\Omega))\cap L_{\infty}(0,T^{*};H^{1}_{0}(\Omega))\) such that \(u\) solves Problem 5.1 on the time interval \([0,T^{*}]\), and its continuous representative \(u\in C(0,T^{*};L_{2}(\Omega))\) satisfies \(u(t)\in\mathcal{M}\) for all \(t\in[0,T^{*})\). Here \(T^{*}\) is maximal for the evolution on \(\mathcal{M}\) in the sense that if \(T^{*}<T\), then_

\[\liminf_{t\to T^{*}}\,\inf_{v\in\overline{\mathcal{M}}^{w}\setminus\mathcal{M}}\|u(t)-v\|_{L_{2}(\Omega)}=0.\]

_In either case, \(u\) is the unique solution of Problem 5.1 in \(W(0,T^{*};H^{1}_{0}(\Omega),L_{2}(\Omega))\)._

_In particular, with \(\sigma=\operatorname{dist}_{L_{2}(\Omega)}(u_{0},\overline{\mathcal{M}}^{w}\setminus\mathcal{M})\), there exists a constant \(c>0\) such that \(T^{*}\geq\min(\sigma^{2}/c,T)\)._

_The solution satisfies the following estimates:_

\[\|u\|_{L_{2}(0,T^{*};H^{1}_{0}(\Omega))}^{2} \leq\|u_{0}\|_{L_{2}(\Omega)}^{2}+C_{1}\|f\|_{L_{2}(0,T^{*};L_{2}(\Omega))}^{2},\]
\[\|u^{\prime}\|_{L_{2}(0,T^{*};L_{2}(\Omega))}^{2} \leq C_{2}\left(\|u_{0}\|_{H^{1}_{0}(\Omega)}^{2}+\|f\|_{L_{2}(0,T^{*};L_{2}(\Omega))}^{2}\right),\]
\[\|u\|_{L_{\infty}(0,T^{*};H^{1}_{0}(\Omega))}^{2} \leq C_{3}\left(\|u_{0}\|_{H^{1}_{0}(\Omega)}^{2}+\|f\|_{L_{2}(0,T^{*};L_{2}(\Omega))}^{2}\right),\]

_where \(C_{1}\), \(C_{2}\), and \(C_{3}\) are the constants from [3, Lemma 4.4]._

**Theorem 5.8** (Convergence of spatial discretizations).: _Let \(\mathcal{V}_{h}\) be of the form (5.4). Let \(u_{0,h}\in\mathcal{M}\cap\mathcal{V}_{h}\) define a sequence that converges to \(u_{0}\) in \(H^{1}_{0}(\Omega)\) as \(h\searrow 0\) and let \(u_{0}\) have positive \(L_{2}(\Omega)\)-distance \(\sigma\) to the relative boundary \(\overline{\mathcal{M}}^{w}\setminus\mathcal{M}\). Then there exist a constant \(c>0\) independent of \(\sigma\) and a constant \(h_{0}>0\) such that for all \(h\leq h_{0}\) there is a unique \(u_{h}\in W(0,T^{*};H^{1}_{0}(\Omega),L_{2}(\Omega))\cap L_{\infty}(0,T^{*};H^{1}_{0}(\Omega))\) that solves Problem 3.1 on the time interval \([0,T^{*}]\) whenever \(T^{*}<\sigma^{2}/c\). Furthermore, \(u_{h}\) converges to the unique solution \(u\) of Problem 5.1 in \(W(0,T^{*};H^{1}_{0}(\Omega),L_{2}(\Omega))\cap L_{\infty}(0,T^{*};H^{1}_{0}(\Omega))\) weakly in \(L_{2}(0,T^{*};H^{1}_{0}(\Omega))\) and strongly in \(C(0,T^{*};L_{2}(\Omega))\), while the weak derivatives \(u^{\prime}_{h}\) converge weakly to \(u^{\prime}\) in \(L_{2}(0,T^{*},L_{2}(\Omega))\)._

**Theorem 5.9** (Stability).: _Let \(u,v\in W(0,T^{*};H^{1}_{0}(\Omega),L_{2}(\Omega))\) be two solutions of Problem 5.1 on a time interval \([0,T^{*}]\) corresponding to right-hand sides \(f,g\in L_{2}(0,T;L_{2}(\Omega))\) and initial values \(u_{0},v_{0}\in\mathcal{M}\), respectively._
_Assume that the continuous representatives \(u,v\in C(0,T^{*};L_{2}(\Omega))\) have pointwise positive \(L_{2}(\Omega)\)-distance to \(\overline{\mathcal{M}}^{w}\setminus\mathcal{M}\) of at least \(\sigma\). Then for any \(\varepsilon>0\),_

\[\|u(t)-v(t)\|^{2}_{L_{2}(\Omega)}\leq\left(\|u_{0}-v_{0}\|^{2}_{L_{2}(\Omega)}+\frac{1}{\varepsilon}\int_{0}^{t}\|f(s)-g(s)\|^{2}_{L_{2}(\Omega)}\,ds\right)\exp(\Lambda(t)+\varepsilon t),\]

_where_

\[\Lambda(t)\coloneqq 2\kappa\int_{0}^{t}\|u^{\prime}(s)\|_{L_{2}(\Omega)}+\|v^{\prime}(s)\|_{L_{2}(\Omega)}+\gamma\left(\|u(s)\|^{\eta}_{H^{1}_{0}(\Omega)}+\|v(s)\|^{\eta}_{H^{1}_{0}(\Omega)}\right)\\ +\|f(s)\|_{L_{2}(\Omega)}+\|g(s)\|_{L_{2}(\Omega)}\,ds<\infty\]

_with \(\kappa=\kappa(\sigma)=\frac{\sqrt{d+c^{2}}}{\sigma}\) from Proposition 4.9._

## Appendix A Proofs of Theorem 4.1 and Proposition 4.11

Proof of Theorem 4.1.: Ad (i). We fix a particular \(X_{*}=C_{*}\times_{1}U^{1}_{*}\times_{2}\cdots\times_{d}U^{d}_{*}\). The construction of the submersion follows [29], where this has been done for manifolds of fixed multilinear rank in finite-dimensional tensor spaces. Here we additionally have to take the constraint \(C\in\mathcal{M}_{\mathrm{c}}\) for the coefficient tensor into account. In the following, \(\mathcal{O}\) is an open neighborhood of \(X_{*}\) in \(\ell_{2}(\mathbb{N}^{d})\) that can always be chosen sufficiently small to ensure that all maps are well defined.

Since \(X_{*}\in\mathcal{M}\), the matricizations \(M^{\mu}_{X_{*}}\) admit low-rank decompositions (4.4), which can be written in matrix product form as

\[M^{\mu}_{X_{*}}=U^{\mu}_{*}(V^{\mu}_{*})^{\mathsf{T}}.\]

Here the columns of \(U^{\mu}_{*}\) and \(V^{\mu}_{*}\) are bases of the minimal subspaces \(\mathcal{U}^{\mu}_{*}\subset\ell_{2}(\mathbb{N})\) and \(\mathcal{V}^{\mu}_{*}\subset\ell_{2}(\mathbb{N}^{d-1})\), respectively. For \(X\in\mathcal{M}\) sufficiently close to \(X_{*}\), it will be useful to define a particular basis \(U^{\mu}_{X}\) for the \(\mu\)-th minimal subspace \(\mathcal{U}^{\mu}\), \(\mu=1,\ldots,d\), as a continuous function of \(X\). To this end, we choose

\[U^{\mu}_{X}=M^{\mu}_{X}(V^{\mu}_{*})^{\mathsf{T}}_{+},\]
(A.1)

where \(Y_{+}=[Y^{\mathsf{T}}Y]^{-1}Y^{\mathsf{T}}\) denotes the pseudoinverse of a matrix with full column rank. Then \(U^{\mu}_{X}\) has full column rank for \(X\) close enough to \(X_{*}\), which follows from \(U^{\mu}_{X}\to U^{\mu}_{*}\) for all \(\mu\) as \(X\to X_{*}\) and the lower semicontinuity of the rank. As a result, every \(X\in\mathcal{M}\) in the neighborhood of \(X_{*}\) can be written as

\[X=C_{X}\times_{1}U^{1}_{X}\times_{2}\cdots\times_{d}U^{d}_{X}.\]

Moreover, we can assume that for \(\mu=1,\ldots,d\), the \(r_{\mu}\times r_{\mu}\) matrices \((U^{\mu}_{*})_{+}U^{\mu}_{X}\) are invertible (again, since \(U^{\mu}_{X}\to U^{\mu}_{*}\) for all \(\mu\) as \(X\to X_{*}\)). We then also consider

\[\bar{C}_{X}=X\times_{1}(U^{1}_{*})_{+}\times_{2}\cdots\times_{d}(U^{d}_{*})_{+}=C_{X}\times_{1}(U^{1}_{*})_{+}U^{1}_{X}\times_{2}\cdots\times_{d}(U^{d}_{*})_{+}U^{d}_{X}.\]
(A.2)

Clearly, \(\bar{C}_{X_{*}}=C_{*}\). Noting that by (4.2) the condition \(C_{X}\in\mathcal{M}_{\mathrm{c}}\) in (4.1) is invariant under invertible changes of basis, we arrive at the following local description of \(\mathcal{M}\):

\[\mathcal{M}\cap\mathcal{O}=\{X\in\mathcal{O}\colon\bar{C}_{X}\in\mathcal{M}_{\mathrm{c}},\ \mathrm{rank}(M^{\mu}_{X})=r_{\mu}\ \text{for}\ \mu=1,\ldots,d\}.\]
(A.3)

We next describe the constraints as preimages of smooth maps.
We begin with the constraint \(\bar{C}_{X}\in\mathcal{M}_{\mathrm{c}}\). Since \(\mathcal{M}_{\mathrm{c}}\) is assumed to be an embedded submanifold of \(\mathbb{R}_{*}^{r_{1}\times\cdots\times r_{d}}\), there exists a submersion \(\phi\) from an open neighborhood of \(C_{*}\in\mathcal{M}_{\mathrm{c}}\) to \(\mathbb{R}^{q}\) (here \(q\) is the co-dimension of \(\mathcal{M}_{\mathrm{c}}\)) such that the conditions \(C\in\mathcal{M}_{\mathrm{c}}\) and \(\phi(C)=0\) are equivalent in this neighborhood. Considering

\[g_{0}:\mathcal{O}\to\mathbb{R}^{q},\quad X\mapsto\phi(\bar{C}_{X}),\]

we then locally have \(\bar{C}_{X}\in\mathcal{M}_{\mathrm{c}}\) if and only if \(g_{0}(X)=0\).

We now consider the rank constraints in (A.3), which first will be reformulated. Let \(P_{\mathcal{U}^{\mu}_{*}}=U^{\mu}_{*}(U^{\mu}_{*})_{+}\) denote the orthogonal projections on the \(\mu\)-th minimal subspaces of \(X_{*}\), and let

\[\mathbb{P}_{\mu}=P_{\mathcal{U}^{1}_{*}}\otimes\cdots\otimes P_{\mathcal{U}^{\mu}_{*}}\otimes\mathrm{id}_{\mu+1}\otimes\cdots\otimes\mathrm{id}_{d},\]

with the convention \(\mathbb{P}_{0}=\mathrm{id}\). In the following we consider tensors \(\mathbb{P}_{\mu-1}(X)\), that is, orthogonal projections of \(X\) onto the subspaces \(\mathcal{U}^{1}_{*}\otimes\cdots\otimes\mathcal{U}^{\mu-1}_{*}\otimes\ell_{2}(\mathbb{N}^{d-\mu+1})\). In particular, we claim that for any fixed \(1\leq\nu\leq d\) there exists a neighborhood of \(X_{*}\) in which the condition \(\mathrm{rank}(M^{\mu}_{X})=r_{\mu}\) for \(\mu=1,\ldots,\nu\) is equivalent to \(\mathrm{rank}(M^{\mu}_{\mathbb{P}_{\mu-1}(X)})=r_{\mu}\) for all \(\mu=1,\ldots,\nu\). This is shown by induction over \(\nu\). For \(\nu=1\) the statement is trivial since \(\mathbb{P}_{0}=\mathrm{id}\). In the induction step \(\nu-1\to\nu\), it suffices to show that in some neighborhood of \(X_{*}\), any \(X\) satisfying \(\mathrm{rank}(M^{\mu}_{X})=r_{\mu}\) for \(\mu=1,\ldots,\nu-1\) also satisfies \(\mathrm{rank}(M^{\nu}_{X})=\mathrm{rank}(M^{\nu}_{\mathbb{P}_{\nu-1}(X)})\). Any such \(X\) lies in a subspace \(\mathcal{U}^{1}\otimes\cdots\otimes\mathcal{U}^{\nu-1}\otimes\ell_{2}(\mathbb{N}^{d-\nu+1})\), where \(\mathcal{U}^{\mu}\) are the minimal \(r_{\mu}\)-dimensional subspaces of \(X\). We choose a neighborhood of \(X_{*}\) in which the restrictions of all \(P_{\mathcal{U}^{\mu}_{*}}\) to \(\mathcal{U}^{\mu}\) are necessarily invertible maps between \(\mathcal{U}^{\mu}\) and \(\mathcal{U}^{\mu}_{*}\) (which is equivalent to \((U^{\mu}_{*})_{+}U^{\mu}_{X}\) being invertible). Hence in this neighborhood the projection \(\mathbb{P}_{\nu-1}\) is a tensor product of invertible operators between \(\mathcal{U}^{1}\otimes\cdots\otimes\mathcal{U}^{\nu-1}\otimes\ell_{2}(\mathbb{N}^{d-\nu+1})\) and \(\mathcal{U}^{1}_{*}\otimes\cdots\otimes\mathcal{U}^{\nu-1}_{*}\otimes\ell_{2}(\mathbb{N}^{d-\nu+1})\), which hence leaves all matricization ranks invariant. Thus, for such \(X\), we obtain that \(\mathrm{rank}(M^{\nu}_{X})=\mathrm{rank}(M^{\nu}_{\mathbb{P}_{\nu-1}X})\), which completes the induction.

Applying the above equivalence with \(\nu=d\) allows us to replace the conditions \(\mathrm{rank}(M^{\mu}_{X})=r_{\mu}\) in (A.3) with \(\mathrm{rank}(M^{\mu}_{\mathbb{P}_{\mu-1}(X)})=r_{\mu}\) for \(\mu=1,\ldots,d\). These latter conditions are now handled via Schur complements as follows.
Note that

\[M^{\mu}_{\mathbb{P}_{\mu-1}(X)}\in\ell_{2}(\mathbb{N})\otimes[\mathcal{U}^{1}_{*}\otimes\cdots\otimes\mathcal{U}^{\mu-1}_{*}\otimes\ell_{2}(\mathbb{N}^{d-\mu})].\]

We consider orthogonal decompositions

\[\ell_{2}(\mathbb{N})=\mathcal{U}^{\mu}_{*}\oplus(\mathcal{U}^{\mu}_{*})^{\perp}\]

and

\[\mathcal{U}^{1}_{*}\otimes\cdots\otimes\mathcal{U}^{\mu-1}_{*}\otimes\ell_{2}(\mathbb{N}^{d-\mu})=\mathcal{V}^{\mu}_{*}\oplus\mathcal{W}^{\mu}_{*},\]

which is possible since \(\mathcal{V}^{\mu}_{*}\) is even contained in the smaller subspace \(\mathcal{U}^{1}_{*}\otimes\cdots\otimes\mathcal{U}^{\mu-1}_{*}\otimes\mathcal{U}^{\mu+1}_{*}\otimes\cdots\otimes\mathcal{U}^{d}_{*}\) (which in turn follows from \(X_{*}\in\mathcal{U}^{1}_{*}\otimes\cdots\otimes\mathcal{U}^{d}_{*}\)). Hence in this notation

\[M^{\mu}_{\mathbb{P}_{\mu-1}(X)}\in[\mathcal{U}^{\mu}_{*}\oplus(\mathcal{U}^{\mu}_{*})^{\perp}]\otimes[\mathcal{V}^{\mu}_{*}\oplus\mathcal{W}^{\mu}_{*}].\]

By applying a block decomposition of \(M^{\mu}_{\mathbb{P}_{\mu-1}(X)}\) into the four corresponding parts,

\[Q^{\mu}_{X} =P_{\mathcal{U}^{\mu}_{*}}M^{\mu}_{\mathbb{P}_{\mu-1}(X)}P_{\mathcal{V}^{\mu}_{*}}\in\mathcal{U}^{\mu}_{*}\otimes\mathcal{V}^{\mu}_{*},\]
\[R^{\mu}_{X} =P_{\mathcal{U}^{\mu}_{*}}M^{\mu}_{\mathbb{P}_{\mu-1}(X)}P_{\mathcal{W}^{\mu}_{*}}\in\mathcal{U}^{\mu}_{*}\otimes\mathcal{W}^{\mu}_{*},\]
\[S^{\mu}_{X} =(\operatorname{id}-P_{\mathcal{U}^{\mu}_{*}})M^{\mu}_{\mathbb{P}_{\mu-1}(X)}P_{\mathcal{V}^{\mu}_{*}}\in(\mathcal{U}^{\mu}_{*})^{\perp}\otimes\mathcal{V}^{\mu}_{*},\]
\[T^{\mu}_{X} =(\operatorname{id}-P_{\mathcal{U}^{\mu}_{*}})M^{\mu}_{\mathbb{P}_{\mu-1}(X)}P_{\mathcal{W}^{\mu}_{*}}\in(\mathcal{U}^{\mu}_{*})^{\perp}\otimes\mathcal{W}^{\mu}_{*},\]

we can consider the Schur complement functions

\[g_{\mu}:\mathcal{O}\to(\mathcal{U}^{\mu}_{*})^{\perp}\otimes\mathcal{W}^{\mu}_{*},\quad X\mapsto T^{\mu}_{X}-S^{\mu}_{X}(Q^{\mu}_{X})^{-1}R^{\mu}_{X}.\]
(A.4)

Note that \(Q^{\mu}_{X}\) is indeed invertible (as an \(r_{\mu}\times r_{\mu}\) matrix in \(\mathcal{U}^{\mu}_{*}\otimes\mathcal{V}^{\mu}_{*}\)) for \(X\) close enough to \(X_{*}\), since \(Q^{\mu}_{X_{*}}=P_{\mathcal{U}^{\mu}_{*}}M^{\mu}_{X_{*}}P_{\mathcal{V}^{\mu}_{*}}\) is invertible. As for finite matrices, we then have \(g_{\mu}(X)=0\) if and only if \(\operatorname{rank}(M^{\mu}_{\mathbb{P}_{\mu-1}(X)})=r_{\mu}\). Defining

\[g=(g_{0},g_{1},\dots,g_{d})\colon\mathcal{O}\to\mathbb{R}^{q}\times[(\mathcal{U}^{1}_{*})^{\perp}\otimes\mathcal{W}^{1}_{*}]\times\dots\times[(\mathcal{U}^{d}_{*})^{\perp}\otimes\mathcal{W}^{d}_{*}]\]

we conclude from all the previous considerations that (A.3) can be written as

\[\mathcal{M}\cap\mathcal{O}=g^{-1}(0).\]

We need to show that \(g\) is a submersion in \(X_{*}\), that is, \(g^{\prime}(X_{*})\) is surjective. First note that for \(\mu=1,\dots,d\) we have

\[g^{\prime}_{\mu}(X_{*})[H]=T^{\mu}_{H}=(\operatorname{id}-P_{\mathcal{U}^{\mu}_{*}})M^{\mu}_{\mathbb{P}_{\mu-1}(H)}P_{\mathcal{W}^{\mu}_{*}},\]
(A.5)

that is, \(g^{\prime}_{\mu}(X_{*})\) is the orthogonal projection (of the \(\mu\)-th matricization) onto the subspace \((\mathcal{U}^{\mu}_{*})^{\perp}\otimes\mathcal{W}^{\mu}_{*}\). This follows by applying the product rule to (A.4) and noting that \(R^{\mu}_{X_{*}}=0\) and \(S^{\mu}_{X_{*}}=0\).
When viewed as subspaces of \(\ell_{2}(\mathbb{N}^{d})\), the subspaces \((\mathcal{U}^{\mu}_{*})^{\perp}\otimes\mathcal{W}^{\mu}_{*}\) are mutually orthogonal, since they are contained in the pairwise orthogonal subspaces \(\mathcal{U}^{1}_{*}\otimes\dots\otimes\mathcal{U}^{\mu-1}_{*}\otimes(\mathcal{U}^{\mu}_{*})^{\perp}\otimes\ell_{2}(\mathbb{N}^{d-\mu})\), respectively. Moreover, all of them are orthogonal to the subspace \(\mathcal{U}^{1}_{*}\otimes\dots\otimes\mathcal{U}^{d}_{*}\). Regarding \(g_{0}\), note that

\[g_{0}(C\times_{1}U^{1}_{*}\times_{2}\dots\times_{d}U^{d}_{*})=\phi(C)\]
(A.6)

(again with \(U^{\mu}_{X_{*}}=U^{\mu}_{*}\)), which shows that already the restriction of \(g_{0}\) to \(\mathcal{U}^{1}_{*}\otimes\dots\otimes\mathcal{U}^{d}_{*}\) is a submersion in \(X_{*}\), since \(\phi^{\prime}(C_{*})\) is surjective to \(\mathbb{R}^{q}\). It is now easy to conclude from these facts that \(g^{\prime}(X_{*})\) is altogether surjective.

By the local submersion theorem in Hilbert space, see [37, Thm. 73.C], \(\mathcal{M}\cap\mathcal{O}=g^{-1}(0)\) is a smooth submanifold of \(\mathcal{H}\). The tangent space \(T_{X_{*}}\mathcal{M}\) at \(X_{*}\) is the null space of \(g^{\prime}(X_{*})\). The proof of part (i) is therefore complete.

Ad (ii). The existence of the continuously Frechet-differentiable homeomorphism \(\varphi\) of the asserted form is a consequence of Ljusternik's submersion theorem as stated in [36, Thm. 43.C]. To show that (in a possibly smaller neighborhood around zero) \(\varphi\) is also an immersion, that is, \(\varphi^{\prime}(\xi)\colon T_{X_{*}}\mathcal{M}\to T_{\varphi(\xi)}\mathcal{M}\) is injective and its range splits, it suffices to show that there exists \(c>0\) such that \(\|\varphi^{\prime}(\xi)h\|\geq c\|h\|\) for all \(h\in T_{X_{*}}\mathcal{M}\). This, however, follows immediately from the continuity of \(\varphi^{\prime}\) and \(\varphi^{\prime}(0)h=h\). By definition, \(\varphi\) is therefore a local embedding [37, Def. 73.43].

Ad (iii). Let \(\xi\) be an element of the form (4.5) (but at \(X_{*}=C_{*}\times_{1}U_{*}^{1}\times_{2}\cdots\times_{d}U_{*}^{d}\)). Since \(\check{C}\in T_{C_{*}}\mathcal{M}_{\rm c}\), there exists a curve \(C(t)\) in \(\mathcal{M}_{\rm c}\) such that \(C(0)=C_{*}\) and \(C^{\prime}(0)=\check{C}\). For small enough \(t\),

\[X(t)=C(t)\times_{1}(U_{*}^{1}+t\check{U}^{1})\times_{2}\cdots\times_{d}(U_{*}^{d}+t\check{U}^{d})\]

then defines a curve in \(\mathcal{M}\) because \(U_{*}^{\mu}+t\check{U}^{\mu}\) has full column rank for \(\mu=1,\ldots,d\). Obviously \(X(0)=X_{*}\), and by multilinearity it is easily seen that \(X^{\prime}(0)=\xi\), which shows \(\xi\in T_{X_{*}}\mathcal{M}\).

In order to show that all tangent vectors are of the form (4.5), let \(\xi\in T_{X_{*}}\mathcal{M}\) and a corresponding curve \(X(t)\in\mathcal{M}\) with \(X(0)=X_{*}\) and \(X^{\prime}(0)=\xi\) be given. For small enough \(t\) we represent \(X(t)\) in the particular bases \(U_{X(t)}^{\mu}\) defined in (A.1) as

\[X(t)=C_{X(t)}\times_{1}U_{X(t)}^{1}\times_{2}\cdots\times_{d}U_{X(t)}^{d},\]

where \(C_{X(t)}\) is in \(\mathcal{M}_{\rm c}\). Clearly, the curves \(t\mapsto U_{X(t)}^{\mu}\) are smooth. This implies that for small enough \(t\) the pseudoinverses \(t\mapsto(U_{X(t)}^{\mu})_{+}\) are also smooth functions. It then follows from (A.2), by applying an inverse transformation, that also \(t\mapsto C_{X(t)}\) is a smooth curve, since \(\bar{C}_{X(t)}\) is.
By the product rule we then get that

\[\xi=X^{\prime}(0)=\tilde{C}\times_{1}U_{*}^{1}\times_{2}\cdots\times_{d}U_{*}^{d}+C_{*}\times_{1}\tilde{U}^{1}\times_{2}\cdots\times_{d}U_{*}^{d}+\cdots+C_{*}\times_{1}U_{*}^{1}\times_{2}\cdots\times_{d}\tilde{U}^{d},\]
(A.7)

where \(\tilde{C}\in T_{C_{*}}\mathcal{M}_{\rm c}\) is the derivative of \(t\mapsto C_{X(t)}\) in \(t=0\), and \(\tilde{U}^{\mu}\in(\ell_{2}(\mathbb{N}))^{r_{\mu}}\) is the derivative of \(t\mapsto U_{X(t)}^{\mu}\) in \(t=0\) for \(\mu=1,\ldots,d\). By decomposing every column of \(\tilde{U}^{\mu}\) into the span of \(U_{*}^{\mu}\) and its orthogonal complement, we can write

\[\tilde{U}^{\mu}=U_{*}^{\mu}S_{\mu}+\dot{U}_{*}^{\mu},\]

where \(S_{\mu}\) is some \(r_{\mu}\times r_{\mu}\) matrix and \((U_{*}^{\mu})^{\sf T}\dot{U}_{*}^{\mu}=0\). Expanding the expression (A.7) we then have

\[\xi=K\times_{1}U_{*}^{1}\times_{2}\cdots\times_{d}U_{*}^{d}+C_{*}\times_{1}\dot{U}_{*}^{1}\times_{2}\cdots\times_{d}U_{*}^{d}+\cdots+C_{*}\times_{1}U_{*}^{1}\times_{2}\cdots\times_{d}\dot{U}_{*}^{d}\]

where

\[K=\tilde{C}+C_{*}\times_{1}S_{1}\times_{2}\operatorname{id}\times_{3}\cdots\times_{d}\operatorname{id}+\cdots+C_{*}\times_{1}\operatorname{id}\times_{2}\cdots\times_{d}S_{d}.\]

It remains to show that \(K\in T_{C_{*}}\mathcal{M}_{\rm c}\) to conclude that \(\xi\) is of the asserted form (4.5). Since \(T_{C_{*}}\mathcal{M}_{\rm c}\) is a linear space it suffices to show this for every term. For \(\tilde{C}\) there is nothing to show. The sum of the remaining terms equals the derivative of the curve

\[C(t)=C_{*}\times_{1}(\operatorname{id}+tS_{1})\times_{2}\cdots\times_{d}(\operatorname{id}+tS_{d})\]

at \(t=0\), which for small enough \(t\) lies in \(\mathcal{M}_{\rm c}\) by the invariance condition (4.2). Hence \(C^{\prime}(0)\in T_{C_{*}}\mathcal{M}_{\rm c}\).

Proof of Proposition 4.11.: Let \(X^{\{1,\ldots,\mu\}}\in\mathbb{R}^{N_{1}\cdots N_{\mu}\times N_{\mu+1}\cdots N_{d}}\) be a matricization of \(X\). We can decompose

\[X^{\{1,\ldots,\mu\}}=(U_{1}\otimes\operatorname{id}_{N_{2}\cdots N_{\mu}})\cdots(U_{\mu-1}\otimes\operatorname{id}_{N_{\mu}})U_{\mu}\Sigma_{\mu}V_{\mu+1}^{\sf T}\cdots(\operatorname{id}_{N_{\mu+1}\cdots N_{d-1}}\otimes V_{d}^{\sf T}),\]

with \(U_{\nu}^{\sf T}U_{\nu}=\operatorname{id}_{k_{\nu}}\) and \(V_{\nu}^{\sf T}V_{\nu}=\operatorname{id}_{k_{\nu}}\). Furthermore, we define the spaces \(\mathcal{U}_{\{1,\ldots,\mu\}}\) and \(\mathcal{V}_{\{\mu+1,\ldots,d\}}\) via their respective orthonormal bases

\[U_{\{1,\ldots,\mu\}}=(U_{1}\otimes\operatorname{id}_{N_{2}\cdots N_{\mu}})\cdots(U_{\mu-1}\otimes\operatorname{id}_{N_{\mu}})U_{\mu}\]

and

\[V_{\{\mu+1,\ldots,d\}}^{\sf T}=V_{\mu+1}^{\sf T}\cdots(\operatorname{id}_{N_{\mu+1}\cdots N_{d-1}}\otimes V_{d}^{\sf T}).\]

The projection \(P_{X}\) can be decomposed as

\[P_{X}=P_{1}^{X}+\ldots+P_{d}^{X},\]

where

\[P_{\mu}^{X}=(P_{\mathcal{U}_{\{1,\ldots,\mu-1\}}}\otimes\operatorname{id}_{N_{\mu}}-P_{\mathcal{U}_{\{1,\ldots,\mu\}}})\otimes P_{\mathcal{V}_{\{\mu+1,\ldots,d\}}}\]

for \(\mu=1,\ldots,d-1\) and

\[P_{d}^{X}=P_{\mathcal{U}_{\{1,\ldots,d-1\}}}\otimes\operatorname{id}_{N_{d}};\]

see, e.g., [25] or [34, Section 9.3.4]. Let \(\tilde{\mathcal{U}}_{\{1,\ldots,\mu\}}\) and \(\tilde{\mathcal{V}}_{\{\mu+1,\ldots,d\}}\) be the analogous spaces for \(Y\).
Then by (4.13),

\[\|P_{\mu}^{X}-P_{\mu}^{Y}\| =\|(P_{\mathcal{U}_{\{1,\ldots,\mu-1\}}}\otimes\operatorname{id}_{N_{\mu}}-P_{\mathcal{U}_{\{1,\ldots,\mu\}}})\otimes P_{\mathcal{V}_{\{\mu+1,\ldots,d\}}}\]
\[\quad-(P_{\tilde{\mathcal{U}}_{\{1,\ldots,\mu-1\}}}\otimes\operatorname{id}_{N_{\mu}}-P_{\tilde{\mathcal{U}}_{\{1,\ldots,\mu\}}})\otimes P_{\tilde{\mathcal{V}}_{\{\mu+1,\ldots,d\}}}\|\]
\[\leq\|(P_{\mathcal{U}_{\{1,\ldots,\mu-1\}}}-P_{\tilde{\mathcal{U}}_{\{1,\ldots,\mu-1\}}})\otimes\operatorname{id}_{N_{\mu}}\otimes P_{\mathcal{V}_{\{\mu+1,\ldots,d\}}}\|\]
\[\quad+\|P_{\tilde{\mathcal{U}}_{\{1,\ldots,\mu-1\}}}\otimes\operatorname{id}_{N_{\mu}}\otimes(P_{\mathcal{V}_{\{\mu+1,\ldots,d\}}}-P_{\tilde{\mathcal{V}}_{\{\mu+1,\ldots,d\}}})\|\]
\[\quad+\|(P_{\mathcal{U}_{\{1,\ldots,\mu\}}}-P_{\tilde{\mathcal{U}}_{\{1,\ldots,\mu\}}})\otimes P_{\mathcal{V}_{\{\mu+1,\ldots,d\}}}\|\]
\[\quad+\|P_{\tilde{\mathcal{U}}_{\{1,\ldots,\mu\}}}\otimes(P_{\mathcal{V}_{\{\mu+1,\ldots,d\}}}-P_{\tilde{\mathcal{V}}_{\{\mu+1,\ldots,d\}}})\|\]
\[\leq\frac{4}{\sigma}\|X-Y\|\]

holds and the first desired inequality readily follows.

For the second inequality, we use the decomposition of the identity matrix

\[\operatorname{id}=P_{\mathcal{U}_{\{1,\ldots,d-1\}}}\otimes\operatorname{id}_{N_{d}}+(P_{\mathcal{U}_{\{1,\ldots,d-2\}}}\otimes\operatorname{id}_{N_{d-1}}-P_{\mathcal{U}_{\{1,\ldots,d-1\}}})\otimes\operatorname{id}_{N_{d}}\\ +\ldots+(\operatorname{id}_{N_{1}}-P_{\mathcal{U}_{\{1\}}})\otimes\operatorname{id}_{N_{2}\ldots N_{d}}\]

into orthogonal projections onto mutually orthogonal spaces. Then

\[(\operatorname{id}-P_{X})(X-Y) =(P_{\mathcal{U}_{\{1,\ldots,d-2\}}}\otimes\operatorname{id}_{N_{d-1}}-P_{\mathcal{U}_{\{1,\ldots,d-1\}}})\otimes(\operatorname{id}_{N_{d}}-P_{\mathcal{V}_{\{d\}}})(X-Y)\]
\[\quad+\ldots\]
\[\quad+(\operatorname{id}_{N_{1}}-P_{\mathcal{U}_{\{1\}}})\otimes(\operatorname{id}_{N_{2}\ldots N_{d}}-P_{\mathcal{V}_{\{2,\ldots,d\}}})(X-Y)\]
\[=(P_{\mathcal{U}_{\{1,\ldots,d-2\}}}\otimes\operatorname{id}_{N_{d-1}}-P_{\mathcal{U}_{\{1,\ldots,d-1\}}})\otimes(P_{\tilde{\mathcal{V}}_{\{d\}}}-P_{\mathcal{V}_{\{d\}}})(X-Y)\]
\[\quad+\ldots\]
\[\quad+(\operatorname{id}_{N_{1}}-P_{\mathcal{U}_{\{1\}}})\otimes(P_{\tilde{\mathcal{V}}_{\{2,\ldots,d\}}}-P_{\mathcal{V}_{\{2,\ldots,d\}}})(X-Y)\]

holds. Note that the operators map onto orthogonal subspaces. Hence, we get the desired estimate

\[\|(\operatorname{id}-P_{X})(X-Y)\|\leq\frac{\sqrt{d-1}}{\sigma}\|X-Y\|^{2}.\]

By continuity of the projection and taking limits, the second inequality also holds for \(Y\in\overline{\mathcal{M}}_{\mathbf{k}}\).

### Acknowledgements

M.B. acknowledges funding by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummern 442047500, 501389786. The work of H.E. was funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 501389786. The work of A.U. was supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 506561557.
2309.11366
Single-Exponential FPT Algorithms for Enumerating Secluded $\mathcal{F}$-Free Subgraphs and Deleting to Scattered Graph Classes
The celebrated notion of important separators bounds the number of small $(S,T)$-separators in a graph which are 'farthest from $S$' in a technical sense. In this paper, we introduce a generalization of this powerful algorithmic primitive that is phrased in terms of $k$-secluded vertex sets: sets with an open neighborhood of size at most $k$. In this terminology, the bound on important separators says that there are at most $4^k$ maximal $k$-secluded connected vertex sets $C$ containing $S$ but disjoint from $T$. We generalize this statement significantly: even when we demand that $G[C]$ avoids a finite set $\mathcal{F}$ of forbidden induced subgraphs, the number of such maximal subgraphs is $2^{O(k)}$ and they can be enumerated efficiently. This allows us to make significant improvements for two problems from the literature. Our first application concerns the 'Connected $k$-Secluded $\mathcal{F}$-free subgraph' problem, where $\mathcal{F}$ is a finite set of forbidden induced subgraphs. Given a graph in which each vertex has a positive integer weight, the problem asks to find a maximum-weight connected $k$-secluded vertex set $C \subseteq V(G)$ such that $G[C]$ does not contain an induced subgraph isomorphic to any $F \in \mathcal{F}$. The parameterization by $k$ is known to be solvable in triple-exponential time via the technique of recursive understanding, which we improve to single-exponential. Our second application concerns the deletion problem to scattered graph classes. Here, the task is to find a vertex set of size at most $k$ whose removal yields a graph whose each connected component belongs to one of the prescribed graph classes $\Pi_1, \ldots, \Pi_d$. We obtain a single-exponential algorithm whenever each class $\Pi_i$ is characterized by a finite number of forbidden induced subgraphs. This generalizes and improves upon earlier results in the literature.
Bart M. P. Jansen, Jari J. H. de Kroon, Michał Włodarczyk
2023-09-20T14:49:56Z
http://arxiv.org/abs/2309.11366v1
# Single-Exponential FPT Algorithms for Enumerating Secluded \(\mathcal{F}\)-Free Subgraphs and Deleting to Scattered Graph Classes

###### Abstract

The celebrated notion of important separators bounds the number of small \((S,T)\)-separators in a graph which are 'farthest from \(S\)' in a technical sense. In this paper, we introduce a generalization of this powerful algorithmic primitive, tailored to undirected graphs, that is phrased in terms of \(k\)_-secluded_ vertex sets: sets with an open neighborhood of size at most \(k\). In this terminology, the bound on important separators says that there are at most \(4^{k}\) maximal \(k\)-secluded connected vertex sets \(C\) containing \(S\) but disjoint from \(T\). We generalize this statement significantly: even when we demand that \(G[C]\) avoids a finite set \(\mathcal{F}\) of forbidden induced subgraphs, the number of such maximal subgraphs is \(2^{\mathcal{O}(k)}\) and they can be enumerated efficiently. This enumeration algorithm allows us to make significant improvements for two problems from the literature. Our first application concerns the Connected \(k\)-Secluded \(\mathcal{F}\)-free subgraph problem, where \(\mathcal{F}\) is a finite set of forbidden induced subgraphs. Given a graph in which each vertex has a positive integer weight, the problem asks to find a maximum-weight connected \(k\)-secluded vertex set \(C\subseteq V(G)\) such that \(G[C]\) does not contain an induced subgraph isomorphic to any \(F\in\mathcal{F}\). The parameterization by \(k\) is known to be solvable in triple-exponential time via the technique of recursive understanding, which we improve to single-exponential. Our second application concerns the deletion problem to _scattered graph classes_. A scattered graph class is defined by demanding that every connected component is contained in at least one of the prescribed graph classes \(\Pi_{1},\ldots,\Pi_{d}\). The deletion problem to a scattered graph class is to find a vertex set of size at most \(k\) whose removal yields a graph from the class. We obtain a single-exponential algorithm whenever each class \(\Pi_{i}\) is characterized by a finite number of forbidden induced subgraphs. This generalizes and improves upon earlier results in the literature.

2012 ACM Subject Classification: Mathematics of computing → Graph algorithms; Theory of computation → Graph algorithms analysis; Theory of computation → Parameterized complexity and exact algorithms

Keywords and phrases: fixed-parameter tractability, important separators, secluded subgraphs

Funding: This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 803421, ReduceSearch).

## 1 Introduction

Graph separations have played a central role in algorithmics since the discovery of mincut/max-flow duality and the polynomial-time algorithm to compute a maximum flow [15]. Nowadays, more complex separation properties are crucial in the study of parameterized complexity, where the goal is to design algorithms for NP-hard problems whose running time can be bounded as \(f(k)\cdot n^{\mathcal{O}(1)}\) for some function \(f\) that depends only on the _parameter_ \(k\) of the input.
There are numerous graph problems which either explicitly involve finding separations of a certain kind (such as Multiway Cut [32], Multicut [4, 35], \(k\)-Way Cut [24], and Minimum Bisection [11]) or in which separation techniques turn out to be instrumental for an efficient solution (such as Directed Feedback Vertex Set [7] and Almost 2-SAT [38]). The field of parameterized complexity has developed a robust toolbox of techniques based on graph separators, e.g., treewidth reduction [34], important separators [33], shadow removal [35], discrete relaxations [12, 18, 19, 20], protrusion replacement [36], randomized contractions and recursive understanding [8, 10, 30], and flow augmentation [25, 26]. These powerful techniques allowed a large variety of graph separation problems to be classified as fixed-parameter tractable. However, this power comes at a cost. The running times for many applications of these techniques are superexponential: of the form \(2^{p(k)}\cdot n^{\mathcal{O}(1)}\) for a high-degree polynomial \(p\), double-exponential, or even worse. Discrete relaxations form a notable exception, which we discuss in Section 5.

The new algorithmic primitive we develop can be seen as an extension of important separators [33] [9, §8]. The study of important separators was pioneered by Marx [32, 33] and refined in follow-up work by several authors [6, 28], which was recognized by the EATCS-IPEC Nerode Prize 2020 [3]. The technique is used to bound the number of extremal \((S,T)\)-separators in an \(n\)-vertex graph \(G\) with vertex sets \(S\) and \(T\). The main idea is that, even though the number of distinct inclusion-minimal \((S,T)\)-separators (which are vertex sets potentially intersecting \(S\cup T\)) of size at most \(k\) can be as large as \(n^{\Omega(k)}\), the number of _important_ separators, which leave a maximal vertex set reachable from \(S\), is bounded by \(4^{k}\). For Multiway Cut, a pushing lemma [32, Lem. 6] shows that there is always an optimal solution that contains an important separator, which leads to an algorithm solving the problem in time \(2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\). Important separators also form a key ingredient for solving many other problems such as Multicut [4, 35] and Directed Feedback Vertex Set [7].

For our purposes, it will be convenient to view the bound on the number of important separators through the lens of _secluded subgraphs_. A vertex set \(S\subseteq V(G)\) or induced subgraph \(G[S]\) of an undirected graph \(G\) is said to be \(k\)-secluded if \(|N_{G}(S)|\leq k\), that is, the number of vertices outside \(S\) which are adjacent to a vertex of \(S\) is bounded by \(k\). A vertex set \(S\) in a graph \(G\) is called _seclusion-maximal_ with respect to a certain property \(\Pi\) if \(S\) satisfies \(\Pi\) and for all sets \(S^{\prime}\supsetneq S\) that satisfy \(\Pi\) we have \(|N_{G}(S^{\prime})|>|N_{G}(S)|\). Hence a seclusion-maximal set with property \(\Pi\) is inclusion-maximal among all subsets with the same size neighborhood. Consequently, the number of inclusion-maximal \(k\)-secluded sets satisfying \(\Pi\) is at most the number of seclusion-maximal \(k\)-secluded sets with that property.
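To make these definitions concrete, here is a minimal Python sketch (our own illustration, not taken from the paper; `adj` encodes a graph as a dictionary mapping each vertex to its set of neighbors, and the helper names are hypothetical) that computes open neighborhoods and tests \(k\)-secludedness:

```python
from itertools import chain

def open_neighborhood(adj, S):
    """N_G(S): all vertices outside S with a neighbor in S."""
    S = set(S)
    return set(chain.from_iterable(adj[v] for v in S)) - S

def is_k_secluded(adj, S, k):
    """A vertex set S is k-secluded if |N_G(S)| <= k."""
    return len(open_neighborhood(adj, S)) <= k

# On the path 1-2-3: N({1}) = {2}, and {1,2} is 1-secluded since N({1,2}) = {3}.
adj = {1: {2}, 2: {1, 3}, 3: {2}}
assert open_neighborhood(adj, {1}) == {2}
assert is_k_secluded(adj, {1, 2}, 1)
```

Seclusion-maximality with respect to a property \(\Pi\) would additionally require that every proper superset satisfying \(\Pi\) has a strictly larger open neighborhood.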
Using the terminology of seclusion-maximal subgraphs, the bound on the number of important \((S,T)\)-separators of size at most \(k\) in a graph \(G\) is equivalent to the following statement: in the graph \(G^{\prime}\) obtained from \(G\) by inserting a new source \(r\) adjacent to \(S\), the number of _seclusion-maximal_ \(k\)-secluded connected subgraphs \(C\) containing \(r\) but no vertex of \(T\) is bounded by \(4^{k}\). The neighborhoods of such subgraphs \(C\) correspond exactly to the important \((S,T)\)-separators in \(G\). While a number of previously studied cut problems [29, 34] place further restrictions on the vertex set that forms the separator (for example, requiring it to induce a connected graph or independent set), our generalization instead targets the structure of the \(k\)-secluded connected subgraph \(C\). We will show that, for any fixed finite family \(\mathcal{F}\) of graphs, the number of \(k\)-secluded connected subgraphs \(C\) as above which are seclusion-maximal with respect to satisfying the additional requirement that \(G[C]\) contains no induced subgraph isomorphic to a member of \(\mathcal{F}\) is still bounded by \(2^{\mathcal{O}(k)}\). Observe that the case \(\mathcal{F}=\emptyset\) corresponds to the original setting of important separators. Note that a priori, it is not even clear that the number of seclusion-maximal graphs of this form can be bounded by any function \(f(k)\), let alone a single-exponential one.

### Our contribution

Having introduced the background of secluded subgraphs, we continue by stating our result exactly. This will be followed by a discussion of its applications. For a finite set \(\mathcal{F}\) of graphs we define \(||\mathcal{F}||:=\max_{F\in\mathcal{F}}|V(F)|\), the maximum order of any graph in \(\mathcal{F}\). We say that a graph is \(\mathcal{F}\)-free if it does not contain an _induced_ subgraph isomorphic to a graph in \(\mathcal{F}\). Our generalization of important separators is captured by the following theorem, in which we use \(\mathcal{O}_{\mathcal{F}}(\ldots)\) to indicate that the hidden constant depends on \(\mathcal{F}\).

**Theorem 2**. Let \(\mathcal{F}\) be a finite set of graphs. For any \(n\)-vertex graph \(G\), non-empty vertex set \(S\subseteq V(G)\), potentially empty \(T\subseteq V(G)\setminus S\), and integer \(k\), the number of \(k\)-secluded induced subgraphs \(G[C]\) which are seclusion-maximal with respect to being connected, \(\mathcal{F}\)-free, and satisfying \(S\subseteq C\subseteq V(G)\setminus T\), is bounded by \(2^{\mathcal{O}_{\mathcal{F}}(k)}\). A superset of size \(2^{\mathcal{O}_{\mathcal{F}}(k)}\) of these subgraphs can be enumerated in time \(2^{\mathcal{O}_{\mathcal{F}}(k)}\cdot n^{||\mathcal{F}||+\mathcal{O}(1)}\) and polynomial space.

The single-exponential bound given by the theorem is best-possible in several ways. Existing lower bounds on the number of important separators [9, Fig. 8.5] imply that even when \(\mathcal{F}=\emptyset\) the bound cannot be improved to \(2^{o(k)}\). The term \(n^{||\mathcal{F}||}\) in the running time is unlikely to be avoidable, since even testing whether a single graph is \(\mathcal{F}\)-free is equivalent to Induced Subgraph Isomorphism and cannot be done in time \(n^{o(||\mathcal{F}||)}\) [9, Thm. 14.21] assuming the Exponential Time Hypothesis (ETH), due to lower bounds for \(k\)-Clique. The polynomial space bound applies to the internal space usage of the algorithm, as the output size may be exponential in \(k\). More precisely, we consider polynomial-space algorithms equipped with a command that outputs an element, and we require that for each element in the enumerated set, this command is called at least once. The algorithm could also enumerate just the set in question (rather than its superset) by postprocessing the output and comparing each pair of enumerated subgraphs; however, storing the entire output requires exponential space.
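The \(n^{||\mathcal{F}||}\) factor in the running time corresponds to brute-force testing of \(\mathcal{F}\)-freeness. The following Python fragment (again a sketch of ours under the adjacency-dictionary convention from above, not the authors' implementation) spells out this test:

```python
from itertools import combinations, permutations

def has_induced_copy(adj, vertices, F_adj):
    """Brute-force test for an induced copy of F among `vertices`:
    try all |V(F)|-subsets and all orderings, i.e. n^{|V(F)|} * O_F(1) maps."""
    F_nodes = list(F_adj)
    for subset in combinations(vertices, len(F_nodes)):
        for image in permutations(subset):
            phi = dict(zip(F_nodes, image))
            # An induced copy must preserve both adjacency and non-adjacency.
            if all((phi[u] in adj[phi[v]]) == (u in F_adj[v])
                   for u, v in combinations(F_nodes, 2)):
                return True
    return False

def is_F_free(adj, C, family):
    """G[C] is F-free iff no member of the family has an induced copy."""
    return not any(has_induced_copy(adj, list(C), F) for F in family)

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
assert not is_F_free(adj, {1, 2, 3}, [triangle])  # {1,2,3} induces a triangle
assert is_F_free(adj, {2, 3, 4}, [triangle])      # 2 and 4 are non-adjacent
```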
By executing the enumeration algorithm for every singleton set \(S\) of the form \(\{v\}\), \(v\in V(G)\), and \(T=\emptyset\), we immediately obtain the following.

**Corollary 3**. Let \(\mathcal{F}\) be a finite set of graphs. For any \(n\)-vertex graph \(G\) and integer \(k\), the number of \(k\)-secluded induced subgraphs \(G[C]\) which are seclusion-maximal with respect to being connected and \(\mathcal{F}\)-free is \(2^{\mathcal{O}_{\mathcal{F}}(k)}\cdot n\). A superset of size \(2^{\mathcal{O}_{\mathcal{F}}(k)}\cdot n\) of these subgraphs can be enumerated in time \(2^{\mathcal{O}_{\mathcal{F}}(k)}\cdot n^{||\mathcal{F}||+\mathcal{O}(1)}\) and polynomial space.

Note that we require that the set \(\mathcal{F}\) of forbidden induced subgraphs is finite. This is necessary in order to obtain a bound independent of \(n\) in Theorem 2. For example, the number of seclusion-maximal (\(k=1\))-secluded connected subgraphs \(C\) containing a prescribed vertex \(r\) for which \(C\) induces an acyclic graph is already as large as \(n-1\) in a graph consisting of a single cycle, since each way of omitting a vertex other than \(r\) gives such a subgraph. For this case, the forbidden induced subgraph characterization \(\mathcal{F}\) consists of all cycles. Extending this example to a flower structure of \(k\) cycles of length \(n/k\) pairwise intersecting only in \(r\) shows that the number of seclusion-maximal \(k\)-secluded \(\mathcal{F}\)-free connected subgraphs containing \(r\) is \(\Omega(n^{k}/k^{k})\) and cannot be bounded by \(f(k)\cdot n^{\mathcal{O}(1)}\) for any function \(f\).

We give two applications of Theorem 2 to improve the running time of existing super-exponential (or even triple-exponential) parameterized algorithms to single-exponential, which is optimal under ETH. For each application, we start by presenting some context.

### Application I: Optimization over connected \(k\)-secluded \(\mathcal{F}\)-free subgraphs

The computation of secluded versions of graph-theoretic objects such as paths [2, 5, 31], trees [13], Steiner trees [14], or feedback vertex sets [1] has attracted significant attention over recent years. This task becomes hard already for detecting \(k\)-secluded disconnected sets satisfying very simple properties. In particular, detecting a \(k\)-secluded independent set of size \(s\) is W[1]-hard when parameterized by \(k+s\) [1]. Golovach, Heggernes, Lima, and Montealegre [17] therefore suggested to focus on _connected_ \(k\)-secluded subgraphs, and studied the problem of finding one of maximum total weight that belongs to a graph class \(\mathcal{H}\). Concretely, they studied the Connected \(k\)-secluded \(\mathcal{F}\)-free subgraph problem for a finite family \(\mathcal{F}\) of forbidden induced subgraphs: given an undirected graph \(G\) in which each vertex \(v\) has a positive integer weight \(w(v)\), and an integer \(k\), the problem is to find a maximum-weight connected \(k\)-secluded vertex set \(C\) for which \(G[C]\) is \(\mathcal{F}\)-free.
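Combining the two sketches above, a candidate solution \(C\) to this problem can be validated as follows (a hedged sketch of ours; `open_neighborhood`, `is_k_secluded`, and `is_F_free` are the hypothetical helpers introduced earlier):

```python
def solution_weight(adj, C, k, family, w):
    """Return the total weight of C if it is connected, k-secluded,
    and F-free; otherwise return None."""
    C = set(C)
    if not C:
        return None
    # Check connectivity of G[C] by a depth-first search restricted to C.
    start = next(iter(C))
    seen, stack = {start}, [start]
    while stack:
        for u in adj[stack.pop()] & C:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    if seen != C or not is_k_secluded(adj, C, k) or not is_F_free(adj, C, family):
        return None
    return sum(w[v] for v in C)
```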
Golovach, Heggernes, Lima, and Montealegre [17] presented an algorithm based on recursive understanding to solve the problem in time \(2^{2^{2^{\mathcal{O}_{\mathcal{F}}(k\log k)}}}\cdot n^{\mathcal{O}_{\mathcal{F}}(1)}\). We improve the dependency on \(k\) to single-exponential.

**Theorem 4**. For each fixed finite family \(\mathcal{F}\), Connected \(k\)-secluded \(\mathcal{F}\)-free subgraph can be solved in time \(2^{\mathcal{O}_{\mathcal{F}}(k)}\cdot n^{\|\mathcal{F}\|+\mathcal{O}(1)}\) and polynomial space.

This result follows directly from Corollary 3, since a maximum-weight \(k\)-secluded \(\mathcal{F}\)-free subgraph must be seclusion-maximal. Hence it suffices to check for each enumerated subgraph whether it is \(\mathcal{F}\)-free, and remember the heaviest one for which this is the case. The parameter dependence of our algorithm for Connected \(k\)-secluded \(\mathcal{F}\)-free subgraph is optimal under ETH. This follows from an easy reduction from Maximum Independent Set, which cannot be solved in time \(2^{o(n)}\) under ETH [9, Thm. 14.6]. Finding a maximum independent set in an \(n\)-vertex graph \(G\) is equivalent to finding a maximum-weight triangle-free connected induced (\(k=n\))-secluded subgraph in the graph \(G^{\prime}\) that is obtained from \(G\) by inserting a universal vertex of weight \(n\) and setting the weights of all other vertices to \(1\). Consequently, an algorithm with running time \(2^{o(k)}\cdot n^{\mathcal{O}(1)}\) for Connected \(k\)-secluded triangle-free induced subgraph would violate ETH, and our parameter dependence is already optimal for \(\mathcal{F}=\{K_{3}\}\).

### Application II: Deletion to scattered graph classes

When there are several distinct graph classes (e.g., split graphs and claw-free graphs) on which a problem of interest (e.g., Vertex Cover) becomes tractable, it becomes relevant to compute a minimum vertex set whose removal ensures that each resulting component belongs to one such tractable class. This can lead to fixed-parameter tractable algorithms for solving the original problem on inputs which are _close_ to such so-called _islands of tractability_ [16]. The corresponding optimization problem has been coined the deletion problem to _scattered_ graph classes [21, 23]. Jacob, Majumdar, and Raman [22] (later joined by de Kroon for the journal version [21]) consider the \((\Pi_{1},\ldots,\Pi_{d})\)-deletion problem: given hereditary graph classes \(\Pi_{1},\ldots,\Pi_{d}\), find a set \(X\subseteq V(G)\) of at most \(k\) vertices such that each connected component of \(G-X\) belongs to \(\Pi_{i}\) for some \(i\in[d]\). Here \(d\) is seen as a constant. When the set of forbidden induced subgraphs \(\mathcal{F}_{i}\) of \(\Pi_{i}\) is finite for each \(i\in[d]\), they show [21, Lem. 12] that the problem is solvable in time \(2^{q(k)+1}\cdot n^{\mathcal{O}_{\Pi}(1)}\), where \(q(k)=4k^{10(pd)^{2}+4}+1\). Here \(p\) is the maximum number of vertices of any forbidden induced subgraph. Using Theorem 2 as a black box, we obtain a single-exponential algorithm for this problem.

**Theorem 5**. \((\Pi_{1},\ldots,\Pi_{d})\)-deletion can be solved in time \(2^{\mathcal{O}_{\Pi}(k)}\cdot n^{\mathcal{O}_{\Pi}(1)}\) and polynomial space when each graph class \(\Pi_{i}\) is characterized by a finite set \(\mathcal{F}_{i}\) of (not necessarily connected) forbidden induced subgraphs.

The main idea behind the algorithm is the following.
For an arbitrary vertex \(v\), either it belongs to the solution, or we may assume that in the graph that results by removing the solution, the vertex \(v\) belongs to a connected component that forms a seclusion-maximal connected \(k\)-secluded \(\mathcal{F}_{i}\)-free induced subgraph of \(G\) for some \(i\in[d]\). Branching on each of the \(2^{\mathcal{O}_{\Pi}(k)}\) options gives the desired running time by exploiting the fact that in most recursive calls, the parameter decreases by more than a constant (cf. [9, Thm. 8.19]). Prior to our work, single-exponential algorithms were only known for a handful of ad-hoc cases where \(d=2\), such as deleting to a graph in which each component is a tree or a clique [21], or when one of the sets of forbidden induced subgraphs \(\mathcal{F}_{i}\) contains a path. Similarly to our first application, the resulting algorithm for \((\Pi_{1},\ldots,\Pi_{d})\)-deletion is ETH-tight: the problem is a strict generalization of \(k\)-Vertex Cover, which is known not to admit an algorithm with running time \(2^{o(k)}\cdot n^{\mathcal{O}(1)}\) unless ETH fails.

### Techniques

The proof of Theorem 2 is based on a bounded-depth search tree algorithm with a nontrivial progress measure. By adding vertices to \(S\) or \(T\) in branching steps of the enumeration algorithm, the sets grow and the size of a minimum \((S,T)\)-separator increases accordingly. The size of a minimum \((S,T)\)-separator disjoint from \(S\) is an important progress measure for the algorithm: if it ever exceeds \(k\), there can be no \(k\)-secluded set containing all of \(S\) and none of \(T\), and therefore the enumeration is finished. The branching steps are informed by the farthest minimum \((S,T)\)-separator (see Lemma 9), similarly as the enumeration algorithm for important separators, but are significantly more involved because we have to handle the forbidden induced subgraphs. A distinctive feature of our algorithm is that the decision made by branching can be to add certain vertices to the set \(T\), while the important-separator enumeration only branches by enriching \(S\). A key step is to use submodularity to infer that a certain vertex set is contained in _all_ seclusion-maximal secluded subgraphs under consideration when other branching steps are inapplicable.

As an illustrative example consider the case \(\mathcal{F}=\{K_{3}\}\), that is, we want to enumerate seclusion-maximal vertex sets \(C\subseteq V(G)\setminus T\), \(C\supseteq S\), which induce connected triangle-free subgraphs with at most \(k\) neighbors. Let \(\lambda^{\mathrm{L}}(S,T)\) denote the size of a minimum vertex set disjoint from \(S\) that separates \(T\) from \(S\)--we will refer to such separators as _left-restricted_. Then \(\lambda^{\mathrm{L}}(S,T)\) corresponds to the minimum possible size of \(N(C)\). Similarly to the enumeration algorithm for important separators, we keep track of two measures: (M1) the value of \(k\), and (M2) the gap between \(k\) and \(\lambda^{\mathrm{L}}(S,T)\). We combine them into a single progress measure which is bounded by \(2k\) and decreases during branching. The first branching scenario occurs when there is some triangle in the graph \(G\) which intersects or is adjacent to \(S\); then we guess which of its vertices should belong to \(N(C)\), remove it from the graph, and decrease \(k\) by one. Otherwise, let \(\mathcal{U}=\{U_{1},\ldots,U_{d}\}\) be the collection of all vertex sets of triangles in \(G\) (which are now disjoint from \(S\)).
When there exists a triangle \(U_{i}\) whose addition to \(T\) increases the value \(\lambda^{\mathrm{L}}(S,T)\), we branch into two possibilities: either \(U_{i}\) is disjoint from \(N[C]\)--then we set \(T\gets T\cup U_{i}\) so the measure (M2) decreases--or \(U_{i}\) intersects \(N(C)\)--then we perform branching as above. We show that in the remaining case all the triangles are separated from \(S\) by the minimum left-restricted \((S,T)\)-separator closest to \(S\); hence the value of \(\lambda^{\mathrm{L}}(S,T)\) equals the value of \(\lambda^{\mathrm{L}}(S,T\cup V(\mathcal{U}))\). Next, let \(P\) be the farthest minimum left-restricted \((S,T\cup V(\mathcal{U}))\)-separator; we use submodularity to justify that we can now safely add to \(S\) all the vertices reachable from \(S\) in \(G-P\). This allows us to assume that for \(u\in P\), either \(u\in N(C)\) or \(u\in C\), which leads to the last branching strategy. We either delete \(u\) (so \(k\) drops) or add \(u\) to \(S\); note that in this case the progress measure may not change directly. The key observation is that adding \(u\) to \(S\) invalidates the farthest \((S,T\cup V(\mathcal{U}))\)-separator \(P\), and now we are promised to make progress in the very next branching step. The different branching scenarios are illustrated in Figure 1.

The only property of \(K_{3}\) that we have relied on is connectivity: if a triangle intersects a triangle-free set \(C\) then it must intersect \(N(C)\) as well. This is no longer true when \(\mathcal{F}\) contains a disconnected graph. For example, the forbidden family for the class of split graphs includes \(2K_{2}\). A subgraph of \(F\in\mathcal{F}\) that can be obtained by removing some components from \(F\) is called a _partial forbidden graph_. We introduce a third measure to keep track of how many different partial forbidden graphs appear as induced subgraphs in \(G[S]\). The main difficulty in generalizing the previous approach lies in the justification of the greedy argument: when \(P\) is a farthest minimum separator between \(S\) and a certain set, then we want to replace \(S\) with the set \(S^{\prime}\) of vertices reachable from \(S\) in \(G-P\). In the setting of connected obstacles this fact could be proven easily because \(S^{\prime}\) was disjoint from all the obstacles. The problem is that now it may contain some partial forbidden subgraphs. We handle this issue by defining \(P\) in such a way that the sets of partial forbidden graphs appearing in \(G[S]\) and \(G[S^{\prime}]\) are the same, and by giving a rearrangement argument about subgraph isomorphisms. This allows us to extend the analysis to any family \(\mathcal{F}\) of forbidden subgraphs.

### Organization

The remainder of the paper is organized as follows. We provide formal preliminaries in Section 2. The algorithm for enumerating secluded \(\mathcal{F}\)-free subgraphs is presented in Section 3. Then in Section 4 we apply it to improve the running times of the two discussed problems. We conclude in Section 5.

Figure 1: Illustration of the branching steps for enumerating triangle-free \(k\)-secluded subgraphs for \(k=3\). Left: the green triangle intersects \(S\); we branch to guess which vertex belongs to \(N(C)\). Middle: setting where \(2=\lambda^{\mathrm{L}}(S,T)<\lambda^{\mathrm{L}}(S,T\cup V(\mathcal{U}))=3\); adding the top triangle to \(T\) increases \(\lambda^{\mathrm{L}}\). The set \(\mathcal{U}\) consists of the colored triangles.
Right: setting where \(\lambda^{\mathrm{L}}(S,T)=\lambda^{\mathrm{L}}(S,T\cup V(\mathcal{U}))=2\), with a corresponding farthest separator \(P\). In this case every seclusion-maximal triangle-free set \(C\supseteq S\) must be a superset of the reachability set of \(S\) in \(G-P\).

## 2 Preliminaries

### Graphs and separators

We consider finite, simple, undirected graphs. We denote the vertex and edge sets of a graph \(G\) by \(V(G)\) and \(E(G)\) respectively, with \(|V(G)|=n\) and \(|E(G)|=m\). For a set of vertices \(S\subseteq V(G)\), by \(G[S]\) we denote the graph induced by \(S\). We use the shorthands \(G-v\) and \(G-S\) for \(G[V(G)\setminus\{v\}]\) and \(G[V(G)\setminus S]\), respectively. The open neighborhood \(N_{G}(v)\) of \(v\in V(G)\) is defined as \(\{u\in V(G)\mid\{u,v\}\in E(G)\}\). The closed neighborhood of \(v\) is \(N_{G}[v]=N_{G}(v)\cup\{v\}\). For \(S\subseteq V(G)\), we have \(N_{G}[S]=\bigcup_{v\in S}N_{G}[v]\) and \(N_{G}(S)=N_{G}[S]\setminus S\). The set \(C\) is called connected if the graph \(G[C]\) is connected.

We proceed by introducing notions concerning separators which are crucial for the branching steps of our algorithms. For two sets \(S,T\subseteq V(G)\) in a graph \(G\), a set \(P\subseteq V(G)\) is an unrestricted \((S,T)\)-separator if no connected component of \(G-P\) contains a vertex from both \(S\setminus P\) and \(T\setminus P\). Note that such a separator may intersect \(S\cup T\). Equivalently, \(P\) is an \((S,T)\)-separator if each \((S,T)\)-path contains a vertex of \(P\). A restricted \((S,T)\)-separator is an unrestricted \((S,T)\)-separator \(P\) which satisfies \(P\cap(S\cup T)=\emptyset\). A left-restricted \((S,T)\)-separator is an unrestricted \((S,T)\)-separator \(P\) which satisfies \(P\cap S=\emptyset\). Let \(\lambda_{G}^{\mathrm{L}}(S,T)\) denote the minimum size of a left-restricted \((S,T)\)-separator, or \(+\infty\) if no such separator exists (which happens when \(S\cap T\neq\emptyset\)).

**Theorem 6** (Ford-Fulkerson). There is an algorithm that, given an \(n\)-vertex \(m\)-edge graph \(G=(V,E)\), disjoint sets \(S,T\subseteq V(G)\), and an integer \(k\), runs in time \(\mathcal{O}(k(n+m))\) and determines whether there exists a restricted \((S,T)\)-separator of size at most \(k\). If so, then the algorithm returns a separator of minimum size.

By the following observation we can translate properties of restricted separators into properties of left-restricted separators.

**Observation 7**. Let \(G\) be a graph and \(S,T\subseteq V(G)\). Consider the graph \(G^{\prime}\) obtained from \(G\) by adding a new vertex \(t\) adjacent to each \(v\in T\). Then \(P\subseteq V(G)\) is a left-restricted \((S,T)\)-separator in \(G\) if and only if \(P\) is a restricted \((S,t)\)-separator in \(G^{\prime}\).

### Extremal separators and submodularity

The following submodularity property of the cardinality of the open neighborhood is well known; cf. [39, §44.12] and [27, Fn. 3].

**Lemma 8** (Submodularity). Let \(G\) be a graph and \(A,B\subseteq V(G)\). Then the following holds:

\[|N_{G}(A)|+|N_{G}(B)|\geq|N_{G}(A\cap B)|+|N_{G}(A\cup B)|.\]

For a graph \(G\) and vertex sets \(S,P\subseteq V(G)\), we denote by \(R_{G}(S,P)\) the set of vertices which can be reached in \(G-P\) from at least one vertex in the set \(S\setminus P\).

**Lemma 9**. Let \(G\) be a graph and \(S,T\subseteq V(G)\) be two disjoint non-adjacent vertex sets.
There exist minimum restricted \((S,T)\)-separators \(P^{-}\) (closest) and \(P^{+}\) (farthest), such that for each minimum restricted \((S,T)\)-separator \(P\), it holds that \(R_{G}(S,P^{-})\subseteq R_{G}(S,P)\subseteq R_{G}(S,P^{+})\). Moreover, if a minimum restricted \((S,T)\)-separator has size \(k\), then \(P^{-}\) and \(P^{+}\) can be identified in \(\mathcal{O}(k(n+m))\) time.

Proof.: It is well-known (cf. [9, Thm. 8.5] for the edge-based variant of this statement, or [27, §3.2] for the same concept with slightly different terminology) that the existence of these separators follows from submodularity (Lemma 8), while they can be computed by analyzing the residual network when applying the Ford-Fulkerson algorithm to compute a minimum separator. We sketch the main ideas for completeness. By merging \(S\) into a single vertex \(s^{+}\) and merging \(T\) into a single vertex \(t^{-}\), which is harmless because a restricted separator is disjoint from \(S\cup T\), we may assume that \(S\) and \(T\) are singletons. Transform \(G\) into an edge-capacitated directed flow network \(D\) in which \(s^{+}\) is the source and \(t^{-}\) is the sink. All remaining vertices \(v\in V(G)\setminus(S\cup T)\) are split into two representatives \(v^{-},v^{+}\) connected by an arc \((v^{-},v^{+})\) of capacity 1. For each edge \(uv\in E(G)\) with \(u,v\in V(G)\setminus\{s^{+},t^{-}\}\) we add arcs \((u^{+},v^{-}),(v^{+},u^{-})\) of capacity 2. For edges of the form \(s^{+}v\) we add an arc \((s^{+},v^{-})\) of capacity 2 to \(D\). Similarly, for edges of the form \(t^{-}v\) we add an arc \((v^{+},t^{-})\) of capacity 2. Then the minimum size \(k\) of a restricted \((S,T)\)-separator in \(G\) equals the maximum flow value in the constructed network, which can be computed by \(k\) rounds of the Ford-Fulkerson algorithm. Each round can be implemented to run in time \(\mathcal{O}(n+m)\). From the state of the residual network when Ford-Fulkerson terminates we can extract \(P^{-}\) and \(P^{+}\) as follows: the set \(P^{-}\) contains all vertices \(v\in V(G)\setminus(S\cup T)\) for which the source can reach \(v^{-}\) but not \(v^{+}\) in the final residual network. Similarly, \(P^{+}\) contains all vertices \(v\in V(G)\setminus(S\cup T)\) for which \(v^{+}\) can reach the sink but \(v^{-}\) cannot.

By Observation 7, we can apply the lemma above to left-restricted separators too; when the sets \(S,T\) are disjoint, then \(S\) is non-adjacent to \(t\) in the graph obtained by adding a vertex \(t\) adjacent to every vertex in \(T\).
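The construction in this proof is concrete enough to sketch in code. The following Python fragment (our own illustration using `networkx`; the function name is hypothetical, and the fragment assumes \(S\) and \(T\) are disjoint and non-adjacent) builds the vertex-split network and extracts the closest separator \(P^{-}\) from residual reachability; \(P^{+}\) is obtained symmetrically by reverse reachability from the sink:

```python
import networkx as nx
from collections import deque

def closest_min_separator(G, S, T):
    """Closest minimum restricted (S,T)-separator P^- via the
    vertex-split flow network from the proof of Lemma 9 (sketch)."""
    S, T = set(S), set(T)
    inner = set(G) - S - T
    D = nx.DiGraph()
    D.add_nodes_from(['s+', 't-'])
    for v in inner:                      # arc v^- -> v^+ of capacity 1
        D.add_edge((v, '-'), (v, '+'), capacity=1)
    for u, v in G.edges():
        for a, b in ((u, v), (v, u)):
            if a in S and b in inner:    # edges incident to the source side
                D.add_edge('s+', (b, '-'), capacity=2)
            elif a in inner and b in T:  # edges incident to the sink side
                D.add_edge((a, '+'), 't-', capacity=2)
            elif a in inner and b in inner:
                D.add_edge((a, '+'), (b, '-'), capacity=2)
    k, flow = nx.maximum_flow(D, 's+', 't-')
    # BFS in the residual network, starting from the source.
    reach, queue = {'s+'}, deque(['s+'])
    while queue:
        x = queue.popleft()
        nxt = [y for y in D.successors(x) if D[x][y]['capacity'] > flow[x][y]]
        nxt += [y for y in D.predecessors(x) if flow[y][x] > 0]
        for y in nxt:
            if y not in reach:
                reach.add(y)
                queue.append(y)
    # P^-: vertices whose copy v^- is reachable but v^+ is not.
    return {v for v in inner if (v, '-') in reach and (v, '+') not in reach}
```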
The extremal separators identified in Lemma 9 explain when adding a vertex to \(S\) or \(T\) increases the separator size. The following statement is not symmetric because we work with the non-symmetric notion of a left-restricted separator.

**Lemma 10**. Let \(G\) be a graph, let \(S,T\) be disjoint vertex sets, and let \(P^{-}\) and \(P^{+}\) be the closest and farthest minimum left-restricted \((S,T)\)-separators. Then for any vertex \(v\in V(G)\), the following holds:

1. \(\lambda^{\mathrm{L}}_{G}(S\cup\{v\},T)>\lambda^{\mathrm{L}}_{G}(S,T)\) if and only if \(v\in R_{G}(T,P^{+})\cup P^{+}\).
2. \(\lambda^{\mathrm{L}}_{G}(S,T\cup\{v\})>\lambda^{\mathrm{L}}_{G}(S,T)\) if and only if \(v\in R_{G}(S,P^{-})\).

Proof.: Adding a vertex to \(S\) or \(T\) can never decrease the separator size, so for both cases, the left-hand side is either equal to or strictly greater than the right-hand side.

(1) Observe that if \(v\notin R_{G}(T,P^{+})\cup P^{+}\), then \(P^{+}\) is also a left-restricted \((S\cup\{v\},T)\)-separator, which implies \(\lambda^{\mathrm{L}}_{G}(S\cup\{v\},T)=\lambda^{\mathrm{L}}_{G}(S,T)\). If \(v\in T\), then (1) holds as \(\lambda^{\mathrm{L}}_{G}(S\cup\{v\},T)=+\infty\). Consider now \(v\in(R_{G}(T,P^{+})\cup P^{+})\setminus T\); we argue that adding it to \(S\) increases the separator size. Assume for a contradiction that there exists a minimum left-restricted \((S\cup\{v\},T)\)-separator \(P\) of size at most \(\lambda^{\mathrm{L}}_{G}(S,T)=|P^{+}|\). Note that since \(P\) is left-restricted, we have \(v\notin P\). Observe that \(P\) is also a left-restricted \((S,T)\)-separator. By Lemma 9 we have \(R_{G}(S,P)\subseteq R_{G}(S,P^{+})\). Since \(v\in(R_{G}(T,P^{+})\cup P^{+})\setminus T\), it follows that \(v\notin R_{G}(S,P)\). We do a case distinction on \(v\) to construct a path \(Q\) from \(v\) to \(T\).

* In the case that \(v\in P^{+}\setminus T\): since \(P^{+}\) is a minimum separator it must be inclusion-minimal. Therefore, since \(P^{+}\setminus\{v\}\) is not an \((S,T)\)-separator, it follows that \(v\) has a neighbor in \(R_{G}(T,P^{+})\), and so there is a path \(Q\) from \(v\) to \(T\) in the graph induced by \(R_{G}(T,P^{+})\cup\{v\}\) such that \(V(Q)\cap P^{+}=\{v\}\).
* In the case that \(v\in R_{G}(T,P^{+})\setminus T\): by definition there is a path \(Q\) from \(v\) to \(T\) in the graph induced by \(R_{G}(T,P^{+})\).

Since \(P\) is a left-restricted \((S\cup\{v\},T)\)-separator and therefore \(v\notin P\), it follows that \(P\) contains at least one vertex \(u\in V(Q)\) that is not in \(R_{G}(S,P^{+})\cup P^{+}\). Let \(P^{\prime}\) be the set of vertices adjacent to \(R_{G}(S,P)\). Since all vertices of \(P^{\prime}\) belong to \(P\) while \(u\notin P^{\prime}\), it follows that \(P^{\prime}\) is a left-restricted \((S,T)\)-separator that is strictly smaller than \(P\), a contradiction to \(|P|\leq\lambda_{G}^{\mathrm{L}}(S,T)\).

(2) If \(v\notin R_{G}(S,P^{-})\), then \(P^{-}\) is a left-restricted \((S,T\cup\{v\})\)-separator as well, which implies \(\lambda_{G}^{\mathrm{L}}(S,T\cup\{v\})=\lambda_{G}^{\mathrm{L}}(S,T)\). If \(v\in R_{G}(S,P^{-})\), suppose that there exists a minimum left-restricted \((S,T\cup\{v\})\)-separator \(P\) of size \(|P^{-}|\). Note that \(v\notin S\), as otherwise no such separator exists. Furthermore, \(P\) is also a left-restricted \((S,T)\)-separator. By Lemma 9 we have \(R_{G}(S,P^{-})\subseteq R_{G}(S,P)\). But since \(v\notin R_{G}(S,P)\) we reach a contradiction, as then \(R_{G}(S,P)\not\supseteq R_{G}(S,P^{-})\).
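As a quick illustration of Lemma 10(2), consider a path on vertices \(1,\ldots,5\) with \(S=\{1\}\) and \(T=\{5\}\): the closest minimum left-restricted separator is \(P^{-}=\{2\}\) and \(R_{G}(S,P^{-})=\{1\}\), so adding any vertex other than \(1\) to \(T\) leaves \(\lambda^{\mathrm{L}}_{G}\) unchanged. This can be exercised against the hypothetical `closest_min_separator` sketch above, together with the transformation of Observation 7:

```python
import networkx as nx

G = nx.path_graph([1, 2, 3, 4, 5])

# Observation 7: attach a new vertex 't' to every vertex of T = {5}.
G1 = G.copy()
G1.add_edge(5, 't')
assert closest_min_separator(G1, {1}, {'t'}) == {2}

# Adding 3 to T (i.e., also attaching 't' to 3) keeps lambda^L at 1,
# since 3 lies outside R_G(S, P^-) = {1}; the separator {2} still works.
G2 = G.copy()
G2.add_edges_from([(5, 't'), (3, 't')])
assert closest_min_separator(G2, {1}, {'t'}) == {2}
```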
The following lemma captures the idea that if \(\lambda_{G}^{\mathrm{L}}(S,T\cup Z)>\lambda_{G}^{\mathrm{L}}(S,T)\), then there is a single vertex from \(Z\) whose addition to \(T\) already increases the size of a minimum left-restricted \((S,T)\)-separator. We will use it to argue that when it is cheaper to separate \(S\) from \(T\) than to separate \(S\) from \(T\) together with all obstacles of a certain form, then there is already a single vertex from one such obstacle which causes this increase.

**Lemma 11**. Let \(G\) be a graph, \(S\subseteq V(G)\), and \(T,Z\subseteq V(G)\setminus S\). If there is no vertex \(v\in Z\) such that \(\lambda_{G}^{\mathrm{L}}(S,T\cup\{v\})>\lambda_{G}^{\mathrm{L}}(S,T)\), then \(\lambda_{G}^{\mathrm{L}}(S,T)=\lambda_{G}^{\mathrm{L}}(S,T\cup Z)\). Furthermore, if \(\lambda_{G}^{\mathrm{L}}(S,T)\leq k\), then in \(\mathcal{O}(k(n+m))\) time we can either find such a vertex \(v\) or determine that no such vertex exists.

Proof.: Let \(P^{-}\) be the minimum left-restricted \((S,T)\)-separator which is closest to \(S\). If for every \(v\in Z\) the value of \(\lambda_{G}^{\mathrm{L}}(S,T\cup\{v\})\) equals \(\lambda_{G}^{\mathrm{L}}(S,T)\), then Lemma 10 implies that each \(v\in Z\) lies outside \(R_{G}(S,P^{-})\), so \(Z\cap R_{G}(S,P^{-})=\emptyset\). Then \(P^{-}\) is a left-restricted \((S,T\cup Z)\)-separator of size \(\lambda_{G}^{\mathrm{L}}(S,T)\). On the other hand, if there is a vertex \(v\in Z\) for which \(\lambda_{G}^{\mathrm{L}}(S,T\cup\{v\})>\lambda_{G}^{\mathrm{L}}(S,T)\), then \(v\in R_{G}(S,P^{-})\). Hence, in order to detect such a vertex it suffices to compute the closest minimum left-restricted \((S,T)\)-separator \(P^{-}\), which can be done in time \(\mathcal{O}(k(n+m))\) via Lemma 9.

Finally, the last lemma of this section uses submodularity to argue that the neighborhood size of a vertex set \(C\) with \(S\subseteq C\subseteq V(G)\setminus T\) does not increase when taking its union with the reachable set \(R_{G}(S,P)\) with respect to a minimum left-restricted \((S,T)\)-separator \(P\).

**Lemma 12**. If \(P\subseteq V(G)\) is a minimum left-restricted \((S,T)\)-separator in a graph \(G\) and \(S^{\prime}=R_{G}(S,P)\), then for any set \(C\) with \(S\subseteq C\subseteq V(G)\setminus T\) we have \(|N_{G}(C\cup S^{\prime})|\leq|N_{G}(C)|\).

Proof.: Observe that since \(P\) is a minimum left-restricted \((S,T)\)-separator, we have \(|P|=\lambda_{G}^{\mathrm{L}}(S,T)\) and \(P=N_{G}(S^{\prime})\). We apply the submodular inequality to the sets \(C\) and \(S^{\prime}\):

\[|N_{G}(C)|+|N_{G}(S^{\prime})|\geq|N_{G}(C\cup S^{\prime})|+|N_{G}(C\cap S^{\prime})|\geq|N_{G}(C\cup S^{\prime})|+\lambda_{G}^{\mathrm{L}}(S,T).\]

Here the last step comes from the fact that \(S\subseteq S^{\prime}\subseteq V(G)\setminus T\), since \(S^{\prime}\) is the set reachable from \(S\) with respect to a left-restricted \((S,T)\)-separator, so that \(C\cap S^{\prime}\) contains all of \(S\) and is disjoint from \(T\). This implies that \(N_{G}(C\cap S^{\prime})\) is a left-restricted \((S,T)\)-separator, so that \(|N_{G}(C\cap S^{\prime})|\geq\lambda_{G}^{\mathrm{L}}(S,T)\). As \(|N_{G}(S^{\prime})|=|P|=\lambda_{G}^{\mathrm{L}}(S,T)\), canceling these terms from both sides gives \(|N_{G}(C)|\geq|N_{G}(C\cup S^{\prime})|\), which completes the proof.
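The submodular inequality of Lemma 8, on which this argument rests, is also easy to sanity-check numerically; a small random test (again an illustrative sketch of ours):

```python
import random
import networkx as nx

def N(G, A):
    """Open neighborhood of the vertex set A in G."""
    return set().union(*(set(G[v]) for v in A)) - A if A else set()

random.seed(0)
for _ in range(100):
    G = nx.gnp_random_graph(12, 0.3, seed=random.randrange(10**6))
    A = set(random.sample(list(G), 5))
    B = set(random.sample(list(G), 5))
    # |N(A)| + |N(B)| >= |N(A ∪ B)| + |N(A ∩ B)|
    assert len(N(G, A)) + len(N(G, B)) >= len(N(G, A | B)) + len(N(G, A & B))
```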
## 3 The enumeration algorithm

We need the following concept to deal with forbidden subgraphs which may be disconnected. A _partial forbidden graph_ \(F^{\prime}\) is a graph obtained from some \(F\in\mathcal{F}\) by deleting zero or more connected components. (So each \(F\in\mathcal{F}\) itself is also considered a partial forbidden graph.) We use the following notation to work with induced subgraph isomorphisms. An induced subgraph isomorphism from \(H\) to \(G\) is an injection \(\phi\colon V(H)\to V(G)\) such that for all distinct \(u,v\in V(H)\) we have \(\{u,v\}\in E(H)\) if and only if \(\{\phi(u),\phi(v)\}\in E(G)\). For a vertex set \(U\subseteq V(H)\) we let \(\phi(U):=\{\phi(u)\mid u\in U\}\). For a subgraph \(H^{\prime}\) of \(H\) we write \(\phi(H^{\prime})\) instead of \(\phi(V(H^{\prime}))\). The following definition will be important to capture the progress of the recursive algorithm. See Figure 2 for an illustration.

We say that a vertex set \(U\subseteq V(G)\) _enriches_ a vertex set \(S\subseteq V(G)\) with respect to \(\mathcal{F}\) if there exists a partial forbidden graph \(F^{\prime}\) such that \(G[S\cup U]\) contains an induced subgraph isomorphic to \(F^{\prime}\) but \(G[S]\) does not. We call such a set \(U\) an enrichment. An enrichment \(U\) is called _tight_ if \(U=\phi(F^{\prime})\setminus S\) for some induced subgraph isomorphism \(\phi\colon V(F^{\prime})\to V(G)\) from some partial forbidden graph \(F^{\prime}\) for which \(G[S]\) does not contain an induced subgraph isomorphic to \(F^{\prime}\).

The following observation will be used to argue for the correctness of the recursive scheme. Note that we get an implication only in one way (being seclusion-maximal in \(G\) implies being seclusion-maximal in \(G-v\), not the other way around), which is the reason why we output a superset of the sought set in Theorem 2.

**Observation 15**. Let \(G\) be a graph containing disjoint sets \(S,T\subseteq V(G)\) and let \(C\subseteq V(G)\) be seclusion-maximal with respect to being connected, \(\mathcal{F}\)-free, \(k\)-secluded and satisfying \(S\subseteq C\subseteq V(G)\setminus T\). For each \(v\in N_{G}(C)\) it holds that \(C\) is seclusion-maximal in \(G-v\) with respect to being connected, \(\mathcal{F}\)-free, \((k-1)\)-secluded and satisfying \(S\subseteq C\subseteq V(G-v)\setminus T\).

With these ingredients, we present the enumeration algorithm. Recall that \(||\mathcal{F}||=\max_{F\in\mathcal{F}}|V(F)|\) denotes the maximum order of any graph in \(\mathcal{F}\).

Figure 2: Illustration of the idea of enrichment and the branching steps in the proof of Theorem 2. Here \(F=C_{4}\uplus K_{4}\). Left: The graph \(G[S]\) contains \(C_{4}\) and \(K_{4}\), but not \(F\). The set \(U\) enriches \(S\) since \(G[S\cup U]\) contains a new partial forbidden graph \(F\). Every component of \(G[U]\) is adjacent to \(S\), so Step 3 applies. Right: The two top copies of \(C_{4}\) do not enrich \(S\). One of them intersects the only copy of \(K_{4}\) in \(G[S]\); the other one is adjacent to the only copy of \(K_{4}\), while \(F\) has to appear as an induced subgraph. However, the connected set \(U\) enriches \(S\) and it gets detected in Step 4. In both cases the enrichments are tight.

**Theorem 2**.: _Let \(\mathcal{F}\) be a finite set of graphs. For any \(n\)-vertex graph \(G\), non-empty vertex set \(S\subseteq V(G)\), potentially empty \(T\subseteq V(G)\setminus S\), and integer \(k\), the number of \(k\)-secluded induced subgraphs \(G[C]\) which are seclusion-maximal with respect to being connected, \(\mathcal{F}\)-free, and satisfying \(S\subseteq C\subseteq V(G)\setminus T\), is bounded by \(2^{\mathcal{O}_{\mathcal{F}}(k)}\). A superset of size \(2^{\mathcal{O}_{\mathcal{F}}(k)}\) of these subgraphs can be enumerated in time \(2^{\mathcal{O}_{\mathcal{F}}(k)}\cdot n^{||\mathcal{F}||+\mathcal{O}(1)}\) and polynomial space._

Proof.: Algorithm \(\mathsf{Enum}_{\mathcal{F}}(G,S,T,k)\) solves the enumeration task as follows.

1. Stop the algorithm if one of the following holds:
   a. \(\lambda_{G}^{\mathrm{L}}(S,T)>k\),
   b. the vertices of \(S\) are not contained in a single connected component of \(G\), or
   c. the graph \(G[S]\) contains an induced subgraph isomorphic to some \(F\in\mathcal{F}\).

   _There are no secluded subgraphs satisfying all imposed conditions._

2. If the connected component \(C\) of \(G\) which contains \(S\) is \(\mathcal{F}\)-free and includes no vertex of \(T\): output \(C\) and stop.
   _Component \(C\) is the unique seclusion-maximal one satisfying the imposed conditions._

3. If there is a vertex set \(U\subseteq V(G)\setminus(S\cup T)\) such that:
   - each connected component of \(G[U]\) is adjacent to a vertex of \(S\), and
   - the set \(U\) is a tight enrichment of \(S\) with respect to \(\mathcal{F}\) (so \(G[S\cup U]\) contains a new partial forbidden graph),

   then execute the following calls and stop:
   a. For each \(u\in U\) call \(\mathsf{Enum}_{\mathcal{F}}(G-u,S,T,k-1)\).
   b. Call \(\mathsf{Enum}_{\mathcal{F}}(G,S\cup U,T,k)\).

   _A tight enrichment can have at most \(||\mathcal{F}||\) vertices, which bounds the branching factor in Step 3a. Note that these calls are exhaustive even though we do not consider adding \(U\) to \(T\): since each component of \(G[U]\) is adjacent to a vertex of \(S\), if a relevant secluded subgraph does not contain all of \(U\) then it contains some vertex of \(U\) in its neighborhood and we find it in Step 3a._

4. For the rest of the algorithm, let \(\mathcal{U}\) denote the collection of all connected vertex sets \(U\subseteq V(G)\setminus(S\cup T)\) which form tight enrichments of \(S\) with respect to \(\mathcal{F}\). Let \(V(\mathcal{U}):=\bigcup_{U\in\mathcal{U}}U\).
   a. If \(\lambda_{G}^{\mathrm{L}}(S,T)<\lambda_{G}^{\mathrm{L}}(S,T\cup V(\mathcal{U}))\), then (using Lemma 11) there exists \(U\in\mathcal{U}\) such that \(\lambda_{G}^{\mathrm{L}}(S,T\cup U)>\lambda_{G}^{\mathrm{L}}(S,T)\); execute the following calls and stop:
      i. For each \(u\in U\) call \(\mathsf{Enum}_{\mathcal{F}}(G-u,S,T,k-1)\). (The value of \(k\) decreases.)
      ii. Call \(\mathsf{Enum}_{\mathcal{F}}(G,S\cup U,T,k)\). (We absorb a new partial forbidden graph.)
      iii. Call \(\mathsf{Enum}_{\mathcal{F}}(G,S,T\cup U,k)\). (The separator size increases.)
   b. If \(\lambda_{G}^{\mathrm{L}}(S,T)=\lambda_{G}^{\mathrm{L}}(S,T\cup V(\mathcal{U}))\), then let \(P\) be the farthest minimum left-restricted \((S,T\cup V(\mathcal{U}))\)-separator in \(G\), and let \(S^{\prime}=R_{G}(S,P)\supseteq S\). Pick an arbitrary \(p\in P\) (which may be contained in \(T\) but not in \(S\)).
      i. Call \(\mathsf{Enum}_{\mathcal{F}}(G-p,S^{\prime},T\setminus\{p\},k-1)\). (The value of \(k\) decreases.)
      ii. If \(p\notin T\), then call \(\mathsf{Enum}_{\mathcal{F}}(G,S^{\prime}\cup\{p\},T,k)\). (Either here or in the next iteration we will be able to make progress.)

   _It might happen that \(\mathcal{U}\) is empty; in this case the algorithm will execute Step 4b. Also note that \(P\) is non-empty because the algorithm did not stop in Step 2; hence it is always possible to choose a vertex \(p\in P\)._

Before providing an in-depth analysis of the algorithm, we establish that it always terminates. For each recursive call, either a vertex outside \(S\) is deleted, or one of \(S\) or \(T\) grows in size while the two remain disjoint. Since \(S\) and \(T\) are vertex subsets of a finite graph, this process terminates. The key argument in the correctness of the algorithm is formalized in the following claim.

\(\rhd\) **Claim 16.** If the algorithm reaches Step 4b, then every seclusion-maximal \(k\)-secluded subgraph satisfying the conditions of the theorem statement contains \(S^{\prime}\).
**Proof.** We prove the claim by showing that for an arbitrary \(k\)-secluded \(\mathcal{F}\)-free connected induced subgraph \(G[C]\) satisfying \(S\subseteq C\subseteq V(G)\setminus T\), the subgraph induced by \(C\cup S^{\prime}\) also satisfies these properties while \(|N_{G}(C\cup S^{\prime})|\leq|N_{G}(C)|\). Hence any seclusion-maximal subgraph satisfying the conditions contains \(S^{\prime}\). Under the conditions of Step 4b, we have \(\lambda_{G}^{\mathrm{L}}(S,T)=\lambda_{G}^{\mathrm{L}}(S,T\cup V(\mathcal{U}))\), so that the set \(P\) is a minimum left-restricted \((S,T)\)-separator. Next, we have \(S^{\prime}=R_{G}(S,P)\). By exploiting submodularity of the size of the open neighborhood, we prove in Lemma 12 that \(|N_{G}(C\cup S^{\prime})|\leq|N_{G}(C)|\). The key part of the argument is to prove that \(C\cup S^{\prime}\) induces an \(\mathcal{F}\)-free subgraph. Assume for a contradiction that \(G[C\cup S^{\prime}]\) contains an induced subgraph isomorphic to \(F\in\mathcal{F}\), and let \(\phi\colon V(F)\to C\cup S^{\prime}\) denote an induced subgraph isomorphism. Out of all ways to choose \(\phi\), fix a choice that minimizes the number of vertices \(|\phi(F)\setminus S|\) the subgraph uses from outside \(S\). We distinguish two cases.

### Neighborhood of \(S\) intersects \(\phi(F)\)

If \(\phi(F)\cap N_{G}(S)\neq\emptyset\), then we will use the assumption that Step 3 of the algorithm was not applicable to derive a contradiction. Let \(F^{\prime}\) be the graph consisting of those connected components \(F_{i}\) of \(F\) for which \(\phi(F_{i})\cap N_{G}[S]\neq\emptyset\); let \(U=\phi(F^{\prime})\setminus S\). Observe that each connected component of \(G[U]\) is adjacent to a vertex of \(S\). By construction \(U\) is disjoint from \(S\), and \(U\) is disjoint from \(T\) since \(\phi(F)\subseteq C\cup S^{\prime}\) while both these sets are disjoint from \(T\). Hence \(U\) satisfies all but one of the conditions for applying Step 3. Since the algorithm reached Step 4b, it follows that \(U\) failed the last criterion, which means that the partial forbidden graph \(F^{\prime}\) also exists as an induced subgraph in \(G[S]\). Let \(\phi_{F^{\prime}}\colon V(F^{\prime})\to S\) be an induced subgraph isomorphism from \(F^{\prime}\) to \(G[S]\). Since all vertices \(v\in V(F)\) for which \(\phi(v)\in N_{G}[S]\) satisfy \(v\in V(F^{\prime})\), we can define a new subgraph isomorphism \(\phi^{\prime}\) of \(F\) in \(G[C\cup S^{\prime}]\) as follows for each \(v\in V(F)\):

\[\phi^{\prime}(v)=\begin{cases}\phi_{F^{\prime}}(v)&\text{if }v\in V(F^{\prime}),\\ \phi(v)&\text{otherwise.}\end{cases}\tag{1}\]

Observe that this is a valid induced subgraph isomorphism since \(F^{\prime}\) consists of some connected components of \(F\), and we effectively replace the model of \(F^{\prime}\) by \(\phi_{F^{\prime}}\). Since the model of the remaining graph \(\overline{F^{\prime}}=F-F^{\prime}\) does not use any vertex of \(N_{G}[S]\) by definition of \(F^{\prime}\), there are no edges between vertices of \(\phi_{F^{\prime}}(F^{\prime})\) and vertices of \(\phi(\overline{F^{\prime}})\), which validates the induced subgraph isomorphism. Since \(\phi(F)\) contains at least one vertex from \(N_{G}(S)\) while \(\phi^{\prime}(F)\) does not, and the only vertices of \(\phi^{\prime}(F)\setminus\phi(F)\) belong to \(S\), we conclude that \(\phi^{\prime}(F)\) contains strictly fewer vertices outside \(S\) than \(\phi(F)\); a contradiction to the minimality of \(\phi\).
### Neighborhood of \(S\) does not intersect \(\phi(F)\)

Now suppose that \(\phi(F)\cap N_{G}(S)=\emptyset\). If \(\phi(F)\subseteq C\), then \(\phi(F)\) is an induced \(F\)-subgraph in \(G[C]\), a contradiction to the assumption that \(C\) is \(\mathcal{F}\)-free. Hence \(\phi(F)\) must contain a vertex \(v\in S^{\prime}\setminus C\subseteq S^{\prime}\setminus S\). Since the previous case was not applicable, \(v\notin N_{G}(S)\) and therefore \(v\in S^{\prime}\setminus N_{G}[S]\). Fix an arbitrary connected component \(F_{i}\) of \(F\) for which \(\phi(F_{i})\) contains a vertex of \(S^{\prime}\setminus N_{G}[S]\). We derive several properties of \(\phi(F_{i})\).

1. Since \(F_{i}\) is a connected component of \(F\), the graph \(G[\phi(F_{i})]\) is connected.
2. We claim that \(\phi(F_{i})\cap S=\emptyset\). Note that a connected subgraph cannot both contain a vertex from \(S\) and a vertex outside \(N_{G}[S]\) without intersecting \(N_{G}(S)\). Since \(\phi(F)\cap N_{G}(S)=\emptyset\) by the case distinction, while \(G[\phi(F_{i})]\) is connected and \(\phi(F_{i})\) contains a vertex of \(S^{\prime}\setminus N_{G}[S]\), we find \(\phi(F_{i})\cap S=\emptyset\).
3. \(\phi(F_{i})\cap T=\emptyset\), since \(\phi(F)\subseteq C\cup S^{\prime}\) while both \(C\) and \(S^{\prime}\) are disjoint from \(T\).
4. We claim that \(\phi(F_{i})\notin\mathcal{U}\). To see that, recall that \(S^{\prime}=R_{G}(S,P)\) is the set of vertices reachable from \(S\) when removing the \((S,T\cup V(\mathcal{U}))\)-separator \(P\). The definition of separator therefore ensures that no vertex of \(S^{\prime}\) belongs to \(V(\mathcal{U})\). Since \(\phi(F_{i})\) contains a vertex of \(S^{\prime}\setminus N_{G}[S]\) by construction, some vertex of \(\phi(F_{i})\) does not belong to \(V(\mathcal{U})\) and therefore \(\phi(F_{i})\notin\mathcal{U}\).

Now note that \(\phi(F_{i})\) satisfies almost all requirements for being contained in the set \(\mathcal{U}\) defined in Step 4: it induces a connected subgraph and it is disjoint from \(S\cup T\). From the fact that \(\phi(F_{i})\notin\mathcal{U}\) we therefore conclude that it fails the last criterion: the set \(\phi(F_{i})\) is not a tight enrichment of \(S\). Let \(F^{\prime}\) be the graph formed by \(F_{i}\) together with all components \(F_{j}\) of \(F\) for which \(\phi(F_{j})\subseteq S\); then \(\phi(F_{i})=\phi(F^{\prime})\setminus S\). Since \(\phi(F_{i})\) is not a tight enrichment of \(S\), the partial forbidden graph \(F^{\prime}\) is also contained in \(G[S]\). Let \(\phi_{F^{\prime}}\colon F^{\prime}\to S\) denote an induced subgraph isomorphism of \(F^{\prime}\) to \(G[S]\). Since \(\phi(F)\) contains no vertex of \(N_{G}(S)\), we can define a new subgraph isomorphism \(\phi^{\prime}\) of \(F\) in \(G[C\cup S^{\prime}]\) exactly as in (1). Since the graph \(F^{\prime}\) consists of some connected components of \(F\), while \(\phi_{F^{\prime}}(F^{\prime})\subseteq S\) and \(\phi(\overline{F^{\prime}})\cap N_{G}[S]=\emptyset\), it follows that \(\phi^{\prime}\) is an induced subgraph isomorphism of \(F\) in \(G[C\cup S^{\prime}]\). But \(|\phi^{\prime}(F)\setminus S|\) is strictly smaller than \(|\phi(F)\setminus S|\), since \(\phi(F_{i})\) intersects \(S^{\prime}\setminus N_{G}[S]\) while \(\phi^{\prime}(F_{i})\subseteq\phi^{\prime}(F^{\prime})\subseteq S\), and \(\phi\) and \(\phi^{\prime}\) coincide on \(\overline{F^{\prime}}\). This contradicts the minimality of the choice of \(\phi\).
Since the case distinction is exhaustive, this proves the claim. \(\lhd\)

Using the previous claim, we can establish the correctness of the algorithm.

\(\rhd\) **Claim 17.** If \(G[C]\) is an induced subgraph of \(G\) that is seclusion-maximal with respect to being connected, \(\mathcal{F}\)-free, \(k\)-secluded and satisfying \(S\subseteq C\subseteq V(G)\setminus T\), then \(C\) occurs in the output of \(\mathsf{Enum}_{\mathcal{F}}(G,S,T,k)\).

**Proof.** We prove this claim by induction on the recursion depth of the \(\mathsf{Enum}_{\mathcal{F}}\) algorithm, which is valid since, as argued above, it is finite. In the base case, the algorithm does not recurse. In other words, the algorithm either stopped in Step 1 or 2. If the algorithm stops in Step 1, then there can be no induced subgraph satisfying the conditions and so there is nothing to show. If the algorithm stops in Step 2, then the only seclusion-maximal induced subgraph is the \(\mathcal{F}\)-free connected component containing \(S\). Note that this component is \(k\)-secluded since \(k\geq 0\), as \(\lambda_{G}^{\mathrm{L}}(S,T)\geq 0\) and the algorithm did not stop in Step 1a.

For the induction step, we may assume that each recursive call made by the algorithm correctly enumerates a superset of the seclusion-maximal subgraphs satisfying the conditions imposed by the parameters of the recursive call, as the recursion depth of the execution of those calls is strictly smaller than the recursion depth for the current arguments \((G,S,T,k)\). Consider a connected \(\mathcal{F}\)-free \(k\)-secluded induced subgraph \(G[C]\) of \(G\) with \(S\subseteq C\subseteq V(G)\setminus T\) that is seclusion-maximal with respect to satisfying all these conditions.

Suppose there is a vertex set \(U\subseteq V(G)\setminus(S\cup T)\) that satisfies the conditions of Step 3. If \(U\subseteq C\), then by induction \(C\) is part of the enumerated output of Step 3b. Otherwise, since each connected component of \(G[U]\) is adjacent to a vertex in \(S\), there is at least one vertex \(u\in U\) such that \(u\in N_{G}(C)\). By Observation 15, the output of the corresponding call in Step 3a contains \(C\). Note that since \(U\cap T=\emptyset\), we have \(T\subseteq V(G)\setminus(S\cup U)\) and therefore the recursive calls satisfy the input requirements.

Next we consider the correctness in case such a set \(U\) does not exist, so the algorithm reaches Step 4. Let \(\mathcal{U}\) be the set of tight enrichments as defined in Step 4. First suppose that \(\lambda_{G}^{\mathrm{L}}(S,T)<\lambda_{G}^{\mathrm{L}}(S,T\cup V(\mathcal{U}))\). Then by the contrapositive of the first part of Lemma 11 with \(Z=V(\mathcal{U})\), there is a vertex \(v\in V(\mathcal{U})\setminus T\) such that \(\lambda_{G}^{\mathrm{L}}(S,T\cup\{v\})>\lambda_{G}^{\mathrm{L}}(S,T)\). By picking an enrichment \(U\in\mathcal{U}\) such that \(v\in U\), this implies \(\lambda_{G}^{\mathrm{L}}(S,T\cup U)>\lambda_{G}^{\mathrm{L}}(S,T)\). Now if there is a vertex \(u\in U\) such that \(u\in N_{G}(C)\), then by induction and Observation 15 we get that \(C\) is output by the corresponding call in Step 4(a)i. Otherwise, either \(U\subseteq C\) or \(U\cap C=\emptyset\) (since \(U\) is connected), and \(C\) is found in Step 4(a)ii or Step 4(a)iii, respectively. Again observe that these recursive calls satisfy the input requirements as \(U\cap(S\cup T)=\emptyset\).

Finally suppose that \(\lambda_{G}^{\mathrm{L}}(S,T)=\lambda_{G}^{\mathrm{L}}(S,T\cup V(\mathcal{U}))\). By Claim 16 we get that \(S^{\prime}\subseteq C\).
We first argue that \(P=N_{G}(S^{\prime})\) is non-empty. Note that since the algorithm did not stop in Step 1, the graph \(G[S]\) is \(\mathcal{F}\)-free and \(S\) is contained in a single connected component of \(G\). Furthermore, since it did not stop in Step 2, the connected component containing \(S\) either has a vertex of \(T\) or is not \(\mathcal{F}\)-free. Note that the former case already implies \(\lambda_{G}^{\mathrm{L}}(S,T)>0\). If the component has no vertex of \(T\) and is not \(\mathcal{F}\)-free, then it contains a vertex set \(J\) for which \(G[J]\) is isomorphic to some \(F\in\mathcal{F}\). Observe that \(J\setminus(S\cup T)=J\setminus S\) is a tight enrichment of \(S\). We have established that it is possible to enrich \(S\), but we need an enrichment that meets the conditions of Step 4. Let \(U\subseteq V(G)\setminus(S\cup T)\) be a tight enrichment of minimum size and let \(\phi\colon V(F^{\prime})\to V(G)\) be the corresponding subgraph isomorphism from some partial forbidden graph \(F^{\prime}\); we have \(U=\phi(F^{\prime})\setminus S\). We argue that \(G[U]\) is connected. If each connected component of \(G[U]\) were adjacent to a vertex of \(S\), then Step 3 would have applied, contradicting the fact that the algorithm reaches Step 4. Hence, there exists a connected component of \(G[U]\) that is non-adjacent to \(S\); let \(U^{\prime}\) be the vertex set of such a component. Since \(U\) is chosen to be minimum, we get that \(U\setminus U^{\prime}\) is not a tight enrichment, and so there is an induced subgraph of \(G[S]\) isomorphic to the partial forbidden graph \(F^{\prime\prime}=G[\phi(F^{\prime})\setminus U^{\prime}]\). This subgraph of \(G[S]\) combines with the graph \(G[U^{\prime}]\) to form an induced subgraph isomorphic to \(F^{\prime}\) (we exploit that \(U^{\prime}\) is not adjacent to \(S\)), which shows that \(U^{\prime}\) is a tight enrichment. By minimality of \(U\) we obtain \(U=U^{\prime}\). Hence \(U\) is not adjacent to \(S\) and the graph \(G[U]\) is connected, so \(U\in\mathcal{U}\). Since \(U\) and \(S\) are contained in the same connected component, we get that \(\lambda_{G}^{\mathrm{L}}(S,T\cup V(\mathcal{U}))>0\). This implies there exists some vertex \(p\in P=N_{G}(S^{\prime})\). Since \(S^{\prime}\subseteq C\), we either get \(p\in N_{G}(C)\), or (if \(p\notin T\)) \(p\in C\). By induction (and Observation 15) we conclude that \(C\) is part of the output of Step 4(b)i or Step 4(b)ii. The condition \(p\notin T\) ensures that the input requirements of the latter recursive call are satisfied. \(\lhd\)

As the previous claim shows that the algorithm enumerates a superset of the relevant seclusion-maximal induced subgraphs, to prove Theorem 2 it suffices to bound the size of the search tree generated by the algorithm, and thereby the running time and the total number of induced subgraphs which are given as output. To that end, we argue that for any two successive recursive calls in the recursion tree, at least one of them makes strict progress on a relevant measure. Since no call can increase the measure, this will imply a bound on the depth of the recursion tree. Since it is easy to see that the branching factor is a constant depending on \(||\mathcal{F}||\), this will lead to the desired bound.

\(\rhd\) **Claim 18.** The search tree generated by the call \(\mathsf{Enum}_{\mathcal{F}}(G,S,T,k)\) has depth \(\mathcal{O}_{\mathcal{F}}(k)\) and \(2^{\mathcal{O}_{\mathcal{F}}(k)}\) leaves.

**Proof.**
Let \(g(X)\) denote the number of partial forbidden graphs of \(\mathcal{F}\) contained in \(G[X]\); note that \(g(X)\leq\sum_{F\in\mathcal{F}}2^{|V(F)|}\). For the running time analysis, we consider the progress measure \(k+(k-\lambda_{G}^{\mathrm{L}}(S,T))+(g(V(G))-g(S))\). We argue that the measure drops by at least one after two consecutive recursive calls to the algorithm. For most cases, the measure already drops in the first recursive call.

First suppose that a recursive call is made in Step 3a. Then the third summand does not increase: \(S\) does not change while \(g(V(G)\setminus\{u\})\leq g(V(G))\). We have \(\lambda_{G-u}^{\mathrm{L}}(S,T)\geq\lambda_{G}^{\mathrm{L}}(S,T)-1\). Since \(k\) is decreased by one, the measure strictly goes down. Next suppose a recursive call is made in Step 3b. Since \(g(S\cup U)>g(S)\) by construction, and \(\lambda_{G}^{\mathrm{L}}(S\cup U,T)\geq\lambda_{G}^{\mathrm{L}}(S,T)\), again the measure strictly goes down.

The fact that the measure drops for a recursive call in Step 4(a)i follows akin to the arguments for Step 3a. The same holds for Step 4(a)ii akin to Step 3b. For a recursive call made in Step 4(a)iii, we know by assumption that \(\lambda_{G}^{\mathrm{L}}(S,T\cup U)>\lambda_{G}^{\mathrm{L}}(S,T)\). Since \(k\) and \(S\) remain the same, the measure strictly decreases.

The reasoning becomes more involved for a recursive call in Step 4b. For a recursive call in Step 4(b)i, we have \(g(S^{\prime})\geq g(S)\) as \(S\subseteq S^{\prime}\), while \(\lambda_{G-p}^{\mathrm{L}}(S^{\prime},T\setminus\{p\})=\lambda_{G}^{\mathrm{L}}(S,T)-1\) since \(p\) belongs to a minimum left-restricted \((S,T)\)-separator in \(G\), which is also a minimum left-restricted \((S^{\prime},T)\)-separator. Since \(k\) goes down by one, the measure strictly decreases. Finally, consider a recursive call made in Step 4(b)ii (so \(p\notin T\)). Note that \(g(S^{\prime}\cup\{p\})\geq g(S)\) as \(S\subseteq S^{\prime}\), \(k\) remains the same, and \(\lambda_{G}^{\mathrm{L}}(S^{\prime}\cup\{p\},T)\geq\lambda_{G}^{\mathrm{L}}(S,T)\). We distinguish three cases, depending on whether \(p\) is in some enrichment.

* If \(\{p\}\in\mathcal{U}\), then actually \(g(S^{\prime}\cup\{p\})>g(S)\) and the measure strictly drops.
* If \(\{p\}\) is not a tight enrichment of \(S\), but \(p\in U\) for some \(U\in\mathcal{U}\), observe that \(U\setminus\{p\}\) is disjoint from \(S^{\prime}\cup\{p\}\cup T\), forms a tight enrichment of \(S^{\prime}\cup\{p\}\), and each connected component of \(G[U\setminus\{p\}]\) is adjacent to \(p\in S^{\prime}\cup\{p\}\) as \(G[U]\) is connected. It follows that in the next call Step 3 applies (which it reaches, as we assumed the algorithm recurses twice) and again we make progress.
* In the remainder we have \(p\notin V(\mathcal{U})\). First consider the case that \(\lambda_{G}^{\mathrm{L}}(S^{\prime}\cup\{p\},T)=\lambda_{G}^{\mathrm{L}}(S^{\prime}\cup\{p\},T\cup V(\mathcal{U}))\). Then since \(P=N_{G}(S^{\prime})\) was a farthest \((S,T\cup V(\mathcal{U}))\)-separator, by Lemma 10 we get that \(\lambda_{G}^{\mathrm{L}}(S^{\prime}\cup\{p\},T)=\lambda_{G}^{\mathrm{L}}(S^{\prime}\cup\{p\},T\cup V(\mathcal{U}))>\lambda_{G}^{\mathrm{L}}(S,T\cup V(\mathcal{U}))=\lambda_{G}^{\mathrm{L}}(S,T)\), and therefore the progress measure strictly drops. In the remaining case we have \(\lambda_{G}^{\mathrm{L}}(S^{\prime}\cup\{p\},T)<\lambda_{G}^{\mathrm{L}}(S^{\prime}\cup\{p\},T\cup V(\mathcal{U}))\).
Since \(p\notin V(\mathcal{U})\), if the algorithm reaches Step 4 in the next iteration, the set of enrichments \(\mathcal{U}\) remains the same. But then Step 4a applies, which makes progress in the measure as argued above. We have shown that the measure decreases by at least one after two consecutive recursive calls. The algorithm cannot proceed once the measure becomes negative, because \(g(S)\) cannot grow beyond \(g(V(G))\) and whenever \(k<0\) or \(\lambda_{G}^{\mathrm{L}}(S,T)>k\) the algorithm immediately stops. Since \(g(V(G))\) is upper-bounded by a constant depending on \(||\mathcal{F}||\) and \(|\mathcal{F}|\), we infer that the search tree has depth \(\mathcal{O}_{\mathcal{F}}(k)\). Any tight enrichment detected in Step 3 or Step 4 can have at most \(||\mathcal{F}||\) vertices, so the branching factor is bounded by \(||\mathcal{F}||\). Hence, the search tree has \(2^{\mathcal{O}_{\mathcal{F}}(k)}\) leaves as required. \(\lhd\) The previous claim implies that the number of seclusion-maximal connected \(\mathcal{F}\)-free \(k\)-secluded induced subgraphs containing all of \(S\) and none of \(T\) is \(2^{\mathcal{O}_{\mathcal{F}}(k)}\), since the algorithm outputs at most one subgraph per call and only does so in leaf nodes of the recursion tree. As Claim 18 bounds the size of the search tree generated by the algorithm, the desired bound on the total running time follows from the claim below. \(\rhd\) Claim 19. A single iteration of \(\mathsf{Enum}_{\mathcal{F}}(G,S,T,k)\) can be implemented to run in time \(|\mathcal{F}|\cdot 2^{||\mathcal{F}||}\cdot n^{||\mathcal{F}||+\mathcal{O}(1)}\) and polynomial space. Proof. Within this proof, for a graph \(F\) we abbreviate \(|V(F)|\) to \(|F|\). Deciding whether \(\lambda_{G}^{\mathrm{L}}(S,T)>k\), as required in Step 1, can be done in \(\mathcal{O}(k(n+m))\) time by Theorem 6 and Observation 7. Finding the connected components of \(G\), and deciding whether \(S\) is contained in only one of them, can be done in \(\mathcal{O}(n+m)\) time. Deciding whether \(G[S]\) contains an induced subgraph isomorphic to some \(F\in\mathcal{F}\) can be done in \(|\mathcal{F}|\cdot n^{||\mathcal{F}||+\mathcal{O}(1)}\) time. In the same running time we can decide whether the connected component containing \(S\) is \(\mathcal{F}\)-free and contains no vertex of \(T\), as needed for Step 2. For Step 3, we proceed as follows. For each \(F\in\mathcal{F}\), for each partial forbidden graph \(F^{\prime}\) of \(F\) (which consists of some subset of the connected components of \(F\)), verify whether there is an induced subgraph of \(G[S]\) isomorphic to \(F^{\prime}\) by checking all of the at most \(n^{|F^{\prime}|}\) ways in which it could appear and verifying in \(\mathcal{O}(n^{2})\) time whether the right adjacencies are present. Keep track of which partial forbidden graphs are not present in \(G[S]\). Next, for each partial forbidden graph \(F^{\prime}\) not appearing in \(G[S]\), for each of the at most \(n^{|F^{\prime}|}\) induced subgraph isomorphisms \(\phi:V(F^{\prime})\to V(G)\setminus T\) we verify whether each connected component of \(U=\phi(F^{\prime})\setminus S\) is adjacent to a vertex of \(S\). This brings the total time for Step 3 to \(|\mathcal{F}|\cdot 2^{||\mathcal{F}||}\cdot n^{||\mathcal{F}||+\mathcal{O}(1)}\). In the same time we can compute \(\mathcal{U}\) for Step 4 (this time, \(G[U]\) should be connected rather than each component being adjacent to \(S\)).
Then, deciding if \(\lambda_{G}^{\mathrm{L}}(S,T)<\lambda_{G}^{\mathrm{L}}(S,T\cup V(\mathcal{U}))\) for Step 4a can be done in \(\mathcal{O}(k(n+m))\) time by Theorem 6 and Observation 7, since \(\lambda_{G}^{\mathrm{L}}(S,T)\leq k\). Finding \(U\in\mathcal{U}\) such that \(\lambda_{G}^{\mathrm{L}}(S,T\cup U)>\lambda_{G}^{\mathrm{L}}(S,T)\) can be done in \(\mathcal{O}(n^{||\mathcal{F}||}\cdot k(n+m))\) time. If Step 4a does not apply, then automatically Step 4b does, and so we get \(\lambda_{G}^{\mathrm{L}}(S,T)=\lambda_{G}^{\mathrm{L}}(S,T\cup V(\mathcal{U}))\). Finally, computing the farthest left-restricted minimum \((S,T\cup V(\mathcal{U}))\)-separator can be done in \(\mathcal{O}(k(n+m))\) time by Lemma 9 and Observation 7. It is easy to see that the steps above can be carried out using polynomial space. \(\lhd\) This concludes the proof of Theorem 2.

## 4 Applications

As applications of Theorem 2, we derive faster algorithms for two problems studied in the literature. The first problem is formally defined as follows [17] for any finite set \(\mathcal{F}\) of undirected graphs.

Connected \(k\)-secluded \(\mathcal{F}\)-free subgraph
**Parameter:** \(k\)
**Input:** Graph \(G\), integer \(k\), weight function \(w\colon V(G)\to\mathbb{Z}_{>0}\).
**Task:** Find a connected \(k\)-secluded set \(C\subseteq V(G)\) for which \(G[C]\) is \(\mathcal{F}\)-free and which maximizes \(\sum_{v\in C}w(v)\).

A single-exponential algorithm for this problem follows easily from Corollary 3. For each fixed finite family \(\mathcal{F}\), Connected \(k\)-secluded \(\mathcal{F}\)-free subgraph can be solved in time \(2^{\mathcal{O}_{\mathcal{F}}(k)}\cdot n^{||\mathcal{F}||+\mathcal{O}(1)}\) and polynomial space. Proof. Since the weights are positive, any maximum-weight solution to the problem is seclusion-maximal with respect to being \(k\)-secluded, connected, and \(\mathcal{F}\)-free. We can therefore solve an instance \((G,k,w)\) as follows. For each vertex \(v\in V(G)\), invoke Corollary 3 to enumerate a superset of all seclusion-maximal connected \(\mathcal{F}\)-free \(k\)-secluded induced subgraphs containing \(S:=\{v\}\). For each enumerated set \(C\), check whether it is indeed \(\mathcal{F}\)-free in time \(n^{||\mathcal{F}||+\mathcal{O}(1)}\). The heaviest such set, taken over all choices of \(v\) and \(C\), is given as the output. Our second application concerns deletion problems to scattered graph classes, which are defined for finite sequences \((\Pi_{1},\ldots,\Pi_{d})\) of graph classes.

\((\Pi_{1},\ldots,\Pi_{d})\)-deletion
**Parameter:** \(k\)
**Input:** Graph \(G\) and integer \(k\).
**Question:** Is there a vertex set \(X\subseteq V(G)\) of size at most \(k\), such that for each connected component \(C\) of \(G-X\) there exists \(i\in[d]\) such that \(C\in\Pi_{i}\)?

By exploiting the fact that each connected component of \(G-X\) is \(k\)-secluded, we can obtain single-exponential FPT algorithms for this problem when each graph class \(\Pi_{i}\) is characterized by a finite number of forbidden induced subgraphs. In the following statement, both \(\mathcal{O}_{\Pi}\)'s hide factors depending on the choice of \((\Pi_{1},\ldots,\Pi_{d})\). \((\Pi_{1},\ldots,\Pi_{d})\)-deletion can be solved in time \(2^{\mathcal{O}_{\Pi}(k)}\cdot n^{\mathcal{O}_{\Pi}(1)}\) and polynomial space when each graph class \(\Pi_{i}\) is characterized by a finite set \(\mathcal{F}_{i}\) of (not necessarily connected) forbidden induced subgraphs. Proof. We describe an algorithm for the problem.
If \(k<0\), report that it is a no-instance. If there is a connected component that belongs to \(\Pi_{i}\) for some \(i\in[d]\), then delete the component and continue ([21, Reduction Rule 1]). If the graph becomes empty, return that it is a yes-instance. Otherwise, if \(k=0\), report that it is a no-instance. In the remainder we have \(k>0\) and \(G\) non-empty. Pick a vertex \(v\in V(G)\). There are two cases: \(v\) either belongs to the solution set \(X\), or belongs to a component of \(G-X\) that lies in some graph class \(\Pi_{i}\). We perform branching to cover both options. For the first option, recursively call the algorithm on \(G-v\), searching for a solution of size \(k-1\). For the second option, for each \(i\in[d]\) and \(s\in[k]\), apply Theorem 2 to enumerate (a superset of) the seclusion-maximal connected \(\mathcal{F}_{i}\)-free \(s\)-secluded subgraphs containing \(v\). Note that the theorem implies this output has at most \(c^{s}\) elements for some constant \(c\). For each of the enumerated subgraphs \(C\) such that \(G[C]\in\Pi_{i}\) and \(|N_{G}(C)|=s\), recursively call the algorithm on \(G-N_{G}[C]\), searching for a solution of size \(k-|N_{G}(C)|\). Output yes if and only if one of the recursive calls results in a yes-instance. For correctness of the algorithm, we argue that the enumeration of seclusion-maximal secluded subgraphs suffices. Suppose there is a solution \(X\) not containing \(v\) such that the component \(C\) containing \(v\) in \(G-X\) belongs to \(\Pi_{i}\). If \(C\) was among the output of the enumeration algorithm, it is easy to see that the algorithm is correct. Suppose that \(C\) was not enumerated because it is not seclusion-maximal. For this choice of \(i\) and \(s=|N_{G}(C)|\), the enumeration included some connected \(\mathcal{F}_{i}\)-free \(s\)-secluded subgraph \(C^{\prime}\) with \(C\subseteq C^{\prime}\) and \(|N_{G}(C^{\prime})|\leq|N_{G}(C)|\). Since the target graph classes are hereditary and the graph \(G-N_{G}[C]\) admits the solution \(X\setminus N_{G}(C)\) of size at most \(k-|N_{G}(C)|\), its induced subgraph \(G-N_{G}[C^{\prime}]\) admits a solution \(X^{\prime}\) of size at most \(k-|N_{G}(C^{\prime})|\). Hence, \(X^{\prime}\cup N_{G}(C^{\prime})\) is also a valid solution for \(G\) of size at most \(k\). We conclude that the branching algorithm always finds a solution if there is one. We turn to the running time. Let \(T(k)\) denote the number of leaves in the recursion tree for a call with parameter \(k\), where \(T(0)=1\). By grouping the secluded subgraphs by their neighborhood size, observe that this satisfies \(T(k)=T(k-1)+d\cdot\sum_{i=1}^{k}c^{i}\cdot T(k-i)\leq(d+1)\cdot\sum_{i=1}^{k}c^{i}\cdot T(k-i)\) (the inequality clearly holds if \(c\geq 1\)). By induction we argue that \(T(k)\leq((d+1)2c)^{k}\), which trivially holds if \(k=0\). Suppose that it holds for all values below \(k\); then we derive:

\[\begin{aligned}
T(k)&\leq(d+1)\cdot\sum_{i=1}^{k}c^{i}\cdot T(k-i) &&\text{by grouping on neighborhood size,}\\
&\leq(d+1)\cdot\sum_{i=1}^{k}c^{i}\cdot((d+1)2c)^{k-i} &&\text{by induction,}\\
&\leq((d+1)c)^{k}\cdot\sum_{i=1}^{k}2^{k-i} &&\text{using }(d+1)^{k-i}\leq(d+1)^{k-1},\\
&\leq((d+1)2c)^{k} &&\text{since }\textstyle\sum_{i=0}^{k-1}2^{i}<2^{k}.
\end{aligned}\]

Since the depth of the recursion tree is at most \(k\), the recursion tree has at most \(k\cdot((d+1)2c)^{k}\) nodes. Finally we consider the running time per node of the recursion tree. Finding the connected components can be done in \(\mathcal{O}(n+m)\) time.
Checking if one of them belongs to \(\Pi_{i}\) for some \(i\in[d]\) can be done in \(n^{\mathcal{O}_{\Pi}(1)}\) time. The time needed for the \(d\cdot k\) calls to Theorem 2 is \(dk\cdot 2^{\mathcal{O}_{\Pi}(k)}\cdot n^{\mathcal{O}_{\Pi}(1)}\). Since \(d\) and \(c\) are constants, we get the claimed running time. Note that since Theorem 2 uses polynomial space, and we process its output one at a time without storing it, we conclude that the described algorithm uses polynomial space.

## 5 Conclusion

We have introduced a new algorithmic primitive based on secluded connected subgraphs which generalizes important separators. The high-level idea behind the algorithm is _enumeration via separation_: by introducing an artificial set \(T\) and considering the more general problem of enumerating secluded subgraphs containing \(S\) but disjoint from \(T\), we can analyze the progress of the recursion in terms of the size of a minimum (left-restricted) \((S,T)\)-separator. We expect this idea to be useful in scenarios beyond the one studied here. We presented a single-exponential, polynomial-space FPT algorithm to enumerate the family of seclusion-maximal connected \(\mathcal{F}\)-free subgraphs for finite \(\mathcal{F}\), making it potentially viable for practical use [37]. The combination of single-exponential running time and polynomial space usage sets our approach apart from others such as recursive understanding [8, 10, 30] and treewidth reduction [35]. Algorithms exploiting half-integrality of the linear-programming relaxation or other discrete relaxations [12, 18, 19, 20, 40] do share these desirable properties, though. Using this approach, Iwata, Yamaguchi, and Yoshida [20] even obtained a _linear-time_ algorithm in terms of the number of vertices \(n\), solving (vertex) Multiway Cut in time \(2^{k}\cdot k\cdot(n+m)\). At a high level, there is some resemblance between their approach and ours. They work with discrete relaxations of graph deletion problems which are not standard LP-relaxations, but are based on relaxations of a _rooted_ problem in which only constraints involving a prescribed set \(S\) are active. This is reminiscent of the fact that we enumerate secluded subgraphs containing a prescribed set \(S\). Their branching algorithms are based on the notion of an extremal optimal solution to the LP relaxation, which resembles our use of the farthest minimum left-restricted \((S,T)\)-separator. However, the two approaches diverge there. To handle problems via their approach, the problems should be expressible as a \(0/1\)/ALL CSP. Problems for which the validity of a solution can be verified by unit propagation (such as Node Unique Label Cover, Node Multiway Cut, Subset and Group Feedback Vertex Set) belong to this category, but it seems impossible to express the property of being \(\mathcal{F}\)-free for arbitrary finite sets \(\mathcal{F}\) in this framework. The branching steps underlying our algorithm were informed by the structure of the subgraphs induced by certain vertex sets. In the considered setting, where certain possibly disconnected structures are not allowed to appear inside \(C\), it is necessary to characterize the forbidden sets in terms of the graph structure they induce. But when the forbidden sets are connected, we believe our proof technique can be used in a more general setting to establish the following.
For any \(n\)-vertex graph \(G\), non-empty vertex set \(S\subseteq V(G)\), potentially empty \(T\subseteq V(G)\setminus S\), integer \(k\), and collection \(F_{1},\ldots,F_{m}\subseteq V(G)\) of vertex sets of size at most \(\ell\) which are connected in \(G\), the number of \(k\)-secluded induced subgraphs \(G[C]\) which are seclusion-maximal with respect to being connected, not containing any set \(F_{i}\), and satisfying \(S\subseteq C\subseteq V(G)\setminus T\), is bounded by \((2+\ell)^{\mathcal{O}(k)}\), and a superset of them can be enumerated in time \((2+\ell)^{\mathcal{O}(k)}\cdot m\cdot n^{\mathcal{O}(1)}\) and polynomial space. The reason why dealing with general connected obstacles is feasible is that whenever a connected set \(F_{i}\) intersects \(C\) without being contained in it, it also intersects \(N(C)\); this allows us to always make progress using the simpler branching strategy without keeping track of partial forbidden graphs. The corresponding generalization for _disconnected_ vertex sets \(F_{i}\) is false, even for \(|F_{i}|=2\). To see this, consider a graph consisting of a cycle on \(2m+1\) vertices consecutively labeled \(s,a_{1},\ldots,a_{m},b_{1},\ldots,b_{m}\), with \(F_{i}=\{a_{i},b_{i}\}\) for each \(i\in[m]\), in which the number of relevant seclusion-maximal \(2\)-secluded sets containing \(s\) is \(\Omega(m)\) (a brute-force check of this construction is sketched below). We leave it to future work to consider generalizations of our ideas to _directed graphs_. Since important separators also apply in that setting, we expect the branching step in terms of left-restricted minimum separators to be applicable in directed graphs as well. However, there are multiple ways to generalize the notion of a connected secluded induced subgraph to the directed setting: one can consider weak connectivity, strong connectivity, or a rooted variant where we consider all vertices reachable from a source vertex \(x\). Similarly, one can define seclusion in terms of the number of in-neighbors, out-neighbors, or both.
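To make the cycle construction above concrete, the following small brute-force check (our own illustration, not part of the paper; the function name and encoding are ours) enumerates, for small \(m\), the connected \(2\)-secluded sets containing \(s\) that avoid every pair \(F_{i}=\{a_{i},b_{i}\}\), and counts those that are seclusion-maximal in the sense used above (no feasible proper superset whose neighborhood is at most as large).

```python
from itertools import combinations

def count_seclusion_maximal(m):
    """Brute-force the cycle example: vertices 0..2m in cyclic order,
    with 0 = s, i = a_i and m+i = b_i for i in 1..m; the forbidden
    pairs are F_i = {a_i, b_i}."""
    n = 2 * m + 1
    nbrs = [{(v - 1) % n, (v + 1) % n} for v in range(n)]
    pairs = [(i, m + i) for i in range(1, m + 1)]

    def nbh(C):  # open neighborhood N(C)
        return {u for v in C for u in nbrs[v]} - C

    def connected(C):  # BFS from s = 0, which every candidate contains
        seen, stack = {0}, [0]
        while stack:
            for u in (nbrs[stack.pop()] & C) - seen:
                seen.add(u)
                stack.append(u)
        return seen == C

    def feasible(C):  # connected, 2-secluded, and no pair fully inside C
        return (connected(C) and len(nbh(C)) <= 2
                and all(not (a in C and b in C) for a, b in pairs))

    candidates = [frozenset(rest) | {0}
                  for r in range(n)
                  for rest in combinations(range(1, n), r)]
    good = [C for C in candidates if feasible(C)]
    # seclusion-maximal: no feasible proper superset with a neighborhood
    # of at most the same size
    return sum(1 for C in good
               if not any(C < D and len(nbh(D)) <= len(nbh(C)) for D in good))

print(count_seclusion_maximal(4))  # small m only: checks all 2^(2m) subsets
```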
2309.10078
Simple and Optimal Online Contention Resolution Schemes for $k$-Uniform Matroids
We provide a simple $(1-O(\frac{1}{\sqrt{k}}))$-selectable Online Contention Resolution Scheme for $k$-uniform matroids against a fixed-order adversary. If $A_i$ and $G_i$ denote the set of selected elements and the set of realized active elements among the first $i$ (respectively), our algorithm selects with probability $1-\frac{1}{\sqrt{k}}$ any active element $i$ such that $|A_{i-1}| + 1 \leq (1-\frac{1}{\sqrt{k}})\cdot \mathbb{E}[|G_i|]+\sqrt{k}$. This implies a $(1-O(\frac{1}{\sqrt{k}}))$ prophet inequality against fixed-order adversaries for $k$-uniform matroids that is considerably simpler than previous algorithms [Ala14, AKW14, JMZ22]. We also prove that no OCRS can be $(1-\Omega(\sqrt{\frac{\log k}{k}}))$-selectable for $k$-uniform matroids against an almighty adversary. This guarantee is matched by the (known) simple greedy algorithm that accepts every active element with probability $1-\Theta(\sqrt{\frac{\log k}{k}})$ [HKS07].
Atanas Dinev, S. Matthew Weinberg
2023-09-18T18:46:27Z
http://arxiv.org/abs/2309.10078v2
# Simple and Optimal Online Contention Resolution Schemes for \(k\)-Uniform Matroids

###### Abstract

We provide a simple \((1-O(\frac{1}{\sqrt{k}}))\)-selectable Online Contention Resolution Scheme for \(k\)-uniform matroids against a fixed-order adversary. If \(A_{i}\) and \(G_{i}\) denote the set of selected elements and the set of realized active elements among the first \(i\) (respectively), our algorithm selects with probability \(1-\frac{1}{\sqrt{k}}\) any active element \(i\) such that \(|A_{i-1}|+1\leq(1-\frac{1}{\sqrt{k}})\cdot\mathbb{E}[|G_{i}|]+\sqrt{k}\). This implies a \((1-O(\frac{1}{\sqrt{k}}))\) prophet inequality against fixed-order adversaries for \(k\)-uniform matroids that is considerably simpler than previous algorithms [Ala14, AKW14, JMZ22]. We also prove that no OCRS can be \((1-\Omega(\sqrt{\frac{\log k}{k}}))\)-selectable for \(k\)-uniform matroids against an almighty adversary. This guarantee is matched by the (known) simple greedy algorithm that accepts every active element with probability \(1-\Theta(\sqrt{\frac{\log k}{k}})\) [HKS07].

## 1 Introduction

**Background: OCRSs.** Online contention resolution schemes (OCRS) are a broadly applicable rounding technique for online selection problems [10]. These are problems in which an algorithm makes irrevocable decisions for whether to select elements arriving online, often subject to combinatorial constraints. Offline, the algorithm knows a distribution over which elements will be "active." Online, elements are revealed to be active or inactive one at a time, and the algorithm must immediately and irrevocably decide whether to accept an active element (inactive elements must be rejected). There are feasibility constraints \(\mathscr{F}\) on which elements can be simultaneously accepted. An OCRS for a class of instances is \(c\)-selectable if it guarantees that every element is selected with probability at least \(c\), conditioned on being active. See Definition 2.1 for a formal definition. Online contention resolution schemes have direct applications to prophet inequalities [10]. In a prophet inequality, a gambler knows the distribution of a sequence of independent random variables \(X_{1},\ldots,X_{n}\). Online, each random variable will be sampled one at a time and revealed to the gambler, who immediately and irrevocably decides whether to accept the element, subject to feasibility constraints \(\mathscr{F}\). The gambler's goal is to maximize the expected sum of weights of accepted elements, and a prophet inequality compares the ratio of the gambler's expected performance to that of a prophet (who knows all random variables before making decisions). Seminal work of Krengel, Sucheston, and Garling establishes a tight \(1/2\)-approximation for the single-choice prophet inequality, and seminal work of Samuel-Cahn shows that the same result can be achieved with an especially simple thresholding algorithm [10, 22]. In their work introducing OCRSs, Feldman, Svensson, and Zenklusen prove that a \(c\)-selectable OCRS implies a \(c\)-approximation for the corresponding prophet inequality [10]. In fact, a \(c\)-selectable OCRS provides a \(c\)-approximation even to the ex ante relaxation, and Lee and Singla show that OCRSs are _equivalent_ to ex ante prophet inequalities [11]. **\(k\)-Uniform Matroids.** \(k\)-uniform matroids are a canonical set of feasibility constraints: any set of up to \(k\) elements can be selected.
Here, a \((1-O(\sqrt{\frac{\log k}{k}}))\)-approximate prophet inequality (whose analysis implicitly extends to a \((1-O(\sqrt{\frac{\log k}{k}}))\)-selectable OCRS) is first developed in [11]. [1] later develops a \((1-O(\frac{1}{\sqrt{k}}))\)-approximation (which also implies a \((1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS), which is tight. [1] further shows how to achieve the same \((1-O(\frac{1}{\sqrt{k}}))\)-approximation using a single sample from each distribution (but their work does not imply any OCRS). [12] tightens the analysis of [1] to nail down exactly the optimal achievable prophet inequality for all \(k\) (and this same analysis applies to the implied OCRS). We overview these results in more detail in Section 1.1, and in particular clarify against what kind of adversary (who selects the order in which the elements are revealed) the guarantees hold. While the optimal competitive ratio has been known for a decade, and recently tightened to even nail down the precise constants, these algorithms are significantly more complex than Samuel-Cahn's elegant algorithm for the single-choice prophet inequality. For example, [1, 12] both require solving and analyzing a mathematical program in order to accept elements with precisely the correct probability. The rehearsal algorithm of [1] is perhaps simpler, but still requires several lines of pseudocode, and some care with minor details. **Main Result: A Simple, Optimal OCRS.** Our main result is a significantly simpler OCRS/prophet inequality for \(k\)-uniform matroids that still achieves the optimal guarantee of \(1-O(\frac{1}{\sqrt{k}})\). Of course, it is still not nearly as simple as Samuel-Cahn's single-choice prophet inequality, but a full description fits in two sentences, and the complete analysis is just a few pages.1 Our OCRS simply denotes by \(A_{i}\) the set of elements it has selected amongst the first \(i\) and by \(G_{i}\) the set of realized active elements amongst the first \(i\). Then, when processing element \(i\), we select \(i\) with probability \(1-\frac{1}{\sqrt{k}}\) if and only if \(i\) is active and \(|A_{i-1}|+1\leq(1-\frac{1}{\sqrt{k}})\cdot\mathbb{E}[|G_{i}|]+\sqrt{k}\) (otherwise, we discard). Intuitively, our OCRS selects an element if, so far, the number of selected elements does not exceed the expected number of active elements by too much. To turn our OCRS into a prophet inequality, simply let \(T\) denote the unique value such that \(\sum_{i}\Pr[X_{i}>T]=k\).2 Then, declare \(X_{i}\) to be active if and only if \(X_{i}>T\) and plug this into our OCRS. Compared to prior optimal algorithms for the same setting, our algorithm has the advantage that it is very simple to implement, since it does not require solving a complicated dynamic/linear program (see Section 1.1). We state our algorithm precisely, and prove that it is \((1-O(\frac{1}{\sqrt{k}}))\)-selectable against a fixed-order adversary in Section 4.3

Footnote 2: If the distributions have point-masses, smooth them out by adding a uniformly random draw from \([0,\varepsilon]\) for arbitrarily small \(\varepsilon\).

Footnote 3: A fixed-order adversary sets the order to reveal the elements offline, and based only on the distributions.
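For concreteness, here is a minimal Python sketch (our own illustration) of the two-sentence OCRS and the threshold rule just described. The names `ocrs_select` and `find_threshold` are ours; the sketch assumes the marginal probabilities \(x_{i}\) are known and, for the threshold, that the distributions are continuous and given by their survival functions.

```python
import random

def ocrs_select(xs, active, k):
    """Run the OCRS on elements in the given (fixed) order.

    xs[i]     : Pr[element i is active], with sum(xs) <= k
    active[i] : realized activity of element i, revealed online
    Selects each arriving active element with probability 1 - 1/sqrt(k),
    provided the selection counter stays below
    (1 - 1/sqrt(k)) * E[|G_i|] + sqrt(k); this cap also guarantees that
    at most k elements are ever selected when sum(xs) <= k.
    """
    d = k ** 0.5
    selected = []
    expected_active = 0.0  # E[|G_i|] = sum of x_j over the revealed prefix
    for i, (x, is_active) in enumerate(zip(xs, active)):
        expected_active += x
        if (is_active
                and len(selected) + 1 <= (1 - 1 / d) * expected_active + d
                and random.random() < 1 - 1 / d):
            selected.append(i)
    return selected

def find_threshold(survivals, k, lo=0.0, hi=1e9, iters=60):
    """Bisect for T with sum_i Pr[X_i > T] = k (continuous distributions).

    survivals[i](t) = Pr[X_i > t].  For a prophet inequality, declare
    X_i active iff X_i > T and feed the outcome to ocrs_select.
    """
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(sf(mid) for sf in survivals) > k:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```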
**An Impossibility for OCRS against almighty adversary.** While the fixed-order adversary is standard in the prophet inequality literature, it is also important to explore the extent to which these same guarantees can hold against an almighty adversary.4 For example, the prophet inequality of [1] holds against an almighty adversary, but does not imply an OCRS. Our second result shows that this is for good reason: no OCRS can guarantee a selectability better than \(1-\Omega(\sqrt{\frac{\log k}{k}})\) against an almighty adversary. We state and prove this in Section 5.

Footnote 4: An almighty adversary sets the order to reveal online, and with full knowledge of all random variables and all past decisions of the algorithm.

### Detailed Discussion of Related Work

As previously referenced, prior to our work it is already known that the optimal selectability for OCRS and the optimal prophet inequality against a fixed-order adversary is \(1-\Theta(\frac{1}{\sqrt{k}})\) ([11] proves the impossibility, and [1] designs the first algorithm matching it). [1] designs a prophet inequality that achieves the same \((1-O(\frac{1}{\sqrt{k}}))\)-approximation against an almighty adversary, but this does not imply an OCRS. The analysis in [11] implies an extremely simple OCRS (accept every active element with probability \(1-\Theta(\sqrt{\frac{\log k}{k}})\)) that is \((1-\Theta(\sqrt{\frac{\log k}{k}}))\)-selectable against an almighty adversary. We show that this is the best possible guarantee (Theorem 5.1). Because we view our main result as a simpler algorithm achieving (asymptotically) the same guarantees as prior work, we now overview these works in greater detail. **The \(\gamma\)-Conservative magician in [1].** As previously mentioned, [1] implies an optimal \((1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS against a fixed-order adversary. [1, Definition 2] describes a \(\gamma\)-Conservative Magician, which is an algorithm that adaptively computes thresholds \(\theta_{i}\) and accepts an active element on step \(i\) if and only if the number \(W_{i}\) of selected elements (or broken wands, in the terminology of [1]) in steps \(1,\ldots,i-1\) is less than \(\theta_{i}\). The cumulative distribution function of \(W_{i}\) is computed adaptively at every step through a dynamic programming equation. Once the CDF of \(W_{i}\) has been computed, \(\theta_{i}\) is chosen so that the ex-ante probability that \(W_{i}\leq\theta_{i}\) is at least \(\gamma\). [1] shows that one can choose \(\gamma=1-\frac{1}{\sqrt{k+3}}\) and \(\theta_{i}\leq k\) for all \(i\), which effectively guarantees an OCRS against a fixed-order adversary that is \((1-\frac{1}{\sqrt{k+3}})\)-selectable [1, Theorem 4]. In comparison to [1], our main result achieves the same asymptotic guarantee, but is considerably simpler (in particular, the analysis requires minimal calculations, and there is no dynamic program). **Characterization of the optimal OCRS and prophet inequality for \(k\)-uniform matroids in [1].** [1] studies the optimal OCRS for \(k\)-uniform matroids (i.e. with optimal selection probability \(c\)). They characterize the optimal OCRS for \(k\)-uniform matroids as the solution to a linear program. Then, using a differential equation, they show that this optimal solution corresponds to a \(\gamma_{k}^{*}\)-Conservative magician, where \(\gamma_{k}^{*}>1-\frac{1}{\sqrt{k+3}}\). [13] extend their OCRS guarantees against an online adversary.5
Footnote 5: An online adversary adaptively decides which element to reveal next, based on which elements were active. The online adversary does not know the status of the unrevealed elements (i.e. it has the same information as the algorithm).

In comparison to [13], our main result achieves the same \(1-O(\frac{1}{\sqrt{k}})\) asymptotic guarantee against a fixed-order adversary, but is, again, considerably simpler and does not require solving any mathematical program. Although our analysis holds against the weaker fixed-order adversary, this assumption is sufficient for many popular applications of OCRS and prophet inequalities in online stochastic optimization. **The Rehearsal algorithm in [1].** As previously discussed, [1] gives an optimal \((1-O(\frac{1}{\sqrt{k}}))\)-approximation prophet inequality against an almighty adversary, and even when knowing only a single sample from each distribution, using the _rehearsal algorithm_. Their rehearsal algorithm takes a sample from each distribution, stores the \(k-2\sqrt{k}\) highest samples as thresholds \(T_{1},\ldots,T_{k-2\sqrt{k}}\), and then sets \(T_{i}:=T_{k-2\sqrt{k}}\) for all \(i\in[k-2\sqrt{k},k]\). When processing the online element \(X_{e}\), \(e\) is accepted if and only if there is an unfilled slot \(i\) with \(X_{e}>T_{i}\). If \(X_{e}\) is accepted, it fills the highest such slot (the slot with the highest threshold). Their analysis does not imply an OCRS (indeed, it is not even clear what it would mean to set the thresholds \(T_{i}\) in an OCRS). But their analysis does hold against an almighty adversary. In comparison, our prophet inequality is simpler, and implies an OCRS. But our algorithm requires some knowledge of the distributions, rather than just a single sample.6

Footnote 6: Turning our OCRS into a prophet inequality requires a value \(T\) such that \(\sum_{i}\Pr[X_{i}>T]\approx k\), and an accurate estimate of \(\sum_{j\leq i}\Pr[X_{j}>T]\) for all \(i\). Estimates up to an additive \(\sqrt{k}\) with high probability suffice. This can certainly be achieved with polynomially-many samples, but not a single sample.

Our analyses have similar flavors: both works relate the performance of their algorithms to a random walk. These random walks are quite different (for example, the random walk in [1] is correlated, while ours is not, and ours has non-integral step sizes, while theirs does not), and they are used to analyze different algorithms. While there are some coincidental similarities (for example, our Lemma A.2 is a generalization of their Lemma 10), the core of our proof is simply connecting our algorithm to a random walk, whereas the bulk of their proof is coping with the correlation in their random walk and the associated calculations. **Other Related Work.** There is substantial additional work on both prophet inequalities and online contention resolution schemes, subject to various other constraints [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Aside from this, there is not much technical overlap with these works (in particular, a substantial fraction of these works consider richer feasibility constraints, and therefore achieve constant-factor approximations rather than approximations approaching \(1\)). Other works have also considered a special class of static threshold policies for \(k\)-unit prophet inequalities, which set a single threshold and accept any element that exceeds it subject to the feasibility constraint.
[10] provides a prophet inequality with a static threshold which is a \(1-O(\sqrt{\frac{\log(k)}{k}})\) approximation. [1] proposes a different static threshold prophet inequality with the same \(1-O(\sqrt{\frac{\log(k)}{k}})\) asymptotic guarantee, improving the approximation for small \(k\). [13] uses a mathematical programming approach to show that the policy in [1] is worst-case optimal among all static threshold policies. In contrast to these works, we study the design and limitations of the richer class of adaptive strategies in the more general setting of OCRS. In the context of _offline_ contention resolution schemes (introduced in [13]), [14] shows a simple optimal contention resolution scheme for \(k\)-uniform matroids, which is \((1-\binom{n}{k}(1-\frac{k}{n})^{n+1-k}(\frac{k}{n})^{k})\)-selectable. Their method does not extend to the online case because it requires knowing the set of active elements \(A\) in advance.

### Roadmap

Section 2 follows with preliminaries and definitions. Section 3 is a warmup that rules out optimal selectability via _extremely_ simple greedy algorithms. Section 4 presents a complete proof of our new \((1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS. Section 5 contains a proof of the \(1-\Omega(\sqrt{\frac{\log(k)}{k}})\) upper bound on the probability of selection of any OCRS against almighty adversaries. Section 6 concludes.

## 2 Preliminaries

### Online contention resolution schemes

Online contention resolution schemes were first introduced by [10] as a broadly applicable online rounding framework. Suppose we are given a finite ground set of elements \(N=\{e_{1},\ldots,e_{n}\}\). Consider a family of feasible sets \(\mathscr{F}\subseteq 2^{N}\) which is downwards-closed (that is, if \(I\in\mathscr{F}\) and \(J\subseteq I\), then \(J\in\mathscr{F}\)). Let \[P_{\mathscr{F}}=\text{conv}(\mathbbm{1}_{I}\mid I\in\mathscr{F})\subseteq[0,1]^{n}\] be the convex hull of the characteristic vectors of all sets in \(\mathscr{F}\). We will refer to \(P_{\mathscr{F}}\) as the polytope corresponding to the family \(\mathscr{F}\). **Definition 2.1** (Online contention resolution scheme (OCRS)): _Consider the following online selection setting. A point \(x\in P_{\mathscr{F}}\) is given and let \(R(x)\) be a random subset of active elements, where element \(e_{i}\) is active with probability \(x_{i}\). The elements \(e\in N\) reveal one by one whether they are active, i.e. \(e\in R(x)\), and the decision of the algorithm whether to select an active element is taken irrevocably before the next element is revealed. An OCRS for \(P_{\mathscr{F}}\) is an online algorithm that selects a subset \(I\subseteq R(x)\) such that \(\mathbbm{1}_{I}\in P_{\mathscr{F}}\)._ Many of the natural OCRSs considered in [10] are also greedy. **Definition 2.2** (Greedy OCRS): _A greedy OCRS \(\pi\) for \(P_{\mathscr{F}}\) is an OCRS that, for any \(x\in P_{\mathscr{F}}\), defines a downwards-closed subfamily of feasible sets \(\mathscr{F}_{x}\subseteq\mathscr{F}\); an element \(e\) is selected when it arrives if, together with the already selected elements, the obtained set is in \(\mathscr{F}_{x}\)._ Our next goal is to define the notion of \(c\)-selectability, which measures the performance of an OCRS. Intuitively, an OCRS is \(c\)-selectable if for any element \(e\) the probability that the OCRS selects \(e\), given that it is active, is at least \(c\), where we desire \(c\) to be as large as possible.
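As an illustration of this notion (ours, not from the paper), selectability against a fixed-order adversary can be estimated empirically by Monte-Carlo simulation; `estimate_selectability` below is a hypothetical helper that takes any OCRS implementation as a callable.

```python
import random

def estimate_selectability(run_ocrs, xs, trials=100_000):
    """Estimate min_i Pr[e_i selected | e_i active] for a fixed element order.

    run_ocrs(xs, active) must return the collection of selected indices.
    """
    sel = [0] * len(xs)
    act = [0] * len(xs)
    for _ in range(trials):
        active = [random.random() < x for x in xs]
        chosen = set(run_ocrs(xs, active))
        for i, a in enumerate(active):
            if a:
                act[i] += 1
                sel[i] += i in chosen
    return min(s / a for s, a in zip(sel, act) if a > 0)
```

For instance, combined with the `ocrs_select` sketch from the introduction, `estimate_selectability(lambda xs, a: ocrs_select(xs, a, k), [k / n] * n)` should approach the \(1-O(\frac{1}{\sqrt{k}})\) guarantee of Corollary 4.7 as the number of trials grows.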
In order to talk about \(c\)-selectability in a rigorous way, we need to specify the power of the adversary that chooses the order in which the elements are revealed to the OCRS in an online fashion (Definition 2.1). There are three main types of adversaries considered in prior work, which we define below. **Definition 2.3**: _(Strength of adversary) In the setting of Definition 2.1, there is an underlying adversary which can choose the order in which the elements are revealed to the OCRS. We define three different types of adversaries:_

_(i) **Offline/Fixed-Order** adversary, which chooses the order of the elements upfront, before any are revealed. Such an adversary knows \(x\) and the distribution of \(R(x)\), but not the realized active elements._

_(ii) **Online** adversary, which adaptively chooses the next element to reveal using the same information available to the algorithm (\(x\), the distribution of \(R(x)\), and which elements have been revealed, which were active, and which were selected)._

_(iii) **Almighty** adversary, which knows upfront the outcomes of all random events, including the realization of \(R(x)\) and the outcomes of the random bits that the OCRS might query._

We are now ready to define the notion of \(c\)-selectability. **Definition 2.4**: _(\(c\)-selectability) Let \(c\in[0,1]\). An OCRS for \(P\) is \(c\)-selectable against an adversary \(\mathcal{A}\) if for any \(x\in P\) and \(e\in N\), we have_ \[\Pr[\text{$e$ is selected by the OCRS against $\mathcal{A}$}\mid\text{$e$ is active}]\geq c.\] It is often the case that a larger probability of selection \(c\) can be achieved when \(x\) is restricted to a down-scaled version of \(P\). **Definition 2.5**: _(\((b,c)\)-selectability) Let \(b,c\in[0,1]\). An OCRS for \(P\) is \((b,c)\)-selectable against an adversary \(\mathcal{A}\) if for any \(x\in b\cdot P\) and \(e\in N\), we have_ \[\Pr[\text{$e$ is selected by the OCRS against $\mathcal{A}$}\mid\text{$e$ is active}]\geq c.\] An important observation is that a \((b,c)\)-selectable OCRS for \(P\) against \(\mathcal{A}\) implies a \(bc\)-selectable OCRS for \(P\) against \(\mathcal{A}\). **Observation 2.6**: _([11]) A \((b,c)\)-selectable OCRS for \(P\) implies a \(bc\)-selectable OCRS for \(P\)._ The reduction in Observation 2.6 is as follows: the \(bc\)-selectable OCRS runs the given \((b,c)\)-selectable OCRS while scaling down each of the probabilities \(x_{i}\) by \(b\) online (i.e. each active element is treated as active independently with probability \(b\)). For more details see [11]. **Remark 2.7**: _It is important to emphasize that a \(c\)-selectable OCRS for \(P\) gives selection guarantees for all \(x\in P\), while a \((b,c)\)-selectable OCRS for \(P\) gives guarantees only when \(x\in b\cdot P\) (i.e. a scaled-down version of \(P\))._

### OCRS and Prophet inequalities

As previously discussed, one of the many applications of OCRS is to the prophet inequality problem. Here we define the general setting of the prophet inequality problem. We begin with a setup for the environment. **General setting and prophet:** We are given a ground set \(N=\{e_{1},\ldots,e_{n}\}\) and a downwards-closed family \(\mathscr{F}\subseteq 2^{N}\) of feasible subsets. Each of the elements \(e_{i}\) is associated with a value \(v_{i}\). A prophet is an offline algorithm which sees the vector \((v_{1},\ldots,v_{n})\) and outputs the feasible set \(\text{MAX}(v)=\text{argmax}_{I\in\mathscr{F}}\sum_{i\in I}v_{i}\).
We denote by \(\text{OPT}(v)=\sum_{i\in\text{MAX}(v)}v_{i}\) the weight of the maximum-weight feasible set. **Definition 2.8**: _(Prophet inequality) Suppose we are given a downwards-closed feasibility constraint \(\mathscr{F}\subseteq 2^{N}\). Suppose each element \(e_{i}\in N\) takes a value \(v_{i}\in\mathbb{R}_{\geq 0}\) independently from some known distribution \(\mathscr{D}_{i}\). These values are presented one-by-one to an online algorithm \(\pi\) in an adversarial order (again specified as offline, online, or almighty). On seeing a value, the algorithm needs to immediately and irrevocably decide whether to select the current element \(e_{i}\), while always maintaining that the set of selected elements so far is in \(\mathscr{F}\). Let us denote by \(A^{*}(v)\) the set of elements selected by \(\pi\). We say that \(\pi\) induces a prophet inequality with competitive ratio \(c\) for \(\mathscr{F}\) if_ \[\mathbb{E}_{v\sim\mathscr{D}}\Big[\sum_{i\in A^{*}(v)}v_{i}\Big]\geq c\cdot\mathbb{E}_{v\sim\mathscr{D}}[\text{OPT}(v)],\] _where \(\mathscr{D}=\mathscr{D}_{1}\times\ldots\times\mathscr{D}_{n}\) is the product of the independent distributions \(\mathscr{D}_{i}\)._ As with OCRS, in order to talk about a \(c\)-approximation, we need to specify the power of the adversary which specifies the order of the elements revealed. Completely analogously to Definition 2.3, we can have _offline_, _online_, and _almighty_ adversaries. [11] showed that a \(c\)-selectable OCRS against a particular adversary \(\mathcal{A}\) implies a \(c\)-approximation prophet inequality against an adversary of the same strength. **Theorem 2.9** ([16]): _A \(c\)-selectable OCRS against an (offline/online/almighty) adversary implies the existence of a \(c\)-approximation prophet inequality algorithm._ In the classical prophet inequality formulation [16], the value of the online algorithm \(\pi\) is compared directly to the offline optimum. [17] consider an ex-ante prophet inequality, where the value of \(\pi\) is compared to the optimal value of a convex relaxation, which upper bounds the offline optimum. [17] show that this stronger notion of an ex-ante prophet inequality is equivalent to an OCRS.

### \(k\)-uniform matroids

In this section, we give a definition of \(k\)-uniform matroids, which form the feasibility constraint that we will use throughout the paper. Given a ground set \(N=\{e_{1},\ldots,e_{n}\}\), the \(k\)-uniform matroid is the matroid consisting of all subsets of \(N\) of size at most \(k\). **Definition 2.10**: _(\(k\)-uniform matroid) The \(k\)-uniform matroid for \(N\) is \(M_{k}=(N,\mathscr{F}_{k})\), where_ \[\mathscr{F}_{k}=\{S\subseteq N:|S|\leq k\},\] _and the corresponding polytope of \(\mathscr{F}_{k}\) is given by_ \[P_{k}=\Big\{x\in\mathbb{R}_{\geq 0}^{n}:\sum_{i=1}^{n}x_{i}\leq k\Big\}.\] We remind the reader of prior work on OCRSs and prophet inequalities for \(k\)-uniform matroids below.
**Theorem 2.11** ([1, 1, 1, 1]): _The following is known, prior to our work, on OCRSs and prophet inequalities for \(k\)-uniform matroids:_

* _Against a fixed-order/online adversary, the best prophet inequalities and OCRSs for \(\mathscr{F}_{k}\) achieve a guarantee of \(1-\Theta(\frac{1}{\sqrt{k}})\) (lower bound: [1], algorithm: [1, 1])._
* _Against an almighty adversary, the best prophet inequalities for \(\mathscr{F}_{k}\) achieve a guarantee of \(1-\Theta(\frac{1}{\sqrt{k}})\) (lower bound: [1], algorithm: [1])._
* _Against an almighty adversary, the best-known OCRS achieves a guarantee of \(1-\Theta(\sqrt{\frac{\log k}{k}})\) (implicit in [1])._

## 3 Warmup: Naive approaches towards an OCRS

The goal of this section is to explore a few exceptionally simple algorithms that one might try to use to construct an optimal OCRS for \(k\)-uniform matroids. We will present results about whether optimal factors are possible against adversaries of varying strengths. In particular, we will show that one cannot achieve a \((1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS by using a very simple greedy algorithm or a tweaked variant of it utilizing a partition matroid. We first consider a naive greedy OCRS, which greedily selects active elements until it has selected \(k\) elements. Formally, in the language of Definition 2.2, for this OCRS we have \(\mathscr{F}=\{S\subseteq N:|S|\leq k\}\) and \(\mathscr{F}_{x}=\mathscr{F}\) for all \(x\). We quickly establish that the naive greedy OCRS is not \((b,c)\)-selectable even against an offline/fixed-order adversary for any \(b,c\) satisfying \(bc=1-O(\frac{1}{\sqrt{k}})\). This rules out constructing exceptionally simple OCRSs via Observation 2.6. **Theorem 3.1**: _There are no \(b,c\), satisfying \(bc=1-O(\frac{1}{\sqrt{k}})\), such that the naive greedy OCRS is \((b,c)\)-selectable against the offline adversary._ **Proof.** See Appendix A.1 for a proof. Our second result is that even if we complicate the naive greedy OCRS slightly, it does not yield an optimal factor. Suppose instead of using the \(k\)-uniform matroid, we use a partition matroid, and the algorithm is to greedily select active elements as long as the set of selected elements lies in the partition matroid. Formally, in the language of Definition 2.2, the feasibility family \(\mathscr{F}_{x}\) is given by a partition matroid (which could depend on \(x\)). For a given \(x\), the partition matroid is of the form \(\{(n_{i},k_{i},S_{i})\}_{i=1}^{s}\), where \(\sum_{i=1}^{s}k_{i}=k\), \(\sum_{i=1}^{s}n_{i}=n\), and the sets \(S_{i}\) are pairwise disjoint and satisfy \(\cup_{i=1}^{s}S_{i}=N\) and \(|S_{i}|=n_{i}\). Here we can select at most \(k_{i}\) elements from \(S_{i}\). We next prove that such a scheme is not \((b,c)\)-selectable even against the _offline_ adversary for any \(b,c\) satisfying \(bc=1-O(\frac{1}{\sqrt{k}})\). **Theorem 3.2**: _There are no \(b,c\), satisfying \(bc=1-O(\frac{1}{\sqrt{k}})\), such that the naive greedy OCRS with a partition matroid is \((b,c)\)-selectable against the offline adversary._ **Proof.** See Appendix A.1 for a proof. **Remark 3.3**: _Theorems 3.1 and 3.2 say that one cannot use the transformation in Observation 2.6 on a naive greedy OCRS to obtain a \((1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS.
This rules out some exceptionally simple ways to construct a \((1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS for \(k\)-uniform matroids._ **Remark 3.4**: _Intuitively, the above variations of the naive greedy scheme fail to be optimal because they tend to select too many elements early in the process in comparison to the expected number of active elements so far. Thus, they are likely to run out of space by the time they reach the last elements. Our main algorithm in Section 4 attempts to counter this._ We conclude by reminding the reader that the naive greedy OCRS is \((b,c)\)-selectable against the almighty adversary for some \(b,c\) satisfying \(bc=1-O(\sqrt{\frac{\log(k)}{k}})\). The proof is implicit in the analysis of the \((1-O(\sqrt{\frac{\log(k)}{k}}))\)-approximate prophet inequality of [10]. **Theorem 3.5** (Implicit in [10]): _The naive greedy OCRS is \((1-\sqrt{\frac{2\log(k)}{k}},1-\frac{1}{k})\)-selectable against the almighty adversary. By the transformation in Observation 2.6 this implies a \((1-O(\sqrt{\frac{\log(k)}{k}}))\)-selectable OCRS for \(k\)-uniform matroids._ **Proof.** We remind the reader of the simple proof in Appendix A.1.

## 4 A simple optimal OCRS for \(k\)-uniform matroids

The goal of this section is to give a new \((1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS against offline/fixed-order adversaries. Because the adversary must commit to an ordering using just knowledge of \(x\) and the distribution of \(R(x)\), we let \(e_{1},e_{2},\ldots,e_{n}\) refer to the elements that are revealed, in order. Note that the events \(e_{i}\in R(x)\) are independent. We will show that the following algorithm is a \((1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS for \(k\)-uniform matroids.

**OCRS(\(x\))**
1. Initialize the set of selected elements \(A_{0}=\emptyset\).
2. For \(i=1,\ldots,n\) do:
   1. If \(e_{i}\) is active and \(|A_{i-1}|+1\leq(1-\frac{1}{\sqrt{k}})(\sum_{j\leq i}x_{j})+\sqrt{k}\), then select \(e_{i}\) with probability \(1-\frac{1}{\sqrt{k}}\) (i.e. \(A_{i}=A_{i-1}\cup\{e_{i}\}\)) and otherwise discard it (i.e. \(A_{i}=A_{i-1}\)).
   2. If \(|A_{i-1}|+1>(1-\frac{1}{\sqrt{k}})(\sum_{j\leq i}x_{j})+\sqrt{k}\) or \(e_{i}\) is inactive, then discard \(e_{i}\) (i.e. \(A_{i}=A_{i-1}\)).

Observe that **OCRS** is derived from an even simpler \((1-\frac{1}{\sqrt{k}},1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS by using the reduction of Observation 2.6. We clearly state this simpler algorithm below (we state it parameterized by \(d\), as our entire analysis goes through for general \(d\) and is then optimized for \(d:=\sqrt{k}\) at the very end).

**Algorithm(\(d,x\))**
1. Initialize the set of selected elements \(B_{0}=\emptyset\).
2. For \(i=1,\ldots,n\) do:
   1. If \(e_{i}\) is active and \(|B_{i-1}|+1\leq\sum_{j\leq i}x_{j}+d\), then select \(e_{i}\), and otherwise discard it.
   2. If \(|B_{i-1}|+1>\sum_{j\leq i}x_{j}+d\) or \(e_{i}\) is inactive, then discard \(e_{i}\).

**Observation 4.1**: _If **Algorithm(\(\sqrt{k},x\))** is a \((1-\frac{1}{\sqrt{k}},1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS for \(P_{k}\), then **OCRS(\(x\))** is a \((1-O(\frac{1}{\sqrt{k}}))\)-selectable OCRS for \(P_{k}\)._ **Proof.** **OCRS(\(x\))** is exactly the result of applying the [10] reduction of Observation 2.6 to **Algorithm**(\(\sqrt{k},x\)), with \(b=1-\frac{1}{\sqrt{k}}\) and \(c=1-O(\frac{1}{\sqrt{k}})\). Therefore, by Observation 2.6, **OCRS(\(x\))** is \((1-\frac{1}{\sqrt{k}})\cdot(1-O(\frac{1}{\sqrt{k}}))\)-selectable (i.e.
\((1-O(\frac{1}{\sqrt{k}}))\)-selectable) whenever **Algorithm**(\(\sqrt{k},x\)) is \((1-\frac{1}{\sqrt{k}},1-O(\frac{1}{\sqrt{k}}))\)-selectable. In line with Remark 2.7, we emphasize that **Algorithm**(\(\sqrt{k},x\)) operates in a universe where the probabilities \(x_{i}\) are scaled down by \(b=1-\frac{1}{\sqrt{k}}\), while **OCRS(\(x\))** operates with the original probabilities \(x_{i}\). Our key proposition is that **Algorithm**(\(d\),\(x\)) is indeed sufficiently selectable. **Proposition 4.2**: _Algorithm(\(d\),\(x\)) is \((1-\frac{d}{k},1-\frac{2}{d-1})\)-selectable over \(P_{k}\). That is, for all \(x\in(1-\frac{d}{k})\cdot P_{k}\), **Algorithm(\(d\),\(x\))** is \((1-\frac{2}{d-1})\)-selectable._ The proof of Proposition 4.2 proceeds in two steps. The first (shorter) step is to guarantee that **Algorithm**(\(d,x\)) always selects at most \(k\) elements. The second is to show that every element is accepted with sufficient probability. **Observation 4.3**: _For all \(x\in(1-\frac{d}{k})\cdot P_{k}\), **Algorithm**(\(d,x\)) accepts at most \(k\) elements._ **Proof.** Observe that, at all times, \(|B_{i}|\leq\sum_{j\leq i}x_{j}+d\). As \(\sum_{j}x_{j}\leq k\cdot(1-\frac{d}{k})=k-d\), this implies that \(|B_{n}|\leq k\), and the algorithm accepts at most \(k\) total elements. We now proceed to prove that the algorithm is sufficiently selectable. For this part of the analysis, it will be convenient to consider the process \(S_{i}:=|B_{i}|-\sum_{j\leq i}x_{j}\). Observe that \(S_{i}\) has the following dynamics. First, \(S_{0}=0\). Further if \(S_{i-1}+1-x_{i}\leq d\) (i.e. have "space" to accept \(e_{i}\)), then \[S_{i}:=\left\{\begin{array}{ll}S_{i-1}+1-x_{i}&\mbox{if $e_{i}$ is active (with probability $x_{i}$)}\\ S_{i-1}-x_{i}&\mbox{if $e_{i}$ is inactive (with probability $1-x_{i}$)}\end{array}\right.\] and if \(S_{i-1}+1-x_{i}>d\) (i.e. don't have "space" to accept \(e_{i}\)), then \(S_{i}=S_{i-1}-x_{i}\) regardless of whether \(e_{i}\) is active (with probability 1). Additionally, consider the process \(W_{i}=\sum_{j=1}^{i}X_{j}\), where \[X_{i}=\left\{\begin{array}{ll}1-x_{i}&\mbox{if $e_{i}$ is active (with probability $x_{i}$)}\\ -x_{i}&\mbox{if $e_{i}$ is inactive (with probability $1-x_{i}$)}\end{array}\right.\] Intuitively, \(W_{i}\) tracks the number of _active_ elements above expectation (among the first \(i\)), and \(S_{i}\) tracks the number of _selected_ elements above the expected number of active elements (among the first \(i\)). **Lemma 4.4**: \(W_{i}-S_{i}\) _is exactly equal to the number of active elements that are discarded amongst the first \(i\)._ **Proof.** Let \(G_{i}\) be the set of active elements in the first \(i\) revealed elements. Then, by definition \(W_{i}=|G_{i}|-\sum_{j\leq i}x_{j}\). Thus, \(W_{i}-S_{i}=|G_{i}|-|B_{i}|\) and the conclusion follows because \(|B_{i}|\) is the number of active and selected elements in the first \(i\) revealed ones. By Lemma 4.4 it follows that the difference \(W_{i}-S_{i}\) increases by one if and only if \(e_{i}\) is active and discarded, and stays the same otherwise. The rest of the proof requires just two more natural steps. First, we characterize which elements are active and discarded, just as a property of the random process \(W\). Second, we bound the probability of this occurring. **Lemma 4.5**: _Element \(e_{i}\) is active and discarded by **Algorithm**(\(d,x\)) if and only if \(\lceil W_{i}\rceil>d\) and \(\lceil W_{i}\rceil>\lceil W_{j}\rceil\) for all \(j<i\). 
That is, \(e_{i}\) is active and discarded by **Algorithm**(\(d,x\)) if and only if the random process \(W\) reaches a new integral height for the first time._ **Proof.** By Lemma 4.4, the difference \(W_{i}-S_{i}\) increases by 1 when \(e_{i}\) is active and discarded, and stays the same otherwise. Therefore, the \(n\)-th active and discarded element is \(e_{i}\), where \(i\) is the smallest index such that \(W_{i}-S_{i}=n\). Let us now fix an arbitrary \(n\geq 1\). To prove the lemma, it is enough to show that \(i\) is the smallest index such that \(W_{i}-S_{i}=n\) if and only if \(i\) is the smallest index such that \(\lceil W_{i}\rceil=d+n\). We begin with the "only if" direction. Suppose \(i_{n}\) is the smallest index such that \(W_{i_{n}}-S_{i_{n}}=n\). By Lemma 4.4, element \(e_{i_{n}}\) is active and discarded. Therefore, by the selection rule of **Algorithm**(\(d,x\)) we get that \(S_{i_{n}-1}+1-x_{i_{n}}>d\) and therefore \(S_{i_{n}}=S_{i_{n}-1}-x_{i_{n}}\). This implies that \(S_{i_{n}}>d-1\). We also know that \(S_{i_{n}}\leq d\) (because \(S_{i}\) stores the difference between \(|B_{i}|\) and \(\sum_{j\leq i}x_{j}\), which is hard-coded to be at most \(d\)). Using these two inequalities along with our hypothesis that \(W_{i_{n}}-S_{i_{n}}=n\), we obtain: \[W_{i_{n}}=S_{i_{n}}+n\in(d+n-1,d+n],\text{ and therefore: }\lceil W_{i_{n}}\rceil=d+n.\] Further, by definition of \(i_{n}\) as the _first_ index such that \(W_{i}-S_{i}=n\), we know that for all \(j<i_{n}\): \(W_{j}\leq S_{j}+n-1\). Moreover, we also know that \(S_{j}\leq d\) for all \(j\) (again, because \(S_{j}\) stores the difference between \(|B_{j}|\) and \(\sum_{\ell\leq j}x_{\ell}\), which is at most \(d\)). Therefore, \(W_{j}\leq d+n-1\), and we conclude that \(\lceil W_{j}\rceil\leq d+n-1<\lceil W_{i_{n}}\rceil\) for all \(j<i_{n}\). Therefore, \(i_{n}\) is the smallest index such that \(\lceil W_{i_{n}}\rceil=d+n\). This establishes that if \(i_{n}\) is the smallest index such that \(W_{i_{n}}-S_{i_{n}}=n\), then \(i_{n}\) is the smallest index such that \(\lceil W_{i_{n}}\rceil=d+n\). Now we show the "if" direction. Suppose that \(i_{n}\) is the smallest index such that \(\lceil W_{i_{n}}\rceil=d+n\). Since \(W_{i_{n}}-S_{i_{n}}\) is an integer, and because \(S_{i_{n}}\leq d\), it must be the case that \(W_{i_{n}}-S_{i_{n}}\geq n\). Let \(i^{*}\leq i_{n}\) be the smallest index such that \(W_{i^{*}}-S_{i^{*}}=n\) (such an \(i^{*}\) exists because \(W_{i}-S_{i}\) always increases by 1 or stays the same, and because we have just shown that \(W_{i_{n}}-S_{i_{n}}\geq n\)). By the "only if" proof above, \(i^{*}\) is the smallest index such that \(\lceil W_{i^{*}}\rceil=d+n\), implying that in fact \(i^{*}=i_{n}\), as desired. This establishes that if \(i_{n}\) is the smallest index such that \(\lceil W_{i_{n}}\rceil=d+n\), then \(i_{n}\) is also the smallest index such that \(W_{i_{n}}-S_{i_{n}}=n\). This completes the proof: the \(n\)-th active element discarded by the algorithm is \(e_{i}\) for the smallest index \(i\) such that \(W_{i}-S_{i}=n\). By the work above, this is exactly the smallest index such that \(\lceil W_{i}\rceil=n+d\). Therefore, discarded elements are exactly those that reach a new integral height for the first time. Our remaining task is simply to upper bound the probability that \(W_{i}\) reaches a new integral height, for all \(i\).
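As a sanity check (our own illustration, not part of the paper), the pathwise equivalence of Lemma 4.5 can be verified by simulating **Algorithm**(\(d,x\)) and the walk \(W\) side by side; exact rational arithmetic avoids spurious boundary effects in the ceiling comparisons.

```python
import math
import random
from fractions import Fraction

def check_lemma_45(xs, d, seed=0):
    """One sample path of Algorithm(d, x); asserts Lemma 4.5:
    e_i is active-and-discarded iff ceil(W_i) > d and ceil(W_i)
    exceeds every earlier ceil(W_j)."""
    random.seed(seed)
    B = 0                  # |B_i|: number of selected elements
    prefix = Fraction(0)   # sum_{j <= i} x_j
    W = Fraction(0)        # |G_i| - prefix  (the random walk)
    best = -math.inf       # max over j < i of ceil(W_j)
    for x in xs:
        prefix += x
        active = random.random() < x
        discarded_active = False
        if active:
            W += 1
            if B + 1 <= prefix + d:
                B += 1     # select e_i
            else:
                discarded_active = True
        W -= x
        predicted = math.ceil(W) > d and math.ceil(W) > best
        assert discarded_active == predicted
        best = max(best, math.ceil(W))

check_lemma_45([Fraction(3, 10)] * 400, d=3)  # passes on every sample path
```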
**Lemma 4.6**: _For each \(m\in[1,n]\)_ \[\Pr(\lceil W_{m}\rceil>d\mbox{ and }\lceil W_{m}\rceil>\lceil W_{i}\rceil \mbox{ for }i<m)\leq\frac{2x_{m}}{d-1}\] **Proof.** Fix \(m\in[1,n]\). Consider the process \(Q_{i}:=W_{m-i-1}-W_{m-1}\) for \(i=0,1,\ldots,m-1\). Intuitively, \(\{Q_{i}\}_{i=0}^{m-1}\) is the "reversed" process \(W\) starting at time-step \(m-1\). Note that \(Q_{0}=0\) and \(Q_{m-1}=-W_{m-1}\). Observe also that \[Q_{i}-Q_{i-1}=W_{m-i-1}-W_{m-1}-(W_{m-i}-W_{m-1})=W_{m-i-1}-W_{m-i}=-X_{m-i}\] Note that since the adversary does not see which elements are active and has to commit to their order a priori, we know that \(X_{1},\ldots,X_{n}\) are independent. We also note that \[\mathbb{E}[X_{i}]=x_{i}(1-x_{i})-x_{i}(1-x_{i})=0\] By combining the previous two facts, we obtain that \(\{Q_{i}\}_{i=0}^{m-1}\) is a discrete martingale. Let's denote its maximum by \(M_{m-1}=\max_{1\leq i\leq m-1}Q_{i}\). We will next show that if \(\lceil W_{m}\rceil>d\) and \(\lceil W_{m}\rceil>\lceil W_{i}\rceil\) for \(i<m\), then the following two events have to hold: * \(e_{m}\) is active. * \(M_{m-1}<1\) and \(Q_{m-1}\leq-(d-1)\). That is, the martingale \(Q\) can never reach a height of \(1\), _and_ it must finish below \(-(d-1)\). Indeed, since \(\lceil W_{m}\rceil>\lceil W_{m-1}\rceil\), then \(e_{m}\) is active. Also since \(\lceil W_{m}\rceil>\lceil W_{i}\rceil\) for \(i<m\), we know that \(W_{m}>W_{m-i-1}\) for all \(i\). Combining this with the inequality \(W_{m}\leq W_{m-1}+1\), we obtain \[Q_{i}=W_{m-i-1}-W_{m-1}\leq W_{m-i-1}-W_{m}+1<1\] Thus, \(Q_{i}<1\) for all \(i\in[1,m-1]\) or equivalently \(M_{m-1}<1\). The condition \(\lceil W_{m}\rceil>d\) implies that \(W_{m}>d\). Using the last inequality we obtain \[Q_{m-1}=-W_{m-1}\leq-W_{m}+1\leq-(d-1)\] Therefore, we obtained that \(e_{m}\) is active, \(M_{m-1}<1\) and \(Q_{m-1}\leq-(d-1)\) as desired. Using the above property combined with the fact that the event whether \(e_{m}\) is active is independent of the events \(M_{m-1}<1\) and \(Q_{m-1}\leq-(d-1)\) (because \(M_{m-1}\) and \(Q_{m-1}\) are determined entirely by the previous \(m-1\) elements) we obtain that: \[\Pr(\lceil W_{m}\rceil>d\text{ and }\lceil W_{m}\rceil>\lceil W _{i}\rceil\text{ for }i<m)\] \[\leq\Pr(\{e_{m}\text{ is active}\}\cap\{M_{m-1}<1\text{ and }Q_{m-1}\leq-(d-1)\})\] \[=\Pr(e_{m}\text{ is active})\cdot\Pr(M_{m-1}<1\text{ and }Q_{m-1}\leq-(d-1))\] \[=x_{m}\cdot\Pr(M_{m-1}<1\text{ and }Q_{m-1}\leq-(d-1))\] So, the final step is to upper bound \(\Pr(M_{m-1}<1\text{ and }Q_{m-1}\leq-(d-1))\), which is just a claim about martingales that change by at most \(1\) in each step. Lemma A.2, which is a short application of the Optional Stopping Theorem, applied for \(a=1,b=-(d-1)<0\), and \(K=1\) implies that: \[\Pr(M_{m-1}<1\text{ and }Q_{m-1}\leq-(d-1))\leq\frac{2}{d-1}.\] Therefore, \[\Pr(\lceil W_{m}\rceil>\lceil W_{i}\rceil\text{ for }i<m\text{ and } \lceil W_{m}\rceil>d)\leq\frac{2x_{m}}{d-1},\] which concludes the proof of the lemma. This suffices to wrap up the proof of Proposition 4.2. 
**Proof of Proposition 4.2.** We have that \[\Pr(e_{m}\text{ is selected}|e_{m}\text{ is active}) =1-\Pr(e_{m}\text{ is discarded}|e_{m}\text{ is active})\] \[=1-\frac{\Pr(e_{m}\text{ is active and discarded})}{\Pr(e_{m}\text{ is active})}\] \[\geq 1-\frac{\frac{2x_{m}}{d-1}}{x_{m}}=1-\frac{2}{d-1}\text{ (by Lemma 4.6).}\] Setting \(d=\sqrt{k}\) in Proposition 4.2 we get **Corollary 4.7**: _Algorithm(\(\sqrt{k},x\)) is \((1-\frac{1}{\sqrt{k}},1-\frac{2}{\sqrt{k}-1})\)-selectable, and therefore **OCRS(\(x\))** is \((1-O(\frac{1}{\sqrt{k}}))\)-selectable._ **Remark 4.8**: _An intuitive comparison with the extremely simple OCRS implied in [10] is the following. The OCRS from [10] needs to "scale" probabilities down with \((1-\Theta(\sqrt{\frac{\log(k)}{k}}))\) in order to have a \((1-\Theta(\sqrt{\frac{\log(k)}{k}}))\) chance of not running out of space. Our result shows that if we adaptively select active elements whenever the number of selected ones does not exceed the expected number by \(d\), then we only need to "scale" down probabilities by \((1-\Theta(\frac{1}{\sqrt{k}}))\) to have a \((1-\Theta(\frac{1}{\sqrt{k}}))\) chance of not running out of space._ **Remark 4.9**: _Intuitively, an online adversary can potentially manipulate the algorithm by revealing an element \(e_{j}\) with a large (resp. small) \(x_{j}\) when the algorithm has selected many (resp. few) elements above the expectation. In this scenario, the proof of Lemma 4.6 will not go through, since the "reversed" process \(Q_{i}\) could have correlated steps and will (in general) fail to be a martingale._ ## 5 Upper bound against almighty adversaries Recall that [10] implies that the naive greedy OCRS can be used to obtain a \((1-O(\sqrt{\frac{\log(k)}{k}}))\)-selectable OCRS against the almighty adversary (Theorem 3.5). We recall that the almighty adversary knows which elements are active a priori and also knows everything about the OCRS (Definition 2.3). In this section we show that, against the almighty adversary, the probability of selection of any OCRS cannot be greater than \(1-\Omega(\sqrt{\frac{\log(k)}{k}})\). This implies that, against almighty adversaries, the factor of \(1-O(\sqrt{\frac{\log(k)}{k}})\), achieved by the naive greedy OCRS, is asymptotically optimal. In particular, our main goal in this section is to prove the following theorem. **Theorem 5.1**: _Suppose that a \(c\)-selectable OCRS for the \(k\)-uniform matroid against the almighty adversary exists. Then \(c\leq 1-\Omega(\sqrt{\frac{\log(k)}{k}})\)._ In our proof of Theorem 5.1 we will only consider the instance \(n=2k\) and the vector of probabilities \(x_{i}=\frac{1}{2}\) for all \(i\in[1,2k]\) (i.e. each element is active with probability \(\frac{1}{2}\)). To execute the proof, we will use the following strategy. * We will consider the following subclass of almighty adversaries. The adversary knows which elements are active and everything about the OCRS. However, it needs to: * commit to the order in which the elements will be revealed a priori * reveal all active elements before all inactive ones This type of restriction will be convenient for the analysis. We will refer to the class of such adversaries as \(\mathscr{H}\).
* Assuming the existence of a \(c\)-selectable OCRS \(\pi\) against adversaries of class \(\mathscr{H}\), we show that there exists a \(c\)-selectable OCRS \(\pi^{s}\) against adversaries of class \(\mathscr{H}\) which selects the \(i\)-th revealed active element with probability independent of the identities of the first \(i\) revealed elements. In other words, the probability that \(\pi^{s}\) selects the \(i\)-th revealed active element is a function \(g(i)\) (which depends on \(\pi\) but not on the identities of the first \(i\) revealed elements). (Section 5.1) * By using the probability values \(\{g(i)\}_{i=1}^{2k}\), we construct an adversary in \(\mathscr{H}\). Based on this adversary, we upper bound the probability of selection \(c\) by using the solution to a linear programming relaxation. (Section 5.2) Before we proceed with the proof we will fully describe a general model for how an OCRS works. An arbitrary OCRS \(\pi\) operates in the following way: **1.** Before any elements are revealed, \(\pi\) can flip some random coins \(coins_{1}\). **2.** Once the first element \(a_{1}\) is revealed (and whether it is active or not), \(\pi\) can flip more random coins \(coins^{\prime}_{1}\) and it makes a decision to select / discard \(a_{1}\) with some probability based on \(coins_{1}\), \(coins^{\prime}_{1}\) and the identity of the element \(a_{1}\). Let \(b_{1}\) be the indicator random variable of the event that \(a_{1}\) is selected. **3.** Based on the history so far, i.e. \(coins_{1},a_{1},coins^{\prime}_{1},b_{1}\), it flips more random coins \(coins_{2}\) before the second element is revealed. **4.** After the identity of the second element \(a_{2}\) (and its activity) is revealed, \(\pi\) flips more coins \(coins^{\prime}_{2}\) (based on the history so far), and makes a decision to select \(a_{2}\) or not, which is recorded in an indicator variable \(b_{2}\). **5.** Let \(B_{i}=(coins_{i},a_{i},coins^{\prime}_{i},b_{i})\), where \(coins_{i}\) is the random coins \(\pi\) flips right before seeing the \(i\)-th revealed element \(a_{i}\), \(coins^{\prime}_{i}\) is the random coins \(\pi\) flips after seeing \(a_{i}\), and \(b_{i}\) is the indicator of the event that \(\pi\) selects \(a_{i}\). **6.** In general, as a function of the history \(\overline{B}_{i}=\{B_{1},\ldots,B_{i}\}\), \(\pi\) flips random coins \(coins_{i+1}\). Then the \((i+1)\)-th element \(a_{i+1}\) is revealed. Based on the history \(\overline{B}_{i}\cup coins_{i+1}\cup a_{i+1}\), \(\pi\) flips more random coins \(coins^{\prime}_{i+1}\). Finally, based on the history \(\overline{B}_{i}\cup coins_{i+1}\cup a_{i+1}\cup coins^{\prime}_{i+1}\), \(\pi\) decides to select / discard \(a_{i+1}\), which generates the indicator \(b_{i+1}\). **7.** The procedure described in **6.** is repeated until the \(n\)-th element is selected / discarded. ### Defining a symmetric OCRS Suppose we have a \(c\)-selectable OCRS \(\pi\). Our goal in this section will first be to define what it means to "apply a permutation" to \(\pi\). Then we will define the "symmetric" OCRS \(\pi^{s}\) in the following way: **1.** Sample a uniformly random permutation \(\sigma\) of \(N\). **2.** Apply \(\sigma\) to \(\pi\). We will then show that the probability that \(\pi^{s}\) selects the \(i\)-th active revealed element is independent of the identities of the first \(i\) revealed elements. We will first introduce some definitions.
Given an OCRS \(\pi\), and a permutation \(\sigma\) of the ground set \(N\), we define \(\pi_{\sigma}\) as the OCRS which "treats" each element \(a_{i}\) exactly like \(\pi\) would "treat" \(\sigma^{-1}(a_{i})\). Formally we use the following definition. **Definition 5.2**: _Let \(\pi\) be an OCRS and \(\sigma\) a permutation of \(N\). Define \(\pi_{\sigma}\) as an OCRS which uses \(\pi\) as a black-box in the following way:_ _1. Before any elements are revealed, \(\pi_{\sigma}\) queries \(\pi\) to flip some random coins \(coins_{1}\)._ _2. Once the first element \(a_{1}\) is revealed (and whether it is active or not), \(\pi_{\sigma}\) queries \(\pi\) on \((coins_{1},\sigma^{-1}(a_{1}))\) to flip more random coins \(coins^{\prime}_{1}\). Based on \((coins_{1},\sigma^{-1}(a_{1}),coins^{\prime}_{1})\), \(\pi\) will make some decision to select / discard \(\sigma^{-1}(a_{1})\) with some probability. Then \(\pi_{\sigma}\) selects \(a_{1}\) if and only if \(\pi\) selects \(\sigma^{-1}(a_{1})\). Let \(b_{1}\) be the indicator random variable of the event that \(a_{1}\) is selected by \(\pi_{\sigma}\)._ _3. \(\pi_{\sigma}\) queries \(\pi\) on history \(coins_{1},\sigma^{-1}(a_{1}),coins^{\prime}_{1},b_{1}\) to generate random coins \(coins_{2}\) before the second element is revealed._ _4. After the identity of the second element \(a_{2}\) (and its activity) is revealed, \(\pi_{\sigma}\) queries \(\pi\) on \(coins_{1},\sigma^{-1}(a_{1}),coins^{\prime}_{1},b_{1},coins_{2},\sigma^{-1}(a_{2})\) to flip more random coins \(coins^{\prime}_{2}\). Then it queries \(\pi\) on \(coins_{1},\sigma^{-1}(a_{1}),coins^{\prime}_{1},b_{1},coins_{2},\sigma^{-1}(a_{2}),coins^{\prime}_{2}\) whether to select / discard \(\sigma^{-1}(a_{2})\), and \(\pi_{\sigma}\) selects \(a_{2}\) if and only if \(\pi\) selects \(\sigma^{-1}(a_{2})\)._ _5. Let \(B_{i}=(coins_{i},a_{i},coins^{\prime}_{i},b_{i})\), where \(coins_{i}\) is the random coins \(\pi_{\sigma}\) flips by querying \(\pi\) right before seeing the \(i\)-th revealed element \(a_{i}\), \(coins^{\prime}_{i}\) is the random coins \(\pi_{\sigma}\) flips by querying \(\pi\) after seeing \(a_{i}\), and \(b_{i}\) is the indicator of the event that \(\pi_{\sigma}\) selects \(a_{i}\)._ _6. In general, as a function of the history \(\overline{B}_{i}=\{B_{1},\ldots,B_{i}\}\), \(\pi_{\sigma}\) will query \(\pi\) on \(\overline{B^{\prime}}_{i}=\{B^{\prime}_{1},\ldots,B^{\prime}_{i}\}\), where \(B^{\prime}_{i}=(coins_{i},\sigma^{-1}(a_{i}),coins^{\prime}_{i},b_{i})\), to flip random coins \(coins_{i+1}\) before the \((i+1)\)-th element \(a_{i+1}\) is revealed. Then it queries \(\pi\) on \(\overline{B^{\prime}}_{i}\cup coins_{i+1}\cup\sigma^{-1}(a_{i+1})\) to generate more random coins \(coins^{\prime}_{i+1}\), and then it queries it again on \(\overline{B^{\prime}}_{i}\cup coins_{i+1}\cup\sigma^{-1}(a_{i+1})\cup coins^{\prime}_{i+1}\) whether to select / discard \(\sigma^{-1}(a_{i+1})\); \(\pi_{\sigma}\) then selects \(a_{i+1}\) if and only if \(\pi\) selects \(\sigma^{-1}(a_{i+1})\). This generates an indicator \(b_{i+1}\)._ **7.** _The procedure described in_ **6.** _is repeated until the \(n\)-th element is selected / discarded._ Similarly to applying a permutation to an OCRS \(\pi\), we can apply a permutation to an adversary \(\mathscr{A}\), resulting in an adversary \(\mathscr{A}_{\sigma}\). Intuitively, \(\mathscr{A}_{\sigma}\) "treats" element \(a\) like \(\mathscr{A}\) would treat \(\sigma^{-1}(a)\). We give the formal definition right after the code sketch below.
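In code, the relabelling of Definition 5.2 is just a thin wrapper that forwards every revealed element through \(\sigma^{-1}\) and delegates all coin flips and state to the wrapped scheme. The Python sketch below is ours; the `OCRS` interface is an assumption made for illustration, not an object defined in the paper.

```python
import random
from typing import Protocol

class OCRS(Protocol):
    def reset(self) -> None: ...
    def decide(self, element: int, active: bool) -> bool:
        """Observe the next revealed element; return True to select it."""

class PermutedOCRS:
    """pi_sigma of Definition 5.2: treat element a exactly as the wrapped
    scheme pi would treat sigma^{-1}(a); pi keeps all internal randomness."""
    def __init__(self, pi: OCRS, sigma_inv: dict[int, int]):
        self.pi, self.sigma_inv = pi, sigma_inv
    def reset(self) -> None:
        self.pi.reset()
    def decide(self, element: int, active: bool) -> bool:
        return self.pi.decide(self.sigma_inv[element], active)

class SymmetricOCRS(PermutedOCRS):
    """The symmetric scheme pi^s used later in this section: draw a fresh
    uniform permutation sigma, then behave as pi_sigma."""
    def __init__(self, pi: OCRS, ground_set: list[int], seed=None):
        self.rng, self.ground_set = random.Random(seed), list(ground_set)
        super().__init__(pi, {})
    def reset(self) -> None:
        image = self.ground_set[:]
        self.rng.shuffle(image)  # sigma maps ground_set[i] -> image[i]
        self.sigma_inv = {b: a for a, b in zip(self.ground_set, image)}
        self.pi.reset()
```

Note that the wrapped \(\pi\) sees exactly the relabelled history \(\overline{B^{\prime}}_{i}\) of Definition 5.2, so its decisions depend on the revealed elements only through their \(\sigma^{-1}\)-images.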
Recall that \(\mathscr{H}\) is the class of adversaries defined in the beginning of this section. **Definition 5.3**: _Let \(\mathscr{A}\in\mathscr{H}\) be an adversary and \(\sigma\) a permutation of the ground set \(N\). We define the adversary \(\mathscr{A}_{\sigma}\) as operating against an OCRS \(\pi\) in the following way:_ _1. Given a set of active elements \(A\), \(\mathscr{A}_{\sigma}\) queries \(\mathscr{A}\) on the set of active elements \(\sigma^{-1}(A)\) against the OCRS \(\pi_{\sigma^{-1}}\). Upon this query \(\mathscr{A}\) returns an order in which to reveal the elements from \(N\)._ _2. Given this order, if \(\mathscr{A}\) chose to reveal element \(a\) in the \(i\)-th position, \(\mathscr{A}_{\sigma}\) reveals \(\sigma(a)\) in the \(i\)-th position._ **Remark 5.4**: _It is easy to see that if \(\mathscr{A}\in\mathscr{H}\), then \(\mathscr{A}_{\sigma}\in\mathscr{H}\)._ We next show that the operation of applying a permutation to an OCRS or adversary is invertible. **Proposition 5.5**: _For a given permutation \(\sigma\), the maps \(\pi\to\pi_{\sigma}\) and \(\mathscr{A}\to\mathscr{A}_{\sigma}\) are bijections, with inverses \(\pi\to\pi_{\sigma^{-1}}\) and \(\mathscr{A}\to\mathscr{A}_{\sigma^{-1}}\) respectively._ **Proof.** See Appendix A.3 for a proof. We will now need the following helpful lemma. **Lemma 5.6**: _Let \(a\in N\) be an element and \(A\subseteq N\) a subset of elements. Let \(\sigma\) be a permutation of \(N\), \(\pi\) an OCRS, and \(\mathscr{A}\in\mathscr{H}\) an adversary. Then_ \[\Pr(\pi\text{ selects }a\text{ against }\mathscr{A}|A\text{ are active})=\Pr(\pi_{\sigma}\text{ selects }\sigma(a)\text{ against }\mathscr{A}_{\sigma}|\sigma(A)\text{ are active})\] **Proof.** Consider the interaction between \(\pi_{\sigma}\) and \(\mathscr{A}_{\sigma}\) on the set of active elements \(\sigma(A)\). By Definition 5.3, before the process begins \(\mathscr{A}_{\sigma}\) queries \(\mathscr{A}\) on \(\sigma^{-1}(\sigma(A))=A\) against the OCRS \((\pi_{\sigma})_{\sigma^{-1}}=\pi\) (Proposition 5.5). Based on this, if \(\mathscr{A}\) chooses to first reveal the active elements \(A\) to \(\pi\) in some order, then \(\mathscr{A}_{\sigma}\) will reveal \(\sigma(A)\) to \(\pi_{\sigma}\) in the same order (Definition 5.3). Thus, if we now pair \(\pi\) and \(\pi_{\sigma}\) as in Definition 5.2, we know that whenever \(\pi\) is given history \(\overline{B}\) in this interaction, \(\pi_{\sigma}\) will be given the history \(\overline{B^{\prime}}\) obtained from \(\overline{B}\) by replacing each element \(b\) by \(\sigma(b)\). Thus, by Definition 5.2 it follows that \(\pi_{\sigma}\) will select element \(\sigma(a)\) if and only if \(\pi\) selects element \(a\). This finishes the proof. We are now ready to show that if \(\pi\) is \(c\)-selectable against adversaries in \(\mathscr{H}\), then \(\pi_{\sigma}\) is also \(c\)-selectable against adversaries in \(\mathscr{H}\). **Lemma 5.7**: _If \(\pi\) is \(c\)-selectable against adversaries in \(\mathscr{H}\), then \(\pi_{\sigma}\) is also \(c\)-selectable against adversaries in \(\mathscr{H}\)._ **Proof.** Let \(\mathscr{A}\in\mathscr{H}\) be an adversary and \(a\in N\) an arbitrary element.
We have that \[\Pr(\pi_{\sigma}\text{ selects }a\text{ against }\mathscr{A})=\sum_{A\subseteq N}\Pr(\pi_{\sigma}\text{ selects }a\text{ against }\mathscr{A}|A\text{ are active})\Pr(A\text{ are active})\] \[=\sum_{A\subseteq N}\Pr(\pi_{\sigma}\text{ selects }a\text{ against }\mathscr{A}|A\text{ are active})\frac{1}{2^{2k}}\] \[=\sum_{A\subseteq N}\Pr\bigl{(}\pi\text{ selects }\sigma^{-1}(a)\text{ against }\mathscr{A}_{\sigma^{-1}}|\sigma^{-1}(A)\text{ are active}\bigr{)}\frac{1}{2^{2k}}\quad\text{(Lemma 5.6 and Prop. 5.5)}\] \[=\sum_{A\subseteq N}\Pr\bigl{(}\pi\text{ selects }\sigma^{-1}(a)\text{ against }\mathscr{A}_{\sigma^{-1}}|\sigma^{-1}(A)\text{ are active}\bigr{)}\Pr\bigl{(}\sigma^{-1}(A)\text{ are active}\bigr{)}\quad(x_{i}=\tfrac{1}{2},\forall i)\] \[=\Pr\bigl{(}\pi\text{ selects }\sigma^{-1}(a)\text{ against }\mathscr{A}_{\sigma^{-1}}\bigr{)}\geq\frac{c}{2}\quad(\text{by }c\text{-selectability of }\pi)\] Thus, \[\Pr(\pi_{\sigma}\text{ selects }a\text{ against }\mathscr{A}|a\text{ is active})=\frac{\Pr(\pi_{\sigma}\text{ selects }a\text{ against }\mathscr{A})}{\Pr(a\text{ is active})}\geq c,\] since \(\Pr(a\text{ is active})=\frac{1}{2}\). **Remark 5.8**: _Notice that in the proof of Lemma 5.7 it was crucial that \(x_{i}=\frac{1}{2}\) for all \(i\), which we used to claim \(\Pr(A\text{ are active})=\Pr\bigl{(}\sigma^{-1}(A)\text{ are active}\bigr{)}=\frac{1}{2^{2k}}\)._ Suppose we have a \(c\)-selectable OCRS \(\pi\) against adversaries in \(\mathscr{H}\). We define the "symmetric" OCRS \(\pi^{s}\) in the following way: **1.** Sample a uniformly random permutation \(\sigma\) of the ground set \(N\). **2.** Operate like \(\pi_{\sigma}\). We will next show that \(\pi^{s}\) is \(c\)-selectable against adversaries in \(\mathscr{H}\) and that it does not differentiate between identities of different elements. **Lemma 5.9**: _The OCRS \(\pi^{s}\) is \(c\)-selectable against adversaries in \(\mathscr{H}\)._ **Proof.** Let \(\mathscr{A}\in\mathscr{H}\) be an adversary. Since \(\mathscr{A}\) needs to decide on the order in which to reveal the elements before seeing what permutation \(\sigma\) is drawn in step **1.**, we know that the order of elements revealed does not depend on \(\sigma\). Thus, for an arbitrary element \(a\) we have that \[\Pr(\pi^{s}\text{ selects }a\text{ against }\mathscr{A})=\sum_{\sigma}\Pr(\pi_{\sigma}\text{ selects }a\text{ against }\mathscr{A})\frac{1}{n!}\geq\sum_{\sigma}\frac{c}{2}\cdot\frac{1}{n!}=\frac{c}{2}\quad\text{(by Lemma 5.7),}\] as desired. We will now state the key lemma for this section. Namely, that \(\pi^{s}\) selects the \(i\)-th revealed active element with probability independent of the identities of the first \(i\) revealed elements. **Lemma 5.10**: _Against adversaries in \(\mathscr{H}\), the probability that \(\pi^{s}\) selects the \(i\)-th revealed element, conditioned on it being active, is given by a function \(g(i)\) that is independent of the identities of the first \(i\) revealed elements and their order._ **Proof.** Consider any adversary \(\mathscr{A}\in\mathscr{H}\). By definition, \(\mathscr{A}\) decides on the order in which to reveal the elements a priori, and reveals all active elements before all inactive ones. Suppose that the first \(i\) elements that \(\mathscr{A}\) reveals are \(a_{1},\ldots,a_{i}\) in that order. To prove the lemma it is enough to show that \[\Pr(\pi^{s}\text{ selects }a_{i}|a_{1},\ldots,a_{i}\text{ are revealed})=g(i),\] where \(g\) is allowed to depend only on \(\pi\).
Note that the permutation \(\sigma\) drawn is independent of the order of the elements \(a_{1},\ldots,a_{i}\) by definition of \(\mathscr{H}\). Thus, we have \[\Pr(\pi^{s}\text{ selects }a_{i}|a_{1},\ldots,a_{i}\text{ are revealed})=\sum_{\sigma}\Pr(\pi_{\sigma}\text{ selects }a_{i}|a_{1},\ldots,a_{i}\text{ are revealed})\Pr(\sigma\text{ is drawn}|a_{1},\ldots,a_{i}\text{ are revealed})\] \[=\sum_{\sigma}\Pr(\pi_{\sigma}\text{ selects }a_{i}|a_{1},\ldots,a_{i}\text{ are revealed})\frac{1}{n!}\] \[=\frac{1}{n!}\sum_{\sigma}\sum_{B\subseteq\{a_{1},\ldots,a_{i-1}\}}\Pr(\pi_{\sigma}\text{ selects }a_{i}\text{ and }\pi_{\sigma}\text{ selected }B|a_{1},\ldots,a_{i}\text{ are revealed})\] \[=\frac{1}{n!}\sum_{\sigma}\sum_{B\subseteq\{a_{1},\ldots,a_{i-1}\}}\Pr\bigl{(}\pi\text{ selects }\sigma^{-1}(a_{i})\text{ and }\pi\text{ selected }\sigma^{-1}(B)|\sigma^{-1}(\{a_{1},\ldots,a_{i}\})\text{ revealed}\bigr{)}\quad(\Delta)\] where the last step is by Definition 5.2. In the second equality we used the fact that the event that \(\sigma\) is drawn is independent of the decision of the adversary for which \(a_{1},\ldots,a_{i}\) to reveal. The key observation is that expression \((\Delta)\) does not depend on the elements \(a_{1},\ldots,a_{i}\) but only on \(i\). Notice that for fixed \(A^{\prime},B^{\prime},a^{\prime}\), such that \(|A^{\prime}|=i\), \(B^{\prime}\subset A^{\prime}\) and \(a^{\prime}\in A^{\prime}\setminus B^{\prime}\), there are exactly \[{i-1\choose|B^{\prime}|}(|B^{\prime}|)!\left(i-1-|B^{\prime}|\right)!\left(n-i\right)!\] terms in the sum \((\Delta)\) of the form \[\Pr\bigl{(}\pi\mbox{ selects }a^{\prime}\mbox{ and }\pi\mbox{ selected }B^{\prime}|A^{\prime}\mbox{ revealed}\bigr{)}.\] To see this, consider the number of \((\sigma,B)\) which are solutions to \[\sigma^{-1}(a_{i})=a^{\prime},\quad\sigma^{-1}(B)=B^{\prime},\quad\sigma^{-1}(A)=A^{\prime},\] where \(A=\{a_{1},\ldots,a_{i}\}\). There are \({i-1\choose|B^{\prime}|}\) ways to choose \(B\). Given \(B\) there are \((|B^{\prime}|)!\left(i-1-|B^{\prime}|\right)!\left(n-i\right)!\) ways to choose \(\sigma\) in order to send \(a^{\prime}\) to \(a_{i}\), \(B^{\prime}\) to \(B\), and \(A^{\prime}\) to \(A\). Therefore, \((\Delta)\) is equal to \[\frac{1}{n!}\sum_{|A^{\prime}|=i,B^{\prime}\subset A^{\prime},a^{\prime}\in A^{\prime}\setminus B^{\prime}}\Pr\bigl{(}\pi\mbox{ selects }a^{\prime}\mbox{ and }\pi\mbox{ selected }B^{\prime}|A^{\prime}\mbox{ revealed}\bigr{)}{i-1\choose|B^{\prime}|}(|B^{\prime}|)!\left(i-1-|B^{\prime}|\right)!\left(n-i\right)!,\] which only depends on \(i\). Therefore, the probability that \(\pi^{s}\) selects the \(i\)-th revealed element given that it is active only depends on \(i\); we will denote it by \(g(i)\). ### Upper bound on the probability of selection In this section we will present an adversary in the class \(\mathscr{H}\) and show how this adversary implies the upper bound of \(1-\Omega(\sqrt{\frac{\log(k)}{k}})\) on \(c\). By Lemma 5.10 it follows that, against adversaries in \(\mathscr{H}\), the probability that \(\pi^{s}\) selects the \(i\)-th revealed active element is given by a function \(g(i)\). We now describe the adversary. We will only specify what the adversary does when \(e_{1}\) is active. **Adversary \(\mathscr{A}^{*}\):** **1.** If there are \(m\) active elements other than \(e_{1}\), the adversary computes \(g(1),\ldots,g(m+1)\).
**2.** Before the process starts, the adversary finds \(j=\arg\min_{i\in[1,m+1]}g(i)\) and reveals element \(e_{1}\) at position \(j\) and all other \(m\) active elements at positions \(j^{\prime}\in[1,j-1]\cup[j+1,m+1]\) in arbitrary order. It is not hard to see that the adversary \(\mathscr{A}^{*}\) is in the class \(\mathscr{H}\) because it commits to the order a priori and reveals all active elements before all inactive ones. We will next show the following lemma for the probability that \(\pi^{s}\) selects \(e_{1}\) against the above adversary. **Lemma 5.11**: _Let \(h(m+1)=\min_{i\in[1,m+1]}g(i)\) for \(m\in[0,2k-1]\). Then_ \[\Pr(\pi^{s}\mbox{ selects }e_{1}\mbox{ against }\mathscr{A}^{*}|e_{1}\mbox{ is active})=\sum_{i=1}^{2k}h(i)\frac{\binom{2k-1}{i-1}}{2^{2k-1}}\] **Proof.** Suppose \(e_{1}\) is active. Notice that the number of active elements is equal to \(N+1\), where \(N\sim Bin(2k-1,\frac{1}{2})\). By definition of \(\mathscr{A}^{*}\), when there are \(m\) active elements (other than \(e_{1}\)), the probability that \(\pi^{s}\) selects \(e_{1}\) is equal to \(h(m+1)\). Therefore, by the law of total probability, \(\pi^{s}\) selects \(e_{1}\) with probability \[\Pr(\pi^{s}\text{ selects }e_{1}\text{ against }\mathscr{A}^{*}|e_{1}\text{ is active})=\sum_{i=1}^{2k}h(i)\Pr[N+1=i]=\sum_{i=1}^{2k}h(i)\Pr\biggl{[}Bin(2k-1,\frac{1}{2})=i-1\biggr{]}=\sum_{i=1}^{2k}h(i)\frac{\binom{2k-1}{i-1}}{2^{2k-1}},\] as desired. We will next show a property of the values \(\{g(i)\}_{i=1}^{2k}\) that will be useful later. **Lemma 5.12**: \[\sum_{i=1}^{2k}g(i)\leq k\] **Proof.** Suppose that all \(2k\) elements in \(N\) are active. We know that any adversary in \(\mathscr{H}\) will choose an order for the elements to be revealed before \(\pi^{s}\) draws \(\sigma\). In that case we know by Lemma 5.10 that \(\pi^{s}\) will select the \(i\)-th revealed element with probability \(g(i)\). Let \(I_{i}\) be the indicator random variable that \(\pi^{s}\) selects the \(i\)-th revealed element. Note that \(\mathbb{E}[I_{i}]=g(i)\). Since \(\pi^{s}\) never selects more than \(k\) elements (Definition 2.1) we have that \[k\geq\mathbb{E}\Bigl{[}\sum_{i=1}^{2k}I_{i}\Bigr{]}=\sum_{i=1}^{2k}\mathbb{E}[I_{i}]=\sum_{i=1}^{2k}g(i),\] as desired. As a last step towards Theorem 5.1, we consider a linear program relaxation whose optimal objective upper bounds the probability of selection \(c\) (Lemma 5.13), and characterize an optimal solution to the program (Lemma 5.14). Finally, we show that the optimal objective of the linear program is at most \(1-\Omega(\sqrt{\frac{\log(k)}{k}})\) (Lemma 5.15). **Lemma 5.13**: _Let \(c^{*}\) denote the optimal value of the following linear program_ \[\begin{array}{ll}\text{maximize}&\sum_{i=1}^{2k}f(i)\frac{\binom{2k-1}{i-1}}{2^{2k-1}}\\ \text{subject to}&\sum_{i=1}^{2k}f(i)\leq k\\ &f(i)\geq f(i+1)\text{ for }i=1,\ldots,2k-1\\ &f(2k)\geq 0\end{array}\tag{1}\] _Then, if \(\pi^{s}\) is \(c\)-selectable, it holds that_ \[c\leq c^{*}\] **Proof.** We first claim that \(\{h(i)\}_{i=1}^{2k}\) is a feasible assignment to (1). Notice that by definition we have that \(h(i)\leq g(i)\) for \(i\in[1,2k]\). Using this combined with Lemma 5.12 we obtain \[\sum_{i=1}^{2k}h(i)\leq\sum_{i=1}^{2k}g(i)\leq k.\] Further, note that by definition we have that \(h(i)\geq h(i+1)\) for \(i\in[1,2k-1]\) and clearly \(h(2k)\geq 0\). Combining the aforementioned observations we get that the vector \(\{h(i)\}_{i=1}^{2k}\) is a feasible assignment of (1).
Therefore, by Lemma 5.11 we obtain that \[c^{*}\geq\sum_{i=1}^{2k}h(i)\frac{\binom{2k-1}{i-1}}{2^{2k-1}}=\Pr(\pi^{s}\text{ selects }e_{1}\text{ against }\mathscr{A}^{*}|e_{1}\text{ is active}).\] By Lemma 5.9 we know that \(\pi^{s}\) is \(c\)-selectable, i.e. \[\Pr(\pi^{s}\text{ selects }e_{1}\text{ against }\mathscr{A}^{*}|e_{1}\text{ is active})\geq c.\] By combining the above two inequalities we obtain that \[c^{*}\geq c,\] which finishes the proof. We will next prove a claim about the optimal solution of the linear program (1). **Lemma 5.14**: _The optimal solution of (1) has the form \(f(i)=x\) for \(i=1,\ldots,k+a\), \(f(k+a+1)=y\), and \(f(i)=0\) for \(i>k+a+1\), for some \(x\geq y\geq 0\) and \(a\in[0,k]\)._ **Proof.** First, note that the constraints of the linear program (1) define a bounded convex polytope, so there exists a feasible assignment that achieves the optimum of (1). Let \(b_{i}=\frac{\binom{2k-1}{i-1}}{2^{2k-1}}\); we know that \(b_{i}<b_{i+1}\) for \(i<k\), \(b_{k}=b_{k+1}\), and \(b_{i}>b_{i+1}\) for \(i>k\). Let \(f^{*}\) be an optimal solution to (1). Suppose that \(f^{*}(i)>f^{*}(i+1)\) for some \(i<k\). Then consider \(f^{**}\) defined by \(f^{**}(j)=f^{*}(j)\) for \(j\not\in\{i,i+1\}\), and \(f^{**}(i)=f^{**}(i+1)=\frac{f^{*}(i)+f^{*}(i+1)}{2}\). Note that \(f^{**}\) still has non-increasing non-negative entries and the sum of its entries is equal to that of \(f^{*}\), i.e. it is feasible. The difference between the objective values of \(f^{**}\) and \(f^{*}\) is equal to \[\frac{f^{*}(i)+f^{*}(i+1)}{2}(b_{i}+b_{i+1})-b_{i}f^{*}(i)-b_{i+1}f^{*}(i+1)=\frac{(f^{*}(i+1)-f^{*}(i))(b_{i}-b_{i+1})}{2}>0,\] since \(f^{*}(i+1)<f^{*}(i)\) and \(b_{i}<b_{i+1}\). This contradicts the optimality of \(f^{*}\). Therefore, \(f^{*}(i)=f^{*}(1)\) for all \(i\leq k\). Let \(f^{*}(i)=x\) for \(i\leq k\). If \(f^{*}(j)\in\{x,0\}\) for \(j>k\) we are done. Otherwise let \(j>k\) be the smallest index such that \(x>f^{*}(j)>0\), and let \(f^{*}(j)=y\). This means \(f^{*}(i)=x\) for \(i<j\). Assume that \(f^{*}(j+1)>0\). Then, let \(l>j\) be the largest index such that \(f^{*}(l)>0\), and let \(f^{*}(l)=z\). Choose \(\epsilon<\min(x-y,z)\) and consider \(f^{**}\) defined by \(f^{**}(j)=f^{*}(j)+\epsilon\), \(f^{**}(l)=f^{*}(l)-\epsilon\), and \(f^{**}(i)=f^{*}(i)\) for \(i\not\in\{j,l\}\). Note that \(f^{**}\) is feasible by the choice of \(\epsilon\), since its entries are still non-increasing and have the same sum as those of \(f^{*}\). The difference between the objective of \(f^{**}\) and \(f^{*}\) equals \[(f^{*}(j)+\epsilon)b_{j}+(f^{*}(l)-\epsilon)b_{l}-f^{*}(j)b_{j}-f^{*}(l)b_{l}=\epsilon(b_{j}-b_{l})>0,\] as \(b_{j}>b_{l}\) because \(l>j>k\), which contradicts the optimality of \(f^{*}\). Thus, we showed that \(f^{*}(j+1)=0\), which finishes the proof of the Lemma. Note that by Lemma 5.14 we know that the optimal value of (1) has the following form \[c^{*}=x\Pr\biggl{(}Bin(2k-1,\frac{1}{2})\leq k+a-1\biggr{)}+y\Pr\biggl{(}Bin(2k-1,\frac{1}{2})=k+a\biggr{)}\tag{2}\] for some \(x\geq y\geq 0\) satisfying \(x(k+a)+y\leq k\), where \(a\geq 0\). By using these constraints we easily obtain that \(y\leq x\leq\frac{k}{k+a}\). By using this inequality in equation (2), we obtain that \[c^{*}\leq\frac{k}{k+a}\Pr\biggl{(}Bin(2k-1,\frac{1}{2})\leq k+a\biggr{)}\tag{3}\] We now show the final Lemma of this section, which provides an upper bound for the RHS of (3).
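Before doing so, note that Lemma 5.14 (and the value \(c^{*}\) itself) can be checked numerically by solving the LP (1) directly for small \(k\). The sketch below is ours; the use of `scipy.optimize.linprog` is an implementation choice for illustration, not part of the argument.

```python
import numpy as np
from math import comb
from scipy.optimize import linprog

def solve_lp(k):
    """Solve LP (1): maximise sum_i f(i) b_i, b_i = C(2k-1, i-1) / 2^(2k-1),
    subject to sum_i f(i) <= k, f non-increasing, f >= 0."""
    n = 2 * k
    b = np.array([comb(2 * k - 1, i - 1) for i in range(1, n + 1)], dtype=float)
    b /= 2.0 ** (2 * k - 1)
    A_ub = [np.ones(n)]                    # budget constraint: sum f <= k
    for i in range(n - 1):                 # monotonicity: f(i+1) - f(i) <= 0
        row = np.zeros(n)
        row[i], row[i + 1] = -1.0, 1.0
        A_ub.append(row)
    res = linprog(-b, A_ub=np.array(A_ub), b_ub=[k] + [0.0] * (n - 1),
                  bounds=[(0, None)] * n)
    return -res.fun, res.x

for k in (5, 10, 20):
    val, f = solve_lp(k)
    print(k, round(val, 4), np.round(f, 3))
```

For these \(k\), the returned maximiser should display the flat-then-single-step shape of Lemma 5.14, and the optimal value matches the expression (2).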
**Lemma 5.15**: _Let \(a\in[0,k]\), then_ \[\frac{k}{k+a}\Pr\biggl{(}Bin(2k-1,\frac{1}{2})\leq k+a\biggr{)}\leq 1-\Omega\biggl{(}\sqrt{\frac{\log(k)}{k}}\biggr{)}\] **Proof.** It is enough to show that \[\frac{k}{k+a}\Pr\biggl{(}Bin(2k-1,\frac{1}{2})\leq k+a\biggr{)}\leq 1-\frac{1}{100}\sqrt{\frac{\log(k)}{k}}\tag{4}\] Let's assume, for the sake of contradiction, that inequality (4) is not true. By this assumption we have the following chain of inequalities \[1-\frac{1}{100}\sqrt{\frac{\log(k)}{k}}<\frac{k}{k+a}\Pr\biggl{(}Bin(2k-1,\frac{1}{2})\leq k+a\biggr{)}\leq\min\left(\frac{k}{k+a},\Pr\biggl{(}Bin(2k-1,\frac{1}{2})\leq k+a\biggr{)}\right)\leq\min\left(\frac{k}{k+a},\Pr\biggl{(}Bin(2k-2,\frac{1}{2})\leq k+a\biggr{)}\right),\] where in the second step we used that each of the terms in the product is at most 1, and in the third step that \(Bin(2k-1,\frac{1}{2})\) stochastically dominates \(Bin(2k-2,\frac{1}{2})\). Thus, \[1-\frac{1}{100}\sqrt{\frac{\log(k)}{k}}<\min\left(\frac{k}{k+a},\Pr\biggl{(}Bin(2k-2,\frac{1}{2})\leq k+a\biggr{)}\right)\tag{5}\] By (5), we get \[\frac{k}{k+a}>1-\frac{1}{100}\sqrt{\frac{\log(k)}{k}},\quad\text{hence}\quad k>k+a-\frac{k+a}{100}\sqrt{\frac{\log(k)}{k}},\quad\text{hence}\quad a<\frac{k+a}{100}\sqrt{\frac{\log(k)}{k}}\leq\frac{2k}{100}\sqrt{\frac{\log(k)}{k}}=\frac{1}{50}\sqrt{k\log(k)}\text{ (since }a\leq k\text{)}.\] Combining the above inequality with (5) again we obtain \[\Pr\biggl{(}Bin(2k-2,\frac{1}{2})<k+\frac{1}{50}\sqrt{k\log(k)}\biggr{)}\geq\Pr\biggl{(}Bin(2k-2,\frac{1}{2})\leq k+a\biggr{)}>1-\frac{1}{100}\sqrt{\frac{\log(k)}{k}}\] Taking complements, we get \[\Pr\biggl{(}Bin(2k-2,\frac{1}{2})\geq k+\frac{1}{50}\sqrt{k\log(k)}\biggr{)}<\frac{1}{100}\sqrt{\frac{\log(k)}{k}}\tag{6}\] We will now use the following anti-concentration bound for the binomial distribution, given in Proposition B.3. For \(k^{\prime}\in[\frac{n}{2},\frac{5n}{8}]\) and even \(n\) we have \[\Pr\biggl{(}Bin(n,\frac{1}{2})\geq k^{\prime}\biggr{)}\geq\frac{1}{15}\exp\biggl{(}-16n\Bigl{(}\frac{1}{2}-\frac{k^{\prime}}{n}\Bigr{)}^{2}\biggr{)}\tag{7}\] Substituting \(n=2k-2\) and \(k^{\prime}=k+\frac{1}{50}\sqrt{k\log(k)}\) in (7) we obtain \[\Pr\biggl{(}Bin(2k-2,\frac{1}{2})\geq k+\frac{1}{50}\sqrt{k\log(k)}\biggr{)}\geq\frac{1}{15}\exp\biggl{(}-16(2k-2)\Bigl{(}\frac{1}{2}-\frac{k+\frac{1}{50}\sqrt{k\log(k)}}{2k-2}\Bigr{)}^{2}\biggr{)}\] \[=\frac{1}{15}\exp\biggl{(}-32(k-1)\Bigl{(}\frac{1+\frac{1}{50}\sqrt{k\log(k)}}{2k-2}\Bigr{)}^{2}\biggr{)}=\frac{1}{15}\exp\biggl{(}-8\frac{(1+\frac{1}{50}\sqrt{k\log(k)})^{2}}{k-1}\biggr{)}\] \[\geq\frac{1}{15}\exp\biggl{(}-8\frac{(\frac{2}{25}\sqrt{k\log(k)})^{2}}{k}\biggr{)}\text{ (for big enough }k\text{)}=\frac{1}{15}\exp\biggl{(}-\frac{32}{625}\log(k)\biggr{)}=\frac{1}{15}\frac{1}{k^{\frac{32}{625}}}\] Combining the last inequality with (6) we obtain \[\frac{1}{100}\sqrt{\frac{\log(k)}{k}}>\frac{1}{15}\frac{1}{k^{\frac{32}{625}}},\quad\text{i.e.}\quad\frac{15}{100}\sqrt{\log(k)}>k^{\frac{1}{2}-\frac{32}{625}}=k^{\frac{561}{1250}},\] which fails to hold for large enough \(k\). Thus, we obtain a contradiction. Therefore, (4) is true, which proves the lemma. **Remark 5.16**: _The optimal solution to the LP in Lemma 5.13 turns out not to have the same values \(f(i)\) as implied by the asymptotically optimal OCRS from [1] (Theorem 3.5).
This is because the values implied by this OCRS would have an asymptotically optimal performance, as opposed to being exactly instance-optimal._ ### Proof of Theorem 5.1 By Lemma 5.13, we know that \[c\leq c^{*}\tag{8}\] Additionally, by combining (3) and Lemma 5.15 we know that \[c^{*}\leq 1-\Omega\biggl{(}\sqrt{\frac{\log(k)}{k}}\biggr{)}\tag{9}\] Combining (8) and (9) we obtain \[c\leq 1-\Omega\biggl{(}\sqrt{\frac{\log(k)}{k}}\biggr{)},\] finishing the proof of Theorem 5.1. ## 6 Conclusion We provide a new, simple, and optimal OCRS for \(k\)-uniform matroids against a fixed-order adversary. In particular, our algorithm has the advantage that it is extremely simple to implement and it does not require solving a mathematical program. Our analysis connects its performance to a random walk, and proceeds by establishing properties of this random walk. We expect the tools we develop in analyzing our algorithm to be of independent interest and to have broader applicability within online stochastic optimization. As our second main result, we show that every OCRS for \(k\)-uniform matroids is at most \((1-\Omega(\sqrt{\frac{\log k}{k}}))\)-selectable against an almighty adversary, establishing that the simple greedy OCRS implied by [1] is asymptotically optimal.
2301.13353
Measurement-efficient quantum Krylov subspace diagonalisation
The Krylov subspace methods, being one category of the most important classical numerical methods for linear algebra problems, can be much more powerful when generalised to quantum computing. However, quantum Krylov subspace algorithms are prone to errors due to inevitable statistical fluctuations in quantum measurements. To address this problem, we develop a general theoretical framework to analyse the statistical error and measurement cost. Based on the framework, we propose a quantum algorithm to construct the Hamiltonian-power Krylov subspace that can minimise the measurement cost. In our algorithm, the product of power and Gaussian functions of the Hamiltonian is expressed as an integral of the real-time evolution, such that it can be evaluated on a quantum computer. We compare our algorithm with other established quantum Krylov subspace algorithms in solving two prominent examples. To achieve an error comparable to that of the classical Lanczos algorithm at the same subspace dimension, our algorithm typically requires orders of magnitude fewer measurements than others. Such an improvement can be attributed to the reduced cost of composing projectors onto the ground state. These results show that our algorithm is exceptionally robust to statistical fluctuations and promising for practical applications.
Zongkang Zhang, Anbang Wang, Xiaosi Xu, Ying Li
2023-01-31T01:08:02Z
http://arxiv.org/abs/2301.13353v3
# Measurement-efficient quantum Krylov subspace diagonalisation ###### Abstract The Krylov subspace methods, being one category of the most important classical numerical methods for linear algebra problems, can be much more powerful when generalised to quantum computing. However, quantum Krylov subspace algorithms are prone to errors due to inevitable statistical fluctuations in quantum measurements. To address this problem, we develop a general theoretical framework to analyse the statistical error and measurement cost. Based on the framework, we propose a quantum algorithm to construct the Hamiltonian-power Krylov subspace that can minimise the measurement cost. In our algorithm, the product of power and Gaussian functions of the Hamiltonian is expressed as an integral of the real-time evolution, such that it can be evaluated on a quantum computer. We compare our algorithm with other established quantum Krylov subspace algorithms in solving two prominent examples. It is shown that the measurement number in our algorithm is typically \(10^{4}\) to \(10^{12}\) times smaller than in other algorithms. Such an improvement can be attributed to the reduced cost of composing projectors onto the ground state. These results show that our algorithm is exceptionally robust to statistical fluctuations and promising for practical applications. ## I Introduction Finding the ground-state energy of a quantum system is of vital importance in many fields of physics [1; 2; 3]. The Lanczos algorithm [4; 5] is one of the most widely used algorithms to solve this problem. It belongs to the Krylov subspace methods [6], in which the solution usually converges to the true answer with an increasing subspace dimension. However, such methods are unscalable for many-body systems because of the exponentially-growing Hilbert space dimension [7]. In quantum computing, there is a family of hybrid quantum-classical algorithms that can be regarded as a quantum generalisation of the classical Lanczos algorithm [8; 9; 10; 11; 12; 13; 14; 15; 16]. Following Refs. [11; 15; 16], we call them quantum Krylov subspace diagonalisation (KSD). These algorithms are scalable with the system size by carrying out the classically-intractable vector and matrix arithmetic on the quantum computer. They possess potential advantages, as the conventional quantum algorithm to solve the ground-state problem, quantum phase estimation, requires considerable quantum resources [21; 22], while the variational quantum eigensolver is limited by the ansatz and classical optimisation bottlenecks [23; 24]. However, Krylov subspace methods are often confronted with the obstacle that small errors can cause large deviations in the ground-state energy. This issue is rooted in the fact that the Krylov subspace is spanned by an almost linearly dependent basis [25; 26]. Contrary to classical computing, in which one can exponentially suppress rounding errors by increasing the number of bits, quantum computing is inherently subject to statistical error. Since statistical error decreases slowly with the measurement number \(M\) as \(\propto 1/\sqrt{M}\), an extremely large \(M\) can be required to reach an acceptable error. In this case, although quantum KSD algorithms perform well in principle, the measurement cost has to be assessed and optimised for realistic implementations [16; 20]. In this work, we present a general and rigorous analysis of the measurement cost in quantum KSD algorithms.
Specifically, we obtain an upper-bound formula for the measurement number that is applicable to all quantum KSD algorithms. Then, we propose an algorithm to construct the Hamiltonian-power Krylov subspace [6]. In our algorithm, we express the product of the Hamiltonian power and a Gaussian function of the Hamiltonian as an integral of real-time evolution. In this way, the statistical error decreases exponentially with the power, which makes our algorithm measurement-efficient. We benchmark quantum KSD algorithms by estimating their measurement costs in solving the anti-ferromagnetic Heisenberg model and the Hubbard model. Various lattices of each model are taken in the benchmarking. We find that typically our algorithm requires \(10^{4}\) to \(10^{12}\) times fewer measurements in comparison with others. ## II Krylov subspace diagonalisation First, we introduce the KSD algorithm and some relevant notations. The algorithm starts with a reference state \(|\varphi\rangle\). Then we generate a set of basis states \(|\phi_{k}\rangle=f_{k}(H)|\varphi\rangle\), where \(H\) is the Hamiltonian, and \(f_{1},f_{2},\ldots,f_{d}\) are linearly-independent (\(d-1\))-degree polynomials (to generate a \(d\)-dimensional Krylov subspace). For example, it is conventional to take the power function \(f_{k}(H)=H^{k-1}\) in the Lanczos algorithm. These states span a subspace called the Krylov subspace. We compute the ground-state energy by solving the generalised eigenvalue problem \(\mathbf{Ha}=E\mathbf{Sa}\), where \(\mathbf{H}_{k,q}=\langle\phi_{k}|H|\phi_{q}\rangle\) and \(\mathbf{S}_{k,q}=\langle\phi_{k}|\phi_{q}\rangle\). Let \(E_{min}\) be the minimum eigenvalue of the generalised eigenvalue problem. The error in the ground-state energy is \(\epsilon_{K}=E_{min}-E_{g}\), where \(E_{g}\) is the true ground-state energy. We call \(\epsilon_{K}\) the subspace error. One can also construct generalised Krylov subspaces in which \(f_{k}\) are functions of \(H\) other than polynomials; see Table 1. In the literature, subspaces generated by variational quantum circuits [27; 28; 29; 30] and stochastic time evolution [31] have also been proposed. Quantum KSD techniques can also be used to mitigate errors caused by imperfect gates [32; 27; 33]. In this work, we focus on Hamiltonian functions because of their similarities to conventional Krylov subspace methods. ## III General analysis of the measurement cost In addition to \(\epsilon_{K}\), the other error source is the statistical error. In quantum KSD algorithms, matrices \(\mathbf{H}\) and \(\mathbf{S}\) are obtained by measuring qubits at the end of certain quantum circuits. In this work, we rigorously analyse the number of measurements required for achieving a given permissible total error \(\epsilon\) (including both \(\epsilon_{K}\) and statistical error). We divide the measurement number into three factors. The first two factors are the same for different bases of Krylov subspaces, and the third factor depends on the basis (i.e. \(f_{k}\) operators in Table 1). In this work, we show that the third factor is drastically reduced in our measurement-efficient algorithm. The total measurement number required for implementing a quantum KSD algorithm consists of three factors, \[M_{tot}=\frac{\alpha(\kappa)\|H\|_{2}^{2}}{p_{g}^{2}\epsilon^{2}}\times\beta(d )\times\gamma. \tag{1}\] The first factor is the cost of measuring the energy. Roughly speaking, it is the cost in the ideal case that one can prepare the ground state by an ideal projection; see Appendix A.
The spectral norm \(\|H\|_{2}\) characterises the range of the energy. Assuming that the statistical error and \(\epsilon\) are comparable, \(M_{tot}\propto\|H\|_{2}^{2}/\epsilon^{2}\) according to the standard scaling of the statistical error. \(\kappa\) is the permissible failure probability: The actual error may exceed \(\epsilon\) due to the statistical error, and \(\alpha(\kappa)\) is the overhead for achieving the probability. We take \(\alpha(\kappa)=256/\kappa\) in the rigorous bound analysis, although \(\alpha(\kappa)\approx 16\ln(1/\kappa)\) is sufficient in practice; see Appendix B. The success of KSD algorithms depends on a finite overlap between the reference state and the true ground state \(|\psi_{g}\rangle\) [6; 16; 20]. This requirement is reflected in \(M_{tot}\propto 1/p_{g}^{2}\), where \(p_{g}=|\langle\psi_{g}|\varphi\rangle|^{2}\). The second factor, \(\beta(d)\), is the overhead due to measuring \(d\times d\) matrices. There are more sources of statistical errors (matrix entries) influencing the eventual result when \(d\) is larger. We take \(\beta(d)=d^{6}\) in the rigorous bound analysis, and it can be reduced to \(\beta(d)=d(2d-1)\) after optimisation; see Appendix B. The remaining factor \(\gamma\) depends on the spectrum of \(\mathbf{S}\) (i.e. the basis), as we will show later. In the numerical results, we find that the typical value of \(\gamma\) in quantum KSD algorithms can be as large as \(10^{12}\) to achieve certain \(\epsilon\), and our algorithm reduces \(\gamma\) to about 3. ## IV Rigorous bound of the measurement cost For a rigorous bound analysis, we assume that each matrix entry is measured individually. In Appendix B, we give a better setup for the practical implementation. The budget for each matrix entry is \(2M\) measurements [i.e. \(M=M_{tot}/(4d^{2})\)]. Notice that the entry is complex and has two parts in general, and the budget for each part is \(M\). Measurements yield estimators \(\hat{\mathbf{H}}\) and \(\hat{\mathbf{S}}\) of the two matrices. Variances of estimators have upper bounds in the form \(\mathrm{Var}(\hat{\mathbf{H}}_{k,q})\leqslant 2C_{\mathbf{H}}^{2}/M\) and \(\mathrm{Var}(\hat{\mathbf{S}}_{k,q})\leqslant 2C_{\mathbf{S}}^{2}/M\), where \(C_{\mathbf{H}}\) and \(C_{\mathbf{S}}\) are some factors depending on the protocol. Let \(\hat{E}_{min}\) be the minimum eigenvalue computed using \(\hat{\mathbf{H}}\) and \(\hat{\mathbf{S}}\) (see Algorithm 1). The total error is \(|\hat{E}_{min}-E_{g}|\), and the computing succeeds when \(|\hat{E}_{min}-E_{g}|\leqslant\epsilon\). If estimators are unbiased, \(\hat{E}_{min}\to E_{min}\) in the limit \(M_{tot}\rightarrow+\infty\). Therefore, we can achieve any permissible error \(\epsilon>\epsilon_{K}\) with some sufficiently large \(M_{tot}\). \begin{table} \begin{tabular}{l c c} Abbr. & \(f_{k}(x)\) & Refs. \\ \hline P & \(x^{k-1}\) & [12; 19] \\ CP & \(T_{k-1}(x/h_{tot})\) & [20] \\ GP & \(x^{k-1}e^{-\frac{1}{2}x^{2}\tau^{2}}\) & This work \\ IP & \(x^{-(k-1)}\) & [18] \\ ITE & \(e^{-\tau(k-1)x}\) & [8; 9] \\ RTE & \(e^{-ix\Delta t\left(k-\frac{d+1}{2}\right)}\) & [10; 11; 12; 13; 14; 15; 16; 17] \\ F & \(L^{-1}\sum_{l=1}^{L}e^{-\left[x-\Delta E(k-1)\right]\Delta t\left(l-\frac{L+1}{ 2}\right)}\) & [15] \\ \end{tabular} \end{table} Table 1: Operators \(f_{k}(x)\) generating the basis of a (generalised) Krylov subspace. Here \(x=H-E_{0}\), where \(E_{0}\) is a constant. In different algorithms, \(f_{k}\) can be power (P), Chebyshev polynomial (CP), Gaussian-power (GP), inverse power (IP) or exponential [i.e. imaginary-time evolution (ITE), real-time evolution (RTE) and filter (F)] functions of the Hamiltonian. \(T_{n}\) is the \(n\)-th Chebyshev polynomial of the first kind, and \(E_{0}=0\) in the CP basis. For a Hamiltonian expressed in the form \(H=\sum_{j}h_{j}\sigma_{j}\), where \(\sigma_{j}\) are Pauli operators, \(h_{tot}=\sum_{j}|h_{j}|\) is the 1-norm of coefficients. \(\tau\), \(\Delta t\) and \(\Delta E\) are some real parameters.
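For concreteness, the quantities \(h_{tot}\) and \(\|H\|_{2}\) that appear in Eq. (1) and in the caption of Table 1 can be computed for a toy Pauli Hamiltonian as follows; the two-qubit example is our arbitrary choice for illustration.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
# A two-qubit test Hamiltonian H = sum_j h_j sigma_j (coefficients arbitrary)
terms = [(0.5, np.kron(Z, Z)), (0.3, np.kron(X, I)), (0.2, np.kron(I, X))]
H = sum(h * P for h, P in terms)
h_tot = sum(abs(h) for h, _ in terms)   # 1-norm of the Pauli coefficients
spec = np.linalg.norm(H, 2)             # spectral norm ||H||_2
print(h_tot, spec)
```

By the triangle inequality \(\|H\|_{2}\leq h_{tot}\) always holds, since each Pauli operator has unit spectral norm.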
The first result of this work concerns how large \(M_{tot}\) needs to be, and it is summarised in the following theorem. **Theorem 1**.: _Suppose \(\epsilon>\epsilon_{K}\) and \(E_{g}+\epsilon<0\). The total measurement number in Eq. (1) with \(\alpha(\kappa)=256/\kappa\), \(\beta(d)=d^{6}\) and_ \[\gamma=\frac{p_{g}^{2}\epsilon^{2}}{16\|H\|_{2}^{2}\eta^{2}} \tag{2}\] _is sufficient for achieving the permissible error \(\epsilon\) and failure probability \(\kappa\), i.e. \(\hat{E}_{min}\in[E_{g},E_{g}+\epsilon]\) with a probability of at least \(1-\kappa\). Here \(\eta\) is the solution to the equation \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})=E_{g}+\epsilon\), and_ \[E^{\prime}(\eta,\mathbf{a})=\frac{\mathbf{a}^{\dagger}(\mathbf{H}+2C_{ \mathbf{H}}\eta)\mathbf{a}}{\mathbf{a}^{\dagger}(\mathbf{S}+2C_{\mathbf{S}} \eta)\mathbf{a}}. \tag{3}\] See Appendix A for the proof. About the condition \(E_{g}+\epsilon<0\), notice that we can always subtract a positive constant from the Hamiltonian to satisfy this condition. The measurement overhead \(\gamma\) depends on the spectrum of \(\mathbf{S}\). Let's consider a small \(\eta\) and the Taylor expansion of \(E^{\prime}\). The minimum value of the zeroth-order term \(\frac{\mathbf{a}^{\dagger}\mathbf{H}\mathbf{a}}{\mathbf{a}^{\dagger}\mathbf{ S}\mathbf{a}}\) is \(E_{min}\) according to the Rayleigh quotient theorem [34]. Taking this minimum value, we have \(E^{\prime}\simeq E_{min}+s\eta\), where \(s\propto\frac{\mathbf{a}^{\dagger}\mathbf{a}}{\mathbf{a}^{\dagger}\mathbf{S} \mathbf{a}}\) is the first derivative. The solution to \(E^{\prime}=E_{g}+\epsilon\) is \(\eta\simeq(E_{g}+\epsilon-E_{min})/s\), then \(\gamma\simeq u^{2}\left(\frac{p_{g}\mathbf{a}^{\dagger}\mathbf{a}}{\mathbf{a} ^{\dagger}\mathbf{S}\mathbf{a}}\right)^{2}\), where \(u\leqslant\frac{(C_{\mathbf{H}}+\|H\|_{2}C_{\mathbf{S}})\epsilon}{2\|H\|_{2} (\epsilon-\epsilon_{K})}\approx C_{\mathbf{S}}\) under the assumptions \(C_{\mathbf{H}}\approx\|H\|_{2}C_{\mathbf{S}}\) and \(\epsilon-\epsilon_{K}\approx\epsilon\). Therefore, when \(\mathbf{S}\) is close to singular, the overhead can be large. In the above analysis, we add a positive diagonal matrix to \(\hat{\mathbf{S}}\) to overcome the problem caused by an ill-conditioned \(\mathbf{S}\) (see Algorithm 1 and Appendix A). There is an alternative approach, called the thresholding procedure [8; 16], dealing with the same problem, based on which the asymptotic behaviour of \(M\) is provided [20]. ## V Measurement-efficient algorithm As we have shown, the measurement overhead \(\gamma\) depends on the overlap matrix \(\mathbf{S}\), i.e. how we choose the basis of the Krylov subspace. The main advantage of our algorithm is that we utilise a basis resulting in a small \(\gamma\). To generate a standard Krylov subspace, we choose the operators \[f_{k}=(H-E_{0})^{k-1}e^{-\frac{1}{2}(H-E_{0})^{2}\tau^{2}}, \tag{4}\] where \(E_{0}\) is a constant up to choice. We call it the Gaussian-power basis.
The corresponding subspace is a conventional Hamiltonian-power Krylov subspace, but the reference state \(|\varphi\rangle\) has been _effectively_ replaced by \(e^{-\frac{1}{2}(H-E_{0})^{2}\tau^{2}}|\varphi\rangle\). A property of the Gaussian-power basis is that the spectral norm \(\|f_{k}\|_{2}\leqslant(\frac{k-1}{e\tau^{2}})^{\frac{k-1}{2}}\leqslant(\frac{ d-1}{e\tau^{2}})^{\frac{k-1}{2}}\) has an upper bound decreasing exponentially with \(k\), under the condition \(e\tau^{2}>d-1\). Here, \(e\) is Euler's number. This property is essential for the measurement efficiency of our algorithm. We propose to realise the Gaussian-power basis by expressing the operator of interest as a linear combination of unitaries (LCU). The same approach has been taken to realise other bases like power [19; 12], inverse power [18], imaginary-time evolution [35], filter [15] and Gaussian function of the Hamiltonian [36]. Suppose we want to measure the quantity \(\langle\varphi|A|\varphi\rangle\). Here \(A=f_{k}^{\dagger}Hf_{q}\) and \(A=f_{k}^{\dagger}f_{q}\) for \(\mathbf{H}_{k,q}\) and \(\mathbf{S}_{k,q}\), respectively. First, we work out an expression in the form \(A=\sum_{s}q_{s}U_{s}\), where \(U_{s}\) are unitary operators, and \(q_{s}\) are complex coefficients. Then, there are two ways to measure \(\langle\varphi|A|\varphi\rangle\). When the expression has finitely many terms, we can apply \(A\) to the state \(|\varphi\rangle\) by using a set of ancilla qubits; in the literature, LCU usually refers to this approach [37]. Alternatively, we can individually measure each term \(\langle\varphi|U_{s}|\varphi\rangle\) with the Hadamard-test circuit [38] and compute the summation \(\langle\varphi|A|\varphi\rangle\) using the Monte Carlo method [39]. The second way works for an expression with even infinitely many terms and requires only one or zero ancilla qubits [40; 41]. We focus on the Monte Carlo method in this work. Now we give our LCU expression. Suppose we already express \(H\) as a linear combination of Pauli operators; then it is straightforward to work out the LCU expression of \(A\) given the expression of \(f_{k}\). Therefore, we only give the expression of \(f_{k}\). Utilising the Fourier transformation of a modified Hermite function (which is proved in Appendix C), we can express \(f_{k}\) in Eq. (4) as \[f_{k}=\frac{i^{k-1}}{2^{\frac{k-1}{2}}\tau^{k-1}}\int_{-\infty}^{+\infty}dt\,H_{k -1}\left(\frac{t}{\sqrt{2}\tau}\right)g_{\tau}(t)e^{-ixt}, \tag{5}\] where \(H_{n}(u)\) denotes Hermite polynomials, \(g_{\tau}(t)=\frac{1}{\tau\sqrt{2\pi}}e^{-\frac{t^{2}}{2\tau^{2}}}\) is the normalised Gaussian function, \(x=H-E_{0}\), and \(e^{-ixt}\) is the real-time evolution (RTE) operator. ## VI Real time evolution There are various protocols for implementing RTE, including Trotterisation [42; 43; 44], LCU [37; 39; 45; 46] and qDRIFT [47], etc. In most of the protocols, RTE is inexact but approaches the exact one when the circuit depth increases. Therefore, all these protocols work for our algorithm as long as the circuit is sufficiently deep. In this work, we specifically consider an LCU-type protocol, the zeroth-order leading-order-rotation formula [48]. In the leading-order-rotation protocol, RTE is _exact_ even when the circuit depth is finite. In this protocol, we express RTE in the LCU form \(e^{-iHt}=\left[\sum_{r}v_{r}(t/N)V_{r}(t/N)\right]^{N}\), where each \(V_{r}(t)\) is either a Pauli operator or a rotation \(e^{-i\phi(t)\sigma}\) generated by a Pauli operator \(\sigma\), \(v_{r}(t)\) are complex coefficients, and \(N\) is the time step number.
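Setting the implementation of \(e^{-ixt}\) aside for a moment, the integral representation (5) can be sanity-checked by quadrature in the scalar case, i.e. on a single eigenvalue \(x\) of \(H-E_{0}\). The sketch below is ours, with arbitrary illustrative values of \(\tau\), \(k\) and \(x\).

```python
import numpy as np
from numpy.polynomial.hermite import Hermite

tau, k, x = 2.0, 4, 0.7               # illustrative values (our assumption)
t = np.linspace(-40.0, 40.0, 400001)  # quadrature grid; g_tau decays fast
dt = t[1] - t[0]
g = np.exp(-t**2 / (2 * tau**2)) / (tau * np.sqrt(2 * np.pi))
Hk = Hermite.basis(k - 1)(t / (np.sqrt(2) * tau))  # physicists' H_{k-1}
integral = np.sum(Hk * g * np.exp(-1j * x * t)) * dt
lhs = 1j**(k - 1) / (2**((k - 1) / 2) * tau**(k - 1)) * integral
rhs = x**(k - 1) * np.exp(-0.5 * x**2 * tau**2)
print(lhs, rhs)  # lhs is real up to quadrature error and matches rhs
```

Since \(f_{k}\) is a function of \(H\), eigenvalue-wise agreement is equivalent to agreement at the operator level.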
The leading-order-rotation expression of \(e^{-iHt}\) is exact; it is worked out in the spirit of a Taylor expansion and contains infinitely many terms. Notice that \(N\) determines the circuit depth. See Appendix D for details. Substituting the expression of \(e^{-iHt}\) into Eq. (5), we obtain the eventual LCU expression of \(f_{k}\). ## VII Bias and variance Let \(\hat{A}\) be the estimator of \(\langle\varphi|A|\varphi\rangle\). There are two types of errors in \(\hat{A}\), bias and variance. Because RTE is exact in the leading-order-rotation protocol, the estimator \(\hat{A}\) is unbiased. Therefore, we can focus on the variance from now on. If we use the Monte Carlo method and the one-ancilla Hadamard test to evaluate the LCU expression of \(A\), the variance has the upper bound \[\text{Var}(\hat{A})\leqslant\frac{2C_{A}^{2}}{M}, \tag{6}\] where \(2M\) is the measurement number, and \(C_{A}=\sum_{s}|q_{s}|\) is the 1-norm of coefficients in the expression. We call \(C_{A}\) the cost of the LCU expression. We remark that the variance in this form is universal and valid for all algorithms with some proper factor \(C_{A}\). The cost of an LCU expression is related to the spectral norm. Since \(\|U_{s}\|_{2}=1\), we immediately conclude that \(C_{A}\geqslant\|A\|_{2}\). Therefore, when \(\|f_{k}\|_{2}\) decreases exponentially with \(k\), it is possible to measure \(\langle\varphi|A|\varphi\rangle\) with an error decreasing exponentially with \(k\). In our algorithm, we find that the cost has the form \[C_{A}=h_{tot}c_{k}c_{q}\text{ and }c_{k}c_{q} \tag{7}\] for \(\mathbf{H}_{k,q}\) and \(\mathbf{S}_{k,q}\), respectively. \(c_{k}\) is the cost due to \(f_{k}\), and \[c_{k}=\frac{1}{2^{\frac{k-1}{2}}\tau^{k-1}}\int_{-\infty}^{+\infty}dt\left|H_{k-1}\left(\frac{t}{\sqrt{2}\tau}\right)\right|g_{\tau}(t)\times\left[\sum_{r}\left|v_{r}\left(\frac{t}{N}\right)\right|\right]^{N}\leqslant 2\left(\frac{k-1}{e\tau^{2}}\right)^{\frac{k-1}{2}}. \tag{8}\] Here, we have taken the time step number \(N=4eh_{tot}^{2}\tau^{2}\) to work out the upper bound. This already approaches the lower bound of the variance given by the spectral norm (the difference is a factor of 2). As a result, the variance of \(\hat{A}\) decreases exponentially with \(k\) and \(q\). Detailed pseudocodes and analysis are given in Appendix D. There are two ways to apply Theorem 1: First, we can take \(C_{\mathbf{H}}=h_{tot}C_{\mathbf{S}}\) and \(C_{\mathbf{S}}=\max\{c_{k}c_{q}\}\); second, we can rescale the basis by taking \(f_{k}^{\prime}=f_{k}/c_{k}\). Matrix entries \(\hat{\mathbf{H}}_{k,q}^{\prime}\) and \(\hat{\mathbf{S}}_{k,q}^{\prime}\) of the rescaled basis \(\{f_{k}^{\prime}|\varphi\rangle\}\) have variance upper bounds \(2h_{tot}^{2}/M\) and \(2/M\), respectively. Therefore, \(C_{\mathbf{H}}=h_{tot}\) and \(C_{\mathbf{S}}=1\) for the rescaled basis. We take the rescaled basis in the numerical study. ## VIII Measurement overhead benchmarking We benchmark the measurement cost in quantum KSD algorithms listed in Table 1. Two models of strongly correlated systems, namely, the anti-ferromagnetic Heisenberg model and the Hubbard model [49; 50; 51; 7], are used in benchmarking. For each model, the lattice is taken from two categories: regular lattices, including chain and ladder, and randomly generated graphs. We fix the lattice size such that the Hamiltonian can be encoded into ten qubits. For each Hamiltonian specified by the model and lattice, we evaluate all quantum KSD algorithms with the same subspace dimension \(d\) taken from \(2,3,\ldots,30\).
In total, \(257\) instances of \((model,lattice,d)\) are generated to benchmark quantum KSD algorithms. We choose the power algorithm as the standard for comparison. We remark that the power algorithm and the classical Lanczos algorithm yield the same result when the implementation is error-free (i.e. without statistical error and rounding error), and the eventual error in this ideal case is the subspace error \(\epsilon_{K}\). Given the model, lattice and subspace dimension, we compute \(\epsilon_{K}\) of the power algorithm. Then, we take the permissible error \(\epsilon=2\epsilon_{K}\) and compute the overhead factor \(\gamma\) for each algorithm. The empirical distribution of \(\gamma\) is illustrated in Fig. 1. Figure 1: Empirical distribution of the measurement overhead \(\gamma\) for algorithms listed in Table 1. In the Gaussian-power algorithm, we take a random \(E_{0}\) in the interval \([E_{g}-0.1\|H\|_{2},E_{g}+0.1\|H\|_{2}]\), i.e. we assume that we have a preliminary estimation of the ground-state energy with an uncertainty as large as \(10\%\) of the entire spectrum. Details of the numerical calculation are given in Appendix E. From the distribution, we can find that our Gaussian-power algorithm has the smallest measurement overhead, and \(\gamma\) is smaller than \(10^{2}\) for almost all instances. The filter algorithm performs best among the other algorithms. Part of the reason is that we have taken the optimal \(\Delta E\) found by grid search. The median value of \(\gamma\) is about 3 for our Gaussian-power algorithm, \(3\times 10^{4}\) for the filter algorithm, and as large as \(10^{12}\) for some other algorithms. ## IX Composing a projector We can explain the measurement efficiency of our algorithm by composing a projector. In our algorithm with the rescaled basis, the solution is a state in the form \(|\Psi(\mathbf{a})\rangle=\sum_{k=1}^{d}a_{k}c_{k}^{-1}f_{k}|\varphi\rangle\). Ideally, the linear combination of \(f_{k}\) realises a projector onto the ground state, i.e. \(\sum_{k=1}^{d}a_{k}c_{k}^{-1}f_{k}=|\psi_{g}\rangle\langle\psi_{g}|\). Then, \(|\Psi(\mathbf{a})\rangle=\sqrt{p_{g}}|\psi_{g}\rangle\) up to a phase, and \(\mathbf{a}^{\dagger}\mathbf{S}\mathbf{a}=\langle\Psi(\mathbf{a})|\Psi(\mathbf{a})\rangle=p_{g}\). Recalling \(\gamma\approx\left(\frac{p_{g}\mathbf{a}^{\dagger}\mathbf{a}}{\mathbf{a}^{\dagger}\mathbf{S}\mathbf{a}}\right)^{2}=\left(\mathbf{a}^{\dagger}\mathbf{a}\right)^{2}\) (notice that \(C_{\mathbf{S}}=1\)), we can find that \(\mathbf{a}^{\dagger}\mathbf{a}\) determines \(\gamma\). When \(c_{k}\) is smaller, \(|a_{k}|\) required for composing the projector is smaller. In our algorithm, \(c_{k}\) decreases exponentially with \(k\), and the speed of decreasing is controllable via the parameter \(\tau\). In this way, our algorithm results in a small \(\gamma\). To be concrete, let's consider a classic way of composing a projector from Hamiltonian powers using Chebyshev polynomials \(T_{n}\), which has been used for proving the convergence of the Lanczos algorithm [52]. Without loss of generality, we suppose \(\|H\|_{2}=1\). An approximate projector in the form of Chebyshev polynomials is \[\frac{T_{n}(Z)}{T_{n}(z_{1})}=|\psi_{g}\rangle\langle\psi_{g}|+\Omega, \tag{9}\] where \(Z=1-(H-E_{g}-\Delta)\), \(\Delta\) is the energy gap between the ground state and the first excited state, \(z_{1}=1+\Delta\), and \(\Omega\) is an operator with the upper bound \(\|\Omega\|_{2}\leq 2/(z_{1}^{n}+z_{1}^{-n})\).
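The approximate projector (9) and its error bound can be verified directly on a small random Hermitian matrix by evaluating \(T_{n}(Z)\) in the eigenbasis of \(H\). The sketch below is ours; the matrix, dimension and degree are arbitrary illustrative choices.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

rng = np.random.default_rng(0)
dim, n = 8, 12
A = rng.standard_normal((dim, dim))
H = (A + A.T) / 2
H /= np.linalg.norm(H, 2)                 # enforce ||H||_2 = 1
E, V = np.linalg.eigh(H)
Eg, Delta = E[0], E[1] - E[0]
z1 = 1 + Delta
# T_n(Z) with Z = 1 - (H - Eg - Delta), evaluated on the eigenvalues of H
Tz = Chebyshev.basis(n)(1 - (E - Eg - Delta))
P = (V * (Tz / Tz[0])) @ V.T              # T_n(Z)/T_n(z1); note Tz[0] = T_n(z1)
ground = np.outer(V[:, 0], V[:, 0])
err = np.linalg.norm(P - ground, 2)
print(err, "<=", 2 / (z1**n + z1**(-n)))  # bound from Eq. (9)
```

The printed spectral-norm error should fall below the bound and shrink geometrically as \(n\) grows, in line with the discussion that follows.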
To be concrete, let's consider a classic way of composing a projector from Hamiltonian powers using Chebyshev polynomials \(T_{n}\), which has been used for proving the convergence of the Lanczos algorithm [52]. Without loss of generality, we suppose \(\|H\|_{2}=1\). An approximate projector in the form of Chebyshev polynomials is \[\frac{T_{n}(Z)}{T_{n}(z_{1})}=|\psi_{g}\rangle\langle\psi_{g}|+\Omega, \tag{9}\] where \(Z=1-(H-E_{g}-\Delta)\), \(\Delta\) is the energy gap between the ground state and the first excited state, \(z_{1}=1+\Delta\), and \(\Omega\) is an operator with the upper bound \(\|\Omega\|_{2}\leq 2/(z_{1}^{n}+z_{1}^{-n})\). Notice that the error \(\Omega\) depends on \(n\), and its upper bound decreases exponentially with \(n\) if \(\Delta\) is finite. For simplicity, we focus on the case that \(E_{0}=E_{g}\) in the Hamiltonian power. Then, in the expansion \(T_{n}(Z)=\sum_{l=0}^{n}b_{l}(H-E_{g})^{l}\), the coefficients have upper bounds \(|b_{l}|\leqslant n^{l}T_{n}(z_{1})\) increasing exponentially with \(l\). See Appendix F. In the LCU and Monte Carlo approach, large coefficients lead to large variance. Therefore, it is difficult to realise this projector because of the exponentially increasing coefficients. In our algorithm, the exponentially decreasing \(c_{k}\) can cancel out the large \(b_{l}\). We can compose the Chebyshev-polynomial projector with an additional Gaussian factor. Taking \(d=n+1\) and \(a_{k}=c_{k}b_{k-1}/T_{n}(z_{1})\), we have \(\sum_{k=1}^{d}a_{k}c_{k}^{-1}f_{k}=\frac{T_{n}(Z)}{T_{n}(z_{1})}e^{-\frac{1}{2}(H-E_{g})^{2}\tau^{2}}\). Because the Gaussian factor is also a projector onto the ground state, the overall operator is a projector with a smaller error than the Chebyshev-polynomial projector. The corresponding overhead factor is \[\gamma\lesssim 4\frac{1}{1-n^{3}/(e\tau^{2})}. \tag{10}\] When \(\tau\) is sufficiently large, \(\gamma\lesssim 4\). This example shows that the Gaussian-power basis can compose a projector with a small measurement cost. ## X Conclusions In this work, we have proposed a regularised estimator of the ground-state energy (see Algorithm 1). Even in the presence of statistical errors, the regularised estimator is variational (i.e. equal to or above the true ground-state energy) and has a rigorous upper bound. Because of these properties, it provides a universal way of analysing the measurement cost in quantum KSD algorithms, with the result summarised in Theorem 1. This approach can be generalised to analyse the effect of other errors, e.g. imperfect quantum gates. The robustness to statistical errors in our algorithm implies the robustness to other errors. To minimise the measurement cost, we have proposed a protocol for constructing the Hamiltonian power with an additional Gaussian factor. This Gaussian factor is important because it leads to the statistical error decreasing exponentially with the power. Then, we benchmark quantum KSD algorithms with two models of quantum many-body systems. We find that our algorithm requires the smallest measurement number, and a measurement overhead \(\gamma\) of a hundred is almost always sufficient for approaching the theoretical-limit accuracy \(\epsilon_{K}\) of the Lanczos algorithm. In addition to the advantage over the classical Lanczos algorithm in scalability, our result suggests that the quantum algorithm is also competitive in accuracy at a reasonable measurement cost. ###### Acknowledgements. YL thanks Xiaoting Wang and Zhenyu Cai for the helpful discussions. This work is supported by the National Natural Science Foundation of China (Grants No. 12225507 and No. 12088101). ## Appendix A Proof of Theorem 1 In this section, we briefly overview the KSD algorithm and then give the proof of Theorem 1. ### General formalism of KSD algorithm Given a reference state \(|\varphi\rangle\), a Hamiltonian \(H\) and an integer \(d\), the standard Krylov subspace is \(\mathcal{K}=\text{Span}\left(|\varphi\rangle,H|\varphi\rangle,H^{2}|\varphi\rangle,\ldots,H^{d-1}|\varphi\rangle\right)\). This can be seen as taking polynomials \(f_{k}(x)=x^{k-1}\).
The subspace is the same when \(\{f_{1},f_{2},\ldots,f_{d}\}\) is an arbitrary set of linearly independent polynomials of degree at most \(d-1\). Any state in the Krylov subspace can be expressed as \[|\psi_{K}\rangle=\sum_{k=1}^{d}a_{k}f_{k}(H)|\varphi\rangle. \tag{11}\] The state that minimises the Rayleigh quotient \(\langle\psi_{K}|H|\psi_{K}\rangle/\langle\psi_{K}|\psi_{K}\rangle\) is regarded as the approximate ground state of the Hamiltonian \(H\), and the corresponding energy is the approximate ground-state energy, i.e. \[|\psi_{g}\rangle\sim|\psi_{min}\rangle=\operatorname*{arg\,min}_{\psi_{K}\in\mathcal{K}}\frac{\langle\psi_{K}|H|\psi_{K}\rangle}{\langle\psi_{K}|\psi_{K}\rangle}, \tag{10}\] \[E_{g}\sim E_{min}=\min_{\psi_{K}\in\mathcal{K}}\frac{\langle\psi_{K}|H|\psi_{K}\rangle}{\langle\psi_{K}|\psi_{K}\rangle}. \tag{11}\] **Definition 1** (Rayleigh quotient).: For any matrix \(\mathbf{A}\in\mathbb{C}^{d\times d}\), and any non-zero vector \(\mathbf{a}\in\mathbb{C}^{d}\), the Rayleigh quotient is defined as \[\langle\mathbf{A}\rangle(\mathbf{a})=\frac{\mathbf{a}^{\dagger}\mathbf{A}\mathbf{a}}{\mathbf{a}^{\dagger}\mathbf{a}}. \tag{12}\] The approximate ground-state energy can be rewritten as \[E_{min}=\min_{\mathbf{a}\neq 0}\frac{\mathbf{a}^{\dagger}\mathbf{H}\mathbf{a}}{\mathbf{a}^{\dagger}\mathbf{S}\mathbf{a}}=\min_{\mathbf{a}\neq 0}\frac{\langle\mathbf{H}\rangle(\mathbf{a})}{\langle\mathbf{S}\rangle(\mathbf{a})}, \tag{13}\] where \(\mathbf{H}\) and \(\mathbf{S}\) are \(d\times d\) Hermitian matrices as defined in the main text. It can be shown that \(H^{k}|\varphi\rangle\) converges in direction to the eigenstate with the largest absolute eigenvalue as \(k\) increases. This eigenstate is usually the ground state (if not, take \(H\gets H-E_{0}\), where \(E_{0}\) is a suitable positive constant). Therefore, it is justified to express the ground state as a linear combination of \(\{H^{k}|\varphi\rangle\}\) as long as \(|\varphi\rangle\) has a non-zero overlap with the true ground state \(|\psi_{g}\rangle\). However, the convergence with \(k\) causes an inherent problem: basis vectors of the Krylov subspace are nearly linearly dependent, i.e. the overlap matrix \(\mathbf{S}\) is nearly singular and has a large condition number. As a result, the denominator of the Rayleigh quotient can be very small, and a tiny numerical error may cause a large deviation of the approximate ground-state energy \(E_{min}\). According to the Rayleigh-Ritz theorem (Rayleigh quotient theorem) [34], the minimisation problem is equivalent to the generalised eigenvalue problem. The generalised eigenvalue problem can be reduced to an eigenvalue problem, but the singularity issue remains. For example, one can use the Cholesky decomposition to express \(\mathbf{S}\) as the product of an upper-triangular matrix and its conjugate transpose, i.e. \(\mathbf{S}=\mathbf{R}^{\dagger}\mathbf{R}\), and the eigenvalue problem becomes \(\left(\mathbf{R}^{\dagger}\right)^{-1}\mathbf{H}\mathbf{R}^{-1}(\mathbf{R}\mathbf{a})=E\mathbf{R}\mathbf{a}\). The Cholesky decomposition, however, also requires the matrix \(\mathbf{S}\) to be positive-definite. When \(\mathbf{S}\) is nearly singular, the Cholesky decomposition is unstable. One way to address the singularity issue is by adding a positive diagonal matrix to \(\mathbf{S}\). In this work, we also add a diagonal matrix to \(\mathbf{H}\) to ensure that the Rayleigh quotient is variational, i.e. the energy is not lower than the ground-state energy (with a controllable probability).
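For illustration, the minimisation in Eq. (13) is exactly the generalised eigenvalue problem \(\mathbf{H}\mathbf{a}=E\mathbf{S}\mathbf{a}\); standard libraries solve it via the Cholesky reduction described above and therefore inherit the same instability when \(\mathbf{S}\) is nearly singular. A minimal sketch (the function name is ours):

```python
import numpy as np
from scipy.linalg import eigh

def ksd_minimise(H, S):
    """Return E_min and the minimising coefficient vector a (Eq. (13))."""
    # scipy reduces H a = E S a via a Cholesky factorisation of S internally,
    # so this call fails or becomes inaccurate when S is nearly singular.
    evals, evecs = eigh(H, S)
    return evals[0], evecs[:, 0]
```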
An alternative way to address the singularity issue is the so-called thresholding method, see Refs. [8; 16]. ### Measurement cost in quantum KSD In this section, we develop theoretical tools for analysing the measurement cost in quantum KSD algorithms. The exact values of \(\mathbf{H}\) and \(\mathbf{S}\) are unknown, and what we have are their approximate matrices \(\hat{\mathbf{H}}\) and \(\hat{\mathbf{S}}\) computed by the quantum computer. We assume that the quantum computer is fully fault-tolerant, and errors in \(\hat{\mathbf{H}}\) and \(\hat{\mathbf{S}}\) are due to the statistical fluctuations. Details of computing \(\mathbf{H}\) and \(\mathbf{S}\) with a quantum computer are given in Appendix D. **Lemma 1**.: _Let \(\kappa\) and \(\eta\) be any positive numbers. The inequality_ \[\Pr\left(\|\hat{\mathbf{H}}-\mathbf{H}\|_{2}\leqslant C_{\mathbf{H}}\eta\text{ and }\|\hat{\mathbf{S}}-\mathbf{S}\|_{2}\leqslant C_{\mathbf{S}}\eta\right)\geqslant 1-\kappa \tag{14}\] _holds when \(M\geqslant\frac{4d^{4}}{\kappa\eta^{2}}\)._ Proof.: Recall the variance upper bounds of \(\hat{\mathbf{H}}_{k,q}\) and \(\hat{\mathbf{S}}_{k,q}\) given in the main text. When \(M\geqslant\frac{4d^{4}}{\kappa\eta^{2}}\), \[\operatorname{Var}(\hat{\mathbf{H}}_{k,q})\leqslant\frac{\kappa C_{\mathbf{H}}^{2}\eta^{2}}{2d^{4}}, \tag{15}\] \[\operatorname{Var}(\hat{\mathbf{S}}_{k,q})\leqslant\frac{\kappa C_{\mathbf{S}}^{2}\eta^{2}}{2d^{4}}. \tag{16}\] According to Chebyshev's inequality, we have \[\Pr\left(|\hat{\mathbf{H}}_{k,q}-\mathbf{H}_{k,q}|\geqslant\frac{C_{\mathbf{H}}\eta}{d}\right)\leqslant\frac{\kappa}{2d^{2}}, \tag{17}\] \[\Pr\left(|\hat{\mathbf{S}}_{k,q}-\mathbf{S}_{k,q}|\geqslant\frac{C_{\mathbf{S}}\eta}{d}\right)\leqslant\frac{\kappa}{2d^{2}}. \tag{18}\] Since matrix entries are measured independently, the estimators \(\hat{\mathbf{H}}_{k,q}\) and \(\hat{\mathbf{S}}_{k,q}\) are independent. We have \[\Pr\left(\begin{array}{c}\forall k,q\ |\hat{\mathbf{H}}_{k,q}-\mathbf{H}_{k,q}|\leqslant\frac{C_{\mathbf{H}}\eta}{d}\\ \text{ and }|\hat{\mathbf{S}}_{k,q}-\mathbf{S}_{k,q}|\leqslant\frac{C_{\mathbf{S}}\eta}{d}\end{array}\right)\geqslant\left[\left(1-\frac{\kappa}{2d^{2}}\right)^{d^{2}}\right]^{2}\geqslant 1-\kappa, \tag{19}\] where Bernoulli's inequality is used. Let \(\mathbf{A}\) be an \(m\times n\) matrix with entries \(A_{i,j}\); its spectral norm satisfies \[\|\mathbf{A}\|_{2}\leqslant\sqrt{mn}\max_{i,j}|A_{i,j}|. \tag{20}\] This is because of the well-known relation \(\|\mathbf{A}\|_{2}\leqslant\|\mathbf{A}\|_{F}\), where the Frobenius norm is \(\|\mathbf{A}\|_{F}=\left(\sum_{i=1}^{m}\sum_{j=1}^{n}|A_{i,j}|^{2}\right)^{1/2}\). Therefore, when \(|\hat{\mathbf{H}}_{k,q}-\mathbf{H}_{k,q}|\leqslant\frac{C_{\mathbf{H}}\eta}{d}\) and \(|\hat{\mathbf{S}}_{k,q}-\mathbf{S}_{k,q}|\leqslant\frac{C_{\mathbf{S}}\eta}{d}\) for all \(k\) and \(q\), we have \(\|\hat{\mathbf{H}}-\mathbf{H}\|_{2}\leqslant C_{\mathbf{H}}\eta\) and \(\|\hat{\mathbf{S}}-\mathbf{S}\|_{2}\leqslant C_{\mathbf{S}}\eta\).
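The counting in Lemma 1 can be probed empirically. Below is a minimal sketch with hypothetical Gaussian entry noise saturating the variance bound \(2C^{2}/M\); since Chebyshev's inequality is distribution-free, the observed failure rate should sit far below \(\kappa\):

```python
import numpy as np

def required_shots(d, kappa, eta):
    return int(np.ceil(4 * d**4 / (kappa * eta**2)))

rng = np.random.default_rng(1)
d, kappa, eta, C = 4, 0.1, 0.2, 1.0
M = required_shots(d, kappa, eta)
trials = 500
fails = sum(
    np.linalg.norm(rng.normal(scale=C * np.sqrt(2 / M), size=(d, d)), 2) > C * eta
    for _ in range(trials)
)
print(M, fails / trials)  # empirical failure rate, far below kappa = 0.1
```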
**Definition 2**.: \[E(\mathbf{a})=\frac{\langle\mathbf{H}\rangle(\mathbf{a})}{\langle\mathbf{S}\rangle(\mathbf{a})}, \tag{13}\] \[\hat{E}(\eta,\mathbf{a})=\frac{\langle\hat{\mathbf{H}}\rangle(\mathbf{a})+C_{\mathbf{H}}\eta}{\langle\hat{\mathbf{S}}\rangle(\mathbf{a})+C_{\mathbf{S}}\eta}, \tag{14}\] \[E^{\prime}(\eta,\mathbf{a})=\frac{\langle\mathbf{H}\rangle(\mathbf{a})+2C_{\mathbf{H}}\eta}{\langle\mathbf{S}\rangle(\mathbf{a})+2C_{\mathbf{S}}\eta}. \tag{15}\] **Lemma 2**.: _If \(\|\hat{\mathbf{H}}-\mathbf{H}\|_{2}\leqslant C_{\mathbf{H}}\eta\) and \(\|\hat{\mathbf{S}}-\mathbf{S}\|_{2}\leqslant C_{\mathbf{S}}\eta\), the following statements hold:_ 1. \(\hat{E}(\eta,\mathbf{a})\geqslant\min\{E(\mathbf{a}),0\}\)_;_ 2. _If_ \(\hat{E}(\eta,\mathbf{a})<0\)_, then_ \(\hat{E}(\eta,\mathbf{a})\leqslant E^{\prime}(\eta,\mathbf{a})\)_;_ 3. _If_ \(E^{\prime}(\eta,\mathbf{a})<0\)_, then_ \(\hat{E}(\eta,\mathbf{a})<0\)_._ Proof.: Consider the error in the denominator, \[|\langle\hat{\mathbf{S}}\rangle(\mathbf{a})-\langle\mathbf{S}\rangle(\mathbf{a})|=\left|\frac{\mathbf{a}^{\dagger}\hat{\mathbf{S}}\mathbf{a}-\mathbf{a}^{\dagger}\mathbf{S}\mathbf{a}}{\mathbf{a}^{\dagger}\mathbf{a}}\right|=\left|\frac{\mathbf{a}^{\dagger}(\hat{\mathbf{S}}-\mathbf{S})\mathbf{a}}{\mathbf{a}^{\dagger}\mathbf{a}}\right|\leqslant\frac{\|\mathbf{a}\|\,\|\hat{\mathbf{S}}-\mathbf{S}\|_{2}\,\|\mathbf{a}\|}{\mathbf{a}^{\dagger}\mathbf{a}}\leqslant C_{\mathbf{S}}\eta, \tag{16}\] from which we obtain \[0<\langle\mathbf{S}\rangle(\mathbf{a})\leqslant\langle\hat{\mathbf{S}}\rangle(\mathbf{a})+C_{\mathbf{S}}\eta\leqslant\langle\mathbf{S}\rangle(\mathbf{a})+2C_{\mathbf{S}}\eta. \tag{17}\] Similarly, we have \[\langle\mathbf{H}\rangle(\mathbf{a})\leqslant\langle\hat{\mathbf{H}}\rangle(\mathbf{a})+C_{\mathbf{H}}\eta\leqslant\langle\mathbf{H}\rangle(\mathbf{a})+2C_{\mathbf{H}}\eta. \tag{18}\] Let \(a\), \(b\), \(c\) and \(d\) be positive numbers; it can be shown that \[\frac{-a}{b}<\frac{-a+c}{b+d}. \tag{19}\] If \(\hat{E}(\eta,\mathbf{a})<0\), then \(\langle\hat{\mathbf{H}}\rangle(\mathbf{a})+C_{\mathbf{H}}\eta<0\). Under this condition, by using Eqs. (17)-(19), we get \[\frac{\langle\mathbf{H}\rangle(\mathbf{a})}{\langle\mathbf{S}\rangle(\mathbf{a})}\leqslant\frac{\langle\hat{\mathbf{H}}\rangle(\mathbf{a})+C_{\mathbf{H}}\eta}{\langle\hat{\mathbf{S}}\rangle(\mathbf{a})+C_{\mathbf{S}}\eta}\leqslant\frac{\langle\mathbf{H}\rangle(\mathbf{a})+2C_{\mathbf{H}}\eta}{\langle\mathbf{S}\rangle(\mathbf{a})+2C_{\mathbf{S}}\eta}, \tag{20}\] i.e. \(E(\mathbf{a})\leqslant\hat{E}(\eta,\mathbf{a})\leqslant E^{\prime}(\eta,\mathbf{a})\). The second statement is proven. If \(E(\mathbf{a})<0\), we have \(\langle\mathbf{H}\rangle(\mathbf{a})<0\), then \(\hat{E}(\eta,\mathbf{a})\geqslant E(\mathbf{a})\). If \(E(\mathbf{a})\geqslant 0\), it is obvious that \(\hat{E}(\eta,\mathbf{a})\geqslant 0\). Combining these two cases, we prove the first statement. If \(E^{\prime}(\eta,\mathbf{a})<0\), then \(\langle\hat{\mathbf{H}}\rangle(\mathbf{a})+C_{\mathbf{H}}\eta\leqslant\langle\mathbf{H}\rangle(\mathbf{a})+2C_{\mathbf{H}}\eta<0\). Therefore, \(\hat{E}(\eta,\mathbf{a})<0\). The third statement is proven. The first quotient \(E(\mathbf{a})\) is the Rayleigh quotient, which gives \(E_{min}\) after minimisation. However, the exact matrices \(\mathbf{H}\) and \(\mathbf{S}\) are not available. In practice, we minimise the second quotient \(\hat{E}(\eta,\mathbf{a})\) to calculate the ground-state energy.
The error in the ground-state energy calculated using \(\hat{E}(\eta,\mathbf{a})\) depends on the noise in the matrices \(\hat{\mathbf{H}}\) and \(\hat{\mathbf{S}}\). Therefore, we introduce the third quotient \(E^{\prime}(\eta,\mathbf{a})\), which is a conditional upper bound of \(\hat{E}(\eta,\mathbf{a})\). Notice that \(E(\mathbf{a})\) is a conditional lower bound of \(\hat{E}(\eta,\mathbf{a})\). Then, \(E(\mathbf{a})\) and \(E^{\prime}(\eta,\mathbf{a})\) give an upper bound of the error. We have shown the relation between these three quotients. Next, we give the relation between them after minimisation. **Lemma 3**.: _Under conditions_ 1. \(\|\hat{\mathbf{H}}-\mathbf{H}\|_{2}\leqslant C_{\mathbf{H}}\eta\) _and_ \(\|\hat{\mathbf{S}}-\mathbf{S}\|_{2}\leqslant C_{\mathbf{S}}\eta\)_, and_ 2. \(E_{g}<0\) _and_ \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})<0\)_,_ _the following statement holds,_ \[E_{g}\leqslant\min_{\mathbf{a}\neq 0}\hat{E}(\eta,\mathbf{a})\leqslant\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a}). \tag{21}\] Proof.: According to the first statement in Lemma 2, \(\forall\mathbf{a}\neq 0\), \(\hat{E}(\eta,\mathbf{a})\geqslant\min\{E(\mathbf{a}),0\}\). Since \(E(\mathbf{a})\geqslant E_{g}\) and \(0>E_{g}\), we have \(\min_{\mathbf{a}}\hat{E}(\eta,\mathbf{a})\geqslant E_{g}\). Let \(\mathbf{a}^{*}=\arg\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})\). According to the third statement in Lemma 2, the condition \(E^{\prime}(\eta,\mathbf{a}^{*})<0\) implies that \(\hat{E}(\eta,\mathbf{a}^{*})<0\). Under this condition, according to the second statement in Lemma 2, \(\hat{E}(\eta,\mathbf{a}^{*})\leqslant E^{\prime}(\eta,\mathbf{a}^{*})\). Therefore, we have \[\min_{\mathbf{a}\neq 0}\hat{E}(\eta,\mathbf{a})\leqslant\hat{E}(\eta,\mathbf{a}^{*})\leqslant E^{\prime}(\eta,\mathbf{a}^{*})=\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a}). \tag{22}\] The error of the KSD algorithm with exact matrices is \(\min_{\mathbf{a}\neq 0}E(\mathbf{a})-E_{g}\). The actual error in practice is \(\min_{\mathbf{a}\neq 0}\hat{E}(\eta,\mathbf{a})-E_{g}\). The upper bound of the actual error is \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})-E_{g}\). **Lemma 4**.: _Under the condition \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})<0\), when \(\epsilon>\min_{\mathbf{a}\neq 0}E(\mathbf{a})-E_{g}\), the equation \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})=E_{g}+\epsilon\) has a positive solution, and the solution is unique._ Proof.: Let \(\mathbf{a}^{*}=\arg\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})\), and let \(\eta^{\prime}\) be any positive number with \(\eta^{\prime}<\eta\). It is obvious that \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta^{\prime},\mathbf{a})\leqslant E^{\prime}(\eta^{\prime},\mathbf{a}^{*})<E^{\prime}(\eta,\mathbf{a}^{*})=\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})<0\). Therefore, under the condition \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})<0\), \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})\) is a continuous monotonically increasing function of \(\eta\) when \(\eta\geqslant 0\). When \(\eta=0\), \(\min_{\mathbf{a}\neq 0}E^{\prime}(0,\mathbf{a})=\min_{\mathbf{a}\neq 0}E(\mathbf{a})\). Therefore, for all \(E_{g}+\epsilon>\min_{\mathbf{a}\neq 0}E(\mathbf{a})\), the equation has a positive solution, and the solution is unique.
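In practice, both regularised quotients are minimised as generalised eigenproblems with identity shifts, because \(\langle\mathbf{A}\rangle(\mathbf{a})+c=\langle\mathbf{A}+c\mathbf{I}\rangle(\mathbf{a})\); the \(\eta\) of Lemma 4 can then be found by bisection thanks to monotonicity. A minimal sketch (function names are ours; the shifted overlap matrix is positive-definite for \(\eta>0\)):

```python
import numpy as np
from scipy.linalg import eigh

def min_quotient(A, B, shift_A, shift_B):
    """min_a (a^dag (A + shift_A I) a) / (a^dag (B + shift_B I) a)."""
    d = A.shape[0]
    return eigh(A + shift_A * np.eye(d), B + shift_B * np.eye(d),
                eigvals_only=True)[0]

def min_E_hat(H_hat, S_hat, CH, CS, eta):  # min_a E_hat(eta, a)
    return min_quotient(H_hat, S_hat, CH * eta, CS * eta)

def min_E_prime(H, S, CH, CS, eta):        # min_a E'(eta, a); note the 2*eta
    return min_quotient(H, S, 2 * CH * eta, 2 * CS * eta)

def solve_eta(H, S, CH, CS, Eg, eps, iters=60):
    """Bisection for min_a E'(eta, a) = Eg + eps; assumes eps satisfies Lemma 4."""
    lo, hi = 0.0, 1.0
    while min_E_prime(H, S, CH, CS, hi) < Eg + eps:
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if min_E_prime(H, S, CH, CS, mid) < Eg + eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```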
#### a.2.1 Proof of the theorem Proof.: The existence of the solution \(\eta\) is proved in Lemma 4. With the solution \(\eta\), \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})=E_{g}+\epsilon\). The bounds \(\|\hat{\mathbf{H}}-\mathbf{H}\|_{2}\leqslant C_{\mathbf{H}}\eta\) and \(\|\hat{\mathbf{S}}-\mathbf{S}\|_{2}\leqslant C_{\mathbf{S}}\eta\) hold up to the failure probability \(\kappa\), as proved in Lemma 1. Then, according to Lemma 3, \(E_{g}\leqslant\min_{\mathbf{a}\neq 0}\hat{E}(\eta,\mathbf{a})\leqslant\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})=E_{g}+\epsilon\) holds up to the same failure probability. The total number of measurements is \[M_{tot}=M\times 2\times d^{2}\times 2=\frac{16d^{6}}{\kappa\eta^{2}}=\frac{256\|H\|_{2}^{2}}{\kappa p_{g}^{2}\epsilon^{2}}\times d^{6}\times\frac{p_{g}^{2}\epsilon^{2}}{16\|H\|_{2}^{2}\eta^{2}}. \tag{10}\] Notice that for each matrix entry \(\hat{\mathbf{H}}_{k,q}\) or \(\hat{\mathbf{S}}_{k,q}\), we need \(M\) measurements for its real part and \(M\) measurements for its imaginary part. There are two matrices, and each matrix has \(d^{2}\) entries. ### Cost for measuring the energy We consider the ideal case in which we can realise an ideal projection onto the true ground state. To use results in Appendix A.2, we assume that \(f_{1}=|\psi_{g}\rangle\langle\psi_{g}|\) is the projection operator and take \(d=1\) in the KSD algorithm. Then, we have \[\mathbf{H}_{1,1}=\langle\varphi|f_{1}^{\dagger}Hf_{1}|\varphi\rangle=p_{g}E_{g}, \tag{11}\] \[\mathbf{S}_{1,1}=\langle\varphi|f_{1}^{\dagger}f_{1}|\varphi\rangle=p_{g}, \tag{12}\] and \(E_{min}=\mathbf{H}_{1,1}/\mathbf{S}_{1,1}=E_{g}\), i.e. \(\epsilon_{K}=0\). We suppose \(C_{\mathbf{H}}=\|H\|_{2}\) and \(C_{\mathbf{S}}=1\). When we take \(\eta=\frac{p_{g}\epsilon}{4\|H\|_{2}}\), \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})\leqslant E_{g}+\epsilon\). Notice that \(\min_{\mathbf{a}\neq 0}E^{\prime}(\eta,\mathbf{a})=(\mathbf{H}_{1,1}+2\|H\|_{2}\eta)/(\mathbf{S}_{1,1}+2\eta)\leqslant E_{g}+4\|H\|_{2}\eta/p_{g}\). Accordingly, the total measurement number in the ideal case has an upper bound \[M_{tot}\leqslant\frac{256\|H\|_{2}^{2}}{\kappa p_{g}^{2}\epsilon^{2}}, \tag{13}\] which is the first factor in Eq. (10). ## Appendix B Optimised \(\alpha\) and \(\beta\) factors In Appendix A, we have demonstrated that in the general and rigorous result, the measurement overhead factors are \(\alpha(\kappa)=256/\kappa\) and \(\beta(d)=d^{6}\). In this section, we will show that these two factors can be reduced in our algorithm and some other KSD algorithms. Now, we show that the factor of \(256\) can be reduced to \(16\). First, in our algorithm (i.e. GP) and some other KSD algorithms (i.e. P, CP, IP, ITE and F), the matrices \(\mathbf{H}\) and \(\mathbf{S}\) are real. In this case, we only need to measure the real part. The cost is directly reduced by a factor of two. Only measuring the real part also reduces variances by a factor of two, i.e. from \(\mathrm{Var}(\hat{\mathbf{H}}_{k,q})\leqslant 2C_{\mathbf{H}}^{2}/M\) and \(\mathrm{Var}(\hat{\mathbf{S}}_{k,q})\leqslant 2C_{\mathbf{S}}^{2}/M\) to \(\mathrm{Var}(\hat{\mathbf{H}}_{k,q})\leqslant C_{\mathbf{H}}^{2}/M\) and \(\mathrm{Var}(\hat{\mathbf{S}}_{k,q})\leqslant C_{\mathbf{S}}^{2}/M\). Because of the reduced variance, the cost is reduced by another factor of two. Overall, the factor of \(256\) is reduced to \(64\) in algorithms using real matrices. Second, when the failure probability \(\kappa\) is low, typically \(\|\hat{\mathbf{H}}-\mathbf{H}\|_{2}\ll C_{\mathbf{H}}\eta\) and \(\|\hat{\mathbf{S}}-\mathbf{S}\|_{2}\ll C_{\mathbf{S}}\eta\).
In this case, the error is overestimated in \(E^{\prime}\), and the typical error is approximately given by \[E^{\prime\prime}(\eta,\mathbf{a})=\frac{\mathbf{a}^{\dagger}(\mathbf{H}+C_{\mathbf{H}}\eta)\mathbf{a}}{\mathbf{a}^{\dagger}(\mathbf{S}+C_{\mathbf{S}}\eta)\mathbf{a}}. \tag{14}\] Notice that a factor of two has been removed from the numerator and denominator compared with \(E^{\prime}\). If we consider the typical error \(\epsilon=E^{\prime\prime}-E_{g}\) instead of the error upper bound \(\epsilon=E^{\prime}-E_{g}\), the required sampling cost is reduced by a factor of four. For these two reasons, the factor of \(256\) is reduced to \(16\). Next, we show that taking \(\alpha=O(\ln\frac{1}{\kappa})\) is sufficient in practice. Further, we show that \(\beta\) can be reduced to \(O(d^{2})\). ### Gaussian statistical error We have used Chebyshev's inequality to analyse the sampling cost, which gives a rigorous bound that holds for general distributions of \(\hat{\mathbf{H}}_{k,q}\) and \(\hat{\mathbf{S}}_{k,q}\). However, in practice \(\hat{\mathbf{H}}_{k,q}\) and \(\hat{\mathbf{S}}_{k,q}\) usually (approximately) obey the normal distribution. Let's focus on real-matrix algorithms. Under the normal distribution assumption, \[\mathrm{Pr}\left(|\hat{\mathbf{H}}_{k,q}-\mathbf{H}_{k,q}|\geqslant\frac{C_{\mathbf{H}}\eta}{d}\right)=\mathrm{erfc}\left(\frac{C_{\mathbf{H}}\eta}{d\sqrt{2\mathrm{Var}(\hat{\mathbf{H}}_{k,q})}}\right)\leqslant\mathrm{erfc}\left(\frac{\sqrt{M}\eta}{\sqrt{2}d}\right)\leqslant e^{-\frac{M\eta^{2}}{2d^{2}}}, \tag{15}\] in which the inequalities \(\mathrm{Var}(\hat{\mathbf{H}}_{k,q})\leqslant\frac{C_{\mathbf{H}}^{2}}{M}\) and \(\mathrm{erfc}(x)\leqslant e^{-x^{2}}\) have been used. Similarly, we have \[\mathrm{Pr}\left(|\hat{\mathbf{S}}_{k,q}-\mathbf{S}_{k,q}|\geqslant\frac{C_{\mathbf{S}}\eta}{d}\right)\leqslant e^{-\frac{M\eta^{2}}{2d^{2}}}. \tag{16}\] To make sure that \(\|\hat{\mathbf{H}}-\mathbf{H}\|_{2}\leqslant C_{\mathbf{H}}\eta\) and \(\|\hat{\mathbf{S}}-\mathbf{S}\|_{2}\leqslant C_{\mathbf{S}}\eta\) with a probability of at least \(1-\kappa\), we take \(M\) such that \[e^{-\frac{M\eta^{2}}{2d^{2}}}=\frac{\kappa}{2d^{2}}. \tag{17}\] Then, \[M=\frac{2d^{2}}{\eta^{2}}\ln\frac{2d^{2}}{\kappa}, \tag{18}\] and the total measurement number is \[M_{tot}=M\times 1\times d^{2}\times 2=\frac{4d^{4}}{\eta^{2}}\ln\frac{2d^{2}}{\kappa}. \tag{19}\] Assuming \(\kappa\ll\frac{1}{2d^{2}}\) and neglecting \(2d^{2}\) in the logarithm yields \(\alpha(\kappa)=64\ln\frac{1}{\kappa}\) and \(\beta(d)=d^{4}\). Considering the typical error instead of the upper bound, the factor of \(64\) can be reduced to \(16\). ### Optimised measurement protocol and Hankel matrices In our algorithm (i.e. GP) and some other KSD algorithms (i.e. P, IP and ITE), \(\mathbf{H}\) and \(\mathbf{S}\) are real Hankel matrices. This section gives an optimised measurement protocol for real Hankel matrices. **Definition 3**.: Let \(\mathbf{g}\in\mathbb{R}^{2d-1}\) and \(\mathbf{G}\in\mathbb{R}^{d\times d}\). If \(\mathbf{G}_{i,j}=\mathbf{g}_{i+j-1}\), then \(\mathbf{G}\) is a \(d\times d\) real Hankel matrix. When \(\mathbf{H}\) and \(\mathbf{S}\) are Hankel matrices, we only need to measure \(2d-1\) matrix entries for each matrix to construct \(\hat{\mathbf{H}}\) and \(\hat{\mathbf{S}}\). Specifically, we measure \(\mathbf{H}_{\lceil\frac{l}{2}\rceil,\lfloor\frac{l}{2}\rfloor}\) and \(\mathbf{S}_{\lceil\frac{l}{2}\rceil,\lfloor\frac{l}{2}\rfloor}\) with \(l=2,3,\ldots,2d\). Then we take \(\hat{\mathbf{H}}_{i,j}=\hat{\mathbf{H}}_{\lceil\frac{i+j}{2}\rceil,\lfloor\frac{i+j}{2}\rfloor}\) and \(\hat{\mathbf{S}}_{i,j}=\hat{\mathbf{S}}_{\lceil\frac{i+j}{2}\rceil,\lfloor\frac{i+j}{2}\rfloor}\) for all \(i\) and \(j\). Using the above measurement protocol, \(\hat{\mathbf{H}}\) and \(\hat{\mathbf{S}}\) are also Hankel matrices. Then \(\hat{\mathbf{H}}-\mathbf{H}\) and \(\hat{\mathbf{S}}-\mathbf{S}\) are Hankel matrices.
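A minimal sketch of this tiling, assuming the \(2d-1\) anti-diagonal values have been measured and stored in a vector \(\mathbf{g}\) indexed as in Definition 3 (an entry of a Hankel matrix depends only on \(i+j\)):

```python
import numpy as np
from scipy.linalg import hankel

def hankel_from_measurements(g):
    """Tile the 2d-1 measured values g (Definition 3) into a d x d Hankel matrix."""
    d = (len(g) + 1) // 2
    return hankel(g[:d], g[d - 1:])  # first column g_1..g_d, last row g_d..g_{2d-1}

g = np.linspace(1.0, 0.1, 9)         # placeholder measurements, d = 5
H_hat = hankel_from_measurements(g)
assert np.isclose(H_hat[0, 3], H_hat[1, 2]) and np.isclose(H_hat[3, 2], H_hat[2, 3])
```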
**Lemma 5**.: _If \(\mathbf{g}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\), then_ \[\Pr(\|\mathbf{G}\|_{2}\geqslant\eta)\leqslant 2de^{-\frac{\eta^{2}}{2d\sigma^{2}}}. \tag{30}\] Proof.: The proof of Lemma 5 can be found in Ref. [53]. For real matrices, the variance upper bounds are \(\mathrm{Var}(\hat{\mathbf{H}}_{k,q})\leqslant\frac{C_{\mathbf{H}}^{2}}{M}\) and \(\mathrm{Var}(\hat{\mathbf{S}}_{k,q})\leqslant\frac{C_{\mathbf{S}}^{2}}{M}\). Assuming that the variance takes its upper bound, the standard deviation of \((\hat{\mathbf{H}}_{k,q}-\mathbf{H}_{k,q})/C_{\mathbf{H}}\) is \(\sigma=\frac{1}{\sqrt{M}}\). Then, we can apply the concentration inequality in Lemma 5 to the Hankel matrix \((\hat{\mathbf{H}}-\mathbf{H})/C_{\mathbf{H}}\), and we have \[\Pr(\|\hat{\mathbf{H}}-\mathbf{H}\|_{2}\geqslant C_{\mathbf{H}}\eta)\leqslant 2de^{-\frac{M\eta^{2}}{2d}}. \tag{31}\] Similarly, \[\Pr(\|\hat{\mathbf{S}}-\mathbf{S}\|_{2}\geqslant C_{\mathbf{S}}\eta)\leqslant 2de^{-\frac{M\eta^{2}}{2d}}. \tag{32}\] To make sure \(\|\hat{\mathbf{H}}-\mathbf{H}\|_{2}\leqslant C_{\mathbf{H}}\eta\) and \(\|\hat{\mathbf{S}}-\mathbf{S}\|_{2}\leqslant C_{\mathbf{S}}\eta\) with a probability of at least \(1-\kappa\), we take \(M\) such that \[2de^{-\frac{M\eta^{2}}{2d}}=\frac{\kappa}{2}. \tag{33}\] Then, \[M=\frac{2d}{\eta^{2}}\ln\frac{4d}{\kappa}, \tag{34}\] and the total measurement number is \[M_{tot}=M\times 1\times(2d-1)\times 2=\frac{4d(2d-1)}{\eta^{2}}\ln\frac{4d}{\kappa}. \tag{35}\] Assuming \(\kappa\ll\frac{1}{4d}\) and neglecting \(4d\) in the logarithm yields \(\alpha=64\ln\frac{1}{\kappa}\) and \(\beta=d(2d-1)\). Again, considering the typical error instead of the upper bound, the factor of \(64\) can be reduced to \(16\). ## Appendix C Gaussian-power basis In this section, first, we work out the norm upper bound of the \(f_{k}\) operators, and then we prove Eq. (5). ### Norm upper bound Notice that, for \(f_{k}\) operators of the Gaussian-power basis, \[\tau^{k-1}\|f_{k}\|_{2}\leqslant\max_{y\in\mathbb{R}}|y^{k-1}e^{-\frac{1}{2}y^{2}}|. \tag{36}\] The above inequality can be proved via the spectral decomposition of the operator \((H-E_{0})\tau\), and \(y\) corresponds to eigenvalues of \((H-E_{0})\tau\). It is obvious that \(\max_{y\in\mathbb{R}}|y^{k-1}e^{-\frac{1}{2}y^{2}}|=\left(\frac{k-1}{e}\right)^{\frac{k-1}{2}}\). Therefore, \(\|f_{k}\|_{2}\leqslant(\frac{k-1}{e\tau^{2}})^{\frac{k-1}{2}}\). ### The integral The following three properties of Hermite polynomials will be used later: i) The Fourier transform of \(H_{n}(u)e^{-\frac{u^{2}}{2}}\) gives \[\int_{-\infty}^{+\infty}duH_{n}(u)e^{-\frac{u^{2}}{2}}e^{-i\nu u}=(-i)^{n}\sqrt{2\pi}e^{-\frac{\nu^{2}}{2}}H_{n}(\nu); \tag{37}\] ii) The inverse explicit expression of the Hermite polynomials is \[u^{n}=\sum_{m=0}^{\lfloor\frac{n}{2}\rfloor}2^{-n}\frac{n!}{m!(n-2m)!}H_{n-2m}(u); \tag{38}\] and iii) The multiplication theorem of the Hermite polynomials, i.e. \[H_{n}(\gamma u)=\sum_{m=0}^{\lfloor\frac{n}{2}\rfloor}\gamma^{n-2m}(\gamma^{2}-1)^{m}\frac{n!}{m!(n-2m)!}H_{n-2m}(u). \tag{39}\] **Lemma 6**.: _Eq. (5) is true._
Proof.: Taking \(u=\frac{t}{\tau}\) and \(\gamma=\frac{1}{\sqrt{2}}\) in Eq. (39), we have \[H_{n}\left(\frac{t}{\sqrt{2}\tau}\right)=\sum_{m=0}^{\lfloor\frac{n}{2}\rfloor}(-1)^{m}2^{-\frac{n}{2}}\frac{n!}{m!(n-2m)!}H_{n-2m}\left(\frac{t}{\tau}\right). \tag{40}\] Taking \(u=\frac{t}{\tau}\) and \(\nu=x\tau\) in Eq. (37), we have \[\int_{-\infty}^{+\infty}dtH_{n-2m}\left(\frac{t}{\tau}\right)g_{\tau}(t)e^{-ixt}=(-i)^{n-2m}e^{-\frac{1}{2}x^{2}\tau^{2}}H_{n-2m}(x\tau). \tag{41}\] Combining Eq. (40) and Eq. (41), we have \[\int_{-\infty}^{+\infty}dtH_{n}\left(\frac{t}{\sqrt{2}\tau}\right)g_{\tau}(t)e^{-ixt}=\sum_{m=0}^{\lfloor\frac{n}{2}\rfloor}(-i)^{n}2^{-\frac{n}{2}}\frac{n!}{m!(n-2m)!}e^{-\frac{1}{2}x^{2}\tau^{2}}H_{n-2m}(x\tau). \tag{32}\] Taking \(u=x\tau\) in Eq. (38) and substituting it into Eq. (32), we have \[\int_{-\infty}^{+\infty}dtH_{n}\left(\frac{t}{\sqrt{2}\tau}\right)g_{\tau}(t)e^{-ixt}=(-i)^{n}2^{\frac{n}{2}}(x\tau)^{n}e^{-\frac{1}{2}x^{2}\tau^{2}}, \tag{33}\] which is equivalent to Eq. (5). Notice that \(n=k-1\).
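The identity can be spot-checked numerically; a minimal sketch (assuming \(g_{\tau}(t)=e^{-t^{2}/(2\tau^{2})}/(\sqrt{2\pi}\tau)\) and writing \(n=k-1\)):

```python
import numpy as np
from numpy.polynomial.hermite import hermval  # physicists' Hermite H_n

def lhs(n, x, tau, width=12.0, grid=200001):
    t = np.linspace(-width * tau, width * tau, grid)
    g = np.exp(-t**2 / (2 * tau**2)) / (np.sqrt(2 * np.pi) * tau)
    f = hermval(t / (np.sqrt(2) * tau), [0] * n + [1]) * g * np.exp(-1j * x * t)
    return f.sum() * (t[1] - t[0])

def rhs(n, x, tau):
    return (-1j) ** n * 2 ** (n / 2) * (x * tau) ** n * np.exp(-(x * tau) ** 2 / 2)

for n in range(5):
    print(n, np.allclose(lhs(n, 0.7, 2.0), rhs(n, 0.7, 2.0)))  # all True
```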
## Appendix D Algorithm and variance In this section, first, we review the zeroth-order leading-order-rotation formula [48]; then we analyse the variance and work out the upper bounds of the cost; finally, we give the pseudocode of our measurement-efficient algorithm. ### Zeroth-order leading-order-rotation formula Assume that the Hamiltonian is expressed as a linear combination of Pauli operators, i.e. \[H=\sum_{j}h_{j}\sigma_{j}. \tag{34}\] The Taylor expansion of the time evolution operator is \[e^{-iH\Delta t}=\openone-iH\Delta t+T_{0}(\Delta t), \tag{35}\] where the summation of high-order terms is \[T_{0}(\Delta t)=\sum_{k=2}^{\infty}\sum_{j_{1},\ldots,j_{k}}\frac{\prod_{a=1}^{k}(-ih_{j_{a}}\Delta t)}{k!}\sigma_{j_{k}}\cdots\sigma_{j_{1}}. \tag{36}\] The leading-order terms can be expressed as a linear combination of rotation operators, \[\openone-iH\Delta t=\sum_{j}\beta_{j}(\Delta t)e^{-i\text{sgn}(h_{j})\phi(\Delta t)\sigma_{j}}, \tag{37}\] where \(\phi(\Delta t)=\arctan(h_{tot}\Delta t)\), \(\beta_{j}(\Delta t)=|h_{j}|\Delta t/\sin\phi(\Delta t)\) and \(h_{tot}=\sum_{j}|h_{j}|\). The zeroth-order leading-order-rotation formula is \[e^{-iH\Delta t}=\sum_{j}\beta_{j}(\Delta t)e^{-i\text{sgn}(h_{j})\phi(\Delta t)\sigma_{j}}+T_{0}(\Delta t), \tag{38}\] which is a linear combination of rotation and Pauli operators. Accordingly, for the evolution time \(t\) and time step number \(N\), the LCU expression of the time evolution operator is \[e^{-iHt}=\left[\sum_{j}\beta_{j}(t/N)e^{-i\text{sgn}(h_{j})\phi(t/N)\sigma_{j}}+T_{0}(t/N)\right]^{N}. \tag{39}\] The above equation is the explicit form of the expression \(e^{-iHt}=\left[\sum_{r}v_{r}(t/N)V_{r}(t/N)\right]^{N}\) referred to in the main text. Now, we consider the cost factor, i.e. the 1-norm of coefficients in an LCU expression. For Eq. (38), the corresponding cost factor is \[c(\Delta t)=\sum_{j}|\beta_{j}(\Delta t)|+\sum_{k=2}^{\infty}\sum_{j_{1},\ldots,j_{k}}\frac{\prod_{a=1}^{k}|h_{j_{a}}\Delta t|}{k!}=\sqrt{1+h_{tot}^{2}\Delta t^{2}}+e^{h_{tot}|\Delta t|}-(1+h_{tot}|\Delta t|). \tag{40}\] Accordingly, the cost factor of Eq. (39) is \([c(t/N)]^{N}\). **Lemma 7**.: \[c(\Delta t)\leqslant e^{\frac{e}{2}h_{tot}^{2}\Delta t^{2}}.\] (41) Proof.: Let \(x=h_{tot}|\Delta t|\), thus \(x\geqslant 0\). Then \[c(\Delta t)=\sqrt{1+x^{2}}+e^{x}-(1+x)\leqslant e^{\frac{x^{2}}{2}}+e^{x}-(1+x). \tag{42}\] Define the function \[y(x)=e^{\frac{e}{2}x^{2}}+(1+x)-(e^{x}+e^{\frac{x^{2}}{2}}). \tag{43}\] It follows that \(y(0)=0\) and \[y^{\prime}(x)=e\,x\,e^{\frac{e}{2}x^{2}}+1-(e^{x}+xe^{\frac{x^{2}}{2}})\geqslant(e-1)xe^{\frac{e}{2}x^{2}}+1-e^{x} \tag{44}\] \[\geqslant(e-1)x+1-e^{x}. \tag{45}\] Let \(z(x)=(e-1)x+1-e^{x}\). Since \(z(0)=z(1)=0\), it is easy to see that \(z(x)\geqslant 0\) when \(0\leqslant x\leqslant 1\), which indicates that \(y^{\prime}(x)\geqslant 0\) when \(0\leqslant x\leqslant 1\). Based on Eq. (44) and the fact that \(e-1>1\), we have \(y^{\prime}(x)\geqslant 1\) when \(x\geqslant 1\). As a result, \(y^{\prime}(x)\geqslant 0\) when \(x\geqslant 0\), which means \(y(x)\geqslant 0\). Therefore, \[c(\Delta t)\leqslant e^{\frac{e}{2}x^{2}}-y(x)\leqslant e^{\frac{e}{2}x^{2}}. \tag{46}\] According to Lemma 7, the cost factor of \(e^{-iHt}\) has the upper bound \[\left[c(t/N)\right]^{N}\leqslant e^{\frac{eh_{tot}^{2}t^{2}}{2N}}. \tag{47}\] Therefore, the cost factor approaches one when the time step number \(N\) increases. ### Variance analysis In this section, first, we prove the general upper bound of the variance; then we work out the upper bound of the cost \(c_{k}\). #### d.2.1 Variance upper bound Given the LCU expression \(A=\sum_{s}q_{s}U_{s}\), we can compute \(\langle\varphi|A|\varphi\rangle\) using the Monte Carlo method and the one-ancilla Hadamard-test circuit shown in Fig. 2. In the Monte Carlo calculation, we sample the unitary operator \(U_{s}\) with a probability of \(|q_{s}|/C_{A}\) in the spirit of importance sampling. The corresponding algorithm is given in Algorithm 2. ``` 1:Input \(|\varphi\rangle\), \(\{(q_{s},U_{s})\}\) and \(M\). 2:\(C_{A}\leftarrow\sum_{s}|q_{s}|\) 3:\(\hat{A}\gets 0\) 4:for\(l=1\) to \(M\)do 5: Choose \(s\) with a probability of \(|q_{s}|/C_{A}\). 6: Implement the circuit \(\mathcal{C}_{s}\) for one shot, measure the ancilla qubit in the \(X\) basis and record the measurement outcome \(\mu^{X}\). 7: Implement the circuit \(\mathcal{C}_{s}\) for one shot, measure the ancilla qubit in the \(Y\) basis and record the measurement outcome \(\mu^{Y}\). 8:\(\hat{A}\gets\hat{A}+e^{i\arg(q_{s})}(\mu^{X}+i\mu^{Y})\) 9:Output \(\hat{A}\leftarrow\frac{C_{A}}{M}\hat{A}\). ``` **Algorithm 2** Monte Carlo evaluation of an LCU expression of \(A\).
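For reference, Algorithm 2 can be simulated classically on small instances; a minimal sketch assuming exact Hadamard-test outcome probabilities \(P_{\pm 1}^{X}=(1\pm\mathrm{Re}\langle\varphi|U_{s}|\varphi\rangle)/2\) and \(P_{\pm 1}^{Y}=(1\pm\mathrm{Im}\langle\varphi|U_{s}|\varphi\rangle)/2\):

```python
import numpy as np

def lcu_monte_carlo(phi, terms, M, rng):
    """terms: list of (q_s, U_s) pairs with U_s a unitary matrix."""
    q = np.array([c for c, _ in terms], dtype=complex)
    C_A = np.abs(q).sum()
    prob = np.abs(q) / C_A
    acc = 0.0 + 0.0j
    for _ in range(M):
        s = rng.choice(len(terms), p=prob)    # importance sampling
        amp = phi.conj() @ terms[s][1] @ phi  # <phi|U_s|phi>
        mux = rng.choice([1, -1], p=[(1 + amp.real) / 2, (1 - amp.real) / 2])
        muy = rng.choice([1, -1], p=[(1 + amp.imag) / 2, (1 - amp.imag) / 2])
        acc += np.exp(1j * np.angle(q[s])) * (mux + 1j * muy)
    return C_A * acc / M                      # unbiased estimator of <phi|A|phi>

rng = np.random.default_rng(0)
phi = np.array([1.0, 0.0])
terms = [(0.6, np.eye(2)), (-0.4j, np.array([[0.0, 1.0], [1.0, 0.0]]))]
print(lcu_monte_carlo(phi, terms, 20000, rng))  # close to 0.6 + 0j
```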
**Lemma 8**.: _According to Algorithm 2, the estimator \(\hat{A}\) is unbiased, and the variance upper bound in Eq. (6) is true._ Proof.: First, we rewrite the LCU expression as \[A=C_{A}\sum_{s}\frac{|q_{s}|}{C_{A}}e^{i\arg(q_{s})}U_{s}. \tag{101}\] Then, \[\langle\varphi|A|\varphi\rangle=C_{A}\sum_{s}\frac{|q_{s}|}{C_{A}}e^{i\arg(q_{s})}\langle\varphi|U_{s}|\varphi\rangle. \tag{102}\] Each unitary operator \(U_{s}\) has a corresponding Hadamard-test circuit denoted by \(\mathcal{C}_{s}\), which is shown in Fig. 2. When the ancilla qubit is measured in the \(W=X,Y\) basis, the measurement has a random outcome \(\mu^{W}=\pm 1\). Let \(P^{W}_{\mu^{W}}\) be the probability of the measurement outcome \(\mu^{W}\) in \(\mathcal{C}_{s}\). According to Ref. [38], we have \[\langle\varphi|U_{s}|\varphi\rangle=\sum_{\mu^{X}=\pm 1}P^{X}_{\mu^{X}}\mu^{X}+i\sum_{\mu^{Y}=\pm 1}P^{Y}_{\mu^{Y}}\mu^{Y}. \tag{103}\] The probability of \((s,\mu^{X},\mu^{Y})\) is \((|q_{s}|/C_{A})P^{X}_{\mu^{X}}P^{Y}_{\mu^{Y}}\). Using Eqs. (102) and (103), we have \[\langle\varphi|A|\varphi\rangle=C_{A}\sum_{s,\mu^{X},\mu^{Y}}\frac{|q_{s}|}{C_{A}}P^{X}_{\mu^{X}}P^{Y}_{\mu^{Y}}e^{i\arg(q_{s})}(\mu^{X}+i\mu^{Y}). \tag{104}\] The corresponding Monte Carlo algorithm is given in Algorithm 2. The estimator \(\hat{A}\) is unbiased. According to Eq. (104), the expected value of \(C_{A}e^{i\arg(q_{s})}(\mu^{X}+i\mu^{Y})\) is \(\langle\varphi|A|\varphi\rangle\). Therefore, the expected value of \(\hat{A}\) is also \(\langle\varphi|A|\varphi\rangle\). Notice that \(\hat{A}\) is the average of \(C_{A}e^{i\arg(q_{s})}(\mu^{X}+i\mu^{Y})\) over \(M\) samples. The variance of \(\hat{A}\) is \[\mathrm{Var}(\hat{A})=\frac{1}{M}\left[\sum_{s,\mu^{X},\mu^{Y}}\frac{|q_{s}|}{C_{A}}P^{X}_{\mu^{X}}P^{Y}_{\mu^{Y}}\left|C_{A}e^{i\arg(q_{s})}(\mu^{X}+i\mu^{Y})\right|^{2}-\left|\langle\varphi|A|\varphi\rangle\right|^{2}\right]\leqslant\frac{2C_{A}^{2}}{M}. \tag{105}\] #### d.2.2 Cost upper bound Substituting the LCU expression of \(e^{-iHt}\), Eq. (39), into Eq. (5), we can obtain the eventual LCU expression of \(f_{k}\). Substituting the LCU expressions of \(f_{k}\) and \(f_{q}\) as well as the expression of \(H\) into \(A=f_{k}^{\dagger}Hf_{q}\) and \(A=f_{k}^{\dagger}f_{q}\), we can obtain the LCU expression of \(A\). It is straightforward to work out \(C_{A}=h_{tot}c_{k}c_{q}\) and \(C_{A}=c_{k}c_{q}\), respectively, where \[c_{k}=\frac{1}{2^{\frac{k-1}{2}}\tau^{k-1}}\int_{-\infty}^{+\infty}dt\left|H_{k-1}\left(\frac{t}{\sqrt{2}\tau}\right)\right|g_{\tau}(t)[c(t/N)]^{N}. \tag{106}\] Here, we have replaced \(\left[\sum_{r}\left|v_{r}\left(\frac{t}{N}\right)\right|\right]^{N}\) with \([c(t/N)]^{N}\) in Eq. (8). To work out the cost of \(A\), we have used that the cost factor is additive and multiplicative. Suppose the LCU expressions of \(A_{1}\) and \(A_{2}\) are \(A_{1}=q_{1}U_{1}+q_{1}^{\prime}U_{1}^{\prime}\) and \(A_{2}=q_{2}U_{2}+q_{2}^{\prime}U_{2}^{\prime}\), respectively. The cost factors of \(A_{1}\) and \(A_{2}\) are \(C_{1}=|q_{1}|+|q_{1}^{\prime}|\) and \(C_{2}=|q_{2}|+|q_{2}^{\prime}|\), respectively. Substituting the LCU expressions of \(A_{1}\) and \(A_{2}\) into \(A=A_{1}+A_{2}\), the expression of \(A\) reads \(A=q_{1}U_{1}+q_{1}^{\prime}U_{1}^{\prime}+q_{2}U_{2}+q_{2}^{\prime}U_{2}^{\prime}\). Then, the cost factor of \(A\) is \(C_{A}=|q_{1}|+|q_{1}^{\prime}|+|q_{2}|+|q_{2}^{\prime}|=C_{1}+C_{2}\). Substituting the LCU expressions of \(A_{1}\) and \(A_{2}\) into \(A^{\prime}=A_{1}A_{2}\), the expression of \(A^{\prime}\) reads \(A^{\prime}=(q_{1}U_{1}+q_{1}^{\prime}U_{1}^{\prime})(q_{2}U_{2}+q_{2}^{\prime}U_{2}^{\prime})=q_{1}q_{2}U_{1}U_{2}+q_{1}q_{2}^{\prime}U_{1}U_{2}^{\prime}+q_{1}^{\prime}q_{2}U_{1}^{\prime}U_{2}+q_{1}^{\prime}q_{2}^{\prime}U_{1}^{\prime}U_{2}^{\prime}\). Then, the cost factor of \(A^{\prime}\) is \(C_{A^{\prime}}=|q_{1}q_{2}|+|q_{1}q_{2}^{\prime}|+|q_{1}^{\prime}q_{2}|+|q_{1}^{\prime}q_{2}^{\prime}|=C_{1}C_{2}\). The upper bound of \(c_{k}\) is \[c_{k}\leqslant c_{k}^{ub}=\frac{1}{2^{\frac{k-1}{2}}\tau^{k-1}}\int_{-\infty}^{+\infty}dt\left|H_{k-1}\left(\frac{t}{\sqrt{2}\tau}\right)\right|g_{\tau}(t)e^{\chi\frac{t^{2}}{\tau^{2}}}, \tag{101}\] where \(\chi=\frac{eh_{tot}^{2}\tau^{2}}{2N}\). Here, we have used Lemma 7. Taking \(\chi=\frac{1}{8}\) (i.e. \(N=4eh_{tot}^{2}\tau^{2}\)), we numerically evaluate the upper bound \(c_{k}^{ub}\) and plot it in Fig. 3. We can find that \[c_{k}^{ub}\leqslant 2\left(\frac{k-1}{e\tau^{2}}\right)^{\frac{k-1}{2}}. \tag{102}\]
### Pseudocode Given the LCU expression of \(A\), we can compute \(\langle\varphi|A|\varphi\rangle\) according to Algorithm 2. We would like to repeat how we compose the LCU expression of \(A\): Substituting the LCU expression of \(e^{-iHt}\) into Eq. (5), we can obtain the eventual LCU expression of \(f_{k}\); substituting the LCU expressions of \(f_{k}\) and \(f_{q}\) as well as the expression of \(H\) into \(A=f_{k}^{\dagger}Hf_{q}\) and \(A=f_{k}^{\dagger}f_{q}\) (corresponding to \(\mathbf{H}_{k,q}\) and \(\mathbf{S}_{k,q}\), respectively), we can obtain the LCU expression of \(A\). Algorithm 2 does not involve details of the LCU expression. In this section, we give pseudocodes involving these details. We remark that matrix entries of the rescaled basis \(f_{k}^{\prime}=f_{k}/c_{k}\) are \(\mathbf{H}_{k,q}^{\prime}=\mathbf{H}_{k,q}/(c_{k}c_{q})\) and \(\mathbf{S}_{k,q}^{\prime}=\mathbf{S}_{k,q}/(c_{k}c_{q})\), where \(\mathbf{H}_{k,q}\) and \(\mathbf{S}_{k,q}\) are computed by the pseudocodes. In the pseudocodes, we use the notations \(\mathbf{h}=(h_{1},h_{2},\ldots)\) and \(\mathbf{\sigma}=(\sigma_{1},\sigma_{2},\ldots)\) to represent the Hamiltonian \(H=\sum_{j}h_{j}\sigma_{j}\). We define probability density functions \[p_{\tau,k}(t)=\frac{1}{c_{k}2^{\frac{k-1}{2}}\tau^{k-1}}\left|H_{k-1}\left(\frac{t}{\sqrt{2}\tau}\right)\right|g_{\tau}(t)[c(t/N)]^{N}, \tag{103}\] and probabilities \[P_{\mathrm{L}}(\Delta t)=\frac{\sqrt{1+h_{tot}^{2}\Delta t^{2}}}{c(\Delta t)}, \tag{104}\] \[P_{\mathrm{T}}(\Delta t)=\frac{e^{h_{tot}|\Delta t|}-(1+h_{tot}|\Delta t|)}{c(\Delta t)}. \tag{105}\] We use Poisson(\(\lambda\)) to denote a subroutine that returns \(k\) with a probability of \(\lambda^{k}e^{-\lambda}/k!\). ``` 1:Input \(|\varphi\rangle\), \((\mathbf{h},\mathbf{\sigma})\), \(E_{0}\), \(\tau\), \(N\) and \(M\). 2:\(h_{tot}\leftarrow\sum_{j}|h_{j}|\) 3:Compute \(c_{k}\) and \(c_{q}\). 4:\(\hat{\mathbf{H}}_{k,q}\gets 0\) 5:for\(l=1\) to \(M\)do 6: Choose \(j\) with a probability of \(|h_{j}|/h_{tot}\). 7:\((\bar{v}_{k},\bar{V}_{k})\leftarrow\) BasisGen(\(\mathbf{h}\),\(\mathbf{\sigma}\),\(E_{0}\),\(\tau\),\(N\),\(k\)) 8:\((\bar{v}_{q},\bar{V}_{q})\leftarrow\) BasisGen(\(\mathbf{h}\),\(\mathbf{\sigma}\),\(E_{0}\),\(\tau\),\(N\),\(q\)) 9: Implement the circuit \(\mathcal{C}_{s}\) with \(U_{s}=\bar{V}_{k}^{\dagger}\sigma_{j}\bar{V}_{q}\) for one shot, measure the ancilla qubit in the \(X\) basis and record the measurement outcome \(\mu^{X}\). 10: Implement the circuit \(\mathcal{C}_{s}\) with \(U_{s}=\bar{V}_{k}^{\dagger}\sigma_{j}\bar{V}_{q}\) for one shot, measure the ancilla qubit in the \(Y\) basis and record the measurement outcome \(\mu^{Y}\). 11:\(\hat{\mathbf{H}}_{k,q}\leftarrow\hat{\mathbf{H}}_{k,q}+e^{i\arg(h_{j}\bar{v}_{k}^{*}\bar{v}_{q})}(\mu^{X}+i\mu^{Y})\) 12:Output \(\hat{\mathbf{H}}_{k,q}\leftarrow\frac{h_{tot}c_{k}c_{q}}{M}\hat{\mathbf{H}}_{k,q}\). ``` **Algorithm 3** Measurement of matrix entry \(\mathbf{H}_{k,q}\). **Algorithm 4** Measurement of matrix entry \(\mathbf{S}_{k,q}\) (identical to Algorithm 3 except that the Pauli operator \(\sigma_{j}\) is omitted and the output is rescaled by \(\frac{c_{k}c_{q}}{M}\)).
## Appendix E Details in numerical calculation In this section, first, we describe the models and reference states taken in the benchmarking, and then we give details of the instances and parameters. ### Models and reference states Two models are used in the benchmarking: the antiferromagnetic Heisenberg model \[H=J\sum_{\langle i,j\rangle}(\sigma_{i}^{X}\sigma_{j}^{X}+\sigma_{i}^{Y}\sigma_{j}^{Y}+\sigma_{i}^{Z}\sigma_{j}^{Z}), \tag{10}\] where \(\sigma_{i}^{X}\), \(\sigma_{i}^{Y}\) and \(\sigma_{i}^{Z}\) are Pauli operators of the \(i\)th qubit; and the Hubbard model \[H=-J\sum_{\langle i,j\rangle}\sum_{\sigma=\uparrow,\downarrow}(a_{i,\sigma}^{\dagger}a_{j,\sigma}+a_{j,\sigma}^{\dagger}a_{i,\sigma})+U\sum_{i}\left(a_{i,\uparrow}^{\dagger}a_{i,\uparrow}-\frac{\openone}{2}\right)\left(a_{i,\downarrow}^{\dagger}a_{i,\downarrow}-\frac{\openone}{2}\right), \tag{11}\] where \(a_{i,\sigma}\) is the annihilation operator of the \(i\)th orbital and spin \(\sigma\). For the Heisenberg model, we take the spin number as ten, and for the Hubbard model, we take the orbital number as five. Then both models can be encoded into ten qubits. For the Hubbard model, we take \(U=J\). Without loss of generality, we normalise the Hamiltonian for simplicity: We take \(J\) such that \(\|H\|_{2}=1\), i.e. eigenvalues of \(H\) are in the interval \([-1,1]\). For each model, we consider three types of lattices: chain, ladder and randomly generated graphs, see Fig. 4. We generate a random graph in the following way: i) for a vertex \(v\), randomly choose a vertex \(u\neq v\) and add the edge \((v,u)\); ii) repeat step i once more, so that \(v\) initiates two edges in total; iii) implement steps i and ii for each vertex \(v\). On a graph generated in this way, each vertex is connected to four vertices on average. Notice that some pairs of vertices are connected by multiple edges, and then the interaction between vertices in the pair is amplified by the number of edges.
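For concreteness, the normalised Heisenberg-chain Hamiltonian can be built by exact diagonalisation on a few qubits; a minimal sketch (a small \(n\) is used instead of the ten qubits in the benchmarks):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pair_term(P, i, j, n):
    """Tensor product placing Pauli P on qubits i and j of an n-qubit chain."""
    mats = [I2] * n
    mats[i], mats[j] = P, P
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_chain(n):
    H = sum(pair_term(P, i, i + 1, n) for i in range(n - 1) for P in (X, Y, Z))
    return H / np.linalg.norm(H, 2)   # choose J such that ||H||_2 = 1

H = heisenberg_chain(6)
E_g = np.linalg.eigvalsh(H)[0]        # ground-state energy, in [-1, 0)
```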
It is non-trivial to choose and prepare a good reference state that has a sufficiently large overlap with the true ground state. Finding the ground-state energy of a local Hamiltonian is QMA-hard [54; 55]. If one can prepare a good reference state, then quantum computing is capable of solving the ground-state energy using quantum phase estimation [56]. So far, there does not exist a universal method for reference state preparation; see Ref. [57] for relevant discussions. Despite this, there are some practical ways of choosing and preparing the reference state. For fermion systems, we can take the mean-field approximate solution, i.e. a Hartree-Fock state, as the reference state. For general systems, one may try adiabatic state preparation, etc. We stress that preparing good reference states is beyond the scope of this work. Here, we choose two reference states which are likely to overlap with true ground states as examples. Figure 4: Lattices of the Heisenberg model and Hubbard model. One of the randomly generated graphs for each model is illustrated in the figure as an example. For the Heisenberg model, we take the pairwise singlet state \[|\varphi\rangle=|\Phi\rangle_{1,2}\otimes|\Phi\rangle_{3,4}\otimes\cdots, \tag{10}\] where \[|\Phi\rangle_{i,j}=\frac{1}{\sqrt{2}}\left(|0\rangle_{i}\otimes|1\rangle_{j}-|1\rangle_{i}\otimes|0\rangle_{j}\right) \tag{11}\] is the singlet state of spins \(i\) and \(j\). For the Hubbard model, we take a Hartree-Fock state as the reference state: we compute the ground state of the one-particle Hamiltonian (which is equivalent to taking \(U=0\)), a Slater determinant, and take it as the reference state.
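A minimal sketch of the pairwise-singlet reference state and the resulting overlap \(p_{g}=|\langle\psi_{g}|\varphi\rangle|^{2}\), reusing the `heisenberg_chain` helper from the sketch above:

```python
import numpy as np

def singlet_reference(n):  # n must be even
    pair = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)
    phi = pair
    for _ in range(n // 2 - 1):
        phi = np.kron(phi, pair)
    return phi

n = 6
evals, evecs = np.linalg.eigh(heisenberg_chain(n))
p_g = abs(evecs[:, 0].conj() @ singlet_reference(n)) ** 2
print(p_g)  # a non-zero overlap with the true ground state
```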
### Instances We test the algorithms listed in Table 1 with many instances. Each instance is a triple \((model,lattice,d)\), in which \(model\) takes one of the two models (Heisenberg model and Hubbard model), \(lattice\) takes a chain, ladder or random graph, and \(d\) is the dimension of the subspace. For example, Fig. 5 shows the result of instance \((\text{Heisenberg},\text{chain},d=5)\). Instances for computing empirical distributions in Fig. 1 consist of six groups: the Heisenberg model on the chain, ladder and random graphs, and the Hubbard model on the chain, ladder and random graphs. For chain and ladder, we evaluate each subspace dimension \(d=2,3,\ldots,30\); for a random graph, we only evaluate one subspace dimension randomly taken from the same range. Some instances are discarded. To avoid any potential issue caused by the precision of floating-point numbers, we discard instances for which the subspace error \(\epsilon_{K}\) of the P algorithm is lower than \(10^{-9}\) due to a large \(d\). We also discard instances for which \(\epsilon_{K}\) of the P algorithm is higher than \(10^{-2}\) due to a small \(d\), because the dimension may be too small to achieve an interesting accuracy in this case. For randomly generated graphs, the two reference states may have a small overlap with the true ground state. Because a good reference state is necessary for all quantum KSD algorithms, we discard graphs with \(p_{g}<10^{-3}\). Eventually, we have eight instances of the Heisenberg chain, seven instances of the Heisenberg ladder, twenty-two instances of the Hubbard chain and twenty instances of the Hubbard ladder. For each model, we generate a hundred random-graph instances. The total number of instances is \(257\). ### Parameters In this section, we give the parameters taken in each algorithm listed in Table 1, namely, \(E_{0}\), \(\tau\), \(\Delta t\), \(\Delta E\), \(C_{\mathbf{H}}\) and \(C_{\mathbf{S}}\). We summarise these parameters in Table 2. For \(E_{0}\), we expect that taking \(E_{0}\) close to \(E_{g}\) is optimal for our GP algorithm. As the exact value of \(E_{g}\) is unknown, we assume that we have a preliminary estimation of the ground-state energy with uncertainty as large as \(10\%\) of the entire spectrum, i.e. we take \(E_{0}\in[E_{g}-0.1,E_{g}+0.1]\). For other algorithms, we take \(E_{0}\) by assuming that the exact value of \(E_{g}\) is known. In the P algorithm, we take \(E_{0}=E_{g}+1\), such that the ground state is the eigenstate of \(H-E_{0}\) with the largest absolute eigenvalue and \(\|H-E_{0}\|_{2}=1\). Similarly, in the IP algorithm, we take \(E_{0}=E_{g}-1\), such that the ground state is the eigenstate of \((H-E_{0})^{-1}\) with the largest absolute eigenvalue and \(\|(H-E_{0})^{-1}\|_{2}=1\). In the ITE algorithm, \(E_{0}\) causes a constant factor \(e^{\tau(k-1)E_{0}}\) in each operator \(f_{k}\), i.e. it determines the spectral norm of \(f_{k}\). Because the variance is related to the norm, a large \(E_{0}\) is problematic. Therefore, in the ITE algorithm, we take \(E_{0}=E_{g}\), such that \(\|f_{k}\|_{2}=1\) for all \(k\). In the F algorithm, we expect that \(E_{0}=E_{g}\) is optimal because \(f_{1}\) is an energy filter centred at the ground-state energy in this case. Therefore, we take \(E_{0}=E_{g}\) in the F algorithm. The RTE algorithm is closely related to the F algorithm. Therefore, we also take \(E_{0}=E_{g}\) in the RTE algorithm. We remark that in the GP algorithm, we take a random \(E_{0}\) uniformly distributed in the interval \([E_{g}-0.1,E_{g}+0.1]\) to generate the data in Fig. 1, and we take \(E_{0}=E_{g}+i\delta E\), where \(i=-50,-9,\ldots,-1,0,1,\ldots,9,50\) and \(\delta E=0.002\), to generate the data in Fig. 5. We remark that we have normalised the Hamiltonian such that \(\|H\|_{2}=1\). Figure 5: Comparison between quantum KSD algorithms listed in Table 1. Here, we take the instance \((\text{Heisenberg},\text{chain},d=5)\) as an example. \(\epsilon_{K}\) is the subspace error of the P algorithm. The grey area illustrates the range of \(\gamma\) when we take \(E_{0}\in[E_{g}-0.1,E_{g}+0.1]\) in the GP algorithm. Notice that \(\|H\|_{2}=1\). The blue curve represents the result of the GP algorithm with \(E_{0}=E_{g}\). For \(\Delta t\) in the F algorithm, we take \(\Delta t=2\tau/(L-1)\). For simplicity, we take the limit \(L\to+\infty\), i.e. \[f_{k}=\frac{1}{2\tau}\int_{-\tau}^{\tau}dte^{-i\left[H-E_{0}-\Delta E\left(k-1\right)\right]t}=\frac{\sin\left(\left[H-E_{0}-\Delta E\left(k-1\right)\right]\tau\right)}{\left[H-E_{0}-\Delta E\left(k-1\right)\right]\tau}. \tag{100}\] Notice that this \(f_{k}\) is an energy filter centred at \(E_{0}+\Delta E\left(k-1\right)\), and the filter is narrower when \(\tau\) is larger. Now, we have three algorithms that have the parameter \(\tau\): GP, ITE and F. Similar to the F algorithm, \(f_{1}\) in the GP algorithm is an energy filter centred at \(E_{0}\). For these two algorithms, if \(E_{0}=E_{g}\), \(f_{1}|\varphi\rangle\) converges to the true ground state in the limit \(\tau\to+\infty\). It is also similar in the ITE algorithm, in which \(f_{d}\) is a projector onto the ground state, and \(f_{d}|\varphi\rangle\) converges to the true ground state in the limit \(\tau\to+\infty\). Realising a large \(\tau\) comes at a cost: specifically, the circuit depth increases with \(\tau\) [43, 8]. Therefore, it is reasonable to take a finite \(\tau\). Next, we give the protocol for determining the value of \(\tau\) in each algorithm. Without getting into details about realising the filters and projectors, we choose \(\tau\) under the assumption that if filters and projectors have the same power in computing the ground-state energy, they probably require similar circuit depths.
In the GP and F algorithms, if \(E_{0}=E_{g}\), the energy error achieved by the filters reads \(\mathbf{H}_{1,1}/\mathbf{S}_{1,1}-E_{g}\); and in the ITE algorithm, the energy error achieved by the projector reads \(\mathbf{H}_{d,d}/\mathbf{S}_{d,d}-E_{g}\). We take \(\tau\) such that the errors achieved by filters and projectors take the same value \(\epsilon_{B}\). Specifically, for the GP and F algorithms, we determine the value of \(\tau\) by solving the equation (taking \(E_{0}=E_{g}\)) \[\frac{\mathbf{H}_{1,1}(\tau)}{\mathbf{S}_{1,1}(\tau)}-E_{g}=\epsilon_{B}; \tag{101}\] and for the ITE algorithm, we determine the value of \(\tau\) by solving the equation \[\frac{\mathbf{H}_{d,d}(\tau)}{\mathbf{S}_{d,d}(\tau)}-E_{g}=\epsilon_{B}. \tag{102}\] To choose the value of \(\epsilon_{B}\), we take the P algorithm as the standard, in which \(f_{d}\) is a projector onto the ground state, i.e. \(f_{d}|\varphi\rangle\) converges to the true ground state in the limit \(d\to+\infty\). Overall, we determine the value of \(\tau\) in the following way: Given an instance \((model,lattice,d)\), i) first, compute the energy error achieved by the projector in the P algorithm, and take \(\epsilon_{B}=\mathbf{H}_{d,d}/\mathbf{S}_{d,d}-E_{g}\); then, ii) solve the equations for \(\tau\). In this way, filters and projectors in the P, GP, ITE and F algorithms have the same power.
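Given the spectral weights \(p_{m}=|\langle\psi_{m}|\varphi\rangle|^{2}\), Eq. (101) can be solved by bisection: with \(E_{0}=E_{g}\) and the GP filter \(f_{1}=e^{-\frac{1}{2}(H-E_{g})^{2}\tau^{2}}\), \(\mathbf{H}_{1,1}(\tau)/\mathbf{S}_{1,1}(\tau)=\sum_{m}p_{m}E_{m}e^{-(E_{m}-E_{g})^{2}\tau^{2}}/\sum_{m}p_{m}e^{-(E_{m}-E_{g})^{2}\tau^{2}}\), which decreases towards \(E_{g}\) as \(\tau\) grows (assuming \(p_{g}>0\)). A minimal sketch:

```python
import numpy as np

def filtered_energy(tau, evals, weights, Eg):
    w = weights * np.exp(-((evals - Eg) * tau) ** 2)  # p_m |f_1|^2 spectrally
    return np.sum(w * evals) / np.sum(w)

def solve_tau(evals, weights, Eg, eps_B, iters=60):
    """Bisection for H_11(tau)/S_11(tau) - Eg = eps_B; assumes p_g > 0."""
    lo, hi = 0.0, 1.0
    while filtered_energy(hi, evals, weights, Eg) > Eg + eps_B:
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if filtered_energy(mid, evals, weights, Eg) > Eg + eps_B:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```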
For \(\Delta t\) in the RTE algorithm and \(\Delta E\) in the F algorithm, there are works on how to choose them in the literature [14, 15, 17]. In this work, we simply determine their values by a grid search. For the RTE algorithm, we take \(\Delta t=i\delta t\), where \(i=1,2,\ldots,100\) and \(\delta t=\frac{2\pi}{100}\); we compute the subspace error \(\epsilon_{K}\) for all \(\Delta t\); and we choose the \(\Delta t\) of the minimum \(\epsilon_{K}\). For the F algorithm, we take \(\Delta E=i\delta E\), where \(i=1,2,\ldots,100\) and \(\delta E=\frac{2}{100d}\) (in this way, when we take the largest \(\Delta E\), the filters span the entire spectrum); we compute the subspace error \(\epsilon_{K}\) for all \(\Delta E\); and we choose the \(\Delta E\) of the minimum \(\epsilon_{K}\). For \(C_{\mathbf{H}}\) and \(C_{\mathbf{S}}\), we take \(C_{\mathbf{H}}=h_{tot}\) and \(C_{\mathbf{S}}=1\) in our GP algorithm following the analysis in Appendix D. For other algorithms, we take these two parameters in the following way. For the P, IP, ITE and RTE algorithms, we take \(C_{\mathbf{H}}\) and \(C_{\mathbf{S}}\) according to spectral norms (the lower bound of the cost) without getting into details about measuring matrix entries. In these four algorithms, \(\|f_{k}^{\dagger}Hf_{q}\|_{2}=\|f_{k}^{\dagger}f_{q}\|_{2}=1\) for all \(k\) and \(q\); therefore, we take \(C_{\mathbf{H}}=C_{\mathbf{S}}=1\). In the CP algorithm, we take \(C_{\mathbf{H}}\) and \(C_{\mathbf{S}}\) in a similar way. Because \(\|H/h_{tot}\|_{2}\leqslant 1\), the spectrum of \(H/h_{tot}\) is in the interval \([-1,1]\). In this interval, the Chebyshev polynomial of the first kind takes values in the interval \([-1,1]\). Therefore, \(\|T_{k}(H/h_{tot})\|_{2}\leqslant 1\), and consequently \(\|f_{k}^{\dagger}Hf_{q}\|_{2},\|f_{k}^{\dagger}f_{q}\|_{2}\leqslant 1\). These norms depend on the spectrum of \(H\). For simplicity, we take the upper bound of the norms, i.e. \(C_{\mathbf{H}}=C_{\mathbf{S}}=1\), in the CP algorithm. In the F algorithm, each \(f_{k}\) is a linear combination of \(e^{-iHt}\) operators, i.e. \(f_{k}\) is expressed in the LCU form, and the cost factor of the LCU expression is one. Therefore, we take \(C_{\mathbf{H}}=C_{\mathbf{S}}=1\) in the F algorithm, assuming that \(H\) is expressed in the LCU form with a cost factor of one (notice that one is the lower bound). \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline Abbr. & \(E_{0}\) & \(\tau\) & \(\Delta t\) & \(\Delta E\) & \(C_{\mathbf{H}}\) & \(C_{\mathbf{S}}\) \\ \hline P & \(E_{g}+1\) & N/A & N/A & N/A & 1 & 1 \\ CP & 0 & N/A & N/A & N/A & 1 & 1 \\ GP & \(\left[E_{g}-0.1,E_{g}+0.1\right]\) & Solving Eq. (101) & N/A & N/A & \(h_{tot}\geqslant 1\) & 1 \\ IP & \(E_{g}-1\) & N/A & N/A & N/A & 1 & 1 \\ ITE & \(E_{g}\) & Solving Eq. (102) & N/A & N/A & 1 & 1 \\ RTE & \(E_{g}\) & N/A & Minimising \(\epsilon_{K}\) & N/A & 1 & 1 \\ F & \(E_{g}\) & Solving Eq. (101) & \(2\tau/(L-1)\) & Minimising \(\epsilon_{K}\) & 1 & 1 \\ \hline \hline \end{tabular} \end{table} Table 2: Parameters taken in the numerical calculation. ## Appendix F Composing Chebyshev polynomials ### Chebyshev-polynomial projector In this section, we explain how to use Chebyshev polynomials as projectors onto the ground state. The \(n\)th Chebyshev polynomial of the first kind is \[T_{n}(z)=\left\{\begin{array}{cc}\cos(n\arccos z),&\text{if }|z|\leqslant 1,\\ \text{sgn}(z)^{n}\cosh(n\,\mathrm{arccosh}\,|z|),&\text{if }|z|>1.\end{array}\right. \tag{11}\] Chebyshev polynomials have the following properties: i) When \(|z|\leqslant 1\), \(|T_{n}(z)|\leqslant 1\); and ii) when \(|z|>1\), \(|T_{n}(z)|>(|z|^{n}+|z|^{-n})/2\), which increases exponentially with \(n\). Let's consider the spectral decomposition of the Hamiltonian, \(H=\sum_{m=1}^{d_{\mathcal{H}}}E_{m}|\psi_{m}\rangle\langle\psi_{m}|\). Here, \(d_{\mathcal{H}}\) is the dimension of the Hilbert space, \(E_{m}\) are eigenenergies of the Hamiltonian, and \(|\psi_{m}\rangle\) are eigenstates. We suppose that eigenenergies are sorted in ascending order, i.e. \(E_{i}\leqslant E_{j}\) if \(i<j\). Then, \(E_{1}=E_{g}\) is the ground-state energy (accordingly, \(|\psi_{1}\rangle=|\psi_{g}\rangle\) is the ground state) and \(E_{2}\) is the energy of the first excited state. The energy gap between the ground state and the first excited state is \(\Delta=E_{2}-E_{1}\). To compose a projector, we take \[Z=1-\frac{H-E_{2}}{\|H\|_{2}}. \tag{12}\] Notice that \(Z\) and \(H\) have the same eigenstates. The spectral decomposition of \(Z\) is \(Z=\sum_{m=1}^{d_{\mathcal{H}}}z_{m}|\psi_{m}\rangle\langle\psi_{m}|\), where \[z_{m}=1-\frac{E_{m}-E_{2}}{\|H\|_{2}}. \tag{13}\] Then, the ground state corresponds to \(z_{1}=1+\frac{\Delta}{\|H\|_{2}}>1\) (supposing the gap is finite), and the first excited state corresponds to \(z_{2}=1\). Because \(|E_{m}-E_{2}|\leqslant 2\|H\|_{2}\), \(z_{m}\geqslant-1\) for all \(m\). Therefore, except for the ground state, the \(z_{m}\) of all excited states (i.e. \(m\geqslant 2\)) is in the interval \([-1,1]\). The projector reads \[\frac{T_{n}(Z)}{T_{n}(z_{1})}=\sum_{m=1}^{d_{\mathcal{H}}}\frac{T_{n}(z_{m})}{T_{n}(z_{1})}|\psi_{m}\rangle\langle\psi_{m}|=|\psi_{g}\rangle\langle\psi_{g}|+\Omega, \tag{14}\] where \[\Omega=\sum_{m=2}^{d_{\mathcal{H}}}\frac{T_{n}(z_{m})}{T_{n}(z_{1})}|\psi_{m}\rangle\langle\psi_{m}|. \tag{15}\] \(\Omega\) and \(H\) have the same eigenstates. Because \(|T_{n}(z_{m})|\leqslant 1\) when \(m\geqslant 2\), \[\|\Omega\|_{2}\leqslant\frac{1}{T_{n}(z_{1})}. \tag{16}\] The key to using Chebyshev polynomials as projectors is laying all excited states in the interval \(z\in[-1,1]\) and leaving the ground state out of the interval.
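A numerical illustration of the bound in Eq. (16), assuming a synthetic spectrum with \(\|H\|_{2}=1\), \(E_{g}=-1\) and gap \(\Delta=0.2\):

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval  # evaluates a Chebyshev series

evals = np.concatenate(([-1.0], np.linspace(-0.8, 1.0, 40)))  # E_1 = E_g
z = 1 - (evals - evals[1])          # z_m with ||H||_2 = 1; z[0] = 1 + Delta > 1
for n in (4, 8, 16):
    Tn = chebval(z, [0] * n + [1])  # T_n(z_m); the recurrence is valid for |z| > 1
    print(n, np.max(np.abs(Tn[1:] / Tn[0])), 1 / Tn[0])  # ||Omega||_2 <= 1/T_n(z_1)
```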
For the CP basis, all eigenstates are in the interval \(z\in[-1,1]\). Therefore, the \(f_{k}\) operators of the CP basis are not projectors in general.

### Expanding the projector as Hamiltonian powers

In this section, we expand the Chebyshev-polynomial projector as a linear combination of Hamiltonian powers \((H-E_{0})^{k-1}\). We focus on the case that \(E_{0}=E_{g}\). The explicit expression of \(T_{n}(z)\) (\(n>0\)) is \[T_{n}(z)=n\sum_{m=0}^{n}(-2)^{m}\frac{(n+m-1)!}{(n-m)!(2m)!}(1-z)^{m}. \tag{17}\] Because \[(1-Z)^{m}=\sum_{l=0}^{m}\frac{m!}{(m-l)!l!}\frac{(E_{0}-E_{2})^{m-l}(H-E_{0})^{l}}{\|H\|_{2}^{m}}, \tag{18}\] we have \[T_{n}(Z)=\sum_{l=0}^{n}b_{l}\frac{(H-E_{0})^{l}}{\|H\|_{2}^{l}}, \tag{19}\] where \[b_{l}=n\sum_{m=l}^{n}(-2)^{m}\frac{(n+m-1)!}{(n-m)!(2m)!}\frac{m!}{(m-l)!l!}\frac{(E_{0}-E_{2})^{m-l}}{\|H\|_{2}^{m-l}}. \tag{20}\] Now, we give an upper bound of \(|b_{l}|\). First, we consider \(l=0\). In this case, \[|b_{0}| \leqslant n\sum_{m=0}^{n}2^{m}\frac{(n+m-1)!}{(n-m)!(2m)!}\frac{|E_{0}-E_{2}|^{m}}{\|H\|_{2}^{m}}\] \[=T_{n}(z_{1}). \tag{21}\] Then, for an arbitrary \(l\), \[|b_{l}|\leqslant n^{l}T_{n}(z_{1}), \tag{22}\] where the inequality \[\frac{m!}{(m-l)!l!}\leqslant m^{l}\leqslant n^{l} \tag{23}\] has been used. Finally, taking \(d=n+1\) and \[a_{k}=\frac{c_{k}b_{k-1}}{T_{n}(z_{1})\|H\|_{2}^{k-1}}, \tag{24}\] we have \[\sum_{k=1}^{d}a_{k}c_{k}^{-1}f_{k}=\frac{T_{n}(Z)}{T_{n}(z_{1})}e^{-\frac{1}{2}(H-E_{0})^{2}\tau^{2}}, \tag{25}\] where \(f_{k}\) are operators defined in Eq. (4). Because of the upper bound \[|a_{k}| \leqslant 2\left(\frac{k-1}{e\tau^{2}}\right)^{\frac{k-1}{2}}\left(\frac{n}{\|H\|_{2}}\right)^{k-1}\] \[\leqslant 2\left(\frac{n^{3}}{e\|H\|_{2}^{2}\tau^{2}}\right)^{\frac{k-1}{2}}, \tag{26}\] the overhead factor is \[\gamma \approx\left(\mathbf{a}^{\dagger}\mathbf{a}\right)^{2}\leqslant 4\sum_{k=1}^{d}\left(\frac{n^{3}}{e\|H\|_{2}^{2}\tau^{2}}\right)^{k-1}\] \[\leqslant 4\frac{1}{1-n^{3}/(e\|H\|_{2}^{2}\tau^{2})}. \tag{27}\]
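As a sanity check on Eqs. (19)-(20), the coefficients \(b_{l}\) can be tabulated directly. The sketch below uses plain float arithmetic, so it is only an illustration for moderate \(n\) (the alternating factorial sums are numerically delicate), and it assumes \(E_{0}\), \(E_{2}\), and \(\|H\|_{2}\) are known.

```python
from math import comb, factorial

def expansion_coeffs(n, E0_minus_E2, norm_H):
    """b_l of Eq. (20), so that T_n(Z) = sum_l b_l (H - E_0)^l / ||H||_2^l."""
    r = E0_minus_E2 / norm_H
    b = []
    for l in range(n + 1):
        s = 0.0
        for m in range(l, n + 1):
            s += ((-2.0) ** m
                  * factorial(n + m - 1) / (factorial(n - m) * factorial(2 * m))
                  * comb(m, l) * r ** (m - l))
        b.append(n * s)
    return b
```

For \(n=1\) this returns \(b_{0}=1-r\) and \(b_{1}=-1\) with \(r=(E_{0}-E_{2})/\|H\|_{2}\), consistent with \(T_{1}(Z)=Z\) written in powers of \(H-E_{0}\).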
2309.08637
TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild
Large language models with instruction-following abilities have revolutionized the field of artificial intelligence. These models show exceptional generalizability to tackle various real-world tasks through their natural language interfaces. However, their performance heavily relies on high-quality exemplar data, which is often difficult to obtain. This challenge is further exacerbated when it comes to multimodal instruction following. We introduce TextBind, an almost annotation-free framework for empowering large language models with multi-turn interleaved multimodal instruction-following capabilities. Our approach requires only image-caption pairs and generates multi-turn multimodal instruction-response conversations from a language model. To accommodate interleaved image-text inputs and outputs, we devise MIM, a language model-centric architecture that seamlessly integrates image encoder and decoder models. We release our dataset, model, and demo to foster future research in the area of multimodal instruction following.
Huayang Li, Siheng Li, Deng Cai, Longyue Wang, Lemao Liu, Taro Watanabe, Yujiu Yang, Shuming Shi
2023-09-14T15:34:01Z
http://arxiv.org/abs/2309.08637v5
# TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild

###### Abstract

Large language models with instruction-following abilities have revolutionized the field of artificial intelligence. These models show exceptional generalizability to tackle various real-world tasks through their natural language interfaces. However, their performance heavily relies on high-quality exemplar data, which is often difficult to obtain. This challenge is further exacerbated when it comes to multimodal instruction following. We introduce TextBind, an almost annotation-free framework for empowering LLMs with multi-turn interleaved multimodal instruction-following capabilities. Our approach requires only image-caption pairs and generates multi-turn multimodal instruction-response conversations from a language model. To accommodate interleaved image-text inputs and outputs, we devise MIM, a language model-centric architecture that seamlessly integrates image encoder and decoder models. Extensive quantitative and qualitative experiments demonstrate that MIM trained on TextBind achieves remarkable generation capability in multi-modal conversations compared to recent baselines.

## 1 Introduction

Artificial intelligence (AI) has experienced a significant paradigm shift with the rise of large language models (LLMs). These models are capable of processing a wide range of natural language processing (NLP) applications through natural language interactions with users (OpenAI, 2022; 2023). Despite their remarkable performance, these models cannot process and generate visual content. Recently, a number of efforts have been made to augment LLMs with visual perception and understanding abilities. Prior work uses template-based instruction-following datasets for training (Xu et al., 2023; Dai et al., 2023; Li et al., 2023c). These datasets comprise a variety of classic computer vision (CV) tasks, e.g., object detection, with each task being converted into an instructional format using a handful of human-written natural language instructions. However, classic CV tasks often represent manageable and focused abstractions or simplifications of real-world tasks (Marr, 2010); they generally fall short in representing the true variety and complexity of real-world tasks and capturing the lexical diversity of human language. For example, most of them are single-turn inquiries about a single input image, whereas only a small fraction supports multi-turn textual interactions or multiple image inputs. Consequently, the instruction-following capabilities of models trained on these datasets remain limited in open-world scenarios (Xu et al., 2023a). This is reminiscent of the early development of instruction tuning in NLP, where public NLP tasks were eventually superseded by high-quality, diverse open-world instruction data (Ouyang et al., 2022). Nevertheless, collecting such data for multimodal models can be extremely costly. In this paper, we address the above challenge by introducing TextBind, an almost annotation-free framework for augmenting LLMs with multi-turn interleaved multimodal instruction-following capabilities. The main idea is to represent images through their textual descriptions, e.g., captions, and utilize an LLM to generate multi-turn instructions and responses. To ensure the coherence and meaningfulness of the constructed multi-turn conversations, we propose a series of strategies such as topic-aware image sampling and human-in-the-loop refinement of in-context demonstrations.
TextBind can harvest large-scale datasets given the abundance of public image-caption pairs. TextBind provides examples of processing and generating arbitrarily interleaved image-and-text content. To accommodate interleaved image-text inputs and outputs, we devise MIM, a multimodal model that emphasizes the reasoning abilities of LLMs and seamlessly integrates image encoder and decoder models. The comparison of TextBind and previous representative datasets is shown in Tab. 7 (Appx. D), accompanied by an illustration of the models trained on different datasets in Fig. 1. To assess the generative capabilities of MIM trained on TextBind, we perform comprehensive analyses in the context of multi-modal conversations (§6). In particular, thorough reference-based automatic evaluation metrics reveal that the MIM model substantially surpasses MiniGPT-4 (Zhu et al., 2023) and LLaVA (Liu et al., 2023a) in textual response generation, and outperforms GILL (Koh et al., 2023a) and Stable Diffusion (Podell et al., 2023) in image generation by a considerable margin. Furthermore, our holistic evaluation demonstrates that MIM consistently outperforms the representative baselines. In addition, our qualitative experiments show that MIM trained on TextBind can perform a wide range of tasks, including composing engaging stories inspired by a set of images (Fig. 1), comparing the common and different parts in multiple images (Fig. 7b (Appx. A)), explaining concepts with vivid images (Fig. 6a (Appx. A)), generating long coherent stories with illustrations (Fig. 5 (Appx. A)), etc. More demonstrations are shown in Appx. A. Most interestingly, the core innovation of our model is its capability to interact with users naturally. For instance, rather than requiring users to supply the model with explicit descriptions of the desired image, our model can spontaneously generate images in proper conversation contexts. We hope TextBind serves as an initial step towards building AGI that can interact with humans flexibly in different modalities and broad real-world scenarios.

Figure 1: Comparison among models trained on different datasets.

## 2 Related Work

**Multimodal Datasets**. Existing multimodal datasets can be broadly classified into two categories: (1) Conventional datasets for specific vision-language tasks such as image captioning (Chen et al., 2015; Agrawal et al., 2019; Young et al., 2014) and visually-grounded question answering (Hudson and Manning, 2019; Marino et al., 2019; Singh et al., 2019; Lu et al., 2022; Zhou et al., 2018; Goyal et al., 2017; Gurari et al., 2018). (2) Recent datasets for general instruction following. For instance, MultiInstruct (Xu et al., 2023b), InstructBLIP (Dai et al., 2023), and M3IT (Li et al., 2023c) convert existing vision-language datasets into a unified instructional format with handcrafted templates. This approach is reminiscent of the early explorations on instruction tuning in NLP (Wei et al., 2022; Sanh et al., 2022), where existing NLP tasks were phrased as instructions. However, it has been reported that such instruction-tuned multimodal models still generalize poorly to open-world scenarios (Xu et al., 2023a). This finding also aligns with the observations in NLP (Ouyang et al., 2022), where template-based instruction tuning is less effective than instruction tuning data collected from real-world scenarios due to its restricted diversity.
There are also some attempts to convert the output of existing vision-language models into natural language answers for constructing instruction-tuning data (Liu et al., 2023a; Zhu et al., 2023; Chen et al., 2023a). Compared to existing instruction-tuning data, the examples in TextBind (1) generally exhibit greater task and lexicon diversity; (2) typically involve multiple images scattered throughout a multi-turn conversation; (3) support multimodal output (image generation).

**Multimodal Models**. To augment existing LLMs with visual abilities, one straightforward approach is to employ off-the-shelf vision models as external tools. That is, the LLM calls expert vision models through their language interfaces for completing specific visual tasks when needed (Wu et al., 2023; Shen et al., 2023; Chen et al., 2023b; Zou et al., 2022; Yang et al., 2023; Suris et al., 2023). However, these approaches may suffer from cross-modal information loss and lack of generality. Recently, end-to-end multimodal language models have garnered significant interest. Flamingo (Alayrac et al., 2022) and OpenFlamingo (Alayrac et al., 2022) are among the pioneering works extending LLMs to vision-language pretraining. Different from training from scratch, subsequent research efforts have focused on integrating pretrained vision and language models. BLIP-2 (Li et al., 2023b) proposes the Q-Former to align the feature spaces of vision models and language models. To date, various network architectures and training strategies have been proposed (Zhu et al., 2023; Liu et al., 2023; Ye et al., 2023; Li et al., 2023a; Zhang et al., 2023; Du et al., 2022; Chen et al., 2023a; Dai et al., 2023). However, these models are limited to the use of visual content as input. Our work is inspired by recent work on LLM-empowered image retrieval or generation (Koh et al., 2023b;a). Contrary to prior work, we aim to present the first instruction-following model capable of processing and generating arbitrarily interleaved image-text inputs and outputs.

**Evaluation**. Conventional vision datasets designed for specific tasks and scenarios may suffer from data contamination issues for evaluating LLMs. Recently, efforts have been made to provide systematic evaluations with a broader coverage of diverse visual abilities. MME (Fu et al., 2023) is an evaluation dataset containing visually-grounded Yes/No questions. OwlEval (Ye et al., 2023) is a benchmark comprising 82 questions based on 50 images and relies on human feedback evaluation. The test size is limited, and the results may suffer from subjective bias. In response to these challenges, MMBench (Liu et al., 2023b) and MM-Vet (Yu et al., 2023) are two recent benchmarks aiming to offer more comprehensive evaluations by incorporating the use of ChatGPT/GPT-4 for answer verification. LVLM Arena (Xu et al., 2023a), an online evaluation framework that ranks different models using human judgment, is also introduced. However, the above benchmarks primarily focus on question answering based on a single image at the beginning of a conversation.

## 3 TextBind

In this work, we seek to enhance the multi-turn instruction-following capabilities of a language model in the context of arbitrarily interleaved images and text. Constructing such datasets poses significant challenges: 1) it demands inventive thinking for devising high-quality visually-grounded instructions and their responses; 2) it requires specialized expertise to craft appropriate images.
To tackle these issues, we introduce TextBind, a method that predominantly resorts to existing _text-only language models1_ to produce the desired data.

Footnote 1: Although OpenAI claims that GPT-4 supports visual input, this feature is yet to be made public.

### Definition of Data

The goal of TextBind is to construct a collection of multi-turn conversations such as \([\mathbf{x}_{u}^{1},\mathbf{x}_{a}^{1},\dots,\mathbf{x}_{u}^{T},\mathbf{x}_{a}^{T}]\), where \(T\) is the number of turns, \(\mathbf{x}_{u}^{i}\) denotes the \(i\)-th instruction from the user, and \(\mathbf{x}_{a}^{i}\) represents the \(i\)-th response from the assistant. The conversation is also accompanied by an image set \(\{\mathbf{m}_{1},\dots,\mathbf{m}_{n}\}\), where \(n\) is the number of unique images in this conversation. Each instruction \(\mathbf{x}_{u}^{i}\) or response \(\mathbf{x}_{a}^{i}\) is a sequence of tokens in \(\mathcal{V}_{\text{lang}}\cup\mathcal{V}_{\text{img}}\), where \(\mathcal{V}_{\text{lang}}\) is the ordinary vocabulary of a language model and \(\mathcal{V}_{\text{img}}\) contains \(n\) distinct pointers to the images \(\mathbf{m}_{1},\dots,\mathbf{m}_{n}\), respectively. It is worth noting that every image can appear at any point within the conversation.

### Automatic Data Generation

TextBind consists of a three-step pipeline: 1) topic-aware image sampling for ensuring the coherence of each conversation and the diversity across conversations; 2) LLM-empowered multi-turn instruction-response generation to create natural and practical conversations; 3) post-processing and filtering to eliminate low-quality data. An overview of the TextBind pipeline is shown in Fig. 2.

**Topic-Aware Image Sampling**. The initial step of TextBind entails assembling groups of images that will serve as the foundation for generating multi-turn conversations. In order to facilitate coherent, meaningful, and practical conversations, the images within each group should exhibit meaningful interconnections. Furthermore, to guarantee a comprehensive representation of real-world scenarios, the topics of images across different conversations should demonstrate a wide range of diversity. Following the above inspirations, we employ unsupervised clustering algorithms to group the images in our dataset into clusters and execute a two-step image sampling process for each conversation. Concretely, we use the image encoder of the CLIP model (Radford et al., 2021) to obtain vector representations of images. Then, we execute the \(k\)-means algorithm to classify all images into clusters (topics). Examples of such clusters are given in Fig. 2. For each conversation, we randomly sample a cluster from the available \(K\) clusters, then sample \(n\) images from the chosen cluster.

Figure 2: Illustration of the TextBind method. In the top-left corner, we display five representative images from each of the three example clusters obtained via unsupervised clustering. On the right-hand side, a conversation is showcased and constructed using two randomly sampled images from the cartoon cluster. In the bottom-left corner, we outline the additional TextBind pipeline, which includes human-in-the-loop refinement and post-processing stages.

**Generation of Multi-turn Conversations**. After selecting a list of images, we proceed to leverage a text-only LLM, such as GPT-4, to simulate a conversation between a user and an assistant based on the chosen images.
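Before detailing the conversation generation, the two-step topic-aware sampling just described admits a compact sketch. This is illustrative only: the helper name, the cluster count argument, precomputed CLIP features, and the use of scikit-learn are our assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def topic_aware_sample(clip_feats, n_images, n_clusters, seed=0):
    """Two-step topic-aware sampling: cluster CLIP image features with
    k-means, draw one cluster uniformly, then draw n_images from it.
    clip_feats: array of shape (num_images, feat_dim)."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(clip_feats)
    cluster = rng.integers(n_clusters)             # step 1: pick a topic
    members = np.flatnonzero(labels == cluster)
    # step 2: pick images (assumes the cluster holds at least n_images items)
    return rng.choice(members, size=n_images, replace=False)
```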
The core idea is to let LLMs receive and process the textual descriptions of the images as if they could see the actual images. Given the abundance of publicly available image-caption pairs, we propose representing an image with an XML-like string <imgX> DESCRIPTION </imgX>, where DESCRIPTION serves as a placeholder for the image caption, <imgX> and </imgX> mark the caption boundaries, and X denotes the image index in the input image list. After generating the conversation, we replace the XML-like strings in the conversation with the original images. Importantly, to ensure that a caption faithfully describes its corresponding image, we employ the CLIP model (Radford et al., 2021) to filter out image-caption pairs with matching scores below a high threshold. The detailed prompt can be found in Appx. B, and examples of generated conversations before mapping the textual descriptions back to visual images are shown in Appx. C. In the prompt, we also provide in-context examples to improve the generation quality. We collect the in-context examples through a human-in-the-loop refinement process, which is elaborated in §3.3.

**Post-processing and Low-quality Filtering**. To ensure data quality, we filter out conversations in which some pair of input and output image descriptions has an edit distance higher than \(0.1\). We also exclude conversations containing image descriptions not present in the provided image list and conversations containing formatting errors such as co-reference errors and invalid image tags.

### Human-in-the-loop Refinement

In-context learning has been demonstrated to be crucial for enhancing the generation quality of LLMs (Brown et al., 2020; Wang et al., 2023). Therefore, we also construct a seed set of high-quality in-context examples \(\mathcal{S}\). The seed set \(\mathcal{S}\) begins as an empty set and is iteratively updated with human feedback. In each iteration, we follow the steps detailed below: 1. We employ the latest \(\mathcal{S}\) and the template in Appx. B, and generate 100 new conversations using TextBind (§3). 2. We manually analyze the generated conversations. Each conversation is assigned a quality label ("Excellent", "Satisfactory", or "Poor"). Besides, we label the visual abilities required for each conversation. The detailed annotation guideline for quality labels and visual abilities is outlined in Tab. 8 (Appx. E). 3. We add the generated conversations with "Excellent" or "Satisfactory" labels to \(\mathcal{S}\). To ensure diversity among different conversations, we randomly sample three in-context examples from the seed set for each generation. We further require that at least one in-context example is labeled "Excellent" and that the three sampled examples encompass all four visual abilities. After three iterations, we fix the seed set and employ it to generate the remaining data.

## 4 TextBind Data from GPT-4

We apply TextBind to GPT-4 and the CC3M dataset (Sharma et al., 2018; Changpinyo et al., 2021) as a case study. The details of the construction process can be found in Appx. F. In this section, we present comprehensive analyses of the constructed dataset.

**Statistics**. As depicted in Tab. 1, our constructed dataset comprises \(25,629\) conversations. The average number of turns per conversation is \(3.36\) (each turn is defined as a pair of instruction and response). The mean number of images in each conversation is \(2.46\). \begin{table} \begin{tabular}{l c} \hline \hline **Statistics** & \\ \hline \# of conversations & \(25,629\) \\ Avg.
\# turns in conversations & \(3.36\) \\ Avg. \# images & \\ in conversations & \(2.46\) \\ in instructions & \(0.94\) \\ in responses & \(1.52\) \\ Avg. \# words & \\ in conversations & \(285.90\) \\ in instructions & \(78.66\) \\ in responses & \(207.24\) \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the dataset by applying TextBind to GPT-4.

**Diversity**. To understand the lexical and task diversity of our constructed data, we identify four types of required visual abilities and show their distribution in Fig. 3(b). We observe that a significant portion of conversations in our dataset focuses on more insightful and informative tasks, such as extrinsic understanding and image comparison. For topic diversity, we display three randomly sampled clusters in Fig. 2. The distribution of images across different turns is depicted in Fig. 3(c). We also compare the lexical diversity of our dataset and existing datasets in Tab. 2.

**Quality**. To check the quality of the generated data, we randomly sample 100 conversations and perform an in-depth error analysis. As shown in Fig. 3(a), only \(9\%\) of the conversations in the dataset are labeled as "Poor". Note that we label the whole conversation as "Poor" if any of its turns has a problem. We analyze the error types (image-caption mismatch, incoherence, and hallucination) in Appx. G.

## 5 Augmenting LLMs with Visual I/O

### Model

To support interleaved multimodal inputs and outputs, we supplement LLMs with visual input and output modules. Specifically, Llama2-Chat2 (Touvron et al., 2023) is employed as the backbone LM. For visual input, we use the vision encoder from BLIP-2 (Li et al., 2023b)3, followed by a pretrained Q-Former model (Li et al., 2023b) that maps the features from the vision model into the embedding space of the LM. Inspired by GILL (Koh et al., 2023a), we attempt to learn a mapping from the output space of the LM to the input space of a stable diffusion (SD) model (Rombach et al., 2022) (in this work, the embeddings produced by the text encoder of Stable Diffusion XL (Podell et al., 2023)). To this end, we explore three model variants in our preliminary experiments.

Footnote 2: [https://huggingface.co/meta-llama/llama-2-7b-chat-hf](https://huggingface.co/meta-llama/llama-2-7b-chat-hf)

Footnote 3: [https://huggingface.co/Salesforce/blip2-flam-t5-xxl](https://huggingface.co/Salesforce/blip2-flam-t5-xxl)

**Q-Former as Medium**. We add a special token <IMG> to the vocabulary of the LM, indicating that an image should be generated when it is emitted. We then use a Q-Former (Li et al., 2023b) that takes all previous hidden states of the LM as input and outputs the SD embeddings.

**Q-Former with Prompt Tokens as Medium**. To further leverage the reasoning abilities of the LM, we add a series of special tokens (<IMG1>, ..., <IMG{r}>), instead of a single token (<IMG>), to the LM. When <IMG1> is emitted, the generation of the full special-token sequence is enforced, serving as additional reasoning steps for predicting the forthcoming image. Subsequently, the Q-Former only accepts the hidden states of the special tokens as input.
\begin{table} \begin{tabular}{l c c c} \hline \hline **Dataset** & **Instruct** & **Response** & **Overall** \\ \hline LLaVA & \(1.56\) & \(1.84\) & \(1.70\) \\ MiniGPT-4 & \(0.00\) & \(1.11\) & \(0.89\) \\ MultiInstruct & \(0.51\) & \(1.69\) & \(0.51\) \\ Platypus & \(0.98\) & \(0.75\) & \(0.78\) \\ Shikra & \(0.89\) & \(1.08\) & \(0.87\) \\ \hline TextBind & \(1.76\) & \(1.92\) & \(1.84\) \\ \hline \hline \end{tabular} \end{table} Table 2: Averaged diversity scores of roles in various datasets. Details of this analysis are in Appx. D.

Figure 3: Statistics of data quality and diversity. The results in Fig. 3(a) and 3(b) are based on the human annotations on 100 randomly sampled conversations.

**Language Description as Medium**. The previous two variants try to align the continuous hidden spaces of different models. An alternative is to use discrete language descriptions for information exchange, as depicted in Fig. 4. Specifically, we add two special tokens, <start> and <end>, and encode the generated text between these two tokens using the text encoder in the SD model. Similar to GILL (Koh et al., 2023a), we optimize the first two variants by minimizing the mean squared error (MSE) loss between the output embeddings and the SD embeddings. For the third variant, we employ the standard cross-entropy loss. We empirically find that only the last method demonstrates satisfactory performance on multi-turn interleaved multimodal instruction following; we therefore name this variant MIM.

### Training

Our training process consists of two stages, namely, the multimodal alignment stage and the multimodal instruction tuning stage.

**Multimodal Alignment**. The first stage aims to align the feature spaces of the vision model and the language model. We utilize massive image-caption pairs for training, drawing from datasets such as Conceptual Captions (Changpinyo et al., 2021; Sharma et al., 2018) and SBU (Ordonez et al., 2011). During training, only the Q-Former connecting the vision and language models is optimized while other model components remain frozen.

**Multimodal Instruction Following**. The second stage further trains the joint model on multimodal instruction tuning data to improve its instruction-following capabilities. The Q-Former model and LLM are optimized in this stage. In addition to TextBind data, we also explore existing multimodal instruction data including MultiInstruct (Xu et al., 2023b), MiniGPT-4 (Zhu et al., 2023), LLaVA (Liu et al., 2023a), and Shikra (Chen et al., 2023a).

## 6 Experiments

To verify the effectiveness of the proposed methods, we carry out quantitative evaluations against a set of recent baselines. Our quantitative evaluations are divided into three parts: textual response generation, image generation, and a holistic evaluation of multimodal instruction following.

### TextBindEval

To facilitate comprehensive and dedicated evaluation for instruction following in realistic scenarios, we construct a new dataset named TextBindEval. TextBindEval is initially generated through the automatic pipeline of TextBind (§3) and subsequently refined by human annotators. These annotators are tasked with discarding low-quality examples or rectifying amendable issues such as revising incoherent or hallucinated content. After a rigorous review, we establish an evaluation dataset comprising 278 conversations in total.

Figure 4: The architecture of MIM. It integrates a vision model, a language model, and a stable diffusion model. MIM is able to process multi-turn interleaved multimodal inputs and outputs.
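To make the language-as-medium design of §5.1 concrete: since MIM emits image descriptions between <start> and <end> tokens, a generated response can be post-processed roughly as in the sketch below. This is an illustration only; `text_to_image` is a placeholder for any text-conditioned image generator, not the paper's exact interface.

```python
import re

def render_interleaved(response_text, text_to_image):
    """Split a generated response on <start>...<end> spans and synthesize
    an image for each captured description with a diffusion model."""
    parts = re.split(r"<start>(.*?)<end>", response_text, flags=re.DOTALL)
    out = []
    for idx, chunk in enumerate(parts):
        if idx % 2 == 0:            # even indices: plain conversational text
            if chunk.strip():
                out.append(("text", chunk))
        else:                       # odd indices: captured image descriptions
            out.append(("image", text_to_image(chunk)))
    return out
```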
### Textual Response Generation

**Setup**. We consider each assistant turn of each conversation in TextBindEval as a test point. All its preceding context is treated as input (which may contain interleaved images and text), and the goal is to generate a coherent and helpful response. We measure the response quality using a set of reference-based evaluation metrics such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and BERTScore (Zhang et al., 2020). We also report the Diversity (Su et al., 2022) scores of the generated responses. For simplicity, we replace any image in the responses with a special token <image>. For a fair comparison, we compare different MIM models trained on different datasets (Xu et al., 2023b; Zhu et al., 2023; Liu et al., 2023a; Chen et al., 2023a)4 and GILL (Koh et al., 2023a)5. The implementation details are shown in Appx. H.

Footnote 4: The original papers of these datasets used distinct model architectures such as different pretrained language models. One common feature is that none of them supports image generation.

Footnote 5: For a fair comparison, we replicate GILL using the same image-captioning data as used to train our models.

**Results**. As shown in Tab. 3, the MIM model trained on TextBind outperforms all other baselines by wide margins across all evaluation metrics. The results suggest that more realistic and diverse training data such as TextBind is necessary for tackling open-world tasks, which cannot be well supported by existing template-based and VQA-like datasets. Nevertheless, we also find that the performance can be further improved when combining different datasets, indicating that there is a complementary relationship between TextBind and existing datasets.

### Image Generation

**Setup**. The models trained on existing datasets, i.e., the baselines in §6.2 except for GILL, are incapable of generating images. To showcase the image generation capabilities of our model, we compare it with Stable Diffusion XL (SD-XL) (Podell et al., 2023) and GILL (Koh et al., 2023a). In addition, we present the results of the two model variants described in §5.1, namely, **Q-Former as Medium** and **Q-Former with Prompt Tokens as Medium**. We take each image from the assistant in TextBindEval as a test point. All its preceding context is taken as input, and the models are forced to output an image. We take the original images in TextBindEval as references. Following Koh et al. (2023a), we evaluate image generation with two reference-based metrics: (1) **CLIP Similarity**. We use the CLIP vision encoder to produce image representations and compute the cosine similarity between generated images and reference images. A higher score means better semantic similarity. (2) **Learned Perceptual Image Patch Similarity (LPIPS)**. LPIPS (Zhang et al., 2018) measures the distance between generated images and reference images. A lower score means that images are more similar in perceptual space.

**Results**. To gain further insights into the multi-turn instruction-following abilities, we group different test points by the number of previous conversation turns. The results are shown in Tab. 5. As seen, MIM generally achieves better performance than SD-XL and GILL across different turns and evaluation metrics. Importantly, the performance gaps are enlarged as the number of turns increases. This indicates that our model exhibits a better understanding of multi-turn conversations. Compared to the two model variants, MIM is substantially better.
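As a side note on the metric itself, the CLIP similarity above reduces to a mean cosine similarity over CLIP image features; a minimal sketch, assuming the features have been extracted upstream with a CLIP vision encoder:

```python
import numpy as np

def clip_similarity(gen_feats, ref_feats):
    """Mean cosine similarity between CLIP features of generated and
    reference images; both arrays have shape (num_images, feat_dim)."""
    g = gen_feats / np.linalg.norm(gen_feats, axis=-1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=-1, keepdims=True)
    return float(np.mean(np.sum(g * r, axis=-1)))
```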
Our case study reveals that the disparity stems from the _one-to-many_ nature of image generation in real-world conversations. Unlike generating images for explicit descriptions, there can exist numerous distinct images for a given conversation context. Operating in the hidden space may inadvertently average all possibilities, resulting in ambiguous or noisy images. However, MIM mitigates the _one-to-many_ issue by taking full advantage of the autoregressive generation of language models for decision-making.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Methods** & **BLEU-2** & **BLEU-4** & **ROUGE-2** & **ROUGE-L** & **BERTScore** & **Diversity** \\ \hline GILL (Koh et al., 2023a) & 3.97 & 1.44 & 4.61 & 13.97 & 0.847 & 0.902 \\ \hline MultiInstruct (Xu et al., 2023b)6 & 7.16 & 2.27 & 3.16 & 10.60 & 0.830 & 0.654 \\ MiniGPT-4 (Zhu et al., 2023) & 9.24 & 3.29 & 6.77 & 17.56 & 0.858 & 0.658 \\ LLaVA (Liu et al., 2023a) & 12.16 & 4.41 & 8.66 & 19.79 & 0.872 & 0.852 \\ Shikra (Chen et al., 2023a) & 10.37 & 3.83 & 7.79 & 18.63 & 0.864 & 0.722 \\ TextBind & **24.45** & **11.83** & **15.45** & **28.69** & **0.891** & **0.927** \\ \hline Mix & 27.64 & 14.49 & 17.90 & 31.22 & 0.896 & 0.912 \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluation of textual response generation. Mix represents the mixture of MultiInstruct, MiniGPT-4, LLaVA, Shikra, and TextBind.

### Holistic Evaluation

In addition to the above automatic evaluation, we also conduct a holistic evaluation of instruction-following abilities through human annotation.

**Setup**. We randomly sample 100 contexts from TextBindEval and evaluate the responses generated by MIM and two representative baselines, LLaVA (Liu et al., 2023a) and GILL (Koh et al., 2023a). We instruct three human annotators to score the quality of each generated response on a Likert scale from 1 to 4 (the details of the evaluation guidelines are in Appx. I).

**Results**. As shown in Table 4, MIM achieves higher human scores than GILL and LLaVA, indicating its remarkable generation capability in open-world multi-modal conversations. In addition, Krippendorff's \(\alpha=0.75\) indicates a high inter-annotator agreement.

### Results on Existing Benchmarks

Finally, we report the results on two popular multimodal benchmarks, MME (Fu et al., 2023) and MMBench (Liu et al., 2023b). As shown in Tab. 6, TextBind obtains relatively lower scores than the other datasets. The reason stems from the intrinsic difference between TextBind and the two benchmarks. TextBind focuses more on realistic instructions (e.g., create a story based on the images, give some suggestions for having fun in the winter). In contrast, MME and MMBench focus more on VQA questions, e.g., who is this person, what is the color of the object, which are more similar to the data in MultiInstruct, LLaVA, and Shikra. For example, the model trained on MultiInstruct achieves the best performance on MME, though it displays the worst performance in open-world scenarios in Tab. 3. Another interesting observation is that the mix of all datasets attains the best overall performance on MMBench, indicating that different datasets are complementary.

\begin{table} \begin{tabular}{l c c} \hline \hline **Methods** & **AVG. Score** & **Percent.** (\(\geq 3\)) \\ \hline GILL & \(1.71\) & \(0.19\) \\ LLaVA & \(2.93\) & \(0.89\) \\ \hline MIM & \(3.39\) & \(0.70\) \\ \hline \hline \end{tabular} \end{table} Table 4: Averaged human scores and the percentage of averaged scores \(\geq 3\).
Krippendorff’s \(\alpha=0.75\).

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**CLIP Similarity** (\(\uparrow\))} & \multicolumn{3}{c}{**LPIPS** (\(\downarrow\))} \\ \cline{2-9} **Model** & Turn-1 & Turn-2 & Turn-3 & Turn-1 & Turn-2 & Turn-3 \\ \hline SD-XL (Podell et al., 2023) & 0.612 & 0.599 & 0.608 & **0.712** & 0.735 & 0.735 \\ GILL (Koh et al., 2023a) & 0.569 & 0.550 & 0.530 & **0.712** & 0.734 & 0.742 \\ \hline Q-Former as Medium & 0.558 & 0.568 & 0.592 & 0.717 & 0.728 & 0.729 \\ Q-Former with Prompt Tokens as Medium & 0.566 & 0.571 & 0.606 & 0.718 & 0.727 & 0.732 \\ \hline MIM & **0.640** & **0.645** & **0.673** & **0.712** & **0.720** & **0.726** \\ \hline \hline \end{tabular} \end{table} Table 5: Evaluation of image generation.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**MME**} & \multicolumn{6}{c}{**MMBench**} \\ \cline{2-10} **Training Dataset** & Perception & Cognition & LR & AR & RR & FP-S & FP-C & CP & Overall \\ \hline MultiInstruct (Xu et al., 2023b) & **1099.16** & **302.50** & 11.93 & 39.79 & 28.64 & 28.75 & 23.20 & 41.91 & 31.54 \\ MiniGPT-4 (Zhu et al., 2023) & 0.00 & 0.00 & **14.20** & 50.52 & 17.37 & 32.75 & 15.20 & 41.70 & 31.87 \\ LLaVA (Liu et al., 2023a) & 683.28 & 267.86 & 7.955 & 57.11 & 31.46 & 42.50 & 31.10 & **56.60** & **42.10** \\ Shikra (Chen et al., 2023a) & 166.87 & 2.86 & 18.18 & **64.01** & 22.54 & 39.75 & 31.20 & 50.43 & 41.10 \\ TextBind & 549.00 & 226.43 & 11.93 & 36.33 & 6.57 & 23.25 & 6.00 & 33.83 & 22.64 \\ Mix & 1023.33 & 255.00 & 13.64 & 56.75 & **37.09** & **43.50** & **42.80** & 55.32 & **44.94** \\ \hline \hline \end{tabular} \end{table} Table 6: Results on MME (Fu et al., 2023) and MMBench (Liu et al., 2023b).

## 7 Conclusion

In conclusion, the introduction of the TextBind framework has opened new doors for enhancing large language models with multi-turn interleaved multimodal instruction-following capabilities. By requiring only image-caption pairs, our approach significantly reduces the need for high-quality exemplar data, making it a more accessible and scalable solution for various real-world tasks. The MIM architecture seamlessly integrates image encoder and decoder models, enabling the model to effectively handle interleaved image-text inputs and outputs. Comprehensive quantitative and qualitative experiments demonstrate the remarkable performance of MIM, trained on TextBind, when compared to recent baselines in open-world multimodal conversations.
2309.07504
Connected Autonomous Vehicle Motion Planning with Video Predictions from Smart, Self-Supervised Infrastructure
Connected autonomous vehicles (CAVs) promise to enhance safety, efficiency, and sustainability in urban transportation. However, this is contingent upon a CAV correctly predicting the motion of surrounding agents and planning its own motion safely. Doing so is challenging in complex urban environments due to frequent occlusions and interactions among many agents. One solution is to leverage smart infrastructure to augment a CAV's situational awareness; the present work leverages a recently proposed "Self-Supervised Traffic Advisor" (SSTA) framework of smart sensors that teach themselves to generate and broadcast useful video predictions of road users. In this work, SSTA predictions are modified to predict future occupancy instead of raw video, which reduces the data footprint of broadcast predictions. The resulting predictions are used within a planning framework, demonstrating that this design can effectively aid CAV motion planning. A variety of numerical experiments study the key factors that make SSTA outputs useful for practical CAV planning in crowded urban environments.
Jiankai Sun, Shreyas Kousik, David Fridovich-Keil, Mac Schwager
2023-09-14T08:15:31Z
http://arxiv.org/abs/2309.07504v1
Connected Autonomous Vehicle Motion Planning with Video Predictions from Smart, Self-Supervised Infrastructure ###### Abstract Connected autonomous vehicles (CAVs) promise to enhance safety, efficiency, and sustainability in urban transportation. However, this is contingent upon a CAV correctly predicting the motion of surrounding agents and planning its own motion safely. Doing so is challenging in complex urban environments due to frequent occlusions and interactions among many agents. One solution is to leverage smart infrastructure to augment a CAV's situational awareness; the present work leverages a recently-proposed "Self-Supervised Traffic Advisor" (SSTA) framework of smart sensors that teach themselves to generate and broadcast useful video predictions of road users. In this work, SSTA predictions are modified to predict future occupancy instead of raw video, which reduces the data footprint of broadcast predictions. The resulting predictions are used within a planning framework, demonstrating that this design can effectively aid CAV motion planning. A variety of numerical experiments study the key factors that make SSTA outputs useful for practical CAV planning in crowded urban environments. ## I Introduction Connected and autonomous vehicles (CAVs) have the potential to make urban transportation safer, more efficient, and more sustainable. A key challenge towards realizing this goal is that of motion planning; traditional methods rely on static maps and real-time sensor data, and cannot always yield accurate predictions of the future state of the environment due to occlusions or other onboard sensing limitations. Hence, smart infrastructure has been proposed as a way to augment CAV capabilities. However, collecting data to train such smart infrastructure can often require prohibitive amounts of manual labeling. Therefore, there is growing interest in using self-training smart infrastructure to improve CAV safety and performance. The present work considers a specific type of edge device proposed in our prior work: the Self-Supervised Traffic Advisor (SSTA) [1]. This proposed device automatically processes video data from traffic cameras to support CAV motion planning. SSTAs are a scalable edge solution that avoid extensive hand-labeling and can provide large-scale connected coverage of an urban environment through self-training to share data. Our previous work studied the challenges of self-training and networking, whereas this work assesses the potential for SSTAs to benefit CAV motion planning and control. _Contributions:_ First, we develop a new output format for SSTAs as an alternative to predicting raw video frames. Specifically, we propose to predict Time to Next Occupancy (T2NO) and Time to Next Departure (T2ND), which describe when each portion of an SSTA's field of view will next be occupied and subsequently unoccupied by a road user. We abbreviate these together as "T2NO/D." Second, we propose an approach to CAV motion planning that leverages our novel T2NO/D outputs from SSTAs. Third, we numerically evaluate the effectiveness of autonomously predicting T2NO/D in comparison to raw video within a CAV motion planning framework. We find that our proposed method can provide an effective smart infrastructure perception output that is directly amenable to CAV planning and control. ## II Related Work SSTAs [1], and this paper, lie at the intersection of CAV infrastructure, networked learning, and video prediction. 
_CAV Infrastructure:_ To maximize their utility, CAVs should communicate with self-driving and human-driven vehicles, roadside infrastructure, and other road users; many reviews of this literature are available [2, 3, 4, 5]. Given bandwidth and connectivity limitations, it is critical to decide what, and when, to communicate. For example, Gopalswamy et al. [6] propose to offload CAV responsibilities for situational awareness and information sharing via vehicle-to-infrastructure (V2I) communication. Alternatively, short-range communications allow vehicle-to-vehicle (V2V) transmission of information such as speed, heading, and brake status [7, 8]. Since V2V alone can cause network-wide instabilities as the number of vehicles grows [9], it is critical to explore alternative means of widespread, scalable communication that can improve large-scale traffic metrics such as safety and efficiency [10]. In this work, we propose a scalable infrastructure solution and study its potential for aiding CAVs in an isolated V2I manner; we plan to explore mixing V2V and V2I in future work.

_Networked Learning:_ The concept of networked learning has been recently explored in the context of connected and autonomous vehicles (CAVs) [11]. In this paradigm, multiple CAVs can cooperate to improve their individual driving policies through message passing and sharing of information [12, 13, 14]. We note that CAVs can cooperate to perceive their surroundings [13] and to reduce traffic congestion and improve energy efficiency [15, 16, 17]. In this work, we instead consider how networked learning in _infrastructure_ can be used to improve perception and planning for CAVs.

_Video Prediction:_ Video prediction has the potential to impact traffic management, motion planning for autonomous vehicles, and autonomous surveillance [18].
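To make the T2NO/D output format proposed in §I concrete: given a stack of predicted occupancy grids, each cell's time to next occupancy and its subsequent time of departure can be computed roughly as in the sketch below. This is an illustration under our own assumptions (grid layout, time step `dt`, and the use of the horizon as a "never occupied/freed" sentinel), not the paper's specification.

```python
import numpy as np

def t2no_t2nd(occ, dt):
    """Per-cell Time-to-Next-Occupancy (T2NO) and Time-to-Next-Departure
    (T2ND) from predicted binary occupancy grids occ[t, i, j], t = 0..T-1."""
    occ = occ.astype(bool)
    T, H, W = occ.shape
    t2no = np.full((H, W), T * dt, dtype=float)   # sentinel: free all horizon
    t2nd = np.full((H, W), T * dt, dtype=float)
    for i in range(H):
        for j in range(W):
            hits = np.flatnonzero(occ[:, i, j])
            if hits.size == 0:
                continue                          # never occupied in horizon
            t2no[i, j] = hits[0] * dt             # first occupied time step
            frees = np.flatnonzero(~occ[hits[0]:, i, j])
            if frees.size:                        # first time it frees again
                t2nd[i, j] = (hits[0] + frees[0]) * dt
    return t2no, t2nd
```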
2308.00142
Semi-Supervised Laplace Learning on Stiefel Manifolds
Motivated by the need to address the degeneracy of canonical Laplace learning algorithms in low label rates, we propose to reformulate graph-based semi-supervised learning as a nonconvex generalization of a \emph{Trust-Region Subproblem} (TRS). This reformulation is motivated by the well-posedness of Laplacian eigenvectors in the limit of infinite unlabeled data. To solve this problem, we first show that a first-order condition implies the solution of a manifold alignment problem and that solutions to the classical \emph{Orthogonal Procrustes} problem can be used to efficiently find good classifiers that are amenable to further refinement. To tackle refinement, we develop the framework of Sequential Subspace Optimization for graph-based SSL. Next, we address the criticality of selecting supervised samples at low-label rates. We characterize informative samples with a novel measure of centrality derived from the principal eigenvectors of a certain submatrix of the graph Laplacian. We demonstrate that our framework achieves lower classification error compared to recent state-of-the-art and classical semi-supervised learning methods at extremely low, medium, and high label rates.
Chester Holtz, Pengwen Chen, Alexander Cloninger, Chung-Kuan Cheng, Gal Mishne
2023-07-31T20:19:36Z
http://arxiv.org/abs/2308.00142v2
# Semi-Supervised Laplace Learning on Stiefel Manifolds

###### Abstract

Motivated by the need to address the degeneracy of canonical Laplace learning algorithms in low label rates, we propose to reformulate graph-based semi-supervised learning as a nonconvex generalization of a _Trust-Region Subproblem_ (TRS). This reformulation is motivated by the well-posedness of Laplacian eigenvectors in the limit of infinite unlabeled data. To solve this problem, we first show that a first-order condition implies the solution of a manifold alignment problem and that solutions to the classical _Orthogonal Procrustes_ problem can be used to efficiently find good classifiers that are amenable to further refinement. Next, we address the criticality of selecting supervised samples at low-label rates. We characterize informative samples with a novel measure of centrality derived from the principal eigenvectors of a certain submatrix of the graph Laplacian. We demonstrate that our framework achieves lower classification error compared to recent state-of-the-art and classical semi-supervised learning methods at extremely low, medium, and high label rates. Our code is available on github1.

Footnote 1: anonymized for submission

## 1 Introduction

Semi-supervised learning is an important field in machine learning and statistics. Semi-supervised methods leverage both labeled and unlabeled data for tasks such as classification and regression. In semi-supervised learning (SSL), we are given a partially-labeled training set consisting of labeled examples and unlabeled examples. The goal is to leverage the unlabeled examples to learn a predictor that is superior to a predictor that is trained using the labeled examples alone. This setup is motivated by the high cost of obtaining annotated data in practical problems. Consequently, we are typically interested in the regime where the number of labeled examples is significantly smaller than the number of training points. For problems where very few labels are available, the geometry of the unlabeled data can be used to significantly improve the performance of classic machine learning models. Additionally, the choice of labeled vertices is also a critical factor in this regime. In this work, we introduce a unified framework for graph-based semi-supervised and active learning at low label rates.

### Graph-based semi-supervised learning

A seminal work in graph-based semi-supervised learning is Laplace learning [56], which seeks a harmonic function that extends provided labels over the unlabeled vertices. Laplace learning and its variants (notably, Poisson learning [12]) have been widely applied in semi-supervised and graph-structured learning [53; 51; 5; 49]. In this work, we improve upon the state-of-the-art for graph-based semi-supervised learning at very low label rates. Classical Laplace learning and label propagation algorithms yield poor classification results [36; 4] in this regime. This is typically attributed to the fact that the solutions develop localized spikes near the labeled vertices and are nearly constant for vertices distant from labels. In other words, Laplace learning-based algorithms often fail to adequately propagate labels over the graph, given few labeled nodes. To address this issue, recent work has suggested imposing small adjustments to the classical Laplace learning procedure.
For example, \(p\)-Laplace learning [4; 41; 10; 11] for \(p>2\), and particularly for \(p=\infty\), often yields superior empirical performance compared to Laplace learning at low label rates [16]. Other relevant methods for addressing low label rate problems include higher-order Laplacian regularization [54] and spectral classification [7; 55].

### Active learning on graphs

The majority of existing active learning strategies typically involve evaluating the informativeness of unlabeled samples. For example, one of the most commonly used query frameworks is uncertainty sampling [38; 35; 33; 25], where the active learner queries the data samples that it is most uncertain how to label. Most general uncertainty sampling strategies use some notion of margin as a measure of uncertainty [38; 35]. Many active learning algorithms that excel at low label rates also employ strategies based on the connectivity of the graph, e.g. the degree centrality or cut structure [13; 19; 31]. Related work includes geometric landmarking methods, which seek to maximize coverage of the collected samples. For example, [40; 24] propose geodesic distance-based strategies to greedily add new landmarks with large cumulative geodesic distance to existing landmarks. However, these methods are computationally prohibitive on most benchmarks. Particularly relevant to our work are algebraic landmarking methods. In particular, [48] proposed an algebraic reconstruction error bound based on the Gershgorin circle theorem (GCT) [17] and an associated greedy algorithm based on this bound. However, this method suffers from high complexity due to logarithmic computations of a large matrix.

### Contribution

In this work, we propose to solve a natural semi-supervised extension of Laplacian Eigenmaps and spectral cuts, which are well-posed in the limit of unlabeled data. Our extension is initially motivated by the optimization-based perspective of Laplacian Eigenmaps as a Rayleigh quotient minimization problem over all labeled and unlabeled vertices. We show that a natural partitioning of the problem yields a more general quadratically constrained quadratic program over the unlabeled vertices. We then generalize the sequential subspace method (SSM) framework, originally proposed to solve similar problems in \(\mathbb{R}^{n}\), to \(\mathbb{R}^{n\times k}\), and we develop an associated active learning scheme. To summarize, our contributions are: 1. We introduce a natural formulation of graph semi-supervised learning as a rescaled quadratic program on a compact Stiefel Manifold, i.e. a generalization of a _Trust-Region Subproblem_. 2. We describe a scalable approximate method, globally convergent iterative methods, and a graph cut-based refinement scheme and demonstrate robustness in a variety of label rate regimes. 3. We introduce a score to characterize informative samples based on the principal eigenvectors of the _grounded Laplacian_. 4. We compare our approach to competing semi-supervised graph learning algorithms and demonstrate state-of-the-art performance in low, medium, and high label rate settings on MNIST, Fashion-MNIST, and CIFAR-10. The rest of the paper is organized as follows. In Section 2 we briefly introduce Laplacian Eigenmaps and our supervised variant, and then provide a detailed motivation for the algorithm. Our formulation is presented in Section 3. Approximate and iterative algorithms are presented in Section 3.1 and our approach to active learning at low label rates is presented in Section 4.
In Section 5 we present numerical experiments. We conclude and discuss future work in Section 6.

## 2 Preliminaries and Related Work

We assume the data can be viewed as lying on a graph, such that each vertex is a data point. Let \(\mathcal{V}=\{v_{1},v_{2},\ldots,v_{M}\}\) denote the \(M\) vertices of the graph with edge weights \(w_{ij}\geq 0\) between \(v_{i}\) and \(v_{j}\). We assume that the graph is symmetric, so \(w_{ij}=w_{ji}\). We define the degree \(d_{i}=\sum_{j=1}^{M}w_{ij}\). For a multi-class classification problem with \(k\) classes, we let the standard basis vector \(e_{i}\in\mathbb{R}^{k}\) represent the \(i\)-th class (i.e. a "one hot encoding"). Without loss of generality, we assume the first \(m\) vertices \(l=\{v_{1},v_{2},\ldots,v_{m}\}\) are given labels \(y_{1},y_{2},\ldots,y_{m}\in\{e_{1},e_{2},\ldots,e_{k}\}\), where \(m\ll M\). Let \(n\) denote the number of unlabeled vertices, i.e. \(n=M-m\). The problem of graph-based semi-supervised learning is to smoothly propagate the labels over the unlabeled vertices \(\mathcal{U}=\{v_{m+1},v_{m+2},\ldots,v_{M}\}\). The compact _Stiefel Manifold_ is denoted \[\text{St}(n,k)=\{X\in\mathbb{R}^{n\times k}:X^{\top}X=I\}. \tag{1}\] Note that the projection of a matrix \(X\in\mathbb{R}^{n\times k}\) onto \(\text{St}(n,k)\), denoted \([X]_{+}:=\arg\min\{||X_{s}-X||_{F}:X_{s}\in\text{St}(n,k)\}\), is given by \[[X]_{+}=UV^{\top}, \tag{2}\] where \(X=U\Sigma V^{\top}\) is the singular value decomposition. Given a graph and a set of labeled vertices, the Laplace learning algorithm [56] extends the labels over the graph by solving the following problem \[x(v_{i})=y_{i}\ \ \text{if }1\leq i\leq m,\qquad\mathcal{L}x(v_{i})=0\ \ \text{if }m+1\leq i\leq M, \tag{3}\] where \(\mathcal{L}\) is the unnormalized graph Laplacian given by \(\mathcal{L}=D-W\), \(D\) is a diagonal matrix whose elements are the node degrees, and \(x:\mathcal{V}\rightarrow\mathbb{R}^{k}\). The prediction for vertex \(v_{i}\) is determined by the largest component of \(x(v_{i})\): \[\operatorname*{arg\,max}_{j\in\{1,\ldots,k\}}\{x_{j}(v_{i})\}. \tag{4}\] Note that Laplace learning is also called _label propagation (LP)_ [57], since the Laplace equation, eq. (3), can be solved by repeatedly replacing \(x(v_{i})\) with the weighted average of its neighbors. The solution of Laplace learning, eq. (3), is the minimizer of the following gradient regularized variational problem with label constraints \(x(v_{i})=y_{i}\): \[\min_{x\in\ell^{2}(\mathcal{V})}\Big\{\,||\nabla x||_{\ell^{2}(\mathcal{V}^{2})}^{2}\;\big|\;x(v_{i})=y_{i},\ 1\leq i\leq m\Big\} \tag{5}\]

## 3 Spectral Embeddings with Supervision

In Laplacian Eigenmaps [8], one seeks an embedding of the vertices via the eigenfunctions of the Laplacian corresponding to the smallest nontrivial eigenvalues. Equivalently, this can be expressed as the following _Quadratically Constrained Quadratic Program_ (QCQP) over the vertices of the graph: \[\min_{X_{0}}\langle X_{0},\mathcal{L}X_{0}\rangle\quad\text{s.t. }X_{0}^{\top}X_{0}=I,\ \ \mathbf{1}^{\top}X_{0}=0 \tag{6}\] where \(\langle A,B\rangle\) is the trace of the matrix \(A^{\top}B\) and \(\mathbf{1}\) is the all-ones vector. The notation \(X_{0}\in\mathbb{R}^{M\times k}\) is the mapping of the \(M\) vertices to a \(k\)-dimensional space. In the case where \(k=1\), eq. (6) is also known in the numerical analysis literature as a _Rayleigh quotient minimization problem_ [18].
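For concreteness, both the Stiefel projection of eq. (2) and the eigenvector solution of eq. (6) (discussed next) take only a few lines. The following is a dense-matrix sketch; for large graphs one would instead use sparse eigensolvers such as `scipy.sparse.linalg.eigsh`.

```python
import numpy as np

def stiefel_projection(X):
    """Nearest point on St(n, k) in Frobenius norm, via the SVD, as in eq. (2)."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

def laplacian_eigenmaps(L, k):
    """Global solution of eq. (6): eigenvectors of L for the k smallest
    nontrivial eigenvalues (index 0 is the constant vector on a connected graph)."""
    _, V = np.linalg.eigh(L)
    return V[:, 1:k + 1]
```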
Despite its nonconvexity, a unique (up to orthogonal transformations) global solution is given by the set of eigenvectors of \(\mathcal{L}\) corresponding to the smallest \(k\) nontrivial (nonzero) eigenvalues of \(\mathcal{L}\). We first extend this framework with supervision, similarly to Laplace learning in eq. (5). Additionally, to facilitate the supervised decomposition, we rescale \(I\) uniformly by \(p=M/k\), the balanced proportion of samples associated with each class: \[\min_{X_{0}}\langle X_{0},\mathcal{L}X_{0}\rangle\quad\text{s.t. }X_{0}^{\top}X_{0}=pI,\ \mathbf{1}^{\top}X_{0}=0,\ (X_{0})_{i}=y_{i},\ i\in[m] \tag{7}\] The associated prediction is then \(\ell(x_{i})=\arg\max_{j\in\{1,\ldots,k\}}(X_{0})_{ij}\). Next, we show how supervision naturally leads to a partitioning of the problem. We denote the submatrices of \(X_{0}\) and \(\mathcal{L}\) corresponding to the \(n\) unlabeled vertices \(\mathcal{U}\subseteq\mathcal{V}\) and \(m\) labeled vertices \(l\subseteq\mathcal{V}\) as \(X_{\mathcal{U}}\), \(X_{l}\) and \(L_{\mathcal{U}}\), \(L_{l}\), respectively. More concretely, \(\mathcal{L}=\left[\begin{smallmatrix}L_{l}&L_{l\mathcal{U}}\\ L_{\mathcal{U}l}&L_{\mathcal{U}}\end{smallmatrix}\right]\) and \(X_{0}=\left[\begin{smallmatrix}X_{l}\\ X_{\mathcal{U}}\end{smallmatrix}\right]\). Then, in conjunction with considering \(X_{l}\) fixed, the problem in eq. (7) may be expressed as \[\min_{X_{\mathcal{U}}}\langle X_{\mathcal{U}},L_{\mathcal{U}}X_{\mathcal{U}}\rangle-\langle X_{\mathcal{U}},B_{0}\rangle\quad\text{s.t. }X_{\mathcal{U}}^{\top}X_{\mathcal{U}}=C_{\mathcal{U}},\ \mathbf{1}^{\top}X_{\mathcal{U}}=-r^{\top} \tag{8}\] where \(B_{0}=2\cdot L_{\mathcal{U}l}X_{l}\), \(C_{\mathcal{U}}=pI-X_{l}^{\top}X_{l}=(p-\widetilde{p})I\), where \(\widetilde{p}=m/k\), and \(r=(\mathbf{1}^{\top}X_{l})^{\top}=\widetilde{p}\,\mathbf{1}\in\mathbb{R}^{k}\) are fixed parameters, and \(X_{\mathcal{U}}\) are the decision variables. It should also be mentioned that \(L_{\mathcal{U}}\) is known in the literature as a _grounded Laplacian_. In general, the quadratic equality constraint poses a significant challenge from an optimization standpoint. We propose to address this problem directly by solving an equivalent rescaled problem. By introducing terms to eliminate the linear constraint, we show how the problem may be rescaled and efficiently and robustly solved as a quadratic program on a compact _Stiefel Manifold_. The associated solution to this problem can then be used to determine the labels of the unlabeled vertices, as in Laplace learning (eq. (4)). To eliminate the linear constraint, we introduce two adjustments: first, let \((X)_{i}=(X_{\mathcal{U}})_{i}+\frac{1}{n}r^{\top}\) denote a row-wise centering transformation with respect to the labeled nodes. This yields the new constraint \(\mathbf{1}^{\top}X=0\) and the modified scaling \(C=C_{\mathcal{U}}-\frac{1}{n}rr^{\top}\) for \(X\). Second, we introduce the projection \(P=I-\frac{1}{n}\mathbf{1}\mathbf{1}^{\top}\) onto the subspace orthogonal to the vector \(\mathbf{1}\in\mathbb{R}^{n}\), i.e., \(\mathbf{1}^{\top}(PX)=0\), which projects vectors onto the set of mean-zero vectors. To obtain a solution limited to this subspace, we introduce the substitutions \(B=P\bigl(B_{0}-\frac{1}{n}L_{\mathcal{U}}\mathbf{1}r^{\top}\bigr)\) and \(L=PL_{\mathcal{U}}P\), which implies \(\mathbf{1}^{\top}B=0\). Consider the substitution \(X\gets XC^{1/2}\). The equivalent, rescaled problem is then \[\min_{X:X\in\text{St}(n,k)}\langle X,LXC\rangle-\langle X,BC^{1/2}\rangle.
\tag{9}\] Note that this problem is a generalization of well-known problems that arise in trust-region methods, optimization of a nonconvex quadratic over a unit ball or sphere [42; 15], i.e. problems of the form \[\min_{x\in\mathbb{R}^{n}:||x||=1}\langle x,Lx\rangle-\langle x,b\rangle.\] We define the Lagrangian of eq. (9) where \(\Lambda\in\mathbb{R}^{k\times k}\) are the Lagrange multipliers: \[\langle X,LXC\rangle-\langle X,BC^{1/2}\rangle-\langle\Lambda,(X^{\top}X-I)\rangle. \tag{10}\] The first-order condition is then \[LXC=BC^{1/2}+X\Lambda \tag{11}\] for some \(\Lambda\). Solutions \(X\) that satisfy eq. (11) are _critical points_ or _stationary points_. In general, there could exist many critical points that satisfy this condition. In the appendix we show that at these "stationary points" (maximizers, minimizers, or saddle points), (1.) the eigenvalues of \(\Lambda\) characterize the optimality of \(X\) and (2.) finding good critical points necessitates computation of \(L\)'s eigenvectors. ### Semi-supervised spectral learning algorithms In this section, we introduce approximate and iterative methods to solve eq. (9). In theory, as we show in the appendix, one can start with an arbitrary initialization to obtain a critical point of eq. (9) using a variety of projection- or retraction-based gradient methods, with the descent direction given by the gradient of eq. (9) and projection given by eq. (2). However, the empirical rate of convergence depends significantly on the initialization of the embedding matrix \(X\). In order to improve convergence of our method, we first introduce and motivate an efficient method based on Procrustes Analysis [46] to approximately compute critical points of the _unscaled_ objective (\(C=I\)). This approximation is appropriate in the limit of few labeled examples or unlimited unlabeled data: since \(C=(p-\widetilde{p})I-\frac{\widetilde{p}^{2}}{n}\mathbf{1}\mathbf{1}^{\top}\), we have \(C\to pI\) in the limit of \(\frac{m}{n}\to 0\). ### Efficient approximation via Orthogonal Procrustes Here we propose an efficient way to compute approximate critical points of eq. (9). First we solve the canonical eigenvalue problem \(\min_{X}\text{tr}(X^{\top}LX)\) subject to a constraint on the second moment of \(X\), namely \(X^{\top}X=I\); its solution is given by the eigenvectors of \(L\) corresponding to the smallest eigenvalues. Second, we appropriately transform the solution so that \(X^{\top}B\) is positive definite (i.e. satisfies a necessary condition for first-order optimality). **Proposition 3.1** (Definiteness conditions of \(X^{\top}B\)).: _Assume \(C=I\). Note the first term of the objective in eq. (9) satisfies the invariance \(\langle X,LX\rangle=\langle\widetilde{X},L\widetilde{X}\rangle\), where \(\widetilde{X}=XQ\) for any orthogonal \(Q\in\mathbb{R}^{k\times k}\). Suppose \(X\) is a local minimizer of eq. (9). Then \(X^{\top}B\) is symmetric and \(X^{\top}B\succcurlyeq 0\)._ A consequence of this is the following: let \(X^{\top}B=U_{B}D_{B}V_{B}^{\top}\) be the SVD and let \(Q=U_{B}V_{B}^{\top}\). Algorithmically, this implies that replacing \(X\) with \(XQ\) decreases the objective of eq. (9) (assuming \(C=I\)). Note, in practice, we can additionally rescale predictions by taking \(X\gets XC^{1/2}\) to properly observe the constraint on the second moment of \(X\). This step can be interpreted as an alignment step, where we find an orthogonal transformation \(Q\) that aligns unlabeled vertices with their neighboring labeled vertices, and apply this transformation to all unlabeled vertices.
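This alignment step admits a short sketch (ours; it assumes \(X\) holds the orthonormal eigenvector embedding and \(B\) is formed as above, and `psd_sqrt` is a helper we introduce for the optional \(C^{1/2}\) rescaling):

```python
import numpy as np

def psd_sqrt(C):
    # symmetric PSD square root C^{1/2} via an eigendecomposition
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def procrustes_align(X, B, C=None):
    """Approximate critical point via Prop. 3.1: choose the orthogonal Q
    that makes (XQ)^T B symmetric PSD, i.e. the Procrustes solution."""
    Ub, _, Vbt = np.linalg.svd(X.T @ B)   # SVD of the k x k matrix X^T B
    XQ = X @ (Ub @ Vbt)                   # aligned, still feasible embedding
    return XQ if C is None else XQ @ psd_sqrt(C)   # optional X <- X C^{1/2}
```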
We briefly describe the connection with Orthogonal Procrustes Analysis [46]. Let \(X\) be feasible, i.e. \(X^{\top}X=I\). Note that the invariance is nothing but \(\text{tr}(X^{\top}LX)=\langle X,LX\rangle=\langle XQ,LXQ\rangle\) for any orthogonal \(Q\). Thus, \[\operatorname*{arg\,min}_{Q:Q\in\text{St}(k,k)}\langle XQ,LXQ\rangle-\langle XQ,B\rangle=\operatorname*{arg\,max}_{Q:Q\in\text{St}(k,k)}\langle XQ,B\rangle=\operatorname*{arg\,min}_{Q:Q\in\text{St}(k,k)}||XQ-B||_{F}^{2}. \tag{12}\] This problem is the canonical Orthogonal Procrustes problem in \(\mathbb{R}^{k\times k}\) in the context of finding an alignment between the axis-aligned labeled vertices and their neighborhood of unlabeled vertices. We demonstrate the effect of this procedure in Figure 1. In Figure 1(a)-(b), we plot the first pair of eigenvectors corresponding to the smallest two nonzero eigenvalues associated with a barbell graph. In Figure 1(c), we pick a random pair of vertices \(v_{i}\) and \(v_{j}\) with coordinates \(x_{i}\) and \(x_{j}\) from each clique and assign labels \(y_{i}=x_{i}\) and \(y_{j}=x_{j}\). Under this labeling, we say that the embedding is _inconsistent_. In other words, a linear predictor in the space spanned by the embedding would be incapable of recovering a separating hyperplane that accurately classifies the unlabeled samples. We then show that by applying the approximate method based on Procrustes Analysis introduced in Section 3.2, we recover an embedding which is _consistent_ with the labels. Alternatively, the projection and \(Q\)-transform can be interpreted as performing orthogonal multivariate regression in the space spanned by the first \(k\) nontrivial eigenvectors of \(L\): \[Q=\operatorname*{arg\,min}_{Q:Q\in\text{St}(k,k)}\sum_{i\in[m]}||x_{i}Q-y_{i}||_{2}^{2},\] where \(Q\in\mathbb{R}^{k\times k}\) and predictions are given by \(XQ\). Note that this is similar in principle to the Semi-Supervised Laplacian Eigenmaps (SSL) algorithm [7], which solves an ordinary least squares problem using eigenvectors \(X\) of the Laplacian as features: \[Q=\operatorname*{arg\,min}_{Q}\sum_{i\in[m]}||x_{i}Q-y_{i}||_{2}^{2}.\] Figure 1: **Eigenvector method and projection example on the barbell graph. (a): Embedding into \(\mathbb{R}^{k}\) via Laplacian Eigenmaps. (b): Several iterations of gradient-based repulsion are applied to remove vertex overlaps for better visualization. (c): Consider taking an arbitrary vertex from each clique and assigning it a label (green vertices). Spectral embeddings are likely _inconsistent_ with labeled vertices. (d): Procrustes embedding. The orthogonal transform \(Q\) is derived from Prop. 3.1 and applied to \(X\); \(XQ\) resolves the discrepancy between the embeddings and the labeled vertices.** Crucially, the orthogonality constraint on \(Q\) ensures that the solution remains feasible, i.e. that \(XQ\in\text{St}(n,k)\). Furthermore, we show in our experiments that this feasibility significantly improves generalization at very low label rates in comparison to standard Laplacian Eigenmaps SSL. These interpretations serve to motivate our initialization and subsequent refinement. In particular, Zhou and Srebro [55] consider the limiting behavior of Laplacian Eigenmaps SSL and show that it is non-degenerate in the limit of infinite unlabeled data. ### Iterative refinement with SSM and KL To refine solutions produced by the Procrustes algorithm into solutions to eq.
(9), we introduce an efficient iterative method capable of global convergence to high-quality stationary points. Motivated by the similarity between eq. (9) and standard trust-region subproblems, we develop the framework of the _Sequential Subspace Method (SSM)_ on the Stiefel manifold. In the \(k=1\) and \(C=I\) case, SSM has been applied to trust-region subproblems with remarkable empirical results [21] and robust global convergence guarantees [20], even for so-called degenerate problems. Although stationary points can be recovered via generic iterative projected-descent procedures with gradient-based or Newton-based descent directions (e.g. via SQP or trust-region-type algorithms), SSM is a computationally friendly algorithm designed to address scalability with respect to large problems: it generates a sequence of iterates \(X_{t}\) by solving a series of rescaled quadratic programs (of the same form as eq. (9)) in subspaces of dimension much smaller than that of the original problem (where \(d=|\mathcal{V}|\), the number of vertices in the graph). Crucially, when the eigenvectors of \(L\) are included in the subspace, the sequence of iterates generated by SSM exhibits a global convergence property (Theorem 1 below). To further adjust predictions, we introduce a multi-class Kernighan-Lin (KL) refinement algorithm to iteratively adjust the classification to improve a cut-based cost. Critically, this method is efficient (linear-time) and, in contrast to the gradient-based refinement method proposed in PoissonMBO [12], robust to the nonconvexity of the cut objective. ### Sequential Subspace Method (SSM) Algorithm We now detail the SSM iteration, which we use to refine the approximate solutions produced by the method introduced in Section 3.2. Our development of SSM on the Stiefel manifold is inspired by the \(1\)-dimensional algorithm of [21], originally proposed to solve large trust-region subproblems. At step \(t\), we introduce a tiny subspace \(S_{t}\) of dimension \(4k\), derived from the current iterate \(X_{t}\), the gradient of the objective of eq. (9) \(g_{t}=LX_{t}C-BC^{1/2}\), an _SQP_ (i.e.
Newton's method applied to the first-order optimality system at \(X_{t}\)) iterate \(X_{\text{sqp}}\) derived in Prop. 3.2, and the principal eigenvectors of \(L\). The Sequential Quadratic Programming (SQP) framework [37] is applied to compute \(X_{\text{sqp}}\), and \(X_{t+1}\) is then generated by solving eq. (9) in this small subspace. Following the principle of SQP, we introduce the SQP direction \(Z\) according to the linearization of eq. (11), the first-order conditions of eq. (9): \[(LZC-Z\Lambda)-X\Delta=E:=BC^{1/2}-(LXC-X\Lambda),\quad X^{\top}Z=0\] **Proposition 3.2** (SQP iterate of the Lagrangian of eq. (10)).: _Assume \(\Lambda\) is symmetric. Let \(P^{\perp}=I-XX^{\top}\) be the projection onto the orthogonal complement of the column space of \(X\) and \(\Lambda C^{-1}=U\text{diag}([\lambda_{1},\dots,\lambda_{k}])U^{-1}\) be the eigenvector decomposition of \(\Lambda C^{-1}\). The Newton direction \(Z\) of \(X\) via the linearization of the first-order conditions is_ \[Z=OU^{\top}, \tag{13}\] _where each column of \(O\) is given by \(o_{j}=(P^{\perp}LP^{\perp}-\lambda_{j}P^{\perp})^{\dagger}BC^{-1}u_{j}\)._ ``` Input: Partial Laplacian \(L\), affine term \(B\), intermediate feasible iterate \(X_{t}\), scaling term \(C\) Output: Newton update \(Z=OU^{\top}\) 1:functionSQP(\(L,B,X_{t},C\)) 2:\(\Lambda_{t}=X_{t}^{\top}(LX_{t}C-BC^{1/2})\) 3:\(U\text{diag}([\lambda_{1},\dots,\lambda_{k}])U^{-1}=\Lambda C^{-1}=C^{-1/2}\Lambda_{t}C^{-1/2}\) 4:init \(O\), \(P^{\perp}=I-X_{t}X_{t}^{\top}\) 5:for\(j\in[k]\)do 6:\(o_{j}=(P^{\perp}LP^{\perp}-\lambda_{j}P^{\perp})^{\dagger}BC^{-1}u_{j}\) 7:endfor 8:return\(OU^{\top}\) 9:endfunction ``` **Algorithm 1** SQP Update Alg. 1 presents the detailed steps involved in the computation of the Newton directions, i.e. Prop. 3.2. In Section 3.5, we discuss its computational cost. Beyond the SQP update itself, we introduce the full Sequential Subspace Method in Alg. 2: instead of solving eq. (8) directly as previously mentioned, we solve a sequence of quadratic programs in subspaces of much smaller dimension relative to the size of the graph (the dimensions of \(L\)). SSM generically involves repeating the following pair of steps: 1. Compute the SQP direction \(Z=\text{SQP}(L,B,X,C)\) as defined in Prop. 3.2 (eq. (13)) and Alg. 1. Let \(V\) be the matrix whose orthonormal columns span \(S\) (Alg. 2, lines 6 and 7), where \[S=\text{span}(X_{t},Z_{t},u,g_{t}).\] 2. SSM generates an approximation of \((X,\Lambda)\) and an approximation of the smallest eigenpairs \((\sigma,u)\) of \(L\) in the subspace \(S\): \[[X,\Lambda,u,\sigma]=\text{SSM}(L,B,S).\] That is, we consider the approximation \(X=V\widetilde{X}\) for some \(\widetilde{X}\) and compute \[\min_{\widetilde{X}\in\text{St}(\widetilde{n},k)}F_{S}:=\min_{\widetilde{X}}F(\widetilde{X};V^{\top}LV,V^{\top}B). \tag{14}\] Note that eq. (14) is solved using the Projected Gradient Method, where the projection is given by eq. (2): \([X]_{+}=UV^{\top}\). We update \(\Lambda\) according to the least-squares estimate derived from the first-order condition in eq. (11), \(\Lambda=X^{\top}(LXC-BC^{1/2})\).
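To make the pair of steps concrete, here is a schematic NumPy sketch of one SSM iteration (ours, not the authors' implementation; the inner projected-gradient solver, its step size, and the iteration count are arbitrary stand-ins, and the SQP direction \(Z\) is assumed to be supplied by Alg. 1):

```python
import numpy as np

def ssm_step(L, B, C, X, Z, u):
    """One SSM iteration: assemble S_t = span(X, Z, u, g_t), compress eq. (9)
    onto S_t (eq. (14)), solve by projected gradient, and lift back.
    X, Z, u are n x k matrices; u holds eigenvector estimates of L."""
    w, Vc = np.linalg.eigh(C)                      # C^{1/2} of the PSD scaling C
    Ch = Vc @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Vc.T
    g = L @ X @ C - B @ Ch                         # gradient term g_t
    V, _ = np.linalg.qr(np.hstack([X, Z, u, g]))   # orthonormal basis of S_t (dim 4k)
    Lt, Bt = V.T @ (L @ V), V.T @ B                # compressed problem data
    Xt, lr = V.T @ X, 1e-2                         # warm start in subspace coordinates
    for _ in range(200):                           # projected gradient on St(4k, k)
        grad = 2.0 * Lt @ Xt @ C - Bt @ Ch
        U_, _, Vt_ = np.linalg.svd(Xt - lr * grad, full_matrices=False)
        Xt = U_ @ Vt_                              # Stiefel projection [.]_+, eq. (2)
    return V @ Xt                                  # lifted iterate X_{t+1}
```

Note that the inner problem is only \(4k\)-dimensional, so this step is cheap relative to forming \(Z\).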
We highlight that the sequence of iterates generated by SSM exhibits the following global convergence property, which we discuss further in the appendix. **Theorem 1** (Global convergence of SSM).: _A limit \(X_{*}\) of \(\{X_{1},X_{2},\dots,X_{t},\dots\}\) generated by SSM is a local minimizer of eq. (8)._ ### Complexity of SSM In this section, we discuss the computational cost of our method, which is dominated by the SQP routine of Alg. 1. We claim that the per-iteration complexity of our algorithm is \(T_{\text{matrix}}\), where \(T_{\text{matrix}}\) is the complexity of each call to a sparse matrix (i.e. Laplacian, more generally an _M-matrix_) solver. In particular, the QR-decomposition of \(\text{col}(S)\) takes time linear in \(n\). Likewise, fast, nearly linear-time solvers exist for solving Laplacian and Laplacian-like systems that are robust to ill-conditioning [43]. We adopt multigrid-preconditioned conjugate gradient due to its empirical performance. We further note that the SSM procedure itself exhibits quadratic rates of convergence for nondegenerate problems and global convergence with _at least_ linear rates, even when the problem exhibits certain degenerate characteristics [20]. #### 3.5.1 Computation of the descent direction \(Z\) In Sec. 3.4, we express the SQP direction \(Z\) as the solution to the system characterized by the linearization of the first-order optimality conditions. Namely, within each iteration of our procedure, we compute the Lagrangian multipliers as well as the SQP update for \(X\) as defined in eq. (13). As in Newton's method for unconstrained problems, SQP-based methods necessitate computation of inverse-vector products involving symmetric PSD matrices. We assume that by exploiting the sparsity of \(L\), vector-vector and matrix-matrix multiplication can be done in linear time. In Alg. 1, we present the SQP routine. The computation in line 3 involves an eigenvalue decomposition of a small \(k\times k\) matrix. Thus, the primary overhead of our method lies in the computation of each column of \(O\); \(o_{j},j=1,\ldots,k\), which necessitates computation of \(k\) Laplacian-like pseudoinverse-vector products. ``` Input: Weight matrix \(A\), affine term \(B\), scaling term \(C\), eigenvector estimate \(u\) Output: Embedding coordinates \(X\) 1:functionSSM(\(A,B,C,u\)) 2:\(L\gets D-A\)\(\triangleright\) Compute the graph Laplacian 3: Initialize \(X\) according to Sec. 3.2. 4:while not converged do 5:\(Z_{t}\leftarrow\text{SQP}(L,B,X_{t},C)\)\(\triangleright\) Eq. (13) & Alg. 1 6:\(\mathcal{S}\leftarrow\text{span}(X_{t},Z_{t},u,g_{t})\) 7:\(V\gets QR(\text{col}(S))\) 8:\(\widetilde{L}\gets V^{\top}LV\), \(\widetilde{B}\gets V^{\top}B\) 9:\(\widetilde{X}\leftarrow\min_{X,X^{\top}X=I}F(X;\widetilde{L},\widetilde{B})\)\(\triangleright\) Solve Eq. (9) in \(S\) 10:\(X_{t+1}\gets V\widetilde{X}\)\(\triangleright\) Lifted coordinates 11:\(t\gets t+1\) 12:endwhile 13:return\(X_{t}\) 14:endfunction ``` **Algorithm 2** Sequential Subspace Minimization ### Cut-based refinement In this section, we provide a detailed overview of the Kernighan-Lin (KL) algorithm and our multi-class extension. The Kernighan-Lin algorithm [27] iteratively improves a given disjoint bipartition of \(\mathcal{V}\): \((\mathcal{V}_{1},\mathcal{V}_{2})\) such that \(\mathcal{V}_{1}\cup\mathcal{V}_{2}=\mathcal{V}\), by finding subsets of each partition \(A\subset\mathcal{V}_{1}\), \(B\subset\mathcal{V}_{2}\) and then moving the nodes in \(A\) and \(B\) to the opposite block.
More concretely, the Kernighan-Lin algorithm repeatedly finds candidate sets \(A\), \(B\) to be exchanged until it reaches a local optimum with respect to the cut objective. Notably, the algorithm has the desirable tendency to escape poor local minima to a certain extent due to the way in which the sets \(A\) and \(B\) are created. This is one of the key features of the algorithm, and is a critical advantage over gradient-based methods for partitioning refinement, such as the MBO method presented in [12]. The gain of a vertex \(v_{i}\) is defined as \(g(v_{i})=\sum_{j|\ell(v_{i})\neq\ell(v_{j})}W_{ij}-\sum_{j|\ell(v_{i})=\ell(v_{j})}W_{ij}\), i.e. the reduction in the cut cost when the vertex \(v_{i}\) is moved from partition \(\mathcal{V}_{1}\) to partition \(\mathcal{V}_{2}\). Thus, when \(g(v)>0\) we can decrease the cut by \(g(v)\) by moving \(v\) to the opposite block. Let \(g(v,w)\) denote the gain of exchanging \(v\) and \(w\) between \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\). If \(v\) and \(w\) are not adjacent, then the gain is \(g(v,w)=g(v)+g(w)\); if \(v\) and \(w\) are adjacent, then \(g(v,w)=g(v)+g(w)-2W_{vw}\). The KL algorithm characterizes each vertex in \(G\) as having one of two states: marked or unmarked. At the start of each pass of the algorithm, every node is unmarked. A KL pass proceeds by iteratively finding an unmarked pair \(v\in\mathcal{V}_{1}\) and \(w\in\mathcal{V}_{2}\) for which \(g(v,w)\) is maximum (note that \(g(v,w)\) is not necessarily positive), marking \(v\) and \(w\), and updating the gain values of each of the remaining unmarked nodes (i.e. the neighbors of \(v\) and \(w\)) assuming an exchange between \(v\) and \(w\). This procedure repeats \(p=\min(|\mathcal{V}_{1}|,|\mathcal{V}_{2}|)\) times. After \(p\) iterations, we have an ordered list \(l\) of vertex pairs \((v_{i},w_{i})\), \(i=1,\ldots,p\). The swap-sets \(A\) and \(B\) are derived by finding the index \(k\in\{0,\ldots,p\}\) for which the prefix sum \(P=\sum_{i=1}^{k}g(v_{i},w_{i})\) is maximum (taking the smallest such \(k\) in case of ties). Then, \(A:=\bigcup_{i=1}^{k}\{v_{i}\}\) and \(B:=\bigcup_{i=1}^{k}\{w_{i}\}\). A nonzero \(k\) implies a reduction of the cut cost if \(A\) and \(B\) are exchanged. In this case, the exchange is performed and a new pass is instantiated. Otherwise, the KL iterations conclude. Note that KL is typically performed over bi-partitions. We extend this framework to \(k\)-partitions in Algorithm 3 by considering a randomly ordered set of \(\binom{k}{2}\) pairs of classes (as defined by the predictions made on the vertex set) and performing KL on the subgraph restricted to these vertices. This procedure continues iteratively until all \(\binom{k}{2}\) pairs have been exhausted. Then, if convergence or a predetermined number of iterations has not been reached, a new random sequence is generated and the procedure continues.
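Algorithm 3 below gives the full multi-class procedure; for concreteness, here is a small sketch (ours) of the gain bookkeeping for a single pair of blocks, with dense weights \(W\) and a \(\pm 1\) block indicator:

```python
import numpy as np

def vertex_gains(W, side):
    """KL gains for a bipartition. side[i] in {+1, -1} gives the block of
    vertex i; g[i] is the cut reduction from moving i to the opposite block,
    i.e. external minus internal edge weight."""
    same = side[:, None] == side[None, :]
    internal = (W * same).sum(axis=1) - np.diag(W)   # exclude self-loops
    external = (W * ~same).sum(axis=1)
    return external - internal

def pair_gain(W, g, v, w):
    # gain of exchanging v and w; the -2 W[v, w] term corrects for adjacency
    return g[v] + g[w] - 2.0 * W[v, w]
```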
``` 1:Input: KNN weights \(W\) 2:Output: Predictions \(X\) 3: 4:Compute \(g(v)\) for all \(v\in\mathcal{V}\) 5:while not converged do 6:for each of the \(\binom{k}{2}\) pairs of partitions (classes) \((\mathcal{V}_{1},\mathcal{V}_{2})\)do 7: ordered list \(l\leftarrow\emptyset\) 8: unmark all vertices \(v\in V\) 9:for\(i=1\) to \(p=\min(|\mathcal{V}_{1}|,|\mathcal{V}_{2}|)\)do 10:\((v_{1},v_{2})\leftarrow\arg\max_{v_{1},v_{2}}g(v_{1},v_{2})\) 11: update \(g\)-values for all \(v\in N(v_{1})\cup N(v_{2})\) 12: add \((v_{1},v_{2})\) to \(l\) and mark \(v_{1},v_{2}\) 13:endfor 14:\(k^{*}\leftarrow\arg\max_{k}\sum_{i=1}^{k}g(v_{i},w_{i})\) 15: Update \((\mathcal{V}_{1},\mathcal{V}_{2})\): swap \((v_{i},w_{i})\in l\), \(i=1,\ldots,k^{*}\) 16:endfor 17:endwhile ``` **Algorithm 3** Kernighan-Lin refinement ## 4 Graph-Based Active Learning In this section, we introduce an active-learning scheme motivated by the criticality of label selection at low label rates and the benefits of diversity sampling. In the low label-rate regime, it is well-known that active learning strategies which emphasize _exploration_ of the sample-space, i.e. _diversity_ of the labeled samples, outperform those that rely on _exploitation_ of a classifier's decision boundary, e.g. notions of margin [34]. Therefore, we propose a computationally efficient technique inspired by algebraic methods for selecting landmarks in graphs--i.e. a method that aims to select well-connected vertices diversely over the graph (vertices with large degree that are maximally separated) according to the spectral properties of the Laplacian. ### Spectral score for diversity sampling on graphs We propose to select vertices from the set of unlabeled vertices according to the following measure: \[\operatorname*{arg\,max}_{v_{i}}\{s(v_{i}):=\tilde{d}_{i}u_{i}^{2}\} \tag{15}\] where \(\tilde{d}_{i}\) denotes the degree of vertex \(i\) defined on the sub-graph associated with the set of unlabeled vertices \(\mathcal{U}\), and \(u_{i}\) corresponds to the \(i\)-th entry of \(u\), the solution to the boundary-constrained eigenvalue problem: \[\mathcal{L}u_{i} =\lambda u_{i},\quad\text{if }m+1\leq i\leq M\] \[u_{i} =0,\qquad\text{if }1\leq i\leq m \tag{16}\] Note that \(u\) restricted to its support is nothing but the eigenvector corresponding to the smallest eigenvalue of \(L_{\mathcal{U}}\). Naturally, \(u\) encodes various notions of centrality. Notably, Cheng et al. [14] demonstrate an intimate connection between the solution \(u\) in eq. (16) for a normalized random walk Laplacian and the absorption time of a random walk / diffusion distance of vertex \(i\) with respect to the boundary vertices \(l\). More concretely, they prove that for solutions to boundary-constrained eigenvalue problems defined for certain Laplacians (e.g. absorbing random walk Laplacians), the diffusion distance from vertex \(i\) to the boundary, \(d_{l}(i)\), satisfies the following inequality: \[d_{l}(i)\log\left(\frac{1}{|1-\lambda_{1}|}\right)\geq\log\left(\frac{2|u(i)|}{||u||_{L^{\infty}}}\right)\] In other words, \(d_{l}\) is _highly correlated_ with \(|u|\). While Cheng et al. [14] derive this relationship explicitly for \(|u_{i}|\), we empirically show that selecting vertices for active learning in this way performs poorly relative to state-of-the-art methods. Inspired by recent sampling strategies for graph signal reconstruction [24] in the presence of noise, we demonstrate that _reweighting_ \(u_{i}^{2}\) by \(\tilde{d}_{i}\) is an effective and principled heuristic.
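For illustration, the acquisition score of eqs. (15)-(16) can be sketched as follows (ours; it recomputes the bottom eigenvector of the grounded Laplacian with SciPy instead of reusing SSM's estimates):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def acquisition_scores(W, labeled_idx):
    """Score s(v_i) = d~_i * u_i^2 (eq. (15)) on unlabeled vertices, where u
    solves the boundary-constrained eigenproblem (16): restricted to the
    unlabeled set, u is the bottom eigenvector of the grounded Laplacian."""
    M = W.shape[0]
    unlabeled = np.setdiff1d(np.arange(M), labeled_idx)
    deg_U = W[np.ix_(unlabeled, unlabeled)].sum(axis=1)   # subgraph degrees d~
    L = np.diag(W.sum(axis=1)) - W
    L_U = L[np.ix_(unlabeled, unlabeled)]                 # grounded Laplacian
    _, u = eigsh(L_U, k=1, which='SM')                    # smallest eigenpair
    return unlabeled, deg_U * u[:, 0] ** 2                # query the argmax
```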
Additional details are provided in the supplemental material. Notably, we highlight that this score _comes at no extra computational cost_ due to certain features of SSM--in particular SSM's capability of producing estimates of the eigenvectors of \(L\) in addition to solutions of eq. (9). Intuitively, the proposed score naturally encodes the eigenvector centrality and degree of a vertex as well as its geodesic distance to labeled vertices. In practice, we incorporate eigenvectors of higher order eigenvalues as well as interpolate towards a margin-based score as the label-rate increases, setting the score to: \[s(v_{i})=||\tilde{d}_{i}\odot U_{i}^{2}||_{2},\] where \(U\) is now an \(n\times\ell\) matrix with eigenvectors as columns and \(U_{i}^{2}\) denotes the entrywise square of the \(i\)-th row of \(U\). The choice of \(\ell\) is left as a hyperparameter. In our experiments, we use \(\ell=3\). _Remark 4.1_.: If the labeled set \(\mathcal{U}^{c}\) corresponds to the empty set, it is apparent that the smallest eigenvalue of \(L_{\mathcal{U}}=\mathcal{L}\) is \(0\), and the corresponding eigenvector is \(u=\mathbf{1}\). Hence, the acquisition score of each vertex is nothing but a constant times its degree. Figure 2: **Visualization of the lower-bound estimate on a ring of Gaussians** Labeled points are annotated as red circles. Points to be labeled are marked as red stars. Brighter regions of the heatmap indicate vertices with higher score. While the work of Cheng et al. [14] provides concrete motivation for our method, we derive the following property that ensures samples are diverse, i.e. far from the labeled nodes. **Proposition 4.1**.: _Let \(N_{i}\) denote the neighborhood of vertex \(i\). Consider the score at vertex \(v_{a}\), \(s(v_{a})=|u_{a}|\). For any connected pair of vertices \(v_{a}\) and \(v_{b}\) such that \(v_{a}\not\in N_{s}\), \(v_{b}\in N_{s}\) for any \(s\in\mathcal{U}\), there is a path from \(v_{a}\) to \(v_{b}\) such that the sequence of entries \((s(v_{a}),\ldots,s(v_{c}),\ldots,s(v_{b}))\) is nonincreasing._ We provide an intuitive visualization of this score in Figure 2. The dataset is comprised of eight Gaussian clusters, each of equivalent size (\(300\) samples), whose centers (i.e., means) lie evenly spaced apart on the unit circle. Each cluster is created by randomly sampling \(300\) points from a Gaussian with mean \(\mu_{i}=(\cos(\pi i/4),\sin(\pi i/4))^{\top}\in\mathbb{R}^{2}\) and standard deviation \(\sigma_{i}=\sigma=0.17\). The class structure is then assigned in an alternating fashion. For this example, efficient exploration via active learning is critical, particularly at low label rates. As we show in Figure 2, our score facilitates effective exploration of the geometric clustering structure--i.e. by sampling diversely from each cluster in the ring. ### Complexity of active learning We now comment on the time and space complexity of our algorithm. In general, one would assume that the most expensive step is computing the principal eigenpairs of \(L_{\mathcal{U}}\). However, one key advantage of SSM is that it may provide accurate estimates of the principal eigenvectors of \(L\), coinciding with the iterates \(X_{t}\). In particular, \(u=V\tilde{u}\) is an estimate for the eigenvectors of \(L\), if \(\tilde{u}\) consists of the eigenvectors of \(\tilde{L}\) corresponding to the smallest two eigenvalues.
Thus, when iteratively deriving vertices to label via active learning and subsequently solving the graph-based SSL classification problem, we may effectively re-use the previous iteration's estimate of \(u\) to do active learning in linear time, comparable to simple, decision-boundary-based margin methods and far more efficient than uncertainty-based techniques that necessitate full or partial eigenvector decompositions of dense covariance matrices. ## 5 Experiments In this section, we present a numerical study of our algorithm applied to image classification in three domains at low label rates. We additionally explore medium and large label rates in comparison to recent state-of-the-art methods in the supplemental material. First, we provide visualizations of predictions made by our proposed models in conjunction with Laplace learning. Note that a significant number of predictions made by Laplace Learning are concentrated around the origin. In Figure 3, we present 2-d visualizations of the embeddings of our SSM and Procrustes initialization method in conjunction with those produced by Laplace Learning. Each plot is constructed by taking the embedding (used to make predictions) implied by Laplace Learning, our approximate method, or SSM and the value associated with class "2" on one axis and "7" on the other axis. Ideally, there should be a clear and distinct cluster structure associated with classes 2 and 7 around the supervised points and the rest of the digits. This cluster structure should also appear in the barcode plots as a block diagonally dominant barcode. The key message is that SSM exhibits a strong capability to discriminate classes (i.e. a block diagonally dominant barcode) while respecting the geometry of the unlabeled examples. In contrast, embeddings produced by Laplace Learning are not discriminatory (i.e. the barcode is uniform) and the embeddings are degenerate--concentrated at a single point. ### Experimental setup We evaluated our method on three datasets: MNIST [30], Fashion-MNIST [47] and CIFAR-10 [29]. As in Calder et al. [12], we used pretrained autoencoders as feature extractors. For MNIST and Fashion-MNIST, we used variational autoencoders with 3 fully connected layers of sizes (784,400,20) and (784,400,30), respectively, followed by a symmetrically defined decoder. The autoencoder was trained for 100 epochs on each dataset. The autoencoder architecture, loss, and training are similar to Kingma and Welling [28]. For each dataset, we constructed a graph over the latent feature space. We used all available data to construct the graph, giving \(n=70,000\) nodes for MNIST and Fashion-MNIST, and \(n=60,000\) nodes for CIFAR-10. The graph was constructed as a \(K\)-nearest neighbor graph with Gaussian weights given by \[w_{ij}=\exp\left(-4||x_{i}-x_{j}||^{2}/d_{K}(x_{i})^{2}\right),\] where \(x_{i}\) represents the latent variables for image \(i\), and \(d_{K}(x_{i})\) is the distance in the latent space between \(x_{i}\) and its \(K^{\rm th}\) nearest neighbor. We used \(K=10\) in all experiments. The weight matrix was then symmetrized by replacing \(W\) with \(\frac{1}{2}(W+W^{\top})\). In Table 1, we compare our method to Laplace learning [56] and Poisson learning [12] as well as our refinement based on KL-partitioning to the PoissonMBO refinement.
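This construction can be sketched as follows (ours, using scikit-learn for the neighbor search and a dense weight matrix for clarity; at the stated graph sizes a sparse matrix is advisable):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_gaussian_graph(X, K=10):
    """K-NN graph with w_ij = exp(-4 ||x_i - x_j||^2 / d_K(x_i)^2),
    symmetrized as (W + W^T)/2. Rows of X are latent features."""
    n = X.shape[0]
    nbrs = NearestNeighbors(n_neighbors=K + 1).fit(X)
    dist, idx = nbrs.kneighbors(X)       # column 0 is each point itself
    dK2 = dist[:, -1] ** 2               # squared distance to the K-th neighbor
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), K)
    W[rows, idx[:, 1:].ravel()] = np.exp(
        -4.0 * dist[:, 1:].ravel() ** 2 / np.repeat(dK2, K))
    return 0.5 * (W + W.T)               # symmetrized weight matrix
```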
In the supplemental material, we compare our SSM approach and alignment-based approximation (Procrustes-SSL) against Laplace learning [56], Poisson learning [12], lazy random walks [52; 51], weighted nonlocal Laplacian (WNLL) [39], \(p\)-Laplace learning [16], and Laplacian Eigenmaps SSL (LE-SSL) [7]. Our SSM approach outperforms all methods in almost all cases. ### Numerical results Table 1 above and Tables 2, 4, 5, and 6 in the supplementary show the average accuracy and standard deviation over all \(100\) trials for various label rates. Our SSM and SSM-KL methods consistently outperform the state of the art. In particular, our method strictly improves over relevant methods on all datasets at a variety of label rates ranging from low (1 label) to high (4000). In the supplementary material, we expand on this evaluation--showing that the trend persists with medium label rates (100-1000 labels). On all datasets, the proposed method exceeds the performance of related methods, particularly as the difficulty of the classification problem increases (i.e. CIFAR-10). In the supplementary material, we see that while Laplacian Eigenmaps SSL achieves better performance at higher label rates relative to Procrustes-SSL, at lower label rates Procrustes Analysis is significantly more accurate. We highlight the discrepancy between the approximate method (Procrustes-SSL) and our SSM-based refinement. This indicates the importance of SSM for recovering good critical points of eq. (9). Figure 3: Barcode plots of MNIST predictors (left) and embeddings of samples for digits ‘2’ and ‘7’ (right). Learning is performed with 1 label per class. In the barcode plots, the rows are the samples, ordered by their class. Ordering of the columns was obtained by iteratively sorting the columns of the embedding matrices \(X\). **(a,d)** Laplace learning exhibits degeneracy in the limit of unlabeled data. **(b,e)** Embeddings derived using Procrustes Analysis (Section 3.2) demonstrate no degeneracy but mix samples from different classes together. **(c,f)** SSM exhibits good classification performance (a block diagonally dominant barcode) while respecting the geometry of unlabeled examples. We additionally evaluate the scaling behavior of our method at intermediate and high label rates. In Table 6 in the supplementary, we compare our method to Laplace learning and Poisson learning on MNIST and Fashion-MNIST with 500, 1000, 2000, and 4000 labels per class. We see significant degradation in the performance of Poisson Learning; however, our method maintains high-quality predictions in conjunction with Laplace learning. These results imply that while Laplace learning suffers degeneracy at low label rates and Poisson Learning seemingly degrades at large label rates, our framework performs reliably in both regimes--covering the spectrum of low and high supervised sampling rates. Furthermore, we highlight the practical efficacy of SSM in the appendix by comparing to existing standard open source implementations [44] of benchmark optimization algorithms [2]. ### Spectral algorithm for active learning We numerically evaluate our selection scheme for active learning on FashionMNIST and CIFAR-10 in Figure 4. We compare to minimum margin-based uncertainty sampling [38], VOpt [25], and Model Change (MC) [33]. Note that uncertainty sampling selects query points according to the following notion of margin: \(\text{margin}(i)=(X_{i})_{j^{*}}-\max_{k\neq j^{*}}(X_{i})_{k}\), where \(j^{*}=\operatorname*{arg\,max}_{j}(X_{i})_{j}\).
One can interpret a smaller margin at a node as more uncertainty in the classification. We additionally note that MC and VOpt necessitate eigendecompositions of certain covariance matrices. Our score is implemented as \[s^{\prime}(v_{i})=s(v_{i})-\lambda_{t}\cdot\text{margin}(i),\] where \(\lambda\) increases with \(t\) via \(\lambda_{t+1}=\left(1+\epsilon^{1/2k}\right)\lambda_{t}\) for some small value of \(\epsilon=10^{-4}\). We show that when coupled with the proposed SSM algorithm in an iterative loop, our active learning scheme outperforms related methods at low label rates across all benchmarks. We also emphasize that due to certain features of SSM, the computation of \(u\) is obtained for free after the first iteration. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \# FashionMNIST Labels per class & **1** & **2** & **3** & **4** & **5** & **4000** \\ \hline Laplace/LP [56] & 18.4 (7.3) & 32.5 (8.2) & 44.0 (8.6) & 52.2 (6.2) & 57.9 (6.7) & 85.8 (0.0) \\ Poisson [12] & 60.8 (4.6) & 66.1 (3.9) & 69.6 (2.6) & 71.2 (2.2) & 72.4 (2.3) & 81.1 (0.4) \\ SSM & 61.2 (5.3) & 66.4 (4.1) & 70.3 (2.3) & 71.6 (2.0) & 73.2 (2.1) & 86.1 (0.1) \\ Poisson-MBO [12] & 62.0 (5.7) & 67.2 (4.8) & 70.4 (2.9) & 72.1 (2.5) & 73.1 (2.7) & 86.8 (0.2) \\ SSM-KL & **65.8 (1.1)** & **69.2 (1.2)** & **71.6 (1.2)** & **73.0 (0.4)** & **73.4 (0.3)** & **93.5 (0.1)** \\ \hline \hline \# CIFAR-10 & & & & & & \\ \hline Laplace/LP [56] & 10.4 (1.3) & 11.0 (2.1) & 11.6 (2.7) & 12.9 (3.9) & 14.1 (5.0) & 80.9 (0.0) \\ Poisson [12] & 40.7 (5.5) & 46.5 (5.1) & 49.9 (3.4) & 52.3 (3.1) & 53.8 (2.6) & 70.3 (0.9) \\ SSM & 40.9 (6.1) & 47.3 (5.9) & 50.2 (4.3) & 52.1 (4.3) & 54.7 (3.4) & 80.9 (0.1) \\ Poisson-MBO [12] & 41.8 (6.5) & 50.2 (6.0) & 53.5 (4.4) & 56.5 (3.5) & 57.9 (3.2) & 80.1 (0.3) \\ SSM-KL & **43.7 (1.4)** & **51.4 (1.3)** & **54.1 (2.1)** & **57.1 (1.3)** & **58.8 (1.9)** & **83.9 (0.0)** \\ \hline \hline \end{tabular} \end{table} Table 1: Average accuracy over 100 trials with standard deviation in brackets. Best is bolded. Figure 4: **Performance of SSM with active learning on F-MNIST (a) and CIFAR-10 (b) Comparison between active learning methods using SSM-KL. The x-axis denotes the number of vertices queried. The y-axis denotes the accuracy over 10 trials (per initial labeled set). The shaded region denotes \(0.5\sigma\).** ## 6 Conclusion We have proposed a novel formulation of semi-supervised and active graph-based learning. Motivated by the robustness of semi-supervised Laplacian eigenmaps and spectral cuts in low label rate regimes, we introduced a formulation of Laplacian Eigenmaps with label constraints as a nonconvex Quadratically Constrained Quadratic Program. We have presented an approximate method as well as a generalization of a Sequential Subspace Method on the Stiefel Manifold. In a comprehensive numerical study on three image datasets, we have demonstrated that our approach consistently outperforms relevant methods with respect to semi-supervised accuracy in low, medium, and high label rate settings. We additionally demonstrate that selection of labeled vertices at low label rates is critical. An active learning scheme is naturally derived from our formulation and we demonstrate that it significantly improves performance compared to competing methods. Future work includes a more rigorous analysis of the active learning score and of the problem in eq.
(9) and our algorithmic generalization of SSM--for example, conditions on \(L\) and \(\mathcal{U}\) that guarantee convergence to globally optimal solutions, with convergence rates as derived in Hager [21], Hager and Park [20], and Absil et al. [2]. ## Acknowledgments and Disclosure of Funding This work is partially supported by NSF-CCF-2217033.
2303.17994
Helson-Lowdenslager and de Branges type theorems in the setting of continuous rotationally symmetric norms
A Helson-Lowdenslager type result has been proved by Chen in the context of Lebesgue spaces of the unit circle equipped with a continuous rotationally symmetric norm by studying the simply invariant subspaces of the operator of multiplication by the coordinate function $z$. In this paper, we generalize Chen's result by obtaining a description of simply invariant subspaces for multiplication by $z^n$. A de Branges type result is also proved for Hardy spaces equipped with continuous rotationally symmetric norms.
Apoorva Singh, Niteesh Sahni
2023-03-31T12:11:21Z
http://arxiv.org/abs/2303.17994v1
###### Abstract A Helson-Lowdenslager type result has been proved by Chen [3] in the context of Lebesgue spaces of the unit circle equipped with a continuous rotationally symmetric norm by studying the simply invariant subspaces of the operator of multiplication by the coordinate function \(z\). In this paper, we generalize Chen's result by obtaining a description of simply invariant subspaces for multiplication by \(z^{n}\). A de Branges type result is also proved for Hardy spaces equipped with continuous rotationally symmetric norms. **Keywords:** rotationally symmetric norm, simply invariant subspace, Lebesgue space, Hardy space, sub-Hilbert spaces. ## 1 Introduction Two problems in the theory of analytic functions on the unit disk that have been a center of rigorous research are Beurling's theorem [1], which characterizes subspaces of the Hardy space \(H^{2}\) that are invariant under \(T_{z}\), the operator of multiplication by the coordinate function \(z\), and de Branges' theorem [5], which characterizes contractively contained sub-Hilbert spaces in \(H^{2}\) that are also invariant under \(T_{z}\). Beurling's theorem was later generalized by Helson and Lowdenslager [9] in the form of obtaining the simply invariant subspaces of \(T_{z}\) on the Lebesgue space \(L^{2}\) of the unit circle. The doubly invariant subspaces of \(L^{2}\) under the operator \(T_{z}\) were obtained by Wiener [8]. The simplicity of the arguments allowed for the generalizations to all \(L^{p}\) spaces (\(0<p\leq\infty\)) [25, 26]. Ever since, the Helson-Lowdenslager theorem has been proved in many different settings. For example, on the torus, subspaces of \(L^{2}\) invariant under doubly commuting isometries have been obtained in [6], and further extended to all \(L^{p}\) spaces in [18]. The vector-valued extensions of the Beurling and the Helson-Lowdenslager theorems obtained by Lax [13] and Halmos [7] were major breakthroughs which sparked off a significant body of research in this direction. Motivated by the results of Lax and Halmos, invariance under the operator \(T_{z^{n}}\) on \(H^{p}\) and \(L^{p}\) spaces has been studied in [20, 12]. In fact, [20] studies invariance under \(T_{B}\), the operator of multiplication by a finite Blaschke factor \(B(z)\), and also the common invariant subspaces of \(T_{B^{2}}\) and \(T_{B^{3}}\) on all \(H^{p}\) spaces. Coming to the study of sub-Hilbert spaces of Lebesgue spaces, it was established in [16] that a general de Branges type result is not possible in \(L^{2}\). However, in [16] a characterization is obtained under some conditions on the norm of the sub-Hilbert space, and the authors went on to prove a characterization for \(L^{p}\) (\(p>2\)). This result was extended to other \(L^{p}\) spaces (\(1\leq p<2\)) in [19]. Redett's result was further generalized to all \(L^{p}\) spaces in the context of the operator \(T_{z^{n}}\) in [12]. Recently in [2], Chen introduced a class of norms \(\alpha\) over all measurable functions on the unit circle called continuous rotationally symmetric norms, and defined the general Lebesgue and Hardy spaces associated with \(\alpha\), denoted by \(L^{\alpha}\) and \(H^{\alpha}\), respectively. The space \(L^{\alpha}\) is the \(\alpha\)-closure of \(L^{\infty}\) and \(H^{\alpha}\) is the \(\alpha\)-closure of \(H^{\infty}\).
The classical \(p\)-norms \(\|.\|_{p}\) (\(1\leq p<\infty\)) are particular examples of continuous rotationally symmetric norms. Interestingly, [3] introduced a more general class of norms on measurable functions on the unit circle, called \(\|.\|_{1}\)-dominating normalized gauge norms, and proved the Helson-Lowdenslager type result for the operator \(T_{z}\) on Lebesgue spaces equipped with such norms. In fact, doubly invariant subspaces have also been studied in this general setting. For \(H^{\alpha}\) under a \(\|.\|_{1}\)-dominating gauge norm, invariance under multiplication by a finite Blaschke factor has been studied in [21]. Further, sharp descriptions for invariant as well as common invariant subspaces under rotationally symmetric norms have also been presented in [21]. The aim of the present paper is two-fold. First, the Beurling type results of [21] are extended to \(L^{\alpha}\) - the Lebesgue space equipped with a continuous rotationally symmetric norm. In particular, we describe the closed subspaces of \(L^{\alpha}\) which are simply invariant under the operator \(T_{z^{n}}\). We point out that the invariance under \(T_{z^{n}}\) on \(L^{p}\) was first studied in [12], wherein the invariant subspaces are first characterized for \(L^{2}\) and then density arguments are given to lift the result to all \(L^{p}\) spaces. The Wold decomposition plays a crucial role in the characterization for \(L^{2}\), in addition to the orthogonality properties of \(n\)-unimodular functions. These fundamental properties of \(n\)-unimodular functions will be used in the proof for the general \(L^{\alpha}\) case as well. The arguments presented in [3] for establishing a form for the \(T_{z}\) simply invariant subspaces in \(L^{\alpha}\) make use of the fact that an \(L^{\infty}\) function \(f\), satisfying \(1/f\in L^{\alpha}\), can be factored as the product of an \(L^{\infty}\) function \(g\) and an \(H^{\infty}\) function \(h\) such that \(1/h\in H^{\alpha}\). Our proof is elementary in the sense that it does not rely on such a factorization, and the arguments work for the operator \(T_{z^{n}}\). We define an \(n\)-unimodular matrix on a tuple of \(L^{\infty}\) functions and make use of the decomposition of any \(L^{\alpha}\) function into a direct sum of \(L^{\alpha}(z^{n})\) functions, defined as the \(\alpha\)-closure of \(L^{\infty}(z^{n})\), to prove our main theorem in Section 3. This result generalizes the Helson-Lowdenslager type theorems of [12], [3] and [21]. Secondly, in Section 4, we examine the de Branges case in \(H^{\alpha}\), that is, Hilbert spaces algebraically contained in \(H^{\alpha}\) on which \(T_{z}\) acts as an isometry. Our result generalizes the de Branges type result of [22] in \(H^{p}\) spaces. ## 2 Notations and Preliminaries Suppose \(\mathbb{D}\) is the open unit disk and \(\mathbb{T}\) is the unit circle in the complex plane \(\mathbb{C}\). Let \(m\) be the normalized Lebesgue measure on \(\mathbb{T}\). The Lebesgue space \(L^{\infty}\) consists of the essentially bounded complex valued measurable functions on \(\mathbb{T}\), and it is a Banach space under the essential supremum norm. We define a rotationally symmetric norm on \(L^{\infty}\) as: **Definition 2.1**.: ([2]) Let \(\alpha\) be a norm on \(L^{\infty}\). \(\alpha\) is a rotationally symmetric norm if: 1. \(\alpha(1)=1\), 2. \(\alpha(|f|)=\alpha(f)\) for every \(f\in L^{\infty}\), and 3. \(\alpha(f_{w})=\alpha(f)\) for every \(w\in\mathbb{T}\) and \(f\in L^{\infty}\).
Here \(f_{w}:\mathbb{T}\to\mathbb{C}\) is defined as \(f_{w}(z)=f(\overline{w}z)\). Moreover, the rotationally symmetric norm can also be extended over all complex valued measurable functions on \(\mathbb{T}\) as \[\alpha(f)=\sup\,\{\alpha(s):s\text{ is a simple function, }|s|\leq|f|\}.\] Furthermore, we say \(\alpha\) is continuous if, for a sequence of measurable sets \(\{E_{n}\}_{n=1}^{\infty}\) with \(m(E_{n})\to 0^{+}\), we have \(\alpha(\chi_{E_{n}})\to 0\). Here \(\chi_{E_{n}}\) is the characteristic function of the set \(E_{n}\subset\mathbb{T}\). One of the important examples of continuous rotationally symmetric norms, besides the \(p\)-norms (\(1\leq p<\infty\)), is an Orlicz norm. We shall first define an Orlicz norm on \(L^{\infty}\) and prove that it is a continuous rotationally symmetric norm. Recall the definition of an Orlicz function: an Orlicz function is a non-decreasing convex function \(\psi:[0,\infty]\to[0,\infty]\) such that \(\psi(0)=0\) and \(\psi(\infty)=\infty\). We additionally assume that \(\psi\) is continuous at \(0\), \(\psi\) is a strictly increasing surjective function, and \(\lim_{x\to\infty}\frac{\psi(x)}{x}=\infty\). For a detailed study of Orlicz functions and their associated norms, we refer to [17]. _Example_.: One of the examples of a continuous rotationally symmetric norm is an Orlicz norm, which is defined on \(L^{\infty}\) as: \[\|f\|_{\psi}=\inf\left\{\lambda>0:\frac{1}{\psi(1)}\int_{\mathbb{T}}\psi\left(\frac{|f|}{\lambda}\right)dm\leq 1\right\}\] where \(\psi\) is an Orlicz function and \(f\in L^{\infty}\). Below we justify that \(\|.\|_{\psi}\) is a continuous rotationally symmetric norm on \(L^{\infty}\). We first show that the \(\|.\|_{\psi}\) norm is well-defined on \(L^{\infty}\). Whenever \(f=0\), then \(\psi(0)=0\) and \(\|0\|_{\psi}=0\). Consider \(0\neq f\in L^{\infty}\), and observe that \(|f|\leq\|f\|_{\infty}\ a.e.\) We can write \(\frac{|f|}{\|f\|_{\infty}}\leq 1\ a.e.\), so that \(\frac{1}{\psi(1)}\int_{\mathbb{T}}\psi\left(\frac{|f|}{\|f\|_{\infty}}\right)dm\leq 1\). Therefore, \(\|f\|_{\psi}\leq\|f\|_{\infty}\). Now we prove that \(\|.\|_{\psi}\) is a rotationally symmetric norm. Clearly, \(\|1\|_{\psi}=\inf\left\{\lambda>0:\frac{1}{\psi(1)}\int_{\mathbb{T}}\psi\left(\frac{1}{\lambda}\right)dm\leq 1\right\}=\inf\left\{\lambda>0:\psi\left(\frac{1}{\lambda}\right)\leq\psi(1)\right\}\). Since \(\psi\) is strictly increasing on \([0,\infty]\), we have \(\psi\left(\frac{1}{\lambda}\right)\leq\psi(1)\) if and only if \(\lambda\geq 1\); therefore \(\|1\|_{\psi}=1\). It is easy to show that \(\||f|\|_{\psi}=\|f\|_{\psi}\), and by changing the variable, it follows that \(\|f_{w}\|_{\psi}=\|f\|_{\psi}\) for every \(w\in\mathbb{T}\). Lastly, for the continuity of \(\|.\|_{\psi}\), suppose \(\{E_{n}\}_{n=1}^{\infty}\) is a sequence of measurable sets in \(\mathbb{T}\) such that \(m(E_{n})\to 0^{+}\) as \(n\to\infty\). We shall prove that \(\|\chi_{E_{n}}\|_{\psi}\to 0\) as \(n\to\infty\). It is well known that every real-valued convex function on an open interval is continuous; in particular, \(\psi\) is continuous and finite on \([0,\infty)\). Moreover, for any \(\lambda>0\), the function \(\frac{\chi_{E_{n}}}{\lambda}\) takes only the values \(0\) and \(\frac{1}{\lambda}\).
Since \(\psi(0)=0\), this gives \[\frac{1}{\psi(1)}\int\limits_{\mathbb{T}}\psi\left(\frac{\chi_{E_{n}}}{\lambda}\right)dm=\frac{\psi\left(\frac{1}{\lambda}\right)}{\psi(1)}\,m(E_{n}).\] Fix \(\lambda>0\). Since \(m(E_{n})\to 0^{+}\), there exists \(K\in\mathbb{N}\) such that \[m(E_{n})\leq\frac{\psi(1)}{\psi\left(\frac{1}{\lambda}\right)}\ \ \forall\ n\geq K,\] and hence \(\frac{1}{\psi(1)}\int_{\mathbb{T}}\psi\left(\frac{\chi_{E_{n}}}{\lambda}\right)dm\leq 1\), that is, \(\|\chi_{E_{n}}\|_{\psi}\leq\lambda\) for all \(n\geq K\). As \(\lambda>0\) was arbitrary, it follows that \(\|\chi_{E_{n}}\|_{\psi}\to 0\) as \(n\to\infty\). The space \(\mathcal{L}^{\alpha}=\{f:\mathbb{T}\to\mathbb{C}\ \text{measurable}:\alpha(f)<\infty\}\) is a Banach space under the continuous rotationally symmetric norm \(\alpha\). The \(\alpha\)-closure of \(L^{\infty}\) is the general Lebesgue space \(L^{\alpha}\). Some important facts about \(L^{\alpha}\) and \(\mathcal{L}^{\alpha}\) spaces established in [4] are that \(L^{\infty}\subset L^{\alpha}\subset\mathcal{L}^{\alpha}\subset L^{1}\), and \(||.||_{1}\leq\alpha(.)\leq||.||_{\infty}\). The space \(L^{\infty}\) multiplies \(L^{\alpha}\) back into \(L^{\alpha}\) and the following inequality is satisfied: \(\alpha(fg)\leq\|f\|_{\infty}\alpha(g)\) for all \(f\in L^{\infty}\) and \(g\in L^{\alpha}\). In fact this inequality holds for all \(g\in\mathcal{L}^{\alpha}\). The \(\alpha\)-closure of \(H^{\infty}(\mathbb{T})\) is denoted by \(H^{\alpha}\), and it is a closed subspace of \(L^{\alpha}\). The general Hardy space \(H^{\alpha}\) is a Banach space under the \(\alpha\) norm. A simpler description of \(H^{\alpha}\) has been proved in [4], which is \(H^{\alpha}=H^{1}\cap L^{\alpha}\). Furthermore, note that \(H^{\infty}\subset H^{\alpha}\subset H^{1}\). As we know \(L^{\alpha}\subset L^{1}\), we can identify \(f\in L^{\alpha}\) with a Fourier series \[f(z)=\sum\limits_{j=-\infty}^{\infty}\hat{f}(j)z^{j}\] where the Fourier coefficients are given by \(\hat{f}(j)=\int\limits_{\mathbb{T}}f(z)z^{-j}dm\), for all \(j\in\mathbb{Z}\). One of the characterizations of \(H^{\alpha}\) obtained in [4] is: \[H^{\alpha}=\{f\in L^{\alpha}\ :\ \hat{f}(j)=\int_{\mathbb{T}}fz^{-j}dm=0,\ \forall\ j<0\}.\] Consider the \(n\)th Cesaro means of \(f\in L^{\alpha}\), for each \(n\geq 1\) \[\sigma_{n}(f)=\frac{S_{0}(f)+S_{1}(f)+\cdots+S_{n}(f)}{n+1}\] where \(S_{k}(f)\) stands for the \(k\)-th partial sum \(\sum\limits_{j=-k}^{k}\hat{f}(j)z^{j}\), for \(k\geq 0\). The fact that \(\sigma_{n}(f)\) converges to \(f\) in \(L^{\alpha}\) has been proved in [4]. For the convenience of the reader, we reproduce the result below. **Lemma 2.2**.: _Let \(\alpha\) be a continuous rotationally symmetric norm and \(f\in L^{\alpha}\). Then \(\alpha(\sigma_{n}(f)-f)\to 0\) as \(n\to\infty\), and also \(L^{\alpha}\) is the \(\alpha\)-closure of span\(\{z^{n}:n\in\mathbb{Z}\}\)._ In order to obtain a characterization of simply invariant subspaces of \(L^{\alpha}\) invariant under the multiplication by \(z^{n}\), we require a decomposition of \(L^{\alpha}\) in terms of the space \(L^{\alpha}(z^{n})\), defined as the \(\alpha\) closure of \(L^{\infty}(z^{n})\). A similar decomposition has been proved in the context of \(H^{\alpha}\) in [21].
Proceeding in a similar fashion as in the proof of Lemma 4.2 in [21], we obtain a decomposition of \(L^{\alpha}\) as follows. **Lemma 2.3**.: _Suppose \(\alpha\) is a continuous rotationally symmetric norm on \(L^{\alpha}\). Then_ \[L^{\alpha}=L^{\alpha}(z^{n})\oplus zL^{\alpha}(z^{n})\oplus\cdots\oplus z^{n-1}L^{\alpha}(z^{n}), \tag{2.1}\] _where \(\oplus\) is an algebraic direct sum._ For a fixed \(n\in\mathbb{N}\), the operator \(T_{z^{n}}\) of multiplication by the monomial \(z^{n}\) on \(L^{\alpha}\) acts as an isometry. This can be verified as \(\alpha(T_{z^{n}}(f))=\alpha(z^{n}f)=\alpha(|z^{n}f|)=\alpha(|f|)=\alpha(f)\). A closed subspace \(\mathcal{M}\) of \(L^{\alpha}\) is simply invariant under \(T_{z^{n}}\) if \(z^{n}\mathcal{M}\subsetneq\mathcal{M}\), and is doubly invariant if \(z^{n}\mathcal{M}=\mathcal{M}\). Let us now introduce the definition of an \(n\)-unimodular matrix associated with an \(r\)-tuple of \(L^{\infty}\) functions. It is a direct analogue of the definition of the \(n\)-inner matrix associated with an \(r\)-tuple of \(H^{\infty}\) functions introduced in [23]. Let \((\varphi_{1},\varphi_{2},\ldots,\varphi_{r})\) be an \(r\)-tuple of \(L^{\infty}\) functions (\(r\leq n\)). For each \(1\leq j\leq r\), we can write \(\varphi_{j}=\sum\limits_{i=1}^{n}z^{i-1}\ \varphi_{ji}\) where \(\varphi_{ji}\in L^{2}(z^{n})\). Next, we define the \(r\times n\) matrix \(A=(\varphi_{ji})\) for \(1\leq j\leq r\), \(1\leq i\leq n\), and call it the \(n\)-unimodular matrix associated with the tuple \((\varphi_{1},\varphi_{2},\ldots,\varphi_{r})\) if \(AA^{*}=I\) almost everywhere. In particular, a function \(\varphi\in L^{\infty}\) is \(n\)-unimodular if \(\sum\limits_{i=1}^{n}|\varphi_{i}|^{2}=1\)\(a.e.\), where \(\varphi=\sum\limits_{i=1}^{n}z^{i-1}\)\(\varphi_{i}\) and \(\varphi_{i}\in L^{2}(z^{n})\). Observe that \(|\varphi_{i}|^{2}\leq 1\)\(a.e.\) and hence \(\varphi_{i}\in L^{\infty}(z^{n})\). We will require a characterization for the associated matrix \(A\) to be \(n\)-unimodular. This characterization is presented in Lemma 2.4 and is similar to the characterization of the \(B\)-inner matrix presented in [23]. We point out that the proof of the necessary part is very similar to that presented in [23]. However, the converse carries a different set of arguments. **Lemma 2.4**.: _The matrix associated with an \(r\)-tuple \((\varphi_{1},\varphi_{2},\ldots,\varphi_{r})\) of \(L^{\infty}\) functions is \(n\)-unimodular if and only if \(\{z^{kn}\varphi_{j}:k\in\mathbb{Z},\ 1\leq j\leq r\}\) is an orthonormal set in \(L^{2}\)._ Proof.: Write \(\varphi_{j}=\sum\limits_{i=1}^{n}z^{i-1}\)\(\varphi_{ji}\) for each \(1\leq j\leq r\) where \(\varphi_{ji}\in L^{2}(z^{n})\). Assume that \(A=(\varphi_{ji})\) is an \(n\)-unimodular matrix. So, we have \[\sum\limits_{i=1}^{n}|\varphi_{ji}|^{2}=1\;a.e.,\;\text{for every}\;\;1\leq j\leq r \tag{2.2}\] and \[\sum\limits_{i=1}^{n}\varphi_{ji}\overline{\varphi_{ki}}=0\;a.e.,\;\;\text{for }j\neq k,\;\text{and}\;1\leq j,k\leq r. \tag{2.3}\] The fact \(\{z^{kn}\varphi_{j}:k\in\mathbb{Z},\ 1\leq j\leq r\}\) is orthonormal in \(L^{2}\) can be verified by following the arguments similar to those in the proof of Lemma 3.9 in [23]. Conversely, let \(\{z^{kn}\varphi_{j}:k\in\mathbb{Z},\ 1\leq j\leq r\}\) be an orthonormal set in \(L^{2}\). We shall show that \(A=(\varphi_{ji})\) satisfies conditions (2.2) and (2.3). Write \(\varphi_{j}=\sum\limits_{i=1}^{n}z^{i-1}\varphi_{ji}\).
A simple calculation shows that \(\int\limits_{\mathbb{T}}\sum\limits_{i=1}^{n}|\varphi_{ji}|^{2}dm=1\) and \(\int\limits_{\mathbb{T}}\sum\limits_{i=1}^{n}|\varphi_{ji}|^{2}z^{-kn}dm=0\) for all \(k\neq 0\). Therefore, \(\sum\limits_{i=1}^{n}|\varphi_{ji}|^{2}=1\)\(a.e\). This validates (2.2). Note that when \(j\neq l\) and \(k\in\mathbb{Z}\), we have \[0 =\langle\varphi_{j},z^{kn}\varphi_{l}\rangle\] \[=\sum\limits_{i=1}^{n}\langle\varphi_{ji},z^{kn}\varphi_{li}\rangle\] \[=\left\langle\sum\limits_{i=1}^{n}\varphi_{ji}\overline{\varphi_{li}},z^{kn}\right\rangle\] Hence \(\sum\limits_{i=1}^{n}\varphi_{ji}\overline{\varphi_{li}}=0\ a.e.\) This completes the proof of the lemma. ## 3 Simply Invariant Subspaces of \(L^{\alpha}\) In order to describe the simply invariant subspace \(\mathcal{M}\) of \(L^{\alpha}\) under the operator \(T_{z^{n}}\), we shall first observe that \(\mathcal{M}\) has a non-trivial intersection with \(L^{\infty}\) and that \(\mathcal{M}\cap L^{\infty}\) is a weak*-closed subspace of \(L^{\infty}\). We borrow the description of \(T_{z^{n}}\)-simply invariant subspaces of \(L^{\infty}\) from [12] and record it below for the convenience of the reader. The rest of the arguments in the proof of the main theorem are elementary and rely on the fact (Lemma 3.6) that for any \(f\in\mathcal{M}\) there exists an outer function \(O\) such that \(Of\in\mathcal{M}\cap L^{\infty}\). **Theorem 3.1** ([12]).: _Let \(\mathcal{M}\) be a weak*-closed subspace of \(L^{\infty}\) which is simply invariant under \(T_{z^{n}}\). Then the most general form of \(\mathcal{M}\) is_ \[\mathcal{M}=\sum\limits_{j=1}^{r}\oplus\varphi_{j}H^{\infty}(z^{n})\oplus\mathcal{K}_{\mathcal{M}}^{\overline{A}}\,\] _where_ _(a) Each_ \(\varphi_{j}\) _is an_ \(n\)_-unimodular function such that_ \(r\leq n\)_,_ _(b)_ \(\overline{A}=(\overline{\varphi_{ji}})\in M_{rn}(L^{2}(z^{n}))\)_,_ \(\varphi_{j}=\sum\limits_{i=1}^{n}z^{i-1}\varphi_{ji}\)_,_ _(c)_ \(\mathcal{K}_{\mathcal{M}}^{\overline{A}}=\left\{f\in\mathcal{M}:\sum\limits_{i=1}^{n}\overline{\varphi_{ji}}f_{i}=0\;a.e.\;\forall\;1\leq j\leq r\right\}\)_, where_ \(f=\sum\limits_{i=1}^{n}z^{i-1}f_{i}\) _and_ \(f_{i}\in L^{2}(z^{n})\)_._ _Moreover, \(\mathcal{K}_{\mathcal{M}}^{\overline{A}}\) is a doubly invariant subspace of \(L^{\infty}\) under \(T_{z^{n}}\). When \(r=n\), \(\mathcal{K}_{\mathcal{M}}^{\overline{A}}=\{0\}\). If \(r<n\), then there exist infinitely many non-zero doubly invariant subspaces of \(\mathcal{K}_{L^{\infty}}^{\overline{A}}\) which, when appended with \(\sum\limits_{j=1}^{r}\oplus\varphi_{j}H^{\infty}(z^{n})\), form a simply invariant subspace of \(L^{\infty}\)._ _Remark 3.2_.: The construction of the characterization of simply \(T_{z^{n}}\)-invariant subspaces of \(L^{p}\) (\(0<p\leq\infty\)) described in [12] shows that the \(r\times n\) matrix \(A=(\varphi_{ji})\) associated with the tuple \((\varphi_{1},\ldots,\varphi_{r})\) is \(n\)-unimodular. We justify Remark 3.2 as follows: The proof of Theorem A [12] reveals that \(\varphi_{1},\ldots,\varphi_{r}\) appearing in the statement of Theorem 3.1 are orthonormal in \(L^{2}\). Further, \(z^{nm}\varphi_{j}\perp z^{nl}\varphi_{k}\) in \(L^{2}\) for all \(1\leq j,k\leq r\) and \(m,l\in\mathbb{Z}\). This leads to the fact that \(\{z^{kn}\varphi_{j}:k\in\mathbb{Z},1\leq j\leq r\}\) is an orthonormal set in \(L^{2}\). Now by Lemma 2.4, we have \(AA^{*}=I\).
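Before proceeding, we record a simple example of an \(n\)-unimodular function (an illustration of ours; it is immediate from the definition and Lemma 2.4). _Example_.: For a fixed \(n\in\mathbb{N}\), consider \[\varphi(z)=\frac{1}{\sqrt{n}}\left(1+z+\cdots+z^{n-1}\right).\] Here \(\varphi_{i}=\frac{1}{\sqrt{n}}\in L^{\infty}(z^{n})\) for each \(1\leq i\leq n\), so that \(\sum\limits_{i=1}^{n}|\varphi_{i}|^{2}=1\ a.e.\), and hence \(\varphi\) is \(n\)-unimodular; accordingly, \(\{z^{kn}\varphi:k\in\mathbb{Z}\}\) is an orthonormal set in \(L^{2}\).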
Let us now state the main result of this section, which helps us to describe the simply invariant subspaces of \(L^{\alpha}\) under \(T_{z^{n}}\). **Theorem 3.3**.: _Let \(\alpha\) be a continuous rotationally symmetric norm and \(\mathcal{M}\) be a non-trivial closed subspace of \(L^{\alpha}\) which is simply invariant under \(T_{z^{n}}\). Then there exist \(n\)-unimodular functions \(\varphi_{1}\),...,\(\varphi_{r}\) (\(r\leq n\)) such that_ \[\mathcal{M}=\sum\limits_{j=1}^{r}\oplus\varphi_{j}H^{\alpha}(z^{n})\oplus \mathcal{K}_{\mathcal{M}}^{\overline{A}}\] _where (a) \(H^{\alpha}(z^{n}):=\overline{H^{\infty}(z^{n})}^{\alpha}\), (b) \(A=(\varphi_{ji})\) is an \(r\times n\) matrix and each \(\varphi_{j}=\sum\limits_{i=1}^{n}z^{i-1}\varphi_{ji}\) such that \(\varphi_{ji}\in L^{2}(z^{n})\), (c) \(\mathcal{K}_{\mathcal{M}}^{\overline{A}}=\left\{f\in\mathcal{M}:\sum\limits_{ i=1}^{n}\overline{\varphi_{ji}}f_{i}=0\;a.e.,\;\forall\;1\leq j\leq r\right\}\), where \(f=\sum\limits_{i=1}^{n}z^{i-1}f_{i}\) and \(f_{i}\in L^{2}(z^{n})\)._ The following two lemmas, Lemma 3.4 and Lemma 3.5, have been proved in [21] in the context of \(H^{\alpha}\). However, using similar arguments we obtain the \(L^{\alpha}\) versions presented below. **Lemma 3.4**.: _Suppose \(\alpha\) is a continuous rotationally symmetric norm and let \(\mathcal{M}\) be a closed subspace of \(L^{\alpha}\). Then \(\mathcal{M}\cap L^{\infty}\) is weak*-closed in \(L^{\infty}\)._ **Lemma 3.5**.: _Let \(\alpha\) be a continuous rotationally symmetric norm. Suppose \(\mathcal{M}\) is a closed subspace of \(L^{\alpha}\). Then, \(\mathcal{M}\) is simply invariant under \(T_{z^{n}}\) if and only if \(\mathcal{M}\) is simply invariant under the algebra \(H^{\infty}(z^{n})\)._ Our starting point in the proof of Theorem 3.3 will be to guarantee that \(\mathcal{M}\) has a non-trivial intersection with \(L^{\infty}\). This fact is established in the next lemma. **Lemma 3.6**.: _Suppose \(\mathcal{M}\) is a non-trivial \(\alpha\)-closed subspace of \(L^{\alpha}\) such that \(z^{n}\mathcal{M}\subsetneq\mathcal{M}\). Then, \(\mathcal{M}\cap L^{\infty}\neq\{0\}\)._ Proof.: Suppose \(0\neq f\in\mathcal{M}\subset L^{\alpha}\subset L^{1}\). Then \(|f|^{\frac{1}{2}}\in L^{2}\). In view of the decomposition (2.1), we can write \(|f|^{\frac{1}{2}}\)= \(g_{1}+zg_{2}+\cdots+z^{n-1}g_{n}\) for some \(g_{1},g_{2},\ldots,g_{n}\in L^{2}(z^{n})\). Define \[k_{j}=\exp(-|g_{j}|\,-i\;(|g_{j}|)^{\sim}),\] where \((|g_{j}|)^{\sim}\) stands for the harmonic conjugate of the \(L^{2}\) function \(|g_{j}|\) and \((|g_{j}|)^{\sim}\in L^{2}(z^{n})\) (the harmonic conjugate exists for all \(L^{p}\) functions [11]). Note that \(k_{j}\) is an analytic function with \(|k_{j}|\leq 1\), i.e., \(k_{j}\in H^{\infty}(z^{n})\) and \(|k_{j}|\leq 1\) for each \(1\leq j\leq n\). Put \(O=k_{1}k_{2}\cdots k_{n}\). Note that \[O|f|^{\frac{1}{2}}=O\left(g_{1}+zg_{2}+\cdots+z^{n-1}g_{n}\right)\] \[=Og_{1}+zOg_{2}+\cdots+z^{n-1}Og_{n}.\] Therefore, \[|O||f|^{\frac{1}{2}}\leq|k_{1}||g_{1}|+\cdots+|k_{n}||g_{n}|\] \[\qquad=\exp(-|g_{1}|)|g_{1}|+\cdots+\exp(-|g_{n}|)|g_{n}|\leq n.\] Thus \(O|f|^{\frac{1}{2}}\in L^{\infty}\), which implies that \(O^{2}f\in L^{\infty}\). By Lemma 3.5, we have \(O^{2}f\in\mathcal{M}\). Hence \(O^{2}f\in\mathcal{M}\cap L^{\infty}\). We now return to the proof of Theorem 3.3. 
Proof.: In view of Lemma 3.4 and Lemma 3.6, we conclude that \(\mathcal{M}\cap L^{\infty}\) is a non-trivial weak*-closed subspace of \(L^{\infty}.\) It can be easily seen that \(\mathcal{M}\cap L^{\infty}\) is simply invariant under \(T_{z^{n}}.\) So, by Theorem 3.1, there exist \(n\)-unimodular functions \(\varphi_{1},\ldots,\varphi_{r}\) (\(r\leq n\)) such that \[\mathcal{M}\cap L^{\infty}=\sum_{j=1}^{r}\oplus\varphi_{j}H^{\infty}(z^{n}) \oplus\mathcal{K}_{\mathcal{M}\cap L^{\infty}}^{\overline{A}}, \tag{3.1}\] where \(A\) is the corresponding \(n\)-unimodular matrix in \(M_{rn}(L^{\infty}(z^{n}))\) associated with the \(r\)-tuple \((\varphi_{1},\ldots,\varphi_{r}).\) Moreover, the \(T_{z^{n}}\)-doubly invariant subspace \(\mathcal{K}_{\mathcal{M}\cap L^{\infty}}^{\overline{A}}\) has the form \[\mathcal{K}_{\mathcal{M}\cap L^{\infty}}^{\overline{A}}=\left\{f\in\mathcal{ M}\cap L^{\infty}:\sum_{i=1}^{n}\overline{\varphi_{ji}}f_{i}=0\;a.e.\;\forall\;1 \leq j\leq r\right\} \tag{3.2}\] in which \(f=\sum\limits_{i=1}^{n}z^{i-1}f_{i}\) and \(f_{i}\in L^{2}(z^{n}).\) We claim that \(\mathcal{M}=\sum\limits_{j=1}^{r}\oplus\varphi_{j}H^{\alpha}(z^{n})\oplus \mathcal{K}_{\mathcal{M}}^{\overline{A}}.\) It is trivial to note that \(\mathcal{K}_{\mathcal{M}}^{\overline{A}}\subset\mathcal{M}.\) Note that \(\varphi_{j}H^{\infty}(z^{n})\subset\mathcal{M}\) for each \(1\leq j\leq r.\) We now show that \(\varphi_{j}H^{\alpha}(z^{n})\subset\mathcal{M}.\) For any \(f\in H^{\alpha}(z^{n}),\) there exists a sequence \(\{f_{n}\}_{n=1}^{\infty}\subset H^{\infty}(z^{n})\) such that \(\alpha(f_{n}-f)\to 0\), and since \(\varphi_{j}\in L^{\infty},\) we see that \(\varphi_{j}f_{n}\) converges to \(\varphi_{j}f\) in \(L^{\alpha}.\) Also the sequence \(\{\varphi_{j}f_{n}\}_{n=1}^{\infty}\subset\mathcal{M},\) and hence \(\varphi_{j}f\in\mathcal{M}.\) This implies that \(\sum\limits_{j=1}^{r}\oplus\varphi_{j}H^{\alpha}(z^{n})\oplus\mathcal{K}_{ \mathcal{M}}^{\overline{A}}\subset\mathcal{M}.\) In order to prove the reverse containment, note that (as in the proof of Lemma 3.6) for any \(f\in\mathcal{M},\) we can construct an outer function \(O\in H^{\infty}(z^{n})\) such that \(Of\in\mathcal{M}\cap L^{\infty}.\) By equation (3.1), we can write \[Of=\varphi_{1}h_{1}+\cdots+\varphi_{r}h_{r}+K \tag{3.3}\] where \(h_{1},\ldots,h_{r}\in H^{\infty}(z^{n})\) and \(K\in\mathcal{K}_{\mathcal{M}\cap L^{\infty}}^{\overline{A}}.\) In view of Lemma 2.3, we can write \[f=f_{1}+\cdots+z^{n-1}f_{n}\] for some \(f_{1},\ldots,f_{n}\in L^{\alpha}(z^{n}).\) Therefore, \[Of=Of_{1}+\cdots+z^{n-1}Of_{n}. \tag{3.4}\] Also we can decompose \(\varphi_{j}\) and \(K\) as follows: \[\begin{split}\varphi_{j}=\varphi_{j1}+\cdots+z^{n-1}\varphi_{jn},\;\;\varphi_{ji}\in L^{\infty}(z^{n})\\ K=K_{1}+\cdots+z^{n-1}K_{n},\;\;K_{i}\in L^{2}(z^{n})\end{split} \tag{3.5}\] From equations (3.3), (3.4) and (3.5), it is easy to see that \[\begin{pmatrix}Of_{1}\\ \vdots\\ Of_{n}\end{pmatrix}\ =\ \begin{pmatrix}\varphi_{11}h_{1}+\varphi_{21}h_{2}+\cdots+ \varphi_{r1}h_{r}+K_{1}\\ \vdots\\ \varphi_{1n}h_{1}+\varphi_{2n}h_{2}+\cdots+\varphi_{rn}h_{r}+K_{n}\end{pmatrix}. \tag{3.6}\] It follows that \[\left(\overline{Of_{1}}\ \ \cdots\ \ \ \overline{Of_{n}}\right)\ =\ \left(\overline{\sum\limits_{j=1}^{r}\varphi_{j1}h_{j}+K_{1}}\ \ \cdots\ \ \overline{\sum\limits_{j=1}^{r}\varphi_{jn}h_{j}+K_{n}}\right). 
\tag{3.7}\] Therefore, \[|Of_{1}|^{2}+\cdots+|Of_{n}|^{2}=\left|\sum\limits_{j=1}^{r}\varphi_{j1}h_{j} +K_{1}\right|^{2}+\cdots+\left|\sum\limits_{j=1}^{r}\varphi_{jn}h_{j}+K_{n} \right|^{2}.\] Hence, for each \(1\leq i\leq n\), \[\left|\sum\limits_{j=1}^{r}\varphi_{ji}h_{j}+K_{i}\right|^{2}\leq|Of_{1}|^{2 }+\cdots+|Of_{n}|^{2}\leq|O|^{2}(|f_{1}|+\cdots+|f_{n}|)^{2}.\] This yields \[\left|\sum\limits_{j=1}^{r}\varphi_{ji}\frac{h_{j}}{O}+\frac{K_{i}}{O}\right| \leq|f_{1}|+\cdots+|f_{n}|.\] Since \(f_{1},\ldots,f_{n}\in L^{\alpha}\), we have \[\alpha\left(\left|\sum\limits_{j=1}^{r}\varphi_{ji}\frac{h_{j}}{O}+\frac{K_{i} }{O}\right|\right)\leq\alpha(|f_{1}|+\cdots+|f_{n}|)<\infty.\] Therefore, for each \(1\leq i\leq n\), \(\sum\limits_{j=1}^{r}\varphi_{ji}\frac{h_{j}}{O}+\frac{K_{i}}{O}\in L^{\alpha}\). We now claim that each \(\frac{h_{j}}{O}\) and each \(\frac{K_{i}}{O}\) belongs to \(L^{\alpha}\). For a fixed \(i\), we have \[\varphi_{1i}\frac{h_{1}}{O}+\cdots+\varphi_{ji}\frac{h_{j}}{O}+\cdots+\varphi_ {ri}\frac{h_{r}}{O}+\frac{K_{i}}{O}\in L^{\alpha}.\] Since \(\overline{\varphi_{ji}}\in L^{\infty}\), we get \[\varphi_{1i}\overline{\varphi_{ji}}\frac{h_{1}}{O}+\cdots+|\varphi_{ji}|^{2} \frac{h_{j}}{O}+\cdots+\varphi_{ri}\overline{\varphi_{ji}}\frac{h_{r}}{O}+ \overline{\varphi_{ji}}\frac{K_{i}}{O}\in L^{\alpha}.\] Taking the summation over \(1\leq i\leq n\), \[\sum\limits_{i=1}^{n}\varphi_{1i}\overline{\varphi_{ji}}\frac{h_{1}}{O}+ \cdots+\sum\limits_{i=1}^{n}\left|\varphi_{ji}\right|^{2}\frac{h_{j}}{O}+ \cdots+\sum\limits_{i=1}^{n}\varphi_{ri}\overline{\varphi_{ji}}\frac{h_{r}}{O} +\sum\limits_{i=1}^{n}\overline{\varphi_{ji}}\frac{K_{i}}{O}\in L^{\alpha}.\] Since \(A=(\varphi_{ji})\) is an \(n\)-unimodular matrix, by conditions (2.2), (2.3) and (3.2) we get \(\dfrac{h_{j}}{O}\in L^{\alpha}\) for \(1\leq j\leq r.\) So, \(\sum\limits_{j=1}^{r}\varphi_{ji}\dfrac{h_{j}}{O}\in L^{\alpha}.\) Since \(\sum\limits_{j=1}^{r}\varphi_{ji}\dfrac{h_{j}}{O}+\dfrac{K_{i}}{O}\in L^{\alpha},\) it follows that \(\dfrac{K_{i}}{O}=\left(\sum\limits_{j=1}^{r}\varphi_{ji}\dfrac{h_{j}}{O}+\dfrac{K_{i}}{O}\right)-\sum\limits_{j=1}^{r}\varphi_{ji}\dfrac{h_{j}}{O}\in L^{\alpha}.\) Hence the claim follows. Furthermore, because \(O\) is an outer function, \(\dfrac{h_{j}}{O}\in H^{1}.\) Hence, \(\dfrac{h_{j}}{O}\in H^{\alpha}.\) By Lemma 2.2, the Cesàro means \(\sigma_{l}\left(\dfrac{h_{j}}{O}\right)\) converge to \(\dfrac{h_{j}}{O}\) in \(H^{\alpha}.\) Since each \(\sigma_{l}\left(\dfrac{h_{j}}{O}\right)\) is a polynomial in \(z^{n},\) we get \(\dfrac{h_{j}}{O}\in H^{\alpha}(z^{n})\) for all \(1\leq j\leq r.\) Lastly, to show that \(f\in\sum\limits_{j=1}^{r}\oplus\varphi_{j}H^{\alpha}(z^{n})\oplus \mathcal{K}_{\mathcal{M}}^{\overline{A}},\) it suffices to prove that \(\dfrac{K}{O}\in\mathcal{K}_{\mathcal{M}}^{\overline{A}}.\) Observe that \(\dfrac{K}{O}=f-\left(\varphi_{1}\dfrac{h_{1}}{O}+\varphi_{2}\dfrac{h_{2}}{O}+ \cdots+\varphi_{r}\dfrac{h_{r}}{O}\right)\in\mathcal{M}\) (since \(\sum\limits_{j=1}^{r}\oplus\varphi_{j}H^{\alpha}(z^{n})\subset \mathcal{M}\)). 
Further, as \(K\in\mathcal{K}_{\mathcal{M}\cap L^{\infty}}^{\overline{A}}\), in view of (3.2) we have, for all \(1\leq j\leq r\), \[\overline{\varphi_{j1}}K_{1}+\overline{\varphi_{j2}}K_{2}+\cdots+\overline{ \varphi_{jn}}K_{n}=0\text{ }a.e.\] Therefore, \(\overline{\varphi_{j1}}\dfrac{K_{1}}{O}+\overline{\varphi_{j2}}\dfrac{K_{2}}{ O}+\cdots+\overline{\varphi_{jn}}\dfrac{K_{n}}{O}=0\text{ }a.e.\) This implies \(\dfrac{K}{O}\in\mathcal{K}_{\mathcal{M}}^{\overline{A}}\) and hence \[\mathcal{M}=\sum\limits_{j=1}^{r}\oplus\varphi_{j}H^{\alpha}(z^{n })\oplus\mathcal{K}_{\mathcal{M}}^{\overline{A}}.\] _Remark 3.7_.: Since \(\|\cdot\|_{p}\) for \(1\leq p<\infty\) and \(p\neq 2\) are continuous rotationally symmetric norms, Theorem A in [12] is a special case of Theorem 3.3. **Corollary 3.8**.: _([3]) Let \(\alpha\) be a continuous rotationally symmetric norm and \(\mathcal{M}\) be a non-trivial closed subspace of \(L^{\alpha}\) which is simply invariant under \(T_{z}\). Then there exists a unimodular function \(\varphi\) such that_ \[\mathcal{M}=\varphi H^{\alpha}.\] Proof.: For the case of multiplication by \(T_{z},\) we have \(n=1.\) By Theorem 3.3, we see that there exists a unimodular function \(\varphi\) such that \[\mathcal{M}=\varphi H^{\alpha}\oplus\mathcal{K}_{\mathcal{M}}^{\overline{A}}.\] Here \(A\) is the \(1\times 1\) matrix \((\varphi),\) and \(\mathcal{K}_{\mathcal{M}}^{\overline{A}}=\{f\in\mathcal{M}:\overline{\varphi} f=0\text{ }a.e.\}.\) Further, the unimodularity of \(\varphi\) forces \(\mathcal{K}_{\mathcal{M}}^{\overline{A}}=\{0\}.\) ## 4 de Branges' result in \(H^{\alpha}\) de Branges in [5] first characterized Hilbert spaces which are contractively contained in \(H^{2}\) and on which \(T_{z}\) acts as an isometry. In [24], it was established using the Wold decomposition that the condition of contractive containment can be dropped. This was a significant generalization of de Branges' result, which was further extended to all \(H^{p}\) spaces in [22]. In this section, we extend the result of Singh and Agrawal [22] to Hardy spaces equipped with rotationally symmetric norms. **Theorem 4.1**.: _Let \(\mathcal{M}\) be a Hilbert space which is a vector subspace of \(H^{\alpha}\) such that \(T_{z}(\mathcal{M})\subset\mathcal{M}\) and \(T_{z}\) acts as an isometry on \(\mathcal{M}\). Then, there exists \(\phi\in H^{2}\cap H^{\alpha}\) such that_ \[\mathcal{M}=\phi H^{2}.\] _Further, \(\|\phi g\|_{\mathcal{M}}=\|g\|_{H^{2}}\) for all \(g\in H^{2}\)._ Proof.: Since \(T_{z}\) acts as an isometry on \(\mathcal{M}\), by the Wold decomposition theorem [10] we can write \[\mathcal{M}=\bigcap\limits_{n=0}^{\infty}T_{z^{n}}\mathcal{M}\ \oplus\ \bigoplus\limits_{n=0}^{\infty}T_{z^{n}}\mathcal{N} \tag{4.1}\] where \(\mathcal{N}\) stands for the orthogonal complement of \(z\mathcal{M}\) in \(\mathcal{M}\). We claim that \(\bigcap\limits_{n=0}^{\infty}T_{z^{n}}\mathcal{M}=\{0\}\). Note that any \(f\in\bigcap\limits_{n=0}^{\infty}T_{z^{n}}\mathcal{M}\) can be expanded as a Fourier series \[f(z)=a_{0}+a_{1}z+a_{2}z^{2}+\cdots+a_{n}z^{n}+\cdots\] where \(a_{n}=\int\limits_{\mathbb{T}}fz^{-n}dm\). Since \(f\in\bigcap\limits_{n=0}^{\infty}T_{z^{n}}\mathcal{M}\), for each \(n\geq 0\) we can find \(g_{n+1}\in\mathcal{M}\) such that \(f(z)=z^{n+1}g_{n+1}(z)\). This forces \(a_{n}=0\) for all \(n\geq 0\), and hence \(f=0\). Therefore, \[\mathcal{M}=\mathcal{N}\oplus z\mathcal{N}\oplus z^{2}\mathcal{N}\oplus\cdots. \tag{4.2}\] Let \(\phi\) be any non-zero element of \(\mathcal{N}\). 
Without loss of generality assume that \(\|\phi\|_{\mathcal{M}}=1\). We first show that \(\phi\) multiplies \(H^{2}\) into \(\mathcal{M}\). Let \(g(z)=\sum\limits_{n=0}^{\infty}b_{n}z^{n}\) be an arbitrary element of \(H^{2}\). Put \(g_{n}(z)=\sum\limits_{k=0}^{n}b_{k}z^{k}\). In view of (4.2) we make the following computation, for any \(n\geq 0\): \[\|\phi g_{n}\|_{\mathcal{M}}^{2} =\|b_{0}\phi+b_{1}z\phi+\cdots+b_{n}z^{n}\phi\|_{\mathcal{M}}^{2}\] \[=\|b_{0}\phi\|_{\mathcal{M}}^{2}+\|b_{1}z\phi\|_{\mathcal{M}}^{2} +\cdots+\|b_{n}z^{n}\phi\|_{\mathcal{M}}^{2}\] \[=|b_{0}|^{2}+|b_{1}|^{2}+\cdots+|b_{n}|^{2}\] \[=\|g_{n}\|_{H^{2}}^{2}.\] Since \(\{g_{n}\}_{n=1}^{\infty}\) is a Cauchy sequence in \(H^{2}\), \(\{\phi g_{n}\}_{n=1}^{\infty}\) is a Cauchy sequence in \(\mathcal{M}\), and hence there exists \(h\in\mathcal{M}\) such that \(\phi g_{n}\to h\) in \(\mathcal{M}\). Note that for any \(k\leq n\), we have \[\begin{split}\phi g_{n}&=b_{0}\phi+b_{1}z\phi+\cdots+b_ {k}z^{k}\phi+b_{k+1}z^{k+1}\phi+\cdots+b_{n}z^{n}\phi\\ &=b_{0}\phi+b_{1}z\phi+\cdots+b_{k}z^{k}\phi+z^{k+1}\phi h_{n}\end{split} \tag{4.3}\] where \(h_{n}=b_{k+1}+b_{k+2}z+\cdots+b_{n}z^{n-k-1}\in H^{2}\). In a similar fashion as above, we can show that \(\{\phi h_{n}\}\) is a Cauchy sequence in \(\mathcal{M}\) and hence there exists \(f\in\mathcal{M}\) such that \[\phi h_{n}\to f\text{ in }\mathcal{M}. \tag{4.4}\] From (4.3) and (4.4), we get \[h=b_{0}\phi+b_{1}z\phi+\cdots+b_{k}z^{k}\phi+z^{k+1}f.\] Observe that the coefficient of \(z^{k}\) in \(h\) is equal to the coefficient of \(z^{k}\) in \(b_{0}\phi+b_{1}z\phi+\cdots+b_{k}z^{k}\phi\). Put \(\phi=\beta_{0}+\beta_{1}z+\beta_{2}z^{2}+\cdots+\beta_{k}z^{k}+\cdots\). Therefore, the coefficient of \(z^{k}\) in \(b_{0}\phi+b_{1}z\phi+\cdots+b_{k}z^{k}\phi+\cdots\) is \(b_{0}\beta_{k}+b_{1}\beta_{k-1}+\cdots+b_{k}\beta_{0}\). This is the same as the coefficient of \(z^{k}\) in the formal product \(\phi g\). Therefore \(\phi g=h\in\mathcal{M}\), and hence \(\phi H^{2}\subset\mathcal{M}\). Recall that on the unit circle, \(L^{2}=H^{2}\oplus\overline{zH^{2}}\). Let \(f\in L^{2}\); then \(f=f_{1}+\overline{zf_{2}}\) for some \(f_{1},f_{2}\in H^{2}\). So, \[\phi f=\phi f_{1}+\phi\cdot\overline{zf_{2}}.\] Note that \(\phi f_{1}\), \(\phi f_{2}\) belong to \(H^{\alpha}\), and this implies \[\alpha(\phi\cdot\overline{zf_{2}})=\alpha(|\phi\cdot\overline{zf_{2}}|)= \alpha(|\phi f_{2}|)<\infty.\] Thus \(\alpha(\phi f)\leq\alpha(\phi f_{1})+\alpha(\phi\cdot\overline{zf_{2}})<\infty\), and hence \(\phi\) multiplies \(L^{2}\) into \(L^{\alpha}\). Note that \(\phi f\in L^{\alpha}\subset L^{1}\) for all \(f\in L^{2}\). Therefore, by the converse of Hölder's inequality [14], we have \(\phi\in L^{2}\). Also \(\phi\in\mathcal{M}\subset H^{\alpha}\subset H^{1}\), which implies that \(\phi\in L^{2}\cap H^{1}=H^{2}\). Finally, with this we conclude that \(\phi\in H^{2}\cap H^{\alpha}\) and \(\mathcal{N}\subset H^{2}\cap H^{\alpha}\). We claim that \(\dim(\mathcal{N})=1\). Let \(\phi_{1}\) and \(\phi_{2}\) be two unit vectors in \(\mathcal{N}\) such that \(\phi_{1}\perp\phi_{2}\) in \(\mathcal{M}\). We show that \(\phi_{1}H^{2}\perp\phi_{2}H^{2}\) in \(\mathcal{M}\). Consider \(f,g\in H^{2}\) and \(f_{n}=\sum\limits_{k=0}^{n}a_{k}z^{k}\), \(g_{n}=\sum\limits_{k=0}^{n}b_{k}z^{k}\) such that \(f_{n}\to f\) and \(g_{n}\to g\) in \(H^{2}\). 
We have already proved above that any element of \(\mathcal{N}\) multiplies \(H^{2}\) into \(\mathcal{M}\), i.e., \(\phi_{1}f,\phi_{2}g\in\mathcal{M}\) with \[\phi_{1}f_{n}\to\phi_{1}f\text{ and }\phi_{2}g_{n}\to\phi_{2}g\text{ in } \mathcal{M}. \tag{4.5}\] Moreover, from the orthogonality of \(\phi_{1}\) and \(\phi_{2}\) in \(\mathcal{M}\) and (4.2), \[\langle\phi_{1}f_{n},\phi_{2}g_{n}\rangle_{\mathcal{M}} =\langle a_{0}\phi_{1}+a_{1}z\phi_{1}+\cdots+a_{n}z^{n}\phi_{1},b _{0}\phi_{2}+b_{1}z\phi_{2}+\cdots+b_{n}z^{n}\phi_{2}\rangle_{\mathcal{M}}\] \[=0.\] Taking the limit \(n\to\infty\), in view of (4.5), we have \[\lim_{n\to\infty}\langle\phi_{1}f_{n},\phi_{2}g_{n}\rangle_{\mathcal{M}}= \langle\phi_{1}f,\phi_{2}g\rangle_{\mathcal{M}}=0.\] Therefore, \(\phi_{1}H^{2}\perp\phi_{2}H^{2}\) in \(\mathcal{M}\). Since \(\mathcal{N}\subset H^{2}\cap H^{\alpha}\), we see that \(\phi_{1}\phi_{2}\in\phi_{1}H^{2}\cap\phi_{2}H^{2}\). This forces \(\phi_{1}\phi_{2}=0\). Therefore, either \(\phi_{1}\equiv 0\) or \(\phi_{2}\equiv 0\). This confirms that \(\dim(\mathcal{N})=1\). Let \(\mathcal{N}=\langle\phi\rangle\) for some unit vector \(\phi\in\mathcal{N}\). Therefore, in view of (4.2), we can write \[\mathcal{M}=\phi H^{2}.\] Further, since \(\|\phi g_{n}\|_{\mathcal{M}}^{2}=\|g_{n}\|_{H^{2}}^{2}\) and \(\phi g_{n}\to\phi g\) in \(\mathcal{M}\), it follows that \(\|\phi g\|_{\mathcal{M}}^{2}=\|g\|_{H^{2}}^{2}\) for all \(g\in H^{2}\). This completes the proof. In the next theorem, we characterize the Hilbert space \(\mathcal{M}\) under the condition that \(H^{\alpha}\) is properly contained in \(H^{2}\). **Theorem 4.2**.: _Let \(\alpha\) be a continuous rotationally symmetric norm such that \(H^{\alpha}\) is properly contained in \(H^{2}\). Suppose \(\mathcal{M}\) is a Hilbert space algebraically contained in \(H^{\alpha}\) such that \(T_{z}\) acts as an isometry on \(\mathcal{M}\) and \(T_{z}\mathcal{M}\subset\mathcal{M}\). Then, \(\mathcal{M}=\{0\}\)._ Proof.: Since \(\mathcal{M}\) satisfies the assumptions of Theorem 4.1, there exists \(\phi\in H^{2}\cap H^{\alpha}\) such that \[\mathcal{M}=\phi H^{2}.\] We claim that \(\phi=0\). Suppose, if possible, that \(\phi\neq 0\). The computations in the proof of Theorem 4.1 show that \(\phi\) multiplies \(H^{2}\) into \(H^{\alpha}\). Also, by the additional condition that \(H^{\alpha}\subset H^{2}\), we have \(\phi H^{2}\subset H^{2}\). This implies that \(\phi\in H^{\infty}\). For a fixed \(n\geq 1\), define \[E_{n}=\left\{e^{i\theta}:|\phi(e^{i\theta})|>\frac{1}{n}\right\}.\] Then \[E_{n}^{c}=\left\{e^{i\theta}:|\phi(e^{i\theta})|\leq\frac{1}{n}\right\}.\] Since \(0\neq\phi\in H^{2}\), \(\phi\) cannot vanish on a set of positive Lebesgue measure; it follows that \(m(\cap_{n=1}^{\infty}E_{n}^{c})=0\). So, \(m(E_{n}^{c})\to 0\) as \(n\to\infty\). Clearly, \(\chi_{E_{n}^{c}}\to 0\)\(a.e.\) and hence \(\chi_{E_{n}}\to 1\)\(a.e.\) Now for any \(\epsilon>0\), we can find \(N_{0}\in\mathbb{N}\) such that \[1-\epsilon\leq\chi_{E_{N_{0}}}\leq 1+\epsilon\ a.e. \tag{4.6}\] Also, since \(H^{\alpha}\subsetneq H^{2}\), there exists \(h\in H^{2}\) with \(h\notin H^{\alpha}\). Define \(g=\chi_{E_{N_{0}}}h\); then \(g\in L^{2}\). We claim that \(g\notin L^{\alpha}\). For if \(g\in L^{\alpha}\), then from (4.6) we have \((1-\epsilon)|h|\leq|\chi_{E_{N_{0}}}h|=|g|\ a.e.\), which forces \(h\in L^{\alpha}\). This is a contradiction. Further, \(\phi g=\chi_{E_{N_{0}}}\phi h\) belongs to \(L^{\alpha}\) because \(\phi h\) belongs to \(H^{\alpha}\). 
Since \((1-\epsilon)|\phi|\ \leq\ |\chi_{E_{N_{0}}}\phi|\ a.e.\), we conclude that \(\chi_{E_{N_{0}}}\phi\) is non-vanishing on \(\mathbb{T}\) except possibly on a set of measure zero. Therefore, \[\left|\frac{1}{\chi_{E_{N_{0}}}\phi}\right| \leq\left|\frac{1}{(1-\epsilon)\phi}\right|\ a.e. \tag{4.7}\] \[\leq\frac{N_{0}}{1-\epsilon}\ a.e.\] In view of the invertibility of \(|\chi_{E_{N_{0}}}\phi|\), we can write \(g=\dfrac{\chi_{E_{N_{0}}}\phi g}{\chi_{E_{N_{0}}}\phi}\), and hence \[|g|\leq\dfrac{N_{0}}{1-\epsilon}|\chi_{E_{N_{0}}}\phi g|\ a.e.\] Therefore, \(\alpha(g)\leq\dfrac{N_{0}}{1-\epsilon}\|\chi_{E_{N_{0}}}\|_{ \infty}\alpha(\phi g)<\infty\) and \(g\in L^{\alpha}\), which contradicts the fact that \(g\notin L^{\alpha}\). This contradiction stems from the assumption that \(\phi\neq 0\). Hence \(\phi=0\) and \(\mathcal{M}=\phi H^{2}=\{0\}\). _Remark 4.3_.: An example of a continuous rotationally symmetric norm \(\alpha\) for which \(H^{\alpha}\) is contained in \(H^{2}\) is provided in [15]. In fact, [15] constructs an example of \(H^{\alpha}\) which is contained in all \(H^{p}\) spaces for \(1\leq p<\infty\). We end this section with a corollary which is an immediate consequence of Theorems 4.1 and 4.2. **Corollary 4.4**.: _([22]) Let \(\mathcal{M}\) be a Hilbert space which is algebraically contained in \(H^{p}\) for any \(1\leq p\leq\infty\). Further assume that the operator \(T_{z}\) acts as an isometry on \(\mathcal{M}\) and \(T_{z}\mathcal{M}\subset\mathcal{M}\). Then_ \[\mathcal{M}=bH^{2}\] _for a unique \(b\): (1) If \(1\leq p\leq 2\), then \(b\in H^{2p/(2-p)}\) and \(\|bf\|_{\mathcal{M}}=\|f\|_{H^{2}}\) for all \(f\in H^{2}\). (2) If \(p>2\), then \(b=0\)._ _Remark 4.5_.: Note that part (2) of Corollary 4.4 is in line with the conclusion of Theorem 4.2.
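As a quick consistency check of the exponent in part (1) of Corollary 4.4 (this computation is ours, not taken from [22]): if \(b\in H^{2p/(2-p)}\) and \(f\in H^{2}\), then since \[\frac{1}{2p/(2-p)}+\frac{1}{2}=\frac{2-p}{2p}+\frac{p}{2p}=\frac{2}{2p}=\frac{1}{p},\] Hölder's inequality gives \(\|bf\|_{p}\leq\|b\|_{2p/(2-p)}\,\|f\|_{2}<\infty\) for \(1\leq p<2\), so \(bH^{2}\) is indeed contained in \(H^{p}\), as the containment \(\mathcal{M}\subset H^{p}\) requires.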
2309.05313
Carrier filtering effect for enhanced thermopower in a body-centered tetragonal ruthenate
Charged carriers in solids diffuse from hot to cold sides under temperature gradient to induce the thermoelectric voltage. Carrier filtering effect, which only passes either electrons or holes for the conduction process, is an efficient method to enhance such voltage, although it is challenging to experimentally realize it especially in conventional metals with weak energy dependence of the density of states near the Fermi level. Here we measure the in-plane and out-of-plane thermopower of the layered perovskite Sr$_2$RuO$_4$ single crystals above room temperature, and find that the out-of-plane thermopower is largely enhanced with increasing temperature, while the in-plane one seems to remain a temperature-independent constant value which is expected from the Heikes formula. The observed large out-of-plane thermopower may originate from the recently proposed intriguing hole filtering effect in the body-centered tetragonal system, in which the carrier hopping through the centered atom is essential. Thus, the present carrier filtering effect may be a universal property to be applicable in various materials belonging to such crystal system.
Ryota Otsuki, Yoshiki J. Sato, Ryuji Okazaki, Tomoya Komine, Ryosuke Kurihara, Hiroshi Yaguchi
2023-09-11T08:57:10Z
http://arxiv.org/abs/2309.05313v2
# Carrier filtering effect for enhanced thermopower in a body-centered tetragonal ruthenate ###### Abstract Charged carriers in solids diffuse from hot to cold sides under a temperature gradient to induce the thermoelectric voltage. The carrier filtering effect, which only passes either electrons or holes in the conduction process, is an efficient method to enhance such a voltage, although it is challenging to realize experimentally, especially in conventional metals with a weak energy dependence of the density of states near the Fermi level. Here we measure the in-plane and out-of-plane thermopower of layered perovskite Sr\({}_{2}\)RuO\({}_{4}\) single crystals above room temperature. We find that the out-of-plane thermopower is largely enhanced with increasing temperature, while the in-plane one seems to remain at a temperature-independent constant value expected from the Heikes formula. The observed large out-of-plane thermopower may originate from the recently proposed intriguing hole filtering effect in the body-centered tetragonal system, in which the carrier hopping through the centered atom is essential. Thus, the present carrier filtering effect may be a universal property applicable to various materials belonging to such a crystal system. ## I Introduction Thermoelectricity is a solid-state property to convert the heat current into a charge current, or vice versa, attracting great attention as an environmentally friendly energy conversion technology [1; 2; 3]. From a fundamental point of view, the thermopower \(S\), which is the proportionality coefficient between the electric field \(\mathbf{E}\) and the temperature gradient \(\nabla T\) as \(\mathbf{E}=S\,\nabla T\), serves as an intriguing measure of how large the electron-hole asymmetry is in materials [4]; under a temperature gradient, the carrier diffusion from the high- to the low-temperature side induces the thermoelectric voltage, the magnitude of which is, however, significantly cancelled owing to the opposite polarities of the thermally excited electrons and holes. To enhance the thermopower, it is thus important to introduce a sort of asymmetry for the charged carriers: for example, an energy barrier picture to pass only unipolar carriers has been suggested in superlattice structures based on the thermionic emission process [5; 6]. In the reciprocal space, a peculiar pudding-mold band shape to explain the large thermopower in the cobalt oxides [7; 8; 9] is a kind of carrier filtering because it utilizes the large asymmetry in the carrier velocities to increase the thermopower and the electrical conductivity simultaneously. Low-dimensional materials with a highly asymmetric energy dependence of the density of states (DOS) are also promising for enhancing the thermoelectric efficiency [10], as has been experimentally demonstrated [11; 12; 13]. In a real-space picture, a spin-blockade hopping in several cobalt oxides also exemplifies the asymmetry in the thermoelectric transport [14; 15; 16; 17; 18; 19; 20]. Moreover, the electron-hole asymmetry in the scattering rate is also crucial for the thermopower [21; 22]. For such carrier filtering effects, Mravlje and Georges have recently suggested a unique filtering mechanism based on the band structure in the body-centered tetragonal (bct) crystal system [Fig. 1(a)], such as the layered ruthenate [23]. 
In a simple tight-binding picture, the energy dispersion of the bct lattice in such layered materials is given as \(\varepsilon(\mathbf{k})=\varepsilon_{1}(\mathbf{k})+\varepsilon_{2}(\mathbf{k})\), where \(\varepsilon_{1}(\mathbf{k})=t_{1}(\cos k_{x}+\cos k_{y})\) (\(t_{1}<0\)) is the large in-plane hopping term and \(\varepsilon_{2}(\mathbf{k})=t_{2}\cos(k_{x}/2)\cos(k_{y}/2)\cos(k_{z}/2)\) (\(t_{2}<0\), \(|t_{2}|<|t_{1}|\)) represents the out-of-plane hopping one [Fig. 1(b)]. In this case, owing to the in-plane transfer integral \(t_{1}\), the energies along the \(\Gamma\)-Z line (\(0,0,k_{z}\)) and the X-X' line (\(\pm\pi,\pm\pi,k_{z}\)) take large negative and positive values, respectively. The out-of-plane velocity along the \(\Gamma\)-Z line then becomes finite because of the dispersion \(\varepsilon(0,0,k_{z})=2t_{1}+t_{2}\cos(k_{z}/2)\), while it becomes zero along the X-X' line since \(\varepsilon(\pm\pi,\pm\pi,k_{z})=-2t_{1}\). Therefore, the difference in the out-of-plane velocity between the \(\Gamma\)-Z and the X-X' lines becomes significant. This leads to a prominent electron-hole asymmetry when the chemical potential \(\mu\) lies near the middle of the band, and the carrier filtering mechanism operates at high temperatures \(T\sim|t_{1}|/k_{\text{B}}\), where the thermal energy \(k_{\text{B}}T\) is a substantial fraction of the bandwidth \(|t_{1}|\). A subsequently large thermopower is expected in such a high-temperature range. Note that the situation is distinct from that of the primitive tetragonal structure with the out-of-plane hopping term of \(\varepsilon_{2^{\prime}}(\mathbf{k})=t_{2}^{\prime}\cos(k_{x})\cos(k_{y})\cos(k_{z})\) [Figs. 1(c) and 1(d)]. Figure 1: Concept of the carrier filtering in the body-centered tetragonal lattice. (a) Body-centered tetragonal lattice. (b) Tight-binding band dispersion for (a). The inset depicts the Brillouin zone, and the notations in Ref. [39] are used. The out-of-plane velocity appears along the \(\Gamma\)-Z line while it vanishes along the X-X’ line. The point X’ denotes the X point in the adjacent Brillouin zone. (c) Primitive tetragonal lattice. (d) Tight-binding band dispersion for (c). The out-of-plane velocity emerges in both the \(\Gamma\)-Z and the X-X’ lines. The aim of this paper is to experimentally examine such a carrier filtering effect in the proposed material Sr\({}_{2}\)RuO\({}_{4}\). The layered perovskite oxide Sr\({}_{2}\)RuO\({}_{4}\) with the bct lattice (space group \(I4/mmm\)) has been extensively studied as a model material to examine the unconventional superconductivity in two dimensions [24; 25; 26; 27; 28; 29], while the pairing mechanism of the superconducting state is still an open issue [30; 31; 32; 33; 34; 35]. Since the electronic structure and the quasi-two-dimensional (q-2D) Fermi surfaces of this material have been well studied both theoretically [36; 37; 38; 39; 40] and experimentally [41; 42; 43; 44; 45], this material is indeed a minimal model for the examination of such a filtering effect, whereas the thermopower measurements on single-crystalline samples of Sr\({}_{2}\)RuO\({}_{4}\) have been limited to the low-temperature range [46; 47; 48]. In the present study, we measure the in-plane and out-of-plane thermopower of Sr\({}_{2}\)RuO\({}_{4}\) single crystals at high temperatures. The in-plane thermopower is well described within the Heikes formula that has been widely used to explain the high-temperature thermopower of correlated oxides. 
On the other hand, the out-of-plane thermopower increases with increasing temperature and exceeds the value expected from the Heikes formula. These results indicate that the suggested carrier filtering effect is realized in the present bct system. Although the out-of-plane resistivity is relatively large in Sr\({}_{2}\)RuO\({}_{4}\)[49], this filtering mechanism is generic to the bct lattice and acts at high temperatures, offering a unique class of efficient thermoelectric materials. ## II Methods Single crystals of Sr\({}_{2}\)RuO\({}_{4}\) were grown by a floating-zone method [50]. We used the cleaved single crystals for the in-plane thermopower measurements, a photograph of which is shown in Fig. 2(a). For the out-of-plane experiments, the crystal was cut using an electrical discharge machine to obtain a crystal elongated along the \(c\)-axis direction [Fig. 2(b)] [51]. The thermopower was measured by a steady-state technique using two platinum resistance thermometers in a tube furnace [52; 53; 54]. The thermoelectric voltage of the crystal was measured with a Keithley 2182A nanovoltmeter. A temperature gradient of about 0.5 K/mm was applied using a resistive heater. The thermoelectric voltage from the wire leads (platinum wires) was subtracted. In order to verify how similar the band structure of Sr\({}_{2}\)RuO\({}_{4}\) is to the band dispersion of the simple tight-binding model shown in Fig. 1(b), we also performed first-principles calculations based on density functional theory (DFT) using Quantum Espresso [55; 56; 57]. We used the projector-augmented-wave pseudopotentials with the Perdew-Burke-Ernzerhof generalized-gradient-approximation (PBE-GGA) exchange-correlation functional. The cut-off energies for plane waves and charge densities were set to 70 and 560 Ry, respectively, and the \(k\)-point mesh was set to a \(20\times 20\times 20\) uniform grid to ensure convergence. We used the on-site Coulomb energy \(U=3.5\) eV and exchange parameter \(J=0.6\) eV for Ru ions [58] and performed fully relativistic calculations with spin-orbit coupling (DFT+\(U\)+SOC). The present calculations are not spin-polarized. To examine the filtering effect, we also calculated the thermopower based on the linearized Boltzmann equations under the constant relaxation time approximation using the BoltzWann module [59]. From the obtained eigenvalues \(E_{n,\mathbf{k}}\) of the \(n\)-th band at the \(\mathbf{k}\) point, the transport function tensor \(L_{ii}(\varepsilon)\) is calculated as \[L_{ii}(\varepsilon)=\sum_{n,\mathbf{k}}v_{i}^{2}\tau\delta(\varepsilon-E_{n,\mathbf{k} }), \tag{1}\] where \(v_{i}\) is the \(i\)-th component of the group velocity \(\mathbf{v}=\frac{1}{\hbar}\nabla_{\mathbf{k}}E_{n,\mathbf{k}}\), \(\tau\) (\(=10^{-14}\) s) is the relaxation time, and \(\delta\) is the delta function [59; 60]. Here we consider the diagonal components \(ii=xx\) and \(zz\). We then calculated the electrical conductivity tensor \(\sigma_{ii}(\mu)=e^{2}\int_{-\infty}^{\infty}d\varepsilon\left(-\frac{\partial f_{0}}{\partial\varepsilon}\right)L_{ii}(\varepsilon)\), where \(e\) is the elementary charge and \(f_{0}\) is the Fermi-Dirac distribution function for the chemical potential \(\mu\) and temperature \(T\). Similarly, the Peltier conductivity tensor is calculated as \(P_{ii}(\mu)=-\frac{e}{T}\int_{-\infty}^{\infty}d\varepsilon\left(- \frac{\partial f_{0}}{\partial\varepsilon}\right)(\varepsilon-\mu)L_{ii}(\varepsilon)\), and the thermopower \(S_{ii}\) is obtained as \(S_{ii}=P_{ii}/\sigma_{ii}\). 
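To make the velocity argument of Fig. 1(b) and the Boltzmann expressions above concrete, here is a minimal numerical sketch (ours, not the paper's actual DFT/BoltzWann workflow; the hopping values, grid size, and chemical potential are illustrative assumptions):

```python
import numpy as np

# bct tight-binding band of Fig. 1(b); t1, t2 (< 0, |t2| < |t1|) are illustrative
t1, t2 = -1.0, -0.3                      # hopping amplitudes (eV), assumed
kB = 8.617e-5                            # Boltzmann constant (eV/K)
tau = 1e-14                              # constant relaxation time (s), as in Eq. (1)

n = 64
kx = ky = np.linspace(-np.pi, np.pi, n, endpoint=False)
kz = np.linspace(-2 * np.pi, 2 * np.pi, n, endpoint=False)   # cos(kz/2) has period 4*pi
KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")

E = t1 * (np.cos(KX) + np.cos(KY)) + t2 * np.cos(KX / 2) * np.cos(KY / 2) * np.cos(KZ / 2)
vz = -0.5 * t2 * np.cos(KX / 2) * np.cos(KY / 2) * np.sin(KZ / 2)  # dE/dkz (hbar = a = 1)

# transport function L_zz(eps) = sum_k vz^2 tau delta(eps - E_k), binned as a histogram
w, edges = np.histogram(E.ravel(), bins=200, weights=(vz**2).ravel() * tau)
eps = 0.5 * (edges[:-1] + edges[1:])
Lzz = w / np.diff(edges)

def S_zz(mu, T):
    """Thermopower S = P/sigma with the Fermi window -df0/deps (micro-V/K)."""
    x = (eps - mu) / (kB * T)
    fw = 0.25 / (kB * T) / np.cosh(x / 2.0) ** 2          # -df0/deps
    sigma = np.trapz(fw * Lzz, eps)                       # up to e^2
    P = -np.trapz(fw * (eps - mu) * Lzz, eps) / T         # up to e
    return (P / sigma) * 1e6                              # eV/(e K) = V/K -> micro-V/K

for T in (300, 600, 900):
    print(T, "K:", round(S_zz(mu=0.0, T=T), 1), "micro-V/K")
```

The qualitative point of the sketch is that \(v_{z}\) vanishes where \(\varepsilon\) is high (electron-like states near X-X') and stays finite where \(\varepsilon\) is low (hole-like states near \(\Gamma\)-Z), so \(L_{zz}(\varepsilon)\) is strongly asymmetric and the resulting \(S_{zz}\) is positive and grows with temperature.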
## III Results and discussion ### In-plane thermopower Figure 2(c) shows the temperature dependence of the in-plane (blue) and out-of-plane (red) thermopower. We first discuss the in-plane thermopower behavior. The room-temperature value of \(S\sim 25\)\(\mu\)V/K agrees well with the earlier reports [46; 47; 48], in which a sample dependence was also observed in the magnitude of the thermopower. Above room temperature, the in-plane thermopower exhibits a weak temperature dependence, which is similar to the thermopower measured in polycrystalline samples [61]. This in-plane data may be described by the Heikes formula for the mixed three valence states (atomic states with \(N-1\), \(N\), and \(N+1\) electrons) [23] with the quenched orbital degeneracy due to the dominant Hund's coupling [62; 63]: \[S=\frac{k_{\text{B}}}{2e}\ln\frac{g_{N-1}}{g_{N+1}}, \tag{2}\] where \(k_{\text{B}}\) is the Boltzmann constant and \(g_{i}\) is the spin degeneracy of the atomic state with \(i\) electrons. For Sr\({}_{2}\)RuO\({}_{4}\) (\(N=4\)), the Heikes thermopower is estimated as \(S=k_{\text{B}}/(2e)\ln(g_{3}/g_{5})=k_{\text{B}}/(2e)\ln 2\approx 30\)\(\mu\)V/K (using \(k_{\text{B}}/e\approx 86.2\)\(\mu\)V/K), which is reasonably close to the experimental values. Note that such a concentration-independent formula may also be applicable to the high-temperature thermopower in the Mn oxides [64]. Figure 3 summarizes the room-temperature thermopower data in various ruthenium oxides as a function of the formal valence of Ru ions. Note that we here focus on the in-plane data; the large out-of-plane thermopower at high temperatures will be discussed in the following section. The thermopower in these ruthenium oxides is almost temperature-independent near room temperature [65; 66], similar to that of the present Sr\({}_{2}\)RuO\({}_{4}\) for the in-plane direction. In Fig. 3, the data seem to cluster at the valence state of Ru\({}^{4+}\) with the thermopower value of \(S=30\)\(\mu\)V/K as mentioned before. In addition, a trend of negative correlation between the thermopower value and the valence is observed. However, it is difficult to express the overall behavior by the Heikes formula: Eq. (2) is obtained by considering the mixed three valence states and is valid only for \(N=4\), whereas the extended Heikes formula for mixed two valence states shows a divergent behavior of the thermopower close to an integer valence state [67; 68; 69]. For instance, in the valence range above 4 (mean number of electrons \(n<4\)), the Heikes formula becomes \[S=-\frac{k_{\rm B}}{e}\ln\left(\frac{g_{4}}{g_{3}}\frac{4-n}{n-3}\right), \tag{3}\] which diverges for integer values of \(n\). Note that a mixed state with Ru\({}^{4+}\) (\(N=4\)) and Ru\({}^{5+}\) (\(N=3\)) ions is considered in Eq. (3), in contrast to the situation for Eq. (2) where three valence states are adopted. In Fig. 3, we instead note the expected thermopower of \((-k_{\rm B}/e)\ln(g_{4}/g_{3})\approx 25\)\(\mu\)V/K for the formal valence of Ru\({}^{4.5+}\) (\(n=3.5\)) and \((-k_{\rm B}/e)\ln(g_{5}/g_{4})\approx 35\)\(\mu\)V/K for Ru\({}^{3.5+}\) (\(n=4.5\)). The data seem to lie between these values, and it remains a future study to elucidate the exact formula describing the thermopower in the whole valence range. ### Out-of-plane thermopower Then we focus on the out-of-plane thermopower, which increases with increasing temperature (Fig. 2) and well exceeds the above-mentioned Heikes value of 30 \(\mu\)V/K (Fig. 3). 
This behavior is highly distinct from the conventional trend: as seen in the Heikes formula, the high-temperature thermopower should be expressed as a thermodynamic quantity with no directional anisotropy. Indeed, for the layered cobalt oxides, the in-plane anisotropy in the thermopower gradually diminishes at higher temperatures [53]. Thus, it is clear that the Heikes picture is not applicable to the out-of-plane thermopower in the high-temperature range in Sr\({}_{2}\)RuO\({}_{4}\). Rather, such a temperature dependence is well consistent with the theoretical estimation that considers the unique carrier filtering effect based on the bct lattice system [23]. To examine such a filtering effect for the out-of-plane thermopower, we calculate the band structure along the peculiar \(k_{z}\) line in the DFT(GGA)+\(U\)+SOC scheme, since (i) the bandwidth may be corrected by the correlation term \(U\) and (ii) the inclusion of SOC may seriously affect the dispersion along the \(k_{z}\) direction [39], on which we focus here. Figure 2: (a,b) Photograph of the single-crystalline Sr\({}_{2}\)RuO\({}_{4}\) samples for the (a) in-plane and (b) out-of-plane thermopower measurements. (c) Temperature dependence of the thermopower \(S\) measured for the in-plane (blue) and out-of-plane (red) directions. The dashed line indicates a value expected from the Heikes formula at high temperatures [23]. Figure 3: Thermopower in various itinerant ruthenium oxides as a function of the formal valence of Ru ions. The data were measured at room temperature except for the out-of-plane data of Sr\({}_{2}\)RuO\({}_{4}\). Data in the earlier reports are taken from Ref. [65]. Figures 4(a) and 4(b) show the calculated electronic band structure near the Fermi energy \(E_{\rm F}\), which coincides well with the results in earlier studies [36; 37; 38; 39]. High-symmetry points and the \(k\) path are shown in Fig. 4(d). The solid curves represent the DFT+\(U\)+SOC results, while the dashed curves are the conventional GGA results. The \(\beta\) and \(\gamma\) bands at the high-symmetry points \(\Gamma\) and Z split owing to the SOC [39], and the upper \(e_{\rm g}\) bands are shifted upward due to the on-site \(U\). Importantly, the \(\beta\) and \(\gamma\) bands along the Z-\(\Gamma\)-X-X' line are well reproduced by the tight-binding picture shown in Fig. 1(b): the low-energy holes around \(E-E_{\rm F}\approx-0.7\) eV possess a finite velocity along the Z-\(\Gamma\) line. In contrast, the \(c\)-axis velocity of the high-energy electrons around \(E-E_{\rm F}\approx+0.5\) eV for the \(\gamma\) band and \(\approx+0.8\) eV for the \(\beta\) band becomes almost zero due to the flat dispersion along the X-X' line. No \(k_{z}\) dispersion is observed at the edge of the Brillouin zone along the M-M' line either [Fig. 4(b)]. Figure 4(c) shows the in-plane and out-of-plane transport functions \(L_{ii}\) (\(ii=xx,zz\)) displayed in the same energy range as in Figs. 4(a) and 4(b). Corresponding to the band structure shown in Fig. 4(a), a significant peak structure of \(L_{zz}\) is found near \(E-E_{\rm F}\approx-0.7\) eV, in contrast to the negligibly small \(L_{zz}\) for \(E>E_{\rm F}\). This prominent asymmetry in \(L_{zz}\), owing to the aforementioned large hole and small electron velocities, may lead to the enhanced positive thermopower at high temperatures. 
Note that although the \(L_{zz}\) peak is far from the Fermi energy (\(E-E_{\rm F}\approx-0.7\) eV) compared to the thermal energy \(k_{\rm B}T\) of the present measurement range, this energy difference should become smaller by considering the scattering of correlated carriers accurately [23]. In contrast to highly asymmetric out-of-plane transport function, the in-plane one \(L_{\rm xx}\) in Fig. 4(c) seems symmetric around \(E_{\rm F}\). The thermopower data calculated with the different schemes are shown in Fig. 4(e). Although the experimental data are not reproduced well due to the absence of mass renormalizations from the dynamical real-part of the self-energy [23], which is not included in the present DFT calculation, the enhanced out-of-plane thermopower \(S_{zz}\) is obtained owing to the carrier filtering effect. Figure 4: (a,b) Calculated band structure with DFT + \(U\) + SOC scheme (solid curves) along (a) the Z-\(\Gamma\)-X-X’ and (b) the \(\Gamma\)-M-M’ lines. The dashed curves represent the results of scalar relativistic calculations and \(U\) is not included. The \(\alpha\) (red), \(\beta\) (blue), and \(\gamma\) (green) bands cross the Fermi energy \(E_{\rm F}\) indicated by the solid line. (c) The in-plane and out-of-plane transport functions \(L_{ii}\) (\(ii=xx,zz\)). The solid and dashed curves represent the DFT(GGA) + \(U\) + SOC and the DFT calculations, respectively. (d) High-symmetry points and the \(k\) path for the band structure in the panels (a) and (b). The points X’ and M’ locate in the adjacent Brillouin zone. (e) The calculated thermopower \(S_{ii}\) for both directions as a function of temperature. The solid symbols are experimental results. The solid and dashed curves represent the DFT+ \(U\) + SOC and the DFT calculations, respectively. Figure 5: (a) Calculated band structure of Sr\({}_{2}\)MoO\({}_{4}\) with GGA scheme along the Z-\(\Gamma\)-X-X’ line. (b) The in-plane and out-of-plane transport functions \(L_{ii}\) (\(ii=xx,zz\)). (c) The calculated thermopower \(S_{ii}\) (\(ii=xx,zz\)) as a function of temperature. Interestingly, the out-of-plane thermopower calculated with DFT+\(U\)+SOC results is much larger than that calculated with the DFT data. This difference originates from the upper shift of the \(\beta\) band due to the SOC [39], and the \(L_{zz}\) peak near \(E-E_{\rm F}\approx-0.7\) eV is slightly shifted to higher energy in the DFT+\(U\)+SOC results [Fig. 4(c)]. These results thus indicate that the SOC may affect the out-of-plane thermopower significantly. Nevertheless, the calculated thermopower \(S_{zz}\) from the DFT+\(U\)+SOC results is much larger than the experimental data of the out-of-plane thermopower. This discrepancy may come from the absence of the self-energy analysis as described before. We also mention other mechanisms to enhance the high-temperature thermopower. Since this material shows an interesting bad-metallic transport at high temperatures [49], the nature of the incoherent charge transport in such regime is of interest [70]. In particular, the thermopower may be enhanced in the bad-metallic state near the Mott insular phase [71], whereas both in-plane and out-of-plane thermopower may be increased in such a case, which differs from the present results. To explore the present carrier filtering effect in the bct lattice, we have calculated the electronic structure and the transport function for the related layered oxide Sr\({}_{2}\)MoO\({}_{4}\)[72]. The calculation here is performed within the GGA scheme. 
Figures 5(a) and 5(b) show the band structure, which reproduces the earlier studies [73; 63] well, and the transport functions \(L_{ii}\) for Sr\({}_{2}\)MoO\({}_{4}\), respectively. Similar to the results for Sr\({}_{2}\)RuO\({}_{4}\), a large difference in the out-of-plane velocities between the Z-\(\Gamma\) and the X-X' lines is observed for Sr\({}_{2}\)MoO\({}_{4}\), while a small \(k_{z}\) dependence is also observed at around \(E-E_{\rm F}\sim 1.4\) eV. Nevertheless, the out-of-plane transport function \(L_{zz}\) is prominently asymmetric around \(E_{\rm F}\) and exhibits a peak at around \(E-E_{\rm F}\sim-0.3\) eV, compared to the relatively weak energy dependence of the in-plane transport function \(L_{xx}\). Figure 5(c) shows the calculated thermopower for Sr\({}_{2}\)MoO\({}_{4}\). Indeed, the out-of-plane thermopower \(S_{zz}\) shows relatively large, positive values owing to the present hole filtering effect. In contrast, the in-plane thermopower \(S_{xx}\) becomes negative because the electron number of the Mo ion is smaller than that of the Ru ion. Thus, Sr\({}_{2}\)MoO\({}_{4}\) may also be interesting as a candidate for an oxide goniopolar material with axis-dependent polarity [74]. These results imply that the present carrier filtering may be applicable to a wide range of bct lattices, such as the K\({}_{2}\)NiF\({}_{4}\)-type or related structures [75; 76; 77; 78], offering an interesting strategy toward efficient thermoelectrics. _Note added._ In the final stage of completion of this paper, we became aware of the preprint of Daou et al. [79], which reports similar results on the anisotropic thermopower of Sr\({}_{2}\)RuO\({}_{4}\) single crystals at high temperatures. ## IV Summary In summary, we have measured the high-temperature thermopower of Sr\({}_{2}\)RuO\({}_{4}\) single crystals for both the in-plane and out-of-plane directions. We find that the in-plane thermopower exhibits a weak temperature dependence, which may be understood within the Heikes formula including the three different valence states. Interestingly, the out-of-plane thermopower well exceeds the value expected from the Heikes formula and increases with increasing temperature. Such behavior may originate from the theoretically suggested carrier filtering effect, which may be widely applicable to various potential thermoelectrics with the body-centered tetragonal lattice system. ###### Acknowledgements. We thank R. Nishinakayama, H. Shiina, and R. Taira for their assistance. We thank the machine shop in the Department of Mechanical and Aerospace Engineering, Tokyo University of Science, for the use of the electrical discharge machine. This work was partly supported by JSPS KAKENHI Grant No. 17H06136 and No. 22H01166.
2309.12928
BayesDLL: Bayesian Deep Learning Library
We release a new Bayesian neural network library for PyTorch for large-scale deep networks. Our library implements mainstream approximate Bayesian inference algorithms: variational inference, MC-dropout, stochastic-gradient MCMC, and Laplace approximation. The main differences from other existing Bayesian neural network libraries are as follows: 1) Our library can deal with very large-scale deep networks including Vision Transformers (ViTs). 2) We need virtually zero code modifications for users (e.g., the backbone network definition codes do not need to be modified at all). 3) Our library also allows the pre-trained model weights to serve as a prior mean, which is very useful for performing Bayesian inference with the large-scale foundation models like ViTs that are hard to optimise from scratch with the downstream data alone. Our code is publicly available at: \url{https://github.com/SamsungLabs/BayesDLL}\footnote{A mirror repository is also available at: \url{https://github.com/minyoungkim21/BayesDLL}.}.
Minyoung Kim, Timothy Hospedales
2023-09-22T15:27:54Z
http://arxiv.org/abs/2309.12928v1
# BayesDLL: Bayesian Deep Learning Library ###### Abstract We release a new Bayesian neural network library for PyTorch for large-scale deep networks. Our library implements mainstream approximate Bayesian inference algorithms: variational inference, MC-dropout, stochastic-gradient MCMC, and Laplace approximation. The main differences from other existing Bayesian neural network libraries are as follows: 1) Our library can deal with very large-scale deep networks including Vision Transformers (ViTs). 2) We need virtually zero code modifications for users (e.g., the backbone network definition codes do not need to be modified at all). 3) Our library also allows the pre-trained model weights to serve as a prior mean, which is very useful for performing Bayesian inference with the large-scale foundation models like ViTs that are hard to optimise from scratch with the downstream data alone. Our code is publicly available at: [https://github.com/SamsungLabs/BayesDLL1](https://github.com/SamsungLabs/BayesDLL1). Footnote 1: A mirror repository is also available at: [https://github.com/minyoungkim21/BayesDLL](https://github.com/minyoungkim21/BayesDLL). ## 1 Bayesian Neural Networks: Overview The following is the list of approximate Bayesian inference algorithms implemented in the library: * Variational Inference (Sec. 2.1) * MC-Dropout (Sec. 2.2) * SG-MCMC (SGLD) (Sec. 2.3) * Laplace Approximation (Sec. 2.4) The Bayesian neural network (BNN) is a Bayesian model where we treat the parameters of the deep neural network (e.g., weights and biases) as _random variables_ that are endowed with some distribution a priori (the _prior distribution_). Like other Bayesian models, there is a likelihood model that assigns a compatibility score to the observation given the network parameters. Formally we use the following notations: * \(\theta=\text{Network parameters (weights $\&$ biases) of the underlying deep model}\). * \(\overline{\theta}=\text{The most reasonable parameter values before observing any evidence}\). In typical situations, we can have \(\overline{\theta}=0\), meaning that we have no prior information, or \(\overline{\theta}\) can take _pre-trained_ model parameters on some base datasets, often called the upstream datasets. For simplicity, we assume a Gaussian prior model, in which case \(\overline{\theta}\) becomes the prior mean. More specifically, the prior distribution is written as: \[p(\theta)=\mathcal{N}(\theta;\overline{\theta},\sigma^{2}I),\] (1) where \(\sigma^{2}\) is the prior variance (isotropic Gaussian) chosen by the users. Obviously, the prior mean \(\overline{\theta}\) and variance \(\sigma^{2}\) are fixed constants. * \(D=\) Given evidence. Typically \(D\) is a supervised dataset (\(D=\{(x,y)\}\) where \(x\in\mathcal{X}\) is the input and \(y\in\mathcal{Y}\) is the target label, either class-valued or real-valued). As is conventional practice, we have i.i.d. samples \((x,y)\in D\), which form the likelihood model, \[p(D|\theta)=\prod_{(x,y)\in D}p(y|x,\theta),\quad p(y|x,\theta)\propto\exp(-l(x,y;\theta)/\tau),\] (2) where \(l(x,y;\theta)=l(y,f_{\theta}(x))\) is the conventional deep learning loss (e.g., cross entropy or \(L_{2}\) distance), \(f_{\theta}(x)\!\in\!\mathcal{Y}\) is the prediction of the deep network with parameters \(\theta\) and input \(x\), and \(\tau\) is the scaling hyperparameter (e.g., temperature for cross entropy in classification cases or the variance of the output noise in regression cases). 
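To make Eqs. (1)-(2) concrete, the following is a minimal PyTorch sketch (ours, not BayesDLL's internal API) of the two ingredients of the unnormalized log posterior, with a pre-trained network optionally supplying the prior mean \(\overline{\theta}\):

```python
import torch
import torch.nn.functional as F

def log_prior(net, prior_mean, prior_sig=1.0):
    # log N(theta; theta_bar, sig^2 I) up to an additive constant, cf. Eq. (1)
    lp = 0.0
    for name, p in net.named_parameters():
        lp = lp - 0.5 * ((p - prior_mean[name]) ** 2).sum() / prior_sig**2
    return lp

def log_likelihood(net, x, y, tau=1.0):
    # log p(y|x,theta) for classification: negative cross entropy scaled by tau, cf. Eq. (2)
    return -F.cross_entropy(net(x), y, reduction="sum") / tau

# the prior mean can be zero tensors or pre-trained ("upstream") weights, e.g.:
# prior_mean = {n: p.detach().clone() for n, p in pretrained_net.named_parameters()}
# unnormalized log posterior on data (x, y): log_prior(...) + log_likelihood(...)
```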
**Neural Network Learning** The main task is _posterior inference_, the task of inferring the posterior distribution \(p(\theta|D)\) of the weights given the evidence \(D\). That is, \[p(\theta|D)=\frac{p(\theta)\,p(D|\theta)}{\int p(\theta)\,p(D|\theta)\,d \theta}. \tag{3}\] The denominator does not in general admit a closed-form expression; it is even infeasible to evaluate it exactly. Thus one has to resort to approximation, and several well-known approximate inference algorithms are listed in the beginning of this section, detailed in the next section, and implemented in our BayesDLL library. **Neural Network Inference** Once the posterior inference is done, at test time we can use the posterior to derive the _test predictive distribution_ \(p(y^{*}|x^{*},D)\), where \(x^{*}\) is the test input. In principle, \[p(y^{*}|x^{*},D)=\int p(y^{*}|x^{*},\theta)\;p(\theta|D)\;d\theta. \tag{4}\] However, the integration in (4) is in general intractable to compute exactly. Instead one can approximate it by Monte Carlo estimation. If we have a finite number (\(S\)) of samples from the posterior, then the posterior predictive distribution can be approximated as: \[p(y^{*}|x^{*},D)\approx\frac{1}{S}\sum_{i=1}^{S}p(y^{*}|x^{*},\theta^{(i)}), \;\;\text{where}\;\;\theta^{(i)}\!\sim\!p(\theta|D),\;\;i=1,\dots,S. \tag{5}\] **BayesDLL Usages (Pseudocodes)** The above two steps are implemented in our BayesDLL. For the four inference methods to be described in the next section, we highlight the pseudocodes in Fig. 1, which show how to use BayesDLL to do posterior inference and test prediction. ## 2 Approximate Inference Algorithms ### 2.1 Variational Inference (aka Bayes-by-Backprop [1]) In the variational inference we typically adopt the following Gaussian2 densities for both the prior and the variational posterior: Footnote 2: Perhaps this assumption/restriction of the tractable density family is one of the main caveats of the variational inference. On the other hand, the MCMC algorithms (e.g., SGLD in Sec. 2.3) do not require such an assumption, thus being highly flexible. The only requirement for SGLD is that we can easily (e.g., analytically) compute the gradients of the log-prior \(\log p(\theta)\) and the log-likelihood \(\log p(y|x,\theta)\) with respect to \(\theta\). See Sec. 2.3 for details. * **Prior:**\(p(\theta)=\mathcal{N}(\theta;\overline{\theta},\sigma^{2}I)\). * **Variational posterior:**\(q(\theta)=\mathcal{N}(\theta;m,s^{2})\), where \(s\) is the vector of standard deviations (of the same shape as \(\theta\)), embedded in a diagonal matrix (and \(s^{2}\) means elementwise squaring). Both \(m\) and \(s\) are the variational parameters to be estimated. Due to the positivity constraint for \(s\), we consider a positive linking \(s=g(\tilde{s})\), where \(\tilde{s}\) is a free (unconstrained) optimization variable, and \(g(\cdot)\) can typically be the exponential function (\(s=\exp(\tilde{s})\)), the soft-plus function (\(s=\log(1+\exp(\tilde{s}))\)), or a simple hinge function (\(s=\max(\tilde{s},s_{\min})\), where \(s_{\min}\) is a small positive constant such as \(10^{-8}\)). 
```python
net = Network()  # define your neural network (torch.nn.Module)
inference_method = 'vi'  # 'mc_dropout', 'sgld', 'la'

# .... Variational Inference .... #
if inference_method == 'vi':
    model = vi.Model(net, ND=1008, prior_sig=1.0, kld=1.0)
    # ND = training data size, prior_sig = prior's sigma,
    # kld = KL discount factor

    # training
    optim = torch.optim.SGD(model)  # optimizer for the posterior model
    vi.train(model, optim, train_data_loader)

    # test prediction with calibration scores reported
    vi.evaluate(model, test_data_loader, nst=5)
    # nst = number of MC samples from the posterior (test predictive distribution)
    calibration.analyze(model, test_data_loader, num_bins=20, temperature=1)
    # calibration (ECE, MCE, NLL, reliability plot)

# .... MC-Dropout .... #
elif inference_method == 'mc_dropout':
    model = mc_dropout.Model(net, ND=1008, prior_sig=1.0, pdrop=0.1, kld=1.0)
    # pdrop = dropout prob

    # training
    optim = torch.optim.SGD(model)  # optimizer for the posterior model
    mc_dropout.train(model, optim, train_data_loader)

    # test prediction with calibration scores reported
    mc_dropout.evaluate(model, test_data_loader, nst=5)
    calibration.analyze(model, test_data_loader, num_bins=20, temperature=1)

# .... SGLD .... #
elif inference_method == 'sgld':
    model = sgld.Model(net, ND=1008, prior_sig=1.0, Ninflate=1.0, nd=1.0, burnin=5, thin=10)
    # Ninflate = data inflation factor, nd = noise discount factor,
    # burnin = number of burn-in epochs, thin = number of thinning steps

    # training
    optim = torch.optim.SGD(model)  # optimizer for the posterior model
    sgld.train(model, optim, train_data_loader)

    # test prediction with calibration scores reported
    sgld.evaluate(model, test_data_loader, nst=5)
    calibration.analyze(model, test_data_loader, num_bins=20, temperature=1)

# .... Laplace Approximation .... #
elif inference_method == 'la':
    model = la.Model(net, ND=1008, prior_sig=1.0, Ninflate=1.0)

    # training
    optim = torch.optim.SGD(model)  # optimizer for the posterior model
    post_var = la.train(model, optim, train_data_loader)
    # find the MAP first; then compute the posterior variance

    # test prediction with calibration scores reported
    la.evaluate(model, post_var, test_data_loader, nst=5)
    calibration.analyze(model, test_data_loader, num_bins=20, temperature=1)
```

Figure 1: BayesDLL usage pseudocodes for the four inference methods. For each method, we have pseudocodes showing how to perform posterior inference (training) and how to obtain a test predictive distribution (test evaluation).

The negative ELBO loss function, in the data-size-normalized and unbiased minibatch stochastic estimate version, can be written as (here, \(B\) denotes a minibatch): \[\text{loss}\ =\ \frac{1}{|B|}\sum_{(x,y)\in B}l(x,y;\theta\!=\!m\!+\!\epsilon \odot s)\ +\ \frac{1}{|D|}\sum_{i}\frac{1}{2}\bigg{(}\log\frac{\sigma^{2}}{s_{i}^{2}}-1+ \frac{s_{i}^{2}}{\sigma^{2}}+\frac{(m_{i}-\overline{\theta}_{i})^{2}}{\sigma^ {2}}\bigg{)}, \tag{6}\] where \(\epsilon\sim\mathcal{N}(0,I)\) and \(\odot\) is the elementwise product. The loss gradient can be easily derived using the chain rule: \[\frac{\partial\text{loss}}{\partial m} =\ \frac{1}{|B|}\sum_{(x,y)\in B}\frac{\partial l(x,y;\theta)}{ \partial\theta}\ +\ \frac{1}{\sigma^{2}|D|}(m-\overline{\theta}), \tag{7}\] \[\frac{\partial\text{loss}}{\partial\tilde{s}} =\ \frac{1}{|B|}\sum_{(x,y)\in B}\frac{\partial l(x,y;\theta)}{ \partial\theta}\odot\epsilon\odot s\ +\ \frac{1}{|D|}\bigg{(}\frac{s^{2}}{\sigma^{2}}-1\bigg{)}, \tag{8}\] where in this case we assumed the exponential positive linking function (\(s=\exp(\tilde{s})\)). Once \(m\) and \(s\) (that is, \(\tilde{s}\)) are learned, at test time we can sample \(\theta\!\sim\!q(\theta)\!=\!\mathcal{N}(\theta;m,s^{2})\). 
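Below is a minimal, self-contained sketch (ours, not BayesDLL's internal implementation) of one reparameterized training step for Eq. (6), using the exponential positive linking \(s=\exp(\tilde{s})\); `forward_fn` is an assumed helper that evaluates the backbone with the sampled parameters (e.g., via `torch.func.functional_call`), and all hyperparameter values are illustrative:

```python
import torch
import torch.nn.functional as F

def vi_loss(m, s_tilde, prior_mean, x, y, forward_fn, N, prior_sig=1.0):
    """Single-sample estimate of the negative ELBO, Eq. (6).
    m, s_tilde, prior_mean: dicts of tensors keyed by parameter name."""
    theta, kl = {}, 0.0
    for k in m:
        s = torch.exp(s_tilde[k])                  # positive linking s = exp(s_tilde)
        theta[k] = m[k] + torch.randn_like(s) * s  # reparameterization: theta = m + eps * s
        kl = kl + 0.5 * (2.0 * torch.log(prior_sig / s) - 1.0
                         + (s / prior_sig) ** 2
                         + ((m[k] - prior_mean[k]) / prior_sig) ** 2).sum()
    nll = F.cross_entropy(forward_fn(theta, x), y, reduction="mean")
    return nll + kl / N                            # (1/|B|)*NLL + (1/|D|)*KL

# one SGD step over the variational parameters {m, s_tilde}:
# loss = vi_loss(m, s_tilde, prior_mean, x_batch, y_batch, forward_fn, N=len(train_set))
# loss.backward(); optimizer.step()
```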
If we consider \(S\) samples, then the posterior predictive distribution becomes: \[p(y|x,D)=\frac{1}{S}\sum_{i=1}^{S}p(y|x,\theta^{(i)}),\ \ \text{where}\ \ \theta^{(i)}\!\sim\!\mathcal{N}(\theta;m,s^{2}),\ \ i=1,\ldots,S. \tag{9}\] ### MC-Dropout In this section we describe _our_ formulation of the MC-Dropout approximate inference algorithm. It differs slightly from the original version [6] in the following aspects: 1) We allow the Gaussian prior mean to be either \(0\) (original version) if no prior information is available, or some known values \(\overline{\theta}\) that incorporate prior knowledge (typically pre-trained network parameters). 2) Whereas the original version drops out the inputs to layers, we drop out the network parameters instead; since the former usually requires modifying the network definition code in order to insert dropout layers, the main advantage of parameter dropout is that no code modification is necessary. 3) In the original version the bias parameters take Gaussian posteriors, different from the mixture of two spiky Gaussian posteriors used for the weight parameters; we support both the Gaussian posterior and the spiky mixture posterior for bias parameters, offered as a user-selectable option. 4) Moreover, it is sometimes conventional practice not to impose a prior on the bias parameters at all, and we incorporate this option as well for the user's convenience. Now we discuss the detailed formulation of our implementation (a small sampling sketch follows after the list below). * **Prior:**\(p(\theta)=\mathcal{N}(\theta;\overline{\theta},\sigma^{2}I)\). * **Variational posterior:**\(q(\theta)=\prod_{i}\big{(}(1\!-\!p)\cdot\mathcal{N}(\theta_{i};m_{i},\epsilon^{ 2})+p\cdot\mathcal{N}(\theta_{i};\overline{\theta}_{i},\epsilon^{2})\big{)}\), where \(\epsilon\) is negligibly small (making the two components _spiky_), \(p\in[0,1]\) corresponds to the dropout probability, and \(m\) (of the same shape/size as \(\theta\)) is the only variational parameter to be estimated. * **Bias options:** The bias parameters, denoted by \(\theta_{b}\), can take different prior and/or variational posterior distributions depending on the user's choice. The first option is _not imposing a prior_ on \(\theta_{b}\) at all, more precisely imposing the uninformative prior \(p(\theta_{b})\propto 1\), in which case we set \(q(\theta_{b})\) to a delta function, virtually equivalent to \(q(\theta_{b})=\mathcal{N}(\theta_{b};m_{b},\epsilon^{2}I)\); the consequence is that in the loss function (negative ELBO) we can simply drop the corresponding KL term. The second option is to place the Gaussian prior \(p(\theta_{b})=\mathcal{N}(\overline{\theta}_{b},\sigma^{2}I)\) and Gaussian posterior \(q(\theta_{b})=\mathcal{N}(\theta_{b};m_{b},\epsilon^{2}I)\), which is exactly the option taken by the original MC-Dropout [6]. This can be implemented by treating the dropout probability separately for biases (denoted by \(p_{b}\)) and non-biases (denoted by \(p\)), and setting \(p_{b}=0\). The last (default) option is to _treat biases in the same way as weights_, in which case we use exactly the above prior and variational posterior.
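As mentioned above, here is a minimal sketch of drawing one sample from this spiky-mixture posterior, treating \(\epsilon\approx 0\) so each mixture component acts as a point mass. The `bias_mask` argument is our illustrative device for the bias options; the names are not BayesDLL's API.

```python
import torch

def sample_mc_dropout(m, theta_bar, p_drop, bias_mask=None, p_drop_bias=0.0):
    # draw theta from the spiky mixture: keep m_i with prob 1-p,
    # fall back to the prior mean theta_bar_i with prob p
    p = torch.full_like(m, p_drop)
    if bias_mask is not None:          # optional separate dropout prob for biases
        p = torch.where(bias_mask, torch.full_like(m, p_drop_bias), p)
    z = torch.bernoulli(1.0 - p)       # z=1 means keep, z=0 means dropout
    return z * m + (1.0 - z) * theta_bar
```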
The negative ELBO loss function, in its data-size-normalized, unbiased minibatch stochastic-estimate form, comprises the expected negative log-likelihood (ENLL) \(\frac{1}{|B|}\mathbb{E}_{q}[-\log p(B|\theta)]\) and the KL term \(\frac{1}{|D|}\text{KL}(q(\theta)||p(\theta))\), where \(B\) and \(D\) are the minibatch and the whole training set, respectively. The ENLL term is estimated by Monte Carlo, using the reparametrization trick. We first sample keep-or-dropout binary indicators \(z\) (of the same shape/size as \(\theta\)), specifically \(z_{i}\sim\text{Bernoulli}(1-p)\), where \(z_{i}\!=\!1\) means no dropout of \(\theta_{i}\) and \(z_{i}\!=\!0\) implies dropout. The reparametrized sample is then \(\theta=z\odot m+(1\!-\!z)\odot\overline{\theta}\). The KL term (between the mixture of Gaussians \(q\) and the Gaussian \(p\)) can be approximated by the same technique as in [6]. The final loss function (to be minimized over the variational parameters \(m\)) is as follows: \[\text{loss }=\ \frac{1}{|B|}\sum_{(x,y)\in B}l(x,y;\theta=z\odot m+(1\!-\!z) \odot\overline{\theta})\ +\ \frac{1}{\sigma^{2}|D|}\frac{1\!-\!p}{2}||m-\overline{\theta}||_{2}^{2}. \tag{10}\] Note that when there is no prior information on \(\theta\), that is, \(\overline{\theta}=0\), (10) reduces to the original MC-Dropout. The loss gradient3 can be easily derived using the chain rule: Footnote 3: We may not need the loss gradient explicitly if one implements it using parameter-level computation-graph build-up and backprop, such as with the higher library. We do not utilize this library in our current version (as of August 2023). \[\frac{\partial\text{loss }}{\partial m}\ =\ \frac{1}{|B|}\sum_{(x,y)\in B} \frac{\partial l(x,y;\theta)}{\partial\theta}\odot z\ +\ \frac{1\!-\!p}{\sigma^{2}|D|}(m-\overline{\theta}), \tag{11}\] where \(\frac{1}{|B|}\sum_{(x,y)\in B}\frac{\partial l(x,y;\theta)}{\partial\theta}\) can be easily computed by a backprop call provided in most auto-gradient deep learning libraries (e.g., PyTorch or TensorFlow). Once \(m\) is learned, at test time we can sample \(\theta\sim q(\theta)\) from the mixture density. Although this amounts to dropout sampling similar to that used in the ENLL estimation, we often ignore dropout and use Gaussian sampling; due to the negligible \(\epsilon\), this yields a _deterministic_ sample \(\theta=m\). ### SG-MCMC (SGLD) In the stochastic-gradient MCMC (SG-MCMC) approach [15; 5; 2], we can collect posterior samples by running a certain stochastic dynamic model whose stationary distribution coincides with the posterior distribution (3). The stochastic-gradient Langevin dynamics method (SGLD) [15] forms a Langevin dynamic model, which amounts to running the following recurrence to collect posterior samples (after some burn-in steps): \[\theta\ \leftarrow\ \theta+\frac{\eta}{2}\nabla\bigg{(}\log p(\theta)+\frac{|D |}{|B|}\log p(B|\theta)\bigg{)}+\epsilon\sqrt{\eta} \tag{12}\] where \(B\) (\(\subset\!D\)) is a minibatch, \(\eta\) is a small step size, and \(\epsilon\!\sim\!\mathcal{N}(0,I)\). Note that in the parentheses subject to the derivative, the first log-prior term admits a closed-form gradient, while the gradient of the second term can be computed by the conventional SGD backprop. Thus each step in (12) is as efficient as a vanilla SGD step. After the burn-in period, we can retain the iterates \(\theta\) as samples approximating the posterior \(p(\theta|D)\).
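A minimal sketch of a single SGLD update (12) on a flattened parameter vector follows. It assumes the Gaussian prior of Sec. 2.1, so the log-prior gradient is available in closed form; the function name and flattened-vector convention are ours, not BayesDLL's API.

```python
import torch

@torch.no_grad()
def sgld_step(theta, grad_log_lik_batch, theta_bar, sigma, n_data, batch_size, eta):
    # one Langevin update, eq. (12); grad_log_lik_batch is the minibatch
    # gradient of log p(B|theta) (obtained from an ordinary backprop call)
    grad_log_prior = -(theta - theta_bar) / sigma**2        # Gaussian prior, closed form
    drift = grad_log_prior + (n_data / batch_size) * grad_log_lik_batch
    noise = torch.randn_like(theta) * eta**0.5              # eps * sqrt(eta)
    theta += 0.5 * eta * drift + noise
    return theta
```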
For instance, the running average of the \(\theta\) samples, denoted by \(\hat{\theta}\), is a good estimate of the mean of the posterior \(p(\theta|D)\). Ideally we would save all available samples from the posterior (i.e., the iterates from (12)); however, due to the large number of parameters in \(\theta\), this would easily incur a computational challenge. To this end, in our current implementation4 we estimate/maintain the sample means and variances of the posterior samples (via running estimation), and at test time the (approximate) posterior samples are drawn from the Gaussian fitted with these sample means and variances. Footnote 4: Alternatively, a perhaps more expressive solution might be to fit a mixture-of-Gaussians density model to the posterior samples. We leave this implementation as future work. ### Laplace Approximation The Laplace approximation essentially approximates the log-posterior \(\log p(\theta|D)\) by its second-order Taylor polynomial at the maximum a posteriori (MAP) estimate \(\theta^{*}\). More specifically, we first obtain the MAP estimate \(\theta^{*}\) by solving the following optimization problem, typically using SGD: \[\theta^{*}=\arg\max_{\theta}\ \log p(\theta|D)=\arg\max_{\theta}\ \log p(\theta)+\log p(D|\theta). \tag{13}\] Then we approximate \(\log p(\theta|D)\) by the quadratic Taylor polynomial at \(\theta=\theta^{*}\), which simplifies as follows due to the vanishing gradient at the (local) optimum (i.e., \(\nabla\log p(\theta^{*}|D)=0\)): \[\log p(\theta|D)\;\approx\;\frac{1}{2}(\theta-\theta^{*})^{\top}\nabla^{2}\log p( \theta^{*}|D)(\theta-\theta^{*})+\text{const.} \tag{14}\] Assuming that the Hessian \(\nabla^{2}\log p(\theta^{*}|D)\) is negative definite5, equation (14) essentially leads to the Gaussian posterior approximation, Footnote 5: If not, one can perform the Generalised Gauss–Newton (GGN) approximation of the Hessian [14, 7, 9]. However, we omit this step for simplicity; our diagonal empirical Fisher approximation in (19) implicitly handles this potential issue. \[p(\theta|D)\;\approx\;\mathcal{N}\big{(}\theta;\theta^{*},-(\nabla^{2}\log p( \theta^{*}|D))^{-1}\big{)}. \tag{15}\] Here arises the infamous computational challenge of Hessian evaluation and inversion in (15). First, to circumvent the \(O(d^{2})\) memory overhead of storing the Hessian matrix and the prohibitive \(O(d^{3})\) matrix inversion time, where \(d=\dim(\theta)\), we consider a diagonal Hessian approximation. Secondly, to deal with the overhead of the Hessian computation, we adopt the well-known _empirical Fisher information approximation_ of the Hessian. For concreteness, we here derive the details of the diagonal empirical Fisher approximation. Letting the training data be \(D=\{(x_{n},y_{n})\}_{n=1}^{N}\), \[\nabla^{2}\log p(\theta^{*}|D)=\nabla^{2}\log p(\theta^{*})+\sum_{n=1}^{N} \nabla^{2}\log p(y_{n}|x_{n},\theta^{*}). \tag{16}\] The second term on the RHS of (16) can be approximated by the empirical Fisher information as in (17), which is essentially obtained by replacing the model distribution \(p(y|x,\theta)\) in the Fisher information by the plug-in estimate, i.e., the empirical distribution \(\frac{1}{N}\sum_{n=1}^{N}\delta(y-y_{n}|x_{n})\): \[\nabla^{2}\log p(\theta^{*}|D)\approx\nabla^{2}\log p(\theta^{*})-\sum_{n=1}^ {N}\nabla\log p(y_{n}|x_{n},\theta^{*})\nabla\log p(y_{n}|x_{n},\theta^{*})^{ \top}.
\tag{17}\] Now, we further approximate the dyads by a diagonal matrix (i.e., element-wise squaring instead of the outer product), leading to: \[\nabla^{2}\log p(\theta^{*}|D)\approx\nabla^{2}\log p(\theta^{*})-\sum_{n=1}^ {N}\text{Diag}\Big{(}\nabla\log p(y_{n}|x_{n},\theta^{*})^{2}\Big{)}, \tag{18}\] where the squaring in (18) is element-wise, and \(\text{Diag}(v)\) is the diagonal matrix with the vector \(v\) on its diagonal. Lastly, assuming the isotropic Gaussian prior \(p(\theta)=\mathcal{N}(\theta;\overline{\theta},\sigma^{2}I)\), we have the final posterior approximation: \[p(\theta|D)\;\approx\;\prod_{i}\mathcal{N}(\theta_{i};\theta_{i}^{*},v_{i})\; \;\text{where}\;\;v_{i}=\frac{1}{\frac{1}{\sigma^{2}}+\sum_{n=1}^{N}\big{[} \nabla\log p(y_{n}|x_{n},\theta^{*})\big{]}_{i}^{2}}. \tag{19}\] Now (19) can be computed with one forward pass per data instance, and all operations are done in \(O(d)\) time/memory. Although other Hessian approximation strategies exist, notably block-diagonal schemes such as the Kronecker factorization [13], they can introduce considerable computational overhead compared to the diagonal one, which often hinders their application to large-scale networks such as Vision Transformers. For this reason we omit these methods from our library. ## 3 Uncertainty Quantification One of the key benefits of Bayesian deep models is their capability of capturing uncertainty in their predictive distributions. There are two popular types of methods to quantify/measure how well the uncertainty is captured by the underlying models: error calibration and negative log-likelihood. ### Error Calibration The popular error calibration metrics such as ECE and MCE [8], as well as visualization tools like the reliability plot [3; 10], belong to this category. The key idea is to measure _how well the prediction accuracy and the prediction confidence are aligned_. Most approaches rely on metric evaluation based on confidence binning. More specifically, assume that we have class predictions by the model \(p(y=j|x^{i})\) for \(i=1,\ldots,N\) and \(j=1,\ldots,K\), where \(K\) is the class cardinality. Let \(y^{\text{true}}(x)\) be the ground-truth class label. We consider \(M\) bins (bin index \(m=0,1,\ldots,M\!-\!1\)). * Initialize: \(binsize[0:M]=acc[0:M]=conf[0:M]=0\) * For each instance \(i\) and class \(j\), determine the bin ID \(m\) that \(p(y=j|x^{i})\) belongs to, then: - \(binsize[m]\gets binsize[m]+1\) - \(acc[m]\gets acc[m]+I(y^{\text{true}}(x_{i})=j)\) - \(conf[m]\gets conf[m]+p(y=j|x^{i})\) It would be desirable to have a prediction model with \(acc[m]\approx conf[m]\) for all confidence levels \(m=0,1,\ldots,M-1\). There are several ways to visualize or quantify this goodness of alignment. **Reliability plot** is simply a plot of \(acc[0:M]\) (Y-axis) vs. \(conf[0:M]\) or the bin centers (X-axis). Thus in the ideal case (zero calibration error) this plot coincides with the \(Y=X\) line. **ECE and MCE** can be computed by the following formulas: \[\text{ECE}\ =\ \frac{1}{M}\sum_{m=0}^{M-1}\frac{binsize[m]}{N\cdot K}\cdot \Big{|}acc[m]-conf[m]\Big{|} \tag{20}\] \[\text{MCE}\ =\ \max_{0\leq m<M}\Big{|}acc[m]-conf[m]\Big{|} \tag{21}\] where \(acc[m]\) and \(conf[m]\) are understood as bin averages, i.e., the accumulated sums above divided by \(binsize[m]\). **Temperature scaling:** We typically have a logit vector \(f(x)\in\mathbb{R}^{K}\) as the output of the neural network, before soft-max normalizing it to \(p(y|x)\). We consider (temperature) scaling of this logit before the soft-max, that is, \[p_{T}(y|x)=\frac{e^{f_{y}(x)/T}}{\sum_{j=1}^{K}e^{f_{j}(x)/T}}.
\tag{22}\] Obviously \(T\!=\!1\) is the default setting, but one can find the best \(T\) that minimizes the calibration error. To this end, regarding \(T\) as an optimization parameter, we typically form a maximum likelihood estimation problem on the validation set. More specifically, \[\min_{T}\ \mathbb{E}_{(x,y)\sim D_{V}}[-\log p_{T}(y|x)], \tag{23}\] where \(D_{V}\) is the validation data set. Once the optimal \(T\) is found, we can report the _temperature-scaled_ calibration error metrics with \(p_{T}(y|x)\). Our BayesDLL library can produce reliability plots and ECE/MCE metrics during model training; see Fig. 2 for examples. ### Negative Log-Likelihood The negative log-likelihood (NLL) on the test data set is the standard statistical metric that measures how close the model's predictive distribution is to the true labeling distribution. It can be computed as follows: \[\text{NLL}=\mathbb{E}_{(x,y)\sim D_{T}}[-\log p(y|x)], \tag{24}\] where \(D_{T}\) is the test data set. ## 4 Implementation Notes Our current library implements four different Bayesian deep learning methods as well as the baseline deterministic (non-Bayesian) method. The method to use is specified by the flag --method; for instance, one can add --method mc_dropout in the command line. For each method, we also list the _method-specific_ hyperparameters. * "vanilla": This is vanilla deterministic deep learning, aka SGD (stochastic gradient descent) learning. We also allow _weight decay_ to L2-penalise deviation from the pre-trained parameters or from zero parameters, as well as a bias option for the L2 penalty. - wd (eg, 1e-4): The weight decay (L2 regularisation) coefficient. The L2 penalty is measured based on deviation either from the pre-trained parameters or from 0. - bias {"penalty","ignore"}: How to treat the bias parameters in the L2 penalty. "penalty" specifies the same treatment as weight parameters, while "ignore" simply ignores the bias deviation (analogous to the uninformative bias prior in the Bayesian methods). * "vi": This is the variational inference method. The related hyperparameters are as follows: - prior_sig (eg, 0.01): The standard deviation \(\sigma\) of the prior Gaussian distribution. - bias {"informative","uninformative"}: How to treat the bias parameters in the prior. "informative" specifies the same treatment as weight parameters, while "uninformative" simply adopts \(p(\theta_{b})\propto 1\). This amounts to dropping the KL terms for the bias parameters. - kld (eg, 0.1): The discount factor for the KL term; the KL term is multiplied by this factor. This is related to the training data size inflation due to data augmentation. - nst (eg, 5): The number of posterior samples at test time (i.e., \(S\) in (9)). * "mc_dropout": This is MC-Dropout, and the related hyperparameters are as follows: - prior_sig (eg, 0.01): The standard deviation \(\sigma\) of the prior Gaussian distribution. - p_drop (eg, 0.1): The dropout probability. - bias {"gaussian","spikymix","ignore"}: How to treat the bias parameters in the prior. "gaussian" takes a Gaussian \(q(\theta_{b})\), thus no dropout for bias parameters; "spikymix" specifies the same treatment as weight parameters; while "ignore" simply ignores prior and posterior for bias parameters. - kld (eg, 0.1): The discount factor for the KL term; the KL term is multiplied by this factor. This is related to the training data size inflation due to data augmentation.
- nst (eg, 5): The number of posterior samples to be drawn at test time (i.e., \(S\) in (9)).

Figure 2: Our BayesDLL library can produce reliability plots during model training. (a) Reliability plot with the default temperature \(T\!=\!1\), (b) reliability plot after temperature scaling, and (c) temperature-scaling optimisation learning curve. In the reliability plots we also show the ECE, MCE, and NLL metrics in the lower-right corner.

* "sgld": This is SGLD, whose related hyperparameters are as follows: - prior_sig (eg, 0.01): The standard deviation \(\sigma\) of the prior Gaussian distribution. - Ninflate (eg, 1e3): Data inflation factor (due to data augmentation). The training data size \(|D|\) is inflated by this factor. - nd (eg, 0.1): Noise discount factor. The noise term in the SGLD iteration is multiplied by this factor. - burnin (eg, 20): Burn-in period (in epochs). - thin (eg, 10): Thinning steps (in batch iterations). - bias {"informative","uninformative"}: How to treat the bias parameters in the prior. "informative" specifies the same treatment as weight parameters, while "uninformative" simply adopts \(p(\theta_{b})\propto 1\). This amounts to dropping the prior term in the SGLD iteration. - nst (eg, 5): The number of posterior samples to be drawn at test time (i.e., \(S\) in (9)). Recall that in the current version we use a sample-estimated Gaussian for the posterior approximation, so this is the number of Gaussian samples. * "la": This is the Laplace approximation, and the related hyperparameters are as follows: - prior_sig (eg, 0.01): The standard deviation \(\sigma\) of the prior Gaussian distribution. - Ninflate (eg, 1e3): Data inflation factor (due to data augmentation). The training data size \(|D|\) is inflated by this factor. - bias {"informative","uninformative"}: How to treat the bias parameters in the prior. "informative" specifies the same treatment as weight parameters, while "uninformative" simply adopts \(p(\theta_{b})\propto 1\). This amounts to dropping the prior terms for bias parameters in the MAP objective. - nst (eg, 5): The number of posterior samples at test time (i.e., \(S\) in (9)). ## 5 Experiments In this section we demonstrate the results of training and testing with our Bayesian neural network library. In Sec. 5.1 we provide an extensive comparison among various running options and hyperparameters, especially the number of posterior samples during test prediction (nst), how to treat the bias parameters (bias, e.g., either not imposing a prior or treating them the same way as weight parameters), the choice of prior scale (prior_sig), and so on. In Sec. 5.2 we test our library on large-scale neural networks for vision tasks, in particular ResNet-101 and Vision Transformer (ViT) models. For these models, we will show that learning from scratch (i.e., with an uninformative zero-mean Gaussian prior \(p(\theta)\)) often fails. Instead, we impose a prior centered at publicly available pre-trained model parameters, which leads to prediction performance comparable to conventional warm-start/fine-tuning deterministic model learning, but with better uncertainty calibration. ### Testing Various Options/Hyperparameters with MLP on MNIST **Experimental setup.** The original MNIST training dataset is randomly split into \(50\%\) training and \(50\%\) validation sets, where the latter is used to determine early stopping of training iterations.
The neural network we adopted is a fully-connected network (aka MLP), which has three hidden layers with 1000 units each, followed by a final linear prediction head. The ReLU activation is used for the nonlinearity. For all competing methods, we set the maximum number of training epochs to 100, the learning rate to \(10^{-2}\), the batch size to 128, and use the SGD optimizer with momentum 0.5. All training starts from randomly initialized model parameters. **Competing approaches.** Vanilla is the non-Bayesian deterministic SGD learning, where we can optionally impose L2 regularization via the weight-decay option (either wd=0 or wd=\(10^{-4}\)). VI stands for variational inference. We take as default hyperparameters the prior scale \(\sigma=1.0\) (except \(\sigma=0.01\) for Laplace) and the KL discount factor \(10^{-3}\). For MC-Dropout, we use the default dropout probability \(0.1\), while the prior scale and the KL discount factor have the same default values as VI. Recalling Sec. 2.2, there are three bias treatment options, abbreviated as: ga (Gaussian prior, conforming to the original version), sm (the spiky mixture prior, thus the same treatment as weight parameters), and the uninformative prior. In SGLD, we take burn-in steps for the first 5 epochs, followed by thinning at every 10 batch iterations. The training data inflation factor is set to \(10^{3}\). Lastly, for the Laplace approximation, we use the same inflation factor, but the prior scale is set to \(\sigma=0.01\), since a larger scale (e.g., \(\sigma=1.0\)) led to failure in all cases. **Results on prediction errors.** As shown in Table 1, all methods perform equally well, whereas SGLD falls slightly short. Overall, the different bias treatment options have little impact on the prediction performance. **Impact of the number of posterior samples at test prediction.** In Bayesian neural networks, one can incorporate prediction uncertainty by marginalizing over posterior samples, which in practice is typically done by Monte-Carlo averaging such as (9) for VI, and similarly for the other methods. This is known to improve uncertainty calibration, i.e., the consistency between prediction confidence and accuracy. We check this property by comparing two settings: nst=0 posterior samples (using the posterior mean) and nst=5. As shown in Table 2, increasing the number of posterior samples at test prediction reduces the ECE and MCE metrics, indicating that the models are better calibrated. **Impact of prior scale (\(\sigma\)).** We test how the Bayesian models behave when we change the prior scale. From the default value \(\sigma=1.0\), we reduce it to \(\sigma=0.01\). As the results in Table 3 show, the prediction errors barely change, but there is a slight improvement in the uncertainty calibration scores. This may be attributed to the stronger regularisation effect, where deviation from the zero prior-mean weight parameters is penalised more severely. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Vanilla} & \multicolumn{2}{c}{VI} & \multicolumn{2}{c}{MC-Dropout} & \multicolumn{2}{c}{SGLD} & Laplace \\ \cline{2-11} & wd=0 & bias=1 & bias=0 & bias=1 & bias=0 & bias=ga & bias=sm & bias=0 & bias=1 & bias=0 & bias=1 & bias=0 \\ \hline Error (\(\%\)) & 2.52 & 2.55 & 2.50 & 2.25 & 2.43 & 2.47 & 2.47 & 2.48 & 2.74 & 2.73 & 2.41 & 2.53 \\ \hline \hline \end{tabular} \end{table} Table 1: MNIST test prediction errors (\(\%\)).
For the Bayesian neural network models, we mainly vary the bias treatment option, either imposing an informative bias prior (the same treatment as weight parameters) or an uninformative one (essentially, not imposing a prior on the bias parameters). The other hyperparameters take roughly their default values (see the text for details), and test-time posterior sampling is not done; the posterior mean is used. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Vanilla} & \multicolumn{2}{c}{VI} & \multicolumn{2}{c}{MC-Dropout} & \multicolumn{2}{c}{SGLD} & Laplace \\ \cline{2-11} & wd=0 & wd=10\({}^{-4}\) & nst=0 & nst=5 & nst=0 & nst=5 & nst=0 & nst=5 & nst=0 & nst=5 \\ \hline Error (\(\%\)) & 2.52 & 2.55 & 2.25 & 2.75 & 2.47 & 2.43 & 2.74 & 2.75 & 2.41 & 1.51 \\ ECE (\(\%\)) & 0.22 & 0.20 & 0.12 & 0.11 & 0.32 & 0.18 & 0.23 & 0.21 & 0.24 & 0.12 \\ MCE (\(\%\)) & 22.04 & 17.33 & 14.64 & 11.71 & 21.83 & 11.19 & 13.76 & 13.56 & 17.82 & 8.04 \\ NLL (\(\times 10^{-2}\)) & 9.54 & 9.38 & 8.11 & 8.57 & 11.28 & 8.75 & 10.00 & 9.27 & 9.45 & 9.36 \\ \hline \hline \end{tabular} \end{table} Table 2: Impact of the number of posterior samples at test prediction in MNIST. The number of posterior samples nst=0 amounts to using the posterior mean. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{2}{c}{VI} & \multicolumn{2}{c}{MC-Dropout} & \multicolumn{2}{c}{SGLD} \\ \cline{2-7} & nst=0 & nst=5 & nst=0 & nst=5 & nst=0 & nst=5 \\ \hline Error (\(\%\)) & \(2.25\!\rightarrow\!2.26\) & \(2.75\!\rightarrow\!2.63\) & \(2.47\!\rightarrow\!2.42\) & \(2.43\!\rightarrow\!2.52\) & \(2.74\!\rightarrow\!2.70\) & \(2.75\!\rightarrow\!2.70\) \\ ECE (\(\%\)) & \(0.12\!\rightarrow\!0.12\) & \(0.11\!\rightarrow\!0.10\) & \(0.32\!\rightarrow\!0.32\) & \(0.18\!\rightarrow\!0.17\) & \(0.23\!\rightarrow\!0.18\) & \(0.21\!\rightarrow\!0.16\) \\ MCE (\(\%\)) & \(14.64\!\rightarrow\!11.50\) & \(11.71\!\rightarrow\!10.50\) & \(21.83\!\rightarrow\!21.72\) & \(11.19\!\rightarrow\!22.27\) & \(13.76\!\rightarrow\!11.95\) & \(13.56\!\rightarrow\!12.56\) \\ NLL (\(\times 10^{-2}\)) & \(8.11\!\rightarrow\!7.75\) & \(8.57\!\rightarrow\!8.44\) & \(11.28\!\rightarrow\!11.24\) & \(8.75\!\rightarrow\!8.50\) & \(10.00\!\rightarrow\!9.49\) & \(9.27\!\rightarrow\!9.42\) \\ \hline \hline \end{tabular} \end{table} Table 3: Impact of the prior scale (\(\sigma\)) in MNIST. In the table entries, “\(a\!\rightarrow\!b\)” indicates that score \(a\) with \(\sigma=1.0\) changes to score \(b\) with \(\sigma=0.01\). ### Large-Scale Backbones including Foundation Models Next we test our library on large-scale backbone networks. We consider two popular deep networks for vision tasks: **ResNet-101** and the **Vision Transformer (ViT)**, specifically the version known as **ViT-L-32**; the former consists of about \(43\) million parameters and the latter of about \(305\) million. For simplicity we consider image classification tasks on the Pets [12] and Flowers [11] datasets, which contain images of 37 and 102 different categories, respectively. For Pets, we randomly split the official training data into \(50\%\) training and \(50\%\) validation sets. For Flowers, we merge the official training and validation data splits, and randomly split them into \(50\%\) training and \(50\%\) validation sets.
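The ECE/MCE numbers in Tables 2 and 3 follow the binned metrics of Sec. 3.1. For reference, here is a minimal NumPy sketch of that computation (ours, purely illustrative); it takes acc and conf as bin averages and uses the common binsize-weighted convention, whereas eq. (20) additionally averages over bins.

```python
import numpy as np

def ece_mce(probs, labels, num_bins=20):
    # probs: (N, K) predicted class probabilities; labels: (N,) ground truth.
    # Bin every (instance, class) probability, then compare per-bin
    # accuracy and confidence (as bin averages).
    N, K = probs.shape
    hits = (labels[:, None] == np.arange(K)[None, :]).astype(float)
    bins = np.minimum((probs * num_bins).astype(int), num_bins - 1)
    ece, mce = 0.0, 0.0
    for m in range(num_bins):
        mask = bins == m
        size = mask.sum()
        if size == 0:
            continue
        acc = hits[mask].mean()       # bin-averaged accuracy
        conf = probs[mask].mean()     # bin-averaged confidence
        ece += size / (N * K) * abs(acc - conf)
        mce = max(mce, abs(acc - conf))
    return ece, mce
```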
As it is widely believed that training such large-scale networks from scratch is very difficult and often leads to inferior solutions, we instead adopt the pre-trained model weights as the prior mean parameters of the Bayesian models. That is, instead of the usual 0-mean prior (i.e., \(\overline{\theta}=0\)), we set the prior mean equal to the pre-trained weights6. In our quick experiments with (non-Bayesian) SGD learning (denoted by Vanilla) in Table 4, we can verify that there is a huge performance difference between models trained with and without pre-trained weights, signifying that the use of pre-trained weights is crucial for large-scale models. Footnote 6: We simply employ the network architecture definitions and the network weights obtained from pre-training with the ImageNet subsets [4], available at [https://pytorch.org/vision/main/models.html](https://pytorch.org/vision/main/models.html). Note that this flexible incorporation of external code, without any modification of the original code, is one of the key benefits of the proposed library. The overall test errors are shown in Table 5. For the detailed hyperparameters used in this experiment, please refer to our code. We see that the variational inference, MC-Dropout, and SGLD models perform on par with or better than the deterministic models. The Laplace approximation performs reliably well with the posterior mean parameters (nst=0), but once we incorporate multiple posterior samples (nst=5) the test accuracy drops significantly. We are still investigating the precise reasons, but it might be due to a numerical issue in the diagonal empirical Fisher information estimate (e.g., one might place a larger regulariser in the denominator of (19) for better numerical stability). Table 6 summarises the uncertainty quantification results. **Computational overhead of Bayesian neural networks.** One (often-believed) obstacle that prevents Bayesian neural networks from being widely applied to large-scale foundation models in real-world practice is the computational overhead: a sort of prejudice that one needs to keep track of more parameters than deterministic models, with increased training time. To clarify this, we compare the wall-clock training times and memory footprints of the different Bayesian models against deterministic vanilla SGD training in Fig. 3. We use an RTX-2080Ti machine for ResNet-101 and a Tesla A100 for ViT-L-32, with a single GPU in both cases. As shown, all Bayesian approaches have tolerable overhead compared to the base SGD models: the overhead is minor for SGLD and the Laplace approximation, and the worst-case overhead is at most twice the base model's complexity. These results imply that the proposed Bayesian neural network library makes the Bayesianisation of large-scale foundation models viable. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{ResNet-101} & \multicolumn{2}{c}{ViT-L-32} \\ \cline{2-5} & From-scratch & Pre-trained-warm-start & From-scratch & Pre-trained-warm-start \\ \hline Test error (\(\%\)) & 94.63 & 11.76 & 73.49 & 14.20 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison between trained (non-Bayesian) vanilla models with and without pre-trained weights on the Flowers dataset [11]. The weight decay hyperparameter is set to \(10^{-4}\).
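A minimal sketch of how such a pre-trained prior mean \(\overline{\theta}\) can be obtained with torchvision follows; the flattening of the parameters into a single vector is our convention for illustration, not the library's internal representation.

```python
import torch
from torchvision.models import resnet101, ResNet101_Weights

# use publicly available ImageNet pre-trained weights as the prior mean,
# instead of the usual zero-mean prior
net = resnet101(weights=ResNet101_Weights.IMAGENET1K_V1)
theta_bar = torch.nn.utils.parameters_to_vector(net.parameters()).detach().clone()
```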
\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Vanilla} & \multicolumn{2}{c}{VI} & \multicolumn{2}{c}{MC-Dropout} & \multicolumn{2}{c}{SGLD} & \multicolumn{2}{c}{Laplace} \\ \cline{2-11} & wd=0 & wd=\(10^{-4}\) & nst=0 & nst=5 & nst=0 & nst=5 & nst=0 & nst=5 & nst=0 & nst=5 \\ \hline ResNet-101 & 10.03 & 10.03 & 10.03 & 9.27 & 10.03 & 9.65 & 9.21 & 9.24 & 10.19 & N/A \\ ViT-L-32 & 8.72 & 8.69 & 8.39 & 8.45 & 8.37 & 8.42 & 8.67 & 8.72 & N/A \\ \hline \hline \end{tabular} \end{table} Table 5: Test errors (\(\%\)) of Bayesian adaptation/fine-tuning on the Pets dataset [12]. ## 6 Conclusion We provide a full implementation, without relying on other libraries, and easy-to-use demo code for various Bayesian inference methods, including variational inference, MC-dropout, stochastic-gradient Langevin dynamics, and the Laplace approximation. We also include code for evaluating uncertainty quantification measures (eg, ECE, MCE, reliability plots, and negative log-likelihood), which can be used to report how well uncertainty is captured in new models. Although we have tested the library with ResNet-101 and ViT-L-32, it is readily applicable to other foundation models such as LLaMA, RoBERTa, and denoising diffusion generative models without any code modification. We also demonstrate that our code incurs only minimal/acceptable extra computational resources (time and GPU memory).
2309.14381
Survey of Social Bias in Vision-Language Models
In recent years, the rapid advancement of machine learning (ML) models, particularly transformer-based pre-trained models, has revolutionized Natural Language Processing (NLP) and Computer Vision (CV) fields. However, researchers have discovered that these models can inadvertently capture and reinforce social biases present in their training datasets, leading to potential social harms, such as uneven resource allocation and unfair representation of specific social groups. Addressing these biases and ensuring fairness in artificial intelligence (AI) systems has become a critical concern in the ML community. The recent introduction of pre-trained vision-and-language (VL) models in the emerging multimodal field demands attention to the potential social biases present in these models as well. Although VL models are susceptible to social bias, there is a limited understanding compared to the extensive discussions on bias in NLP and CV. This survey aims to provide researchers with a high-level insight into the similarities and differences of social bias studies in pre-trained models across NLP, CV, and VL. By examining these perspectives, the survey aims to offer valuable guidelines on how to approach and mitigate social bias in both unimodal and multimodal settings. The findings and recommendations presented here can benefit the ML community, fostering the development of fairer and non-biased AI models in various applications and research endeavors.
Nayeon Lee, Yejin Bang, Holy Lovenia, Samuel Cahyawijaya, Wenliang Dai, Pascale Fung
2023-09-24T15:34:56Z
http://arxiv.org/abs/2309.14381v1
# Survey of Social Bias in Vision-Language Models ###### Abstract In recent years, the rapid advancement of machine learning (ML) models, particularly transformer-based pre-trained models, has revolutionized Natural Language Processing (NLP) and Computer Vision (CV) fields. However, researchers have discovered that these models can inadvertently capture and reinforce social biases present in their training datasets, leading to potential social harms, such as uneven resource allocation and unfair representation of specific social groups. Addressing these biases and ensuring fairness in artificial intelligence (AI) systems has become a critical concern in the ML community. The recent introduction of pre-trained vision-and-language (VL) models in the emerging multimodal field demands attention to the potential social biases present in these models as well. Although VL models are susceptible to social bias, there is a limited understanding compared to the extensive discussions on bias in NLP and CV. This survey aims to provide researchers with a high-level insight into the similarities and differences of social bias studies in pre-trained models across NLP, CV, and VL. By examining these perspectives, the survey aims to offer valuable guidelines on how to approach and mitigate social bias in both unimodal and multimodal settings. The findings and recommendations presented here can benefit the ML community, fostering the development of fairer and non-biased AI models in various applications and research endeavors. Vision-and-Language Models, Social Bias, Gender Bias, Racial Bias, Measurement, Mitigation + Footnote †: Corresponding author. Authors’ address: Nayeon Lee, [email protected]; Yejin Bang, [email protected]; Holy Lovenia, [email protected]; Samuel Cahyawijaya, [email protected]; Wenliang Dai, [email protected]; Pascale Fung, [email protected], Center for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. ###### Contents * 1 Introduction * 2 Overview * 2.1 Concept and Terminology * 2.2 Protected Demographic Attributes * 2.3 Fairness Criterion * 2.4 Categorization of Bias Metrics and Mitigation Methods * 3 Bias in Unimodal Models * 3.1 Bias in Language * 3.2 Bias in Vision * 4 Bias in Vision & Language Modeling * 4.1 Challenges in Extending to VL Bias
2309.03825
Prime and Modulate Learning: Generation of forward models with signed back-propagation and environmental cues
Deep neural networks employing error back-propagation for learning can suffer from exploding and vanishing gradient problems. Numerous solutions have been proposed, such as normalisation techniques or limiting activation functions to linear rectifying units. In this work we follow a different approach which is particularly applicable to closed-loop learning of forward models, where back-propagation makes exclusive use of the sign of the error signal to prime the learning, whilst a global relevance signal modulates the rate of learning. This is inspired by the interaction between local plasticity and a global neuromodulation. For example, whilst driving on an empty road, one can allow for slow step-wise optimisation of actions, whereas, at a busy junction, an error must be corrected at once. Hence, the error is the priming signal and the intensity of the experience is a modulating factor in the weight change. The advantages of this Prime and Modulate paradigm are twofold: it is free from normalisation and it makes use of relevant cues from the environment to enrich the learning. We present a mathematical derivation of the learning rule in z-space and demonstrate the real-time performance with a robotic platform. The results show a significant improvement in the speed of convergence compared to that of the conventional back-propagation.
Sama Daryanavard, Bernd Porr
2023-09-07T16:34:30Z
http://arxiv.org/abs/2309.03825v1
Prime and Modulate Learning: Generation of forward models with signed back-propagation and environmental cues ###### Abstract Deep neural networks employing error back-propagation for learning can suffer from exploding and vanishing gradient problems. Numerous solutions have been proposed, such as normalisation techniques or limiting activation functions to linear rectifying units. In this work we follow a different approach which is particularly applicable to closed-loop learning of forward models, where back-propagation makes exclusive use of the sign of the error signal to prime the learning, whilst a global relevance signal modulates the rate of learning. This is inspired by the interaction between local plasticity and a global neuromodulation. For example, whilst driving on an empty road, one can allow for slow step-wise optimisation of actions, whereas, at a busy junction, an error must be corrected at once. Hence, the error is the priming signal and the intensity of the experience is a modulating factor in the weight change. The advantages of this Prime and Modulate paradigm are twofold: it is free from normalisation and it makes use of relevant cues from the environment to enrich the learning. We present a mathematical derivation of the learning rule in z-space and demonstrate the real-time performance with a robotic platform. The results show a significant improvement in the speed of convergence compared to that of the conventional back-propagation. ## 1 Introduction Since its inception, deep learning has proven remarkably successful in a wide variety of areas, such as image classification, speech recognition, and reinforcement learning (Bahri et al., 2020). Deep Neural Networks (DNNs) employ activation functions, such as the logistic or the hyperbolic tangent (\(\tanh\)), and are most commonly trained using an error signal with the Gradient Descent Method (GDM) for optimisation. The derivative of such functions has a narrow range of practical values and a limit of zero outwith it; therefore the propagation of the error signal through this non-linearity often suffers from the exploding and vanishing gradient problem (EVGP). This has ignited a significant research effort into addressing this issue; among the solutions are network architectures such as long short-term memory (LSTM), precise weight initialisations, and specific non-linear activation functions (Hanin, 2018). Notably, linear rectifying units are used to remedy this problem, which effectively removes the derivative and thus drastically alters the behaviour of the network and the nature of learning. On the other hand, the logistic function, along with \(\tanh\), allows for a smooth saturation of signals. This is in accordance with neuroscience, where nearly all psychometric functions or internal neuronal processing follow a sigmoid activation pattern. From a neurophysiological standpoint, learning is driven by local and global mechanisms changing synaptic plasticity (Reynolds and Wickens, 2002). In particular, in closed-loop learning an interplay of local and global learning has proven advantageous, for example improving the stability of learning when generating a forward model of a reflex (Porr and Worgotter, 2007). Nonetheless, to date, this class of closed-loop learning has only been used with single learning units or shallow networks (Porr and Worgotter, 2007; Kulvicius et al., 2007; Maffei et al., 2017).
In this work, we present a learning paradigm that combines local error back-propagation and global modulation to create a robust learning scheme for the generation of forward models. More specifically, only the sign of the propagating signal is used to prime the nature of the weight change in the context of its local connections, whilst a global "relevance" signal acts as a third factor to excite the weight changes across the network. This offers not only robust, nearly one-shot real-time learning, free of the exploding and vanishing gradient problem (EVGP), but also a learning model more comparable to neuroscience than conventional back-propagation. In this paper, we implement this novel algorithm on a physical robot that learns to improve the performance of a closed-loop feedback controller (a _reflex_) by calculating its forward model, and thus prevents the triggering of the controller. ## 2 The closed-loop learning platform **The reflex and predictive loops:** Figure 1 shows the learning platform. The so-called "reflex" is a reactive closed-loop controller which aims to stay as close to its desired state \(I_{d}\) as possible. This fixed reflex controller acts against the disturbance \(D\), which travels through the environmental transfer function \(R_{E}\). This leads to a new state \(S\) that is picked up by \(R_{S}\) and causes a sensor signal \(I\). This signal is compared to the desired state \(I_{d}\) at node 1 and in turn creates the error signal \(E\). This error signal is translated into an appropriate action \(A\) via the motor transfer function \(R_{M}\) to counteract the disturbance \(D\) at node 2. Here, the crucial aspect of the error signal \(E\) is that it can be used for real-time learning that enables the learner to generate a forward model of the reflex loop. Learning of the forward model is performed by the novel PaM network placed in the learning (predictive) loop. The disturbance travels through the learner's environment \(L_{E}\), leading to a new state \(S^{\prime}\) which is picked up by the learner's sensory unit \(L_{S}\) and causes a predictive sensory signal \(I^{\prime}\). This is fed into the learning algorithm, which in turn generates the output \(P\). This is translated into the predictive action \(A^{\prime}\), executed by the learner's motor unit \(L_{M}\), to eliminate the disturbance \(D\) at node 3, before it can enter the reflex loop. **z-space:** The signals in Figure 1 are discrete-time, real-valued physical measurements and are therefore more accurately referred to as sequences. Due to the recursive nature of closed-loop systems it is beneficial to analyse their behaviour in z-space, where discrete time-domain sequences are transformed into complex frequency-domain representations (Oppenheim, 1999). We use the unilateral z-transform representation of these sequences in this work, which, for an arbitrary sequence \(x[n]\), is defined as: \[X(z)=\mathcal{Z}\{x[n]\}=\Sigma_{n=0}^{\infty}(x[n]z^{-n}) \tag{1}\] where \(\mathcal{Z}\{\cdot\}\) is the z-transform operator and \(z\) is a complex variable. In this work we harness three properties of the z-transform: linearity, time shifting, and convolution of sequences (Oppenheim, 1999).
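To make the z-space machinery tangible, here is a small NumPy sketch (ours, purely illustrative) that evaluates the unilateral z-transform (1) of a finite sequence at a complex point and numerically checks the linearity property.

```python
import numpy as np

def z_transform(x, z):
    # unilateral z-transform of a finite sequence x[0..N-1] at a complex
    # point z, eq. (1): X(z) = sum_n x[n] * z^{-n}
    n = np.arange(len(x))
    return np.sum(np.asarray(x) * z ** (-n.astype(float)))

# linearity check at an arbitrary point: Z{a*x + b*y} = a*Z{x} + b*Z{y}
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])
z0 = 1.5 + 0.5j
lhs = z_transform(2 * x + 3 * y, z0)
rhs = 2 * z_transform(x, z0) + 3 * z_transform(y, z0)
assert np.isclose(lhs, rhs)
```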
These properties convert the recursive nature of the derivations into simple algebraic operations. For example, considering the reflex loop and assuming that \(D\) and \(A^{\prime}\) are zero, solving for the error signal yields: \[\textit{time-domain:}\qquad e[n]=i_{d}(t)-r_{S}(r_{E}(r_{M}(e[n-1]))) \tag{2}\] \[\textit{z-space:}\qquad E(z)=I_{d}(z)-R_{S}R_{E}R_{M}\cdot z^{-1}E(z)=\frac{I_{d}}{1+R_{S}R_{E}R_{M}z^{-1}} \tag{3}\] This shows that the error signal is a recursive function in the time domain, whereas under the z-transformation this translates into multiplications of transfer functions and sequences, which allows a closed-form expression for \(E(z)\) to be derived. In this work the \((z)\) argument, representing the complex variable, is omitted for brevity. **The prime and modulate pathways:** As described in the introduction, only the sign of the error signal \(E\) is used to generate the priming factor (\(F_{P}\)) for the back-propagation (BP), while the global modulating factor (\(F_{M}\)) acts as a third factor, where a filtered version of the sensor inputs is used as a novelty or relevance signal which modulates the learning with its amplitude. This is in line with the claim that dorsal striatal dopamine in particular is more of a novelty or salience detector (Prescott et al., 2006) than an error signal, and with serotonin being a rectified version of the reward prediction error (Li et al., 2016). These signals and their functionalities are described in more detail in the following sections. **The learning goal:** The aim of the learner is to produce a predictive signal \(P\) such that the error signal \(E\) is kept at zero persistently. In mathematical terms, this is analogous to minimising the absolute value of the error \(|E|\); since this function is non-differentiable at \(E=0\), the quadratic of \(E\) is minimised instead: \[\textit{Learning goal:}\qquad P=\underset{P}{arg\,min}\ E^{2} \tag{4}\] This is achieved by adjusting the internal parameters of the network that are introduced below. **The neural network:** This paradigm employs a feed-forward neural network with fully connected layers. Figure 2 shows the internal connections of two neurons in this network. The forward propagation of activations \(A\) is shown with solid right-arrows and is calculated as:1 Footnote 1: The superscripts denote the layer index, the first subscripts denote the neuron index, and the second subscripts denote the input (or the associated weight) index.
\[A_{j}^{\ell}=\sigma(v_{j}^{\ell})=\sigma(\Sigma_{i=1}^{I}\omega_{ij}^{\ell} \sigma(v_{i}^{\ell-1}))=\sigma(\Sigma_{i=1}^{I}\omega_{ij}^{\ell}A_{i}^{\ell-1 })\text{ {where: }}\ell=1\to L \tag{5}\] where \(\omega\) denotes the weights, \(v\) is referred to as the sum output, and \(\sigma\) is the logistic function that maps the sum output \(v\) of the neuron to its activation. The weighted sum takes place at the summation points \(\textcircled{a}\). Note that for \(\ell=1\) the predictive input information substitutes for the input activations, \(A_{i}^{0}=I_{i}^{\prime}\), and for \(\ell=L\) the weighted sum of the activations in the last layer yields the predictive signal in the closed-loop platform, \(P=\Sigma_{x=0}^{X}(g_{x}A_{x}^{L})\), where \(g_{x}\) is a weighting constant. ## 3 Derivation of the learning rule The learning goal in Equation 4 is achieved through adjustments of the weights \(\omega\) in Equation 5. At each iteration, the error signal \(E\) provides the network with constructive feedback on the adequacy of the predictive signal \(P\). Conventionally, the GDM is employed for weight optimisation, where the change to an arbitrary weight is proportional to the sensitivity of \(E^{2}\) with respect to the sum output \(v\) of the neuron containing that weight: Figure 2: _Neuron connections in the PaM network. Forward propagation of inputs is shown with the left-to-right solid lines highlighted in blue. \(\sigma\) is the sigmoid activation function and \(A_{j}^{\ell}\) denotes the activation of the \(j^{th}\) neuron in layer \(\ell\). \([\omega]_{ij}^{\ell}\) is the weight matrix associated with the \(I\) inputs to this layer. The summation node \(\textcircled{a}\) corresponds to Equation 5. The backpropagation pathway is shown with right-to-left dashed lines highlighted in green. The summation at node \(\textcircled{b}\) and the product at node \(\textcircled{c}\) correspond to Equation 10. The priming pathway is shown with short dashed lines highlighted in red. This is the sign of the resulting value from the backpropagation pathway, see Equation 11. The modulating pathway is shown in long dashed lines that enter each neuron from the environment, see Equation 12. The priming and modulating factors join at node \(\textcircled{a}\), together with the learning rate \(\eta\) and the relevant input to the neuron \(A_{i}^{\ell-1}\), to drive the learning rule, corresponding to Equation 13._ \[\textit{Gradient descent method:}\quad\Delta\omega_{ij}^{\ell}\propto \frac{\partial E^{2}}{\partial v_{j}^{\ell}} \tag{6}\] When differentiating in z-space we assume that the weight changes in the time domain are constant or significantly slower than the changes in the closed-loop system. We seek an expression of this gradient in the context of closed-loop applications. Following a similar approach to Daryanavard and Porr (2020), this gradient is unravelled using the chain rule: \[\frac{\partial E^{2}}{\partial v_{j}^{\ell}}=\overbrace{\frac{\partial E^{2} }{\partial P}}^{Closed-loop}\cdot\overbrace{\frac{\partial P}{\partial v_{j} ^{\ell}}}^{Network} \tag{7}\] The former partial derivative relates solely to the dynamics of the closed-loop platform, whilst the latter relates to the inner connections of the network.
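To fix conventions before deriving the two factors, here is a minimal NumPy sketch (ours, purely illustrative) of the forward pass (5); it omits bias terms, assumes a plain weight-matrix convention, and records the sum outputs \(v^{\ell}\), which the learning rule derived below will need.

```python
import numpy as np

def sigma(v):
    # logistic activation used throughout the network
    return 1.0 / (1.0 + np.exp(-v))

def forward(i_pred, weights, g):
    # eq. (5): A^0 = I' (predictive inputs); A^l = sigma(W^l A^{l-1});
    # the output is the predictive signal P = sum_x g_x * A_x^L
    layer_inputs, sums = [], []
    a = np.asarray(i_pred, dtype=float)
    for W in weights:             # W^l has shape (out_l, in_l)
        layer_inputs.append(a)    # A^{l-1}, the input to layer l
        v = W @ a                 # sum output v^l
        sums.append(v)
        a = sigma(v)
    P = float(np.dot(g, a))
    return P, layer_inputs, sums
```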
**The closed-loop gradient (\(\frac{\partial E^{2}}{\partial P}\)):** Referring to Figure 1 (summation points 1 and 2), the closed-loop expression of the error signal in z-space is derived as: \[E=I_{d}-I=I_{d}-R_{S}R_{E}(\overbrace{ER_{M}}^{=A}+\overbrace{PL_{M}}^{=A^{ \prime}}+Dz^{-T})=\frac{I_{d}-R_{S}R_{E}(PL_{M}+Dz^{-T})}{1+R_{S}R_{E}R_{M}} \tag{8}\] Therefore, differentiating the quadratic of \(E\) with respect to \(P\) yields: \[\textit{Closed-loop gradient:}\quad\frac{\partial E^{2}}{\partial P}=2E \frac{\partial E}{\partial P}=2E\frac{-R_{S}R_{E}L_{M}}{1+R_{S}R_{E}R_{M}} \tag{9}\] The value of the resulting fraction can be found experimentally by substituting \(I_{d}=0\), \(D=0\), and \(P=1\) in Equation 8 and measuring \(E\); this is the closed-loop gain of the system. **The network gradient (\(\frac{\partial P}{\partial v}\)):** From Equation 5, differentiating with respect to an arbitrary sum output \(v_{j}^{\ell}\) results in a recursive expression: \[\textit{Network gradient:}\quad\frac{\partial P}{\partial v_{j}^{\ell}}= \sigma^{\prime}(v_{j}^{\ell})\cdot\Sigma_{k=0}^{K}(\omega_{jk}^{\ell+1}\frac{ \partial P}{\partial v_{k}^{\ell+1}}) \tag{10}\] where \(\sigma^{\prime}\) is the derivative of the logistic function. For the neurons in the final layer we have \(P=A_{x}^{L}\); therefore, this gradient is simply calculated as \(\sigma^{\prime}(v_{x}^{L})\). This is fed through the network using the back-propagation technique. This chain of events is shown by dashed left-arrows in Figure 2, with the weighted sum and the product indicated at nodes \(\textcircled{b}\) and \(\textcircled{c}\), respectively. With that, we have an expression for the gradient found in Equation 6 which can be used for closed-loop learning applications. However, depending on the distribution of the weights and the topology of the neural network, the propagation can suffer from the exploding and vanishing gradient problem (Pascanu et al., 2013; Bengio et al., 1994, 1993). In the following section we derive the PaM learning rule, which is free from this issue. **Prime and Modulate learning:** In this work merely the _sign_ of the gradient \(\frac{\partial E^{2}}{\partial v}\), found in Equation 6, is used for learning. This serves to '_prime_' the weights to later undergo an increase, a decrease, or no change. Hence, it is referred to as the Priming Factor \(F_{P}\), as seen in Figures 1 and 2: \[F_{P}=\delta(E)=\frac{\frac{\partial|E|^{2}}{\partial v}}{|\frac{ \partial|E|^{2}}{\partial v}|}=\begin{cases}+1&\text{\emph{primes} $\omega$ to be increased}\\ 0&\text{\emph{primes} $\omega$ to remain unchanged}\\ -1&\text{\emph{primes} $\omega$ to be decreased}\end{cases} \tag{11}\] Once primed, the magnitude of the weight change is dictated by a secondary signal that contains collective cues gathered from the environment, informing the significance and/or relevance of the learning experience at any given instant of time. \(R_{C}\) and \(L_{C}\), shown in Figure 1, are functions designed to extract relevant cues from the reflex and predictive loops, respectively. The correlation of these cues, indicated at the product point, modulates the magnitude of the weight changes.
Hence, this signal is referred to as the Modulating Factor \(F_{M}\) (Figures 1 and 2): \[F_{M}=|ER_{C}\cdot I^{\prime}L_{C}| \tag{12}\] Finally, we redefine the weight change proportionality in Equation 6 and, with the introduction of the learning rate \(\eta\), establish the update rule for this paradigm (refer to Equations 5, 7, and 9): \[\Delta\omega_{ij}^{\ell}\propto\overbrace{\frac{2E\cdot\frac{-R_ {S}R_{E}L_{M}}{1+R_{S}R_{E}R_{M}}\cdot\frac{\partial P}{\partial v_{j}^{\ell} }}{\left|2E\cdot\frac{-R_{S}R_{E}L_{M}}{1+R_{S}R_{E}R_{M}}\cdot\frac{\partial P}{ \partial v_{j}^{\ell}}\right|}}^{\text{\emph{priming factor}}}\cdot\overbrace{ \left|ER_{C}\cdot I^{\prime}L_{C}\right|}^{\text{\emph{modulating factor}}}\] \[\Delta\omega_{ij}^{\ell}\coloneqq\eta A_{i}^{\ell-1}F_{P}^{\ \ell}F_{M} \tag{13}\] As explained above, \(F_{P}^{\ \ell}\) is back-propagated through the layers, whilst \(F_{M}\) is available globally to all neurons in the network; Figure 2 illustrates the weight changes at the joining node. This finalises the derivation of the learning rule for the Prime and Modulate (PaM) paradigm. ## 4 Experimentation platform: robotic navigation Figure 3 illustrates the experimental setup, where a robot is placed on a canvas with the task of following a path. The robot is fitted with a Raspberry Pi 3B+ (RPi) that hosts the learning algorithm as an external C++ library. The network is initialised with 10 hidden layers for the following set of experiments. The chassis houses an array of Light Dependent Resistors (LDRs), a camera, two wheels with servo motors, and a battery bank to power the system. The equivalent closed-loop symbols of these components, as introduced in Figure 1, are shown here. **Light Dependent Resistors:** These sensors measure the Grey-Scale Values (GSVs) \(I_{i}\) of the surface underneath them and thus monitor the position of the robot with respect to the path. The desired state is a symmetrical stand; therefore, a nonsymmetrical alignment results in the generation of a non-zero error signal: \[E=a(I_{1}-I_{6})+b(I_{2}-I_{5})+c(I_{3}-I_{4})\quad\textit{where: }a>b>c \tag{14}\] The weighting registers the degree of deviation and enables sharp, medium, or gentle steering.
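Before describing the motor actuation, here is a minimal NumPy sketch (ours, purely illustrative) of one PaM weight update, combining the network-gradient recursion (10), the sign-only priming factor (11), and the global modulation (12) into the update rule (13). It pairs with the forward-pass sketch above; the weighting constants \(g_{x}\) are taken as 1, and the sign of the closed-loop gain of (9) is assumed to be folded into `closed_loop_grad`.

```python
import numpy as np

def dsigma(v):
    s = 1.0 / (1.0 + np.exp(-v))
    return s * (1.0 - s)          # derivative of the logistic function

def pam_update(weights, layer_inputs, sums, closed_loop_grad, f_mod, eta):
    # closed_loop_grad: the measured closed-loop gradient term of eq. (9);
    # f_mod: the global modulating factor F_M of eq. (12).
    delta = closed_loop_grad * dsigma(sums[-1])   # output-layer gradient
    for l in reversed(range(len(weights))):
        f_p = np.sign(delta)                      # priming factor F_P^l, eq. (11)
        new_delta = None
        if l > 0:                                 # network-gradient recursion, eq. (10)
            new_delta = (weights[l].T @ delta) * dsigma(sums[l - 1])
        # eq. (13): delta_w = eta * A^{l-1} * F_P^l * F_M
        weights[l] += eta * np.outer(f_p, layer_inputs[l]) * f_mod
        delta = new_delta
    return weights
```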
Servo motors: This error signal is used to recover the desired symmetrical state by altering the speeds of the motors: the reflex term \(dE\) and the network output \(N\) are subtracted from the base speed \(V_{0}\) for one wheel and added to it for the other: \[V_{left}=V_{0}-\overbrace{dE}^{Reflex}-\overbrace{N}^{Network}\qquad V_{right}=V_{0}+\overbrace{dE}^{Reflex}+\overbrace{N}^{Network} \tag{15}\] where \(N\) denotes the output of the network.

Figure 3: _Schematic of the experimental setup: A robot placed on a canvas which navigates the path_

## 5 Results

A trial is deemed successful at the time where the error's moving average over \(10\) seconds, \(0.1\int_{t-10}^{t}|E|dt\), falls below \(1\%\) of its maximum value during the trial. The dashed traces in Figure 4A and B show the error signal \(E\) and its moving average during a GDM trial with a learning rate2 of \(\eta=e^{-5}\); the success condition is reached at time \(t=266.3[s]\). The black traces in these figures show the error signal \(E\) and its moving average during a trial with PaM; the success condition is achieved at time \(t=23.1[s]\). The modulating factor \(F_{M}\) for this trial is shown in Figure 4C. This pair of trials shows a significant improvement in the speed of learning and navigational performance of the robot.
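To make the closed loop around Equation 15 and the success criterion concrete, here is a hedged Python sketch; the base speed \(V_{0}\), the reflex gain \(d\), the time step, and the demonstration error trace are hypothetical stand-ins rather than the robot's actual parameters:

```python
import numpy as np

# Sketch of the motor command (Eq. 15) and the success criterion; V0, the
# reflex gain d, dt, and the demo trace are hypothetical stand-ins.
def wheel_speeds(E, N, V0=1.0, d=0.5):
    V_left = V0 - d * E - N    # reflex and network slow one wheel ...
    V_right = V0 + d * E + N   # ... and speed up the other (Eq. 15)
    return V_left, V_right

def success_time(E_trace, dt=0.1, window=10.0, frac=0.01):
    """First time the 10 s moving average of |E| drops below 1% of its max."""
    n = int(window / dt)
    avg = np.convolve(np.abs(E_trace), np.ones(n) / n, mode="valid")
    hits = np.where(avg < frac * avg.max())[0]
    return (hits[0] + n) * dt if hits.size else None

t = np.arange(0, 60, 0.1)
E_demo = np.exp(-t / 5.0) * np.sin(2 * np.pi * t)  # a decaying error trace
print(success_time(E_demo))                        # convergence time in [s]
```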
Footnote 2: This is the natural number \(e=2.71828\)

A second pair of trials with a higher learning rate of \(\eta=e^{-1}\) is shown in Figure 5; this shows a close to one-shot learning performance for PaM. Such comparative trial pairs were repeated \(50\) times across learning rates of \(\eta=[e^{-5},e^{-4},e^{-3},e^{-2},e^{-1}]\).

Figure 4: _A pair of comparative learning trials with learning rate of \(\eta=e^{-5}\)_

Figure 5: _A pair of comparative learning trials with learning rate of \(\eta=e^{-1}\)_

Figure 6A shows the time taken for the robot to reach the success state during the trials with PaM (black trace) compared to that of the GDM (dashed trace). This is consistent with the trend observed in Figures 4 and 5: the PaM paradigm is significantly faster than its GDM counterpart. Both methods learn faster with higher learning rates; however, the performance of GDM is influenced by changes in the learning rate to a greater degree, as indicated by the deflection of the fitted curve. Figure 6B shows the corresponding error integrals, \(\int_{0}^{t}|E|dt\), for these trials. As anticipated, the accumulated error is greater in trials with lower learning rates. Although the total accumulated error is significantly smaller in the PaM trials, in both methods this quantity is influenced by the learning rate to the same degree, as inferred from the slopes of the fitted curves.

## 6 Discussion

In this work we presented a novel closed-loop algorithm which learns the forward model of a reflex (Porr and Wörgötter, 2003). These models play an important role in robotic and biological motor control (Wolpert and Kawato, 1998; Wolpert et al., 2001; Haruno et al., 2001; Nakanishi and Schaal, 2004), where they guarantee, for example, an optimal trajectory. Previous work in this area used shallow networks (Kulvicius et al., 2007), filter banks (Porr and Wörgötter, 2003) or single layers to perform predictive control (Nakanishi and Schaal, 2004; Maffei et al., 2017), and it was not possible to employ deeper structures. On the other hand, model-free closed-loop learning has been using more complex network structures such as deep learning in combination with Q-learning (Guo et al., 2014; Bansal et al., 2016). Here, we demonstrate that model-based closed-loop learning can also benefit from deep learning, and thus a combination of both is more powerful (Botvinick et al., 2019). However, given the fast learning of this model, it can be prone to converging to a local minimum.

Figure 6: _comparison of learning speed and total error over a range of 5 different learning rates_

Whether or not deep learning is biologically realistic has been debated for many years, where the main issue is the requirement of weight symmetry for the forward and backward passes, which limits its plausibility to only a few layers (Lillicrap et al., 2016). However, if the error is merely transmitted as a sign, this weight symmetry can be relaxed as long as there are interconnections between the top-down and bottom-up pathways guaranteeing the correct sign of the learning (Larkum, 2013). In the context of neuroscience, this means that the bottom-up pathway just controls long-term potentiation (LTP) or long-term depression (LTD), while neuromodulators, in particular serotonin as a rectified reward prediction error, can control the speed of the learning (Iigaya et al., 2018) as a third factor (Li et al., 2016).
Thus, in particular for cortical processing, where serotonin is more prominent than dopamine and deep neuronal structures exist, a combination of local and global learning is a compelling fit for neuroscience, in addition to its application in machine learning and robotic navigation.

## Acknowledgements

We would like to acknowledge Jarez Patel for his valuable intellectual and technical input in the making of the robotic platform.
2309.16596
Local minima in quantum systems
Finding ground states of quantum many-body systems is known to be hard for both classical and quantum computers. As a result, when Nature cools a quantum system in a low-temperature thermal bath, the ground state cannot always be found efficiently. Instead, Nature finds a local minimum of the energy. In this work, we study the problem of finding local minima in quantum systems under thermal perturbations. While local minima are much easier to find than ground states, we show that finding a local minimum is computationally hard for classical computers, even when the task is to output a single-qubit observable at any local minimum. In contrast, we prove that a quantum computer can always find a local minimum efficiently using a thermal gradient descent algorithm that mimics the cooling process in Nature. To establish the classical hardness of finding local minima, we consider a family of two-dimensional Hamiltonians such that any problem solvable by polynomial-time quantum algorithms can be reduced to finding ground states of these Hamiltonians. We prove that for such Hamiltonians, all local minima are global minima. Therefore, assuming quantum computation is more powerful than classical computation, finding local minima is classically hard and quantumly easy.
Chi-Fang Chen, Hsin-Yuan Huang, John Preskill, Leo Zhou
2023-09-28T16:59:05Z
http://arxiv.org/abs/2309.16596v1
# Local minima in quantum systems ###### Abstract Finding ground states of quantum many-body systems is known to be hard for both classical and quantum computers. As a result, when Nature cools a quantum system in a low-temperature thermal bath, the ground state cannot always be found efficiently. Instead, Nature finds a local minimum of the energy. In this work, we study the problem of finding local minima in quantum systems under thermal perturbations. While local minima are much easier to find than ground states, we show that finding a local minimum is computationally hard for classical computers, even when the task is to output a single-qubit observable at any local minimum. In contrast, we prove that a quantum computer can always find a local minimum efficiently using a _thermal gradient descent_ algorithm that mimics the cooling process in Nature. To establish the classical hardness of finding local minima, we consider a family of two-dimensional Hamiltonians such that any problem solvable by polynomial-time quantum algorithms can be reduced to finding ground states of these Hamiltonians. We prove that for such Hamiltonians, all local minima are global minima. Therefore, assuming quantum computation is more powerful than classical computation, finding local minima is classically hard and quantumly easy. ###### Contents * 1 Introduction * 2 Results * 2.1 Local minima under local unitary perturbations * 2.2 Local minima under thermal perturbations * 2.2.1 Finding local minima is easy for quantum computers * 2.2.2 Finding local minima is hard for classical computers * 3 Discussion * A Notations and Preliminaries * A.1 Notations * A.2 Lindbladians * A.3 Thermal Lindbladians * B Local minima in quantum systems * B.1 Local minima in classical optimization * B.1.1 Local minima in Euclidean space * B.1.2 Local minima in general geometrical spaces * B.1.3 Approximate local minima * B.2 Defining local minima in quantum systems * B.2.1 Definition based on thermal perturbations * B.2.2 Definition based on local unitary perturbations * B.3 The problem of finding a local minimum in quantum systems * B.4 The importance of irreversible perturbations * C Characterizing local minima under local unitary perturbations * D Characterizing local minima under thermal perturbations * D.1 Energy gradients * D.2 A sufficient condition and a necessary condition of local minima * D.3 Hamiltonians without suboptimal local minima * E Complexity of finding a local minimum in quantum systems * E.1 Finding a local minimum under local unitary perturbations * E.2 Finding a local minimum under thermal perturbations * F Details of thermal Lindbladians * F.1 Exact form * F.2 Properties of thermal Lindbladians * F.3 Algorithmic primitives for simulating thermal Lindbladians * G A polynomial-time quantum algorithm for finding a local minimum under thermal perturbations (Proof of Theorem 6) * G.1 Cooling by gradient descent * G.2 Quantum thermal gradient descent * G.3 * H Characterizing energy gradients in low-temperature heat bath * H.1 Basic properties of energy gradients in low-temperature bath * H.2 Relating subspace and local gradients to global gradients * H.3 Gradients for commuting Hamiltonians * H.4 Negative gradient condition under perturbations to Hamiltonians * I Energy landscape of an Ising chain * J All local minima are global in BQP-hard Hamiltonians (Proof of Theorem 7) * J.1 Characterizing low energy states of \(\mathbf{H}_{C}\) * J.2 Proof of Theorem 11 * J.3 Explicit calculations for energy gradients * 
J.3.1 Gradient from \(\mathbf{H}_{\rm clock}\) * J.3.2 Gradient from \(\mathbf{H}_{\rm prop}\) * J.3.3 Gradient from \(\mathbf{H}_{\rm in}\) * K Operator Fourier Transform * K.1 Useful properties * K.2 Secular approximation * L Proving monotonicity of energy gradient under level splitting * L.1 Expressing the energy gradient * L.2 Monotonicity of rates * L.3 Perturbation theory of eigenstates and eigenvalues * L.4 Proof of Theorem 12 * L.5 Supplementary calculations * L.6 Monotonicity of gradient on a subspace * L.7 Example where perturbation kills energy gradient

## 1 Introduction

Finding ground states and other low-energy states of quantum many-body systems is a central problem in physics, materials science, and chemistry. To address this problem, many powerful computational methods, such as density functional theory (DFT) [1, 2], quantum Monte Carlo (QMC) [3, 4, 5], variational optimization with tensor network ansatzes [6, 7, 8, 9, 10, 11, 12] or neural network ansatzes [13, 14, 15], and data-driven machine learning approaches [16, 17, 18, 19], have been developed. These methods work well for many physically relevant problem instances but fail badly in other cases. One hopes that scalable fault-tolerant quantum computers will be able to solve a broader array of problem instances, but finding ground states of local Hamiltonians is known to be QMA-hard [20, 21], and therefore is expected to be intractable even for quantum computers in some instances. Indeed, the efficacy of existing quantum algorithms requires additional assumptions that are yet to be justified [22], such as the presence of a trial state with sufficient ground state overlap [23, 24] or a parameterized adiabatic path whose spectral gap remains open [25].

Under the widely accepted conjecture that Nature can be efficiently simulated on a quantum computer, the hardness of finding ground states on quantum computers implies that Nature cannot find ground states in general. When a quantum system with Hamiltonian \(\mathbf{H}\) is placed in a low-temperature thermal bath, the system seeks a local minimum of the energy, which may not be the ground state of \(\mathbf{H}\). For some physical systems, such as spin glasses [26, 27, 28, 29], finding a ground state is indeed known to be computationally hard; such systems, when cooled, almost always find a local minimum instead of the ground state. In these cases, the ground state of the Hamiltonian is physically irrelevant in that it is never observed in experiments.

Motivated by this perspective, in this work we study the problem of finding local minima in quantum many-body systems. For concreteness, we consider an \(n\)-qubit system governed by a local Hamiltonian \(\mathbf{H}\). The central question we are interested in is:

_How tractable is the problem of finding local minima of the energy in quantum systems using classical and quantum computers?_

To begin to answer this question, we need a mathematical definition of local minima in quantum systems. Based on the standard definition in mathematical optimization [30, 31, 32, 33, 34], we consider a local minimum in a quantum system governed by Hamiltonian \(\mathbf{H}\) to be a quantum state such that the expectation value of \(\mathbf{H}\) does not decrease under any small perturbation applied to the state. The local minima of \(\mathbf{H}\) form a subset of the entire quantum state space, which contains the global minima, the ground states of \(\mathbf{H}\). We will consider two definitions of perturbations for defining local minima.
The first one is, in a sense, mathematically natural but turns out to be inadequate for reasons we will explain. The second one is well-motivated physically and turns out to have interesting properties which we will explore. The first definition of perturbations we study in this work is local unitary perturbations, which can be viewed as short-time unitary evolution governed by a sum of few-body Hermitian operators, as might arise in an adaptive variational quantum eigensolver (VQE) [35, 36, 37]. A drawback of this definition is that finding a local minimum becomes so easy that even a classical computer can solve it efficiently. We prove that a random \(n\)-qubit pure state is almost always a local minimum of \(\mathbf{H}\) under local unitary perturbations. Hence, there are \(\exp(\exp(\Omega(n)))\) many local minima that are not global minima in the energy landscape. Because the number of local minima is enormous, finding a local minimum under this definition is _classically easy_. While local unitary perturbations are natural from a mathematical perspective, they are not physically motivated since the evolution of a quantum system interacting with a low-temperature thermal bath is governed by _quantum thermodynamics_ and is inherently nonunitary. Our second definition is inspired by how quantum systems actually seek out local minima in Nature. Under suitable physical assumptions1, perturbations induced by a thermal bath are represented by a master equation defined by a linear combination of _thermal Lindbladians_\(\mathcal{L}_{a}\), each associated with a local system-bath interaction \(\mathbf{A}^{a}\)[38, 39, 40]. In its modern formulation [41, 42], the thermal Lindbladian \(\mathcal{L}_{a}\) depends on the system Hamiltonian \(\mathbf{H}\) and two macroscopic bath quantities: the inverse temperature \(\beta\) and a characteristic time scale \(\tau\). We prove two fundamental results concerning the problem of finding local minima under thermal perturbations. We prove that a quantum computer can efficiently find a local minimum under thermal perturbations using a proposed _quantum thermal gradient descent algorithm_ that mimics Nature's cooling process. And in stark contrast to the definition of a local minimum based on local unitary perturbations, we prove that finding local minima under thermal perturbations is universal for quantum computation and, hence, is _classically hard_ under the standard assumption \(\mathsf{BPP}\neq\mathsf{BQP}\). Footnote 1: Typical assumptions are that the system-bath coupling is weak and the thermal bath is memoryless. To establish the classical hardness of finding local minima under thermal perturbations, we consider geometrically local Hamiltonians on a 2D lattice, such that the ground state encodes the outcome of any efficient quantum computation using a modified version of Kitaev's circuit-to-Hamiltonian construction [43, 20, 44]. The most technically involved result of this work is a theorem stating that for these 2D Hamiltonians, all local minima under low-temperature thermal perturbations are global minima, i.e., ground states. That is, the energy landscape for these Hamiltonians has a nice bowl shape over the entire \(n\)-qubit state space such that quantum gradient descent efficiently finds the ground state. Meanwhile, if a classical computer can efficiently find any local minima under thermal perturbations, then the classical computer can efficiently simulate quantum computation, which is widely believed to be impossible. 
To prove the theorem, we develop a set of techniques for establishing that a Hamiltonian \(\mathbf{H}\) has _no suboptimal local minima_, i.e., all local minima of \(\mathbf{H}\) are global minima. We conclude that local minima under thermal perturbations are, in general, hard to find classically but easy to find on a quantum computer. Hence, the local minima problem provides a quantumly tractable alternative to the ground state problem, which is believed to be hard for both classical and quantum computers. Since ground states of quantum systems are frequently encountered in the laboratory, one wonders whether generic quantum many-body systems relax to their ground states efficiently when cooled because these systems have no suboptimal local minima, similar to the situation in convex optimization [31]. Exploring the shape of the energy landscape of Hamiltonians arising in physics, chemistry, and materials science may suggest new opportunities for solving classically intractable and physically relevant problems using quantum computers.

Figure 1: _(a) Energy landscape under local unitary perturbations._ For any local Hamiltonian \(\mathbf{H}\), there will be doubly exponentially many local minima within the \(n\)-qubit state space, stemming from a large barren plateau. _(b) Energy landscape under thermal perturbations._ For some local Hamiltonians, such as a family of \(\mathsf{BQP}\)-hard Hamiltonians, the energy landscape over the entire \(n\)-qubit state space has a nice bowl shape, and the only local minimum is the global minimum. However, for \(\mathsf{QMA}\)-hard Hamiltonians, the energy landscape necessarily contains many suboptimal local minima. Local unitary perturbations are reversible, while thermal perturbations are irreversible.

## 2 Results

We now present our main results concerning the tractability of finding local minima in quantum systems. The results are organized into the complexity of finding local minima under local unitary perturbations and under thermal perturbations. A collection of notational conventions and some background on thermal Lindbladians can be found in Appendix A.

We define local minima in quantum systems by generalizing a definition commonly used in classical optimization; see a brief review of classical optimization in Appendix B.1. Let \(\mathcal{P}_{\boldsymbol{\alpha}}\) be a perturbation parameterized by a small vector \(\boldsymbol{\alpha}\) that maps quantum states to quantum states. An \(\epsilon\)-approximate local minimum of an \(n\)-qubit Hamiltonian \(\boldsymbol{H}\) under perturbation \(\mathcal{P}\) is a state \(\boldsymbol{\rho}\) with an energy \(\operatorname{tr}(\boldsymbol{H}\boldsymbol{\rho})\) that is an approximate minimum under perturbations, i.e., \[\operatorname{tr}(\boldsymbol{H}\boldsymbol{\rho})\leq\operatorname{tr}(\boldsymbol{H}\mathcal{P}_{\boldsymbol{\alpha}}(\boldsymbol{\rho}))+\epsilon\left\|\boldsymbol{\alpha}\right\| \tag{2.1}\] for all small enough \(\boldsymbol{\alpha}\). The formal definition is given in Appendix B.2.
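As an aside, the defining inequality (2.1) is easy to probe numerically for small systems. The sketch below is our illustration, not the paper's formal procedure: it samples small perturbation vectors and checks whether any of them lowers \(\operatorname{tr}(\boldsymbol{H}\boldsymbol{\rho})\) by more than \(\epsilon\|\boldsymbol{\alpha}\|\), with a depolarizing nudge standing in for a generic perturbation family \(\mathcal{P}_{\boldsymbol{\alpha}}\):

```python
import numpy as np

# Sketch of the epsilon-approximate local-minimum test in Eq. (2.1). The
# depolarizing nudge below is a hypothetical stand-in for the perturbation
# family P_alpha; only the norm of alpha matters for this toy map.
def perturb(rho, alpha):
    d = rho.shape[0]
    lam = min(np.linalg.norm(alpha), 1.0)
    return (1 - lam) * rho + lam * np.eye(d) / d

def is_eps_local_min(H, rho, eps=1e-3, delta=1e-2, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    E0 = np.real(np.trace(H @ rho))
    for _ in range(trials):
        alpha = delta * rng.random(3)            # small vector alpha >= 0
        E1 = np.real(np.trace(H @ perturb(rho, alpha)))
        if E0 > E1 + eps * np.linalg.norm(alpha):
            return False                         # some perturbation descends
    return True

H = np.diag([0.0, 1.0])
print(is_eps_local_min(H, np.diag([1.0, 0.0])))  # True: ground state
print(is_eps_local_min(H, np.diag([0.0, 1.0])))  # False: excited state
```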
We say an algorithm \(\mathcal{A}\) has solved the problem of _finding_ local minima under perturbation \(\mathcal{P}\) if, given any \(n\)-qubit Hamiltonian \(\boldsymbol{H}\), written as a sum of few-qubit Hermitian operators, and any few-qubit observable \(\boldsymbol{O}\), the algorithm \(\mathcal{A}\) can output a real value \(\operatorname{tr}(\boldsymbol{O}\boldsymbol{\rho})\) corresponding to any approximate local minimum \(\boldsymbol{\rho}\) of \(\boldsymbol{H}\) under perturbations \(\mathcal{P}\) up to a small error.2

Footnote 2: Since there could be multiple local minima and we consider finding one instance to be sufficient, this problem is closer to a _relational_ problem than to a _decision_ problem.

### 2.1 Local minima under local unitary perturbations

We first study local minima under local unitary perturbations. Local unitary perturbations are short-time unitary evolutions under a sum of few-body Hermitian operators. A quantum circuit consisting of near-identity two-qubit gates induces a local unitary perturbation. Consider an \(n\)-qubit pure state \(\ket{\psi}\). A local unitary perturbation of \(\ket{\psi}\) is given by \[(\text{local unitary perturbation}):\qquad\ket{\psi}\rightarrow\exp\left(-\mathrm{i}\sum_{a=1}^{m}\alpha_{a}\boldsymbol{h}^{a}\right)\ket{\psi}, \tag{2.2}\] where \(\boldsymbol{h}^{a}\) is a Hermitian operator acting on a few qubits, \(m=\operatorname{poly}(n)\) is the number of such Hermitian operators, and \(\boldsymbol{\alpha}=\sum_{a}\alpha_{a}\boldsymbol{\hat{e}}_{a}\in\mathbb{R}^{m}\) is a vector close to zero. This definition is inspired by adaptive variational quantum eigensolvers [35, 36, 37], and is the state version of the Riemannian geometry of quantum computation defined in [45]. When one variationally minimizes the energy by applying unitary gates, one finds a local minimum under local unitary perturbations.

To understand how easy the problem of finding local minima under local unitary perturbations is, we need to characterize the energy landscape. The following lemma provides a universal characterization of the structure of the energy landscape under the geometry defined by local unitary perturbations for any local Hamiltonian \(\boldsymbol{H}\). The formal statement is given in Lemma C.1, and the proof is given in Appendix C.

**Lemma 2.1** (Barren plateau; informal).: _For any \(n\)-qubit local Hamiltonian \(\boldsymbol{H}\), a random pure \(n\)-qubit state \(\ket{\psi}\) is an approximate local minimum of \(\boldsymbol{H}\) under local unitary perturbations._

Furthermore, the proof of the above lemma illustrates the following physical picture: the energy landscape in the pure state space defined by local unitary perturbations consists of a large barren plateau [46] with doubly-exponentially many approximate local minima having exponentially small energy gradients. Additionally, almost all of the local minima have local properties that are exponentially close to those of the maximally mixed state. As a result, while finding ground states is classically hard, finding local minima under local unitary perturbations is classically trivial.

**Theorem 1** (Classically easy to find local minima under local unitary perturbations; informal).: _The problem of finding approximate local minima of an \(n\)-qubit local Hamiltonian \(\mathbf{H}\) under local unitary perturbations is classically easy._

See Theorem 5 for a more detailed statement, and Appendix E.1 for its proof.
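The concentration behind Lemma 2.1 can be observed numerically on small instances. The following toy sketch is our illustration only, with a hypothetical nearest-neighbour \(ZZ\)-chain Hamiltonian and a single-qubit perturbation direction: it evaluates the first-order energy change \(\bra{\psi}\mathrm{i}[\boldsymbol{h},\boldsymbol{H}]\ket{\psi}\) for a random state and shows it shrinking with the qubit count \(n\):

```python
import numpy as np

# Toy check of Lemma 2.1 (illustration only, not part of the proof): the
# energy gradient of a random pure state along a local unitary direction h,
#   d/da <psi| e^{i a h} H e^{-i a h} |psi> |_{a=0} = <psi| i[h, H] |psi>,
# concentrates around zero as n grows. H is a hypothetical ZZ chain and h a
# single-qubit X, chosen only for simplicity.
rng = np.random.default_rng(0)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_chain(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

for n in [2, 4, 6, 8]:
    I = [np.eye(2)] * n
    H = sum(kron_chain(I[:i] + [Z, Z] + I[i + 2:]) for i in range(n - 1))
    h = kron_chain([X] + I[1:])
    v = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    v /= np.linalg.norm(v)
    grad = np.real(v.conj() @ (1j * (h @ H - H @ h)) @ v)
    print(n, abs(grad))  # shrinks roughly like 2**(-n/2)
```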
The presence of barren plateaus in the energy landscape under local unitary perturbations causes the problem of finding local minima to be classically easy. However, a definition of local minima based on local unitary perturbations is not physically well motivated since Nature cools a physical system via open-system dynamics by coupling to a thermal bath rather than by unitary dynamics.

### 2.2 Local minima under thermal perturbations

In this section, we consider local minima under thermal perturbations induced by a heat bath, formally defined in Appendix B.2. We will show that the classical hardness of finding local minima under thermal perturbations differs markedly from the classical hardness under local unitary perturbations.

When the coupling between an \(n\)-qubit system and a thermal bath is weak, and the bath is memoryless, the complicated joint system-bath Hamiltonian dynamics reduces to a Markovian Lindbladian evolution of the system alone, \(\mathbf{\rho}(t)=\mathrm{e}^{\mathcal{L}t}[\mathbf{\rho}]\). Remarkably, this continuous-time generator \(\mathcal{L}\) can be defined merely by the system Hamiltonian \(\mathbf{H}\), the _jump operators_ \(\mathbf{A}^{a}\) through which the bath interacts with the system, and thermodynamic quantities of the bath: the inverse temperature \(\beta\) and a characteristic time-scale \(\tau\). See Appendix A.3 for an introduction and Appendix F for an in-depth discussion. Under these assumptions, we may effectively consider a thermal perturbation of an \(n\)-qubit state \(\mathbf{\rho}\) to be \[\text{(thermal perturbation):}\qquad\mathbf{\rho}\to\exp\left(\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right)(\mathbf{\rho}), \tag{2.3}\] where \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) is the thermal Lindbladian associated with each jump operator \(\mathbf{A}^{a}\) acting on a few qubits, \(m=\text{poly}(n)\) is the number of jump operators, and \(\mathbf{\alpha}=\sum_{a}\alpha_{a}\hat{\mathbf{e}}_{a}\in\mathbb{R}_{\geq 0}^{m}\) is a _nonnegative_ vector close to zero. Here, the vector is nonnegative because thermodynamic processes are generally irreversible. The irreversibility in thermal perturbations is crucial to ensure that there are fewer than doubly-exponentially many local minima in the energy landscape; see a discussion in Appendix B.4. We may define a local minimum under thermal perturbations to be a state \(\mathbf{\rho}\) with the minimum energy \(\text{tr}(\mathbf{H}\mathbf{\rho})\) under thermal perturbations given in Eq. (2.3). More precisely, we will consider \(\epsilon\)-approximate local minima as in Eq. (2.1).

A central concept that enables us to understand the energy landscape and establishes the computational complexity of finding local minima under thermal perturbations is the _energy gradient operator_, \[\text{(energy gradient operator):}\qquad\sum_{a=1}^{m}\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\hat{\mathbf{e}}_{a}, \tag{2.4}\] where the adjoint \(\mathcal{L}^{\dagger}\) is the Heisenberg-picture Lindbladian, i.e., \(\text{tr}(\mathcal{L}^{\dagger}[\mathbf{O}]\mathbf{\rho})=\text{tr}(\mathbf{O}\mathcal{L}[\mathbf{\rho}])\). The energy gradient operator is a vector of individual gradient operators3 associated with each jump operator \(\mathbf{A}^{a}\).
Indeed, the energy gradient operator naturally emerges by taking an infinitesimal perturbation, i.e., the gradient of the energy \(\operatorname{tr}(\mathbf{H}\mathbf{\rho})\), \[\operatorname{tr}\left(\mathbf{H}\exp\left(\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right)(\mathbf{\rho})\right)=\operatorname{tr}(\mathbf{H}\mathbf{\rho})+\mathbf{\alpha}\cdot\sum_{a=1}^{m}\operatorname{tr}\left(\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\mathbf{\rho}\right)\hat{\mathbf{e}}_{a}+\mathcal{O}(\|\mathbf{\alpha}\|^{2}). \tag{2.5}\] In Appendix D, we describe the formal definition and some properties of the energy gradient. We provide a concrete example showing the sets of local minima for ferromagnetic Ising chains under different longitudinal field strengths in Appendix I.

Given the definition of thermal perturbations, we next study how tractable the problem of finding a local minimum under thermal perturbations is. In stark contrast to finding local minima under local unitary perturbations, which is classically easy, our complexity-theoretic results show that finding local minima under thermal perturbations is both quantumly easy (Section 2.2.1) and classically hard (Section 2.2.2) if we assume the well-accepted conjecture that not all quantum circuits can be efficiently simulated on classical computers (\(\mathsf{BPP}\neq\mathsf{BQP}\)).

#### 2.2.1 Finding local minima is easy for quantum computers

In practice, quantum systems find local minima easily when coupled to a cold thermal bath. Therefore, if our definition of a local minimum properly captures how a quantum system behaves in a cold environment, we expect finding local minima to be quantumly easy. Indeed, in the following theorem, we prove that a quantum computer can always efficiently find a local minimum of \(\mathbf{H}\) under thermal perturbations starting from any initial state.

**Theorem 2** (Quantumly easy to find local minima under thermal perturbations; informal).: _The problem of finding an \(\epsilon\)-approximate local minimum of an \(n\)-qubit local Hamiltonian \(\mathbf{H}\) under thermal perturbations with inverse temperature \(\beta\) and time scale \(\tau\) can be solved in \(\operatorname{poly}(n,1/\epsilon,\beta,\tau)\) quantum computational time._

The formal restatement is given in Theorem 6 and is proven in Appendix G. To establish the theorem, we propose a _quantum thermal gradient descent algorithm_ based on the energy gradient operator. Gradient descent is necessary when the inverse temperature \(\beta\) and time scale \(\tau\) are not infinite. When \(\beta=\tau=\infty\), the energy gradient \(\mathcal{L}_{a}^{\dagger\infty,\infty,\mathbf{H}}(\mathbf{H})\preceq 0\) is nonpositive. In this case, the algorithm can just perform a random walk along random directions because no perturbations increase energy. But when \(\beta\) and \(\tau\) are finite, the energy gradient can be positive. To find a local minimum that is a minimum under all thermal perturbations, the algorithm needs to carefully walk in directions with negative energy gradients. To prove the convergence of quantum thermal gradient descent, we show that every small gradient step decreases the energy. To establish this claim, we derive analytic properties of thermal Lindbladians based on a smoothness bound on the second derivatives in [42].
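To convey the control flow of this algorithm, here is a schematic classical emulation (our sketch, not the quantum implementation): generic dissipators with jump operators \(\mathbf{L}_a\) stand in for the thermal Lindbladians \(\mathcal{L}_a^{\beta,\tau,\mathbf{H}}\), small Euler steps stand in for \(\exp(\alpha\mathcal{L}_a)\), and the loop walks along the direction with the most negative energy gradient until no direction decreases the energy by more than \(\epsilon\):

```python
import numpy as np

# Schematic emulation of thermal gradient descent (Theorem 2). Generic
# dissipators D_L stand in for the thermal Lindbladians; the true algorithm
# applies exp(alpha * L_a) on a quantum computer.
def dissipator(L, rho):
    """D_L[rho] = L rho L^+ - 1/2 {L^+ L, rho}."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def descend(H, jumps, rho, alpha=0.05, eps=1e-6, iters=2000):
    for _ in range(iters):
        # Energy gradient along direction a: tr(H * D_{L_a}[rho]).
        grads = [np.real(np.trace(H @ dissipator(L, rho))) for L in jumps]
        a = int(np.argmin(grads))
        if grads[a] >= -eps:   # no direction lowers the energy any further:
            return rho         # rho is an approximate local minimum
        for _ in range(10):    # small Euler steps emulate exp(alpha * D)
            rho = rho + (alpha / 10) * dissipator(jumps[a], rho)
    return rho

H = np.diag([0.0, 1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])      # lowering operator |0><1|
rho = descend(H, [sm], np.diag([0.0, 1.0]))  # start in the excited state
print(np.real(np.trace(H @ rho)))            # ~0: relaxed to the ground state
```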
To implement a gradient step based on thermal perturbations, we build on a recently developed efficient quantum algorithm that simulates thermal Lindbladian evolution using a quantum circuit augmented by mid-circuit measurements [42]. #### 2.2.2 Finding local minima is hard for classical computers Given that finding local minima under local unitary perturbations is classically trivial, it is natural to wonder whether finding local minima under thermal perturbations is also classically easy. What does the corresponding energy landscape look like? And what computational problems can be solved using quantum thermal gradient descent? As our second main result, we address these questions for a class of geometrically local Hamiltonians \(\{\mathbf{H}_{C}\}\) on two-dimensional lattices, where the ground state encodes the output of quantum circuit \(C\). **Theorem 3** (No suboptimal local minimum in \(\mathsf{BQP}\)-hard Hamiltonians; informal).: _For any quantum circuit \(C\) with size \(|C|\), all approximate local minima of the geometrically local 2D Hamiltonian \(\mathbf{H}_{C}\) under thermal perturbations with inverse temperature \(\beta=\operatorname{poly}(|C|)\) and time scale \(\tau=\operatorname{poly}(|C|)\) are close to the ground state._ This theorem is the most technically involved contribution of this work. The formal statement is given in Theorem 7 and is proven in Appendix J. Conceptually, the landscape of these 2D Hamiltonians has a nice _bowl shape_, like in convex optimization [31]. Therefore, performing thermal gradient descent (Theorem 2) allows us to prepare the ground state starting from an _arbitrary_ initial state. For a choice of inverse temperature that grows polynomially with \(|C|\), thermal fluctuations in the cooling process do not kill the power of quantum computation. As a consequence of this energy landscape characterization, we can show that finding a local minimum under thermal perturbations is classically intractable, assuming quantum computation is more powerful than classical computation. See Theorem 8 for a formal restatement and the proof. **Theorem 4** (Classically hard to find local minima under thermal perturbations; informal).: _Assume the widely believed conjecture that \(\mathsf{BPP}\neq\mathsf{BQP}\). The problem of finding an approximate local minimum of an \(n\)-qubit local Hamiltonian \(\mathbf{H}\) under thermal perturbations is universal for quantum computation and is thus classically hard._ There have been other proposals for solving \(\mathsf{BQP}\)-hard problems by finding suitable quantum states, such as designing a gapped adiabatic path for Hamiltonians to find ground states [44], engineering Lindbladians to have rapid dissipative evolution towards steady states [47] and performing quantum phase estimation on an initial state with high ground-state overlap [24]. These approaches draw inspiration from physics to motivate algorithms for solving problems on analog and digital quantum devices but do not emulate naturally occurring physical processes. In contrast, the problem of finding a local minimum is motivated by ubiquitous physical processes in Nature that produce the low-energy states studied in physics, chemistry, and materials science. 
Furthermore, the local minima problem enjoys the robustness of thermodynamics: one merely needs to specify macroscopic bath quantities \(\beta\) and \(\tau\) without worrying about microscopic details, and the choice of jump operators can be flexible since adding more jumps (even unwanted ones) only _improves_ the gradient and _removes_ suboptimal local minima. We now highlight the proof idea for Theorem 3 as follows. We consider a family of geometrically local \(n\)-qubit Hamiltonians \(\{\mathbf{H}_{C}\}\) in a 2D lattice defined by modifying Kitaev's circuit-to-Hamiltonian construction [43, 20] where the ground state encodes the computation of a quantum circuit \(\mathbf{U}_{C}=\mathbf{U}_{T}\dots\mathbf{U}_{1}\). In particular, we design the ground state of \(\mathbf{H}_{C}\) to be \[\sum_{t=0}^{T}\sqrt{\xi_{t}}\big{(}\mathbf{U}_{t}\cdots\mathbf{U}_{1}\ket{0^{n}} \big{)}\otimes\ket{0^{t}1^{T-t}},\qquad\text{where}\quad\xi_{t}:=\frac{1}{2^{ T}}\binom{T}{t}. \tag{2.6}\] The binomial coefficient \(\xi_{t}\) is our modification of Kitaev's construction and is chosen to ensure that desired properties hold for the spectrum and the energy gradients.4 Therefore, estimating local properties of the ground state of \(\mathbf{H}_{C}\) is equivalent to simulating the quantum circuit \(C\), which is \(\mathsf{BQP}\)-hard. Footnote 4: The binomial distribution ensures the Bohr-frequency gap is sufficiently large, which is central to the robustness of energy gradients under errors due to finite temperature and small perturbations. We believe that the standard circuit-to-Hamiltonian construction also has a large Bohr-frequency gap, but the proof seems more difficult. Given the Hamiltonian \(\mathbf{H}_{C}\), the central challenge is to show that all of its approximate local minima under thermal perturbations are also approximate global minima. This seems daunting to study due to the complex expression for the thermal Lindbladian \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) and the doubly exponentially large space of possible quantum states. Previous studies on circuit-to-Hamiltonian mappings mainly focused on the lowest energy states. Here, we need to worry about potential local minima in all excited states in any superposition. To make progress, we propose a sufficient condition in Appendix D.3 that captures the nice landscape of \(\mathbf{H}_{C}\) and rules out the presence of _any_ suboptimal local minimum. Let \(\mathbf{P}_{G}(\mathbf{H})\) be the projector onto the ground state space of \(\mathbf{H}\). Assume there exists a unit vector \(\hat{\mathbf{\alpha}}\in\mathbb{R}_{\geq 0}^{m}\) and \(r>0\) with \[\text{(negative gradient condition):}\qquad-\sum_{a=1}^{m}\hat{\alpha}_{a} \mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\succeq r(\mathbf{I}-\mathbf{P}_{G }(\mathbf{H})). \tag{2.7}\] This negative gradient condition implies that any state with a small ground state overlap must experience a substantially negative energy gradient, i.e., it must not be a local minimum. To prove that \(\mathbf{H}_{C}\) satisfies the negative gradient condition, we propose a series of lemmas and mathematical techniques for characterizing energy gradients in few-qubit systems, in commuting Hamiltonians, and in subspaces of the Hamiltonian, which are stated in Appendix H and proven in Appendix L. 
These new techniques build on the operator Fourier transform and the secular approximation given in [42] for systematically handling energy uncertainty in thermal Lindbladians, which we review and adapt for our purpose in Appendix K. Using these techniques, we analyze the energy gradient of the entire system perturbatively by considering a sequence of Hamiltonians \[\mathbf{H}_{1}\to\mathbf{H}_{2}\to\mathbf{H}_{3}=\mathbf{H}_{C}\quad\text{with refining ground spaces}\quad\mathbf{P}_{1}\supset\mathbf{P}_{2}\supset\mathbf{P}_{3},\] \[\text{where}\quad\left\|\mathbf{H}_{1}\right\|\gg\left\|\mathbf{H}_{2}-\mathbf{H}_{1}\right\|\gg\left\|\mathbf{H}_{3}-\mathbf{H}_{2}\right\|.\] Through these perturbations, we sequentially rule out local minima in excited states of the Hamiltonians \(\mathbf{H}_{1},\mathbf{H}_{2}\) and, finally, \(\mathbf{H}_{3}=\mathbf{H}_{C}\). For example, we show that the first Hamiltonian \(\mathbf{H}_{1}\) satisfies the negative gradient condition and that the gradient is _stable_ under perturbation going from \(\mathbf{H}_{1}\to\mathbf{H}_{2}\to\mathbf{H}_{3}\). Controlling perturbations of the energy gradient is surprisingly challenging, and it is not _a priori_ clear why this stability property should hold due to multiple (possibly competing) energy scales, including \(\beta^{-1},\tau^{-1}\), the spectral gap, and the Bohr-frequency gap.5 The perturbative errors are not suppressed by the spectral gap of the Hamiltonian as seen in standard settings, but instead by the Bohr-frequency gap, which can be much smaller (see Theorem 12). These techniques allow us to establish the robustness of energy gradients when perturbing a degenerate Hamiltonian with a sufficiently large Bohr-frequency gap.

Footnote 5: Recall that the spectral gap is the minimum non-zero difference between energy eigenvalues. The Bohr-frequency gap is the minimum non-zero difference between differences of energy eigenvalues.

We emphasize that while we proved that \(\mathbf{H}_{C}\) has no suboptimal local minima when \(C\) is a polynomial-size quantum circuit, the same is not true for general local Hamiltonians. Finding the ground state of a local Hamiltonian is a \(\mathsf{QMA}\)-hard problem; hence, we do not expect it to be solved efficiently by the quantum thermal gradient descent algorithm or by any other quantum algorithm. In the case of a quantum circuit that verifies the witness for a problem in \(\mathsf{QMA}\), Kitaev's corresponding local Hamiltonian contains a term, often denoted \(\mathbf{H}_{\text{in}}\), which specifies some of the input qubits and leaves the input qubits corresponding to the witness unspecified, and a term, often denoted \(\mathbf{H}_{\text{out}}\), which checks whether the witness is accepted. Due to the unspecified witness qubits in \(\mathbf{H}_{\text{in}}\), the energy landscape contains a significant number of local minima corresponding to all possible witnesses. Furthermore, most of these local minima correspond to rejected witnesses and are suboptimal because of the energy penalty from \(\mathbf{H}_{\text{out}}\). For these \(\mathsf{QMA}\)-complete Hamiltonians, quantum thermal gradient descent is likely to remain stuck for a long time at a suboptimal local minimum. In \(\mathbf{H}_{C}\), the term \(\mathbf{H}_{\text{in}}\) specifies all input qubits, and the term \(\mathbf{H}_{\text{out}}\) is absent, which greatly simplifies the energy landscape, enabling quantum thermal gradient descent to find the global minimum efficiently.
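Before moving on, the history state of Eq. (2.6) is simple to write down explicitly. The sketch below is our illustration for a hypothetical one-qubit, two-gate circuit (the gates are stand-ins; any \(\mathbf{U}_1,\ldots,\mathbf{U}_T\) can be substituted):

```python
import numpy as np
from math import comb

# Sketch of the history state in Eq. (2.6); the two gates are hypothetical
# stand-ins (a Hadamard, then a Z) for the circuit U_C = U_T ... U_1.
Had = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Zg = np.diag([1.0, -1.0])
Us = [Had, Zg]                                   # U_1, ..., U_T on n = 1 qubit

def history_state(Us, n=1):
    T = len(Us)
    state = np.zeros(2**n * 2**T, dtype=complex)
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1.0                                 # |0^n>
    for t in range(T + 1):
        xi = comb(T, t) / 2**T                   # xi_t = binom(T, t) / 2^T
        clock = int("0" * t + "1" * (T - t), 2)  # clock register |0^t 1^{T-t}>
        for s in range(2**n):                    # tensor system with clock
            state[s * 2**T + clock] += np.sqrt(xi) * psi[s]
        if t < T:
            psi = Us[t] @ psi                    # advance to U_{t+1}...U_1|0^n>
    return state

v = history_state(Us)
print(np.linalg.norm(v))  # 1.0: the binomial weights xi_t sum to one
```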
## 3 Discussion

We have good reasons for believing that scalable fault-tolerant quantum computers will be more powerful than classical computers, but for what problems of practical interest should we expect a superpolynomial quantum advantage? Quantum computers might substantially speed up the task of characterizing properties of ground states for some local Hamiltonians that arise in physics, chemistry, and materials science, but it is not clear how to identify particular problems for which such speedups occur [48]. In some cases, classical methods provide good solutions, while in other cases, the problem is hard even for quantum computers. Here we have focused on an easier problem, namely finding local minima rather than global minima of a Hamiltonian. This problem is very well motivated physically because the task of finding a local minimum under thermal perturbations is routinely carried out by actual physical systems when in contact with a cold thermal bath. We showed that this problem is solved efficiently by a proposed quantum optimization algorithm, the _quantum thermal gradient descent algorithm_. Furthermore, we showed that finding a local minimum is classically hard in general (assuming that \(\mathsf{BPP}\neq\mathsf{BQP}\)). Hence, the local minimum problem is a quantumly tractable alternative to the ground state problem for which superpolynomial quantum advantage can be achieved for some problem instances.

Our main results pertain to perturbations that arise in quantum thermodynamics [38, 39, 40, 41, 42]. We noted that the energy landscape under such thermal perturbations is much nicer than the energy landscape encountered by quantum optimization algorithms relying on local unitary perturbations such as VQE [35, 36, 37]; see Theorems 1 and 3. From an algorithmic design perspective, we are free to choose any perturbation. Indeed, we may modify the thermal Lindbladians to have nicer analytic properties or algorithmic costs [42]. While these synthetic Lindbladians may not simulate Nature, they constitute a broader class of _Monte Carlo_ quantum algorithms [49, 50, 51, 42] that may improve upon Nature. Apart from Lindbladians, other families of perturbations, such as unitary perturbations accompanied by mid-circuit measurements and/or qubit resets, may also yield nice bowl-shaped energy landscapes without suboptimal local minima. Progress on this question could lead to more efficient quantum optimization algorithms for finding low-energy states or for other applications.

There is a plethora of classical algorithms for minimizing energies of quantum systems based on classical variational ansatzes for quantum states, such as tensor networks [6, 7, 8, 9, 10, 11, 12, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61] and neural network quantum states [62, 63, 64, 65, 66, 67, 68, 69, 70, 13, 14, 15]. These classical algorithms find a local minimum within a family of states defined by the classical variational ansatz. However, a local minimum of the energy among the set of states subject to the classical ansatz might not be a local minimum under thermal perturbations. If not, we can load the state found by the classical algorithm into a quantum computer and find a lower energy state by running the quantum thermal gradient descent algorithm. A corollary of our main results states the following. A formal statement is given as Corollary E.1, and its proof is in Appendix E.2.
**Corollary 3.1** (Quantum advantage in finding lower-energy state; informal).: _Assume that not all polynomial-size quantum circuits can be efficiently simulated on a classical computer. Then there are 2D geometrically local Hamiltonians such that given any classical ansatz that allows efficient estimation of single-qubit observables and an output state \(\mathbf{\rho}^{\#}\) of any efficient classical algorithm that optimizes the classical ansatz, running quantum thermal gradient descent starting at \(\mathbf{\rho}^{\#}\) will strictly lower the energy._

The point is that we have proved the existence of local Hamiltonians for which finding a local minimum is quantumly easy and classically hard. For any such Hamiltonian, any quantum state \(\mathbf{\rho}^{\#}\) found by the efficient classical algorithm will not be a local minimum; therefore, quantum thermal gradient descent will be able to descend to a state with strictly lower energy, even with just one gradient step. Furthermore, in many cases, we can evaluate the energy gradient at the classically optimized state \(\mathbf{\rho}^{\#}\) by executing an efficient classical computation. A negative energy gradient confirms that a quantum algorithm starting from \(\mathbf{\rho}^{\#}\) could outperform the classical algorithm.

Many other interesting and challenging questions remain open. Theorem 3 shows that there are no suboptimal local minima in \(\mathsf{BQP}\)-hard \(n\)-qubit Hamiltonians for inverse temperature \(\beta=\text{poly}(n)\). Do there exist \(\mathsf{BQP}\)-hard Hamiltonians with no suboptimal local minimum even for constant temperature, i.e., \(\beta=\mathcal{O}(1)\)? If so, quantum advantage can be achieved by simply coupling a quantum system to a heat bath at a sufficiently low but constant temperature. Our conclusion that finding local minima under thermal perturbations is classically hard relied on the complexity-theoretic conjecture that \(\mathsf{BPP}\neq\mathsf{BQP}\). Can we prove unconditionally that finding local minima is hard for classical algorithms, perhaps within a black-box oracle model? Sometimes, when a system performs a random walk over a large plateau of suboptimal local minima for a sufficiently long time, the system escapes the plateau and reaches the true ground state (see e.g., Case 1 in Appendix I). Could we characterize when ground states can be found efficiently despite having many suboptimal local minima? We have shown that there is a quantum advantage in finding local minima of quantum systems. Might there also be a quantum advantage in finding better local minima in classical optimization problems under some variant of quantum thermal gradient descent?

While ground state problems are hard to solve in general, many experimentally observed quantum systems efficiently relax to their ground states when cooled. This physical phenomenon suggests that perhaps many Hamiltonians of interest in physics, chemistry, and materials science have no suboptimal local minima. We have shown in Theorem 3 that a particular family of \(\mathsf{BQP}\)-hard Hamiltonians has no suboptimal local minima under thermal perturbation. An important future goal is to characterize broader classes of Hamiltonians that have a similarly good energy landscape. Our proposed negative gradient condition suffices to rule out suboptimal local minima (Lemma D.3), but checking this condition for a general Hamiltonian involves highly complex calculations.
It would be helpful to develop more general-purpose and efficient methods to verify this property for specified physical Hamiltonians over spins, fermions, or bosons. We hope the ideas and techniques presented here will yield a deeper understanding of the energy landscapes of quantum systems and point toward promising opportunities for achieving quantum advantage for physically relevant problems. ### Acknowledgments: The authors thank Anurag Anshu, Ryan Babbush, Fernando Brandao, Garnet Chan, Sitan Chen, Soonwon Choi, Jordan Cotler, Jarrod R. McClean, and Mehdi Soleimanifar for valuable input and inspiring discussions. CFC is supported by the AWS Center for Quantum Computing internship. HH is supported by a Google PhD fellowship and a MediaTek Research Young Scholarship. HH acknowledges the visiting associate position at Massachusetts Institute of Technology. LZ acknowledges funding from the Walter Burke Institute for Theoretical Physics at Caltech. JP acknowledges support from the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research (DE-NA0003525, DE-SC0020290), the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator, and the National Science Foundation (PHY-1733907). The Institute for Quantum Information and Matter is an NSF Physics Frontiers Center. ## Appendix A Notations and Preliminaries Before we begin stating and proving our results formally in the rest of the appendices, we present some notations used throughout the paper. We also give a brief review of key concepts in quantum information theory that we utilize in this work. ### Notations This section recapitulates notations, and the reader may skim through this and return as needed. 
* \(\mathbf{H}:=\sum_{i}E_{i}\ket{\psi_{i}}\bra{\psi_{i}}\): the system Hamiltonian and its eigendecomposition
* \(\text{Spec}(\mathbf{H}):=\{E_{i}\}\): the spectrum of \(\mathbf{H}\)
* \(B(\mathbf{H}):=\{E_{i}-E_{j}\mid E_{i},E_{j}\in\text{Spec}(\mathbf{H})\}\): the set of Bohr frequencies \(\nu\) of \(\mathbf{H}\)
* \(\Delta_{\nu}(\mathbf{H}):=\min\{|\nu_{1}-\nu_{2}|:\nu_{1}\neq\nu_{2}\in B(\mathbf{H})\}\): the Bohr-frequency gap
* \(\mathbf{A}(t):=\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{A}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}\): the Heisenberg evolution of an operator \(\mathbf{A}\)
* \(m\): the number of jump operators
* \(\{\mathbf{A}^{a}\}_{a=1}^{m}\): the set of jump operators
* \(\mathbf{\rho}\): a density matrix
* \(\mathcal{L}\): a Lindbladian in the Schrödinger picture
* \(\beta\): the inverse temperature
* \(\hat{\mathbf{A}}(\omega)\equiv\hat{\mathbf{A}}_{f}(\omega):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)\mathrm{e}^{-\mathrm{i}\omega t}\mathbf{A}(t)\,\mathrm{d}t\): the operator Fourier transform of \(\mathbf{A}\) under \(f\)
* \(f_{\tau}(t):=\frac{1}{\sqrt{\tau}}\cdot\mathds{1}(|t|\leq\tau/2)\): the normalized window function with width \(\tau\)
* \(\hat{f}(\omega)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathrm{e}^{-\mathrm{i}\omega t}f(t)\,\mathrm{d}t\): the Fourier transform of a scalar function \(f(t)\)
* \(\mathbf{A}_{\nu}:=\sum_{E_{2}-E_{1}=\nu}\mathbf{P}_{E_{2}}\mathbf{A}\mathbf{P}_{E_{1}}\): the operator \(\mathbf{A}\) at exact Bohr frequency \(\nu\)
* \(\mathbf{I}\): the identity operator
* \(\left\|f\right\|_{p}:=(\int_{-\infty}^{\infty}|f(t)|^{p}\,\mathrm{d}t)^{1/p}\): the \(p\)-norm of a function
* \(\left\|\mathbf{O}\right\|:=\sup_{\ket{\psi},\ket{\phi}}\frac{\bra{\phi}\mathbf{O}\ket{\psi}}{\|\ket{\psi}\|\cdot\|\ket{\phi}\|}\): the operator norm of a matrix \(\mathbf{O}\)
* \(\left\|\mathbf{O}\right\|_{p}\): the Schatten \(p\)-norm of a matrix \(\mathbf{O}\)
* \(\left\|\mathcal{L}\right\|_{p-p}\): the induced \(p\)-\(p\) norm of a superoperator \(\mathcal{L}\)

We write scalars, functions, and vectors in normal font; in particular, the natural constants \(\mathrm{e},\mathrm{i},\pi\) are set in Roman font. We write matrices in bold font \(\mathbf{O}\) and super-operators in curly font \(\mathcal{L}\). Furthermore, we define the indicator function \(\mathds{1}(S)\), which is \(1\) if the statement \(S\) is true and \(0\) otherwise. For any orthogonal projector \(\mathbf{P}\), we denote \(\mathbf{P}^{\perp}=\mathbf{I}-\mathbf{P}\). We say \(\mathbf{A}\stackrel{{ E}}{{\approx}}\mathbf{B}\) when \(\|\mathbf{A}-\mathbf{B}\|\leq E\). To simplify the notation, we often drop \(f\) as a subscript, \(\hat{\mathbf{A}}_{f}(\omega)\equiv\hat{\mathbf{A}}(\omega)\), by which we have chosen the window function \(f(t)=f_{\tau}(t)\).

### Lindbladians

Completely Positive Trace-Preserving (CPTP) maps, also called quantum channels and quantum processes in the literature, correspond to all possible physical operations that could transform quantum states into other quantum states. _Lindbladians_ are infinitesimal generators of CPTP maps.
That is, they map density operators to density operators (even if the map is tensored with the identity), \[\mathcal{I}\otimes\mathrm{e}^{\mathcal{L}t}[\cdot]:\mathcal{S}\to\mathcal{S}\quad\text{for each}\quad t\geq 0.\] (A.1) In the Schrödinger picture, a Lindbladian always has the following structure \[\mathcal{L}[\mathbf{\rho}]=\underbrace{-\mathrm{i}[\mathbf{H},\mathbf{\rho}]}_{\text{coherent term}}+\sum_{j\in J}\bigg{(}\underbrace{\mathbf{L}_{j}\mathbf{\rho}\mathbf{L}_{j}^{\dagger}}_{\text{transition rate}}-\underbrace{\frac{1}{2}\{\mathbf{L}_{j}^{\dagger}\mathbf{L}_{j},\mathbf{\rho}\}}_{\text{decay rate}}\bigg{)}\] (A.2) where the commutator is abbreviated as \([\mathbf{A},\mathbf{B}]=\mathbf{A}\mathbf{B}-\mathbf{B}\mathbf{A}\) and the anti-commutator as \(\{\mathbf{A},\mathbf{B}\}=\mathbf{A}\mathbf{B}+\mathbf{B}\mathbf{A}\). The operator \(\mathbf{H}\) can be any Hermitian matrix, and the set of _Lindblad operators_ \(\{\mathbf{L}_{j}\}_{j\in J}\) can be arbitrary, as the second term always ensures trace preservation.

### Thermal Lindbladians

In this section, we describe the basic parameters that define a _thermal Lindbladian_, i.e., a Lindbladian originating from generic system-bath interactions under a Markovian, weak-coupling assumption [41]. Consider an \(n\)-qubit quantum system governed by a Hamiltonian \(\mathbf{H}\) and a heat bath with inverse temperature \(\beta\) and time scale \(\tau\). The bath interacts with the system via a set of local interaction terms _acting on the system_, \(\{\mathbf{A}^{1},\ldots,\mathbf{A}^{m}\}=\{\mathbf{A}^{a}\}_{a=1}^{m}\), where each operator \(\mathbf{A}^{a}\) acts on a constant number of qubits. Each operator \(\mathbf{A}^{a}\) can be arbitrary (\(\mathbf{A}^{a}\) does not need to be Hermitian nor unitary), but the set should be closed under Hermitian conjugation, \[\{\mathbf{A}^{a}\}_{a=1}^{m}=\{\mathbf{A}^{a\dagger}\}_{a=1}^{m}.\] (A.3) Each \(\mathbf{A}^{a}\) is referred to as a _jump operator_ and induces changes in energy (in the \(n\)-qubit system). For simplicity, we will enforce the following normalization for the interaction strengths, \[\left\|\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|_{\infty}\leq 1\quad\text{for each}\quad a=1,\ldots,m.\] (A.4) For example, we may consider \(m=3n\) and \(\mathbf{A}^{1},\ldots,\mathbf{A}^{m}\) to be all single-qubit Pauli observables \(\mathbf{X}_{i},\mathbf{Y}_{i},\mathbf{Z}_{i}\) for \(i=1,\ldots,n\), which have an interaction strength \(\left\|\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|_{\infty}=1\).

The above parameters determine the thermal Lindbladian governing the equation of motion for the density operator, also referred to as the _coarse-grained master equation_ [41], \[\frac{\mathrm{d}\mathbf{\rho}}{\mathrm{d}t}=-\mathrm{i}[\mathbf{H},\mathbf{\rho}]+\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}(\mathbf{\rho}).\] (A.5) The term \(-\mathrm{i}[\mathbf{H},\mathbf{\rho}]\) corresponds to the Hamiltonian dynamics governed by the system Hamiltonian \(\mathbf{H}\), the (closed-system) Schrödinger equation. The effects of the system-bath interaction are captured by a weighted average of the thermal Lindbladians \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\), defined by each local jump operator \(\mathbf{A}^{a}\), the Hamiltonian \(\mathbf{H}\), and the parameters of the bath \(\beta,\tau\). The weighting is captured by the nonnegative vector \(\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m}\).
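As a quick aside, the structure in Eq. (A.2) is easy to render concretely. The numpy sketch below is our illustration, with a hypothetical two-level Hamiltonian and a single decay jump operator; it applies the generator and verifies trace preservation, \(\operatorname{tr}(\mathcal{L}[\mathbf{\rho}])=0\):

```python
import numpy as np

# Sketch of the Lindblad generator in Eq. (A.2); H and the jump operator
# below are hypothetical toy choices for a two-level system.
def lindbladian(rho, H, Ls):
    """L[rho] = -i[H, rho] + sum_j (L_j rho L_j^+ - 1/2 {L_j^+ L_j, rho})."""
    out = -1j * (H @ rho - rho @ H)              # coherent term
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T              # transition term
        out -= 0.5 * (LdL @ rho + rho @ LdL)     # decay term
    return out

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)                             # a random density matrix
H = np.diag([0.0, 1.0])
Ls = [np.array([[0.0, 1.0], [0.0, 0.0]])]        # decay |1> -> |0>
print(abs(np.trace(lindbladian(rho, H, Ls))))    # ~0: trace is preserved
```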
By varying the \(m\)-dimensional nonnegative vector \(\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m}\), the open system dynamics in Eq. (A.5) have the freedom to tune the interaction strengths for the jump operators. Each \(\alpha_{a}\) corresponds to the interaction strength of a jump operator \(\mathbf{A}^{a}\) and can be effectively absorbed into the set of jump operators by considering

\[\left\{\sqrt{\alpha_{a}}\mathbf{A}^{a}\right\}_{a}.\] (A.6)

The interaction strength \(\alpha_{a}\geq 0\) determines how much each thermal Lindbladian \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) contributes, and the weights can be regarded as a _probabilistic_ mixture. For example, if \(\alpha_{2}\) is set to \(0\), one removes the jump operator \(\mathbf{A}^{2}\) from the system-bath interaction. This flexibility lets us study a (convex) set of thermal perturbations due to system-bath interaction by considering all \(\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m}\). As the system is weakly coupled to the bath, \(\mathbf{\alpha}\) is considered to be a vector with a small \(\left\|\mathbf{\alpha}\right\|_{1}=\sum_{a}\alpha_{a}\).

For each local interaction term \(\mathbf{A}^{a}\), the corresponding thermal Lindbladian \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) is an open system evolution with Lindblad jump operators \(\{\hat{\mathbf{A}}^{a}(\omega)\}_{\omega}\) for all possible energy differences \(\omega\in(-\infty,\infty)\). Each Lindblad jump operator \(\hat{\mathbf{A}}^{a}(\omega)\) is a restricted version of the system-bath interaction term \(\mathbf{A}^{a}\) that only contains transitions between eigenstates of \(\mathbf{H}\) whose associated eigenvalues, i.e., energies, differ by approximately \(\omega\). The inverse temperature \(\beta\) sets the transition weight \(\gamma_{\beta}(\omega)\), which determines the probability of occurrence for each Lindblad operator \(\hat{\mathbf{A}}^{a}(\omega)\). For \(\beta>0\), the transition weight \(\gamma_{\beta}(\omega)\) favors cooling (\(\omega<0\)) over heating (\(\omega>0\)) transitions. The time scale \(\tau\) sets the resolution (\(1/\tau\)) at which \(\hat{\mathbf{A}}(\omega)\) identifies the energy differences between the eigenstates. The exact form of thermal Lindbladians is relatively complex, so we defer further discussion to Appendix F, where it is needed for the full technical proof.

## Appendix B Local minima in quantum systems

In this appendix, we will introduce local minima in classical optimization, extend the definition to quantum systems, and formalize the problem of finding a local minimum in quantum systems.

### Local minima in classical optimization

In this subsection, we describe the definition of local minima in finite-dimensional Euclidean spaces, introduce a direct generalization to geometries with tangent spaces and exponential maps (such as circles and spheres), and discuss the concept of approximate local minima.

#### B.1.1 Local minima in Euclidean space

In classical optimization, one considers a real-valued function \(h(\mathbf{x}):\mathcal{X}\to\mathbb{R}\) over a domain \(\mathcal{X}\subseteq\mathbb{R}^{n}\) consisting of \(n\)-dimensional vectors, and the goal is to find the global minimum of \(h(\mathbf{x})\),

\[\mathbf{x}^{*}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{X}}h(\mathbf{x}).\] (B.1)

Finding the global minimum is already NP-hard even when \(h(\mathbf{x})\) is a quadratic function [30].
Instead of finding a global minimum, one typically resorts to finding a local minimum \(\mathbf{x}^{\#}\), which is a minimum within a neighborhood around \(\mathbf{x}^{\#}\). The definition of a local minimum \(\mathbf{x}^{\#}\) is that there exists a distance \(\delta>0\), such that

\[h(\mathbf{x}^{\#}+\mathbf{\alpha})\geq h(\mathbf{x}^{\#}),\quad\text{for all}\quad\|\mathbf{\alpha}\|\leq\delta\quad\text{and}\quad\mathbf{x}^{\#}+\mathbf{\alpha}\in\mathcal{X}.\] (B.2)

Here the vector \(\mathbf{\alpha}\) is of the same dimension as \(\mathbf{x}^{\#}\). We will refer to the above as an _exact local minimum_ because every point in the neighborhood must have value at least \(h(\mathbf{x}^{\#})\). When there is an \(\mathbf{\alpha}\) such that \(h(\mathbf{x}^{\#}+\mathbf{\alpha})\) is lower than \(h(\mathbf{x}^{\#})\) by even an extremely small value, \(\mathbf{x}^{\#}\) is not an exact local minimum. We will also define the approximate local minimum that relaxes this in Appendix B.1.3.

#### B.1.2 Local minima in general geometrical spaces

The concept of a local minimum can be directly generalized to any geometry with tangent spaces and exponential maps, such as spheres, density matrices, unitaries, and more general Riemannian manifolds. Consider the tangent space \(T_{\mathbf{x}}\) and the exponential map \(\exp_{\mathbf{x}}\) of a point \(\mathbf{x}\). In a physical picture, the tangent space \(T_{\mathbf{x}}\) is the space consisting of all vectors \(\mathbf{\alpha}\) that describe the direction \(\hat{\mathbf{\alpha}}\) and magnitude \(\|\mathbf{\alpha}\|\) for a particle moving at point \(\mathbf{x}\) on a manifold, and the exponential map \(\exp_{\mathbf{x}}\) is a function that takes in the vector \(\mathbf{\alpha}\in T_{\mathbf{x}}\) encompassing the direction and magnitude and outputs the point after moving \(\mathbf{x}\) in the direction \(\hat{\mathbf{\alpha}}\) with a magnitude \(\|\mathbf{\alpha}\|\).6 To visualize these concepts, we give two warm-up examples in the following.

Footnote 6: Strictly speaking, to define the exponential map, we need to know how to “transport” the vector \(\mathbf{\alpha}\) along itself. Fortunately, this is natural for all cases we consider.

Euclidean space: In an \(m\)-dimensional Euclidean space, \(\forall\mathbf{x}\in\mathcal{X}=\mathbb{R}^{m}\), the tangent space is

\[T_{\mathbf{x}}=\{\mathbf{\alpha}\in\mathbb{R}^{m}\}.\] (B.3)

Given \(\mathbf{\alpha}\in T_{\mathbf{x}}\), when we move \(\mathbf{x}\) in the direction \(\hat{\mathbf{\alpha}}\) with a magnitude \(\|\mathbf{\alpha}\|\), we obtain

\[\exp_{\mathbf{x}}(\mathbf{\alpha})=\mathbf{x}+\mathbf{\alpha}.\] (B.4)

We can see that this matches our physical picture.

Particle moving counter-clockwise on a circle: As another warm-up, let us consider a unit circle \(\mathcal{X}=\{\mathbf{x}\in\mathbb{R}^{2}\,|\,\|\mathbf{x}\|=1\}\) where a particle can only move counter-clockwise. In this example, the tangent space \(T_{\mathbf{x}}\) of a unit vector \(\mathbf{x}\in\mathbb{R}^{2}\) with \(\|\mathbf{x}\|=1\) is a one-dimensional ray,

\[T_{\mathbf{x}}=\{\alpha\in\mathbb{R}\,\,|\,\,\alpha\geq 0\}.\] (B.5)

The condition \(\alpha\geq 0\) comes from the constraint that the particle can only move counter-clockwise (unidirectional rather than bidirectional).
When we move \(\mathbf{x}\) according to \(\alpha\in T_{\mathbf{x}}\), we obtain

\[\exp_{\mathbf{x}}(\alpha)=\exp\left(\begin{pmatrix}0&-\alpha\\ \alpha&0\end{pmatrix}\right)\mathbf{x}=\begin{pmatrix}\cos\alpha&-\sin\alpha\\ \sin\alpha&\cos\alpha\end{pmatrix}\mathbf{x}.\] (B.6)

The larger \(\alpha\) is, the larger the rotation. Using the language of tangent spaces and exponential maps, the statement that \(\mathbf{x}^{\#}\in\mathcal{X}\) is an exact local minimum of a function \(h\) is equivalent to the existence of a \(\delta>0\) such that

\[h(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha}))\geq h(\mathbf{x}^{\#}),\quad\text{for all}\quad\mathbf{\alpha}\in T_{\mathbf{x}^{\#}},\;\|\mathbf{\alpha}\|\leq\delta.\] (B.7)

For the case of optimizing over an \(m\)-dimensional Euclidean space, the condition of Eq. (B.7) becomes the same as Eq. (B.2), noting \(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha})=\mathbf{x}^{\#}+\mathbf{\alpha}\). However, the condition can be quite different when the tangent space changes. For example, consider a 2-dimensional Euclidean space and the function \(h(\mathbf{x})=\left\|\mathbf{x}\right\|^{2}\). There is exactly one exact local minimum, \(\mathbf{x}^{\#}=0\). However, if the particle can only move to the right, the tangent space becomes \(T_{\mathbf{x}}=\{\mathbf{\alpha}\in\mathbb{R}^{2}\mid\mathbf{\alpha}_{1}\geq 0\}\) and every point \(\mathbf{x}\) with \(\mathbf{x}_{1}\geq 0\) and \(\mathbf{x}_{2}=0\) is an exact local minimum. Modifying the tangent space changes the definition of neighborhood; hence, the set of local minima changes accordingly. We will consider the most suitable norm \(\|\mathbf{\alpha}\|\) for each context.

#### B.1.3 Approximate local minima

While global minima are computationally hard to find, exact local minima are not much easier. If there is an \(\mathbf{\alpha}\) such that \(h(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha}))\) is lower than \(h(\mathbf{x}^{\#})\) by an extremely small value, \(\mathbf{x}^{\#}\) is not considered to be an exact local minimum. The need to resolve such extremely small values means that finding an exact local minimum is still computationally hard [34]. Furthermore, exact local minima are very sensitive to small perturbations of the function \(h\). Therefore, it is desirable to define approximate local minima to promote computational efficiency and robustness to small perturbations. We consider the following principle for defining \(\epsilon\)-approximate local minima: _if a function \(h^{*}\) is very close to \(h\), then an exact local minimum of \(h^{*}\) is an approximate local minimum of \(h\)._ The formal definition is given below.

**Definition 1**.: _(\(\epsilon\)-approximate local minima) Given a space \(\mathcal{X}\) with tangent spaces \(T_{x}\) and exponential maps \(\exp_{x}\) for all \(x\in\mathcal{X}\), and a function \(h\): \(\mathcal{X}\to\mathbb{R}\)._
_Then \(\mathbf{x}^{\#}\) is an \(\epsilon\)-approximate local minimum of \(h\) if \(\mathbf{x}^{\#}\) is an exact local minimum of some function \(h^{*}\), where \(\Delta(\mathbf{x}):=h^{*}(\mathbf{x})-h(\mathbf{x})\) satisfies_

\[|\Delta(\mathbf{x})|\leq\epsilon\quad\text{for each}\quad\mathbf{x}\in\mathcal{X}\quad\text{($\epsilon$-bounded)},\] (B.8)

\[|\Delta(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha}))|\leq\epsilon\left\|\mathbf{\alpha}\right\|\quad\text{for each}\quad\mathbf{\alpha}\in T_{\mathbf{x}^{\#}}\quad\text{($\epsilon$-Lipschitz around $\mathbf{x}^{\#}$)}.\] (B.9)

_An \((\epsilon=0)\)-approximate local minimum of \(h\) is an exact local minimum of \(h\)._

Under this definition, \(\mathbf{x}^{\#}\) can remain an approximate local minimum of \(h\) even if there is an \(\mathbf{x}\) in the neighborhood of \(\mathbf{x}^{\#}\) such that \(h(\mathbf{x})\) is lower than \(h(\mathbf{x}^{\#})\) by an extremely small value. We also give the following equivalent characterization based on looking at the local neighborhood.

**Proposition B.1**.: _(An equivalent characterization of \(\epsilon\)-approximate local minima) \(\mathbf{x}^{\#}\in\mathcal{X}\) is an \(\epsilon\)-approximate local minimum of the function \(h\) if and only if there exists a distance \(\delta>0\) such that_

\[h(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha}))\geq h(\mathbf{x}^{\#})-\epsilon\left\|\mathbf{\alpha}\right\|\quad\text{for each}\quad\mathbf{\alpha}\in T_{\mathbf{x}^{\#}},\|\mathbf{\alpha}\|\leq\delta,\] (B.10)

_i.e., all the neighboring points can be at most \(\epsilon\left\|\mathbf{\alpha}\right\|\) lower than the point \(\mathbf{x}^{\#}\)._

Proof.: For the "only if" statement, we recall the definition of an exact local minimum: there exists \(\delta>0\) such that \(h^{*}(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha}))-h^{*}(\mathbf{x}^{\#})\geq 0\) for all \(\mathbf{\alpha}\in T_{\mathbf{x}^{\#}}\) and \(\|\mathbf{\alpha}\|\leq\delta\). Note that the \(\epsilon\)-Lipschitz condition around \(\mathbf{x}^{\#}\) applied at \(\mathbf{\alpha}=0\) forces \(\Delta(\mathbf{x}^{\#})=0\). Using the \(\epsilon\)-Lipschitz condition once more, we have

\[0\leq h^{*}(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha}))-h^{*}(\mathbf{x}^{\#})=h(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha}))-h(\mathbf{x}^{\#})+\Delta(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha}))\] (B.11)

\[\leq h(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha}))-h(\mathbf{x}^{\#})+\epsilon\left\|\mathbf{\alpha}\right\|,\] (B.12)

which rearranges to Eq. (B.10). This concludes the "only if" statement. For the "if" statement, consider \(\delta\) to be at most \(1\) and let

\[\Delta(\mathbf{x}):=\begin{cases}h(\mathbf{x}^{\#})-h(\mathbf{x}),&\text{if }\mathbf{x}=\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha})\text{ for some }\mathbf{\alpha}\in T_{\mathbf{x}^{\#}},\left\|\mathbf{\alpha}\right\|\leq\delta,\\ 0,&\text{otherwise}.\end{cases}\] (B.13)

Then \(\mathbf{x}^{\#}\) is an exact local minimum of \(h^{*}(\mathbf{x}):=h(\mathbf{x})+\Delta(\mathbf{x})\). Furthermore, because \(h(\mathbf{x}^{\#})-h(\exp_{\mathbf{x}^{\#}}(\mathbf{\alpha}))\leq\epsilon\left\|\mathbf{\alpha}\right\|\leq\epsilon\), both the \(\epsilon\)-bounded and the \(\epsilon\)-Lipschitz condition around \(\mathbf{x}^{\#}\) are satisfied by \(\Delta(\mathbf{x})\).
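As a sanity check of Prop. B.1, the following sketch (a toy one-dimensional example of our own choosing, with \(\exp_{\mathbf{x}}(\mathbf{\alpha})=\mathbf{x}+\mathbf{\alpha}\) as in Eq. (B.4)) tests condition (B.10) on a fine grid: the exact minimizer of \(h(x)=x^{4}-x^{2}\) passes with \(\epsilon=0\), while a slightly perturbed point fails at \(\epsilon=0\) but passes once \(\epsilon\) exceeds the local gradient magnitude.

```python
import numpy as np

# Toy check of the characterization (B.10) in Euclidean space, where
# exp_x(alpha) = x + alpha. The function h and the test points are
# illustrative placeholders.
h = lambda x: x**4 - x**2          # exact local minima at x = +-1/sqrt(2)

def is_approx_local_min(x, eps, delta, num=10_000):
    """Check h(x + a) >= h(x) - eps*|a| for all |a| <= delta (on a grid)."""
    alphas = np.linspace(-delta, delta, num)
    return bool(np.all(h(x + alphas) >= h(x) - eps * np.abs(alphas)))

x_exact = 1 / np.sqrt(2)
x_near = x_exact + 1e-3            # slightly perturbed point

print(is_approx_local_min(x_exact, eps=0.0, delta=0.1))   # True: exact local min
print(is_approx_local_min(x_near, eps=0.0, delta=0.1))    # False: gradient ~4e-3
print(is_approx_local_min(x_near, eps=0.01, delta=0.1))   # True: eps-approximate
```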
### Defining local minima in quantum systems

To define local minima, we need to consider the domain \(\mathcal{X}\) of elements \(\mathbf{x}\in\mathcal{X}\), the optimization function \(h(\mathbf{x})\), the tangent space \(T_{\mathbf{x}}\) consisting of all possible directions and magnitudes to move an element \(\mathbf{x}\), where \(\mathbf{\alpha}\in T_{\mathbf{x}}\) encompasses the direction \(\hat{\mathbf{\alpha}}\) and the magnitude \(\left\|\mathbf{\alpha}\right\|\), and the exponential map \(\exp_{\mathbf{x}}(\mathbf{\alpha})\) that describes the resulting element after moving \(\mathbf{x}\) under \(\mathbf{\alpha}\). In the following, we present two settings. The first setting in Appendix B.2.1 considers general quantum states that can evolve under thermodynamic processes induced by interacting with a low-temperature heat bath. This setting defines local minima under thermal perturbations. The second setting in Appendix B.2.2 considers pure quantum states that can move under any unitary generated by a set of local Hermitian operators (e.g., all two-qubit Pauli observables \(\mathbf{P}_{i}\otimes\mathbf{Q}_{j}\), where \(\mathbf{P},\mathbf{Q}\in\{\mathbf{X},\mathbf{Y},\mathbf{Z}\}\)). This setting defines local minima under local unitary perturbations.

#### B.2.1 Definition based on thermal perturbations

In quantum mechanics, the central optimization problem considers a function \(h\) defined by the Hamiltonian \(\mathbf{H}\) of an \(n\)-qubit quantum system,

\[h(\mathbf{\rho})=\text{tr}(\mathbf{H}\mathbf{\rho}),\] (B.14)

which is the average energy of an \(n\)-qubit quantum state \(\mathbf{\rho}\). The ground states \(\mathbf{\rho}^{(g)}\) of \(\mathbf{H}\) are the global minima of the optimization over \(h(\mathbf{\rho})=\text{tr}(\mathbf{H}\mathbf{\rho})\) in the quantum state space, i.e., the set of density operators (trace-one positive semidefinite matrices),

\[\mathcal{S}_{2^{n}}:=\{\mathbf{\rho}\in\mathbb{C}^{2^{n}\times 2^{n}}\ |\ \mathbf{\rho}^{\dagger}=\mathbf{\rho},\,\mathbf{\rho}\succeq 0,\,\text{tr}(\mathbf{\rho})=1\}.\] (B.15)

When the quantum system is placed in a heat bath with inverse temperature \(\beta\in[0,\infty]\), time scale \(\tau\in[0,\infty]\), and system-bath interactions based on \(m\) local jump operators7 \(\mathbf{A}^{1},\ldots,\mathbf{A}^{m}\), the system dynamics are effectively described by the _thermal Lindbladians_ \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\),

Footnote 7: A local operator \(\mathbf{A}^{a}\) acts on \(\mathcal{O}(1)\) qubits, but the set of qubits that \(\mathbf{A}^{a}\) acts on may not be geometrically close.

\[\frac{\text{d}\mathbf{\rho}(t)}{\text{d}t}=-\text{i}[\mathbf{H},\mathbf{\rho}]+\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}[\mathbf{\rho}],\] (B.16)

where \(\alpha_{a}\geq 0\) for each \(a\). After time \(t\), the initial quantum state \(\mathbf{\rho}\) will evolve to

\[\mathbf{\rho}(t)=\exp\left(-\text{i}t[\mathbf{H},\cdot]+\sum_{a=1}^{m}t\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right)(\mathbf{\rho}).\] (B.17)

Each term \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) is the thermal Lindbladian associated with a local jump operator \(\mathbf{A}^{a}\) (recall that a local operator \(\mathbf{A}^{a}\) acts on a constant number of qubits). See Appendix A.3 for a brief review of thermal Lindbladians, and Appendix F for their exact form. The coefficient \(\alpha_{a}\geq 0\) corresponds to the interaction strength of each jump operator \(\mathbf{A}^{a}\).
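Numerically, the superoperator exponential in Eq. (B.17) can be evaluated by vectorization: a map \(\mathbf{\rho}\mapsto\mathbf{X}\mathbf{\rho}\mathbf{Y}\) becomes the matrix \(\mathbf{X}\otimes\mathbf{Y}^{\mathsf{T}}\) acting on the row-major \(\mathrm{vec}(\mathbf{\rho})\). The sketch below is our own illustration of this mechanic with a single generic placeholder dissipator; it is not the thermal Lindbladian of Appendix F, whose exact form is deferred there.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 4  # placeholder dimension (two qubits)

M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (M + M.conj().T) / 2                       # placeholder Hamiltonian
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # placeholder jump

I = np.eye(d)
left = lambda X: np.kron(X, I)    # vec(X rho) = (X (x) I)   vec(rho)
right = lambda X: np.kron(I, X.T) # vec(rho X) = (I (x) X^T) vec(rho)

AdA = A.conj().T @ A
comm = -1j * (left(H) - right(H))              # -i[H, .] in Eq. (B.16)
diss = (left(A) @ right(A.conj().T)            # generic dissipator with jump A,
        - 0.5 * (left(AdA) + right(AdA)))      # NOT the thermal form of App. F

t, alpha = 1.0, 0.1                            # time and interaction strength
rho0 = np.eye(d) / d
rho_t = (expm(t * (comm + alpha * diss)) @ rho0.reshape(-1)).reshape(d, d)
print(np.trace(rho_t).real)                    # ~1.0: the map preserves trace
```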
As \(\alpha_{a}<0\) would be equivalent to reversing time, we cannot have \(\alpha_{a}<0\): thermodynamic processes are irreversible in general. Different interaction strength vectors \(\mathbf{\alpha}\) correspond to different system-bath interactions, and the resulting thermodynamics can differ. Because \(\mathbf{\alpha}\) describes the probability of each jump occurring, the natural norm \(\left\|\mathbf{\alpha}\right\|\) for the interaction strength vector \(\mathbf{\alpha}\) is \(\left\|\mathbf{\alpha}\right\|_{1}\). We denote \(\hat{\mathbf{\alpha}}=\mathbf{\alpha}/\left\|\mathbf{\alpha}\right\|_{1}\) as the unit vector.

The thermodynamic equation in Eq. (B.16) consists of a fast-rotating term \(-\mathrm{i}[\mathbf{H},\cdot]\) due to the system Hamiltonian \(\mathbf{H}\) that keeps the energy \(\mathrm{tr}(\mathbf{H}\mathbf{\rho})\) invariant, and the thermal perturbation term \(\sum_{a}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) due to the heat bath that cools the system. Because \(-\mathrm{i}[\mathbf{H},\cdot]\) keeps the energy constant, only the thermal perturbation term \(\sum_{a}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) is relevant for minimizing the energy \(h(\mathbf{\rho})=\mathrm{tr}(\mathbf{H}\mathbf{\rho})\). For notational simplicity, we will only consider contributions from the thermal perturbations and absorb the \(t\) dependence in \(t\alpha_{a}\) into \(\alpha_{a}\), since \(\mathbf{\alpha}\) is an arbitrary nonnegative vector. Together, the thermal perturbation on \(\mathbf{\rho}\) due to a heat bath with inverse temperature \(\beta\), time scale \(\tau\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\) can be written as

\[\mathbf{\rho}\rightarrow\exp\left(\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right)(\mathbf{\rho})\] (B.18)

for a nonnegative vector \(\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m}\) that combines the interaction strength vector and the time \(t\). A dictionary between all the relevant functions and variables for optimizing \(\mathrm{tr}(\mathbf{H}\mathbf{\rho})\) in \(n\)-qubit quantum systems under a heat bath with inverse temperature \(\beta\) and time scale \(\tau\) and optimizing \(h(\mathbf{x})\) in an \(n\)-dimensional Euclidean space is given as follows.

\[\mathcal{X}=\mathbb{R}^{n}\quad\leftrightarrow\quad\mathcal{X}=\mathcal{S}_{2^{n}},\qquad\text{(domain)}\] (B.19)

\[\mathbf{x}\in\mathbb{R}^{n}\quad\leftrightarrow\quad\mathbf{\rho}\in\mathcal{S}_{2^{n}},\qquad\text{(an element)}\] (B.20)

\[h(\mathbf{x})\quad\leftrightarrow\quad h(\mathbf{\rho})=\mathrm{tr}(\mathbf{H}\mathbf{\rho}),\qquad\text{(optimization function)}\] (B.21)

\[T_{\mathbf{x}}=\{\mathbf{\alpha}\in\mathbb{R}^{n}\}\quad\leftrightarrow\quad\{\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m}\},\qquad\text{(tangent space)}\] (B.22)

\[\exp_{\mathbf{x}}(\mathbf{\alpha})=\mathbf{x}+\mathbf{\alpha}\quad\leftrightarrow\quad\exp\left(\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right)(\mathbf{\rho})\qquad\text{(exponential map)}.\] (B.23)

The formal definition of tangent spaces and exponential maps via Lindbladians is given below.

**Definition 2** (Tangent spaces of quantum states in a heat bath).: _Consider an \(n\)-qubit quantum state \(\mathbf{\rho}\), an \(n\)-qubit Hamiltonian \(\mathbf{H}\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a=1}^{m}\), and parameters \(\beta,\tau\geq 0\)._
_The tangent space \(T_{\mathbf{\rho}}^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a=1}^{m}}\) under a heat bath with an inverse temperature \(\beta\), a time scale \(\tau\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\) is defined as_

\[T_{\mathbf{\rho}}^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a=1}^{m}}:=\left\{\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m}\right\},\] (B.24)

_which is independent of \(\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a=1}^{m}\). The exponential map \(\exp_{\mathbf{\rho}}^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a}}\) is defined as_

\[\exp_{\mathbf{\rho}}^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a}}(\mathbf{\alpha}):=\exp\left(\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right)(\mathbf{\rho}).\] (B.25)

With the definition of tangent spaces and exponential maps, we can define \(\epsilon\)-approximate local minima similarly to the classical case in Eq. (B.10). We consider the natural choice of \(\left\lVert\cdot\right\rVert_{1}\) for the nonnegative vector \(\mathbf{\alpha}\) encompassing the probability of each jump. Our results remain qualitatively the same for other reasonable vector norms, such as the Euclidean norm \(\left\lVert\cdot\right\rVert_{2}\) or the \(\ell_{p}\) norm \(\left\lVert\cdot\right\rVert_{p}\).

**Definition 3** (Local minima under thermal perturbations).: _Given an \(n\)-qubit Hamiltonian \(\mathbf{H}\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a=1}^{m}\), and parameters \(\beta,\tau\geq 0\), an \(n\)-qubit state \(\mathbf{\rho}\in\mathcal{S}_{2^{n}}\) is an \(\epsilon\)-approximate local minimum of \(\mathbf{H}\) under thermal perturbations with an inverse temperature \(\beta\), a time scale \(\tau\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\) if there is a \(\delta>0\) such that_

\[\operatorname{tr}\left(\mathbf{H}\exp_{\mathbf{\rho}}^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a}}(\mathbf{\alpha})\right)\geq\operatorname{tr}(\mathbf{H}\mathbf{\rho})-\epsilon\left\lVert\mathbf{\alpha}\right\rVert_{1}\quad\text{for each}\quad\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m},\left\lVert\mathbf{\alpha}\right\rVert_{1}\leq\delta,\] (B.26)

_i.e., all the neighboring points can at most be \(\epsilon\left\lVert\mathbf{\alpha}\right\rVert_{1}\) lower than the point \(\mathbf{\rho}\)._

A central concept we will be using for characterizing local minima under thermal perturbations is the energy gradient. The energy gradient at an \(n\)-qubit state \(\mathbf{\rho}\) under thermal perturbations is determined by the following state-independent operator,

\[\text{(energy gradient operator):}\qquad\sum_{a=1}^{m}\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\hat{\mathbf{e}}_{a},\] (B.27)

where \(\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}\) denotes the Hermitian conjugate (adjoint) of \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\). The energy gradient operator is a vector of Hermitian observables.
The terminology stems from the fact that evaluating the energy gradient operator on a state \(\mathbf{\rho}\) gives the energy gradient at the state \(\mathbf{\rho}\),

\[\operatorname{tr}\left(\mathbf{H}\exp\left(\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right)(\mathbf{\rho})\right)=\operatorname{tr}(\mathbf{H}\mathbf{\rho})+\mathbf{\alpha}\cdot\sum_{a=1}^{m}\operatorname{tr}\left(\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\mathbf{\rho}\right)\hat{\mathbf{e}}_{a}+\mathcal{O}(\|\mathbf{\alpha}\|^{2}).\] (B.28)

In Appendix D, we provide more discussion of the energy gradient. Thermal perturbations depend on how the quantum system interacts with the heat bath. The local minima defined above are local minima of the Hamiltonian \(\mathbf{H}\) under thermal perturbations induced by all system-bath interactions generated by the jump operators \(\{\mathbf{A}^{a}\}_{a}\).

**Remark 1** (Thermodynamics at local minima).: Given a specific system-bath interaction, inverse temperature \(\beta\), and time scale \(\tau\), there could still be thermodynamics at a local minimum. For example, when \(\beta\) is not infinitely large, a local minimum could still move to other higher-energy states due to thermal fluctuations. Another example is when the local minimum is on a large and flat plateau; then the state can still perform a random walk on the plateau.

#### B.2.2 Definition based on local unitary perturbations

Inspired by variational quantum eigensolvers [36, 37], another natural definition for tangent spaces, exponential maps, and local minima considers pure states and local unitary perturbations. Consider \(m\) local Hermitian operators \(\mathbf{h}_{1},\dots,\mathbf{h}_{m}\) with \(\left\lVert\mathbf{h}_{a}\right\rVert_{\infty}=1\). Here, local means that each operator \(\mathbf{h}_{a}\) only acts on a constant number of qubits. We can consider all possible local unitary perturbations formed by performing time evolution under a Hamiltonian generated by the set \(\{\mathbf{h}_{a}\}_{a}\) of local Hermitian operators,

\[\sum_{a=1}^{m}\alpha_{a}\mathbf{h}_{a},\] (B.29)

for any \(\boldsymbol{\alpha}\in\mathbb{R}^{m}\). Since time evolution under a Hamiltonian is always reversible, there is no additional requirement that \(\boldsymbol{\alpha}\) be in the nonnegative orthant. Similar to thermal perturbations, we will absorb the contribution of the evolution time \(t\) into the arbitrary vector \(\boldsymbol{\alpha}\). Consider the following dictionary between all the relevant functions and variables for optimizing \(\left\langle\psi\right|\boldsymbol{H}\left|\psi\right\rangle\) over \(n\)-qubit pure states \(\left|\psi\right\rangle\) under local unitary perturbations and optimizing \(h(\boldsymbol{x})\) in an \(n\)-dimensional Euclidean space.
\[\mathcal{X}=\mathbb{R}^{n}\quad\leftrightarrow\quad\mathcal{X}=\{\left|\psi\right\rangle\in\mathbb{C}^{2^{n}}\,|\,\left\langle\psi|\psi\right\rangle=1\},\qquad\text{(domain)}\] (B.30)

\[\boldsymbol{x}\in\mathbb{R}^{n}\quad\leftrightarrow\quad\left|\psi\right\rangle\in\mathbb{C}^{2^{n}},\ \left\langle\psi|\psi\right\rangle=1,\qquad\text{(an element)}\] (B.31)

\[h(\boldsymbol{x})\quad\leftrightarrow\quad h(\left|\psi\right\rangle)=\left\langle\psi\right|\boldsymbol{H}\left|\psi\right\rangle,\qquad\text{(optimization function)}\] (B.32)

\[T_{\boldsymbol{x}}=\{\boldsymbol{\alpha}\in\mathbb{R}^{n}\}\quad\leftrightarrow\quad\{\boldsymbol{\alpha}\in\mathbb{R}^{m}\},\qquad\text{(tangent space)}\] (B.33)

\[\exp_{\boldsymbol{x}}(\boldsymbol{\alpha})=\boldsymbol{x}+\boldsymbol{\alpha}\quad\leftrightarrow\quad\exp\left(-\mathrm{i}\sum_{a=1}^{m}\alpha_{a}\boldsymbol{h}_{a}\right)\left|\psi\right\rangle\qquad\text{(exponential map)}.\] (B.34)

The tangent space and the exponential map can be formally defined as follows.

**Definition 4** (Tangent spaces of pure quantum states under local unitaries).: _Given an \(n\)-qubit pure quantum state \(\left|\psi\right\rangle\) and \(m\) local Hermitian operators \(\{\boldsymbol{h}_{a}\}_{a}\). The tangent space \(T_{\psi}\) is defined as_

\[T_{\psi}^{\{\boldsymbol{h}_{a}\}_{a}}:=\mathbb{R}^{m},\] (B.35)

_and the exponential map \(\exp_{\psi}\) is defined as_

\[\exp_{\psi}^{\{\boldsymbol{h}_{a}\}_{a}}(\boldsymbol{\alpha}):=\exp\left(-\mathrm{i}\sum_{a}\alpha_{a}\boldsymbol{h}_{a}\right)\left|\psi\right\rangle.\] (B.36)

When the set \(\{\boldsymbol{h}_{a}\}_{a}\) is the set of all two-qubit Pauli observables, the tangent space \(T_{\psi}\) and exponential map \(\exp_{\psi}\) define a Riemannian manifold that connects all \(n\)-qubit pure states through unitary evolutions. This Riemannian manifold is the state version of the manifold over quantum unitaries defined in a seminal work on the geometry of quantum computation [45]. The optimization function is \(h(\left|\psi\right\rangle)=\left\langle\psi\right|\boldsymbol{H}\left|\psi\right\rangle\), the average energy of the Hamiltonian \(\boldsymbol{H}\) for the pure state \(\left|\psi\right\rangle\). Performing gradient descent on this pure-state Riemannian manifold to minimize \(\left\langle\psi\right|\boldsymbol{H}\left|\psi\right\rangle\) is equivalent to performing adaptive variational quantum optimization [36] to minimize the Hamiltonian \(\boldsymbol{H}\). Local minima can be defined similarly as before. To be consistent with local minima under thermal perturbations, we consider the \(\ell_{1}\)-norm \(\left\|\boldsymbol{\alpha}\right\|_{1}\). All of our results remain qualitatively the same for other reasonable vector norms, such as the Euclidean norm or the \(\ell_{p}\) norm.

**Definition 5** (Local minima under local unitary perturbations).: _Given an \(n\)-qubit Hamiltonian \(\boldsymbol{H}\) and \(m\) local Hermitian operators \(\{\boldsymbol{h}_{a}\}_{a}\)._
_A pure state \(\left|\psi\right\rangle\) is an \(\epsilon\)-approximate local minimum of \(\boldsymbol{H}\) under local unitary perturbations generated by \(\{\boldsymbol{h}_{a}\}_{a}\) if there is a \(\delta>0\) such that_

\[\exp_{\psi}^{\{\boldsymbol{h}_{a}\}_{a}}(\boldsymbol{\alpha})^{\dagger}\boldsymbol{H}\exp_{\psi}^{\{\boldsymbol{h}_{a}\}_{a}}(\boldsymbol{\alpha})\geq\left\langle\psi\right|\boldsymbol{H}\left|\psi\right\rangle-\epsilon\left\|\boldsymbol{\alpha}\right\|_{1}\quad\text{for each}\quad\boldsymbol{\alpha}\in T_{\psi}^{\{\boldsymbol{h}_{a}\}_{a}},\left\|\boldsymbol{\alpha}\right\|_{1}\leq\delta.\] (B.37)

This is also a valid definition of local minima in quantum systems. However, we will later show that the optimization landscape defined in this way always has a very large barren plateau. Hence, the problem of finding a local minimum defined in this way is a trivial problem.

### The problem of finding a local minimum in quantum systems

With these definitions of local minima, we can define the task of finding a local minimum in a straightforward manner. To formulate the problem so that it has a purely classical output, we focus on outputting a simple property, such as the expectation value of a local observable \(\mathbf{O}\), of an approximate local minimum \(\mathbf{\rho}\). Furthermore, we only consider Hamiltonians \(\mathbf{H}\) that can be written as a sum of local observables, commonly referred to as local Hamiltonians in the literature. While there can be many approximate local minima, we consider an algorithm to be successful if it outputs the property of any one of the local minima.

**Definition 6** (Finding a local minimum under low-temperature thermal perturbations).: _Given error \(\epsilon>0\), inverse temperature \(\beta\geq 0\), time scale \(\tau\geq 0\), an \(n\)-qubit local Hamiltonian \(\mathbf{H}\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a}\), and a local observable \(\mathbf{O}\) with \(\left\|\mathbf{O}\right\|_{\infty}\leq 1\). Output a real value \(v\in[-1,1]\), such that \(v\) is \(\epsilon\)-close to \(\operatorname{tr}(\mathbf{O}\mathbf{\rho})\) for an \(\epsilon\)-approximate local minimum \(\mathbf{\rho}\) of \(\mathbf{H}\) under thermal perturbations with an inverse temperature \(\beta\), a time scale \(\tau\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\)._

**Definition 7** (Finding a local minimum under local unitary perturbations).: _Given error \(\epsilon>0\), an \(n\)-qubit local Hamiltonian \(\mathbf{H}\), \(m\) local Hermitian operators \(\{\mathbf{h}_{a}\}_{a}\), and a local observable \(\mathbf{O}\) with \(\left\|\mathbf{O}\right\|_{\infty}\leq 1\). Output a real value \(v\in[-1,1]\), such that \(v\) is \(\epsilon\)-close to \(\left\langle\psi\right|\mathbf{O}\left|\psi\right\rangle\) for an \(\epsilon\)-approximate local minimum \(\left|\psi\right\rangle\) of the Hamiltonian \(\mathbf{H}\) under local unitary perturbations generated by \(\{\mathbf{h}_{a}\}_{a}\)._

Ideally, we would like both problems to be quantumly easy and classically hard. However, we will show that only the first problem, based on thermal perturbations, is both quantumly easy and classically hard. The second problem, based on local unitary perturbations, is classically trivial due to the presence of too many local minima in an exponentially large barren plateau.
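For intuition on Definitions 4, 5, and 7, the following sketch (our own toy example: a three-qubit Ising-type Hamiltonian, single-qubit Pauli generators, and a Haar-random state, all placeholder choices) implements the exponential map of Eq. (B.36) and evaluates the energy before and after a small perturbation. At such a small \(n\) the energy gradient need not be small; Lemma C.1 below shows it becomes exponentially small at large \(n\).

```python
import numpy as np
from scipy.linalg import expm
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

n = 3
# local Hermitian generators h_a: single-qubit X and Z on each site
hs = [kron_all([X if j == i else I2 for j in range(n)]) for i in range(n)]
hs += [kron_all([Z if j == i else I2 for j in range(n)]) for i in range(n)]

H = sum(kron_all([Z if j in (i, i + 1) else I2 for j in range(n)])
        for i in range(n - 1))            # toy Ising-type Hamiltonian

rng = np.random.default_rng(2)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)                # Haar-random pure state

def energy_after(alpha):
    """<psi'|H|psi'> with |psi'> = exp(-i sum_a alpha_a h_a)|psi>, Eq. (B.36)."""
    G = sum(a * h for a, h in zip(alpha, hs))
    phi = expm(-1j * G) @ psi
    return (phi.conj() @ H @ phi).real

e0 = (psi.conj() @ H @ psi).real
alpha = 1e-2 * rng.random(len(hs))        # small perturbation, ||alpha||_1 small
print(energy_after(alpha) - e0)           # O(||alpha||_1) energy change
```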
### The importance of irreversible perturbations

Suppose that the perturbations \(\mathcal{P}_{\mathbf{\alpha}}\), parameterized by a polynomial-size vector \(\mathbf{\alpha}\), are reversible, \(\mathcal{P}_{-\mathbf{\alpha}}=\mathcal{P}_{\mathbf{\alpha}}^{-1}\), and smooth. The following argument shows that the energy landscape must then have doubly-exponentially many approximate local minima. Given any \(n\)-qubit state \(\mathbf{\rho}\) and any \(n\)-qubit Hamiltonian \(\mathbf{H}\) with \(\left\|\mathbf{H}\right\|_{\infty}=\operatorname{poly}(n)\). Consider a gradient descent algorithm that starts at \(\mathbf{\rho}\). Because \(\left\|\mathbf{H}\right\|_{\infty}=\operatorname{poly}(n)\), after a polynomial number of steps \(T\), the gradient descent algorithm can find an approximate local minimum \(\mathbf{\rho}^{\#}\) of \(\mathbf{H}\),

\[\mathbf{\rho}^{\#}=\mathcal{P}_{\mathbf{\alpha}_{T}}\dots\mathcal{P}_{\mathbf{\alpha}_{1}}(\mathbf{\rho}).\] (B.38)

From the reversibility of the perturbations, we have

\[\mathbf{\rho}=\mathcal{P}_{-\mathbf{\alpha}_{1}}\dots\mathcal{P}_{-\mathbf{\alpha}_{T}}(\mathbf{\rho}^{\#}).\] (B.39)

Consider a covering net \(\mathcal{N}\) for the set of approximate local minima of \(\mathbf{H}\). The packing net for all \(n\)-qubit states is of size

\[\exp(\exp(\Omega(n))),\] (B.40)

while the covering net for the perturbations is of size

\[\exp(\operatorname{poly}(n)).\] (B.41)

Because every \(n\)-qubit state can be reached from some local minimum in \(\mathcal{N}\) by a sequence of \(T\) reversed perturbations, we have the following relationship,

\[\exp(T\cdot\operatorname{poly}(n))\cdot|\mathcal{N}|\geq\exp(\exp(\Omega(n))).\] (B.42)

Hence, we can see that

\[|\mathcal{N}|\geq\exp(\exp(\Omega(n))-T\cdot\operatorname{poly}(n))=\exp(\exp(\Omega(n)))\] (B.43)

since \(\exp(\Omega(n))\) grows much faster than \(\operatorname{poly}(n)\).

## Appendix C Characterizing local minima under local unitary perturbations

Now that we have defined local minima in quantum systems, we present a set of results characterizing properties of local minima in quantum systems in this and the next appendix. These results provide a further understanding of local minima in quantum systems and are essential to establishing the main theorems given in Appendix E. We begin by looking at the energy landscape defined by local unitary perturbations on pure quantum states and prove a central lemma portraying this landscape. The lemma states that most pure quantum states \(\ket{\psi}\) are local minima under local unitary perturbations with an expectation value close to \(\operatorname{tr}(\mathbf{O})/2^{n}=\operatorname{tr}(\mathbf{O}(\mathbf{I}/2^{n}))\) for any local observable \(\mathbf{O}\). Furthermore, the proof shows that the gradient at a randomly sampled local minimum \(\ket{\psi}\) is exponentially close to zero.

Lemma C.1 and its proof provide the following physical picture. In the energy landscape defined by local unitary perturbations, there is an overwhelmingly large barren plateau consisting of local minima with almost the same energy as their neighbors. Furthermore, these local minima behave like the maximally mixed state \(\mathbf{I}/2^{n}\), which makes the task of predicting properties of a local minimum under local unitary perturbations classically trivial. Given \(m\) local Hermitian operators \(\{\mathbf{h}_{a}\}_{a}\) and \(\mathbf{\alpha}\in\mathbb{R}^{m}\). By applying Taylor's theorem in Prop.
D.1 to the one-dimensional function

\[g(t)=\exp_{\psi}^{\{\mathbf{h}_{a}\}_{a}}(t\hat{\mathbf{\alpha}})^{\dagger}\mathbf{H}\exp_{\psi}^{\{\mathbf{h}_{a}\}_{a}}(t\hat{\mathbf{\alpha}})\] (C.1)

for \(0\leq t\leq\left\lVert\mathbf{\alpha}\right\rVert_{1}\) and \(\hat{\mathbf{\alpha}}=\mathbf{\alpha}/\left\lVert\mathbf{\alpha}\right\rVert_{1}\), we can obtain the following proposition.

**Proposition C.1** (Taylor's theorem for local unitary perturbations).: _Given an \(n\)-qubit Hamiltonian \(\mathbf{H}\), \(\mathbf{\alpha}\in\mathbb{R}^{m}\), \(m\) local Hermitian operators \(\{\mathbf{h}_{a}\}_{a}\), and an \(n\)-qubit pure state \(\ket{\psi}\). We have_

\[\exp_{\psi}(\mathbf{\alpha})^{\dagger}\mathbf{H}\exp_{\psi}(\mathbf{\alpha})=\bra{\psi}\mathbf{H}\ket{\psi}-\mathrm{i}\bra{\psi}\left[\mathbf{H},\sum_{a=1}^{m}\alpha_{a}\mathbf{h}_{a}\right]\ket{\psi}-\frac{1}{2}\sum_{a=1}^{m}\sum_{a^{\prime}=1}^{m}\alpha_{a}\alpha_{a^{\prime}}\exp_{\psi}(\eta\hat{\mathbf{\alpha}})^{\dagger}[[\mathbf{H},\mathbf{h}_{a}],\mathbf{h}_{a^{\prime}}]\exp_{\psi}(\eta\hat{\mathbf{\alpha}}),\] (C.2)

_for some \(0\leq\eta\leq\left\lVert\mathbf{\alpha}\right\rVert_{1}\)._

**Lemma C.1** (A random state is a local minimum under local unitary perturbations; Restatement of Lemma 2.1).: _Consider a large problem size \(n\). Given error \(\epsilon\geq 1/2^{n/4}\), an \(n\)-qubit local Hamiltonian \(\mathbf{H}\) with \(\left\lVert\mathbf{H}\right\rVert_{\infty}=\operatorname{poly}(n)\), \(m\) local Hermitian operators \(\{\mathbf{h}_{a}\}_{a}\) with \(m=\operatorname{poly}(n)\) and \(\left\lVert\mathbf{h}_{a}\right\rVert_{\infty}=1\), and a local observable \(\mathbf{O}\) with \(\left\lVert\mathbf{O}\right\rVert_{\infty}\leq 1\). With probability at least \(1-1/2^{2^{n/4}}\), an \(n\)-qubit state \(\ket{\psi}\) sampled uniformly at random is an \(\epsilon\)-approximate local minimum of \(\mathbf{H}\) under local unitary perturbations generated by \(\{\mathbf{h}_{a}\}_{a}\) and \(\bra{\psi}\mathbf{O}\ket{\psi}\) is \(\epsilon\)-close to \(\operatorname{tr}(\mathbf{O})/2^{n}\)._

Proof.: From Lemma III.5 in [71], for any Pauli operator \(\mathbf{Q}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}\setminus\{\mathbf{I}^{\otimes n}\}\) and a random \(n\)-qubit pure state \(\ket{\psi}\) sampled uniformly, we have

\[\Pr_{\ket{\psi}}[\left\lvert\bra{\psi}\mathbf{Q}\ket{\psi}\right\rvert>\delta]\leq 2\exp\left(-\frac{2^{n}\delta^{2}}{10}\right),\] (C.3)

for any \(0\leq\delta\leq 1\). Let \(\delta=1/2^{n/3}\).
Then, we have

\[\Pr_{\ket{\psi}}\left[\left\lvert\bra{\psi}\mathbf{Q}\ket{\psi}\right\rvert>\frac{1}{2^{n/3}}\right]\leq 2\exp\left(-\frac{2^{n/3}}{10}\right).\] (C.4)

Recall that any Hermitian operator has a unique Pauli decomposition:

\[\mathbf{H}=\sum_{\mathbf{P}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}}\alpha_{\mathbf{P}}(\mathbf{H})\mathbf{P},\] (C.5)

\[\mathbf{O}=\sum_{\mathbf{P}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}}\alpha_{\mathbf{P}}(\mathbf{O})\mathbf{P},\] (C.6)

\[\mathbf{h}_{a}=\sum_{\mathbf{P}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}}\alpha_{\mathbf{P}}(\mathbf{h}_{a})\mathbf{P},\] (C.7)

where the Pauli coefficients \(\alpha_{\mathbf{P}}(\cdot)\) satisfy

\[\sum_{\mathbf{P}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}}\alpha_{\mathbf{P}}^{2}(\mathbf{H})\leq\left\|\mathbf{H}\right\|_{\infty}^{2}=\text{poly}(n),\] (C.8)

\[\sum_{\mathbf{P}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}}\alpha_{\mathbf{P}}^{2}(\mathbf{O})\leq\left\|\mathbf{O}\right\|_{\infty}^{2}=1,\] (C.9)

\[\sum_{\mathbf{P}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}}\alpha_{\mathbf{P}}^{2}(\mathbf{h}_{a})\leq\left\|\mathbf{h}_{a}\right\|_{\infty}^{2}=1.\] (C.10)

Let \(S_{0}\) be the set of Pauli operators \(\mathbf{P}\) with non-zero Pauli coefficients \(\alpha_{\mathbf{P}}\) in the Pauli decompositions of either \(\mathbf{H}\) or \(\mathbf{O}\),

\[S_{0}=\left\{\mathbf{P}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}\setminus\{\mathbf{I}^{\otimes n}\}\ |\ \alpha_{\mathbf{P}}(\mathbf{H})\neq 0\ \text{or}\ \alpha_{\mathbf{P}}(\mathbf{O})\neq 0\right\},\] (C.11)

and \(S_{E}\) be the set of Pauli operators \(\mathbf{P}\) with non-zero Pauli coefficients \(\alpha_{\mathbf{P}}\) in the Pauli decompositions of \(\mathbf{h}_{a}\) for some \(a\),

\[S_{E}=\left\{\mathbf{P}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}\setminus\{\mathbf{I}^{\otimes n}\}\ |\ \exists 1\leq a\leq m,\alpha_{\mathbf{P}}(\mathbf{h}_{a})\neq 0\right\}.\] (C.12)

Because \(\mathbf{H}\) is a local Hamiltonian and \(\mathbf{O}\) is a local observable, we have \(|S_{0}|=\text{poly}(n)\). Because each \(\mathbf{h}_{a}\) is a local observable, we have \(|S_{E}|=\mathcal{O}(m)=\text{poly}(n)\). We then define

\[S=\left\{\mathbf{P}^{\prime}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}\setminus\{\mathbf{I}^{\otimes n}\}\ |\ \exists\mathbf{Q}\in S_{0},\mathbf{P}\in S_{E},\text{tr}(\mathbf{P}^{\prime}[\mathbf{Q},\mathbf{P}])\neq 0\right\}\cup S_{0}.\] (C.13)

Because \(|S_{0}|,|S_{E}|=\text{poly}(n)\) and \([\mathbf{Q},\mathbf{P}]\) is another Pauli observable up to a phase, we have \(|S|\leq|S_{E}||S_{0}|+|S_{0}|=\text{poly}(n)\). The union bound yields the following probabilistic statement,

\[1-\Pr_{|\psi\rangle}\left[|\langle\psi|\,\mathbf{Q}\,|\psi\rangle|<\frac{1}{2^{n/3}},\ \forall\mathbf{Q}\in S\right]\leq 2|S|\exp\left(-\frac{2^{n/3}}{10}\right)\leq\frac{\text{poly}(n)}{2^{2^{n/3}/10}}<\frac{1}{2^{2^{n/4}}},\] (C.14)

where the last inequality holds for any large \(n\) since \(2^{2^{n/3}/10-2^{n/4}}\) grows much faster than any polynomial of \(n\). We condition on the event for the random state \(|\psi\rangle\) that

\[|\langle\psi|\,\mathbf{Q}\,|\psi\rangle|<\frac{1}{2^{n/3}}\quad\text{for all}\quad\mathbf{Q}\in S,\] (C.15)

referred to as event \(E^{*}\).
We can obtain the following from the Cauchy-Schwarz inequality,

\[|\langle\psi|\,[\mathbf{H},\mathbf{h}_{a}]\,|\psi\rangle|\leq\sum_{\mathbf{Q},\mathbf{P}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}}|\alpha_{\mathbf{Q}}(\mathbf{H})|\,|\alpha_{\mathbf{P}}(\mathbf{h}_{a})|\,|\langle\psi|\,[\mathbf{Q},\mathbf{P}]\,|\psi\rangle|\leq\sqrt{\sum_{\mathbf{Q},\mathbf{P}\in\{\mathbf{I},\mathbf{X},\mathbf{Y},\mathbf{Z}\}^{\otimes n}}\alpha_{\mathbf{Q}}^{2}(\mathbf{H})\alpha_{\mathbf{P}}^{2}(\mathbf{h}_{a})}\sqrt{\frac{|S|}{2^{2n/3}}}\leq\frac{\mathrm{poly}(n)}{2^{n/3}},\] (C.16)

where the second inequality uses the conditioning on event \(E^{*}\) together with \([\mathbf{Q},\mathbf{P}]\neq 0\implies\mathbf{Q},\mathbf{P}\neq\mathbf{I}^{\otimes n}\), and the third inequality uses \(|S|=\mathrm{poly}(n)\) and Eqs. (C.8) and (C.10). Similarly, we also have

\[\left|\left\langle\psi\right|\left[[\mathbf{H},\mathbf{h}_{a}],\mathbf{h}_{a^{\prime}}\right]\left|\psi\right\rangle\right|\leq\frac{\mathrm{poly}(n)}{2^{n/3}}.\] (C.17)

Using Eq. (C.9) instead of Eq. (C.8), we can similarly obtain

\[\left|\left\langle\psi\right|\mathbf{O}\left|\psi\right\rangle-\alpha_{\mathbf{I}^{\otimes n}}(\mathbf{O})\right|=\left|\left\langle\psi\right|\mathbf{O}\left|\psi\right\rangle-\frac{\mathrm{tr}(\mathbf{O})}{2^{n}}\right|\leq\frac{\mathrm{poly}(n)}{2^{n/3}}<\frac{1}{2^{n/4}}\leq\epsilon\] (C.18)

for any large problem size \(n\), since \(2^{n/3}\) grows much faster than any polynomial in \(n\). We now show that \(\left|\psi\right\rangle\) is an \(\epsilon\)-approximate local minimum of \(\mathbf{H}\) under local unitary perturbations. To establish this claim, from Def. 5, we need to prove that

\[\exp_{\psi}(\mathbf{\alpha})^{\dagger}\mathbf{H}\exp_{\psi}(\mathbf{\alpha})\geq\left\langle\psi\right|\mathbf{H}\left|\psi\right\rangle-\epsilon\left\|\mathbf{\alpha}\right\|_{1},\text{ for each }\quad\mathbf{\alpha}\in T_{\psi},\left\|\mathbf{\alpha}\right\|_{1}\leq\delta\] (C.19)

for some \(\delta>0\). Recall from Prop. C.1, which is based on Taylor's theorem (Prop. D.1), that

\[\exp_{\psi}(\mathbf{\alpha})^{\dagger}\mathbf{H}\exp_{\psi}(\mathbf{\alpha})=\left\langle\psi\right|\mathbf{H}\left|\psi\right\rangle-\mathrm{i}\left\langle\psi\right|\left[\mathbf{H},\sum_{a=1}^{m}\alpha_{a}\mathbf{h}_{a}\right]\left|\psi\right\rangle-\frac{1}{2}\sum_{a=1}^{m}\sum_{a^{\prime}=1}^{m}\alpha_{a}\alpha_{a^{\prime}}\exp_{\psi}(\eta\hat{\mathbf{\alpha}})^{\dagger}[[\mathbf{H},\mathbf{h}_{a}],\mathbf{h}_{a^{\prime}}]\exp_{\psi}(\eta\hat{\mathbf{\alpha}}),\] (C.20)

for some \(0\leq\eta\leq\left\|\mathbf{\alpha}\right\|_{1}\). For the linear term, from Eq. (C.16) bounding \(\left|\left\langle\psi\right|[\mathbf{H},\mathbf{h}_{a}]\left|\psi\right\rangle\right|\), we have

\[\left|-\mathrm{i}\left\langle\psi\right|\left[\mathbf{H},\sum_{a=1}^{m}\alpha_{a}\mathbf{h}_{a}\right]\left|\psi\right\rangle\right|\leq\sum_{a=1}^{m}\left|\alpha_{a}\right|\frac{\mathrm{poly}(n)}{2^{n/3}}\leq\frac{\mathrm{poly}(n)}{2^{n/3}}\left\|\mathbf{\alpha}\right\|_{1},\] (C.21)

where the last inequality uses \(m=\mathrm{poly}(n)\).
For the quadratic residual term, we have

\[\left|\frac{1}{2}\sum_{a=1}^{m}\sum_{a^{\prime}=1}^{m}\alpha_{a}\alpha_{a^{\prime}}\exp_{\psi}(\eta\hat{\mathbf{\alpha}})^{\dagger}[[\mathbf{H},\mathbf{h}_{a}],\mathbf{h}_{a^{\prime}}]\exp_{\psi}(\eta\hat{\mathbf{\alpha}})\right|\leq 2\left\|\mathbf{\alpha}\right\|_{1}^{2}\left\|\mathbf{H}\right\|_{\infty}=\mathrm{poly}(n)\left\|\mathbf{\alpha}\right\|_{1}^{2}.\] (C.22)

Together, we can combine the inequalities to get

\[\exp_{\psi}(\mathbf{\alpha})^{\dagger}\mathbf{H}\exp_{\psi}(\mathbf{\alpha})\geq\left\langle\psi\right|\mathbf{H}\left|\psi\right\rangle-\frac{\mathrm{poly}(n)}{2^{n/3}}\left\|\mathbf{\alpha}\right\|_{1}-\mathrm{poly}(n)\left\|\mathbf{\alpha}\right\|_{1}^{2}\geq\left\langle\psi\right|\mathbf{H}\left|\psi\right\rangle-0.5\epsilon\left\|\mathbf{\alpha}\right\|_{1}-\mathrm{poly}(n)\left\|\mathbf{\alpha}\right\|_{1}^{2},\] (C.23)

where the second inequality holds for any large problem size \(n\) since \(\epsilon\geq 1/2^{n/4}\) decays much more slowly than \(\mathrm{poly}(n)/2^{n/3}\). For any \(\left\|\mathbf{\alpha}\right\|_{1}<\delta:=0.5\epsilon/\mathrm{poly}(n)\), we have

\[\exp_{\psi}(\mathbf{\alpha})^{\dagger}\mathbf{H}\exp_{\psi}(\mathbf{\alpha})\geq\left\langle\psi\right|\mathbf{H}\left|\psi\right\rangle-0.5\epsilon\left\|\mathbf{\alpha}\right\|_{1}-0.5\epsilon\left\|\mathbf{\alpha}\right\|_{1}=\left\langle\psi\right|\mathbf{H}\left|\psi\right\rangle-\epsilon\left\|\mathbf{\alpha}\right\|_{1},\] (C.24)

which shows that \(\left|\psi\right\rangle\) is an \(\epsilon\)-approximate local minimum of \(\mathbf{H}\) under local unitary perturbations. Finally, because the event \(E^{*}\) occurs with probability at least \(1-1/2^{2^{n/4}}\), by combining Eq. (C.18) and the above, we establish the claim that, with high probability, a random \(n\)-qubit state \(\left|\psi\right\rangle\) sampled uniformly is an \(\epsilon\)-approximate local minimum of \(\mathbf{H}\) under local unitary perturbations and \(\left\langle\psi\right|\mathbf{O}\left|\psi\right\rangle\) is \(\epsilon\)-close to \(\mathrm{tr}(\mathbf{O})/2^{n}\).

## Appendix D Characterizing local minima under thermal perturbations

In this appendix, we characterize local minima under thermal perturbations. In particular, we will focus on the gradients of the energy landscape, conditions for local minima, and conditions on the Hamiltonian \(\mathbf{H}\) that ensure approximate local minima are approximate global minima, i.e., there are no suboptimal local minima in the energy landscape.

### Energy gradients

The energy landscape is much richer when defined under thermal perturbations. We can study the energy landscape by looking at the energy gradients. Recall the exponential map \(\exp_{\mathbf{\rho}}^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a}}\) in Eq. (B.25) and consider the one-dimensional function

\[g(t)=\operatorname{tr}\left(\mathbf{H}\exp_{\mathbf{\rho}}^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a}}(t\hat{\mathbf{\alpha}})\right)=\operatorname{tr}\left(\mathbf{H}\exp\left(\sum_{a=1}^{m}t\hat{\alpha}_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right)(\mathbf{\rho})\right)\] (D.1)

for \(0\leq t\leq\left\|\mathbf{\alpha}\right\|_{1}\) and \(\hat{\mathbf{\alpha}}=\mathbf{\alpha}/\left\|\mathbf{\alpha}\right\|_{1}\).
We have the following derivatives,

\[\frac{dg}{dt}(t)=\operatorname{tr}\left(\mathbf{H}\sum_{a}\hat{\alpha}_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\left[\exp\left(\sum_{a=1}^{m}t\hat{\alpha}_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right)(\mathbf{\rho})\right]\right),\] (D.2)

\[\frac{d^{2}g}{dt^{2}}(t)=\operatorname{tr}\left(\mathbf{H}\sum_{a}\sum_{a^{\prime}}\hat{\alpha}_{a}\hat{\alpha}_{a^{\prime}}\mathcal{L}_{a^{\prime}}^{\beta,\tau,\mathbf{H}}\left[\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\left[\exp\left(\sum_{a=1}^{m}t\hat{\alpha}_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right)(\mathbf{\rho})\right]\right]\right).\] (D.3)

Recall Taylor's theorem with the Lagrange form of the remainder from standard single-variable calculus. By applying Taylor's theorem in Prop. D.1 to \(g(t)\), we can obtain Prop. D.2.

**Proposition D.1** (Taylor's theorem).: _Let \(g:\mathbb{R}\to\mathbb{R}\) be twice differentiable on the open interval between \(0\) and \(t\), with \(g^{\prime}\) continuous on the closed interval between \(0\) and \(t\). Then_

\[g(t)=g(0)+g^{\prime}(0)t+\frac{1}{2}g^{\prime\prime}(\eta)t^{2},\] (D.4)

_for some real number \(\eta\) between \(0\) and \(t\)._

**Proposition D.2** (Taylor's theorem for thermal perturbations).: _Given an \(n\)-qubit Hamiltonian \(\mathbf{H}\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a}\), parameters \(\beta,\tau\geq 0\), \(\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m}\), and an \(n\)-qubit state \(\mathbf{\rho}\in\mathcal{S}_{2^{n}}\). Then_

\[\operatorname{tr}\left(\mathbf{H}\exp_{\mathbf{\rho}}^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a}}(\mathbf{\alpha})\right)=\operatorname{tr}(\mathbf{H}\mathbf{\rho})+\sum_{a}\alpha_{a}\operatorname{tr}(\mathbf{H}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}[\mathbf{\rho}])+\frac{1}{2}\sum_{a}\sum_{a^{\prime}}\alpha_{a}\alpha_{a^{\prime}}\operatorname{tr}\left(\mathbf{H}\mathcal{L}_{a^{\prime}}^{\beta,\tau,\mathbf{H}}[\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}[\exp_{\mathbf{\rho}}^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a}}(\eta\hat{\mathbf{\alpha}})]]\right)\] (D.5)

_for some \(0\leq\eta\leq\left\|\mathbf{\alpha}\right\|_{1}\)._

We define the energy gradients as follows. We separately consider a positive and a negative energy gradient. The motivation for the definition is that the positive (negative) energy gradient should determine the direction of the thermodynamics that causes the energy of the state to increase (decrease). Because our goal is to understand local minima, we will focus on the negative energy gradient. When one studies local maxima, one focuses on the positive energy gradient.
**Definition 8** (Energy gradients of a state under thermal perturbations).: _Given an \(n\)-qubit Hamiltonian \(\mathbf{H}\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a}\), and parameters \(\beta,\tau\geq 0\), the gradients of an \(n\)-qubit state \(\mathbf{\rho}\in\mathcal{S}_{2^{n}}\) under thermal perturbations with inverse temperature \(\beta\), time scale \(\tau\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\) are defined as_

\[\mathbf{\nabla}^{+}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho}):=\sum_{a=1}^{m}\max\left(+\operatorname{tr}\left(\mathbf{H}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}[\mathbf{\rho}]\right),0\right)\hat{\mathbf{e}}_{a},\qquad\text{(positive energy gradient)}\] (D.6)

\[\mathbf{\nabla}^{-}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho}):=\sum_{a=1}^{m}\max\left(-\operatorname{tr}\left(\mathbf{H}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}[\mathbf{\rho}]\right),0\right)\hat{\mathbf{e}}_{a},\qquad\text{(negative energy gradient)}\] (D.7)

\[\mathbf{\nabla}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho}):=\sum_{a=1}^{m}\operatorname{tr}\left(\mathbf{H}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}[\mathbf{\rho}]\right)\hat{\mathbf{e}}_{a},\qquad\text{(energy gradient)}\] (D.8)

_where \(\hat{\mathbf{e}}_{a}\) is the unit vector along the \(a\)-th coordinate._

Since the set of jump operators \(\{\mathbf{A}^{a}\}_{a}\) will be fixed, we will sometimes drop the dependence on \(\{\mathbf{A}^{a}\}_{a}\) for notational simplicity. The positive/negative energy gradient belongs to the tangent space \(\mathbb{R}_{\geq 0}^{m}\), but the energy gradient

\[\mathbf{\nabla}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho})=\mathbf{\nabla}^{+}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho})-\mathbf{\nabla}^{-}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho})\] (D.9)

may not lie in the tangent space due to negative entries. So, in general, one cannot move in the direction of the energy gradient. However, one can move in the direction of the positive or negative energy gradient. It is instructive to think in the Heisenberg picture and define the _energy gradient operator_.

**Definition 9** (Energy gradient operator).: _Given an \(n\)-qubit Hamiltonian \(\mathbf{H}\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a}\), inverse temperature \(\beta\geq 0\), and time scale \(\tau\geq 0\), the energy gradient operator under thermal perturbations is_

\[\sum_{a=1}^{m}\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}[\mathbf{H}]\ \hat{\mathbf{e}}_{a},\] (D.10)

_which is a vector of \(n\)-qubit Hermitian operators._

We can provide an upper and lower bound on the energy gradients by combining Prop. F.2 and Prop. F.3 to obtain the following proposition.

**Proposition D.3** (Bound on the energy gradients).: _Given an \(n\)-qubit Hamiltonian \(\mathbf{H}\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a}\), inverse temperature \(\beta\geq 0\), and time scale \(\tau\geq 0\), we have_

\[\left\|\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\right\|_{\infty}\leq 3\left\|\mathbf{H}\right\|_{\infty}\] (D.11)

_for all \(a=1,\dots,m\)._

The \(\beta,\tau\to\infty\) limit (a zero-temperature heat bath with an infinite time scale) recovers the Davies' generator \(\mathcal{L}_{a}^{\infty,\infty,\mathbf{H}}\).
The Davies' generator takes an energy eigenvector \(\ket{\psi_{j}}\!\!\bra{\psi_{j}}\) of \(\mathbf{H}\) to energy eigenvectors with equal or lower energy, i.e., for any \(t\geq 0\),

\[\bra{\psi_{k}}\exp\left(t\mathcal{L}_{a}^{\infty,\infty,\mathbf{H}}\right)\left(\ket{\psi_{j}}\!\!\bra{\psi_{j}}\right)\ket{\psi_{k}}=0\quad\text{for any}\quad j,k\quad\text{such that}\quad E_{k}>E_{j}.\] (D.12)

We can use the above to obtain the following proposition.

**Proposition D.4** (Vanishing positive energy gradient).: _For \(\beta=\tau=\infty\), we have_

\[\mathcal{L}_{a}^{\dagger\infty,\infty,\boldsymbol{H}}[\boldsymbol{H}]\preceq 0,\quad\text{for each}\quad a.\] (D.13)

_Hence, the positive energy gradient vanishes, \(\boldsymbol{\nabla}_{\infty,\infty}^{+}(\boldsymbol{H},\boldsymbol{\rho})=0\), and \(\boldsymbol{\nabla}_{\infty,\infty}(\boldsymbol{H},\boldsymbol{\rho})=-\boldsymbol{\nabla}_{\infty,\infty}^{-}(\boldsymbol{H},\boldsymbol{\rho})\) for all Hamiltonians \(\boldsymbol{H}\) and states \(\boldsymbol{\rho}\)._

This proposition reflects that thermal perturbations induced by a zero-temperature heat bath with an infinite time scale should only extract energy from the quantum system and never cause the energy to increase. Hence, the positive energy gradient must vanish.

### A sufficient condition and a necessary condition for local minima

Using the negative gradient, we can show a sufficient condition and a necessary condition for local minima under thermal perturbations. They differ only slightly (\(<\) vs \(\leq\)). From these conditions, we can see that local minima are well characterized by the negative energy gradient. Recall that \(\left\|\boldsymbol{x}\right\|_{\infty}=\max_{i}\left|x_{i}\right|\) is the \(\ell_{\infty}\) norm of a finite-dimensional vector \(\boldsymbol{x}\).

**Lemma D.1** (A sufficient condition for local minima under thermal perturbations).: _Given \(\epsilon>0\), an \(n\)-qubit Hamiltonian \(\boldsymbol{H}\), \(m\) local jump operators \(\{\boldsymbol{A}^{a}\}_{a}\), and parameters \(\beta,\tau\geq 0\), an \(n\)-qubit state \(\boldsymbol{\rho}\) with a small negative energy gradient,_

\[\left\|\boldsymbol{\nabla}_{\beta,\tau,\{\boldsymbol{A}^{a}\}_{a}}^{-}(\boldsymbol{H},\boldsymbol{\rho})\right\|_{\infty}<\epsilon,\] (D.14)

_is an \(\epsilon\)-approximate local minimum of the \(n\)-qubit Hamiltonian \(\boldsymbol{H}\) under thermal perturbations with inverse temperature \(\beta\), time scale \(\tau\), and system-bath interactions generated by \(\{\boldsymbol{A}^{a}\}_{a}\)._

Proof.: Consider \(C_{L}=\max_{a}\left\|\mathcal{L}_{a}^{\beta,\tau,\boldsymbol{H}}\right\|_{1-1}>0\) and \(C_{H}=\left\|\boldsymbol{H}\right\|_{\infty}\). Given \(\boldsymbol{\alpha}\in\mathbb{R}_{\geq 0}^{m}\), we have

\[\left|\sum_{a}\sum_{a^{\prime}}\alpha_{a}\alpha_{a^{\prime}}\operatorname{tr}(\boldsymbol{H}\mathcal{L}_{a^{\prime}}^{\beta,\tau,\boldsymbol{H}}[\mathcal{L}_{a}^{\beta,\tau,\boldsymbol{H}}[\boldsymbol{\sigma}]])\right|\leq C_{L}^{2}C_{H}\left\|\boldsymbol{\alpha}\right\|_{1}^{2},\] (D.15)

for any state \(\boldsymbol{\sigma}\). Let \(\epsilon_{0}:=\left\|\boldsymbol{\nabla}_{\beta,\tau,\{\boldsymbol{A}^{a}\}_{a}}^{-}(\boldsymbol{H},\boldsymbol{\rho})\right\|_{\infty}<\epsilon\).
From \(\alpha_{a}\geq 0\) and Hölder's inequality,

\[\sum_{a}\alpha_{a}\operatorname{tr}(\boldsymbol{H}\mathcal{L}_{a}^{\beta,\tau,\boldsymbol{H}}[\boldsymbol{\rho}])\geq-\sum_{a}\left|\alpha_{a}\right|\max\left(-\operatorname{tr}\left(\boldsymbol{H}\mathcal{L}_{a}^{\beta,\tau,\boldsymbol{H}}[\boldsymbol{\rho}]\right),0\right)\geq-\left\|\boldsymbol{\alpha}\right\|_{1}\epsilon_{0}.\] (D.16)

Together, Taylor's theorem for thermal perturbations (Prop. D.2) implies

\[\operatorname{tr}\left(\boldsymbol{H}\exp_{\boldsymbol{\rho}}^{\beta,\tau,\boldsymbol{H},\{\boldsymbol{A}^{a}\}_{a}}(\boldsymbol{\alpha})\right)\geq\operatorname{tr}(\boldsymbol{H}\boldsymbol{\rho})-\left\|\boldsymbol{\alpha}\right\|_{1}\epsilon_{0}-\frac{\left\|\boldsymbol{\alpha}\right\|_{1}^{2}}{2}C_{L}^{2}C_{H},\] (D.17)

for any \(\boldsymbol{\alpha}\in\mathbb{R}_{\geq 0}^{m}\). From the above, we see that for any \(\left\|\boldsymbol{\alpha}\right\|_{1}<\delta:=\frac{2(\epsilon-\epsilon_{0})}{C_{L}^{2}C_{H}}\),

\[\operatorname{tr}\left(\boldsymbol{H}\exp_{\boldsymbol{\rho}}^{\beta,\tau,\boldsymbol{H},\{\boldsymbol{A}^{a}\}_{a}}(\boldsymbol{\alpha})\right)\geq\operatorname{tr}(\boldsymbol{H}\boldsymbol{\rho})-\epsilon\left\|\boldsymbol{\alpha}\right\|_{1}+\left\|\boldsymbol{\alpha}\right\|_{1}\left((\epsilon-\epsilon_{0})-\frac{C_{L}^{2}C_{H}}{2}\left\|\boldsymbol{\alpha}\right\|_{1}\right)\geq\operatorname{tr}(\boldsymbol{H}\boldsymbol{\rho})-\epsilon\left\|\boldsymbol{\alpha}\right\|_{1}.\] (D.18)

So, \(\boldsymbol{\rho}\) is an \(\epsilon\)-approximate local minimum of \(\boldsymbol{H}\) under thermal perturbations with inverse temperature \(\beta\), time scale \(\tau\), and system-bath interactions generated by \(\{\boldsymbol{A}^{a}\}_{a}\).

**Lemma D.2** (A necessary condition for local minima under thermal perturbations).: _Given \(\epsilon>0\), an \(n\)-qubit Hamiltonian \(\mathbf{H}\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a}\), and parameters \(\beta,\tau\geq 0\), an \(\epsilon\)-approximate local minimum \(\mathbf{\rho}\) of \(\mathbf{H}\) under thermal perturbations with inverse temperature \(\beta\), time scale \(\tau\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\) satisfies_

\[\left\|\mathbf{\nabla}^{-}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho})\right\|_{\infty}\leq\epsilon,\] (D.19)

_which differs only slightly from the condition in Eq. (D.14)._

Proof.: Recall that \(\mathbf{\nabla}^{-}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho})\in\mathbb{R}^{m}_{\geq 0}\). Let \(a^{*}=\arg\max_{a}\left(\nabla^{-}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho})_{a}\right)\). If the negative energy gradient vector is zero, then the claim holds trivially. Hence, we only need to consider the case when it is nonzero. In this case,

\[0<\left\|\mathbf{\nabla}^{-}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho})\right\|_{\infty}=\nabla^{-}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho})_{a^{*}}=-\operatorname{tr}(\mathbf{H}\mathcal{L}^{\beta,\tau,\mathbf{H}}_{a^{*}}[\mathbf{\rho}]).\] (D.20)

Consider \(\hat{\mathbf{\alpha}}:=\hat{\mathbf{e}}_{a^{*}}\in\mathbb{R}^{m}_{\geq 0}\), which satisfies \(\left\|\hat{\mathbf{\alpha}}\right\|_{1}=1\).
We have \[\lim_{t\to 0^{+}}\frac{\operatorname{tr}(\mathbf{H}\exp^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a}}_{\mathbf{\rho}}(t\hat{\mathbf{\alpha}}))-\operatorname{tr}(\mathbf{H}\mathbf{\rho})}{t}=\operatorname{tr}(\mathbf{H}\mathcal{L}^{\beta,\tau,\mathbf{H}}_{a^{*}}[\mathbf{\rho}])=-\left\|\mathbf{\nabla}^{-}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}(\mathbf{H},\mathbf{\rho})\right\|_{\infty}.\] (D.21) At the same time, for any \(t>0\), we also have \[\frac{\operatorname{tr}(\mathbf{H}\exp^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a}}_{\mathbf{\rho}}(t\hat{\mathbf{\alpha}}))-\operatorname{tr}(\mathbf{H}\mathbf{\rho})}{t}\geq-\epsilon\left\|\hat{\mathbf{\alpha}}\right\|_{1}=-\epsilon.\] (D.22) Together, we obtain the desired claim. ### Hamiltonians without suboptimal local minima An important concept in classical optimization is to understand when all local minima are global minima. For example, in convex optimization, checking the convexity of the objective function \(h(\mathbf{x})\) ensures that all local minima are global minima. When all local minima are global minima, the optimization landscape is commonly said to have no suboptimal local minima. For optimizing quantum Hamiltonians, we can define a similar concept. Let us begin with a definition of approximate global minimum. **Definition 10** (Approximate global minimum of Hamiltonians).: _Given \(\epsilon,\delta>0\) and an \(n\)-qubit Hamiltonian \(\mathbf{H}\) with minimum energy \(E_{0}\). Let \(\mathbf{P}_{G+\epsilon}(\mathbf{H})\) be the projector to the subspace of energy eigenstates of \(\mathbf{H}\) with energy at most \(E_{0}+\epsilon\). An \(n\)-qubit state \(\mathbf{\rho}\) is an \(\epsilon\)-approximate global minimum of \(\mathbf{H}\) with failure probability \(\leq\delta\) if \(\operatorname{tr}(\mathbf{P}_{G+\epsilon}(\mathbf{H})\mathbf{\rho})\geq 1-\delta\)._ **Definition 11** (No suboptimal local minima).: _Given \(\epsilon>0\). We say an \(n\)-qubit Hamiltonian \(\mathbf{H}\) has no suboptimal \(\epsilon\)-approximate local minima with failure probability \(\delta\) if any \(\epsilon\)-approximate local minimum \(\mathbf{\rho}\) of \(\mathbf{H}\) is an \(\epsilon\)-approximate global minimum of \(\mathbf{H}\) with failure probability \(\leq\delta\), i.e., \(\operatorname{tr}(\mathbf{P}_{G+\epsilon}(\mathbf{H})\mathbf{\rho})\geq 1-\delta\)._ While the above definitions apply to any Hamiltonian \(\mathbf{H}\), in this work, we will focus on Hamiltonians with a gap \(\Delta>0\) between the minimum energy and the second minimum energy, also known as the spectral gap. By definition of \(\mathbf{P}_{G+\epsilon}(\mathbf{H})\) and spectral gap \(\Delta\), we have \[\epsilon<\Delta\implies\mathbf{P}_{G+\epsilon}(\mathbf{H})=\mathbf{P}_{G}(\mathbf{H}).\] (D.23) As we will almost always consider \(\epsilon<\Delta\), any \(\epsilon\)-approximate global minimum is then a \(0\)-approximate global minimum or _exact_ global minimum. In classical optimization, convexity implies that all local minima are global. In the following lemma, we present a sufficient condition for ensuring that all local minima are global in quantum systems. As we can see, all we need is to check that the negative gradient operator is sufficiently positive on the non-ground-state space \(\mathbf{I}-\mathbf{P}_{G}\). We will refer to this as the _negative gradient condition_.
**Lemma D.3** (A sufficient condition ensuring all local minima are global).: _Given \(\epsilon,\delta>0\), an \(n\)-qubit Hamiltonian \(\mathbf{H}\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a}\), and parameters \(\beta,\tau\geq 0\). Let \(\mathbf{P}_{G}(\mathbf{H})\) be the projection onto the ground state space of \(\mathbf{H}\). If there exists \(\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m}\) with \(\left\lVert\mathbf{\alpha}\right\rVert_{1}=1\), such that the negative gradient operator satisfies_ \[\text{(negative gradient condition):}\quad-\sum_{a}\alpha_{a}\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}[\mathbf{H}]\succeq\frac{2\epsilon}{\delta}(\mathbf{I}-\mathbf{P}_{G}(\mathbf{H}))-\epsilon\mathbf{I},\] (D.24) _then any \(\epsilon\)-approximate local minimum \(\mathbf{\rho}\) of the \(n\)-qubit Hamiltonian \(\mathbf{H}\) under thermal perturbations with inverse temperature \(\beta\), time scale \(\tau\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\) is an exact global minimum with failure probability \(\leq\delta\). That is, \(\operatorname{tr}(\mathbf{P}_{G}(\mathbf{H})\mathbf{\rho})\geq 1-\delta\)._ Proof.: From the necessary condition for local minima in Lemma D.2, any \(\epsilon\)-approximate local minimum \(\mathbf{\rho}\) of the \(n\)-qubit Hamiltonian \(\mathbf{H}\) under thermal perturbations with inverse temperature \(\beta\), time scale \(\tau\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\) satisfies \[-\operatorname{tr}(\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}[\mathbf{H}]\mathbf{\rho})\leq\epsilon\quad\text{for each}\quad a=1,\dots,m.\] (D.25) Hence, from Eq. (D.24), we have \[\epsilon\geq-\sum_{a}\alpha_{a}\operatorname{tr}(\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}[\mathbf{H}]\mathbf{\rho})\geq\frac{2\epsilon}{\delta}(1-\operatorname{tr}(\mathbf{P}_{G}(\mathbf{H})\mathbf{\rho}))-\epsilon.\] (D.26) This immediately implies that \(\operatorname{tr}(\mathbf{P}_{G}(\mathbf{H})\mathbf{\rho})\geq 1-\delta\). ## Appendix E Complexity of finding a local minimum in quantum systems In this appendix, we formally present the main results of this paper shown earlier in Section 2 regarding the computational complexity of finding a local minimum in quantum systems. We separate the results into two parts. First, we look at the problem of finding a local minimum under local unitary perturbations (Def. 7), showing that the problem is classically trivial to solve. Next, we look at the problem of finding a local minimum under low-temperature thermal perturbations (Def. 6). We will see that this problem is quantumly easy but classically hard to solve, establishing a promising candidate problem for quantum advantage. ### Finding a local minimum under local unitary perturbations We begin with the first main result stating that the problem of finding a local minimum under local unitary perturbations is classically trivial. The main issue is that there is a large barren plateau (which consists of many local minima with high energy) in the quantum optimization landscape. Hence, a classical algorithm can efficiently estimate the properties of a single local minimum. **Theorem 5** (Classically easy to find a local minimum under local unitary perturbations; Restatement of Theorem 1).: _Consider a large problem size \(n\). There is a trivial classical algorithm that guarantees the following.
Given error \(\epsilon=1/\mathrm{poly}(n)\), an \(n\)-qubit local Hamiltonian \(\mathbf{H}\) with \(\left\lVert\mathbf{H}\right\rVert_{\infty}=\mathrm{poly}(n)\), \(m\) local Hermitian operators \(\{\mathbf{h}^{a}\}_{a=1}^{m}\) with \(m=\mathrm{poly}(n)\) and \(\left\lVert\mathbf{h}^{a}\right\rVert_{\infty}=1\), and a local observable \(\mathbf{O}\) with \(\left\lVert\mathbf{O}\right\rVert_{\infty}\leq 1\)._ _The classical algorithm runs in time \(\mathcal{O}(1)\) and outputs a real value \(v\in[-1,1]\), such that \(v\) is \(\epsilon\)-close to \(\left\langle\psi\right|\mathbf{O}\left|\psi\right\rangle\) for an \(\epsilon\)-approximate local minimum \(\left|\psi\right\rangle\) of the Hamiltonian \(\mathbf{H}\) under local unitary perturbations generated by \(\{\mathbf{h}^{a}\}_{a}\)._ Proof.: From Lemma C.1 given in Appendix C characterizing local minima of \(\mathbf{H}\) under local unitary perturbations, with high probability, a state \(\left|\psi\right\rangle\) sampled uniformly at random from the space of pure states is an \(\epsilon\)-approximate local minimum of the local Hamiltonian \(\mathbf{H}\) under local unitary perturbations, and \(\left\langle\psi\right|\mathbf{O}\left|\psi\right\rangle\) is \(\epsilon\)-close to \(\mathrm{tr}(\mathbf{O})/2^{n}\). Hence, there exists an \(\epsilon\)-approximate local minimum \(\left|\psi\right\rangle\) of \(\mathbf{H}\) under local unitary perturbations such that \(\left\langle\psi\right|\mathbf{O}\left|\psi\right\rangle\) is \(\epsilon\)-close to \(\mathrm{tr}(\mathbf{O})/2^{n}\). This characterization of local minima gives rise to the following trivial classical algorithm. Given a local observable \(\mathbf{O}\), represented by the subset \(S\) of qubits that \(\mathbf{O}\) acts on with \(\left|S\right|=\mathcal{O}(1)\) and a \(2^{\left|S\right|}\times 2^{\left|S\right|}\) Hermitian matrix \(\mathbf{O}^{*}\), a classical algorithm can compute \(\mathrm{tr}(\mathbf{O})/2^{n}\) efficiently by computing the trace of \(\mathbf{O}^{*}\) and dividing by \(2^{\left|S\right|}\). This trivial classical algorithm runs in time \(\mathcal{O}(1)\). ### Finding a local minimum under thermal perturbations We now turn to the second main result of this work, which shows that finding a local minimum under low-temperature thermal perturbations is easy with a quantum computer. This is in contrast to the task of finding the ground state (global minimum), which is hard on quantum computers. The formal statement is given below in Theorem 6. **Theorem 6** (Quantumly easy to find a local minimum under thermal perturbations; Restatement of Theorem 2).: _Let \(n\) be the problem size. There is a \(\mathrm{poly}(n)\)-time quantum algorithm that guarantees the following.
Given error \(\epsilon=1/\mathrm{poly}(n)\), inverse temperature \(0\leq\beta\leq\mathrm{poly}(n)\), time scale \(\tau=\mathrm{poly}(n)\), an \(n\)-qubit local Hamiltonian \(\mathbf{H}\) with \(\left\lVert\mathbf{H}\right\rVert_{\infty}=\mathrm{poly}(n)\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a=1}^{m}\) with \(m=\mathrm{poly}(n)\), and a local observable \(\mathbf{O}\) with \(\left\lVert\mathbf{O}\right\rVert_{\infty}\leq 1\)._ _The quantum algorithm outputs a real value \(v\in[-1,1]\), such that \(v\) is \(\epsilon\)-close to \(\mathrm{tr}(\mathbf{O}\mathbf{\rho})\) for an \(\epsilon\)-approximate local minimum \(\mathbf{\rho}\) of \(\mathbf{H}\) under thermal perturbations with an inverse temperature \(\beta\), a time scale \(\tau\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\)._ Proof idea.: We consider a version of gradient descent, which we refer to as _Quantum thermal gradient descent_, that mimics how Nature cools the quantum system when the system is interacting locally and weakly with a low-temperature heat bath. The algorithm starts with an arbitrary initial state \(\mathbf{\rho}_{0}\). For each step \(t=0,1,2,\ldots\), the algorithm considers the current state \(\mathbf{\rho}_{t}\) and proposes the next state \(\mathbf{\rho}_{t+1}\). The tangent space \(T^{\beta,\tau,\mathbf{H}}_{\mathbf{\rho}_{t}}\) at \(\mathbf{\rho}_{t}\) is high dimensional with many possible directions/dynamics depending on the system-bath interaction. The algorithm chooses a direction that lowers the energy as fast as possible by computing the gradient of the energy and proposes \(\mathbf{\rho}_{t+1}\) by performing gradient descent. As long as the current state \(\mathbf{\rho}_{t}\) is not an \(\epsilon\)-approximate local minimum of \(\mathbf{H}\) under thermal perturbations, the energy will decrease by a sufficiently large amount \[\mathrm{tr}(\mathbf{H}\mathbf{\rho}_{t+1})<\mathrm{tr}(\mathbf{H}\mathbf{\rho}_{t})-\frac{1}{\mathrm{poly}(n)}.\] (E.1) Because the energy is bounded from below, there are at most a polynomial number of steps \(t\leq\mathrm{poly}(n)\) until the algorithm arrives at an \(\epsilon\)-approximate local minimum of \(\mathbf{H}\) under thermal perturbations. The detailed proof of Theorem 6 is given in Appendix G. Finally, we turn to the third main result establishing the difficulty of finding a local minimum under thermal perturbations using a classical computer. To establish this result, we consider a class of geometrically local Hamiltonians \(\{\mathbf{H}_{C}\}_{C}\) on 2D lattices. Each Hamiltonian \(\mathbf{H}_{C}\) corresponds to a 2D circuit \(\mathbf{U}_{C}=\mathbf{U}_{T}\cdots\mathbf{U}_{2}\mathbf{U}_{1}\) acting on \(n\) qubits with \(T=2t_{0}+L=\operatorname{poly}(n)\) gates as constructed in Fig. 1 of [43], with additional padding to the construction in [43] such that the first and last \(t_{0}=cL^{2}\) gates are identity gates for a constant \(c=\mathcal{O}(1)\). The construction in [43] has the property that each gate of the 2D circuit \(\mathbf{U}_{C}\) is geometrically adjacent to the subsequent gate. Given the 2D circuit \(\mathbf{U}_{C}\) on \(n\) qubits with \(T\) gates.
The geometrically local Hamiltonian \(\mathbf{H}_{C}\) acts on \(n+T\) qubits on a 2D lattice and has a highly-entangled unique ground state that encodes the quantum computation based on the 2D circuit \(\mathbf{U}_{C}\), \[\ket{\eta_{\mathbf{0}}}=\sum_{t=0}^{T}\sqrt{\xi_{t}}\big{(}\mathbf{U}_{t}\cdots\mathbf{U}_{1}\ket{0^{n}}\big{)}\otimes\ket{0^{t}1^{T-t}},\qquad\text{where}\quad\xi_{t}:=\frac{1}{2^{T}}\binom{T}{t}.\] (E.2) We present the detailed construction of the 2D Hamiltonian \(\mathbf{H}_{C}\) in Definition 14 in Appendix J. We have the following proposition for estimating single-qubit observables on the ground state of \(\mathbf{H}_{C}\). **Proposition E.1** (\(\mathsf{BQP}\)-hardness for estimating properties of the ground state of \(\mathbf{H}_{C}\)).: _If there is a classical algorithm that can estimate any single-qubit observable on the unique ground state of the geometrically local Hamiltonian \(\mathbf{H}_{C}\) in time polynomial in the number of qubits in \(\mathbf{H}_{C}\) to error \(1/4\) for any \(\mathbf{H}_{C}\) in the class, then \(\mathsf{BPP}=\mathsf{BQP}\)._ Proof.: Consider the single-qubit observable \(\mathbf{Z}_{j}\) and let \(T_{j}\) be the last time that qubit \(j\) is acted on by a gate in the circuit \(C\). The ground state expectation of \(\mathbf{Z}_{j}\) is \[\bra{\eta_{\mathbf{0}}}\mathbf{Z}_{j}\ket{\eta_{\mathbf{0}}} =\sum_{t=T_{j}+1}^{T}\xi_{t}\bra{0^{n}}\mathbf{U}_{1}^{\dagger}\cdots\mathbf{U}_{t}^{\dagger}\mathbf{Z}_{j}\mathbf{U}_{t}\cdots\mathbf{U}_{1}\ket{0^{n}}+\sum_{t=0}^{T_{j}}\xi_{t}\bra{0^{n}}\mathbf{U}_{1}^{\dagger}\cdots\mathbf{U}_{t}^{\dagger}\mathbf{Z}_{j}\mathbf{U}_{t}\cdots\mathbf{U}_{1}\ket{0^{n}}\] \[=\bra{0^{n}}\mathbf{U}_{1}^{\dagger}\cdots\mathbf{U}_{T}^{\dagger}\mathbf{Z}_{j}\mathbf{U}_{T}\cdots\mathbf{U}_{1}\ket{0^{n}}P_{t>T_{j}}+\epsilon_{j},\] (E.3) \[\text{where}\quad P_{t>T_{j}}:=\sum_{t>T_{j}}\xi_{t},\quad\epsilon_{j}:=\sum_{t\leq T_{j}}\xi_{t}\bra{0^{n}}\mathbf{U}_{1}^{\dagger}\cdots\mathbf{U}_{t}^{\dagger}\mathbf{Z}_{j}\mathbf{U}_{t}\cdots\mathbf{U}_{1}\ket{0^{n}}.\] (E.4) We have used the fact that \(\mathbf{U}_{t}\) for \(t>T_{j}\) acts like the identity on the \(j\)-th qubit. Note that \[|\epsilon_{j}|\leq 1-P_{t>T_{j}}=:P_{t\leq T_{j}}.\] (E.5) We can make \(\epsilon_{j}\) arbitrarily small using a tail bound on the binomial distribution. Given any circuit, one could always pad more identity gates to form an \(L\)-gate circuit, such that the last \(3/4\) of the \(L\) gates are identity. Recall that \(T=2t_{0}+L=2cL^{2}+L\). Then \(T_{j}\leq cL^{2}+L/4\) and we have \(|\epsilon_{j}|\leq P_{t\leq T_{j}}\leq P_{t\leq cL^{2}+L/4}\). Using Hoeffding's inequality, we can bound the probability of sampling a time \(t\leq cL^{2}+L/4\) according to the binomial distribution \(\{\xi_{t}\}_{t=0}^{T}\). This yields \[|\epsilon_{j}|\leq\exp\left[-2T\left(\frac{1}{2}-\frac{cL^{2}+L/4}{2cL^{2}+L}\right)^{2}\right]=\mathrm{e}^{-\frac{L}{8+16cL}}.\] (E.6) By choosing a small constant \(c\leq 1/(16\ln 18)-1/(2L)\), we have \(|\epsilon_{j}|\leq 1/18\) and \(P_{t>T_{j}}\geq 17/18\).
Because of the bounds on the error \(|\epsilon_{j}|\) and the probability \(P_{t>T_{j}}\), a classical algorithm satisfying the assumption of the proposition can determine whether \[\bra{0^{n}}\mathbf{U}_{1}^{\dagger}\cdots\mathbf{U}_{T}^{\dagger}\mathbf{Z}_{j}\mathbf{U}_{T}\cdots\mathbf{U}_{1}\ket{0^{n}}>1/3\quad\text{or}\quad\bra{0^{n}}\mathbf{U}_{1}^{\dagger}\cdots\mathbf{U}_{T}^{\dagger}\mathbf{Z}_{j}\mathbf{U}_{T}\cdots\mathbf{U}_{1}\ket{0^{n}}<-1/3,\] (E.7) for any 2D circuit \(\mathbf{U}_{C}\) with \(T=2t_{0}+L=2cL^{2}+L\) gates, where the first \(t_{0}\) and the last \((3/4)L+t_{0}\) gates are identity. Because one could think of the circuit \(\mathbf{U}_{C}\) as having only \(L/4\) nontrivial gates for any \(L=\operatorname{poly}(n)\), this immediately implies that a polynomial-time classical algorithm can decide whether the expectation value of \(\mathbf{Z}_{j}\) on the output state is \(>1/3\) or \(<-1/3\) for any polynomial-size 2D circuit where all consecutive gates are adjacent in the 2D geometry. A 2D circuit \(\mathbf{U}_{C}\) such that any gate is adjacent to the subsequent gate can be constructed from any quantum circuit without the 2D constraint, such that a single-qubit observable \(\mathbf{Z}_{i}\) on the output of the original quantum circuit corresponds to a single-qubit observable \(\mathbf{Z}_{j}\) on the output of the 2D circuit. As a result, any polynomial-time classical algorithm that can determine whether the expectation value of \(\mathbf{Z}_{j}\) on \(\mathbf{U}_{t}\cdots\mathbf{U}_{1}\ket{0^{n}}\) is greater than \(1/3\) or smaller than \(-1/3\) can be used to simulate any polynomial-time quantum algorithm for solving decision problems in classical polynomial time. Hence, \(\mathsf{BPP}=\mathsf{BQP}\). Using a series of mathematical techniques presented in Appendix H for characterizing whether all local minima are global minima in a many-body Hamiltonian, we prove that all local minima of \(\mathbf{H}_{C}\) are close to the unique ground state \(\ket{\eta_{\mathbf{0}}}\) in Theorem 7. This theorem is the most involved technical contribution of this work. Intuitively, one can think of the energy landscape of the 2D Hamiltonian \(\mathbf{H}_{C}\) over the space of density matrices under low-temperature thermal perturbations as having a benign bowl shape. This is in stark contrast to the energy landscape under local unitary perturbations, where the landscape always contains an overwhelmingly large barren plateau causing the problem of finding local minima to be classically easy. Furthermore, this theorem shows that low-temperature cooling can always find a state close to the ground state irrespective of where we initialize the state in the exponentially large quantum state space. **Theorem 7** (All local minima are global in \(\mathsf{BQP}\)-hard Hamiltonians; Restatement of Theorem 3).: _Let \(\mathbf{P}_{G}(\mathbf{H}_{C})=\ket{\eta_{\mathbf{0}}}\!\!\bra{\eta_{\mathbf{0}}}\) be the ground state projector of the 2D Hamiltonian \(\mathbf{H}_{C}\) acting on \(n+T=\operatorname{poly}(n)\) qubits. There is a choice of \(m=\operatorname{poly}(n)\) two-qubit jump operators \(\{\mathbf{A}^{a}\}_{a}\) satisfying the following._ _Given \(0<\delta<1\).
For any small error \(\epsilon=1/\operatorname{poly}(n,1/\delta)\), any \(\epsilon\)-approximate local minimum \(\mathbf{\rho}\) of \(\mathbf{H}_{C}\) under thermal perturbations with a large inverse temperature \(\beta=\operatorname{poly}(n,1/\delta)\), a large time scale \(\tau=\operatorname{poly}(n,1/\delta)\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\) is an exact global minimum with high probability, i.e., we have \(\operatorname{tr}(\mathbf{P}_{G}(\mathbf{H}_{C})\mathbf{\rho})\geq 1-\delta\)._ The proof of Theorem 7 is given in Appendix J. To show that the landscape has a benign bowl shape, we utilize the negative gradient condition given in Appendix D.3. However, the negative energy gradient operator is not easy to study. To establish this strong claim, we give a series of techniques in Appendix H for characterizing the negative energy gradient operator in few-qubit systems, in commuting Hamiltonians, and in perturbed Hamiltonians. These technical tools can also be used to understand the energy landscape in other interacting many-body Hamiltonians. While finding a local minimum under local unitary perturbations is classically easy, the characterization of the energy landscape in these \(\mathsf{BQP}\)-hard Hamiltonians \(\mathbf{H}_{C}\) implies that finding a local minimum under thermal perturbations is _universal for quantum computation_ and is hence classically hard if \(\mathsf{BPP}\neq\mathsf{BQP}\). Recall that \(\mathsf{BPP}=\mathsf{BQP}\) implies that all single-qubit measurements of all polynomial-size quantum circuits can be simulated in polynomial time on a classical computer. Since one expects some quantum circuits to be hard to simulate on a classical computer, Theorem 8 implies that finding a local minimum under thermal perturbations is classically hard. **Theorem 8** (Classically hard to find a local minimum under thermal perturbations; Restatement of Theorem 4).: _Let \(n\) be the problem size. Suppose there is a \(\operatorname{poly}(n)\)-time classical algorithm guaranteeing the following. Given error \(\epsilon=1/\operatorname{poly}(n)\), inverse temperature \(0\leq\beta\leq\operatorname{poly}(n)\), time scale \(0\leq\tau\leq\operatorname{poly}(n)\), an \(n\)-qubit local Hamiltonian \(\boldsymbol{H}\) with \(\left\lVert\boldsymbol{H}\right\rVert_{\infty}=\operatorname{poly}(n)\), \(m\) local jump operators \(\{\boldsymbol{A}^{a}\}_{a=1}^{m}\) with \(m=\operatorname{poly}(n)\), and a single-qubit observable \(\boldsymbol{O}\) with \(\left\lVert\boldsymbol{O}\right\rVert_{\infty}\leq 1\)._ _The classical algorithm outputs a real value \(v\in[-1,1]\), such that \(v\) is \(\epsilon\)-close to \(\operatorname{tr}(\boldsymbol{O}\boldsymbol{\rho})\) for an \(\epsilon\)-approximate local minimum \(\boldsymbol{\rho}\) of the Hamiltonian \(\boldsymbol{H}\) under thermal perturbations with an inverse temperature \(\beta\), a time scale \(\tau\), and system-bath interactions generated by \(\{\boldsymbol{A}^{a}\}_{a}\). Then \(\mathsf{BPP}=\mathsf{BQP}\)._ Proof.: Assume the existence of a polynomial-time classical algorithm that satisfies the properties stated in the theorem.
Apply this classical algorithm to the 2D Hamiltonian \(\boldsymbol{H}_{C}\) considered in Theorem 7 with a sufficiently small approximation error \(\epsilon\), such that any \(\epsilon\)-approximate local minimum \(\boldsymbol{\rho}\) of \(\boldsymbol{H}_{C}\) under thermal perturbations with polynomially-large \(\beta\), \(\tau\) and system-bath interactions generated by \(\{\boldsymbol{A}^{a}\}_{a}\) is an exact global minimum with high probability, i.e., \[\langle\eta_{\boldsymbol{0}}|\,\boldsymbol{\rho}\,|\eta_{\boldsymbol{0}}\rangle=\operatorname{tr}(\boldsymbol{P}_{\!G}(\boldsymbol{H}_{C})\boldsymbol{\rho})\geq 1-\frac{1}{16^{2}},\] (E.8) where \(|\eta_{\boldsymbol{0}}\rangle\) is the unique ground state of \(\boldsymbol{H}_{C}\). We further consider \(\epsilon\) to be small enough such that \[\epsilon<\frac{1}{8}.\] (E.9) Let \(\boldsymbol{\rho}\) be an \(\epsilon\)-approximate local minimum of the Hamiltonian \(\boldsymbol{H}_{C}\) under thermal perturbations. Consider the observable \(\boldsymbol{O}_{j}=\boldsymbol{Z}_{j}\) from the proof of Proposition E.1. Using the Fuchs-van de Graaf inequalities and Eq. (E.8), we have \[\left\lVert\boldsymbol{\rho}-|\eta_{\boldsymbol{0}}\rangle\!\langle\eta_{\boldsymbol{0}}|\right\rVert_{1}\leq\frac{1}{8}.\] (E.10) Because the classical algorithm can estimate \(\operatorname{tr}(\boldsymbol{O}_{j}\boldsymbol{\rho})\) to error \(\epsilon\), from Eqs. (E.9) and (E.10), the classical algorithm can estimate \(\langle\eta_{\boldsymbol{0}}|\boldsymbol{O}_{j}|\eta_{\boldsymbol{0}}\rangle\) to error \(1/4\) in time polynomial in the number of qubits in \(\boldsymbol{H}_{C}\). From Prop. E.1, this implies that \(\mathsf{BPP}=\mathsf{BQP}\). In the following, we use the previous theorem to show that quantum machines, by performing low-temperature cooling, can improve over any efficient classical algorithm that variationally optimizes a classical ansatz capable of efficiently predicting local properties. Examples of the classical ansatz include tensor networks with efficient tensor contraction algorithms and neural-network quantum states with fast sampling algorithms. This result provides a physically-relevant problem that yields a quantum advantage in minimizing the energy of a geometrically-local Hamiltonian. **Corollary E.1** (Quantum advantage over variationally optimized classical ansatz).: _Under the conjecture that \(\mathsf{BPP}\neq\mathsf{BQP}\), there exists a class of \(n\)-qubit geometrically-local Hamiltonians \(\boldsymbol{H}\) on a two-dimensional lattice with \(\left\lVert\boldsymbol{H}\right\rVert_{\infty}=\mathcal{O}(n)\) that satisfies the following. Given any classical ansatz for an \(n\)-qubit state \(\boldsymbol{\rho}\) that can estimate the expectation value of single-qubit observables to \(1/\operatorname{poly}(n)\) error in \(\operatorname{poly}(n)\)-time on classical computers, any \(\operatorname{poly}(n)\)-time classical algorithm for minimizing the energy \(\operatorname{tr}(\boldsymbol{H}\boldsymbol{\rho})\) using the classical ansatz, and samples of the state \(\boldsymbol{\rho}\) represented by the optimized classical ansatz, a quantum machine can find a state \(\boldsymbol{\rho}^{\#}\) with strictly lower energy than \(\boldsymbol{\rho}\) in \(\operatorname{poly}(n)\) time by running quantum thermal gradient descent based on low-temperature cooling._ Proof.: The central claim is that the state \(\boldsymbol{\rho}\) found by an efficient classical algorithm cannot be an \(\epsilon\)-approximate local minimum under low-temperature thermal perturbations.
We establish this claim by contradiction. Suppose that the classical ansatz for \(\boldsymbol{\rho}\) found by the efficient classical algorithm is an \(\epsilon\)-approximate local minimum. Then the classical algorithm can use the classical ansatz to predict the expectation values of single-qubit observables of an \(\epsilon\)-approximate local minimum \(\boldsymbol{\rho}\) of \(\boldsymbol{H}\) to \(\epsilon\) error. From Theorem 8, this implies that \(\mathsf{BPP}=\mathsf{BQP}\), which is a contradiction. Because \(\mathbf{\rho}\) is not an \(\epsilon\)-approximate local minimum under low-temperature thermal perturbations, a quantum machine can use samples of \(\mathbf{\rho}\) to initialize at the state \(\mathbf{\rho}\) and perform one gradient descent step based on low-temperature cooling. From Lemma D.2 on the necessary condition for local minima, there exists \(a\in\{1,\ldots,m\}\) such that \(\operatorname{tr}(\mathbf{H}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}[\mathbf{\rho}])<-\epsilon\). From Lemma G.1 on cooling by gradient descent, a single gradient descent step then yields a state \(\mathbf{\rho}^{\text{(next)}}\) with strictly lower energy than the state \(\mathbf{\rho}\). This establishes the desired claim. ## Appendix F Details of thermal Lindbladians In the rest of the appendices, we give the full detailed proofs of Theorems 6 and 7 that are central to establishing the computational complexity of finding local minima under thermal perturbations in the previous appendix. To that end, we need to provide the technical details of thermal Lindbladians that generate such perturbations. We have previously presented a high-level introduction to thermal Lindbladians in Appendix A.3. This has been sufficient for defining local minima and analyzing some basic properties, but not enough for proving Theorems 6 and 7. In this appendix, we present the exact form of thermal Lindbladians, their properties, and the algorithmic primitives for simulating quantum thermodynamics. ### Exact form The exact form of the thermal Lindbladian depends on a few physical concepts due to the microscopic derivation from a system-bath interaction [41]. For each jump \(\mathbf{A}^{a}\), we have \[\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}(\mathbf{\rho}):=-\mathrm{i}[\mathbf{H}_{LS,a}^{\beta,\tau,\mathbf{H}},\mathbf{\rho}]+\underbrace{\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\Big{[}\hat{\mathbf{A}}^{a}(\omega)\mathbf{\rho}\hat{\mathbf{A}}^{a}(\omega)^{\dagger}-\frac{1}{2}\{\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega),\mathbf{\rho}\}\Big{]}\mathrm{d}\omega}_{:=\mathcal{D}_{a}^{\beta,\tau,\mathbf{H}}[\mathbf{\rho}]}\] (F.1) where \(\mathcal{D}_{a}^{\beta,\tau,\mathbf{H}}\) is the purely dissipative part of the thermal Lindbladian. Implicitly, the operator \(\hat{\mathbf{A}}^{a}(\omega)\) also depends on the Hamiltonian \(\mathbf{H}\) and the time scale \(\tau\). We now unpack the physical concepts that form the building blocks of this expression. Transition weight. At a fixed inverse temperature \(\beta\), the _transition weight_ \(\gamma_{\beta}(\omega)\) tells us how strong the rate of a transition/jump should be, depending on the energy difference \(\omega\).
In particular, the transition weight satisfies the following _Kubo-Martin-Schwinger (KMS) condition_ and convenient normalization \[\gamma_{\beta}(\omega)/\gamma_{\beta}(-\omega)=\mathrm{e}^{-\beta\omega}\quad\text{and}\quad 0\leq\gamma_{\beta}(\omega)\leq 1\quad\text{for any}\quad\beta\geq 0\quad\text{and any}\quad\omega\in\mathbb{R},\] (F.2) which is reminiscent of how detailed balance is enforced in classical Markov chains. We remark that any \(\gamma_{\beta}(\omega)\) obeying the above KMS condition and normalization also satisfies the following tail bound \[\max_{\omega\geq\Delta}\omega\gamma_{\beta}(\omega)\leq\max_{\omega\geq\Delta}\omega\mathrm{e}^{-\beta\omega}=\frac{1}{\beta}\max_{x\geq\beta\Delta}x\mathrm{e}^{-x}\leq\frac{1}{\beta}\max_{x\geq\beta\Delta}\mathrm{e}^{-x/2}=\frac{\mathrm{e}^{-\beta\Delta/2}}{\beta}.\] (F.3) For concreteness, we usually adopt the common choice of \(\gamma_{\beta}\) corresponding to _Glauber dynamics_, with a cut-off frequency \(\Lambda_{0}\) to regulate the inverse Fourier transform: \[\gamma_{\beta}(\omega)=\frac{1}{2+\ln(1+\beta\Lambda_{0})}\frac{\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}}{1+\mathrm{e}^{\beta\omega}}.\] (F.4) In the zero temperature regime (\(\beta=\infty\)), the function \((1+\mathrm{e}^{\beta\omega})^{-1}\) gives a step function (one for negative \(\omega\) and zero for positive \(\omega\)). Based on the choice of bath phenomenology, there are plenty of options for the transition weight, such as ohmic heating \(\gamma_{\beta}(\omega)=\frac{1}{\omega_{0}}\frac{\omega\,\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}}{1-\mathrm{e}^{-\beta\omega}}\). However, for simplicity, we will stick to the Glauber dynamics. Unless otherwise stated, we will also choose the cut-off frequency to be \[\Lambda_{0}=1,\] (F.5) since each local jump \(\mathbf{A}^{a}\) changes the energy by at most \(\mathcal{O}(1)\) for our usage (and this is generally true for local Hamiltonians with bounded degree interaction graph and bounded-norm terms). We do not expect our main conclusion to change under other reasonable choices of \(\gamma_{\beta}(\omega)\). Operator Fourier transform. Given a jump operator \(\mathbf{A}^{a}\), we consider the _operator Fourier transform_ [42] of the Heisenberg-evolved jump operator \(\mathbf{A}^{a}(t)\), characterized by a time scale \(\tau\in\mathbb{R}\) of the heat bath, \[\hat{\mathbf{A}}^{a}(\omega):=\frac{1}{\sqrt{2\pi\tau}}\int_{-\tau/2}^{\tau/2}\underbrace{\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{A}^{a}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}}_{=:\mathbf{A}^{a}(t)}\mathrm{e}^{-\mathrm{i}\omega t}\mathrm{d}t.\] (F.6) The operator \(\hat{\mathbf{A}}^{a}(\omega)\) corresponds to matrix elements in \(\mathbf{A}^{a}\) that induce jumps between energy eigenstates with an energy difference approximately \(\omega\pm\mathcal{O}(\frac{1}{\tau})\). The bigger \(\tau\) is, the more precisely \(\omega\) corresponds to the true energy difference; see Appendix K for further details. Physically, \(\tau\) is related to microscopic parameters of the bath (the bath correlation time and the weak-coupling strength [41]), but our discussion only requires the single time scale \(\tau\) that sets the Fourier transform energy uncertainty. Lamb-shift. The interaction with the heat bath induces an additional correction term in the coherent Hamiltonian dynamics of the \(n\)-qubit system, known as the _Lamb-shift_.
Given a jump operator \(\mathbf{A}^{a}\), we have the following Lamb-shift Hamiltonian that depends on the bath correlation function \(c_{\beta}(t)\) and the time scale \(\tau\) \[\mathbf{H}_{LS,a}^{\beta,\tau,\mathbf{H}}:=\frac{\mathrm{i}}{2\sqrt{2\pi\tau}}\int_{-\tau/2}^{\tau/2}\int_{-\tau/2}^{\tau/2}\mathrm{sgn}(t_{1}-t_{2})c_{\beta}(t_{2}-t_{1})\mathbf{A}^{a}(t_{2})\mathbf{A}^{a}(t_{1})\mathrm{d}t_{2}\mathrm{d}t_{1}.\] (F.7) While the Lamb-shift term is physically important, for our purposes, it is largely treated as a source of error; the energy gradient contribution comes from the dissipative part \(\mathcal{D}_{a}^{\beta,\tau,\mathbf{H}}\). Bath correlation function. In the Lamb-shift term, the bath correlation function \(c_{\beta}(t)\) is the Fourier transform of the transition weight \(\gamma_{\beta}(\omega)\), \[c_{\beta}(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\mathrm{e}^{+\mathrm{i}\omega t}\mathrm{d}\omega.\] (F.8) The prefactor in Eq. (F.4) is chosen such that (see Proposition F.1) \[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}|c_{\beta}(t)|\,\mathrm{d}t\leq 1.\] (F.9) This normalization ensures that \(\left\lVert\mathbf{H}_{LS,a}^{\beta,\tau,\mathbf{H}}\right\rVert\) is bounded by \(\mathcal{O}(1)\). Absolute zero \(\beta=\infty\). It is instructive to consider the case of zero temperature \(\beta=\infty\) and infinite time scale \(\tau=\infty\). In this case, the transition weight \(\gamma_{\beta}(\omega)\) is a step function (1 for \(\omega<0\) and 0 for \(\omega>0\)) and \(\hat{\mathbf{A}}^{a}(\omega)\) measures the energy difference perfectly. Thus, all heating transitions (\(|E\rangle\to|E+\omega\rangle\) for \(\omega>0\)) are forbidden, and all cooling transitions (\(|E\rangle\to|E+\omega\rangle\) for \(\omega<0\)) will remain. Hence, in the case when \(\beta=\tau=\infty\), the thermal Lindbladian only lowers the energy. This matches our physical intuition that a zero-temperature bath only absorbs energy from the system. Multiple jumps. The thermal Lindbladian \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) considers merely a single jump operator \(\mathbf{A}^{a}\) in the system-bath interaction. When there are multiple jump operators, the total thermal Lindbladian is a weighted sum of the individual thermal Lindbladians \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\), \[\mathcal{L}^{\beta,\tau,\mathbf{H}}=\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}},\] (F.10) where \(\alpha_{a}\geq 0\) is a nonnegative weight. Again, the interaction strength vector \(\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m}\) weights the contribution of each thermal Lindbladian. Thus, the total equation of motion under multiple jumps reads \[\frac{d\mathbf{\rho}}{dt} =-\mathrm{i}[\mathbf{H},\mathbf{\rho}]+\mathcal{L}^{\beta,\tau,\mathbf{H}}(\mathbf{\rho})\] (F.11) \[=-\mathrm{i}\left[\mathbf{H}+\sum_{a=1}^{m}\alpha_{a}\mathbf{H}_{LS,a}^{\beta,\tau,\mathbf{H}},\mathbf{\rho}\right]+\sum_{a=1}^{m}\alpha_{a}\mathcal{D}_{a}^{\beta,\tau,\mathbf{H}}(\mathbf{\rho}),\] (F.12) which consists of a coherent part and a purely dissipative part. Calculation for normalization of \(c_{\beta}(t)\). We now give a supplemental calculation that shows our choice of \(\gamma_{\beta}(\omega)\) in Eq. (F.4) satisfies the condition in Eq. (F.9).
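Complementing the formal calculation below, the following minimal numpy sketch can serve as a numerical sanity check of the KMS condition in Eq. (F.2) and the normalization in Eq. (F.9). The parameter values \(\beta=10\), \(\Lambda_{0}=1\) and the integration grids are illustrative assumptions, not choices made in this work; truncating the time integral only decreases the left-hand side of Eq. (F.9), so the check remains conservative.

```python
import numpy as np

beta, Lam0 = 10.0, 1.0  # illustrative parameters; Lambda_0 = 1 as in Eq. (F.5)

def gamma(omega):
    # Glauber transition weight with the normalizing prefactor of Eq. (F.4).
    return np.exp(-omega**2 / (2 * Lam0**2)) / (1 + np.exp(beta * omega)) \
        / (2 + np.log(1 + beta * Lam0))

# KMS detailed-balance condition, Eq. (F.2): gamma(w) / gamma(-w) = e^{-beta w}.
for w in (0.1, 0.5, 1.0):
    assert np.isclose(gamma(w) / gamma(-w), np.exp(-beta * w))

# Bath correlation function c_beta(t) by direct quadrature of Eq. (F.8);
# gamma is negligible beyond |omega| > 5 for Lambda_0 = 1.
omega = np.linspace(-10, 10, 2001)
dw = omega[1] - omega[0]
t = np.linspace(-60, 60, 1201)   # |c_beta(t)| decays like 1/t^2 (cf. Prop. F.1)
dt = t[1] - t[0]
c = np.exp(1j * np.outer(t, omega)) @ gamma(omega) * dw / np.sqrt(2 * np.pi)

# Normalization, Eq. (F.9): (1/sqrt(2 pi)) * int |c_beta(t)| dt <= 1.
print(np.sum(np.abs(c)) * dt / np.sqrt(2 * np.pi))  # prints a value <= 1
```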
**Proposition F.1**.: _For_ \[\hat{f}(\omega):=\frac{\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}}{1+\mathrm{e}^{\beta\omega}},\] (F.13) _we have that_ \[\frac{1}{\sqrt{2\pi}}\left\lVert f\right\rVert_{1}\leq 2+\ln(1+\beta\Lambda_{0}).\] (F.14) Proof.: We want to bound the 1-norm of \(f(t)\) in the time domain. To do so, we bound the moments in the time domain: \[\sqrt{2\pi}\left\lVert f\right\rVert_{\infty} \leq\left\lVert\hat{f}\right\rVert_{1}\leq\int_{-\infty}^{\infty}\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}\mathrm{d}\omega=\Lambda_{0}\sqrt{2\pi}.\] (F.15) \[\sqrt{2\pi}\left\lVert tf(t)\right\rVert_{\infty} \leq\left\lVert\frac{\mathrm{d}}{\mathrm{d}\omega}\hat{f}\right\rVert_{1}=2\cdot\left\lVert\hat{f}\right\rVert_{\infty}\leq 2.\] (F.16) \[\sqrt{2\pi}\left\lVert t^{2}f(t)\right\rVert_{\infty} \leq\left\lVert\frac{\mathrm{d}^{2}}{\mathrm{d}\omega^{2}}\hat{f}\right\rVert_{1}\leq 4\cdot\left\lVert\frac{\mathrm{d}}{\mathrm{d}\omega}\hat{f}\right\rVert_{\infty}\leq T.\] (F.17) The second line uses the fact that \(\hat{f}\) is increasing and then decreasing (from \(-\infty\) to \(\infty\)). The third line follows by evaluating the derivative \[4\left\lvert\frac{\mathrm{d}}{\mathrm{d}\omega}\left(\frac{\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}}{1+\mathrm{e}^{\beta\omega}}\right)\right\rvert=4\left\lvert\frac{-\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}\omega/\Lambda_{0}^{2}}{1+\mathrm{e}^{\beta\omega}}-\frac{\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}\beta\mathrm{e}^{\beta\omega}}{(1+\mathrm{e}^{\beta\omega})^{2}}\right\rvert\] (F.18) \[\leq 4\left(\frac{1}{\sqrt{\mathrm{e}}\Lambda_{0}}+\beta\right)=:T.\text{ (since $\mathrm{e}^{-x^{2}/2}x\leq\frac{1}{\sqrt{\mathrm{e}}}$)}\] Thus, we may partition the integral into three regions to optimize the bound \[\|f\|_{1} =\left(\int_{|t|\leq\Lambda_{0}^{-1}}+\int_{T\geq|t|\geq\Lambda_{0}^{-1}}+\int_{|t|\geq T}\right)|f(t)|\,\mathrm{d}t\] \[\leq\int_{|t|\leq\Lambda_{0}^{-1}}\Lambda_{0}\mathrm{d}t+\frac{1}{\sqrt{2\pi}}\int_{T\geq|t|\geq\Lambda_{0}^{-1}}\frac{2}{|t|}\mathrm{d}t+\frac{1}{\sqrt{2\pi}}\int_{|t|\geq T}\frac{T}{t^{2}}\mathrm{d}t\] \[\leq 2+\frac{4}{\sqrt{2\pi}}\ln(\Lambda_{0}T)+\frac{2}{\sqrt{2\pi}}\] \[\leq\frac{2+2\sqrt{2\pi}+4\ln(\frac{4}{\sqrt{\mathrm{e}}}+4\beta\Lambda_{0})}{\sqrt{2\pi}}\] \[\leq\frac{2+2\sqrt{2\pi}+8\ln(2)+4\ln(1+\beta\Lambda_{0})}{\sqrt{2\pi}}\leq\sqrt{2\pi}(2+\ln(1+\beta\Lambda_{0})).\] where in the last line, we used \(1/\sqrt{\mathrm{e}}\leq 1\) among other numerical bounds. ### Properties of thermal Lindbladians From the exact forms of the thermal Lindbladians, we have the following propositions. **Proposition F.2** (Norm for the dissipative part [42]).: _Any purely dissipative Lindbladian \(\sum_{a}\mathcal{D}_{a}^{\beta,\tau,\mathbf{H}}\) defined in Eq. (F.1) for any set of jump operators \(\{\mathbf{A}^{a}\}_{a=1}^{m}\) and any transition weight satisfying Eq. (F.2) has bounded superoperator norms_ \[\left\|\sum_{a=1}^{m}\alpha_{a}\mathcal{D}_{a}^{\dagger\beta,\tau,\mathbf{H}}\right\|_{\infty-\infty}=\left\|\sum_{a=1}^{m}\alpha_{a}\mathcal{D}_{a}^{\beta,\tau,\mathbf{H}}\right\|_{1-1}\leq 2\left\|\sum_{a=1}^{m}\alpha_{a}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|.\] (F.19) _The first equality is the duality between the \(1-1\) and \(\infty-\infty\) superoperator norms._ **Proposition F.3** (Properties of the Lamb-shift term [42]).: _The sum of Lamb-shift terms (F.7) for any set of jump operators \(\{\mathbf{A}^{a}\}_{a=1}^{m}\) under a normalized bath correlation function \(c_{\beta}(t)\) given by Eq.
(F.9) satisfies8_ Footnote 8: Implicitly, the Lamb-shift term has units of energy yet does not scale with \(\|\mathbf{H}\|\). \[\left\|\sum_{a=1}^{m}\alpha_{a}\mathbf{H}_{LS,a}^{\beta,\tau,\mathbf{H}}\right\| \leq\frac{1}{2}\left\|\sum_{a=1}^{m}\alpha_{a}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|\] (F.20) \[\left\|\sum_{a=1}^{m}\alpha_{a}[\mathbf{H}_{LS,a}^{\beta,\tau,\mathbf{H}},\mathbf{H}]\right\| \leq\mathcal{O}\left(\frac{\|\mathbf{H}\|^{3/4}}{\tau^{1/4}}\left\|\sum_{a=1}^{m}\alpha_{a}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|\right).\] (F.21) _For large enough \(\tau\), the Lamb-shift term almost commutes with the Hamiltonian._ From Prop. F.2 and Prop. F.3, we have the following norm bound for thermal Lindbladians. **Proposition F.4** (Norm of thermal Lindbladians).: _Given a Hamiltonian \(\mathbf{H}\), an inverse temperature \(\beta\geq 0\), a time scale \(\tau\geq 0\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a=1}^{m}\), a transition weight \(\gamma_{\beta}(\omega)\) satisfying Eq. (F.2), and a normalized bath correlation function \(c_{\beta}(t)\) satisfying Eq. (F.9). The associated thermal Lindbladian \(\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) has bounded superoperator norms_ \[\left\|\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}\right\|_{\infty-\infty}=\left\|\sum_{a=1}^{m}\alpha_{a}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\right\|_{1-1}\leq 3\left\|\sum_{a=1}^{m}\alpha_{a}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|,\] (F.22) _which is controlled by the interaction strength vector \(\mathbf{\alpha}\) under the normalization of \(\mathbf{A}^{a}\) in Eq. (A.4)._ ### Algorithmic primitives for simulating thermal Lindbladians In this subsection, we review existing algorithmic primitives for simulating thermal Lindbladians [42] and for estimating energies and expectation values of observables using block-encoding and quantum singular value transform (QSVT). See [72] for a tutorial on block encoding and QSVT. We begin with a definition of a block-encoding for Hermitian matrices, i.e., observables. **Definition 12** (Block-encoding for Hermitian matrices).: _We say that a unitary \(\mathbf{U}\) is a block-encoding for a Hermitian matrix \(\mathbf{O}\) if_ \[(\bra{0^{d}}\otimes\mathbf{I})\cdot\mathbf{U}\cdot(\ket{0^{d}}\otimes\mathbf{I})=\mathbf{O}\quad\text{for}\quad d\in\mathbb{Z}^{+}.\] (F.23) Recall the following result stating that expectation values can be estimated using block-encoding. **Proposition F.5** (Measuring observable using block-encoding).: _Given a block-encoding \(\mathbf{U}_{\mathbf{O}}\) for a Hermitian matrix \(\mathbf{O}\) and samples of a state \(\mathbf{\rho}\). One could estimate \(\operatorname{tr}(\mathbf{O}\mathbf{\rho})\) to small error \(0<\epsilon<0.5\) using only \(\tilde{\mathcal{O}}(1/\epsilon^{2})\) queries to the unitary \(\mathbf{U}_{\mathbf{O}}\)._ Proof.: Consider \(\ket{0^{d}}\!\bra{0^{d}}\otimes\mathbf{\rho}\) and apply the Hadamard test to sample \(\operatorname{tr}[\mathbf{U}_{\mathbf{O}}(\ket{0^{d}}\!\bra{0^{d}}\otimes\mathbf{\rho})]=\operatorname{tr}[\mathbf{O}\mathbf{\rho}]\)9. Footnote 9: We thank Yu Tong for discussions on this argument. Linear combinations of unitaries allow us to make efficient block-encodings of Hamiltonians presented as a sum of local terms. This fact results in the following proposition.
**Proposition F.6** (Block-encoding for Hamiltonian; see [72, 73, 74]).: _Any \(n\)-qubit Hamiltonian \(\mathbf{H}\) has an efficient block-encoding \(\mathbf{U}_{\mathbf{H}/\lambda_{1}}\) for some scalar \(\lambda_{1}\) being the 1-norm of Pauli expansion coefficients._ From [42], we have the following for the Lamb-shift term \(\mathbf{H}_{LS}\) from Eq. (F.7). Conveniently, the Lamb-shift term is already normalized (Proposition F.3). **Proposition F.7** (Block-encoding for Lamb-shift term; see [42]).: _The Lamb-shift term \(\mathbf{H}_{LS}\) has an efficient block-encoding \(\mathbf{U}_{LS}\)._ We define the block-encoding for a Lindbladian without the coherent commutator term \(-\mathrm{i}[\mathbf{H},\mathbf{\rho}]\). **Definition 13** (Block-encoding for Lindblad operators [42]).: _Given a purely irreversible Lindbladian_ \[\mathcal{L}[\mathbf{\rho}]:=\sum_{j\in J}\left(\mathbf{L}_{j}\mathbf{\rho}\mathbf{L}_{j}^{\dagger}-\frac{1}{2}\mathbf{L}_{j}^{\dagger}\mathbf{L}_{j}\mathbf{\rho}-\frac{1}{2}\mathbf{\rho}\mathbf{L}_{j}^{\dagger}\mathbf{L}_{j}\right),\] (F.24) _we say that a unitary \(\mathbf{U}\) is a block-encoding for Lindblad operators \(\{\mathbf{L}_{j}\}_{j\in J}\) if10_ Footnote 10: In the first register, we could use any orthonormal basis; sticking to computational basis elements \(\ket{j}\) is just for ease of presentation. Intuitively one can think about \(b\) as the number of ancilla qubits used for implementing the operators \(\mathbf{L}_{j}\), while typically \(c-b\approx\log|J|\). \[(\bra{0^{b}}\otimes\mathbf{I})\cdot\mathbf{U}\cdot(\ket{0^{c}}\otimes\mathbf{I})=\sum_{j\in J}\ket{j}\otimes\mathbf{L}_{j}\quad\text{for}\quad b\leq c\in\mathbb{Z}^{+}.\] (F.25) **Theorem 9** (Linear-time Lindbladian simulation [42]).: _Suppose the jumps \(\mathbf{A}^{a}\) can be block-encoded by a unitary \(\mathbf{V}_{jump}\) using \(c\in\mathbb{Z}\) ancilla qubits. Then, we can simulate the map \(\mathrm{e}^{t\mathcal{L}}\) for (F.12) to \(\epsilon\leq 1/2\) precision in the diamond norm using_ \[\tilde{\mathcal{O}}(c+1) \text{resettable ancillas},\] (F.26) \[\tilde{\mathcal{O}}((t+1)\tau) \text{controlled Hamiltonian simulation time},\] (F.27) \[\tilde{\mathcal{O}}((t+1)(c+1)) \text{other two-qubit gates},\] (F.28) \[\text{and}\quad\tilde{\mathcal{O}}(t+1) \text{queries to }\mathbf{W}\text{, }\mathbf{Prep}_{c_{\beta}(\bar{t})}\text{, }\mathbf{Prep}^{\prime}_{c_{\beta}(\bar{t})}\text{, and }\mathbf{V}_{jump},\] (F.29) _where \(\tilde{\mathcal{O}}(\cdot)\) absorbs poly-logarithmic dependences on \(t,\left\lVert\mathbf{H}\right\rVert,\epsilon,\tau,\beta\).
Furthermore, a block-encoding of the purely irreversible Lindbladian \(\mathcal{D}^{\beta,\tau,\mathbf{H}}\) with discretized frequency labels can be implemented efficiently._ The above uses the following circuit components required for implementation: the controlled Hamiltonian simulation \[\sum_{\bar{t}\in S_{t_{0}}}\lvert\bar{t}\rangle\!\langle\bar{t}\rvert\otimes\mathrm{e}^{\pm\mathrm{i}\bar{t}\mathbf{H}},\] (F.30) the unitary gates for preparing the bath correlation function in superposition \[\mathbf{Prep}_{c_{\beta}(\bar{t})}:\lvert\bar{0}\rangle\to\sum_{\bar{t}\in S_{t_{0}}}\sqrt{\lvert c_{\beta}(\bar{t})\rvert}\,\lvert\bar{t}\rangle\quad\text{and}\quad\mathbf{Prep}^{\prime}_{c_{\beta}(\bar{t})}:\lvert\bar{0}\rangle\to\sum_{\bar{t}\in S_{t_{0}}}\frac{c_{\beta}(\bar{t})}{\sqrt{\lvert c_{\beta}(\bar{t})\rvert}}\,\lvert\bar{t}\rangle\,,\] (F.31) and the controlled rotation for transition weights \[\mathbf{W}:=\sum_{\bar{\omega}\in S_{\omega_{0}}}\begin{pmatrix}\sqrt{\gamma(\bar{\omega})}&-\sqrt{1-\gamma(\bar{\omega})}\\ \sqrt{1-\gamma(\bar{\omega})}&\sqrt{\gamma(\bar{\omega})}\end{pmatrix}\otimes\lvert\bar{\omega}\rangle\!\langle\bar{\omega}\rvert.\] (F.32) Indeed, the above implementation uses discrete labels for the times \(\bar{t}\in S_{t_{0}}\) and frequencies \(\bar{\omega}\in S_{\omega_{0}}\) corresponding to \(\mathrm{d}t\) and \(\mathrm{d}\omega\); these dominate the ancilla use. For conceptual simplicity, we focus on the continuous integral everywhere else and emphasize that the discretization is merely for implementation and introduces a negligible error; see [42]. The controlled Hamiltonian simulation can be implemented efficiently for any \(n\)-qubit local Hamiltonian \(\mathbf{H}\) [72, 73, 74]. The other operations \(\mathbf{W}\), \(\mathbf{Prep}_{c_{\beta}(\bar{t})}\), and \(\mathbf{Prep}^{\prime}_{c_{\beta}(\bar{t})}\) can all be implemented efficiently [42] with the physically-motivated choice considered in Appendix F.1. **Proposition F.8** (Gradient of an observable under Lindbladian evolution; adapted from [42]).: _Given block-encodings \(\mathbf{U}\) for a purely irreversible Lindbladian (Def.
13) and \(\mathbf{U_{O}}\) for a Hermitian observable \(\mathbf{O}\), we get a block-encoding of_ \[\sum_{j\in J}\mathbf{L}_{j}^{\dagger}\mathbf{O}\mathbf{L}_{j}\quad\text{via}\quad\mathbf{V}:=(\mathbf{Y}_{\frac{1}{2}}\otimes\mathbf{U}^{\dagger}\otimes\mathbf{I}_{d})\cdot\Big{(}2\lvert 0^{b+1}\rangle\!\langle 0^{b+1}\rvert\otimes\mathbf{I}-\mathbf{I}\Big{)}\otimes\mathbf{U_{O}}\cdot(\mathbf{Y}_{\frac{1}{2}}\otimes\mathbf{U}\otimes\mathbf{I}_{d}),\] (F.33) _where \(\lvert\pm\rangle:=(\lvert 0\rangle\pm\lvert 1\rangle)/\sqrt{2}\) and \(\mathbf{Y}_{\frac{1}{2}}:=\frac{1}{\sqrt{2}}\begin{pmatrix}1&-1\\ 1&1\end{pmatrix}\)._ Proof.: We calculate \[(\langle 0^{c+1}\rvert\otimes\mathbf{I}\otimes\langle 0^{d}\rvert)\cdot\mathbf{V}\cdot(\lvert 0^{c+1}\rangle\otimes\mathbf{I}\otimes\lvert 0^{d}\rangle)\] \[=\left(\langle-\rvert\otimes(\langle 0^{c}\rvert\otimes\mathbf{I})\mathbf{U}^{\dagger}\otimes\langle 0^{d}\rvert\right)\cdot\left(2\lvert 0^{b+1}\rangle\!\langle 0^{b+1}\rvert\otimes\mathbf{I}-\mathbf{I}\right)\otimes\mathbf{U_{O}}\cdot\left(\lvert+\rangle\otimes\mathbf{U}(\lvert 0^{c}\rangle\otimes\mathbf{I})\otimes\lvert 0^{d}\rangle\right)\] \[=\left(\langle-\rvert\otimes(\langle 0^{c}\rvert\otimes\mathbf{I})\mathbf{U}^{\dagger}\otimes\langle 0^{d}\rvert\right)\cdot\left(2\lvert 0^{b+1}\rangle\!\langle 0^{b+1}\rvert\otimes\mathbf{I}\right)\otimes\mathbf{U_{O}}\cdot\left(\lvert+\rangle\otimes\mathbf{U}(\lvert 0^{c}\rangle\otimes\mathbf{I})\otimes\lvert 0^{d}\rangle\right)\] \[=(\langle 0^{c}\rvert\otimes\mathbf{I})\cdot\mathbf{U}^{\dagger}\cdot(\lvert 0^{b}\rangle\!\langle 0^{b}\rvert\otimes\mathbf{I}\otimes\mathbf{O})\cdot\mathbf{U}\cdot(\lvert 0^{c}\rangle\otimes\mathbf{I})\] (F.34) \[=\left(\sum_{j\in J}\left\langle j\right|\otimes\mathbf{L}_{j}^{\dagger}\right)(\mathbf{I}\otimes\mathbf{O})\left(\sum_{j^{\prime}\in J}\lvert j^{\prime}\rangle\otimes\mathbf{L}_{j^{\prime}}\right)=\sum_{j\in J}\mathbf{L}_{j}^{\dagger}\mathbf{O}\mathbf{L}_{j}.\qed\] Here, the second equality uses \(\langle-\lvert+\rangle=0\) to drop the \(-\mathbf{I}\) term, and the third uses \(2\langle-\lvert 0\rangle\!\langle 0\rvert+\rangle=1\) together with \((\bra{0^{d}}\otimes\mathbf{I})\mathbf{U_{O}}(\ket{0^{d}}\otimes\mathbf{I})=\mathbf{O}\). **Corollary F.1** (Block-encoding the gradient of the Hamiltonian).: _Given a block-encoding for a purely irreversible Lindbladian \(\mathcal{L}\) and a Hamiltonian \(\mathbf{H}\), there is an efficient block-encoding for_ \[\frac{1}{2}\mathcal{L}^{\dagger}[\mathbf{H}],\] (F.35) _which is a Hermitian operator corresponding to the gradient of \(\mathbf{H}\) under \(\mathcal{L}\)._ Proof.: Apply Proposition F.8 for Lindbladian \(\mathcal{L}\) and Hermitian observables \(\mathbf{H}\) and \(\mathbf{I}\) to obtain block-encodings for \(\sum_{j\in J}\mathbf{L}_{j}^{\dagger}\mathbf{H}\mathbf{L}_{j}\) and \(\sum_{j\in J}\mathbf{L}_{j}^{\dagger}\mathbf{L}_{j}\). Then, use quantum singular value transform (QSVT) for products and sums of block-encodings to obtain the block-encoding for \[\frac{1}{2}\mathcal{L}^{\dagger}[\mathbf{H}]=\frac{1}{2}\sum_{j\in J}\mathbf{L}_{j}^{\dagger}\mathbf{H}\mathbf{L}_{j}-\frac{1}{4}\sum_{j\in J}\mathbf{L}_{j}^{\dagger}\mathbf{L}_{j}\mathbf{H}-\frac{1}{4}\mathbf{H}\sum_{j\in J}\mathbf{L}_{j}^{\dagger}\mathbf{L}_{j}\] (F.36) at high precision. From all of the above propositions, corollaries, and theorems, we can obtain the following. **Lemma F.1** (Measuring energy gradient).: _Given an \(n\)-qubit Hamiltonian \(\mathbf{H}\), inverse temperature \(\beta\geq 0\), time scale \(\tau\geq 0\), samples of an \(n\)-qubit state \(\mathbf{\rho}\), and a thermal Lindbladian \(\mathcal{L}^{\beta,\tau,\mathbf{H}}\) with jump operators \(\{\mathbf{A}^{a}\}_{a}\) from Eq. (F.12).
The energy gradient_ \[\operatorname{tr}(\mathbf{H}\mathcal{L}^{\beta,\tau,\mathbf{H}}(\mathbf{\rho}))=\operatorname{tr}(\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\mathbf{\rho})\] (F.37) _can be estimated to error \(\epsilon\) using time and samples of \(\mathbf{\rho}\) polynomial in \(n,1/\epsilon,\left\|\mathbf{H}\right\|,\beta,\tau\)._ Proof.: From the form of thermal Lindbladians (F.12) and dropping the superscripts \(\mathcal{L}^{\beta,\tau,\mathbf{H}}=\mathcal{L},\mathbf{H}_{LS}^{\beta,\tau,\mathbf{H}}=\mathbf{H}_{LS},\mathcal{D}^{\beta,\tau,\mathbf{H}}=\mathcal{D}\), we have \[\mathcal{L}^{\dagger}(\mathbf{H})=\mathrm{i}[\mathbf{H}_{LS},\mathbf{H}]+\mathcal{D}^{\dagger}(\mathbf{H}).\] (F.38) Our goal is to create the block-encoding for \(\mathcal{L}^{\dagger}(\mathbf{H})\). First, we use quantum singular value transform (QSVT) for products and sums of block-encodings to obtain the block-encoding for \(\mathrm{i}[\mathbf{H}_{LS},\mathbf{H}]\) from the block-encodings for \(\mathbf{H}\) and \(\mathbf{H}_{LS}\) in Propositions F.6 and F.7. Next, using the block-encoding for the purely irreversible Lindbladian \(\mathcal{D}\) from Theorem 9 and the block-encoding for \(\mathbf{H}\), we can apply Corollary F.1 to obtain an efficient block-encoding for \(\mathcal{D}^{\dagger}(\mathbf{H})\). To obtain the block-encoding for \(\mathcal{L}^{\dagger}(\mathbf{H})\), we use QSVT for sums of block-encodings to add \(\mathrm{i}[\mathbf{H}_{LS},\mathbf{H}]\) and \(\mathcal{D}^{\dagger}(\mathbf{H})\). Finally, using Prop. F.5, we can estimate \(\operatorname{tr}(\mathcal{L}^{\dagger}(\mathbf{H})\mathbf{\rho})\) efficiently. All the above QSVT manipulations operate at high precision, and the discrete Fourier transform well-approximates the continuum at poly-logarithmic costs [42]. ## Appendix G A polynomial-time quantum algorithm for finding a local minimum under thermal perturbations (Proof of Theorem 6) In this appendix, we present the proof of Theorem 6 by giving a polynomial-time quantum algorithm for finding local minima under thermal perturbations. We refer to the efficient quantum algorithm as _Quantum thermal gradient descent_ because the algorithm performs gradient descent using thermal Lindbladians induced by a heat bath. The algorithm uses the properties of thermal Lindbladians presented in Appendix F. ### Cooling by gradient descent The central idea of quantum thermal gradient descent is the following. When we are not at a local minimum under thermal perturbations, the negative energy gradient will be sufficiently large, and we can decrease the energy by following a direction with a negative energy gradient. This is characterized by the following lemma. We will use this lemma to design the gradient descent algorithm for finding a local minimum. **Lemma G.1** (Cooling by gradient descent).: _Given parameters \(0<\tilde{\epsilon}<0.5\), \(B\geq 1\), \(\beta,\tau\geq 0\), an \(n\)-qubit Hamiltonian \(\mathbf{H}\) with \(\left\lVert\mathbf{H}\right\rVert_{\infty}\leq B\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a}\), and an \(n\)-qubit state \(\mathbf{\rho}\).
Consider, for each \(a=1,\ldots,m\), an approximate energy gradient \(g_{a}\) satisfying_ \[\left|g_{a}-\operatorname{tr}\left(\mathbf{H}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}[\mathbf{\rho}]\right)\right|<0.01\tilde{\epsilon}.\] (G.1) _Suppose there exists \(a^{*}\in\{1,\ldots,m\}\) with a sufficiently negative approximate energy gradient,_ \[g_{a^{*}}<-0.99\tilde{\epsilon}.\] (G.2) _The state after evolving \(\mathbf{\rho}\) along the direction \(\hat{\mathbf{e}}_{a^{*}}\) for a small step \(s=|g_{a^{*}}|/(9B^{2})>0\),_ \[\mathbf{\rho}^{\rm(next)}:=\exp^{\beta,\tau,\mathbf{H},\{\mathbf{A}^{a}\}_{a}}_{\mathbf{\rho}}\left(s\hat{\mathbf{e}}_{a^{*}}\right)\] (G.3) _guarantees the following energy decrease,_ \[\operatorname{tr}\left(\mathbf{H}\mathbf{\rho}^{\rm(next)}\right)<\operatorname{tr}(\mathbf{H}\mathbf{\rho})-\frac{\tilde{\epsilon}^{2}}{20B^{2}}.\] (G.4) Proof.: From Prop. D.2 on Taylor's theorem, we have the following identity \[\operatorname{tr}\left(\mathbf{H}\mathbf{\rho}^{\rm(next)}\right)=\operatorname{tr}(\mathbf{H}\mathbf{\rho})+s\operatorname{tr}(\mathbf{H}\mathcal{L}_{a^{*}}^{\beta,\tau,\mathbf{H}}[\mathbf{\rho}])+\frac{s^{2}}{2}\operatorname{tr}(\mathbf{H}\mathcal{L}_{a^{*}}^{\beta,\tau,\mathbf{H}}[\mathcal{L}_{a^{*}}^{\beta,\tau,\mathbf{H}}[\mathbf{\sigma}]])\] (G.5) for some \(n\)-qubit state \(\mathbf{\sigma}\). We will separately control the linear term and the quadratic term. Linear term. From Eq. (G.1), we have \[\operatorname{tr}(\mathbf{H}\mathcal{L}_{a^{*}}^{\beta,\tau,\mathbf{H}}[\mathbf{\rho}])<g_{a^{*}}+0.01\tilde{\epsilon}<\frac{98}{99}g_{a^{*}}=-\frac{98}{99}|g_{a^{*}}|.\] (G.6) The second inequality follows from \(g_{a^{*}}<-0.99\tilde{\epsilon}\), hence \(0.01\tilde{\epsilon}<-(1/99)g_{a^{*}}\). Quadratic term. We can bound the quadratic term as follows, \[\frac{1}{2}\operatorname{tr}(\mathbf{H}\mathcal{L}_{a^{*}}^{\beta,\tau,\mathbf{H}}[\mathcal{L}_{a^{*}}^{\beta,\tau,\mathbf{H}}[\mathbf{\sigma}]])\leq\frac{1}{2}\left\lVert\mathcal{L}_{a^{*}}^{\dagger\beta,\tau,\mathbf{H}}\right\rVert_{\infty-\infty}\left\lVert\mathcal{L}_{a^{*}}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\right\rVert_{\infty}.\] (G.7) From Prop. D.3 and Prop. F.4, which bound the norms of these objects, we have \[\frac{1}{2}\left\lVert\mathcal{L}_{a^{*}}^{\dagger\beta,\tau,\mathbf{H}}\right\rVert_{\infty-\infty}\left\lVert\mathcal{L}_{a^{*}}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\right\rVert_{\infty}\leq 4.5\left\lVert\mathbf{H}\right\rVert_{\infty}^{2}\leq 4.5B^{2}.\] (G.8) Combining the linear and quadratic terms with \(s=|g_{a^{*}}|/(9B^{2})>0\), we have \[\operatorname{tr}\left(\mathbf{H}\mathbf{\rho}^{(\text{next})}\right)\leq\operatorname{tr}(\mathbf{H}\mathbf{\rho})-\left(\frac{98}{99\times 9}-\frac{1}{18}\right)\frac{|g_{a^{*}}|^{2}}{B^{2}}<\operatorname{tr}(\mathbf{H}\mathbf{\rho})-0.054\frac{|g_{a^{*}}|^{2}}{B^{2}}.\] (G.9) We can use \(|g_{a^{*}}|^{2}>0.99^{2}\tilde{\epsilon}^{2}\) to obtain the desired claim.
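To make the descent step concrete, the following is a minimal classical-simulation sketch of cooling by gradient descent, previewing the full loop described in the next subsection. It works in the idealized Davies limit \(\beta=\tau=\infty\) of Eq. (F.1), where \(\gamma\) is a step function and \(\hat{\mathbf{A}}^{a}(\omega)\) resolves exact Bohr frequencies, and it ignores the Lamb-shift term. The 3-qubit random Hamiltonian and single-qubit Pauli-\(X\) jumps are toy assumptions made for illustration; on a quantum computer, the exact gradients below would instead be estimated via Lemma F.1.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, d = 3, 2**3
X, I2 = np.array([[0., 1.], [1., 0.]]), np.eye(2)

def x_on(site):  # Pauli X acting on one site of the n-qubit register
    out = np.array([[1.]])
    for j in range(n):
        out = np.kron(out, X if j == site else I2)
    return out

M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (M + M.conj().T) / 2            # toy Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)

def davies_superop(A):
    # Dissipator of Eq. (F.1) at beta = tau = infinity, keeping only
    # energy-lowering jumps; returned as a matrix acting on row-major
    # vectorized density matrices (vec(L rho L^dag) = (L (x) conj(L)) vec(rho)).
    D = np.zeros((d * d, d * d), dtype=complex)
    for j in range(d):
        for k in range(d):
            if E[k] - E[j] >= 0:
                continue
            L = (V[:, k].conj() @ A @ V[:, j]) * np.outer(V[:, k], V[:, j].conj())
            LdL = L.conj().T @ L
            D += (np.kron(L, L.conj())
                  - 0.5 * (np.kron(LdL, np.eye(d)) + np.kron(np.eye(d), LdL.T)))
    return D

lindblads = [davies_superop(x_on(j)) for j in range(n)]
rho = np.eye(d) / d                 # arbitrary initial state (maximally mixed)
eps, B = 1e-3, np.linalg.norm(H, 2)
for step in range(300):
    grads = [np.trace(H @ (Dm @ rho.reshape(-1)).reshape(d, d)).real
             for Dm in lindblads]
    a = int(np.argmin(grads))       # steepest energy-lowering direction
    if grads[a] >= -eps:            # approximate local minimum reached
        break
    s = abs(grads[a]) / (9 * B**2)  # step size from Lemma G.1
    rho = (expm(s * lindblads[a]) @ rho.reshape(-1)).reshape(d, d)

print("final energy:", np.trace(H @ rho).real, " ground energy:", E[0])
```

Each iteration provably lowers the energy by at least the amount in Eq. (G.4) until all gradients exceed \(-\epsilon\), at which point the sketch has reached an approximate local minimum in the sense of Lemma D.1.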
### Quantum thermal gradient descent Given error \(\epsilon=1/\text{poly}(n)\), norm bound \(B=\text{poly}(n)\), inverse temperature \(0\leq\beta\leq\text{poly}(n)\), time scale \(\tau=\text{poly}(n)\), an \(n\)-qubit local Hamiltonian \(\mathbf{H}\) with \(\left\lVert\mathbf{H}\right\rVert_{\infty}\leq B\), \(m\) local jump operators \(\{\mathbf{A}^{a}\}_{a}\) with \(m=\text{poly}(n)\), and a local observable \(\mathbf{O}\) with \(\left\lVert\mathbf{O}\right\rVert_{\infty}\leq 1\). We consider a coordinate-wise gradient descent algorithm that implements the following. The initial state \(\mathbf{\rho}^{(0)}\) is arbitrary as long as copies of the state can be prepared on the quantum computer. For example, we can set \(\mathbf{\rho}^{(0)}\) to be the maximally mixed state \(\frac{\mathbf{I}}{2^{n}}\). The total number of steps is \[T:=\frac{42B^{3}}{\epsilon^{2}}.\] (G.10) For each time step \(t\) from \(1\) to \(T\), the algorithm does the following. 1. For each direction \(a=1,\dots,m\), estimate an approximate energy gradient \(g_{a}^{(t)}\) satisfying \[\left|g_{a}^{(t)}-\operatorname{tr}\left(\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\mathbf{\rho}^{(t-1)}\right)\right|<0.0099\epsilon.\] (G.11) The energy gradient can be estimated efficiently using Lemma F.1 given copies of \(\mathbf{\rho}^{(t-1)}\) prepared through Eq. (G.12) and Theorem 9. From the bound on energy gradients in Prop. D.3, we have \(|g_{a}^{(t)}|\leq 3B+0.0099\epsilon\). If \(g_{a}^{(t)}<-0.99\epsilon\), set \(a^{(t)}:=a\) and _terminate_ the for-loop over \(a\). 2. If \(a^{(t)}\) is not found, set \(\mathbf{\rho}^{(T)}:=\mathbf{\rho}^{(t-1)}\) and _terminate_ the for-loop over \(t\). Otherwise evolve \(\mathbf{\rho}^{(t-1)}\) under the direction \(\hat{\mathbf{e}}_{a^{(t)}}\) for a small step \(s^{(t)}:=|g_{a^{(t)}}^{(t)}|/(9B^{2})\), \[\mathbf{\rho}^{(t)}:=\exp\left(s^{(t)}\mathcal{L}_{a^{(t)}}^{\beta,\tau,\mathbf{H}}\right)\left(\mathbf{\rho}^{(t-1)}\right)=\prod_{t^{\prime}=1}^{t}\exp\left(s^{(t^{\prime})}\mathcal{L}_{a^{(t^{\prime})}}^{\beta,\tau,\mathbf{H}}\right)\left(\mathbf{\rho}^{(0)}\right).\] (G.12) Because \(0\leq s^{(t)}\leq 1/(2B)\), a single copy of \(\mathbf{\rho}^{(t)}\) can be prepared in polynomial-time using the thermal Lindbladian simulation algorithm in [42]; see Theorem 9. We will show that the state \(\mathbf{\rho}^{(T)}\) created by the gradient descent algorithm is an \(\epsilon\)-approximate local minimum of \(\mathbf{H}\) under thermal perturbations. Furthermore, using the thermal Lindbladian simulation algorithm, a quantum machine can efficiently create many copies of \(\mathbf{\rho}^{(T)}\). ### Proof of Theorem 6 The central idea in the proof of Theorem 6 is the following lemma. The lemma combines the key results characterizing local minima in Appendix D. **Lemma G.2** (Gradient descent finds a local minimum).: \(\mathbf{\rho}^{(T)}\) _from Eq._ (G.12) _is an \(\epsilon\)-approximate local minimum of \(\mathbf{H}\) under thermal perturbations with inverse temperature \(\beta\), time scale \(\tau\), and system-bath interactions generated by \(\{\mathbf{A}^{a}\}_{a}\)._ Proof.: Suppose the algorithm terminates early at some time step \(t<T\); then \(g_{a}^{(t)}\geq-0.99\epsilon\) for every \(a=1,\dots,m\). From Eq. (G.11), we then have \(\operatorname{tr}\left(\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}(\mathbf{H})\mathbf{\rho}^{(t-1)}\right)\geq-0.9999\epsilon\) for every \(a\).
Hence, \[\left\|\mathbf{\nabla}_{\beta,\tau,\{\mathbf{A}^{a}\}_{a}}^{-}(\mathbf{H},\mathbf{\rho}^{(t-1)})\right\|_{\infty}\leq 0.9999\epsilon<\epsilon.\] (G.13) From the sufficient condition for local minima given in Lemma D.1, \(\mathbf{\rho}^{(T)}=\mathbf{\rho}^{(t-1)}\) is an \(\epsilon\)-approximate local minimum of the \(n\)-qubit Hamiltonian \(\mathbf{H}\) under thermal perturbations. We now show by contradiction that the algorithm must terminate early. Assume that the algorithm did not terminate early. Then, we can use Lemma G.1 with \(\tilde{\epsilon}=0.99\epsilon\) for cooling by gradient descent to obtain \[\operatorname{tr}(\mathbf{H}\mathbf{\rho}^{(T)})\leq\operatorname{tr}(\mathbf{H}\mathbf{\rho}^{(T-1)})-\frac{0.99^{2}\epsilon^{2}}{20B^{2}}\leq\ldots\leq\operatorname{tr}(\mathbf{H}\mathbf{\rho}^{(0)})-\frac{0.99^{2}\epsilon^{2}}{20B^{2}}T\leq\left\|\mathbf{H}\right\|_{\infty}-\frac{0.99^{2}\epsilon^{2}}{20B^{2}}T.\] (G.14) From the definition of \(T\) in Eq. (G.10) and \(\left\|\mathbf{H}\right\|_{\infty}\leq B\), we have \[\operatorname{tr}(\mathbf{H}\mathbf{\rho}^{(T)})\leq\left\|\mathbf{H}\right\|_{\infty}-\frac{0.99^{2}\epsilon^{2}}{20B^{2}}\frac{42B^{3}}{\epsilon^{2}}\leq\left\|\mathbf{H}\right\|_{\infty}-2.05B\leq-1.05B.\] (G.15) At the same time, because \(\left\|\mathbf{\rho}^{(T)}\right\|_{1}=1\), \[\operatorname{tr}(\mathbf{H}\mathbf{\rho}^{(T)})\geq-\left\|\mathbf{H}\right\|_{\infty}\geq-B.\] (G.16) This is a contradiction. Hence the algorithm must terminate early.

The polynomial-time quantum algorithm establishing Theorem 6 is as follows. The algorithm runs quantum thermal gradient descent to find a local minimum \(\mathbf{\rho}^{(T)}\) of \(\mathbf{H}\) under thermal perturbations. Recall that \(B\), the upper bound on \(\left\|\mathbf{H}\right\|_{\infty}\), equals \(\operatorname{poly}(n)\), and that \(1/\epsilon=\operatorname{poly}(n)\). Because every step can be done in polynomial time, and there are at most \(T=42B^{3}/\epsilon^{2}=\operatorname{poly}(n)\) time steps, quantum thermal gradient descent runs in time polynomial in \(n\). Now, given any observable \(\mathbf{O}\), the quantum algorithm prepares \(\mathcal{O}(1/\epsilon^{2})=\operatorname{poly}(n)\) copies of \(\mathbf{\rho}^{(T)}\) in \(\operatorname{poly}(n)\) time, then measures \(\mathbf{O}\) on the \(\mathcal{O}(1/\epsilon^{2})\) copies of \(\mathbf{\rho}^{(T)}\) to estimate \(\operatorname{tr}(\mathbf{O}\mathbf{\rho}^{(T)})\) to \(\epsilon\) error. This concludes the proof of Theorem 6.

## Appendix H Characterizing energy gradients in low-temperature heat bath

Recall from Appendix D.3 on certifying Hamiltonians without suboptimal local minima that if there exists \(\mathbf{\alpha}\in\mathbb{R}_{\geq 0}^{m}\) with \(\left\|\mathbf{\alpha}\right\|_{1}=1\) such that the negative gradient condition holds, \[-\sum_{a}\alpha_{a}\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}[\mathbf{H}]\succeq\frac{2\epsilon}{\delta}(\mathbf{I}-\mathbf{P}_{G}(\mathbf{H}))-\epsilon\mathbf{I},\] (H.1) then any \(\epsilon\)-approximate local minimum \(\mathbf{\rho}\) of the \(n\)-qubit Hamiltonian \(\mathbf{H}\) under thermal perturbations is an exact global minimum of \(\mathbf{H}\) with failure probability \(\leq\delta\), i.e., \(\operatorname{tr}(\mathbf{P}_{G}(\mathbf{H})\mathbf{\rho})\geq 1-\delta\), where \(\mathbf{P}_{G}(\mathbf{H})\) is the projection onto the ground state space.
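As a toy illustration of such a condition (not part of the formal argument), the following numpy sketch builds the \(\tau=\infty\) (Davies) energy gradient recovered in Lemma H.3 for a single qubit, with the illustrative choices \(\mathbf{H}=-\mathbf{Z}\), jump operator \(\mathbf{A}=\mathbf{X}\), and Glauber weight \(\gamma_{\beta}(\omega)=1/(1+\mathrm{e}^{\beta\omega})\). For this two-level toy one finds \(-\mathcal{L}^{\dagger}[\mathbf{H}]=2\gamma_{\beta}(-2)|1\rangle\!\langle 1|-2\gamma_{\beta}(2)|0\rangle\!\langle 0|\), so a negative gradient condition of the form \(-\mathcal{L}^{\dagger}[\mathbf{H}]\succeq r(\mathbf{I}-\mathbf{P}_{G})-\epsilon\mathbf{I}\) holds with \(r=2\gamma_{\beta}(-2)\) and an exponentially small \(\epsilon=2\gamma_{\beta}(2)\):

```python
import numpy as np

beta = 5.0
H = np.diag([-1.0, 1.0])                      # toy H = -Z: ground state |0>, excited |1>
A = np.array([[0.0, 1.0], [1.0, 0.0]])        # jump operator X
gamma = lambda w: 1.0 / (1.0 + np.exp(beta * w))   # Glauber weight: gamma(w) = gamma(-w) e^{-beta w}

E, V = np.linalg.eigh(H)
P = [np.outer(V[:, k], V[:, k]) for k in range(len(E))]
bohr = sorted({round(Ei - Ej, 12) for Ei in E for Ej in E})

grad = np.zeros_like(H)                       # Heisenberg-picture Davies gradient L^dag[H]
for nu in bohr:
    # Bohr component A_nu = sum over eigenpairs with E_i - E_j = nu of P_i A P_j
    A_nu = sum(P[i] @ A @ P[j] for i in range(len(E)) for j in range(len(E))
               if abs((E[i] - E[j]) - nu) < 1e-9)
    grad = grad + gamma(nu) * (A_nu.T @ H @ A_nu
                               - 0.5 * (A_nu.T @ A_nu @ H + H @ A_nu.T @ A_nu))

P_G = P[0]                                    # ground-space projector
r, eps = 2 * gamma(-2.0), 2 * gamma(2.0)      # eps = O(e^{-2 beta}): exponentially small
# Negative gradient condition:  -L^dag[H]  >=  r (I - P_G) - eps I
check = -grad - r * (np.eye(2) - P_G) + eps * np.eye(2)
assert np.linalg.eigvalsh(check).min() >= -1e-12
```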
To understand when the above condition holds, it is imperative to characterize the energy gradients \(\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}}[\mathbf{H}]\). In this appendix, we present various lemmas and theorems characterizing the energy gradients, which will be used in our proof of Theorem 7 in Appendix J to show that a certain family of Hamiltonians has no suboptimal local minima. We remark that the proofs of many formal statements in this appendix require concepts and results that will not be introduced until Appendices K and L, and we recommend that first-time readers skip the proofs and return to them later. For simplicity, in the remaining appendices we will take the nonnegative vector \(\mathbf{\alpha}\) to be uniform over a subset \(S\). We will show that this is sufficient for our purposes, even though the ability to choose \(\mathbf{\alpha}\) freely is more powerful. We define the following Lindbladian with uniform weights over a subset \(S\subseteq\{1,\dots,m\}\), \[\mathcal{L}:=\sum_{a\in S}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}.\] (H.2) Recall from Appendix F that each \(\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) corresponds to a jump operator \(\mathbf{A}^{a}\) satisfying the normalization condition \(\left\lVert\mathbf{A}^{a}\right\rVert_{\infty}\leq 1\). If we let \(r:=2m\epsilon/\delta\), \(\epsilon^{\prime}=m\epsilon\), and \(S=\{1,\dots,m\}\), then the negative gradient condition becomes \[\text{(negative gradient condition)}:\quad-\mathcal{L}^{\dagger}[\mathbf{H}]\succeq r(\mathbf{I}-\mathbf{P}_{G})-\epsilon^{\prime}\mathbf{I}.\] (H.3) This is the central inequality we would like to establish in the remaining appendices. Throughout the proofs, we will consider different subsets \(S\) and show a relation similar to Eq. (H.3) for the subset under consideration.

### Basic properties of energy gradients in low-temperature bath

We show a few basic properties of energy gradients under a low-temperature, long-time-scale bath. First, we show that the energy gradient, at large \(\beta\) (i.e., low temperatures), is negative semi-definite up to a controllable error. Intuitively, this can be seen from the KMS condition in Eq. (F.2), \(\gamma_{\beta}(\omega)=\gamma_{\beta}(-\omega)\mathrm{e}^{-\beta\omega}\): heating transitions are suppressed by the Boltzmann weight, allowing the energy to increase only by \(\omega\sim\beta^{-1}\). Another source of error is the energy uncertainty \(\sim\tau^{-1}\) from the finite time scale. **Lemma H.1** (Almost negative gradients).: _Consider the thermal Lindbladian \(\mathcal{L}=\sum_{a\in S}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) with jump operators \(\{\mathbf{A}^{a}\}_{a\in S}\) where \(\left\lVert\mathbf{A}^{a}\right\rVert\leq 1\), and \(\gamma_{\beta}(\omega)\) satisfying Eq. (F.2).
Then,_ \[\mathcal{L}^{\dagger}[\mathbf{H}]\preceq\mathcal{O}\left(|S|\left(\frac{\left\lVert\mathbf{H}\right\rVert^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\beta}\right)\right)\cdot\mathbf{I}.\] (H.4) Proof.: Rewriting the energy gradient, with an error controlled by Proposition F.3 and Lemma L.1, gives \[\mathcal{L}^{\dagger}[\mathbf{H}]\approx\sum_{a\in S}\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\omega\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\mathrm{d}\omega=\sum_{a\in S}\int_{0}^{\infty}\gamma_{\beta}(\omega)\omega\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\mathrm{d}\omega+\sum_{a\in S}\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\mathrm{d}\omega,\] (H.5) and we bound the positive operator \[\left\lVert\sum_{a\in S}\int_{0}^{\infty}\gamma_{\beta}(\omega)\omega\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\mathrm{d}\omega\right\rVert_{\infty}\leq|S|\max_{\omega\geq 0}\gamma_{\beta}(\omega)\omega\leq\frac{|S|}{\beta}.\] (H.6) The second inequality uses the tail bound in Eq. (F.3) with \(\Delta=0\). Since the second integral in Eq. (H.5) is negative semi-definite, the claim follows.

Second, we show that the energy gradient operator is nearly diagonal in the energy basis. The intuition is that for any operator \(\mathbf{A}\), the product \[\hat{\mathbf{A}}^{\dagger}(\omega)\hat{\mathbf{A}}(\omega)\] (H.7) is nearly diagonal in the energy basis for large \(\tau\). **Lemma H.2** (Energy gradient is almost diagonal).: _In the setting of Lemma H.1, consider any two well-isolated energy eigensubspaces \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) whose corresponding sets of eigenvalues are separated by a distance of at least \(\delta\). Then,_ \[\left\|\mathbf{P}_{1}\mathcal{L}^{\dagger}[\mathbf{H}]\mathbf{P}_{2}\right\|\leq\mathcal{O}\left(|S|\left(\frac{\left\|\mathbf{H}\right\|^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{\left\|\theta_{\beta}\right\|_{\infty}}{\sqrt{\delta\tau}}\right)\right),\] (H.8) _where \(\theta_{\beta}(\omega):=\gamma_{\beta}(\omega)\omega\)._ Proof.: Formally, approximate the energy gradient by dropping the Lamb-shift term (Proposition F.3) and applying Lemma L.1, \[\mathcal{L}^{\dagger}[\mathbf{H}]\approx\sum_{a\in S}\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\omega\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\mathrm{d}\omega.\] (H.9) We then apply the secular approximation with \(\mu=\delta/2\) (Corollary K.3), such that the transition amplitudes vanish between the subspaces, \[\mathbf{P}_{1}\hat{\mathbf{S}}_{\mu}^{a}(\omega)^{\dagger}\hat{\mathbf{S}}_{\mu}^{a}(\omega)\mathbf{P}_{2}=0\quad\text{for each}\quad\omega\in\mathbb{R}\quad\text{and}\quad a\in S.\] (H.10) Combining the errors in each of the approximations leads to the claimed result.

Next, we show that the finite-\(\tau\) Lindbladian can be approximated by the infinite-\(\tau\) version under certain conditions. The latter, known as the Davies generator [75], has a simpler form that is more amenable to analysis in some situations. **Lemma H.3** (Recovering Davies' generator).: _Consider the dissipative part of the thermal Lindbladian \(\mathcal{D}_{a}^{\beta,\tau,\mathbf{H}}\) with jump operator \(\mathbf{A}^{a}\), where \(\left\|\mathbf{A}^{a}\right\|\leq 1\), and any \(\gamma_{\beta}\) such that \(\left\|\gamma_{\beta}\right\|_{\infty}\leq 1\).
Suppose the Bohr-frequency gap is \(\Delta_{\nu}(\mathbf{H})\); then_ \[\left\|\mathcal{D}_{a}^{\dagger\beta,\tau,\mathbf{H}}-\mathcal{D}_{a}^{\dagger\beta,\infty,\mathbf{H}}\right\|_{\infty-\infty}\leq\mathcal{O}\left(\max_{\nu}\left|\gamma_{\beta}(\nu)-\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\left|\hat{f}(\omega-\nu)\right|^{2}\mathrm{d}\omega\right|+\frac{1}{\sqrt{\Delta_{\nu}(\mathbf{H})\tau}}\right).\] (H.11) Therefore, the Bohr-frequency gap sets a timescale \(\sim\Delta_{\nu}^{-1}\) beyond which the map \(\mathcal{D}_{a}^{\dagger\beta,\tau,\mathbf{H}}\) stabilizes. Proof.: It suffices to consider \(\mathcal{D}_{a}^{\dagger\beta,\tau,\mathbf{H}}[\mathbf{O}]\) acting on an arbitrary operator \(\mathbf{O}\) such that \(\left\|\mathbf{O}\right\|=1\): \[\mathcal{D}_{a}^{\dagger\beta,\tau,\mathbf{H}}[\mathbf{O}]=\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\Big[\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\mathbf{O}\hat{\mathbf{A}}^{a}(\omega)-\frac{1}{2}\{\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega),\mathbf{O}\}\Big]\mathrm{d}\omega\] (H.12) \[\stackrel{{E_{1}}}{{\approx}}\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\Big[\hat{\mathbf{S}}^{a}(\omega)^{\dagger}\mathbf{O}\hat{\mathbf{S}}^{a}(\omega)-\frac{1}{2}\{\hat{\mathbf{S}}^{a}(\omega)^{\dagger}\hat{\mathbf{S}}^{a}(\omega),\mathbf{O}\}\Big]\mathrm{d}\omega\qquad\text{(secular approximation: Corollary K.2)}\] \[=\sum_{\nu,\nu^{\prime}\in B(\mathbf{H})}\Big(\mathbf{A}_{\nu^{\prime}}^{a\dagger}\mathbf{O}\mathbf{A}_{\nu}^{a}-\frac{1}{2}\{\mathbf{A}_{\nu^{\prime}}^{a\dagger}\mathbf{A}_{\nu}^{a},\mathbf{O}\}\Big)\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\hat{f}_{\mu}^{*}(\omega-\nu^{\prime})\hat{f}_{\mu}(\omega-\nu)\mathrm{d}\omega\qquad\text{(truncated at frequency }\mu=\tfrac{\Delta_{\nu}}{2}\text{)}\] \[=\sum_{\nu\in B(\mathbf{H})}\left(\mathbf{A}_{\nu}^{a\dagger}\mathbf{O}\mathbf{A}_{\nu}^{a}-\frac{1}{2}\{\mathbf{A}_{\nu}^{a\dagger}\mathbf{A}_{\nu}^{a},\mathbf{O}\}\right)\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\left|\hat{f}_{\mu}(\omega-\nu)\right|^{2}\mathrm{d}\omega.\] This matches the Davies generator \(\mathcal{D}_{a}^{\dagger\beta,\infty,\mathbf{H}}\) up to the difference between the weights \(\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)|\hat{f}_{\mu}(\omega-\nu)|^{2}\mathrm{d}\omega\) and \(\gamma_{\beta}(\nu)\), which gives the first term of (H.11); with the change of variables \(x=\beta\omega\) and \(y=\omega/\Lambda_{0}\), the relevant derivative is bounded by \(\mathcal{O}(\frac{1}{\Lambda_{0}}+\beta)\). The second term uses the tail bound \(P(x)\leq\frac{4}{\pi x\tau}\) from Eq. (K.21) and that \(\gamma^{\prime}_{\beta}(\nu\pm x)\) is rapidly decaying outside an \(x\in[\mp\nu-\Lambda_{0},\mp\nu+\Lambda_{0}]\) window, so that the integral over \(\frac{1}{x}\mathrm{d}x\) contributes at most \(\mathcal{O}(\int_{1/\tau}^{\Lambda_{0}}\frac{1}{x}\mathrm{d}x)=\mathcal{O}(\log(\tau\Lambda_{0}))\).

### Relating subspace and local gradients to global gradients

As a method of proof, we will often analyze a Lindbladian by its constituents, and here we present a few useful relations. First, when studying gradients, the gradient acting on a subspace is often conceptually simpler.
The following lemma relates the energy gradient in a subspace to the full energy gradient. It is a direct consequence of Lemmas H.1 and H.2 above. **Lemma H.5** (Subspace gradient and global gradient).: _In the setting of Lemma H.2, suppose \(\mathbf{P}\) projects onto a set of eigenstates of \(\mathbf{H}\) separated from the rest by a gap of at least \(\delta\). Then,_ \[-\mathcal{L}^{\dagger}[\mathbf{H}]\succeq-\mathbf{P}\mathcal{L}^{\dagger}[\mathbf{H}]\mathbf{P}-\mathcal{O}\left(|S|\left(\frac{\left\lVert\mathbf{H}\right\rVert^{3/4}}{\tau^{1/4}}+\frac{1}{\beta}+\frac{1}{\tau}+\frac{\left\lVert\theta_{\beta}\right\rVert_{\infty}}{\sqrt{\delta\tau}}\right)\right)\cdot\mathbf{I}.\] (H.20) Proof.: Let \(\mathbf{L}=\mathcal{L}^{\dagger}[\mathbf{H}]\). We have \[\mathbf{L}=\mathbf{PLP}+\mathbf{P}^{\perp}\mathbf{LP}^{\perp}+\mathbf{PLP}^{\perp}+\mathbf{P}^{\perp}\mathbf{LP}.\] (H.21) Using Lemma H.1, which establishes the almost negativity of the energy gradient, \[-\mathbf{L}\succeq-\mathcal{O}\left(|S|\left(\frac{\left\lVert\mathbf{H}\right\rVert^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\beta}\right)\right)\mathbf{I},\] (H.22) we have \[-\mathbf{P}^{\perp}\mathbf{LP}^{\perp}\succeq-\mathcal{O}\left(|S|\left(\frac{\left\lVert\mathbf{H}\right\rVert^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\beta}\right)\right)\mathbf{P}^{\perp}\qquad\text{(Lemma H.1)}\] \[\left\lVert\mathbf{PLP}^{\perp}+\mathbf{P}^{\perp}\mathbf{LP}\right\rVert\leq\mathcal{O}\left(|S|\left(\frac{\left\lVert\mathbf{H}\right\rVert^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{\left\lVert\theta_{\beta}\right\rVert_{\infty}}{\sqrt{\delta\tau}}\right)\right).\qquad\text{(Lemma H.2)}\] Putting the bounds together yields the advertised result.

Next, we provide a lemma that gives a simplified expression for the energy gradient operator when restricted to a subspace of low-energy eigenstates. **Lemma H.6** (Gradient in a subspace).: _In the setting of Lemma H.1, suppose \(\mathbf{H}\) has a subspace of low-energy eigenstates with corresponding projector \(\mathbf{Q}\) that is separated from the higher-energy eigenstates by an excitation gap \(\Delta_{\mathbf{Q}}\). Let \(\Delta_{\nu}=\min_{\nu_{1}\neq\nu_{2}\in B(\mathbf{H}|_{\mathbf{Q}})}|\nu_{1}-\nu_{2}|\) be the Bohr-frequency gap of \(\mathbf{H}\) restricted to the subspace. Assuming \(\Delta_{\nu}/2<\Delta_{\mathbf{Q}}\), the energy gradient operator in the subspace can be approximated as_ \[\left\lVert\mathbf{Q}\mathcal{L}^{\dagger}[\mathbf{H}]\mathbf{Q}-\sum_{a\in S}\sum_{\nu\in B(\mathbf{H}|_{\mathbf{Q}})}\mathbf{Q}\mathbf{A}_{\nu}^{a\dagger}\mathbf{Q}\mathbf{A}_{\nu}^{a}\mathbf{Q}\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega|\hat{f}_{\mu}(\omega-\nu)|^{2}\mathrm{d}\omega\right\rVert\leq\epsilon,\] (H.23) _where \(\mu=\Delta_{\nu}/2\) and_ \[\epsilon\leq|S|\mathcal{O}\left(\frac{\left\lVert\mathbf{H}\right\rVert^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\beta}+\frac{\left\lVert\omega\gamma_{\beta}(\omega)\right\rVert_{\infty}}{\sqrt{\Delta_{\nu}\tau}}\right).\] (H.24) Proof.: We invoke a series of approximations to rewrite \(\mathcal{L}^{\dagger}[\mathbf{H}]\) in terms of the exact Bohr frequencies on the subspace \(\mathbf{Q}\).
\[\mathcal{L}^{\dagger}[\mathbf{H}]\stackrel{{E_{1}}}{{\approx}}\mathcal{D}^{\dagger}[\mathbf{H}]\qquad\text{(Proposition F.3)}\] \[\stackrel{{E_{2}}}{{\approx}}\sum_{a\in S}\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\omega\mathbf{A}^{a}(\omega)^{\dagger}\mathbf{A}^{a}(\omega)\mathrm{d}\omega\qquad\text{(Lemma L.1)}\] \[\stackrel{{E_{3}}}{{\approx}}\sum_{a\in S}\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega\mathbf{A}^{a}(\omega)^{\dagger}\mathbf{A}^{a}(\omega)\mathrm{d}\omega\qquad\text{(operator norms: Corollary K.1)}\] \[\stackrel{{E_{4}}}{{\approx}}\sum_{a\in S}\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega\mathbf{S}^{a}(\omega)^{\dagger}\mathbf{S}^{a}(\omega)\mathrm{d}\omega\qquad\text{(secular approximation: Corollary K.3)}\] \[=\sum_{a\in S}\sum_{\nu^{\prime},\nu\in B(\mathbf{H})}\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega\mathbf{A}^{a\dagger}_{\nu^{\prime}}\mathbf{A}^{a}_{\nu}\hat{f}_{\mu}^{*}(\omega-\nu^{\prime})\hat{f}_{\mu}(\omega-\nu)\mathrm{d}\omega=:\mathbf{X}.\] (H.25) The errors are \(E_{1}=\mathcal{O}(\left|S\right|\left\|\mathbf{H}\right\|^{3/4}/\tau^{1/4})\), \(E_{2}=\mathcal{O}(\left|S\right|/\tau)\), \(E_{3}=\mathcal{O}(\left|S\right|/\beta)\), and \(E_{4}=\mathcal{O}(\left|S\right|\times\|\omega\gamma_{\beta}(\omega)\|_{\infty}/\sqrt{\mu\tau})\). In particular, \(E_{3}\) arises from dropping the positive integration range, with error bounded by \(\max_{\omega\geq 0}\omega\gamma_{\beta}(\omega)\leq 1/\beta\). Sandwiching Eq. (H.25) with \(\mathbf{Q}\) further simplifies the expression, as it restricts to transitions within the subspace. Specifically, we have \[\mathbf{Q}\mathbf{X}\mathbf{Q}=\sum_{a\in S}\sum_{\nu^{\prime},\nu\in B(\mathbf{H})}\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega\mathbf{Q}\mathbf{A}^{a\dagger}_{\nu^{\prime}}\mathbf{Q}\mathbf{A}^{a}_{\nu}\mathbf{Q}\hat{f}_{\mu}^{*}(\omega-\nu^{\prime})\hat{f}_{\mu}(\omega-\nu)\mathrm{d}\omega\qquad\text{(no heating transitions)}\] \[=\sum_{a\in S}\sum_{\nu\in B(\mathbf{H}|_{\mathbf{Q}})}\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega\mathbf{Q}\mathbf{A}^{a\dagger}_{\nu}\mathbf{Q}\mathbf{A}^{a}_{\nu}\mathbf{Q}\left|\hat{f}_{\mu}(\omega-\nu)\right|^{2}\mathrm{d}\omega.\qquad\text{(different Bohr frequencies decohere)}\] The first line inserts an additional projector \(\mathbf{Q}\) between \(\mathbf{A}^{a\dagger}_{\nu^{\prime}}\) and \(\mathbf{A}^{a}_{\nu}\) because any transition to excited states requires \(\nu,\nu^{\prime}>\Delta_{\mathbf{Q}}\), but this is forbidden by the restrictions that \(\omega\leq 0\) (from the integral) and that \(|\nu-\omega|,|\nu^{\prime}-\omega|<\mu<\Delta_{\mathbf{Q}}\) (from the secular approximation). In the second line, since the Bohr frequencies in \(B(\mathbf{H}|_{\mathbf{Q}})\) are at least \(\Delta_{\nu}=2\mu\) apart, we must have that \[\hat{f}_{\mu}^{*}(\omega-\nu^{\prime})\hat{f}_{\mu}(\omega-\nu)=0\quad\text{for all}\quad\omega\in\mathbb{R},\quad\text{unless}\quad\nu^{\prime}=\nu.\] (H.26) Combining the above with Eq. (H.25) concludes the proof.

When the Hamiltonian is local, thinking about the gradient "locally" is sometimes useful. The following lemma gives a sufficient condition that guarantees a global gradient. Since the consequence is strong, the premise is also more stringent; it is only helpful when the Hamiltonian is frustration-free. **Lemma H.7** (Local-to-global gradient condition).: _Suppose \(\mathbf{H}=\sum_{i}\mathbf{h}_{i}\), where each term \(\mathbf{h}_{i}\succeq 0\).
Then for any (not necessarily thermal) Lindbladian \(\mathcal{L}\),_ \[-\mathcal{L}^{\dagger}[\mathbf{h}_{i}]\succeq r_{i}\mathbf{h}_{i}\qquad\Longrightarrow\qquad-\mathcal{L}^{\dagger}[\mathbf{H}]\succeq r\mathbf{H},\] (H.27) _where \(r=\min_{i}r_{i}\)._ Proof.: By linearity, we have \(-\mathcal{L}^{\dagger}[\mathbf{H}]=\sum_{i}-\mathcal{L}^{\dagger}[\mathbf{h}_{i}]\succeq\sum_{i}r_{i}\mathbf{h}_{i}\). Since \(r_{i}\mathbf{h}_{i}\succeq r\mathbf{h}_{i}\), we have \(-\mathcal{L}^{\dagger}[\mathbf{H}]\succeq r\sum_{i}\mathbf{h}_{i}=r\mathbf{H}\), concluding the proof.

### Gradients for commuting Hamiltonians

When we are given a commuting Hamiltonian, the energy gradient induced by any local jump operator can be understood by restricting the system to its neighborhood. In this situation, the negative gradient condition for the overall Hamiltonian can be decomposed into conditions that can be checked locally. This gives an efficient method for showing that a commuting Hamiltonian has a negative gradient on all its excited states, which we elucidate in this section of the appendix. Recall the thermal Lindbladian \(\mathcal{L}_{a}:=\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) defined in Eq. (F.1) for a local jump operator \(\mathbf{A}^{a}\), whose Heisenberg picture is \[\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}}_{a}[\mathbf{O}]=\mathrm{i}[\mathbf{H}^{\beta,\tau,\mathbf{H}}_{LS,a},\mathbf{O}]+\mathcal{D}^{\dagger\beta,\tau,\mathbf{H}}_{a}[\mathbf{O}],\] (H.28) where \[\mathcal{D}^{\dagger\beta,\tau,\mathbf{H}}_{a}[\mathbf{O}]=\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\Big[\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\mathbf{O}\hat{\mathbf{A}}^{a}(\omega)-\frac{1}{2}\{\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega),\mathbf{O}\}\Big]\mathrm{d}\omega.\] (H.29) Note \(\hat{\mathbf{A}}^{a}(\omega)\) is the operator Fourier transform of \(\mathbf{A}^{a}(t)=\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{A}^{a}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}\), and \(\mathbf{H}^{\beta,\tau,\mathbf{H}}_{LS,a}\) is a Lamb-shift term defined in Eq. (F.7). When \(\mathbf{H}\) is a commuting Hamiltonian (e.g., [76, 77]), an important observation is that \(\mathbf{A}^{a}(t)\) only depends on the part of \(\mathbf{H}\) that does not commute with \(\mathbf{A}^{a}\). In particular, the energy gradient for each jump operator only depends on the neighborhood of \(\mathbf{A}^{a}\). **Lemma H.8** (Commuting Hamiltonian and localized Lindblad operators).: _Suppose \(\mathbf{H}=\sum_{e}\mathbf{h}_{e}\) is a commuting Hamiltonian. For any jump operator \(\mathbf{A}^{a}\), the associated energy gradient simplifies to_ \[\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}}_{a}[\mathbf{H}]=\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}_{\exists a}}_{a}[\mathbf{H}_{\exists a}],\] (H.30) _where \(\mathbf{H}_{\exists a}=\sum_{e:[\mathbf{h}_{e},\mathbf{A}^{a}]\neq 0}\mathbf{h}_{e}\) is the part of \(\mathbf{H}\) that does not commute with \(\mathbf{A}^{a}\)._ Proof.: When \(\mathbf{H}\) is commuting, we have \(\mathbf{A}^{a}(t)=\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{A}^{a}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}=\mathrm{e}^{\mathrm{i}\mathbf{H}_{\exists a}t}\mathbf{A}^{a}\mathrm{e}^{-\mathrm{i}\mathbf{H}_{\exists a}t}\), so the Lindbladian superoperator only depends on \(\mathbf{H}_{\exists a}\), i.e., \(\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}}_{a}=\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}_{\exists a}}_{a}\). Let \(\mathbf{H}_{\not\exists a}=\mathbf{H}-\mathbf{H}_{\exists a}\) be the part of \(\mathbf{H}\) that commutes with \(\mathbf{A}^{a}\).
Since \([\mathbf{H}_{\not\exists a},\mathbf{H}_{\exists a}]=0\), we have \([\mathbf{A}^{a}(t),\mathbf{H}_{\not\exists a}]=0\) for each \(t\), which implies \([\mathbf{H}^{\beta,\tau,\mathbf{H}}_{LS,a},\mathbf{H}_{\not\exists a}]=[\hat{\mathbf{A}}^{a}(\omega),\mathbf{H}_{\not\exists a}]=0\). Thus we have \(\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}}_{a}[\mathbf{H}_{\not\exists a}]=0\), and hence \(\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}}_{a}[\mathbf{H}]=\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}_{\exists a}}_{a}[\mathbf{H}_{\exists a}]\), as claimed.

### Negative gradient condition under perturbations to Hamiltonians

We next look at how the negative energy gradient condition changes under perturbations to the \(n\)-qubit Hamiltonian \(\mathbf{H}\). See Appendix L for the proof of the following theorem. **Theorem 10** (Monotonicity of gradient under level splitting).: _Consider a highly degenerate Hamiltonian \(\mathbf{H}=\sum_{\bar{E}}\bar{E}\mathbf{P}_{\bar{E}}\) with Bohr-frequency gap \(\Delta_{\nu}:=\min_{\nu_{1}\neq\nu_{2}\in B(\mathbf{H})}|\nu_{1}-\nu_{2}|\), and add a perturbation \(\mathbf{H}^{\prime}:=\mathbf{H}+\mathbf{V}\). Let \(\mathbf{P}=\mathbf{P}_{\bar{E}}\) be a projector onto an energy subspace and \(\mathbf{P}^{\prime}\) the corresponding perturbed subspace. Suppose the perturbation is weaker than the Bohr-frequency gap, \(\|\mathbf{V}\|\leq\frac{1}{8}\Delta_{\nu}\). For any \(\beta,\tau>0\), let \(\mathcal{L}=\sum_{a\in S}\mathcal{L}^{\beta,\tau,\mathbf{H}}_{a},\mathcal{L}^{\prime}=\sum_{a\in S}\mathcal{L}^{\beta,\tau,\mathbf{H}^{\prime}}_{a}\) be thermal Lindbladians with jumps \(\{\mathbf{A}^{a}\}_{a\in S}\), where \(\|\mathbf{A}^{a}\|\leq 1\) and the transition weight \(\gamma_{\beta}(\omega)\) is given by Eq. (F.4). Then we have the monotone property that_ \[-\mathcal{L}^{\dagger}[\mathbf{H}]\succeq r(\mathbf{I}-\mathbf{P})-\epsilon\mathbf{I}\quad\text{implies}\quad-\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\succeq r(\mathbf{I}-\mathbf{P}^{\prime})-\epsilon^{\prime}\mathbf{I},\] (H.31) _where_ \[\epsilon^{\prime}\leq\epsilon+|S|\cdot\mathcal{O}\left(\frac{1}{\tau}+\frac{\|\mathbf{H}\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}+\left(1+\frac{\Lambda_{0}+r}{\Delta_{\nu}}\right)\|\mathbf{V}\|\right).\] (H.32) Finally, we look at how the negative energy gradient condition changes when restricted to a subspace. See Appendix L.6 for the proofs of the following two corollaries. **Corollary H.1** (Monotonicity of gradient on a subspace).: _Consider a Hamiltonian \(\mathbf{H}=\sum_{\bar{E}}\bar{E}\mathbf{P}_{\bar{E}}\) and its perturbation \(\mathbf{H}^{\prime}:=\mathbf{H}+\mathbf{V}\). Let \(\mathbf{P}\) be the ground space projector for \(\mathbf{H}\) and \(\mathbf{P}^{\prime}\) be the corresponding perturbed eigensubspace of \(\mathbf{H}^{\prime}\). Let \(\mathbf{Q}\) be a low-energy eigensubspace projector of \(\mathbf{H}\) (i.e., \(\mathbf{Q}=\sum_{E\leq E_{\mathbf{Q}}}\mathbf{P}_{E}\) for \(E_{\mathbf{Q}}\in\text{Spec}(\mathbf{H})\)) with excitation gap \(\Delta_{\mathbf{Q}}\). Assume \(\frac{\left\|\mathbf{V}\right\|\left\|\mathbf{H}\right\|}{\Delta_{\mathbf{Q}}}\leq\frac{1}{144}\Delta_{\nu}\), where \(\Delta_{\nu}:=\min_{\nu_{1}\neq\nu_{2}\in B(\mathbf{H}|_{\mathbf{Q}})}\left|\nu_{1}-\nu_{2}\right|\) is the Bohr-frequency gap of \(\mathbf{H}\) within the subspace \(\mathbf{Q}\).
For any \(\beta,\tau>0\), let \(\mathcal{L}=\sum_{a\in S}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}},\mathcal{L}^{\prime}=\sum_{a\in S}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}^{\prime}}\) be thermal Lindbladians with jumps \(\{\mathbf{A}^{a}\}_{a\in S}\), where \(\left\|\mathbf{A}^{a}\right\|\leq 1\) and the transition weight \(\gamma_{\beta}(\omega)\) is given by Eq. (F.4). Then we have the monotone property that_ \[-\mathbf{Q}\mathcal{L}^{\dagger}[\mathbf{H}]\mathbf{Q}\succeq r\mathbf{Q}(\mathbf{I}-\mathbf{P})-\epsilon\mathbf{I}\quad\text{implies}\quad-\mathbf{Q}^{\prime}\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\mathbf{Q}^{\prime}\succeq r\mathbf{Q}^{\prime}(\mathbf{I}-\mathbf{P}^{\prime})-\epsilon^{\prime}\mathbf{I},\] (H.33) _where \(\mathbf{Q}^{\prime}\) projects onto the perturbed eigensubspace of \(\mathbf{H}^{\prime}\) identified with \(\mathbf{Q}\), and_ \[\epsilon^{\prime}\leq\epsilon+\left|S\right|\cdot\mathcal{O}\bigg(\frac{1}{\tau}+\frac{\left\|\mathbf{H}\right\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\mathbf{Q}}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}+\frac{\mathrm{e}^{-\beta\Delta_{\mathbf{Q}}/4}}{\beta}\\ +\left(1+\frac{\Lambda_{0}}{\Delta_{\nu}}\right)\frac{\left\|\mathbf{V}\right\|\left\|\mathbf{H}\right\|}{\Delta_{\mathbf{Q}}}+r\Big(\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}+\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}\Big)\bigg).\] (H.34) **Corollary H.2** (Monotonicity of gradient on a subspace under off-block-diagonal perturbation).: _In the setting of Corollary H.1, instead assume \(\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}},\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}\leq(const.)\), and that the perturbation is off-block-diagonal, i.e., \(\mathbf{Q}\mathbf{V}\mathbf{Q}=(\mathbf{I}-\mathbf{Q})\mathbf{V}(\mathbf{I}-\mathbf{Q})=0\). Then,_ \[-\mathbf{Q}\mathcal{L}^{\dagger}[\mathbf{H}]\mathbf{Q}\succeq r\mathbf{Q}(\mathbf{I}-\mathbf{P})-\epsilon\mathbf{I}\quad\text{implies}\quad-\mathbf{Q}^{\prime}\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\mathbf{Q}^{\prime}\succeq r\mathbf{Q}^{\prime}(\mathbf{I}-\mathbf{P}^{\prime})-\epsilon^{\prime}\mathbf{I},\] (H.35) _where_ \[\epsilon^{\prime}\leq\epsilon+\left|S\right|\cdot\mathcal{O}\bigg(\frac{1}{\tau}+\frac{\left\|\mathbf{H}\right\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\mathbf{Q}}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}+\frac{\mathrm{e}^{-\beta\Delta_{\mathbf{Q}}/4}}{\beta}\\ +\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta_{\mathbf{Q}}}+\left\|\mathbf{H_{Q}}\right\|\cdot\Big(\frac{\left\|\mathbf{H_{Q}}\right\|\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}\Delta_{\nu}}+\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta_{\mathbf{Q}}\Delta_{\nu}}\Big)+r\Big(\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}+\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta_{\mathbf{Q}}\Delta_{\nu}}\Big)\bigg).\] (H.36)

## Appendix I Energy landscape of an Ising chain

In this appendix, we take a brief aside to characterize the energy landscape of the one-dimensional ferromagnetic Ising chain under thermal perturbations. This provides a basic example of how the definition of local minima under thermal perturbations relates to the physical picture.
We will see that, in the absence of an external field, this system has many suboptimal local minima, each with a lifetime polynomial in the system size. Once an external field is added, however, the system has essentially no suboptimal local minima and can quickly cool to the ground state where all spins are aligned. This observation corresponds to the following physical phenomena: when there is no external magnetic field, a ferromagnetic system will often be stuck in a configuration with many domain walls, while an externally applied magnetic field can quickly magnetize the system. The Hamiltonian for the ferromagnetic Ising chain with periodic boundary conditions is \[\mathbf{H}=-\sum_{j=1}^{n}\mathbf{Z}_{j}\mathbf{Z}_{j+1}-h\sum_{j=1}^{n}\mathbf{Z}_{j},\] (I.1) where we identify \(\mathbf{Z}_{n+1}\equiv\mathbf{Z}_{1}\). Intuitively, this system energetically favors configurations where adjacent spins are aligned. When \(h=0\), we have two degenerate ground states, \(|00\cdots 0\rangle\) and \(|11\cdots 1\rangle\), which are the global minima. This degeneracy is broken when \(h\neq 0\), and the energies of these two states split by \(2nh\). The system also has many excited states with _domain walls_, i.e., locations where adjacent spins are anti-aligned, as in \(|01\rangle\) and \(|10\rangle\). In what follows, we study the energy landscape of the above system under thermal perturbations with jump operators \(\{\mathbf{A}^{j}=\mathbf{X}_{j}\}_{j=1}^{n}\), setting \(\tau=\infty\) for simplicity. We analyze three cases.

**Case 1: no external field (\(h=0\)).** In this case, we will see that any bit string state with domain walls sufficiently far from each other, e.g. \(|\cdots 0001111000\cdots\rangle\), is a suboptimal local minimum. Indeed, there is no local operation that strictly decreases the energy of such states; the jump operators \(\{\mathbf{X}_{j}\}\) can only displace the domain walls by one site, which does not change the energy. We can see this more formally by computing the energy gradient operator.
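Before carrying out that computation, here is a quick classical sanity check of the domain-wall picture; a minimal Python sketch, where `energy` evaluates the diagonal of Eq. (I.1) with \(h=0\) on bit strings:

```python
# Classical energies of computational basis states of Eq. (I.1) with h = 0.
def energy(s):                        # s: tuple of bits on a periodic chain
    z = [1 - 2 * b for b in s]        # bit 0 -> spin +1, bit 1 -> spin -1
    return -sum(z[j] * z[(j + 1) % len(s)] for j in range(len(s)))

def flip(s, j):                       # apply the jump X_j to a basis state
    return tuple(b ^ (i == j) for i, b in enumerate(s))

s = (0, 0, 0, 1, 1, 1, 1, 0, 0, 0)    # two well-separated domain walls
assert energy(flip(s, 3)) == energy(s)       # a wall moves by one site: energy unchanged
assert energy(flip(s, 6)) == energy(s)

t = (0, 0, 0, 1, 0, 0, 0, 0, 0, 0)    # two adjacent domain walls: ...010...
assert energy(flip(t, 3)) == energy(t) - 4   # the walls annihilate: energy drops by 4
```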
Since \(\mathbf{H}\) is a commuting Hamiltonian, we may apply Lemma H.8 and study the gradient induced by a single jump operator \(\mathbf{X}_{j}\) by restricting the Hamiltonian to its neighborhood, i.e., \[\mathbf{H}_{\ni j}=-\mathbf{Z}_{j-1}\mathbf{Z}_{j}-\mathbf{Z}_{j}\mathbf{Z}_{j+1}.\] (I.2) Observe \(\mathbf{H}_{\ni j}\) has three degenerate eigenspaces \(\mathbf{P}_{j}^{E}\) with energy \(E\) as follows: \[\mathbf{P}_{j}^{-2}=\sum_{\mathbf{s}\in\{000,111\}}|\mathbf{s}\rangle\!\langle\mathbf{s}|_{j-1,j,j+1},\qquad\mathbf{P}_{j}^{2}=\sum_{\mathbf{s}\in\{010,101\}}|\mathbf{s}\rangle\!\langle\mathbf{s}|_{j-1,j,j+1},\] \[\text{and}\qquad\mathbf{P}_{j}^{0}=\sum_{\mathbf{s}\in\{001,100,011,110\}}|\mathbf{s}\rangle\!\langle\mathbf{s}|_{j-1,j,j+1}.\] (I.3) Then the negative Bohr-frequencies and the associated jumps are \[\mathbf{A}_{\nu_{1}}^{j}=\mathbf{P}_{j}^{-2}\mathbf{X}_{j}\mathbf{P}_{j}^{2}=(|000\rangle\!\langle 010|+|111\rangle\!\langle 101|)_{j-1,j,j+1},\qquad\nu_{1}=-4,\] \[\mathbf{A}_{\nu_{2}}^{j}=\mathbf{P}_{j}^{0}\mathbf{X}_{j}\mathbf{P}_{j}^{2}+\mathbf{P}_{j}^{-2}\mathbf{X}_{j}\mathbf{P}_{j}^{0}=0,\qquad\nu_{2}=-2.\] (I.4) Hence, the energy gradient operator associated with jump \(\mathbf{X}_{j}\) is \[\mathcal{D}_{j}^{\dagger\beta,\infty,\mathbf{H}}[\mathbf{H}]=\mathcal{D}_{j}^{\dagger\beta,\infty,\mathbf{H}_{\ni j}}[\mathbf{H}_{\ni j}]=\sum_{\nu\in B(\mathbf{H}_{\ni j})}\nu\gamma_{\beta}(\nu)\mathbf{A}_{\nu}^{j\dagger}\mathbf{A}_{\nu}^{j}=\theta_{0}\cdot(|010\rangle\!\langle 010|+|101\rangle\!\langle 101|)_{j-1,j,j+1}+\mathcal{O}(e^{-4\beta}),\] (I.5) where \(\theta_{0}=-4\gamma_{\beta}(-4)=-\Omega(1)\). As we can see, the energy gradient is essentially \(0\) when the domain walls are more than distance \(1\) apart, and only becomes significant when two domain walls are next to each other, as in \(|\cdots 010\cdots\rangle\) or \(|\cdots 101\cdots\rangle\). This implies the presence of exponentially many suboptimal local minima; for example, choose whether or not to place a domain wall every \(2\) sites. Despite the presence of many suboptimal local minima, we now argue that they have a lifetime polynomial in the system size \(n\) when the system evolves under thermal perturbations. We may understand the dynamics of the system as a random walk of domain walls, where two domain walls annihilate each other when they meet. Since two domain walls at distance \(\ell\) apart moving under diffusive dynamics take \(\mathcal{O}(\ell^{2})\) time to meet, a suboptimal local minimum with \(k\) domain walls decays to a lower energy state after approximately \(\mathcal{O}(n^{2}/k^{2})\) time.

**Case 2: weak external field (\(0<h<2\)).** In this case, the ground state of \(\boldsymbol{H}\) is uniquely \(|0^{n}\rangle\), as all spins are slightly favored to be in the \(|0\rangle\) state instead of the \(|1\rangle\) state. When the domain walls are far apart, e.g. \(|\cdots 0001111000\cdots\rangle\), the applied external field causes an attraction across the domain of \(1\)'s, which energetically favors moving the domain walls closer together. The presence of the field \(h\) removes all the suboptimal local minima present in the previous case. The state \(|1^{n}\rangle\), which was a ground state in the previous case, now becomes the only suboptimal local minimum. We now characterize the energy landscape of \(\boldsymbol{H}\) more formally using the energy gradient operator. Again applying Lemma H.8, we may consider the gradient induced by a single jump operator by focusing on its neighborhood.
The relevant neighborhood Hamiltonian is \[\boldsymbol{H}_{\ni j}=-\boldsymbol{Z}_{j-1}\boldsymbol{Z}_{j}-\boldsymbol{Z}_{j}\boldsymbol{Z}_{j+1}-h\boldsymbol{Z}_{j}.\] (I.6) Turning the crank, we see that the negative Bohr-frequencies and the associated jumps are \[\boldsymbol{A}_{\nu_{1}}^{j}=|000\rangle\!\langle 010|_{j-1,j,j+1},\qquad\nu_{1}=-4-2h,\] \[\boldsymbol{A}_{\nu_{2}}^{j}=|111\rangle\!\langle 101|_{j-1,j,j+1},\qquad\nu_{2}=-4+2h,\] \[\boldsymbol{A}_{\nu_{3}}^{j}=(|001\rangle\!\langle 011|+|100\rangle\!\langle 110|)_{j-1,j,j+1},\qquad\nu_{3}=-2h.\] (I.7) Then the energy gradient operator associated with jump \(\boldsymbol{X}_{j}\) is \[\mathcal{D}_{j}^{\dagger\beta,\infty,\boldsymbol{H}}[\boldsymbol{H}]=\sum_{\nu\in B(\boldsymbol{H}_{\ni j})}\nu\gamma_{\beta}(\nu)\boldsymbol{A}_{\nu}^{j\dagger}\boldsymbol{A}_{\nu}^{j}=(\theta_{1}|010\rangle\!\langle 010|+\theta_{2}|101\rangle\!\langle 101|+\theta_{3}|011\rangle\!\langle 011|+\theta_{3}|110\rangle\!\langle 110|)_{j-1,j,j+1}+\mathcal{O}(e^{-2\beta h}),\] (I.8) where \(\theta_{j}=\nu_{j}\gamma_{\beta}(\nu_{j})\). As we can see, any configuration with a domain wall now has a significant gradient from at least one of the jumps. The only configurations without a significant energy gradient are \(|0^{n}\rangle\), the ground state, and \(|1^{n}\rangle\), a metastable local minimum.

**Case 3: strong external field (\(h>2\)).** In this case, the external field is sufficiently strong that the state \(|1^{n}\rangle\) is no longer a local minimum, and \(\boldsymbol{H}\) has no suboptimal local minima. To see this, we note that \(h>2\) implies \(\nu_{2}>0\) in Eq. (I.7), which means the energetically favored jump operator is actually \(\boldsymbol{A}_{-\nu_{2}}^{j}=\boldsymbol{A}_{\nu_{2}}^{j\dagger}\). This implies that the energy gradient operator induced by the jump \(\boldsymbol{X}_{j}\) in this case is \[\mathcal{D}_{j}^{\dagger\beta,\infty,\boldsymbol{H}}[\boldsymbol{H}]=(\theta_{1}|010\rangle\!\langle 010|+\theta_{2}^{\prime}|111\rangle\!\langle 111|+\theta_{3}|011\rangle\!\langle 011|+\theta_{3}|110\rangle\!\langle 110|)_{j-1,j,j+1}+\mathcal{O}(e^{-2\beta(h-2)}),\] (I.9) where \(\theta_{2}^{\prime}=-\nu_{2}\gamma_{\beta}(-\nu_{2})\). This gives the state \(|1^{n}\rangle\) a significant energy gradient, and thus the ground state \(|0^{n}\rangle\) is the only local minimum of \(\boldsymbol{H}\).

## Appendix J All local minima are global in BQP-hard Hamiltonians (Proof of Theorem 7)

A main result of our work is that the task of finding a local minimum for \(\mathbf{H}_{C}\) under thermal perturbations is universal for quantum computation and hence classically hard. As we have seen in the main text and Appendix E.2, this main result follows from Theorem 7, which we prove in this appendix. We start by defining \(\mathbf{H}_{C}\) in detail. Given a 2D \(n\)-qubit circuit \(\mathbf{U}_{C}=\mathbf{U}_{T}\cdots\mathbf{U}_{2}\mathbf{U}_{1}\) with \(T=2t_{0}+L=\mathrm{poly}(n)\) gates as constructed in Fig. 1 of Ref. [43], where the first and last \(t_{0}\) gates are identity gates and each gate of the 2D circuit \(\mathbf{U}_{C}\) is geometrically adjacent to the subsequent gate, we consider a geometrically local Hamiltonian on a 2D lattice with \(n+T\) qubits defined as follows.
**Definition 14** (Modified circuit-to-Hamiltonian construction).: _Consider a 2D circuit_ \[\mathbf{U}_{C}=\mathbf{U}_{T}\cdots\mathbf{U}_{2}\mathbf{U}_{1}\] _on \(n\) qubits with \(T=2t_{0}+L\) gates, where the first and last \(t_{0}=cL^{2}\) gates are identity gates with \(c=\mathcal{O}(1)\), and consecutive gates are geometrically adjacent. We define a geometrically-local Hamiltonian \(\mathbf{H}_{C}\) on a 2D lattice with \(n+T\) qubits as follows,_ \[\mathbf{H}_{C}:=\mathbf{H}_{\mathrm{clock}}+\mathbf{H}_{\mathrm{in}}+\mathbf{H}_{\mathrm{prop}}\quad\text{acting on}\quad(\mathbb{C}^{2})^{\otimes n}\otimes(\mathbb{C}^{2})^{\otimes T},\] (J.1) _where each individual term is given by_ \[\mathbf{H}_{\mathrm{clock}}:=J_{\mathrm{clock}}\sum_{t=1}^{T-1}f_{t}\mathbf{I}\otimes|01\rangle\!\langle 01|_{t,t+1},\] \[\mathbf{H}_{\mathrm{in}}:=J_{\mathrm{in}}\sum_{j=1}^{n}g_{j}|1\rangle\!\langle 1|_{j}\otimes|10\rangle\!\langle 10|_{t_{j}-1,t_{j}},\] \[\mathbf{H}_{\mathrm{prop}}:=\frac{1}{2}J_{\mathrm{prop}}\sum_{t=1}^{T}\mathbf{H}_{\mathrm{prop}}(t),\] \[\mathbf{H}_{\mathrm{prop}}(1):=\mathbf{I}-h_{1}(\mathbf{U}_{1}\otimes|10\rangle\!\langle 00|_{1,2}+\mathbf{U}_{1}^{\dagger}\otimes|00\rangle\!\langle 10|_{1,2}),\] \[\mathbf{H}_{\mathrm{prop}}(t):=\mathbf{I}-h_{t}(\mathbf{U}_{t}\otimes|110\rangle\!\langle 100|_{t-1,t,t+1}+\mathbf{U}_{t}^{\dagger}\otimes|100\rangle\!\langle 110|_{t-1,t,t+1})\qquad\text{for each}\quad 1<t<T,\] \[\mathbf{H}_{\mathrm{prop}}(T):=\mathbf{I}-h_{T}(\mathbf{U}_{T}\otimes|11\rangle\!\langle 10|_{T-1,T}+\mathbf{U}_{T}^{\dagger}\otimes|10\rangle\!\langle 11|_{T-1,T}).\] _The \(T\) qubits correspond to the \(T\) geometrically-local gates and are placed next to each gate to ensure \(\mathbf{H}_{C}\) is geometrically local. The couplings are chosen as_ \[J_{\mathrm{clock}}=1,\quad f_{t}=(T-t)/T,\quad g_{j}=1/\xi_{t_{j}-1},\quad h_{t}=\sqrt{t(T-t+1)}.\] (J.2) _We will set the other parameters \(J_{\mathrm{in}},J_{\mathrm{prop}}\) later. The time \(t_{j}\) is the first time at which qubit \(j\) is acted on._ We will show later in Appendix J.1 that \(\mathbf{H}_{C}\) has a unique ground state given by \[|\eta_{\mathbf{0}}\rangle=\sum_{t=0}^{T}\sqrt{\xi_{t}}\big{(}\mathbf{U}_{t}\cdots\mathbf{U}_{1}\,|0^{n}\rangle\,\big{)}\otimes|1^{t}0^{T-t}\rangle\qquad\text{where}\quad\xi_{t}:=\frac{1}{2^{T}}\binom{T}{t}.\] (J.3) Note this state encodes the computational history of the circuit \(\mathbf{U}_{C}\). By choosing \(t_{0}=L^{2}\), we ensure that each time in the interesting part of the computational history (i.e., the intermediate \(L\) gates) can be observed with \(\Omega(1/T)\) probability, as we will show later in Proposition J.1. This also implies \(g_{j}=\mathcal{O}(T)\). We now state a detailed version of Theorem 7 based on this definition of \(\mathbf{H}_{C}\). **Theorem 11** (All local minima are global in \(\mathbf{H}_{C}\)).: _Let \(\mathbf{P}_{G}\) be the ground-space projector for the Hamiltonian \(\mathbf{H}_{C}\) in Eq. (J.1).
For any failure probability \(0<\delta<1\), there is a parameter choice \(J_{\rm in},J_{\rm prop}={\rm poly}(n,T,\delta^{-1})\) and a choice of \(m\) two-qubit jump operators_ \[S_{0}=\{\mathbf{A}^{a}\}_{a=1}^{m}:=\{\mathbf{I}\otimes\mathbf{X}_{t},\mathbf{I}\otimes\mathbf{Z}_{ t}\}_{t=1}^{T}\cup\{\mathbf{X}_{j}\otimes|0\rangle\!\langle 0|_{t_{j}}\}_{j=1}^{n}\] (J.4) _with \(m=2T+n\) satisfying the following:_ _For a sufficiently small \(\epsilon=1/\operatorname{poly}(n,T,\delta^{-1})\), any \(\epsilon\)-approximate local minimum \(\mathbf{\rho}\) of \(\mathbf{H}_{C}\) under thermal perturbations with sufficiently large \(\beta=\operatorname{poly}(n,T,\delta^{-1})\), \(\tau=\operatorname{poly}(n,T,\delta^{-1})\), and system-bath interactions generated by \(S_{0}\) is an exact global minimum with probability \(\operatorname{tr}(\mathbf{P}_{G}(\mathbf{H}_{C})\mathbf{\rho})\geq 1-\delta\)._ We remind the reader that the thermal Lindbladians that generate the perturbations are defined in Eq. (F.1). The transition weight \(\gamma_{\beta}(\omega)\) is chosen to be Glauber dynamics as defined in Eq. (F.4), with energy cut-off \(\Lambda_{0}=1\) as a convenient choice so that \(\|\omega\gamma_{\beta}(\omega)\|_{\infty}\leq 1\). We do not expect our result to change with other reasonable choices of \(\gamma_{\beta}(\omega)\). **Remark 2**.: Our \(\mathbf{H}_{C}\) is similar to previous circuit-Hamiltonian constructions (see e.g., [20, 43, 44]), but there are some significant differences. One key change is that \(\mathbf{H}_{\rm prop}\) is no longer frustration-free, and its couplings \(h_{t}\) are not uniform; consequently, this revised \(\mathbf{H}_{\rm prop}\) has better spectral properties that enable us to lower bound its Bohr-frequency gap. Furthermore, \(\mathbf{H}_{\rm clock}\) is given non-uniform couplings \(f_{t}\) so that any local excitation has an incentive to move rightwards (e.g. \(|0011\rangle\to|0001\rangle\)), ensuring \(\mathbf{H}_{\rm clock}\) has no local minima except its ground states. These modifications allow us to prove that all excited states of \(\mathbf{H}_{C}\) have significant negative gradients, so that they will all flow to the ground state under thermal perturbations. ### Characterizing low energy states of \(\mathbf{H}_{C}\) We will start by characterizing the low energy states of the circuit Hamiltonian. We define the following sequence of Hamiltonians \[\mathbf{H}_{\rm I} =\mathbf{H}_{\rm clock}\] \[\mathbf{H}_{\rm II} =\mathbf{H}_{\rm clock}+\mathbf{H}_{\rm prop}\] \[\mathbf{H}_{\rm III} =\mathbf{H}_{\rm clock}+\mathbf{H}_{\rm prop}+\mathbf{H}_{\rm in}=\mathbf{H}_{C}\] with the ground space projectors \(\mathbf{P}_{j}\) such that \[\mathbf{P}_{\rm I}\supset\mathbf{P}_{\rm II}\supset\mathbf{P}_{\rm III}.\] (J.5) Equivalently, we have \(\mathbf{P}_{\rm I}\mathbf{P}_{\rm II}=\mathbf{P}_{\rm II}\) and \(\mathbf{P}_{\rm I}\mathbf{P}_{\rm III}=\mathbf{P}_{\rm II}\mathbf{P}_{\rm III}=\mathbf{P}_{\rm III}\). Our approach to calculating the gradient for \(\mathbf{H}_{\rm III}=\mathbf{H}_{C}\) is _perturbative_: we start with the simple Hamiltonian \(\mathbf{H}_{\rm I}\) and gradually add perturbations (which will split the spectrum, Figure 3). Remarkably, the gradient is _stable_ as long as the perturbation is weak enough. That is, it suffices to analyze the gradient of the simpler, unperturbed Hamiltonians on suitable subspaces. Now we describe explicitly the ground subspaces \(\mathbf{P}_{\rm I}\), \(\mathbf{P}_{\rm II}\) and \(\mathbf{P}_{\rm III}\). 
Let \[|C_{t}\rangle=|1^{t}0^{T-t}\rangle\quad\text{ for each }\quad t=0,1,\ldots,T\] (J.6) and \[|\eta_{\mathbf{x},t}\rangle=\left(\mathbf{U}_{t}\cdots\mathbf{U}_{1}\,|\mathbf{x}\rangle\, \right)\otimes|C_{t}\rangle\quad\text{ for each }\quad\mathbf{x}\in\{0,1\}^{n}\quad\text{and}\quad 0\leq t\leq T.\] (J.7) The set of \(|\eta_{\mathbf{x},t}\rangle\) forms an orthonormal basis for the ground space of \(\mathbf{H}_{\rm I}=\mathbf{H}_{\rm clock}\), with energy \(0\) and a spectral gap of \(J_{\rm clock}/T\). The ground space projector is \[\mathbf{P}_{\rm I}=\sum_{\mathbf{x}\in\{0,1\}^{n}}\sum_{t=0}^{T}\lvert\eta_{\mathbf{x},t} \rangle\!\langle\eta_{\mathbf{x},t}\rvert\quad\text{and}\quad\langle\eta_{\mathbf{y},t ^{\prime}}|\eta_{\mathbf{x},t}\rangle=\delta_{\mathbf{y}\mathbf{x}}\delta_{tt^{\prime}}.\] (J.8) Observe that \([\mathbf{H}_{\rm prop},\mathbf{P}_{\rm I}]=0\), so the ground states of \(\mathbf{H}_{\rm II}=\mathbf{H}_{\rm clock}+\mathbf{H}_{\rm prop}\) are given as the ground states of \[\mathbf{P}_{\rm I}\mathbf{H}_{\rm prop}\mathbf{P}_{\rm I}=\frac{J_{\rm prop}}{2}\sum_{t=1}^ {T}[\mathbf{I}-h_{t}(\mathbf{U}_{t}\otimes|C_{t}\rangle\!\langle C_{t-1}|+\mathbf{U}_{t}^ {\dagger}\otimes|C_{t-1}\rangle\!\langle C_{t}|)].\] (J.9) Furthermore, observe the orthogonality relations \[\langle\eta_{\mathbf{x},t}|\mathbf{H}_{\rm prop}|\eta_{\mathbf{y},t^{\prime}}\rangle=0 \qquad\text{when}\quad\mathbf{x}\neq\mathbf{y}\quad\text{for each}\quad t,t^{\prime}.\] (J.10) That is, \(\mathbf{P}_{\rm I}\mathbf{H}_{\rm prop}\mathbf{P}_{\rm I}\) is block diagonal with blocks labeled by \(\mathbf{x}\). Moreover, for any \(\mathbf{x}\), in the basis of \(\ket{\eta_{\mathbf{x},0}},\ket{\eta_{\mathbf{x},1}},\ldots,\ket{\eta_{\mathbf{x},T}}\), we can explicitly write down the effective \((T+1)\times(T+1)\) Hamiltonian \[\mathbf{P}_{\rm I}\mathbf{H}_{\rm prop}\mathbf{P}_{\rm I}=\frac{J_{\rm prop}}{2}\begin{pmatrix} T&-h_{1}\\ -h_{1}&T&-h_{2}\\ &-h_{2}&T&-h_{3}\\ &&-h_{3}&\ddots&\ddots\\ &&&\ddots&T&-h_{T}\\ &&&-h_{T}&T\end{pmatrix}=J_{\rm prop}\Big{(}\frac{T}{2}\mathbf{I}-\mathbf{L}_{x}\Big{)},\] (J.11) where \(\mathbf{L}_{x}\) is the matrix representation of the spin-\(T/2\) angular momentum operator whose spectrum is well known. In particular, the unique ground state is \[\ket{\eta_{\mathbf{x}}}:=\sum_{t=0}^{T}\sqrt{\xi_{t}}\ket{\eta_{\mathbf{x},t}}\quad \text{with energy}\quad 0\quad\text{and spectral gap}\quad J_{\rm prop}.\] (J.12) We will call \(\ket{\eta_{\mathbf{x}}}\) the _history state_ with respect to input \(\ket{\mathbf{x}}\). The ground space projector of \(\mathbf{H}_{\rm II}\) is then given as \[\mathbf{P}_{\rm II}=\sum_{\mathbf{x}\in\{0,1\}^{n}}\lvert\eta_{\mathbf{x}}\rangle\!\langle \eta_{\mathbf{x}}\rvert.\] (J.13) Finally, note \(\ket{\eta_{\mathbf{0}}}\) for \(\mathbf{0}=(0,0,\cdots,0)\) is the unique ground state of \(\mathbf{H}_{C}=\mathbf{H}_{\rm III}\) and so \[\mathbf{P}_{\rm III}=\lvert\eta_{\mathbf{0}}\rangle\!\langle\eta_{\mathbf{0}}\rvert.\] (J.14) This is because \(\mathbf{H}_{\rm in}\) is positive semi-definite, and \(\ket{\eta_{\mathbf{0}}}\) is the only state in \(\mathbf{P}_{\rm II}\) with zero eigenvalue with respect to \(\mathbf{H}_{\rm in}\). ### Proof of Theorem 11 To prove Theorem 11, we show that all excited states in \(\mathbf{I}-\mathbf{P}_{\rm III}\) have significant gradient relative to \(\mathbf{H}_{\rm III}\). 
Our analysis of the gradient will be carried out in three subspaces, \[\mathbf{I}-\mathbf{P}_{\rm III}=\underbrace{(\mathbf{I}-\mathbf{P}_{\rm I})}_{\text{studying }\mathbf{H}_{\rm I}}+\underbrace{\mathbf{P}_{\rm I}(\mathbf{I}-\mathbf{P}_{\rm II})}_{\text{studying }\mathbf{H}_{\rm II}\mathbf{P}_{\rm I}}+\underbrace{\mathbf{P}_{\rm II}(\mathbf{I}-\mathbf{P}_{\rm III})}_{\text{studying }\mathbf{H}_{\rm III}\mathbf{P}_{\rm II}}.\] (J.15) Let \(\mathcal{L}_{j}:=\mathcal{L}^{\beta,\tau,\mathbf{H}_{j}}\) be the thermal Lindbladian with uniform weights as in Eq. (H.2), defined with respect to \(\mathbf{H}_{j}\) and the jump operators in Eq. (J.4).

**Case 1: Gradients for \(\mathbf{I}-\mathbf{P_{\mathrm{I}}}\) from \(\mathbf{H_{\mathrm{I}}}\).** We first show that excited states of \(\mathbf{H_{\mathrm{I}}}\) have a good energy gradient: \[-\mathcal{L}_{\mathrm{I}}^{\dagger}[\mathbf{H_{\mathrm{I}}}]\succeq r_{1}(\mathbf{I}-\mathbf{P_{\mathrm{I}}})-\epsilon_{1a}\mathbf{I}.\] (J.16) Because \(\mathbf{H_{\mathrm{I}}}=\mathbf{H_{\mathrm{clock}}}\) is a commuting Hamiltonian, the global gradient can be lower bounded by checking the local gradient from individual local jumps. We carry out this computation in Appendix J.3.1, where we show \(r_{1}=\Omega(\frac{1}{T\ln\beta})\) and \(\epsilon_{1a}=\mathcal{O}(T^{7/4}/\tau^{1/4}+T/\beta+T(1+\beta)\ln\tau/\tau)\) in Lemma J.2. We then apply Theorem 10 (with \(\mathbf{H}=\mathbf{H_{\mathrm{I}}}\) and \(\mathbf{H}^{\prime}=\mathbf{H_{\mathrm{III}}}\)) to show that excited states of \(\mathbf{H_{\mathrm{I}}}\) have a large gradient with respect to \(\mathbf{H_{\mathrm{III}}}\). In other words, \[-\mathcal{L}_{\mathrm{III}}^{\dagger}[\mathbf{H_{\mathrm{III}}}]\succeq r_{1}(\mathbf{I}-\mathbf{P_{\mathrm{I}}})-\epsilon_{1}\mathbf{I}.\] (J.17) To do this, we only need to check that the conditions of Theorem 10 are satisfied. Note \(\mathbf{P_{\mathrm{I}}}\) also projects onto eigenstates of \(\mathbf{H_{\mathrm{III}}}\) since \([\mathbf{P_{\mathrm{I}}},\mathbf{H_{\mathrm{III}}}]=0\), and \(\mathbf{H_{\mathrm{I}}}\) has a discrete spectrum with a minimum Bohr-frequency gap of at least \(\Delta_{\nu}\geq 1/T\). We can choose sufficiently small \(J_{\mathrm{prop}},J_{\mathrm{in}}\) such that \(\|\mathbf{V_{\mathrm{I}}}\|\coloneqq\|\mathbf{H}_{\mathrm{prop}}+\mathbf{H}_{\mathrm{in}}\|\leq J_{\mathrm{prop}}T^{2}+J_{\mathrm{in}}ng_{\mathrm{max}}\ll\Delta_{\nu}(\mathbf{H_{\mathrm{I}}})=1/T\), where \(g_{\mathrm{max}}:=\max_{1\leq j\leq n}g_{j}=\mathcal{O}(T)\). Plugging \(\Lambda_{0}=1\) and the other parameters into the error bound (H.32) gives \[\epsilon_{1}=\epsilon_{1a}+|S_{0}|\mathcal{O}\left(\frac{1}{\tau}+\frac{\|\mathbf{H_{\mathrm{I}}}\|^{3/4}}{\tau^{1/4}}+\frac{1}{\tau^{1/3}}+\frac{1}{\sqrt{\Delta_{\nu}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}+\left(1+\frac{1+r_{1}}{\Delta_{\nu}}\right)\|\mathbf{V_{\mathrm{I}}}\|\right).\] (J.18) Noting that \(|S_{0}|,\|\mathbf{H_{\mathrm{I}}}\|,g_{\mathrm{max}}=\mathcal{O}(T)\), we can make \(\epsilon_{1}/r_{1}\leq\delta/6\) by choosing appropriate powers \[\tau\geq\tilde{\Omega}(T^{11}/\delta^{4}),\quad\beta\geq\tilde{\Omega}(T^{2}/\delta),\quad J_{\mathrm{prop}}\leq\tilde{\mathcal{O}}(\delta/T^{5}),\quad\text{and}\quad J_{\mathrm{in}}\leq\tilde{\mathcal{O}}(\delta/nT^{4}).\] (J.19)

Figure 3: The degenerate levels of \(\mathbf{H_{\mathrm{clock}}}\) split under perturbations \(\mathbf{H_{\mathrm{prop}}}\) and \(\mathbf{H_{\mathrm{in}}}\). In particular, the ground state splitting is tracked in blue shades. The careful choice of energy scales ensures that the levels can be identified with the original degenerate blocks.
**Case 2: Gradients for \(\mathbf{P_{\rm I}}(\mathbf{I}-\mathbf{P_{\rm II}})\) from \(\mathbf{H_{\rm II}}\).** We next restrict our attention to the action of \(\mathcal{L}^{\dagger}_{\rm II}[\mathbf{H_{\rm II}}]\) inside the \(\mathbf{P_{\rm I}}\) subspace, which conveniently is also an eigensubspace of both \(\mathbf{H_{\rm II}}\) and \(\mathbf{H_{\rm III}}\) since \([\mathbf{P_{\rm I}},\mathbf{H_{\rm II}}]=[\mathbf{P_{\rm I}},\mathbf{H_{\rm III}}]=0\). Explicit computation in Appendix J.3.2 shows that \[-\mathbf{P_{\rm I}}\mathcal{L}^{\dagger}_{\rm II}[\mathbf{H_{\rm II}}]\mathbf{P_{\rm I}}\succeq r_{2}\mathbf{P_{\rm I}}(\mathbf{I}-\mathbf{P_{\rm II}})-\epsilon_{2a}\mathbf{I},\] (J.20) with the bounds from Lemma J.4 giving \[r_{2}=\Omega(\frac{J_{\rm prop}}{T\ln\beta})\quad\text{and}\quad\epsilon_{2a}=|S_{0}|\cdot\mathcal{O}\Big(\frac{1}{\tau}+\frac{\|\mathbf{H_{\rm II}}\|^{3/4}}{\tau^{1/4}}+\frac{1}{\beta}+\frac{1}{\sqrt{\tau J_{\rm prop}}}\Big).\] (J.21) We then invoke Corollary H.1 with \(\mathbf{Q}=\mathbf{Q}^{\prime}=\mathbf{P_{\rm I}}\), \(\mathbf{H}=\mathbf{H_{\rm II}}\) and \(\mathbf{H}^{\prime}=\mathbf{H_{\rm III}}\) to show monotonicity of the energy gradient on a subspace under perturbation, \[-\mathbf{P_{\rm I}}\mathcal{L}^{\dagger}_{\rm III}[\mathbf{H_{\rm III}}]\mathbf{P_{\rm I}}\succeq r_{2}\mathbf{P_{\rm I}}(\mathbf{I}-\mathbf{P_{\rm II}^{\prime}})-\epsilon_{2b}\mathbf{I},\] (J.22) where \(\mathbf{P_{\rm II}^{\prime}}\) is the perturbed eigensubspace of \(\mathbf{H_{\rm III}}\) that is identified with \(\mathbf{P_{\rm II}}\). To justify the application of Corollary H.1, we note that in \(\mathbf{H_{\rm II}}\), the eigensubspace \(\mathbf{P_{\rm I}}\) has an excitation gap of \(\Delta_{\mathbf{Q}}\geq 1/T-2\|\mathbf{H_{\rm prop}}\|=\Omega(1/T)\), where the extra \(2\|\mathbf{H_{\rm prop}}\|\) term is due to shifts in the eigenvalues of \(\mathbf{H_{\rm I}}\) bounded by Weyl's inequality (see Proposition L.3). The perturbation on \(\mathbf{H_{\rm II}}\) has strength \(\|\mathbf{V_{\rm II}}\|:=\|\mathbf{H_{\rm in}}\|\leq J_{\rm in}ng_{\rm max}\), and \(\Delta_{\nu}(\mathbf{H_{\rm II}}|_{\mathbf{P_{\rm I}}})=J_{\rm prop}\).
Then, noting \(\Delta_{\nu}\ll\Delta_{\mathbf{Q}}\), we keep the dominant terms in the error bound (H.34) and get \[\epsilon_{2b}\leq\epsilon_{2a}+|S_{0}|\cdot\mathcal{O}\Big(\frac{\|\mathbf{V_{\rm II}}\|\|\mathbf{H_{\rm II}}\|}{\Delta_{\nu}\Delta_{\mathbf{Q}}}+r_{2}\frac{\|\mathbf{V_{\rm II}}\|}{\Delta_{\nu}}\Big).\] (J.23) Furthermore, since the eigensubspace \(\mathbf{P_{\rm I}}\) in \(\mathbf{H_{\rm III}}\) is separated from the other eigenstates by a spectral gap of \(1/T-2\|\mathbf{V_{\rm I}}\|=\Omega(1/T)\), we may apply Lemma H.5 to show \[-\mathcal{L}^{\dagger}_{\rm III}[\mathbf{H_{\rm III}}]\succeq r_{2}\mathbf{P_{\rm I}}(\mathbf{I}-\mathbf{P_{\rm II}^{\prime}})-\epsilon_{2}\mathbf{I}\quad\text{where}\quad\epsilon_{2}=\epsilon_{2b}+|S_{0}|\mathcal{O}(\frac{1}{\beta}+\frac{1}{\tau}+\sqrt{\frac{T}{\tau}}).\] (J.24) Since \(|S_{0}|,\ g_{\rm max},\ \|\mathbf{H_{\rm II}}\|,\ \|\mathbf{H_{\rm III}}\|=\mathcal{O}(T)\), we can make \(\epsilon_{2}/r_{2}\leq\delta/6\) by choosing \[\tau\geq\tilde{\Omega}\Big(\frac{T^{11}}{J_{\rm prop}^{2}\delta^{4}}\Big),\quad\beta\geq\tilde{\Omega}\Big(\frac{T^{2}}{J_{\rm prop}\delta}\Big),\quad\text{and}\quad J_{\rm in}\leq\tilde{\mathcal{O}}\Big(\frac{J_{\rm prop}^{2}\delta}{nT^{5}}\Big).\] (J.25)

**Case 3: Gradients for \(\mathbf{P_{\rm II}}(\mathbf{I}-\mathbf{P_{\rm III}})\) from \(\mathbf{H_{\rm III}}\).** Now, we restrict our attention to \(\mathbf{P_{\rm II}^{\prime}}\), the perturbed eigensubspace in \(\mathbf{H_{\rm III}}\) that corresponds to \(\mathbf{P_{\rm II}}\). We can show by explicit computation (deferred to Appendix J.3.3) that \[-\mathbf{P_{\rm II}^{\prime}}\mathcal{L}^{\dagger}_{\rm III}[\mathbf{H_{\rm III}}]\mathbf{P_{\rm II}^{\prime}}\succeq r_{3}\mathbf{P_{\rm II}^{\prime}}(\mathbf{I}-\mathbf{P_{\rm III}})-\epsilon_{3a}\mathbf{I}.\] (J.26) This computation shows that all valid history states \(|\eta_{\mathbf{x}}\rangle\) except for \(\mathbf{x}=\mathbf{0}\) have a nonzero gradient with respect to \(\mathcal{L}_{\rm III}\). The derivation uses a more fine-grained version of subspace gradient monotonicity (Corollary H.2), since the standard version yields insufficient bounds. Roughly, we need to capture the fact that off-diagonal perturbations induce only _second-order_ shifts in the eigenvalues. The final calculated bounds in Eqs.
(J.92) and (J.95) give us \[r_{3}=\Omega\Big(\frac{J_{\rm in}}{T^{2}\ln\beta}\Big)\quad\text{and}\quad\epsilon_{3a}\leq T\,\mathcal{O}\bigg(\frac{T^{3/4}}{\tau^{1/4}}+\frac{1}{\beta}+\frac{1}{\sqrt{J_{\rm in}\tau}}+\mathrm{e}^{-\beta J_{\rm in}}+n\frac{(nTJ_{\rm in})^{2}}{J_{\rm prop}}\bigg).\] (J.27) Using the fact that \(\mathbf{P}_{\rm II}^{\prime}\) is separated by an energy of at least \(J_{\rm prop}-2\|\mathbf{H}_{\rm in}\|\) from the other eigenstates in \(\mathbf{H}_{\rm III}\), we can apply Lemma H.5 to get \[-\mathcal{L}_{\rm III}^{\dagger}[\mathbf{H}_{\rm III}]\succeq r_{3}\mathbf{P}_{\rm II}^{\prime}(\mathbf{I}-\mathbf{P}_{\rm III})-\epsilon_{3}\mathbf{I}\quad\text{where}\quad\epsilon_{3}=\epsilon_{3a}+|S_{0}|\,\mathcal{O}\Big(\frac{1}{\beta}+\frac{1}{\tau}+\frac{1}{\sqrt{J_{\rm prop}\tau}}\Big).\] (J.28) We may ensure \(\epsilon_{3}/r_{3}\leq\delta/6\) by choosing \[\tau\geq\tilde{\Omega}\Big(\frac{T^{15}}{J_{\rm in}^{4}\delta^{4}}\Big),\quad\beta\geq\tilde{\Omega}\Big(\frac{T^{3}}{J_{\rm in}\delta}\Big),\quad\text{and}\quad J_{\rm in}\leq\tilde{\mathcal{O}}\Big(\frac{J_{\rm prop}\delta}{n^{3}T^{5}}\Big).\] (J.29) Altogether. Based on the conditions in Eqs. (J.19), (J.25), and (J.29) and the fact that \(T=\Omega(n)\), a consistent choice of parameters that satisfies all the bounds and ensures \(\epsilon_{j}/r_{j}\leq\delta/6\) is \[\tau=\tilde{\Theta}\Big(\frac{T^{79}}{\delta^{16}}\Big),\quad\beta=\tilde{\Theta}\Big(\frac{T^{19}}{\delta^{4}}\Big),\quad J_{\rm prop}=\tilde{\Theta}\Big(\frac{\delta}{T^{5}}\Big),\quad\text{and}\quad J_{\rm in}=\tilde{\Theta}\Big(\frac{\delta^{3}}{T^{16}}\Big).\] (J.30) Then combining Eqs. (J.17), (J.24), and (J.28) implies that \[\mathbf{I}-\mathbf{P}_{\rm I}\preceq\frac{\epsilon_{1}}{r_{1}}\mathbf{I}-\frac{1}{r_{1}}\mathcal{L}_{\rm III}^{\dagger}[\mathbf{H}_{\rm III}],\qquad\mathbf{P}_{\rm I}-\mathbf{P}_{\rm II}^{\prime}\preceq\frac{\epsilon_{2}}{r_{2}}\mathbf{I}-\frac{1}{r_{2}}\mathcal{L}_{\rm III}^{\dagger}[\mathbf{H}_{\rm III}],\qquad\mathbf{P}_{\rm II}^{\prime}-\mathbf{P}_{\rm III}\preceq\frac{\epsilon_{3}}{r_{3}}\mathbf{I}-\frac{1}{r_{3}}\mathcal{L}_{\rm III}^{\dagger}[\mathbf{H}_{\rm III}].\] (J.31) Note that we have used \(\mathbf{P}_{\rm I}\mathbf{P}_{\rm II}^{\prime}=\mathbf{P}_{\rm II}^{\prime}\) and \(\mathbf{P}_{\rm II}^{\prime}\mathbf{P}_{\rm III}=\mathbf{P}_{\rm III}\). Recall that \(\mathcal{L}_{\rm III}=\sum_{a=1}^{m}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}_{C}}\). Adding all three inequalities together (the left-hand sides telescope to \(\mathbf{I}-\mathbf{P}_{\rm III}\)) and normalizing suitably by the number of jumps \(m=|S_{0}|\), we have \[\mathbf{I}-\mathbf{P}_{\rm III}\preceq-\left(\sum_{j=1}^{3}\frac{m}{r_{j}}\right)\left(\frac{1}{m}\sum_{a=1}^{m}\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}_{C}}[\mathbf{H}_{\rm III}]\right)+\left(\sum_{j=1}^{3}\frac{\epsilon_{j}}{r_{j}}\right)\mathbf{I}.\] (J.32) The above provides the desired negative gradient condition on the full Hamiltonian \(\mathbf{H}_{\rm III}=\mathbf{H}_{C}\).
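The telescoping step above is elementary but easy to misread; the following minimal numpy sketch (with small, diagonally chosen nested projectors standing in for \(\mathbf{P}_{\rm I}\succeq\mathbf{P}_{\rm II}^{\prime}\succeq\mathbf{P}_{\rm III}\)) checks both the nestedness identities and the telescoping of the left-hand sides of Eq. (J.31):

```python
import numpy as np

d = 8
e = np.eye(d)
P_I   = e[:, :5] @ e[:, :5].T   # rank-5 projector
P_IIp = e[:, :3] @ e[:, :3].T   # rank-3 projector, nested inside P_I
P_III = e[:, :1] @ e[:, :1].T   # rank-1 projector, nested inside P_IIp

# Nestedness used in Eq. (J.31): P_I P_II' = P_II' and P_II' P_III = P_III
assert np.allclose(P_I @ P_IIp, P_IIp) and np.allclose(P_IIp @ P_III, P_III)

# The three left-hand sides of Eq. (J.31) telescope to I - P_III
lhs = (np.eye(d) - P_I) + (P_I - P_IIp) + (P_IIp - P_III)
assert np.allclose(lhs, np.eye(d) - P_III)
```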
From Lemma D.2, any \(\epsilon\)-approximate local minimum \(\mathbf{\rho}\) of \(\mathbf{H}_{C}\) under thermal perturbation satisfies \[1-\mathrm{tr}(\mathbf{P}_{\rm III}\mathbf{\rho})\leq\sum_{j=1}^{3}\frac{\epsilon_{j}+m\epsilon}{r_{j}}\leq\frac{\delta}{2}+\epsilon m\sum_{j=1}^{3}\frac{1}{r_{j}}.\] (J.33) By choosing \(\epsilon\leq\frac{\delta}{2m}\big(\sum_{j=1}^{3}1/r_{j}\big)^{-1}=1/\,\mathrm{poly}(n,T,\delta^{-1})\), we guarantee that \(1-\mathrm{tr}(\mathbf{P}_{\rm III}\mathbf{\rho})\leq\delta\). This concludes our proof of Theorem 11.

### Explicit calculations for energy gradients

In this section of the appendix, we provide the missing calculations supporting the claims asserted in Eqs. (J.16), (J.20), and (J.26) in the above proof of Theorem 11.

#### J.3.1 Gradient from \(\mathbf{H}_{\rm clock}\)

Note that \(\mathbf{H}_{\rm I}=\mathbf{H}_{\rm clock}\) is a commuting Hamiltonian \[\mathbf{H}_{\rm I}=\mathbf{H}_{\rm clock}=J_{\rm clock}\sum_{t=1}^{T-1}f_{t}\mathbf{h}_{t,t+1}\quad\text{with}\quad\mathbf{h}_{t,t+1}=\mathbf{I}\otimes|01\rangle\!\langle 01|_{t,t+1},\] (J.34) where we set \(f_{t}=(T-t)/T\) and \(J_{\rm clock}=1\). We start by computing the gradient from a single jump operator, using the simplification from Lemma H.8:

**Lemma J.1**.: _Let the jump operator be \(\mathbf{A}^{t}=\mathbf{I}\otimes\mathbf{X}_{t}\) for each \(t\in[T]\). For all \(t=2,\ldots,T\), we have_ \[-\mathcal{D}_{t}^{\dagger\beta,\tau,\mathbf{H}_{\rm I}}[\mathbf{H}_{\rm I}]\succeq r_{1}\mathbf{h}_{t-1,t}-\epsilon_{0}\mathbf{I}\] (J.35) _where \(r_{1}=\Omega(\frac{1}{T\ln\beta})\) and \(\epsilon_{0}=\mathcal{O}(\sqrt{T/\tau}+[(1+\beta)\ln\tau]/\tau+1/\beta)\)._

Proof.: By Lemma H.8, we have \(\mathcal{D}_{t}^{\dagger\beta,\tau,\mathbf{H}_{\rm I}}[\mathbf{H}_{\rm I}]=\mathcal{D}_{t}^{\dagger\beta,\tau,\mathbf{H}_{\ni t}}[\mathbf{H}_{\ni t}]\). We then proceed in two cases.

**Case 1: \(2\leq t\leq T-1\).** In this case, the relevant part of \(\mathbf{H}_{\rm I}\) that does not commute with \(\mathbf{A}^{t}\) is \[\mathbf{H}_{\ni t}=f_{t-1}|01\rangle\!\langle 01|_{t-1,t}+f_{t}|01\rangle\!\langle 01|_{t,t+1}.\] (J.36) Observe that \(\mathbf{H}_{\ni t}\) has three degenerate eigenspaces with corresponding energies as follows: \[\mathbf{P}_{t}^{0}=\sum_{\mathbf{s}\in\{000,100,110,111\}}|\mathbf{s}\rangle\!\langle\mathbf{s}|_{t-1,t,t+1},\quad E_{0}=0;\qquad\mathbf{P}_{t}^{L}=\sum_{\mathbf{s}\in\{010,011\}}|\mathbf{s}\rangle\!\langle\mathbf{s}|_{t-1,t,t+1},\quad E_{L}=f_{t-1};\qquad\mathbf{P}_{t}^{R}=\sum_{\mathbf{s}\in\{001,101\}}|\mathbf{s}\rangle\!\langle\mathbf{s}|_{t-1,t,t+1},\quad E_{R}=f_{t}.\] (J.37) The possible negative Bohr frequencies and the associated jumps are \[\mathbf{A}_{\nu_{1}}^{t}=\mathbf{P}_{t}^{0}\mathbf{X}_{t}\mathbf{P}_{t}^{L}=|000\rangle\!\langle 010|_{t-1,t,t+1},\quad\nu_{1}=-f_{t-1};\qquad\mathbf{A}_{\nu_{2}}^{t}=\mathbf{P}_{t}^{0}\mathbf{X}_{t}\mathbf{P}_{t}^{R}=|111\rangle\!\langle 101|_{t-1,t,t+1},\quad\nu_{2}=-f_{t};\qquad\mathbf{A}_{\nu_{3}}^{t}=\mathbf{P}_{t}^{R}\mathbf{X}_{t}\mathbf{P}_{t}^{L}=|001\rangle\!\langle 011|_{t-1,t,t+1},\quad\nu_{3}=f_{t}-f_{t-1}=-1/T.\] (J.38) Furthermore, observe that the Bohr frequencies are exactly integer multiples of \(1/T\), so we can lower bound the Bohr-frequency gap \(\Delta_{\nu}(\mathbf{H}_{\ni t})\geq 1/T\).
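For concreteness, the following minimal numpy sketch (with illustrative values of \(T\) and \(t\)) builds the three-qubit block \(\mathbf{H}_{\ni t}\) of Eq. (J.36) and reads off the degenerate eigenspace energies of Eq. (J.37) and the \(1/T\)-spaced Bohr frequencies:

```python
import numpy as np
from itertools import product

T, t = 8, 4                                  # illustrative clock length and site
f = lambda s: (T - s) / T
P01 = np.diag([0.0, 1.0, 0.0, 0.0])          # |01><01| on two qubits
I2 = np.eye(2)
H = f(t - 1) * np.kron(P01, I2) + f(t) * np.kron(I2, P01)   # Eq. (J.36)

# Energies of the basis states (s_{t-1}, s_t, s_{t+1}), cf. Eq. (J.37):
# {000,100,110,111} -> 0, {010,011} -> f_{t-1}, {001,101} -> f_t
for bits in product([0, 1], repeat=3):
    idx = 4 * bits[0] + 2 * bits[1] + bits[2]
    print(bits, H[idx, idx])

# The Bohr frequencies are integer multiples of 1/T, so Delta_nu >= 1/T
E = np.unique(np.diag(H))
print(np.unique(np.round(E[:, None] - E[None, :], 12)))
```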
Then by Lemmas H.3 and H.4, we can replace \(\mathcal{D}_{t}^{\dagger\beta,\tau,\mathbf{H}_{\ni t}}\) with \(\mathcal{D}_{t}^{\dagger\beta,\infty,\mathbf{H}_{\ni t}}\) up to an \(\mathcal{O}(\sqrt{T/\tau}+[(1+\beta)\ln\tau]/\tau)\) error. Letting \(\theta_{j}=\nu_{j}\gamma_{\beta}(\nu_{j})\) for \(j=1,2,3\) (recall \(\gamma_{\beta}\) is given in Eq. (F.4) with \(\Lambda_{0}=1\)), we have \[\mathcal{D}_{t}^{\dagger\beta,\infty,\mathbf{H}_{\ni t}}[\mathbf{H}_{\ni t}]=\sum_{\nu\in B(\mathbf{H}_{\ni t})}\nu\gamma_{\beta}(\nu)\mathbf{A}_{\nu}^{t\dagger}\mathbf{A}_{\nu}^{t}=\big(\theta_{1}|010\rangle\!\langle 010|+\theta_{2}|101\rangle\!\langle 101|+\theta_{3}|011\rangle\!\langle 011|\big)_{t-1,t,t+1}+\mathcal{O}(1/\beta),\] (J.39) where the last error term is due to heating transitions (positive Bohr frequencies), which incur errors of at most \(\|\omega\gamma_{\beta}(\omega)\mathds{1}(\omega>0)\|_{\infty}=\mathcal{O}(1/\beta)\). Note that \(\theta_{1},\theta_{2},\theta_{3}<0\), and furthermore we have \[\min\{|\theta_{1}|,|\theta_{3}|\}\geq\min_{\omega\in[-1,-1/T]}|\omega|\gamma_{\beta}(\omega)=:r_{1}=\Omega\big(\tfrac{1}{T\ln\beta}\big).\] (J.40) Hence, \[-\mathcal{D}_{t}^{\dagger\beta,\infty,\mathbf{H}_{\ni t}}[\mathbf{H}_{\ni t}]\succeq r_{1}\big(|010\rangle\!\langle 010|+|011\rangle\!\langle 011|\big)_{t-1,t,t+1}-\mathcal{O}(1/\beta)\mathbf{I}.\] (J.41) Note that the first two projectors combine to give \(\mathbf{h}_{t-1,t}\). We then return to the finite-\(\tau\) Lindbladian up to the aforementioned error: \[-\mathcal{D}_{t}^{\dagger\beta,\tau,\mathbf{H}_{\ni t}}[\mathbf{H}_{\ni t}]\succeq r_{1}\mathbf{h}_{t-1,t}-\mathcal{O}\Big(\sqrt{\frac{T}{\tau}}+\frac{(1+\beta)\ln\tau}{\tau}+\frac{1}{\beta}\Big)\mathbf{I}.\] (J.42)

**Case 2: \(t=T\).** The relevant part of \(\mathbf{H}_{\rm I}\) in this case is \[\mathbf{H}_{\ni T}=f_{T-1}|01\rangle\!\langle 01|_{T-1,T},\] (J.43) which has two eigenspaces. There is only one negative Bohr frequency, with a corresponding jump operator filtered at \(\nu\): \[\mathbf{A}_{\nu}^{T}=(\mathbf{I}-|01\rangle\!\langle 01|_{T-1,T})\mathbf{X}_{T}|01\rangle\!\langle 01|_{T-1,T}=|00\rangle\!\langle 01|_{T-1,T}\quad\text{where}\quad\nu=-f_{T-1}=-\frac{1}{T}.\] (J.44) Then, \[\mathcal{D}_{t}^{\dagger\beta,\infty,\mathbf{H}_{\ni T}}[\mathbf{H}_{\ni T}]=-\tfrac{1}{T}\gamma_{\beta}(-\tfrac{1}{T})|01\rangle\!\langle 01|_{T-1,T}+\mathcal{O}(\tfrac{1}{\beta})\mathbf{I}.\] (J.45) Note that \(\tfrac{1}{T}\gamma_{\beta}(-\tfrac{1}{T})\geq r_{1}\). Applying Lemmas H.3 and H.4 to return to the finite-\(\tau\) expression, \[-\mathcal{D}_{t}^{\dagger\beta,\tau,\mathbf{H}_{\ni T}}[\mathbf{H}_{\ni T}]\succeq r_{1}\mathbf{h}_{T-1,T}-\mathcal{O}\Big(\sqrt{\frac{T}{\tau}}+\frac{(1+\beta)\ln\tau}{\tau}+\frac{1}{\beta}\Big)\mathbf{I},\] (J.46) which is the advertised result.

We are now ready to prove Eq. (J.16), which we state as the following lemma:

**Lemma J.2**.: _Assume \(1\leq T\leq\tau\)._
_We have_ \[-\mathcal{L}_{\rm I}^{\dagger}[\mathbf{H}_{\rm I}]\succeq r_{1}(\mathbf{I}-\mathbf{P}_{\rm I})-\epsilon_{1a}\mathbf{I}\] (J.47) _where_ \[r_{1}=\Omega\Big(\frac{1}{T\ln\beta}\Big)\quad\text{and}\quad\epsilon_{1a}=\mathcal{O}\Big(\frac{T^{7/4}}{\tau^{1/4}}+\frac{T}{\beta}+\frac{T(1+\beta)\ln\tau}{\tau}\Big).\] (J.48)

Proof.: Note that by linearity, we have \[\mathcal{L}_{\rm I}^{\dagger}[\mathbf{H}_{\rm I}]=\sum_{a\in S}\mathcal{L}_{a}^{\dagger\beta,\tau,\mathbf{H}_{\rm I}}[\mathbf{H}_{\rm I}].\] (J.49) Let \(S_{\rm I}=\{\mathbf{I}\otimes\mathbf{X}_{t}:2\leq t\leq T\}\) be a subset of the jump operators. Then \[-\mathcal{L}_{\rm I}^{\dagger}[\mathbf{H}_{\rm I}]\succeq-\sum_{a\in S_{\rm I}}\mathcal{D}_{a}^{\dagger\beta,\tau,\mathbf{H}_{\rm I}}[\mathbf{H}_{\rm I}]-\mathcal{O}\bigg(|S_{0}|\Big(\frac{\|\mathbf{H}_{\rm I}\|^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\beta}\Big)\bigg)\mathbf{I},\] (J.50) where the error contributions from neglecting the Lamb-shift term and the other jump operators in \(S_{0}\setminus S_{\rm I}\) are bounded by Proposition F.3 and Lemma H.1. Applying Lemma J.1 to the sum on the right-hand side above, we get \[-\sum_{a\in S_{\rm I}}\mathcal{D}_{a}^{\dagger\beta,\tau,\mathbf{H}_{\rm I}}[\mathbf{H}_{\rm I}]\succeq r_{1}\sum_{t=1}^{T-1}\mathbf{h}_{t,t+1}-T\epsilon_{0}\mathbf{I}.\] (J.51) It is not difficult to see that \[\sum_{t=1}^{T-1}\mathbf{h}_{t,t+1}\succeq\mathbf{I}-\mathbf{P}_{\rm I},\] (J.52) that is, the smallest excitation has energy at least \(1\). Hence, \[-\mathcal{L}_{\rm I}^{\dagger}[\mathbf{H}_{\rm I}]\succeq r_{1}(\mathbf{I}-\mathbf{P}_{\rm I})-\epsilon_{1a}\mathbf{I},\] (J.53) where \[\epsilon_{1a}=\mathcal{O}\bigg(|S_{0}|\Big(\frac{\|\mathbf{H}_{\rm I}\|^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\beta}\Big)\bigg)+\mathcal{O}\Big(\frac{T^{3/2}}{\tau^{1/2}}+\frac{T(1+\beta)\ln\tau}{\tau}+\frac{T}{\beta}\Big)=\mathcal{O}\Big(\frac{T^{7/4}}{\tau^{1/4}}+\frac{T}{\tau}+\frac{T}{\beta}+\frac{T^{3/2}}{\tau^{1/2}}+\frac{T(1+\beta)\ln\tau}{\tau}\Big).\] (J.54) The last equality uses that \(|S_{0}|,\|\mathbf{H}_{\rm I}\|=\mathcal{O}(T)\). Since \(1\leq T\leq\tau\), we have \(T^{7/4}/\tau^{1/4}\geq T/\tau\) and \(T^{7/4}/\tau^{1/4}\geq T^{3/2}/\tau^{1/2}\), so we drop the latter two terms for the final error estimate in the lemma statement.

#### J.3.2 Gradient from \(\mathbf{H}_{\rm prop}\)

In this subsection, we prove \[-\mathbf{P}_{\rm I}\mathcal{L}_{\rm II}^{\dagger}[\mathbf{H}_{\rm II}]\mathbf{P}_{\rm I}\succeq r_{2}\mathbf{P}_{\rm I}(\mathbf{I}-\mathbf{P}_{\rm II})-\epsilon_{2a}\mathbf{I}.\] (J.55) Denote \(|t\rangle:=|\eta_{\mathbf{x},t}\rangle\) in what follows. Let \(\mathbf{L}_{+}\) be the raising operator whose only non-trivial action is \[\mathbf{L}_{+}|t\rangle=\sqrt{(t+1)(T-t)}\,|t+1\rangle\quad\text{for each}\quad 0\leq t\leq T-1,\quad\text{and let}\quad\mathbf{L}_{-}:=\mathbf{L}_{+}^{\dagger}.\] (J.56) Furthermore, let \(\mathbf{L}_{x}=\frac{1}{2}(\mathbf{L}_{+}+\mathbf{L}_{-})\), \(\mathbf{L}_{y}=\frac{1}{2\mathrm{i}}(\mathbf{L}_{+}-\mathbf{L}_{-})\), and \(\mathbf{L}_{z}=\sum_{t=0}^{T}(t-T/2)|t\rangle\!\langle t|\).
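As a sanity check on these definitions, the following minimal numpy sketch (with an illustrative small \(T\)) builds \(\mathbf{L}_{\pm}\) and verifies both the angular-momentum commutation relation and the integer spectrum claimed next:

```python
import numpy as np

T = 6
tt = np.arange(T)                               # t = 0, ..., T-1
Lp = np.zeros((T + 1, T + 1))
Lp[tt + 1, tt] = np.sqrt((tt + 1) * (T - tt))   # L_+|t> = sqrt((t+1)(T-t)) |t+1>
Lm = Lp.T
Lx, Ly = (Lp + Lm) / 2, (Lp - Lm) / 2j
Lz = np.diag(np.arange(T + 1) - T / 2)

assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz)        # [Lx, Ly] = i Lz
print(np.round(np.linalg.eigvalsh(T / 2 - Lx), 10))   # 0, 1, ..., T, cf. Eq. (J.58)
```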
These operators form a set of angular momentum operators, satisfying \([\mathbf{L}_{a},\mathbf{L}_{b}]=\mathrm{i}\epsilon_{abc}\mathbf{L}_{c}\) for \(a,b,c\in\{x,y,z\}\). As noted earlier, we have \[\mathbf{P}_{\rm I}\mathbf{H}_{\rm prop}\mathbf{P}_{\rm I}=J_{\rm prop}\Big(\frac{T}{2}-\mathbf{L}_{x}\Big).\] (J.57) The eigenstates are known to be \[|v_{k}\rangle=\mathrm{e}^{\mathrm{i}\pi\mathbf{L}_{y}/2}|k\rangle\quad\text{with eigenvalues}\quad\lambda_{k}=kJ_{\rm prop}\quad\text{for}\quad k=0,1,\ldots,T.\] (J.58) This integer spectrum means the minimum Bohr-frequency gap in the subspace \(\mathbf{P}_{\rm I}\) is \(\Delta_{\nu}(\mathbf{H}_{\rm prop}|_{\mathbf{P}_{\rm I}})=J_{\rm prop}\). Next, we give jump operators with a nontrivial gradient on any excited state of \(\mathbf{P}_{\rm I}\mathbf{H}_{\rm prop}\mathbf{P}_{\rm I}\). These will be the \(1\)-local jumps acting on the clock register, \[\mathbf{I}\otimes\mathbf{Z}_{\ell}\quad\text{for each}\quad 1\leq\ell\leq T,\] (J.59) which nicely respect the block-diagonal structure of \(\mathbf{H}_{\rm prop}\) in the sense that \[\langle\eta_{\mathbf{x},t}|\mathbf{I}\otimes\mathbf{Z}_{\ell}|\eta_{\mathbf{y},t^{\prime}}\rangle=0\quad\text{if}\quad\mathbf{y}\neq\mathbf{x}.\] (J.60) Thus, fixing \(\mathbf{x}\), we merely need to consider the effective jump operators \[\mathbf{P}_{\rm I}(\mathbf{I}\otimes\mathbf{Z}_{\ell})\mathbf{P}_{\rm I}\equiv\mathbf{\sigma}_{\ell}\quad\text{such that}\quad\mathbf{\sigma}_{\ell}|t\rangle=(-1)^{\mathbf{1}_{t\geq\ell}}|t\rangle.\] (J.61)

**Lemma J.3** (Good transition rates).: _For the operators \(\mathbf{\sigma}_{\ell}\) in Eq. (J.61) and any \(0\leq k<T\), we have that_ \[\max_{\ell\in[T]}|\langle v_{k}|\mathbf{\sigma}_{\ell}|v_{k+1}\rangle|\geq\frac{1}{\sqrt{T}}.\] (J.62)

Proof.: Fix any \(0\leq k<T\). Observe that \[\langle v_{k}|\mathbf{L}_{z}|v_{k+1}\rangle=\langle k|\mathrm{e}^{-\mathrm{i}\pi\mathbf{L}_{y}/2}\mathbf{L}_{z}\mathrm{e}^{\mathrm{i}\pi\mathbf{L}_{y}/2}|k+1\rangle=\langle k|\mathbf{L}_{x}|k+1\rangle=\frac{1}{2}\sqrt{(k+1)(T-k)}.\] (J.63) Then, \[\max_{\ell\in[T]}|\langle v_{k}|\mathbf{\sigma}_{\ell}|v_{k+1}\rangle|\geq\frac{1}{T}\sum_{\ell=1}^{T}|\langle v_{k}|\mathbf{\sigma}_{\ell}|v_{k+1}\rangle|\quad\text{(the maximum is at least the mean)}\] \[\geq\frac{2}{T}\left|\langle v_{k}|\mathbf{L}_{z}|v_{k+1}\rangle\right|\quad\text{(triangle inequality for }\mathbf{L}_{z}=-\tfrac{1}{2}\sum_{\ell=1}^{T}\mathbf{\sigma}_{\ell}\text{)}\] \[\geq\frac{1}{\sqrt{T}}\quad\text{(since }\sqrt{(k+1)(T-k)}\geq\sqrt{T}\text{ when }k<T\text{)}\] as advertised.

Now that we understand the connectivity between the \(t\) labels, we may restore the \(2^{n}\)-many labels \(\mathbf{x}\), \[|v_{k}\rangle\to|v_{k,\mathbf{x}}\rangle\quad\text{such that}\quad\langle v_{k^{\prime},\mathbf{y}}|v_{k,\mathbf{x}}\rangle=\delta_{kk^{\prime}}\delta_{\mathbf{x}\mathbf{y}}.\] (J.64) Fortunately, we do not need to address the explicit labels \(\mathbf{x}\), owing to the orthogonality properties in Eq. (J.60). We may now calculate the gradient operator \(\mathbf{P}_{\rm I}\mathcal{L}_{\rm II}^{\dagger}[\mathbf{H}_{\rm II}]\mathbf{P}_{\rm I}\).

**Lemma J.4**.: _Consider the thermal Lindbladian \(\mathcal{L}_{\rm II}=\sum_{a\in S_{0}}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}_{\rm II}}\)._
_Then,_ \[-\mathbf{P}_{\rm I}\mathcal{L}_{\rm II}^{\dagger}[\mathbf{H}_{\rm II}]\mathbf{P}_{\rm I}\succeq r_{2}\mathbf{P}_{\rm I}(\mathbf{I}-\mathbf{P}_{\rm II})-\epsilon_{2a}\mathbf{I}\] (J.65) _for_ \[r_{2}=\Omega\Big(\frac{J_{\rm prop}}{T\ln\beta}\Big)\quad\text{and}\quad\epsilon_{2a}=|S_{0}|\cdot\mathcal{O}\left(\frac{1}{\tau}+\frac{\|\mathbf{H}_{\rm II}\|^{3/4}}{\tau^{1/4}}+\frac{1}{\beta}+\frac{1}{\sqrt{\tau J_{\rm prop}}}\right).\] (J.66)

Proof.: Observe that \(\mathbf{P}_{\rm I}\) projects onto a low-energy subspace of \(\mathbf{H}_{\rm II}\) with an excitation gap of at least \(J_{\rm clock}/T-2\|\mathbf{H}_{\rm prop}\|\), by Weyl's inequality. Furthermore, \(\mathbf{H}_{\rm II}\) restricted to \(\mathbf{P}_{\rm I}\) has eigenvalues that are integer multiples of \(J_{\rm prop}\), so the Bohr-frequency gap in the subspace is \(\Delta_{\nu}(\mathbf{H}_{\rm II}|_{\mathbf{P}_{\rm I}})=J_{\rm prop}\). Assuming \(J_{\rm prop}/2<J_{\rm clock}/T-2\|\mathbf{H}_{\rm prop}\|\), we may apply Lemma H.6 with \(\mathbf{H}=\mathbf{H}_{\rm II}\), \(\mathbf{Q}=\mathbf{P}_{\rm I}\) to get \[\mathbf{P}_{\rm I}\mathcal{L}_{\rm II}^{\dagger}[\mathbf{H}_{\rm II}]\mathbf{P}_{\rm I}\overset{E}{\approx}\sum_{a\in S_{0}}\sum_{\nu\in B(\mathbf{H}_{\rm II}|_{\mathbf{P}_{\rm I}})}\mathbf{P}_{\rm I}\mathbf{A}_{\nu}^{a\dagger}\mathbf{P}_{\rm I}\mathbf{A}_{\nu}^{a}\mathbf{P}_{\rm I}\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega\left|\hat{f}_{\mu}(\omega-\nu)\right|^{2}\mathrm{d}\omega\preceq\sum_{a\in S_{\rm II}}\sum_{\nu\in B(\mathbf{H}_{\rm II}|_{\mathbf{P}_{\rm I}})}\mathbf{P}_{\rm I}\mathbf{A}_{\nu}^{a\dagger}\mathbf{P}_{\rm I}\mathbf{A}_{\nu}^{a}\mathbf{P}_{\rm I}\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega\left|\hat{f}_{\mu}(\omega-\nu)\right|^{2}\mathrm{d}\omega,\] (J.67) where \(\mu=\Delta_{\nu}/2\) and \[E=|S_{0}|\,\mathcal{O}\left(\frac{\|\mathbf{H}\|^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\beta}+\frac{1}{\sqrt{\Delta_{\nu}\tau}}\right).\] (J.68) The second expression uses the negativity of the half-integral to reduce to the following subset of jump operators from Eq. (J.4): \[S_{\rm II}=\{\mathbf{I}\otimes\mathbf{Z}_{\ell}:\ell\in[T]\}.\] (J.69) Let us now explicitly display the matrix elements of the above jump operators in the \(|v_{k,\mathbf{x}}\rangle\) basis: \[\mathbf{P}_{\rm I}(\mathbf{I}\otimes\mathbf{Z}_{\ell})\mathbf{P}_{\rm I}=\sum_{k,k^{\prime},\mathbf{x},\mathbf{y}}|v_{k^{\prime},\mathbf{y}}\rangle\langle v_{k^{\prime},\mathbf{y}}|\mathbf{I}\otimes\mathbf{Z}_{\ell}|v_{k,\mathbf{x}}\rangle\langle v_{k,\mathbf{x}}|=\sum_{k,k^{\prime},\mathbf{x}}|v_{k^{\prime},\mathbf{x}}\rangle\langle v_{k^{\prime},\mathbf{x}}|\mathbf{I}\otimes\mathbf{Z}_{\ell}|v_{k,\mathbf{x}}\rangle\langle v_{k,\mathbf{x}}|,\] where we applied Eq. (J.60) to drop the sum over \(\mathbf{y}\). Thus we can rewrite the RHS of Eq. (J.67) as
\[=\sum_{\ell,k^{\prime},k,\mathbf{x}}|v_{k,\mathbf{x}}\rangle\langle v_{k,\mathbf{x}}|\mathbf{\sigma}_{\ell}|v_{k^{\prime},\mathbf{x}}\rangle\langle v_{k^{\prime},\mathbf{x}}|\mathbf{\sigma}_{\ell}|v_{k,\mathbf{x}}\rangle\langle v_{k,\mathbf{x}}|\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega\left|\hat{f}_{\mu}(\omega-J_{\rm prop}(k^{\prime}-k))\right|^{2}\mathrm{d}\omega\] \[\preceq-\Omega\left(\frac{J_{\rm prop}}{\ln\beta}\right)\sum_{k\geq 1,\mathbf{x}}|v_{k,\mathbf{x}}\rangle\langle v_{k,\mathbf{x}}|\max_{\ell\in[T]}|\langle v_{k,\mathbf{x}}|\mathbf{\sigma}_{\ell}|v_{k-1,\mathbf{x}}\rangle|^{2}\] \[\preceq-\Omega\left(\frac{J_{\rm prop}}{T\ln\beta}\right)\sum_{k\geq 1,\mathbf{x}}|v_{k,\mathbf{x}}\rangle\langle v_{k,\mathbf{x}}|\quad\text{(applying Lemma J.3)}\] \[\preceq-\Omega\left(\frac{J_{\rm prop}}{T\ln\beta}\right)\left(\mathbf{P}_{\rm I}-\sum_{\mathbf{x}}|v_{0,\mathbf{x}}\rangle\langle v_{0,\mathbf{x}}|\right)=-\Omega\Big(\frac{J_{\rm prop}}{T\ln\beta}\Big)\cdot\mathbf{P}_{\rm I}(\mathbf{I}-\mathbf{P}_{\rm II})\] (J.70) The first line uses the orthogonality condition \(\langle v_{k^{\prime},\mathbf{y}}|v_{k,\mathbf{x}}\rangle=\delta_{kk^{\prime}}\delta_{\mathbf{x}\mathbf{y}}\) and the fact that the identical \(\nu\) labels on \(\mathbf{A}_{\nu}^{a\dagger}\) and \(\mathbf{A}_{\nu}^{a}\) enforce that transitions from \(k^{\prime}\) go to the same \(k\) on both sides. The second line uses the negativity of the half-integral to focus on cooling transitions (which include \(k\to k-1\)) and evaluates the integral (which concentrates near \(\omega\approx-J_{\rm prop}\)). Lastly, we combine the above with the error bound on \(E\) to conclude the proof.

#### J.3.3 Gradient from \(\mathbf{H}_{\rm in}\)

The goal of this subsection is to prove \[-\mathbf{P}_{\rm II}^{\prime}\mathcal{L}_{\rm III}^{\dagger}[\mathbf{H}_{\rm III}]\mathbf{P}_{\rm II}^{\prime}\succeq r_{3}\mathbf{P}_{\rm II}^{\prime}(\mathbf{I}-\mathbf{P}_{\rm III})-\epsilon_{3a}\mathbf{I}.\] (J.71) Here \(\mathbf{P}_{\rm II}^{\prime}\) is the projector onto the perturbed low-energy eigenstates of \(\mathbf{H}_{\rm III}=\mathbf{H}_{\rm II}+\mathbf{H}_{\rm in}\), corresponding to \[\mathbf{P}_{\rm II}=\sum_{\mathbf{x}\in\{0,1\}^{n}}|\eta_{\mathbf{x}}\rangle\!\langle\eta_{\mathbf{x}}|,\] (J.72) where \[|\eta_{\mathbf{x}}\rangle=\sum_{t=0}^{T}\sqrt{\xi_{t}}\,|\eta_{\mathbf{x},t}\rangle=\sum_{t=0}^{T}\sqrt{\xi_{t}}\,U_{t}\cdots U_{1}|\mathbf{x}\rangle\otimes|C_{t}\rangle\quad\text{and}\quad\xi_{t}=\frac{1}{2^{T}}\binom{T}{t}.\] (J.73) Recall from Definition 14 that, given a circuit with \(L\) computational gates, we pad it at the beginning and the end with \(t_{0}\) identity gates each, making a total of \(T=2t_{0}+L\) gates. We can understand \(\xi_{t}\) as the probability mass of a symmetric binomial distribution \(\mathrm{Binom}(T,\frac{1}{2})\), which has substantial weight near the center where the interesting computation takes place.

**Proposition J.1** (Lower bound on \(\xi_{t}\) in the center).: _Suppose \(T=2t_{0}+L\) and \(t_{0}=cL^{2}\) are positive integers. Then we have_ \[\xi_{t}\geq\frac{\mathrm{e}^{-1/(4c)}}{T+1}\qquad\text{for each}\quad t\in[t_{0},T-t_{0}].\] (J.74)

Proof.: As a property of the binomial distribution \(\mathrm{Binom}(T,\frac{1}{2})\), we have \(\xi_{t}\geq\xi_{t_{0}}\) for all \(t\in[t_{0},T-t_{0}]\).
Observe that \[\binom{T}{t}^{-1}=(T+1)\int_{0}^{1}x^{t}(1-x)^{T-t}\,\mathrm{d}x\leq(T+1)\Big(\frac{t}{T}\Big)^{t}\Big(1-\frac{t}{T}\Big)^{T-t},\] (J.75) where the inequality comes from the fact that \(\arg\max_{x\in[0,1]}x^{t}(1-x)^{T-t}=t/T\). Then \[\xi_{t_{0}}=\frac{1}{2^{T}}\binom{T}{t_{0}}\geq\frac{f(L)}{T+1}\quad\text{where}\quad f(L)=\frac{1}{2^{T}}\Big(\frac{T}{t_{0}}\Big)^{t_{0}}\Big(\frac{T}{T-t_{0}}\Big)^{T-t_{0}}=\Big(1+\frac{1}{2cL}\Big)^{cL^{2}}\Big(1-\frac{1}{2(cL+1)}\Big)^{cL^{2}+L}.\] (J.76) The last equality is obtained after plugging in \(T=2t_{0}+L\), \(t_{0}=cL^{2}\) and simplifying. We can use the first-derivative test to check that \(f(L)\) is monotonically decreasing, and so \(f(L)\geq\lim_{L\to\infty}f(L)=\mathrm{e}^{-1/(4c)}\). Hence, \(\xi_{t}\geq\xi_{t_{0}}\geq f(L)/(T+1)\geq\mathrm{e}^{-1/(4c)}/(T+1)\).

Using the fact that \(U_{t_{j}-1}\cdots U_{1}\) acts trivially on the \(j\)-th qubit (by the definition of \(t_{j}\)), we see that \(\mathbf{P}_{\rm II}\mathbf{H}_{\rm in}\mathbf{P}_{\rm II}\) is diagonal in the \(|\eta_{\mathbf{x}}\rangle\) basis: \[\langle\eta_{\mathbf{x}}|\mathbf{H}_{\rm in}|\eta_{\mathbf{y}}\rangle=J_{\rm in}\sum_{t,t^{\prime}=0}^{T}\sqrt{\xi_{t}\xi_{t^{\prime}}}\,\langle\eta_{\mathbf{x},t}|\Big(\sum_{j=1}^{n}g_{j}|1\rangle\!\langle 1|_{j}\otimes|C_{t_{j}-1}\rangle\!\langle C_{t_{j}-1}|\Big)|\eta_{\mathbf{y},t^{\prime}}\rangle=\delta_{\mathbf{x},\mathbf{y}}\cdot J_{\rm in}\sum_{j=1}^{n}x_{j}g_{j}\xi_{t_{j}-1}.\] Since \(g_{j}=1/\xi_{t_{j}-1}\) (see Definition 14), the above implies that the \(|\eta_{\mathbf{x}}\rangle\) are eigenstates of \(\mathbf{P}_{\rm II}\mathbf{H}_{\rm in}\mathbf{P}_{\rm II}\) with eigenvalue \(J_{\rm in}\cdot\mathrm{wt}(\mathbf{x})\), where \(\mathrm{wt}(\mathbf{x})\) is the Hamming weight of the bit string \(\mathbf{x}\). While the \(|\eta_{\mathbf{x}}\rangle\) are eigenstates of \(\mathbf{P}_{\rm II}\mathbf{H}_{\rm in}\mathbf{P}_{\rm II}\), unfortunately only \(|\eta_{\mathbf{0}}\rangle\) is an eigenstate of \(\mathbf{H}_{\rm in}\); handling this off-block-diagonal effect will require an additional perturbation step. Let \(\mathbf{H}_{\rm III}=\tilde{\mathbf{H}}_{\rm III}+\mathbf{V}_{\rm III}\) where \[\tilde{\mathbf{H}}_{\rm III}=\mathbf{H}_{\rm II}+\mathbf{P}_{\rm II}\mathbf{H}_{\rm in}\mathbf{P}_{\rm II}+\mathbf{P}_{\rm II}^{\perp}\mathbf{H}_{\rm in}\mathbf{P}_{\rm II}^{\perp}\quad\text{and}\quad\mathbf{V}_{\rm III}=\mathbf{P}_{\rm II}\mathbf{H}_{\rm in}\mathbf{P}_{\rm II}^{\perp}+\mathbf{P}_{\rm II}^{\perp}\mathbf{H}_{\rm in}\mathbf{P}_{\rm II}.\] (J.77) We will also denote by \(\tilde{\mathcal{L}}_{\rm III}\) the thermal Lindbladian with respect to \(\tilde{\mathbf{H}}_{\rm III}\). We start by studying the gradient on \(\tilde{\mathbf{H}}_{\rm III}\) and showing that all its excited states have negative energy gradients. It suffices to focus on the states in \(\mathbf{P}_{\rm II}\), which is a low-energy subspace of eigenstates of \(\tilde{\mathbf{H}}_{\rm III}\) with an excitation gap of at least \(J_{\rm prop}-2\|\mathbf{H}_{\rm in}\|\), by Weyl's inequality.
The effective Hamiltonian in this subspace has the simple form of decoupled qubits, which we write as \[\tilde{\mathbf{H}}_{\rm eff}:=\tilde{\mathbf{H}}_{\rm III}|_{\mathbf{P}_{\rm II}}=\mathbf{P}_{\rm II}\mathbf{H}_{\rm in}\mathbf{P}_{\rm II}=J_{\rm in}\sum_{\mathbf{x}}\mathrm{wt}(\mathbf{x})|\eta_{\mathbf{x}}\rangle\!\langle\eta_{\mathbf{x}}|\equiv J_{\rm in}\sum_{j=1}^{n}(\mathbf{I}-\mathbf{Z}_{j}^{\rm eff})/2,\] (J.78) where \(\mathbf{Z}_{j}^{\rm eff}\) is the Pauli Z operator of a virtual qubit, defined by \(\mathbf{Z}_{j}^{\rm eff}\,|\eta_{\mathbf{x}}\rangle=(-1)^{x_{j}}\,|\eta_{\mathbf{x}}\rangle\). Observe that \(\tilde{\mathbf{H}}_{\rm eff}\) has eigenvalues that are integer multiples of \(J_{\rm in}\), and thus its Bohr-frequency gap is \(\Delta_{\nu}(\tilde{\mathbf{H}}_{\rm eff})=J_{\rm in}\). Hence, assuming \(J_{\rm in}/2<J_{\rm prop}-2\|\mathbf{H}_{\rm in}\|\), we can apply Lemma H.6 to see that the gradient operator sandwiched by \(\mathbf{P}_{\rm II}\) can be understood by fully restricting to the subspace: \[\mathbf{P}_{\rm II}\tilde{\mathcal{L}}_{\rm III}^{\dagger}[\tilde{\mathbf{H}}_{\rm III}]\mathbf{P}_{\rm II}\overset{\tilde{E}_{a}}{\approx}\sum_{a\in S_{0}}\sum_{\nu\in B(\tilde{\mathbf{H}}_{\rm eff})}\mathbf{P}_{\rm II}\mathbf{A}_{\nu}^{a\dagger}\mathbf{P}_{\rm II}\mathbf{A}_{\nu}^{a}\mathbf{P}_{\rm II}\int_{-\infty}^{0}\gamma_{\beta}(\omega)\omega|\hat{f}_{\mu}(\omega-\nu)|^{2}\mathrm{d}\omega\] (J.79) where \(\mu=J_{\rm in}/2\) and \[\tilde{E}_{a}=|S_{0}|\,\mathcal{O}\left(\frac{\|\tilde{\mathbf{H}}_{\rm III}\|^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\beta}+\frac{1}{\sqrt{J_{\rm in}\tau}}\right).\] (J.80) To show that this has good gradients for all states in \(\mathbf{P}_{\rm II}(\mathbf{I}-\mathbf{P}_{\rm III})\), it is sufficient to consider the following subset of the jump operators from Eq. (J.4): \[S_{\rm III}=\left\{\mathbf{X}_{j}\otimes|0\rangle\!\langle 0|_{t_{j}}\right\}_{j=1}^{n}.\] (J.81) These jump operators from \(S_{\rm III}\) effectively flip the individual virtual qubits: \[\langle\eta_{\mathbf{y}}|(\mathbf{X}_{j}\otimes|0\rangle\!\langle 0|_{t_{j}})|\eta_{\mathbf{x}}\rangle=\langle\mathbf{y}|\mathbf{X}_{j}|\mathbf{x}\rangle\sum_{t=0}^{t_{j}-1}\xi_{t}=:\langle\mathbf{y}|\mathbf{X}_{j}|\mathbf{x}\rangle\sqrt{\alpha_{j}},\] (J.82) where we have denoted \(\alpha_{j}=\big(\sum_{t<t_{j}}\xi_{t}\big)^{2}\). Let \[\mathbf{X}_{j}^{\rm eff}=\mathbf{P}_{\rm II}(\mathbf{X}_{j}\otimes|0\rangle\!\langle 0|_{t_{j}})\mathbf{P}_{\rm II}/\sqrt{\alpha_{j}}\quad\text{such that}\quad\|\mathbf{X}_{j}^{\rm eff}\|=1\quad\text{for each}\quad j=1,\dots,n.\] (J.83) Note that \(\mathbf{X}_{j}^{\rm eff}\) is effectively the Pauli X operator for the \(j\)-th virtual qubit. Furthermore, note that since \(t_{j}\in[t_{0},T-t_{0}]\) by our circuit construction in Definition 14, we have \(\alpha_{j}\geq\xi_{t_{0}}^{2}\geq\Omega(1/T^{2})\) by Proposition J.1. Next, we replace \(\tilde{\mathbf{H}}_{\rm III}\) by \(\tilde{\mathbf{H}}_{\rm eff}\) so that we only need to talk about the virtual qubits. We first restore the RHS of Eq.
(J.79) to the thermal Lindbladian form by undoing the approximation in Lemma H.6, which incurs another error bounded by \(\tilde{E}_{a}\): \[\mathbf{P}_{\rm II}\tilde{\mathcal{L}}_{\rm III}^{\dagger}[\tilde{\mathbf{H}}_{\rm III}]\mathbf{P}_{\rm II}\overset{2\tilde{E}_{a}}{\approx}\mathcal{L}^{\dagger\beta,\tau,\tilde{\mathbf{H}}_{\rm eff}}[\tilde{\mathbf{H}}_{\rm eff}].\] (J.84) We then focus on the subset \(S_{\rm III}\) of jump operators and write \[\mathcal{L}^{\dagger\beta,\tau,\tilde{\mathbf{H}}_{\rm eff}}[\tilde{\mathbf{H}}_{\rm eff}]\overset{\tilde{E}_{b}}{\approx}\sum_{j=1}^{n}\alpha_{j}\tilde{\mathcal{L}}_{j}^{\dagger}[\tilde{\mathbf{H}}_{\rm eff}],\] (J.85) where we denoted by \(\tilde{\mathcal{L}}_{j}:=\mathcal{L}_{j}^{\beta,\tau,\tilde{\mathbf{H}}_{\rm eff}}\) the thermal Lindbladian associated with the effective jump operator \(\mathbf{X}_{j}^{\rm eff}\) and pulled out the normalization factor \(\alpha_{j}\). The error \[\tilde{E}_{b}=|S_{0}|\,\mathcal{O}\left(\frac{\|\tilde{\mathbf{H}}_{\rm eff}\|^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\beta}\right)\] (J.86) comes from neglecting the other jump operators \(S_{0}\setminus S_{\rm III}\), and is bounded using Lemma H.1. Since the effective operators on different virtual qubits commute, we can treat them independently. More formally, by Lemma H.8 we have \(\tilde{\mathcal{L}}_{j}^{\dagger}[\tilde{\mathbf{H}}_{\rm eff}]=\mathcal{L}_{j}^{\dagger\beta,\tau,\mathbf{h}_{j}}[\mathbf{h}_{j}]\), where \(\mathbf{h}_{j}=J_{\rm in}(\mathbf{I}-\mathbf{Z}_{j}^{\rm eff})/2\). To bound the global gradient in Eq. (J.85), we first consider cooling a single qubit.

**Lemma J.5** (Cooling a qubit).: _On a qubit, consider the thermal Lindbladian \(\mathcal{L}=\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) with the Hamiltonian \(\mathbf{H}=J_{\rm in}(\mathbf{I}-\mathbf{Z})/2\) and one jump operator \(\mathbf{A}=\mathbf{X}\). Then,_ \[-\mathcal{L}^{\dagger}[\mathbf{H}]\succeq r_{\rm in}(\mathbf{I}-\mathbf{Z})-\epsilon_{\rm in}\mathbf{I}\] (J.87) _where_ \[r_{\rm in}=\Omega\Big(\frac{J_{\rm in}}{\ln\beta}\Big)\quad\text{and}\quad\epsilon_{\rm in}=\mathcal{O}\Big(\frac{J_{\rm in}^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\sqrt{\tau J_{\rm in}}}+\mathrm{e}^{-\beta J_{\rm in}}\Big).\] (J.88)

Proof.: Again, we invoke a series of approximations: \[\mathcal{L}^{\dagger}[\mathbf{H}]\approx\mathcal{D}^{\dagger}[\mathbf{H}]\quad\text{(Proposition F.3)}\] \[\approx\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\omega\mathbf{A}(\omega)^{\dagger}\mathbf{A}(\omega)\,\mathrm{d}\omega\quad\text{(Lemma L.1)}\] \[\approx\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\omega\mathbf{S}(\omega)^{\dagger}\mathbf{S}(\omega)\,\mathrm{d}\omega\quad\text{(secular approximation ($\mu=J_{\rm in}/2$): Corollary K.3)}\] \[=\sum_{\nu\in B(\mathbf{H})}\mathbf{A}_{\nu}^{\dagger}\mathbf{A}_{\nu}\int_{-\infty}^{\infty}\gamma_{\beta}(\omega)\omega\left|\hat{f}_{\mu}(\omega-\nu)\right|^{2}\mathrm{d}\omega\quad\text{(different blocks $\nu\neq\nu^{\prime}$ decohere)}\] \[\preceq-\Omega(J_{\rm in}/\ln\beta)\,|1\rangle\!\langle 1|+\mathcal{O}(\mathrm{e}^{-\beta J_{\rm in}})\,|0\rangle\!\langle 0|.\] (J.89) The last line uses the transition matrix elements for the two Bohr frequencies \(\nu=\pm J_{\rm in}\): \[\mathbf{A}_{+J_{\rm in}}=|1\rangle\!\langle 0|,\qquad\text{and}\qquad\mathbf{A}_{-J_{\rm in}}=|0\rangle\!\langle 1|.\] (J.90) Combining with the error bounds from the approximations above concludes the proof.
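To make Lemma J.5 concrete, here is a minimal numpy sketch of the \(\tau\to\infty\) proxy \(\sum_{\nu}\nu\gamma_{\beta}(\nu)\mathbf{A}_{\nu}^{\dagger}\mathbf{A}_{\nu}\) for the single-qubit gradient; the transition weight below is an assumed Glauber-type stand-in, since the actual \(\gamma_{\beta}\) is specified in Eq. (F.4):

```python
import numpy as np

gamma = lambda w, beta: 1.0 / (1.0 + np.exp(beta * w))   # assumed stand-in for Eq. (F.4)

beta, J = 20.0, 0.5
A_cool = np.array([[0.0, 1.0], [0.0, 0.0]])   # A_{-J} = |0><1| (cooling)
A_heat = A_cool.T                             # A_{+J} = |1><0| (heating)

# tau -> infinity proxy: sum_nu nu * gamma_beta(nu) * A_nu^dag A_nu
D = (-J) * gamma(-J, beta) * (A_cool.T @ A_cool) + J * gamma(J, beta) * (A_heat.T @ A_heat)
print(-D)
# -D ~= J * gamma(-J) |1><1| - J * gamma(J) |0><0|: a definite negative gradient on
# the excited state, with only an exponentially small e^{-beta J} heating term on |0>
```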
Summing up the contributions from the individual qubits, the global gradient satisfies \[-\mathbf{P}_{\rm II}\tilde{\mathcal{L}}_{\rm III}^{\dagger}[\tilde{\mathbf{H}}_{\rm III}]\mathbf{P}_{\rm II}\succeq\sum_{j=1}^{n}\alpha_{j}\mathbf{P}_{\rm II}\left[r_{\rm in}(\mathbf{I}-\mathbf{Z}_{j}^{\rm eff})-\epsilon_{\rm in}\mathbf{I}\right]\mathbf{P}_{\rm II}-(2\tilde{E}_{a}+\tilde{E}_{b})\mathbf{I}\succeq r_{3}\mathbf{P}_{\rm II}(\mathbf{I}-\mathbf{P}_{\rm III})-\tilde{\epsilon}_{3}\mathbf{I},\] (J.91) where we used the fact that \(\sum_{j=1}^{n}\mathbf{P}_{\rm II}(\mathbf{I}-\mathbf{Z}_{j}^{\rm eff})\mathbf{P}_{\rm II}\succeq\mathbf{P}_{\rm II}-\mathbf{P}_{\rm III}\), and denoted \(r_{3}=r_{\rm in}\min_{j}\alpha_{j}\) and \(\tilde{\epsilon}_{3}=2\tilde{E}_{a}+\tilde{E}_{b}+\epsilon_{\rm in}\sum_{j=1}^{n}\alpha_{j}\). Since \(\alpha_{j}=\Omega(1/T^{2})\) and \(\alpha_{j}\leq 1\), we have that \[r_{3}=\Omega\Big(\frac{J_{\rm in}}{T^{2}\ln\beta}\Big)\quad\text{and}\quad\tilde{\epsilon}_{3}\leq 2\tilde{E}_{a}+\tilde{E}_{b}+n\epsilon_{\rm in}=|S_{0}|\,\mathcal{O}\Big(\frac{\|\tilde{\mathbf{H}}_{\rm III}\|^{3/4}}{\tau^{1/4}}+\frac{1}{\tau}+\frac{1}{\beta}+\frac{1}{\sqrt{J_{\rm in}\tau}}+\mathrm{e}^{-\beta J_{\rm in}}\Big).\] (J.92) Lastly, to obtain the gradient for the final Hamiltonian \(\mathbf{H}_{\rm III}\), we need to add back the off-block-diagonal perturbation and show that the gradient persists on the subspace. Directly applying subspace monotonicity (Corollary H.1) yields loose bounds; we instead invoke a finer-grained subspace monotonicity (Corollary H.2) that exploits the fact that \(\mathbf{V}\) is off-block-diagonal and therefore contributes to eigenvalue changes only at _second order_, \(\mathcal{O}(\|\mathbf{V}\|^{2})\). We apply Corollary H.2 with \(\mathbf{H}=\tilde{\mathbf{H}}_{\rm III}\) and \(\mathbf{H}^{\prime}=\mathbf{H}_{\rm III}\), and parameters \[\mathbf{Q}=\mathbf{P}_{\rm II}\quad\text{(low-energy eigensubspace of $\mathbf{H}=\tilde{\mathbf{H}}_{\rm III}$)},\qquad\Delta_{\mathbf{Q}}=J_{\rm prop}-2\|\mathbf{H}_{\rm in}\|\quad\text{(excitation gap)},\] \[\mathbf{V}=\mathbf{P}_{\rm II}\mathbf{H}_{\rm in}(\mathbf{I}-\mathbf{P}_{\rm II})+(\mathbf{I}-\mathbf{P}_{\rm II})\mathbf{H}_{\rm in}\mathbf{P}_{\rm II}\quad\text{(off-block-diagonal perturbation)},\qquad\Delta_{\nu}=J_{\rm in}\quad\text{(subspace Bohr-frequency gap)}.\] Therefore, by Corollary H.2, \[-\mathbf{P}_{\rm II}^{\prime}\mathcal{L}_{\rm III}^{\dagger}[\mathbf{H}_{\rm III}]\mathbf{P}_{\rm II}^{\prime}\succeq r_{3}\mathbf{P}_{\rm II}^{\prime}(\mathbf{I}-\mathbf{P}_{\rm III})-\epsilon_{3a}\mathbf{I}\] (J.93) where \[\epsilon_{3a}\leq\tilde{\epsilon}_{3}+|S_{0}|\cdot\mathcal{O}\bigg(\frac{1}{\tau}+\frac{\|\tilde{\mathbf{H}}_{\rm III}\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\mathbf{Q}}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\mathbf{Q}}/4}}{\beta}+\frac{\|\mathbf{V}\|^{2}}{\Delta_{\mathbf{Q}}}+\|\mathbf{H}_{\mathbf{Q}}\|\cdot\Big(\frac{\|\mathbf{H}_{\mathbf{Q}}\|\,\|\mathbf{V}\|}{\Delta_{\mathbf{Q}}\Delta_{\nu}}+\frac{\|\mathbf{V}\|^{2}}{\Delta_{\mathbf{Q}}\Delta_{\nu}}\Big)+r_{3}\Big(\frac{\|\mathbf{V}\|}{\Delta_{\mathbf{Q}}}+\frac{\|\mathbf{V}\|^{2}}{\Delta_{\nu}\Delta_{\mathbf{Q}}}\Big)\bigg).\] (J.94) Noting that \(|S_{0}|,\|\tilde{\mathbf{H}}_{\rm III}\|=\mathcal{O}(T)\), \(\|\mathbf{V}\|=\mathcal{O}(nTJ_{\rm in})\), \(\|\mathbf{H}_{\mathbf{Q}}\|=nJ_{\rm in}\), and
\(J_{\rm in}\ll J_{\rm prop}\), we simplify the error bound above by keeping the dominant terms to get \[\epsilon_{3a}\leq T\,\mathcal{O}\bigg(\frac{T^{3/4}}{\tau^{1/4}}+\frac{1}{\beta}+\frac{1}{\sqrt{J_{\rm in}\tau}}+\mathrm{e}^{-\beta J_{\rm in}}+n\frac{(nTJ_{\rm in})^{2}}{J_{\rm prop}}\bigg)\] (J.95) as advertised earlier in Eq. (J.27).

## Appendix K Operator Fourier Transform

Recall that the exact form of the thermal Lindbladians in Appendix F involves the operator Fourier transform (OFT) [42] for a set of jump operators \(\mathbf{A}^{a}\). In this appendix, we provide the key properties of the OFT, which are used in the proofs of many statements in Appendices H and L. For any operator \(\mathbf{A}\) (the jump operator), Hermitian operator \(\mathbf{H}\) (the Hamiltonian), and weight function \(f\), the _operator Fourier transform_ (OFT) is an integral over the time evolution of the operator \(\mathbf{A}\), \[\hat{\mathbf{A}}_{f}(\omega):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{A}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}\mathrm{e}^{-\mathrm{i}\omega t}f(t)\,\mathrm{d}t.\] (K.1) Often, we will simply write \(\hat{\mathbf{A}}(\omega)\) when we choose \(f\) to be the default _normalized window function_ \[f_{\tau}(t)=\frac{1}{\sqrt{\tau}}\cdot\begin{cases}1&\text{if}\quad t\in[-\tau/2,\tau/2]\\ 0&\text{else}.\end{cases}\] (K.2) It is usually helpful to consider the energy eigenspaces \(\mathbf{H}=\sum_{i}E_{i}|\psi_{i}\rangle\langle\psi_{i}|=\sum_{E\in\mathrm{Spec}(\mathbf{H})}E\mathbf{P}_{E}\) and write \(\mathbf{A}\) in the following decomposition: \[\mathbf{A}=\sum_{E_{2},E_{1}\in\mathrm{Spec}(\mathbf{H})}\mathbf{P}_{E_{2}}\mathbf{A}\mathbf{P}_{E_{1}}=\sum_{\nu\in B(\mathbf{H})}\mathbf{A}_{\nu}\quad\text{where}\quad\mathbf{A}_{\nu}:=\sum_{E_{2}-E_{1}=\nu}\mathbf{P}_{E_{2}}\mathbf{A}\mathbf{P}_{E_{1}}.\] (K.3) Formally, these energy differences \(\nu\in B(\mathbf{H}):=\{E_{i}-E_{j}\,|\,E_{i},E_{j}\in\mathrm{Spec}(\mathbf{H})\}\) are called the _Bohr frequencies_ (Figure 2), and \(\mathbf{A}_{\nu}\) collects the matrix elements \(\langle\psi_{i}|\mathbf{A}|\psi_{j}\rangle\) that change the energy by \(\nu=E_{i}-E_{j}\). The Bohr frequencies are natural for the Heisenberg-picture evolution of \(\mathbf{A}\) since \[\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{A}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}=\sum_{\nu\in B(\mathbf{H})}\mathrm{e}^{\mathrm{i}\nu t}\mathbf{A}_{\nu}.\] (K.4) Then, executing the Fourier integral yields the frequency-domain representation \[\hat{\mathbf{A}}_{f}(\omega)=\sum_{\nu\in B}\mathbf{A}_{\nu}\hat{f}(\omega-\nu)\quad\text{where}\quad\hat{f}(\omega):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)\mathrm{e}^{-\mathrm{i}\omega t}\,\mathrm{d}t,\] (K.5) which contains a collection of Bohr frequencies \(\nu\) near \(\omega\). Conceptually, we can think of the operator Fourier transform \(\hat{\mathbf{A}}_{f}(\omega)\) as a _smooth_ probe of the components \(\mathbf{A}_{\nu}\): extracting \(\mathbf{A}_{\nu}\) at an exact Bohr frequency \(\nu\) would generally require resolving arbitrarily close eigenvalues.

### Useful properties

We instantiate some useful properties of the operator Fourier transform.
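Before doing so, we record a minimal numpy sketch of Eqs. (K.3) and (K.5) (helper names are our own; operators are represented in the eigenbasis of \(\mathbf{H}\), and the window transform \(\hat{f}_{\tau}\) anticipates Eq. (K.17) below):

```python
import numpy as np

def bohr_decomposition(H, A, decimals=9):
    """Split A into blocks A_nu labeled by Bohr frequencies nu = E_i - E_j, Eq. (K.3).
    All blocks are returned in the eigenbasis of H."""
    E, U = np.linalg.eigh(H)
    Ab = U.conj().T @ A @ U                           # A in the energy eigenbasis
    nu = np.round(E[:, None] - E[None, :], decimals)  # nu_{ij} = E_i - E_j
    return {v: np.where(nu == v, Ab, 0.0) for v in np.unique(nu)}

def oft(H, A, omega, tau):
    """hat{A}_f(omega) = sum_nu A_nu fhat(omega - nu), Eq. (K.5), for the window
    f_tau with fhat(w) = 2 sin(w tau / 2) / (w sqrt(2 pi tau))."""
    fhat = lambda w: np.sqrt(tau / (2 * np.pi)) * np.sinc(w * tau / (2 * np.pi))
    return sum(fhat(omega - v) * Av for v, Av in bohr_decomposition(H, A).items())
```

For instance, with \(\mathbf{H}=\mathrm{diag}(0,1,3)\) and a large \(\tau\), `oft(H, A, omega=-1.0, tau=50.0)` is dominated by the \(\nu=-1\) block, since \(\hat{f}_{\tau}\) peaks at \(0\) with height \(\sqrt{\tau/2\pi}\) and decays like \(2/(|\omega|\sqrt{2\pi\tau})\) away from the peak.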
**Proposition K.1** (Operator Parseval's identity [42]).: _Consider a set of operators \(\{\mathbf{A}^{a}\}_{a}\) and its operator Fourier transform with weight \(f\in\mathcal{L}_{2}(\mathbb{R})\) and Hamiltonian \(\mathbf{H}\). Then, we have the symmetry \((\hat{\mathbf{A}}^{a}_{f}(\omega))^{\dagger}=\hat{\mathbf{A}}^{a\dagger}_{f^{*}}(-\omega)\) and the Parseval-type identities_ \[\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a}_{f}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}_{f}(\omega)\,\mathrm{d}\omega=\sum_{a\in S}\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}\left|f(t)\right|^{2}\mathrm{d}t,\] (K.6) \[\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a}_{f}(\omega)\hat{\mathbf{A}}^{a}_{f}(\omega)^{\dagger}\,\mathrm{d}\omega=\sum_{a\in S}\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{A}^{a}\mathbf{A}^{a\dagger}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}\left|f(t)\right|^{2}\mathrm{d}t.\] (K.7)

Intuitively, the above tells us that the Fourier transforms \(\hat{\mathbf{A}}^{a}_{f}(\omega)\) at different frequencies \(\omega\) are "orthogonal" to each other and that the average of the squared strengths is bounded (reminiscent of a probability distribution). Without this norm-sum constraint from Fourier transforms, one easily gets loose bounds. An alternative view of the above (as a natural purification) will prove useful for manipulating norms of expressions involving \(\hat{\mathbf{A}}^{a}_{f}(\omega)\).

**Corollary K.1** (Purification of operator Fourier transform).: _In the prevailing notation, the abstract operator Fourier transform has a norm bound,_ \[\left\|\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a}_{f}(\omega)\otimes|a\rangle\otimes|\omega\rangle\,\mathrm{d}\omega\right\|\leq\|f\|_{2}\sqrt{\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|},\] (K.8) _where the continuous basis vectors satisfy the normalization_ \[\langle\omega^{\prime}|\omega\rangle=\delta(\omega^{\prime}-\omega).\] (K.9)

Proof.: We multiply by the conjugate: \[\left(\sum_{a^{\prime}\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a^{\prime}}_{f}(\omega^{\prime})^{\dagger}\otimes\langle a^{\prime}|\otimes\langle\omega^{\prime}|\,\mathrm{d}\omega^{\prime}\right)\cdot\left(\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a}_{f}(\omega)\otimes|a\rangle\otimes|\omega\rangle\,\mathrm{d}\omega\right)=\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a}_{f}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}_{f}(\omega)\,\mathrm{d}\omega.\] (K.10) Taking the operator norm and using Proposition K.1 concludes the proof.

The above "purification" trick applies to other quantities, with a liberal choice of summation indices, whether they are \(a\), \(\omega\), or \(\nu\).
**Lemma K.1** (Norm inequalities from operator purification).: _For any operator \(\mathbf{O}\) and any sets of operators \(\mathbf{A}_{i},\mathbf{A}^{\prime}_{j}\) acting on the same Hilbert space, we have that_ \[\left\|\sum_{i,j}\mathbf{A}^{\dagger}_{i}\mathbf{O}\mathbf{A}^{\prime}_{j}G_{ij}\right\|\leq\|\mathbf{G}\|\,\|\mathbf{O}\|\sqrt{\left\|\sum_{i}\mathbf{A}^{\dagger}_{i}\mathbf{A}_{i}\right\|\left\|\sum_{j}\mathbf{A}^{\prime\dagger}_{j}\mathbf{A}^{\prime}_{j}\right\|},\] (K.11) \[\left\|\sum_{i,j}\mathbf{A}^{\dagger}_{i}\mathbf{A}^{\prime}_{j}G_{ij}\right\|\leq\|\mathbf{G}\|\sqrt{\left\|\sum_{i}\mathbf{A}^{\dagger}_{i}\mathbf{A}_{i}\right\|\left\|\sum_{j}\mathbf{A}^{\prime\dagger}_{j}\mathbf{A}^{\prime}_{j}\right\|}.\] (K.12)

Proof.: By homogeneity, it suffices to set the normalization \(\left\|\sum_{j}\mathbf{A}_{j}^{\prime\dagger}\mathbf{A}_{j}^{\prime}\right\|=\left\|\sum_{i}\mathbf{A}_{i}^{\dagger}\mathbf{A}_{i}\right\|=\|\mathbf{O}\|=\|\mathbf{G}\|=1\). Introduce the purifications \[\mathbf{G}^{\prime}:=\mathbf{I}\otimes\sum_{i,j}G_{ij}|i\rangle\langle j|,\qquad\mathbf{O}^{\prime}:=\mathbf{O}\otimes\mathbf{I},\] (K.13) \[\mathbf{V}:=\sum_{i}\mathbf{A}_{i}\otimes|i\rangle,\qquad\mathbf{V}^{\prime}:=\sum_{j}\mathbf{A}_{j}^{\prime}\otimes|j\rangle,\] (K.14) which are all bounded as \(\|\mathbf{G}^{\prime}\|,\|\mathbf{O}^{\prime}\|,\|\mathbf{V}^{\prime}\|,\|\mathbf{V}\|\leq 1\). Then, \[\left\|\sum_{i,j}\mathbf{A}_{i}^{\dagger}\mathbf{O}\mathbf{A}_{j}^{\prime}G_{ij}\right\|=\left\|\mathbf{V}^{\dagger}\mathbf{O}^{\prime}\mathbf{G}^{\prime}\mathbf{V}^{\prime}\right\|\leq 1,\] (K.15) \[\left\|\sum_{i,j}\mathbf{A}_{i}^{\dagger}\mathbf{A}_{j}^{\prime}G_{ij}\right\|=\left\|\mathbf{V}^{\dagger}\mathbf{G}^{\prime}\mathbf{V}^{\prime}\right\|\leq 1.\] (K.16) Rescaling gives the advertised result.

### Secular approximation

Due to the energy-time uncertainty principle, the energies \(\omega\) that are accessed by finite-time quantum algorithms always inherit an uncertainty. Indeed, when we choose the weight function \(f_{\tau}(t)\) in Eq. (K.2) for our operator Fourier transform, in the frequency domain we have \[\hat{\mathbf{A}}_{f}(\omega)=\sum_{\nu\in B}\mathbf{A}_{\nu}\hat{f}(\omega-\nu)\quad\text{where}\quad\hat{f}(\omega)=\frac{\mathrm{e}^{\mathrm{i}\omega\tau/2}-\mathrm{e}^{-\mathrm{i}\omega\tau/2}}{\mathrm{i}\omega\sqrt{2\pi\tau}}\quad\text{when}\quad f(t)=f_{\tau}(t).\] (K.17) Note that with this choice, \(\hat{f}(\omega)\) has a heavy tail \(\sim 1/\omega\), which is reminiscent of unamplified phase estimation. Therefore, even when restricting to jumps with \(\omega<0\), there is a decent chance that \(\hat{\mathbf{A}}_{f}(\omega)\) mistakenly activates a heating transition (\(\nu>0\)) instead of a cooling transition (\(\nu<0\)), unintentionally heating up the system instead of cooling it. To control the resulting error, in this section we introduce the _secular approximation_ [42] of the Fourier-transformed operators \(\hat{\mathbf{A}}_{f}(\omega)\). The secular approximation truncates the Fourier-transformed operators in the frequency domain, discarding Bohr frequencies \(\nu\in B\) that deviate substantially from the frequency label \(\omega\).
Truncation at energy difference \(\mu\) can be achieved with a step function, defining the secular-approximated operators \[\hat{\mathbf{S}}_{f,\mu}(\omega):=\sum_{\nu\in B}\mathbf{A}_{\nu}\hat{f}(\omega-\nu)\cdot\hat{s}_{\mu}(\omega-\nu)\quad\text{where}\quad\hat{s}_{\mu}(\omega):=\mathds{1}(|\omega|<\mu).\] (K.18) We often drop the subscripts \(f,\mu\) for simplicity. The truncation error is an operator whose norm can be bounded by the following (a variant of Corollary K.1).

**Corollary K.2** (Secular approximation).: _In the prevailing notation,_ \[\left\|\sum_{a\in S}\int_{-\infty}^{\infty}(\hat{\mathbf{S}}_{f}^{a}(\omega)-\hat{\mathbf{A}}_{f}^{a}(\omega))\otimes|a\rangle\otimes|\omega\rangle\,\mathrm{d}\omega\right\|\leq\left\|\hat{f}\cdot(1-\hat{s}_{\mu})\right\|_{2}\sqrt{\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|}.\] (K.19)

The error is controlled by the 2-norm of the truncated tail, \[\left\|\hat{f}\cdot(1-\hat{s}_{\mu})\right\|_{2}^{2}=\int_{|\omega|\geq\mu}|\hat{f}(\omega)|^{2}\,\mathrm{d}\omega,\] (K.20) noting that, conveniently, the Fourier transform preserves the 2-norm of functions. For our window function (K.2) in particular, we can integrate the tail to obtain the bound \[\left\|\hat{f}_{\tau}\cdot(1-\hat{s}_{\mu})\right\|_{2}^{2}\leq\frac{4}{\pi\mu\tau}.\] (K.21) This conveniently leads to bounds on other quantities involving operator Fourier transforms \(\hat{\mathbf{A}}^{a}(\omega)\).

**Corollary K.3** (Error from secular approximation).: _For any real function \(\theta:\mathbb{R}\to\mathbb{R}\),_ \[\left\|\sum_{a\in S}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\hat{\mathbf{A}}_{f}^{a}(\omega^{\prime})^{\dagger}\hat{\mathbf{A}}_{f}^{a}(\omega^{\prime})\,\mathrm{d}\omega^{\prime}-\sum_{a\in S}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\hat{\mathbf{S}}_{f,\mu}^{a}(\omega^{\prime})^{\dagger}\hat{\mathbf{S}}_{f,\mu}^{a}(\omega^{\prime})\,\mathrm{d}\omega^{\prime}\right\|\leq 2\,\|\theta\|_{\infty}\left\|\hat{f}\cdot(1-\hat{s}_{\mu})\right\|_{2}\|f\|_{2}\sqrt{\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|}.\] (K.22)

Proof.: It suffices to set the normalization \(\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|=1\).
Introduce the purifications \[\mathbf{F}:=\mathbf{I}\otimes\mathbf{I}\otimes\int_{-\infty}^{\infty}\theta(\omega)|\omega\rangle\langle\omega|\,\mathrm{d}\omega,\] (K.23) \[\mathbf{V}:=\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}_{f}^{a}(\omega)\otimes|a\rangle\otimes|\omega\rangle\,\mathrm{d}\omega,\] (K.24) \[\mathbf{V}^{\prime}:=\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{S}}_{f}^{a}(\omega)\otimes|a\rangle\otimes|\omega\rangle\,\mathrm{d}\omega.\] (K.25) Then, by a telescoping sum, \[(\mathrm{LHS})=\left\|\mathbf{V}^{\dagger}\mathbf{F}\mathbf{V}-\mathbf{V}^{\prime\dagger}\mathbf{F}\mathbf{V}^{\prime}\right\|\leq\left\|(\mathbf{V}^{\dagger}-\mathbf{V}^{\prime\dagger})\mathbf{F}\mathbf{V}\right\|+\left\|\mathbf{V}^{\prime\dagger}\mathbf{F}(\mathbf{V}-\mathbf{V}^{\prime})\right\|.\] (K.26) We conclude the proof using the individual bounds on the operator norms: \[\|\mathbf{V}\|\leq\|f\|_{2}\quad\text{(Corollary K.1)},\qquad\|\mathbf{V}^{\prime}\|\leq\left\|\hat{f}\cdot\hat{s}_{\mu}\right\|_{2}\leq\|f\|_{2}\quad\text{(Corollary K.1)},\qquad\|\mathbf{F}\|\leq\|\theta\|_{\infty},\qquad\|\mathbf{V}^{\prime}-\mathbf{V}\|\leq\left\|\hat{f}\cdot(1-\hat{s}_{\mu})\right\|_{2}\quad\text{(Corollary K.2)}.\] (K.27) This concludes the proof.

## Appendix L Proving monotonicity of energy gradient under level splitting

It will often be helpful to understand how the energy gradients of a Hamiltonian \(\mathbf{H}\) change when a perturbation \(\mathbf{V}\) is added to yield \(\mathbf{H}^{\prime}=\mathbf{H}+\mathbf{V}\). This allows us to characterize the energy gradient of \(\mathbf{H}^{\prime}\) by analyzing the unperturbed spectrum of \(\mathbf{H}\), which is usually much simpler than that of \(\mathbf{H}^{\prime}\). Indeed, this is an important part of our proof strategy for showing our key result that \(\mathbf{H}_{C}\) has no suboptimal local minima (Theorem 11 in Appendix J). The relationship we can prove, which was previously stated in Theorem 10 in Appendix H, takes the form of _monotonicity_. As the name implies, the result only holds in one direction; it fails when \(\mathbf{H}^{\prime}\) and \(\mathbf{H}\) are switched. It is imperative that \(\mathbf{H}\) have a highly degenerate spectrum, with the _Bohr-frequency gap_ \[\Delta_{\nu}(\mathbf{H}):=\min_{\nu_{1}\neq\nu_{2}\in B(\mathbf{H})}|\nu_{1}-\nu_{2}|,\] (L.1) which sets the energy scale relative to which \(\mathbf{V}\) is perturbative. Note that the Bohr-frequency gap is upper bounded by the gap in the spectrum, \[\Delta_{\nu}\leq\Delta_{E}\quad\text{where}\quad\Delta_{E}=\min_{E_{1}\neq E_{2}\in\operatorname{Spec}(\mathbf{H})}|E_{1}-E_{2}|,\] (L.2) though the Bohr-frequency gap can be much smaller than the eigenvalue gap: for example, the energies \(\{-0.99,0,1\}\) have an eigenvalue gap of \(0.99\) but a Bohr-frequency gap of \(0.01\). We now state a more general version of Theorem 10, which we prove in the remainder of this appendix.
**Theorem 12** (Monotonicity of gradient under level splitting, expanded version).: _Consider a Hamiltonian \(\mathbf{H}\) with a highly degenerate spectrum and Bohr-frequency gap \(\Delta_{\nu}:=\min_{\nu_{1}\neq\nu_{2}\in B(\mathbf{H})}|\nu_{1}-\nu_{2}|\), and a perturbed Hamiltonian \(\mathbf{H}^{\prime}=\mathbf{H}+\mathbf{V}\). Suppose the perturbation is weaker than the Bohr-frequency gap, \(\|\mathbf{V}\|\leq\frac{1}{8}\Delta_{\nu}\). For any \(\beta,\tau>0\), let \(\mathcal{L}=\sum_{a\in S}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}}\) and \(\mathcal{L}^{\prime}=\sum_{a\in S}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}^{\prime}}\) be thermal Lindbladians with jumps \(\{\mathbf{A}^{a}\}_{a\in S}\), where \(\|\mathbf{A}^{a}\|\leq 1\) and the transition weight \(\gamma_{\beta}(\omega)\) is given by Eq. (F.4). Let \(\delta_{\lambda}=\max_{j}|\lambda_{j}(\mathbf{H})-\lambda_{j}(\mathbf{H}^{\prime})|\), where \(\lambda_{j}(\mathbf{X})\) is the \(j\)-th largest eigenvalue of \(\mathbf{X}\), and let \(\theta_{\max}=\max_{\nu\in B(\mathbf{H})}|\nu\gamma_{\beta}(\nu)\mathds{1}(\nu\leq\Delta_{\nu}/2)|\). For any two operators \(\mathbf{O}\) and \(\mathbf{O}^{\prime}\), where \([\mathbf{O}^{\prime},\mathbf{H}^{\prime}]=0\), we have the monotone property that_ \[-\mathcal{L}^{\dagger}[\mathbf{H}]\succeq r\mathbf{O}-\epsilon\mathbf{I}\quad\text{implies}\quad-\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\succeq r\mathbf{O}^{\prime}-\epsilon^{\prime}\mathbf{I}\] (L.3) _where_ \[\epsilon^{\prime}\leq\epsilon+|S|\cdot\mathcal{O}\left(\frac{1}{\tau}+\frac{\|\mathbf{H}\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}+\delta_{\lambda}+\theta_{\max}\frac{\|\mathbf{V}\|}{\Delta_{\nu}}+r\|\mathbf{O}-\mathbf{O}^{\prime}\|\right).\] (L.4) _For the special case of \(\mathbf{O}=\mathbf{I}-\mathbf{P}\) and \(\mathbf{O}^{\prime}=\mathbf{I}-\mathbf{P}^{\prime}\), where \(\mathbf{P}\) projects onto an eigensubspace of \(\mathbf{H}\) and \(\mathbf{P}^{\prime}\) projects onto the corresponding perturbed eigensubspace in \(\mathbf{H}^{\prime}\), we have the simpler error bound_ \[\epsilon^{\prime}\leq\epsilon+|S|\cdot\mathcal{O}\left(\frac{1}{\tau}+\frac{\|\mathbf{H}\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}+\left(1+\frac{\Lambda_{0}+r}{\Delta_{\nu}}\right)\|\mathbf{V}\|\right).\] (L.5)

The above result is non-trivial because naive perturbation theory fails: the Lindbladian depends sensitively on the perturbation \(\mathbf{V}\) (as it uses a long Hamiltonian simulation time, \(\tau\|\mathbf{V}\|\gg 1\)). In fact, it fails drastically if \(\mathbf{H}\) has a (nearly) continuous spectrum (the opposite of our premise of gapped degenerate subspaces). We can understand the energy scale associated with the minimum Bohr-frequency gap \(\Delta_{\nu}\) as the meaningful quantity relative to which \(\mathbf{V}\) is a perturbation: \[\frac{1}{\tau}\ll\|\mathbf{V}\|\ll\Delta_{\nu}.\] (L.6) Otherwise, the \(1/\tau\) energy resolution is too coarse to resolve the intended perturbation \(\|\mathbf{V}\|\). The proof of Theorem 12 will be quite involved. Technically, it relies heavily on manipulations of the operator Fourier transform (Appendix K).
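To make the distinction between the Bohr-frequency gap and the eigenvalue gap concrete (the example spectrum mentioned after Eq. (L.2)), a minimal sketch:

```python
import numpy as np

E = np.array([-0.99, 0.0, 1.0])                        # example spectrum from above
B = np.unique(np.round(E[:, None] - E[None, :], 12))   # Bohr frequencies E_i - E_j
print(np.diff(np.sort(E)).min())                       # eigenvalue gap Delta_E = 0.99
print(np.diff(B).min())                                # Bohr-frequency gap Delta_nu = 0.01
```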
The key subroutines of the proof are discussed separately as follows. First, in Appendix L.1, we simplify the intimidating expression for the energy gradient \(\mathcal{L}^{\dagger}[\mathbf{H}]\). Second, in Appendix L.2, we isolate the key non-perturbative argument, which roughly says that level splitting only improves the gradient. We then provide some results from perturbation theory in Appendix L.3. The full proof is assembled in Appendix L.4, with minor supporting calculations in Appendix L.5. We also prove two corollaries of Theorem 12 that apply to subspace gradients in Appendix L.6. Since this appendix only considers thermal Lindbladians, in what follows we will drop the superscripts \(\beta,\tau,\mathbf{H}\), i.e. \(\mathcal{L}\equiv\mathcal{L}^{\beta,\tau,\mathbf{H}}\), \(\mathcal{L}^{\prime}\equiv\mathcal{L}^{\beta,\tau,\mathbf{H}^{\prime}}\), \(\mathcal{D}\equiv\mathcal{D}^{\beta,\tau,\mathbf{H}}\), etc.

### Expressing the energy gradient

The thermal Lindbladian is quite cumbersome to manipulate. Nicely, the energy gradient operator associated with the dissipative part \(\mathcal{D}^{\dagger}[\mathbf{H}]\) admits a much simpler approximate form, up to a controllable error. Combining this with error bounds on the Lamb-shift term \([\mathbf{H}_{LS},\mathbf{H}]\) (Proposition F.3) allows us to approximate the full gradient operator \(\mathcal{L}^{\dagger}[\mathbf{H}]\).

**Lemma L.1** (Expression for energy gradient).: _Consider the operator Fourier transforms \(\hat{\mathbf{A}}^{a}(\omega)\) weighted by the window function \(f_{\tau}\) in Eq. (K.2) with Hamiltonian \(\mathbf{H}\). Then, for any Fourier transform pair \(\gamma(\omega)\) and \(c(t)\), the energy gradient associated with the purely dissipative Lindbladian_ \[\mathcal{D}^{\dagger}[\mathbf{H}]=\sum_{a\in S}\int_{-\infty}^{\infty}\gamma(\omega)\left(\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\mathbf{H}\hat{\mathbf{A}}^{a}(\omega)-\frac{1}{2}\{\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega),\mathbf{H}\}\right)\mathrm{d}\omega\] (L.7) _can be approximated by a simpler form, in the sense that_ \[\left\|\mathcal{D}^{\dagger}[\mathbf{H}]-\sum_{a\in S}\int_{-\infty}^{\infty}\gamma(\omega)\omega\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\,\mathrm{d}\omega\right\|\leq\frac{2}{\sqrt{2\pi\tau}}\cdot\|c\|_{1}\cdot\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|.\] (L.8)

Intuitively, the expression \[\sum_{a\in S}\int_{-\infty}^{\infty}\omega\times\gamma(\omega)\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\,\mathrm{d}\omega\sim(\text{energy difference})\times(\text{rate})\] (L.9) is the simplest proxy one can write down to capture the rate of energy change. Indeed, the frequency \(\omega\) is essentially the energy difference after applying the filtered jump operator \(\hat{\mathbf{A}}^{a}(\omega)\); but because of the energy uncertainty in the operator Fourier transform (i.e., because \(\hat{f}(\omega)\) is not a delta function), this interpretation must be corrected by an error scaling with the energy resolution \(\sim 1/\tau\). The starting point of the calculation is an integration-by-parts trick that relates the Hamiltonian operator \(\mathbf{H}\) to the scalar \(\omega\).
**Proposition L.1** (Integration by parts).: _In the setting of Lemma L.1,_ \[[\mathbf{H},\hat{\mathbf{A}}(\omega)]=\omega\hat{\mathbf{A}}(\omega)+\frac{1}{\mathrm{i}\sqrt{2\pi\tau}}\left(\mathbf{A}(\tau/2)\mathrm{e}^{-\mathrm{i}\omega\tau/2}-\mathbf{A}(-\tau/2)\mathrm{e}^{\mathrm{i}\omega\tau/2}\right).\] (L.10) Proof.: The boundary term is the integral of a total derivative, which we expand by the product rule: \[\frac{1}{\sqrt{2\pi\tau}}\left(\mathbf{A}(\tau/2)\mathrm{e}^{-\mathrm{i}\omega\tau/2}-\mathbf{A}(-\tau/2)\mathrm{e}^{\mathrm{i}\omega\tau/2}\right)=\frac{1}{\sqrt{2\pi\tau}}\int_{-\tau/2}^{\tau/2}\frac{\mathrm{d}}{\mathrm{d}t}\left(\mathbf{A}(t)\mathrm{e}^{-\mathrm{i}\omega t}\right)\mathrm{d}t\] \[=\frac{1}{\sqrt{2\pi\tau}}\int_{-\tau/2}^{\tau/2}\left(\mathrm{i}[\mathbf{H},\mathbf{A}(t)]\mathrm{e}^{-\mathrm{i}\omega t}-\mathrm{i}\omega\mathbf{A}(t)\mathrm{e}^{-\mathrm{i}\omega t}\right)\mathrm{d}t\] \[=\mathrm{i}[\mathbf{H},\hat{\mathbf{A}}(\omega)]-\mathrm{i}\omega\hat{\mathbf{A}}(\omega).\] (L.11) Rearranging concludes the proof. Observe that taking the infinite-time limit \(\tau\to\infty\) (i.e., perfect energy resolution) in the above proposition recovers the relation for the true Bohr frequencies \(\nu\), \[[\mathbf{H},\mathbf{A}_{\nu}]=\nu\mathbf{A}_{\nu}\quad\text{for each}\quad\nu\in B(\mathbf{H}).\] (L.12) At finite \(\tau\), the above leads to simple bounds on the correction term. We now present the proof of Lemma L.1. Proof of Lemma L.1.: We calculate \[\mathcal{D}^{\dagger}[\mathbf{H}] =\sum_{a\in S}\int_{-\infty}^{\infty}\gamma(\omega)\left(\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\mathbf{H}\hat{\mathbf{A}}^{a}(\omega)-\frac{1}{2}\{\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega),\mathbf{H}\}\right)\mathrm{d}\omega\] \[=\sum_{a\in S}\int_{-\infty}^{\infty}\gamma(\omega)\frac{1}{2}\left(\hat{\mathbf{A}}^{a}(\omega)^{\dagger}[\mathbf{H},\hat{\mathbf{A}}^{a}(\omega)]-[\mathbf{H},\hat{\mathbf{A}}^{a}(\omega)^{\dagger}]\hat{\mathbf{A}}^{a}(\omega)\right)\mathrm{d}\omega\] \[=\sum_{a\in S}\int_{-\infty}^{\infty}\gamma(\omega)\omega\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\mathrm{d}\omega+\mathbf{E},\] (L.13) where the error term \(\mathbf{E}\) is given by Proposition L.1 as \[\mathbf{E} :=\frac{-\mathrm{i}}{2\sqrt{2\pi\tau}}\sum_{a\in S}\int_{-\infty}^{\infty}\gamma(\omega)\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\left(\mathbf{A}^{a}(\tau/2)\mathrm{e}^{-\mathrm{i}\omega\tau/2}-\mathbf{A}^{a}(-\tau/2)\mathrm{e}^{\mathrm{i}\omega\tau/2}\right)\mathrm{d}\omega\] \[+\frac{\mathrm{i}}{2\sqrt{2\pi\tau}}\sum_{a\in S}\int_{-\infty}^{\infty}\gamma(\omega)\left(\mathbf{A}^{a}(\tau/2)^{\dagger}\mathrm{e}^{\mathrm{i}\omega\tau/2}-\mathbf{A}^{a}(-\tau/2)^{\dagger}\mathrm{e}^{-\mathrm{i}\omega\tau/2}\right)\hat{\mathbf{A}}^{a}(\omega)\mathrm{d}\omega.\] (L.14) To bound this error term, let us calculate one of the four individual terms as an example \[\sum_{a\in S}\int_{-\infty}^{\infty}\gamma(\omega)\mathbf{A}^{a}(\tau/2)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\mathrm{e}^{\mathrm{i}\omega\tau/2}\mathrm{d}\omega\] (L.15) \[=\sum_{a\in S}\frac{1}{2\pi\sqrt{\tau}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}c(t_{1})\mathrm{e}^{-\mathrm{i}\omega t_{1}}\mathrm{d}t_{1}\int_{-\tau/2}^{\tau/2}\mathrm{e}^{\mathrm{i}\omega\tau/2}\mathbf{A}^{a}(\tau/2)^{\dagger}\mathbf{A}^{a}(t_{2})\mathrm{e}^{-\mathrm{i}\omega t_{2}
}\mathrm{d}t_{2}\mathrm{d}\omega\] (Fourier Transforms) \[=\frac{1}{\sqrt{\tau}}\sum_{a\in S}\int_{-\tau/2}^{\tau/2}c(\tau/2-t_{2})\mathbf{A}^{a}(\tau/2)^{\dagger}\mathbf{A}^{a}(t_{2})\mathrm{d}t_{2}.\] (using \[\int_{-\infty}^{\infty}\mathrm{e}^{-\mathrm{i}\omega t}\mathrm{d}\omega=2\pi\delta(t)\] ) We can bound the operator norm of this term by \(\|c\|_{1}/\sqrt{\tau}\) after applying the triangle inequality and using the fact that \[\left\|\sum_{a\in S}\mathbf{A}^{a}(\tau/2)^{\dagger}\mathbf{A}^{a}(t_{2})\right\|=\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}(\tau/2)\mathbf{A}^{a}(t_{2})\right\|\] \[\leq\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\otimes\langle a|\right\|\cdot\left\|\sum_{a\in S}\mathbf{A}^{a}\otimes|a\rangle\right\|=\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|.\] (L.16) Repeating a similar argument for the other three terms concludes the proof. ### Monotonicity of rates Thermal Lindbladians generally depend sensitively on the Hamiltonian, as they involve Hamiltonian simulation for a long time \(\tau\), \[\mathrm{e}^{\mathrm{i}\mathbf{H}\tau}\quad\text{for}\quad\tau\gg 1.\] (L.17) Therefore, even adding a small perturbation to the Hamiltonian \(\mathbf{H}^{\prime}=\mathbf{H}+\mathbf{V}\) may have a non-perturbative effect on \(\mathcal{L}\), since \[\tau\left\|\mathbf{V}\right\|\gg 1\quad\text{implies}\quad\left\|\mathcal{L}-\mathcal{L}^{\prime}\right\|_{1-1}\gg 0.\] (L.18) In other words, at large \(\tau\), it is not at all obvious why the Lindbladians \(\mathcal{L},\mathcal{L}^{\prime}\) are related. Indeed, the original Davies generator (\(\tau\to\infty\)) is unstable against arbitrarily small perturbations to the Hamiltonian; whenever energy degeneracy is broken, the Lindbladian can change substantially. Nevertheless, what we can show as a compromise is that the rates only _increase_ if the perturbation merely introduces _level splitting_; this amounts to the assumption that the original Hamiltonian has highly degenerate subspaces with a Bohr-frequency gap \(\Delta_{\nu}\) as another large energy scale, \[\frac{1}{\tau}\ll\left\|\mathbf{V}\right\|\ll\Delta_{\nu}.\] (L.19) Intuitively, level splitting causes _decoherence_ (and only decoherence) in the Bohr frequencies; for large \(\tau\), the Lindbladian can indeed tell the transitions \(\omega,\omega^{\prime}\) apart if the Bohr frequencies are sufficiently different. Fortunately, even though decoherence can change the Lindbladian by a lot, we establish a certain _monotonicity_ of transition rates. That is, \(\mathcal{L}^{\prime}\) must have transition rates at least as good as those of \(\mathcal{L}\). A good example of the operator \(\mathbf{O}\) in Lemma L.2 below is an energy subspace projector. However, the argument works for general \(\mathbf{O}\), which makes it more flexible to use.
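The non-perturbative sensitivity in Eq. (L.18) can be seen concretely in a two-level toy model (our sketch, not from the paper; same flat-window convention as the sketch above): once \(\tau\|\mathbf{V}\|\gg 1\), the operator Fourier transform — and hence any Lindbladian built from it — changes by \(\mathcal{O}(1)\) even though \(\|\mathbf{V}\|\) is tiny.

```python
import numpy as np

# Sketch (ours): with the flat window, A_hat(w)_ij = A_ij * fhat(w - (E_i - E_j)).
def fhat(x, tau):
    return np.sqrt(tau / (2 * np.pi)) * np.sinc(x * tau / (2 * np.pi))

def A_hat(E, A, w, tau):                 # entrywise in the eigenbasis of H
    return A * fhat(w - (E[:, None] - E[None, :]), tau)

A = np.array([[0.0, 1.0], [1.0, 0.0]])
E, delta = np.array([0.0, 1.0]), 1e-3    # ||V|| = delta shifts one level
Ep = np.array([0.0, 1.0 + delta])
for tau in (1e1, 1e4):                   # tau*||V|| = 0.01 vs. 10
    w = 1.0 + delta                      # probe at the shifted Bohr frequency
    d = np.linalg.norm(A_hat(E, A, w, tau) - A_hat(Ep, A, w, tau))
    print(tau, d / np.linalg.norm(A_hat(Ep, A, w, tau)))
# relative difference: tiny when tau*||V|| << 1, but O(1) when tau*||V|| >> 1
```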
**Lemma L.2** (Decoherence increases the rates).: _For any set of operators \(\{\mathbf{A}^{a}\}_{a\in S}\), suppose there exists an operator \(\mathbf{O}\) such that_ \[\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\succeq\mathbf{O}\quad\text{where}\quad[\mathbf{O},\mathbf{H}]=0.\] (L.20) _Then, the operator Fourier Transforms \(\hat{\mathbf{A}}^{a}(\omega)\) (subscript \(f\) omitted) for some normalized weight \(\int_{-\infty}^{\infty}|f(t)|^{2}\mathrm{d}t=1\) and Hamiltonian \(\mathbf{H}\) satisfy_ \[\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\mathrm{d}\omega\succeq\mathbf{O}.\] (L.21) Proof.: By Proposition K.1, \[\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}^{a}(\omega)\mathrm{d}\omega-\mathbf{O} =\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i}\mathbf{H}t}\left(\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right)\mathrm{e}^{-\mathrm{i}\mathbf{H}t}|f(t)|^{2}\mathrm{d}t-\mathbf{O}\] \[=\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i}\mathbf{H}t}\left(\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}-\mathbf{O}\right)\mathrm{e}^{-\mathrm{i}\mathbf{H}t}|f(t)|^{2}\mathrm{d}t\] \[=\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{X}^{\dagger}\cdot\mathbf{X}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}|f(t)|^{2}\mathrm{d}t\] \[\succeq 0.\] (L.22) The second equality uses that \(\int_{-\infty}^{\infty}|f(t)|^{2}\mathrm{d}t=1\) and that \(\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{O}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}=\mathbf{O}\). The last line establishes the PSD order using the assumption \(\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\succeq\mathbf{O}\), which guarantees an operator \(\mathbf{X}\) such that \(\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}-\mathbf{O}=\mathbf{X}^{\dagger}\mathbf{X}\). Together, we establish the desired statement. ### Perturbation theory of eigenstates and eigenvalues We state a few facts about perturbed eigenspaces and eigenvalues that will be useful in the proofs. **Proposition L.2** (Davis-Kahan \(\sin\Theta\) theorem (see also Theorem VII.3.1 of [78])).: _Let \(\mathbf{H}\) and \(\tilde{\mathbf{H}}\) be two equal-sized Hermitian matrices. Let \(\mathbf{P}\) be the projector onto eigenstates of \(\mathbf{H}\) with eigenvalue in an interval \([a,b]\). Let \(\tilde{\mathbf{P}}^{\perp}\) be the projector onto eigenstates of \(\tilde{\mathbf{H}}\) with eigenvalues outside the interval \([a-\delta,b+\delta]\). Then_ \[\|\mathbf{P}\tilde{\mathbf{P}}^{\perp}\|\leq\|\mathbf{H}-\tilde{\mathbf{H}}\|/\delta.\] (L.23) _Here \(\|\cdot\|\) is the spectral norm (or any unitarily invariant norm)._ Furthermore, the following fact bounds errors on perturbed eigenvalues: **Proposition L.3** (Weyl's inequality).: _For any two equal-sized Hermitian matrices \(\mathbf{H}\) and \(\tilde{\mathbf{H}}\), we have \(|\lambda_{j}(\mathbf{H})-\lambda_{j}(\tilde{\mathbf{H}})|\leq\|\mathbf{H}-\tilde{\mathbf{H}}\|\) for all \(j\), where \(\lambda_{j}(\mathbf{X})\) is the \(j\)-th largest eigenvalue of matrix \(\mathbf{X}\)._ Together, these facts imply that **Lemma L.3**.: _Let \(\mathbf{H}\) and \(\tilde{\mathbf{H}}=\mathbf{H}+\mathbf{V}\) be Hamiltonians. Let \(\mathbf{P}\) be the projector onto eigenstates of \(\mathbf{H}\) with eigenvalues in some interval \([a,b]\), which are separated from the other eigenvalues by a gap of at least \(\Delta\). If \(\|\mathbf{V}\|\leq\Delta/4\), then_ 1.
_There exists a spectral projector_ \(\tilde{\mathbf{P}}\) _onto eigenstates of_ \(\tilde{\mathbf{H}}\) _with eigenvalues in_ \([a-\Delta/4,b+\Delta/4]\)_, which are separated from the other eigenvalues by a gap of at least_ \(\Delta/2\)_._ 2. \(\|\mathbf{P}-\tilde{\mathbf{P}}\|\leq 8\|\mathbf{V}\|/\Delta\)_._ Proof.: The existence of \(\tilde{\mathbf{P}}\) (item 1) holds because of Proposition L.3 above. Then observe that \[\|\mathbf{P}-\tilde{\mathbf{P}}\|=\|\mathbf{P}-\mathbf{P}\tilde{\mathbf{P}}+\mathbf{P}\tilde{\mathbf{P}}-\tilde{\mathbf{P}}\|\leq\|\mathbf{P}\tilde{\mathbf{P}}^{\perp}\|+\|\mathbf{P}^{\perp}\tilde{\mathbf{P}}\|\leq 8\|\mathbf{V}\|/\Delta,\] (L.24) where the last inequality is obtained by applying Proposition L.2 with \(\delta=\Delta/4\) to bound \(\|\mathbf{P}\tilde{\mathbf{P}}^{\perp}\|\) and \(\|\mathbf{P}^{\perp}\tilde{\mathbf{P}}\|\). **Lemma L.4** (Off-block-diagonal perturbation).: _Consider a block diagonal Hermitian matrix \(\mathbf{D}=\mathbf{D}_{1}+\mathbf{D}_{2}\), where the two blocks correspond to orthogonal subspace projectors \(\mathbf{P}_{1}\) and \(\mathbf{I}-\mathbf{P}_{1}=\mathbf{P}_{2}\) and are separated by an eigenvalue gap of at least \(\Delta\). Add an off-block-diagonal Hermitian perturbation \(\mathbf{V}=\mathbf{V}_{12}+\mathbf{V}_{21}\) such that \(\|\mathbf{V}\|\leq\Delta/4\). Then, there is an anti-Hermitian operator \(\mathbf{B}\) and an absolute constant \(C_{0}\) such that_ \[\mathbf{D}+\mathbf{V}=\mathrm{e}^{-\mathbf{B}}\mathbf{D}\mathrm{e}^{\mathbf{B}}+(\mathbf{D}+\mathbf{V}-\mathrm{e}^{-\mathbf{B}}\mathbf{D}\mathrm{e}^{\mathbf{B}})\] \[\text{where}\qquad\|\mathbf{B}\|\leq C_{0}\frac{\|\mathbf{V}\|}{\Delta},\qquad\text{and}\qquad\left\|\mathrm{e}^{\mathbf{B}}(\mathbf{D}+\mathbf{V})\mathrm{e}^{-\mathbf{B}}-\mathbf{D}\right\|\leq C_{0}\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta}.\] (L.25) _This implies the sorted eigenvalues of \(\mathbf{D}\) are perturbed by at most \(C_{0}\left\|\mathbf{V}\right\|^{2}/\Delta\)._ We remark that the scaling with respect to \(\|\mathbf{V}\|\) is consistent with perturbation theory: the _angle_ change is first-order \(\sim\frac{\|\mathbf{V}\|}{\Delta}\), and the _eigenvalue_ change is second-order \(\sim\frac{\|\mathbf{V}\|^{2}}{\Delta}\). Note that for a diagonal perturbation, the eigenvalue change is only bounded by \(\sim\|\mathbf{V}\|\). Proof.: Observe that \[\mathrm{e}^{\mathbf{B}}(\mathbf{D}+\mathbf{V})\mathrm{e}^{-\mathbf{B}}=\mathbf{D}+(\mathbf{V}+[\mathbf{B},\mathbf{D}])+[\mathbf{B},\mathbf{V}]+\sum_{k=2}^{\infty}\frac{1}{k!}\,\mathrm{ad}_{\mathbf{B}}^{k}(\mathbf{D}+\mathbf{V}).\] (L.26) Let us choose \(\mathbf{B}\) to cancel the first-order term in \(\mathbf{V}\), i.e., \[\mathbf{V}=-[\mathbf{B},\mathbf{D}].\] (L.27) We can solve for \(\mathbf{B}\) by working in the eigenbasis of \(\mathbf{D}=\sum_{i}D_{i}|\psi_{i}\rangle\!\langle\psi_{i}|\). Then denoting \(O_{ij}=\langle\psi_{i}|\mathbf{O}|\psi_{j}\rangle\), we can rewrite Eq. (L.27) as \[V_{ij}=B_{ij}(D_{i}-D_{j})\qquad\text{or}\qquad B_{ij}=\frac{V_{ij}}{D_{i}-D_{j}}.\] (L.28) Note \(B_{ij}=V_{ij}=0\) whenever \(|D_{i}-D_{j}|<\Delta\), since \(\mathbf{V}\) is off-block-diagonal and the blocks are separated by at least \(\Delta\); in particular, \(B_{ij}\) is well-defined.
Hence, we can solve for \(\mathbf{B}\) using the Heisenberg picture Fourier transform: \[B_{ij} =\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)\mathrm{e}^{\mathrm{i}(D_{i}-D_{j})t}V_{ij}\mathrm{d}t=V_{ij}\hat{f}(D_{j}-D_{i})\quad\text{for each}\quad i,j\] \[\text{or}\quad\mathbf{B} =\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)\underbrace{\mathrm{e}^{\mathrm{i}\mathbf{D}t}\mathbf{V}\mathrm{e}^{-\mathrm{i}\mathbf{D}t}}_{=:\mathbf{V}(t)}\mathrm{d}t,\] (L.29) where we choose the function \(f(t)\) whose Fourier transform matches the reciprocal at sufficiently large values, \[\hat{f}(\omega)=\frac{1}{-\omega}\quad\text{when}\quad|\omega|\geq\Delta,\] (L.30) but remains "nice" near \(\omega=0\). One example is to use a smooth bump function \[-\frac{1}{\omega}\cdot b(\frac{\omega}{\Delta})\quad\text{where}\quad b(x)=\begin{cases}1&\quad\text{if}\quad|x|\geq 1\\ \mathcal{O}(x^{2})&\quad\text{if}\quad|x|\approx 0.\end{cases}\] (L.31) For concreteness, we take \[b(x)=\begin{cases}1-\exp(\frac{1}{1-\frac{1}{x^{2}}})&\quad\text{if}\quad|x|<1\\ 1&\quad\text{else.}\end{cases}\] (L.32) Then, applying the triangle inequality and using the unitary invariance of the operator norm, \[\|\mathbf{B}\|\leq\frac{1}{\sqrt{2\pi}}\,\|f\|_{1}\cdot\|\mathbf{V}\|\leq C_{0}\frac{\|\mathbf{V}\|}{\Delta}.\] (L.33) The last inequality bounds the Fourier transform by the change of variable \(x=\omega/\Delta\) and leaves a constant \(C_{0}\) that depends on the inverse Fourier transform of the "dimensionless function" \(b(x)/x\). This bound on \(\mathbf{B}\) then allows us to control the higher-order errors: \[\mathrm{e}^{\mathbf{B}}(\mathbf{D}+\mathbf{V})\mathrm{e}^{-\mathbf{B}}=\mathbf{D}+\mathbf{V}+\int_{0}^{1}\mathrm{e}^{\mathbf{B}s}[\mathbf{B},\mathbf{V}]\mathrm{e}^{-\mathbf{B}s}\mathrm{d}s+[\mathbf{B},\mathbf{D}]+\int_{0}^{1}\mathrm{e}^{\mathbf{B}s}[\mathbf{B},[\mathbf{B},\mathbf{D}]]\mathrm{e}^{-\mathbf{B}s}(1-s)\mathrm{d}s.\] Substituting \([\mathbf{B},\mathbf{D}]=-\mathbf{V}\) and rearranging, we get \[\mathrm{e}^{\mathbf{B}}(\mathbf{D}+\mathbf{V})\mathrm{e}^{-\mathbf{B}}-\mathbf{D}=\int_{0}^{1}\mathrm{e}^{\mathbf{B}s}[\mathbf{B},\mathbf{V}]\mathrm{e}^{-\mathbf{B}s}s\mathrm{d}s.\] (L.34) Applying the triangle inequality, we get \[\|\mathrm{e}^{\mathbf{B}}(\mathbf{D}+\mathbf{V})\mathrm{e}^{-\mathbf{B}}-\mathbf{D}\|\leq\|[\mathbf{B},\mathbf{V}]\|\int_{0}^{1}s\mathrm{d}s\leq C_{0}\frac{\|\mathbf{V}\|^{2}}{\Delta}.\] (L.35) Finally, to obtain the sorted eigenvalues of \(\mathbf{D}\), use the fact that \(\mathrm{e}^{-\mathbf{B}}\) is unitary and apply Weyl's inequality to \(\mathrm{e}^{-\mathbf{B}}\mathbf{D}\mathrm{e}^{\mathbf{B}}\). ### Proof of Theorem 12 We combine the above ingredients for the proof of Theorem 12. In what follows, let \(E_{j}\), \(\mathbf{P}_{E_{j}}\), \(\nu\) be the eigenvalues, eigenspace projectors, and Bohr frequencies of the unperturbed Hamiltonian \(\mathbf{H}\); furthermore, let \(E_{j}^{\prime}\), \(\mathbf{P}_{E_{j}^{\prime}}\), \(\nu^{\prime}\) be their counterparts for the perturbed Hamiltonian \(\mathbf{H}^{\prime}\).
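Before assembling the proof, a quick numerical sanity check of the perturbation facts of Appendix L.3 may be useful (our sketch, not from the paper; the dimensions and magnitudes are illustrative assumptions). It probes Weyl's inequality, the projector bound of Lemma L.3, and the second-order eigenvalue shift of Lemma L.4 for an off-block-diagonal perturbation:

```python
import numpy as np

rng = np.random.default_rng(1)
Delta = 1.0
D = np.diag([0.0, 0.0, 0.0, Delta, Delta])   # two degenerate blocks, gap Delta
V = np.zeros((5, 5))
V[:3, 3:] = 0.01 * rng.normal(size=(3, 2))   # off-block-diagonal entries only
V = V + V.T                                  # Hermitian, ||V|| ~ 1e-2

E = np.linalg.eigvalsh(D)
Ep = np.linalg.eigvalsh(D + V)
nV = np.linalg.norm(V, 2)
print(np.abs(E - Ep).max(), "<=", nV)                 # Weyl (Proposition L.3)
print(np.abs(E - Ep).max(), "= O(", nV**2 / Delta, ")")  # Lemma L.4: second order

def proj(H, k):                              # projector onto k lowest eigenvectors
    U = np.linalg.eigh(H)[1][:, :k]
    return U @ U.conj().T

print(np.linalg.norm(proj(D, 3) - proj(D + V, 3), 2),
      "<=", 8 * nV / Delta)                  # Lemma L.3, item 2
```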
It will be helpful to display the structure of the Bohr frequencies and the energy eigenspaces under perturbation by \[\mathbf{A}^{a} =\sum_{\nu\in B(\mathbf{H})}\sum_{E_{1}-E_{2}=\nu}\mathbf{P}_{E_{1}}\mathbf{A}^{a}\mathbf{P}_{E_{2}}=\sum_{\nu\in B(\mathbf{H})}\mathbf{A}^{a}_{\nu}\] \[=\sum_{\nu^{\prime}\in B(\mathbf{H}^{\prime})}\sum_{E_{1}^{\prime}-E_{2}^{\prime}=\nu^{\prime}}\mathbf{P}_{E_{1}^{\prime}}\mathbf{A}^{a}\mathbf{P}_{E_{2}^{\prime}}=\sum_{\nu^{\prime}\in B(\mathbf{H}^{\prime})}\mathbf{A}^{a}_{\nu^{\prime}}=\sum_{\nu\in B(\mathbf{H})}\mathbf{A}^{a}_{\approx\nu}\] (L.36) where we defined \[\mathbf{A}^{a}_{\approx\nu}:=\sum_{\nu^{\prime}\in B(\mathbf{H}^{\prime}),\ \nu^{\prime}\approx\nu}\mathbf{A}^{a}_{\nu^{\prime}}\qquad\text{and}\quad\nu^{\prime}\approx\nu\iff\left|\nu^{\prime}-\nu\right|\leq 2\delta_{\lambda}.\] (L.37) In other words, the perturbed set of Bohr frequencies can be identified with the original degenerate blocks according to eigenvalue perturbation bounded by \(\delta_{\lambda}\), under the assumption that the perturbation is weaker than the Bohr frequency differences, \(\Delta_{\nu}>4\delta_{\lambda}\). This structure is crucial for proving the monotonicity of gradients; see Figure 4. For later use, we also define \(\hat{\mathbf{A}}^{a}_{\approx\nu}(\omega^{\prime})\) to be the operator Fourier transform of \(\mathbf{A}^{a}_{\approx\nu}\) with respect to the perturbed Hamiltonian \(\mathbf{H}^{\prime}\), and consider its secular approximation \(\hat{\mathbf{S}}^{a}_{\approx\nu}(\omega^{\prime})\) at truncation scale \(\mu\), i.e., \[\hat{\mathbf{A}}^{a}_{\approx\nu}(\omega^{\prime})=\sum_{\nu^{\prime}\approx\nu}\mathbf{A}^{a}_{\nu^{\prime}}\hat{f}_{\tau}(\omega^{\prime}-\nu^{\prime}),\qquad\hat{\mathbf{S}}^{a}_{\approx\nu}(\omega^{\prime})=\sum_{\nu^{\prime}\approx\nu}\mathbf{A}^{a}_{\nu^{\prime}}\hat{f}_{\tau}(\omega^{\prime}-\nu^{\prime})\mathds{1}(|\omega^{\prime}-\nu^{\prime}|<\mu).\] (L.38) In what follows, we will denote \(\theta(\omega)=\gamma(\omega)\omega\). It is worth recalling the following bounds, \[\left\|\mathbf{A}^{a}\right\|,\ \left\|f_{\tau}\right\|_{2},\ \frac{\left\|c_{\beta}\right\|_{1}}{\sqrt{2\pi}}\leq 1,\qquad\left\|\theta\right\|_{\infty}=\mathcal{O}(\Lambda_{0}),\qquad\Lambda_{0}=\Theta(1).\] (L.39) Our strategy for proving Theorem 12 is to rewrite the energy gradients \(\mathcal{L}^{\dagger}[\mathbf{H}]\) and \(\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\) in a form amenable to Lemma L.2, relating a set of operators and their operator Fourier transforms. **Step 1.** For the perturbed Hamiltonian \(\mathbf{H}^{\prime}\), we apply a sequence of approximations to establish \[\left\|\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\theta_{-}(\nu)\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a}_{\approx\nu}(\omega^{\prime})^{\dagger}\hat{\mathbf{A}}^{a}_{\approx\nu}(\omega^{\prime})\mathrm{d}\omega^{\prime}-\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\right\|\leq\epsilon_{A}\] (L.40) for some \(\epsilon_{A}>0\) and a function \(\theta_{-}\) to be specified shortly. Recall we write \(\mathbf{A}\overset{E}{\approx}\mathbf{B}\) if \(\left\|\mathbf{A}-\mathbf{B}\right\|\leq E\). \[\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}] \overset{E_{1}}{\approx}\mathcal{D}^{\prime\dagger}[\mathbf{H}^{\prime}]\] (bounds on the Lamb-shift: Prop.
F.3 ) \[\overset{E_{2}}{\approx}\sum_{a\in S}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\hat{\mathbf{A}}^{a}(\omega^{\prime})^{\dagger}\hat{\mathbf{A}}^{a}(\omega^{\prime})\mathrm{d}\omega^{\prime}\] (simplify: Lemma L.1) \[\overset{E_{3}}{\approx}\sum_{a\in S}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\hat{\mathbf{S}}^{a}(\omega^{\prime})^{\dagger}\hat{\mathbf{S}}^{a}(\omega^{\prime})\mathrm{d}\omega^{\prime}\] (secular approximation: Corollary K.3) \[=\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\hat{\mathbf{S}}^{a}_{\approx\nu}(\omega^{\prime})^{\dagger}\hat{\mathbf{S}}^{a}_{\approx\nu}(\omega^{\prime})\mathrm{d}\omega^{\prime}\] (different blocks \(\nu\)s decohere) \[\overset{E_{4}}{\approx}\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\hat{\mathbf{A}}^{a}_{\approx\nu}(\omega^{\prime})^{\dagger}\hat{\mathbf{A}}^{a}_{\approx\nu}(\omega^{\prime})\mathrm{d}\omega^{\prime}\] (undoing secular approximation: Corollary K.3) \[\overset{E_{5}}{\approx}\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\theta(\nu)\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a}_{\approx\nu}(\omega^{\prime})^{\dagger}\hat{\mathbf{A}}^{a}_{\approx\nu}(\omega^{\prime})\mathrm{d}\omega^{\prime}\] (rounding \(\theta\)) \[\overset{E_{6}}{\approx}\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\theta_{-}(\nu)\int_{-\infty}^{\infty}\hat{\mathbf{A}}^{a}_{\approx\nu}(\omega^{\prime})^{\dagger}\hat{\mathbf{A}}^{a}_{\approx\nu}(\omega^{\prime})\mathrm{d}\omega^{\prime}.\] (dropping positive values of \(\theta\)) The approximations \(E_{1},E_{2}\) are bounded by \[E_{1} \leq\mathcal{O}\left(\frac{\left\|\mathbf{H}\right\|^{3/4}}{\tau^{1/4}}\left\|c_{\beta}\right\|_{1}\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|\right)=\left|S\right|\mathcal{O}(\frac{\left\|\mathbf{H}\right\|^{3/4}}{\tau^{1/4}})\] (L.41) \[E_{2} \leq\frac{2\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|}{\sqrt{2\pi}\tau}\left\|c_{\beta}\right\|_{1}=\left|S\right|\mathcal{O}(\frac{1}{\tau}).\] (L.42) In the approximations \(E_{3}\) and \(E_{4}\), we choose the secular approximation parameter \(\mu\) such that \[\mu<(\Delta_{\nu}-4\delta_{\lambda})/2\] (L.43) with associated errors given by Corollary K.3 as \[E_{3} \leq 2\left\|\theta\right\|_{\infty}\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|\left\|\hat{f}_{\tau}\cdot(1-\hat{s}_{\mu})\right\|_{2}\left\|f_{\tau}\right\|_{2}=\left|S\right|\mathcal{O}(\Lambda_{0}\sqrt{\frac{1}{\mu\tau}}),\] (L.44) \[E_{4} \leq 2\left\|\theta\right\|_{\infty}\left\|\sum_{a,\nu}\mathbf{A}^{a\dagger}_{\approx\nu}\mathbf{A}^{a}_{\approx\nu}\right\|\left\|\hat{f}_{\tau}\cdot(1-\hat{s}_{\mu})\right\|_{2}\left\|f_{\tau}\right\|_{2}=\left|S\right|\mathcal{O}(\Lambda_{0}\sqrt{\frac{1}{\mu\tau}}),\] (L.45) where we applied Eq. (137) to bound \(\left\|\hat{f}_{\tau}\cdot(1-\hat{s}_{\mu})\right\|_{2}\leq\sqrt{4/\pi\mu\tau}\), and used Proposition L.5 to bound the spectral norm of the sum of jump operators in the second line. To justify the equality on the fourth line (different blocks \(\nu\)s decohere), observe that the choice of the parameter \(\mu\) in Eq.
(L.43) implies \[\hat{\mathbf{S}}^{a}(\omega^{\prime})=\sum_{\nu\in B(\mathbf{H})}\hat{\mathbf{S}}^{a}_{\approx\nu}(\omega^{\prime})\mathds{1}(\left|\omega^{\prime}-\nu\right|<\mu+2\delta_{\lambda})=\sum_{\nu\in B(\mathbf{H})}\hat{\mathbf{S}}^{a}_{\approx\nu}(\omega^{\prime})\mathds{1}(\left|\omega^{\prime}-\nu\right|<\Delta_{\nu}/2),\] (L.46) where \(\hat{\mathbf{S}}^{a}_{\approx\nu}(\omega^{\prime})\) is given in Eq. (L.38). This ensures that for any given \(\omega^{\prime}\), \(\hat{\mathbf{S}}^{a}(\omega^{\prime})\) can activate at most _one_ block of transitions with Bohr frequencies closest to \(\nu\) (see Figure 4). Consequently, \[\hat{\mathbf{S}}^{a\dagger}(\omega^{\prime})\hat{\mathbf{S}}^{a}(\omega^{\prime})=\sum_{\nu\in B(\mathbf{H})}\hat{\mathbf{S}}^{a\dagger}_{\approx\nu}(\omega^{\prime})\hat{\mathbf{S}}^{a}_{\approx\nu}(\omega^{\prime}).\] (L.47) Next, for the approximation \(E_{5}\), we define the following "rounded" function \(\bar{\theta}(\omega^{\prime})\), where an input \(\omega^{\prime}\) close to \(\nu\in B(\mathbf{H})\) is assigned the same value \(\theta(\nu)\), with uniqueness of \(\nu\) guaranteed by Eq. (L.46): \[\bar{\theta}(\omega^{\prime}):=\begin{cases}\theta(\nu)&\text{if}\quad\left|\omega^{\prime}-\nu\right|\leq\mu+2\delta_{\lambda}\quad\text{for}\quad\nu\in B(\mathbf{H})\\ \theta(\omega^{\prime})&\text{else}\end{cases}.\] (L.48) This lets us formally pull \(\theta(\omega^{\prime})\) out of the integral. Of course, this rounding introduces an error scaling with the energy spread multiplied by the derivative, \[\left\|\bar{\theta}-\theta\right\|_{\infty}\leq(2\mu+4\delta_{\lambda})\cdot\left\|\mathrm{d}\theta/\mathrm{d}\omega\right\|_{\infty}.\] (L.49) Roughly, this error quantifies how the energy gradient (i.e., \(\theta(\omega)=\gamma_{\beta}(\omega)\omega\)) changes due to perturbation in the Bohr frequency. Thus, \[E_{5}\leq\left\|\theta-\bar{\theta}\right\|_{\infty}\left\|\sum_{a,\nu}\mathbf{A}^{a\dagger}_{\approx\nu}\mathbf{A}^{a}_{\approx\nu}\right\|\left\|\hat{f}_{\tau}\cdot\hat{s}_{\mu}\right\|_{2}^{2}=\left|S\right|\mathcal{O}\left(2\mu+4\delta_{\lambda}\right)\] (L.50) where we applied Propositions L.4 and L.5 (deferred to Appendix L.5), and used the fact that \(\|\hat{f}_{\tau}\hat{s}_{\mu}\|_{2}\leq\|\hat{f}_{\tau}\|_{2}\leq 1\). Finally, in the last approximation \(E_{6}\), we define the truncated weight \[\theta_{-}(\nu)=\theta(\nu)\mathds{1}(\nu\leq\Delta_{\nu}/2).\] (L.51) This truncation has the property that \(\theta_{-}(\nu)\leq 0\) for each \(\nu\in B(\mathbf{H})\), which ensures the last line is negative semidefinite. Thus, \[E_{6}\leq\left\|\theta-\theta_{-}\right\|_{\infty}\left\|\sum_{a,\nu}\mathbf{A}^{a\dagger}_{\approx\nu}\mathbf{A}^{a}_{\approx\nu}\right\|\left\|f_{\tau}\right\|_{2}^{2}\leq\left|S\right|\left\|\theta-\theta_{-}\right\|_{\infty}\] \[\text{where}\qquad\left\|\theta-\theta_{-}\right\|_{\infty}\leq\max_{\omega\geq\Delta_{\nu}/2}\omega\gamma_{\beta}(\omega)\leq\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta},\] (L.52) using the tail bound in Eq. (F.3).
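This tail bound is easy to probe numerically. The sketch below is ours, not from the paper, and it assumes the Gaussian-damped Glauber form of the transition weight suggested by Proposition L.4 below, \(\gamma_{\beta}(\omega)=\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}/(1+\mathrm{e}^{\beta\omega})\); under that assumption it checks \(\max_{\omega\geq\Delta_{\nu}/2}\omega\gamma_{\beta}(\omega)\leq\mathrm{e}^{-\beta\Delta_{\nu}/4}/\beta\) for a few parameter choices:

```python
import numpy as np

def theta(w, beta, L0=1.0):
    # theta(w) = w * gamma_beta(w), with the assumed Gaussian-damped Glauber weight
    return w * np.exp(-w**2 / (2 * L0**2)) / (1 + np.exp(beta * w))

for beta, Dnu in [(2.0, 1.0), (5.0, 2.0), (10.0, 0.5)]:
    w = np.linspace(Dnu / 2, 20, 20001)          # grid on [Dnu/2, infinity)
    lhs = np.abs(theta(w, beta)).max()           # max_{w >= Dnu/2} |w gamma_beta(w)|
    print(lhs, "<=", np.exp(-beta * Dnu / 4) / beta)
```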
Altogether, \[\epsilon_{A} =E_{1}+E_{2}+E_{3}+E_{4}+E_{5}+E_{6}\] \[\leq\mathcal{O}\left(|S|\left(\frac{1}{\tau}+\frac{\left\|\mathbf{H}\right\|^{3/4}}{\tau^{1/4}}+\mu+\frac{\Lambda_{0}}{\sqrt{\mu\tau}}+\delta_{\lambda}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}\right)\right).\] (L.53) We then choose \(\mu=\min(\Lambda_{0}^{2/3}/\tau^{1/3},(\Delta_{\nu}-4\delta_{\lambda})/4)\) so as to optimize the error \(\mathcal{O}(\mu+\Lambda_{0}/\sqrt{\mu\tau})\) while respecting the constraint that \(\mu<(\Delta_{\nu}-4\delta_{\lambda})/2\). This choice implies \(\Lambda_{0}/\sqrt{\mu\tau}\leq\Lambda_{0}^{2/3}/\tau^{1/3}+2\Lambda_{0}/\sqrt{(\Delta_{\nu}-4\delta_{\lambda})\tau}\leq\mathcal{O}(\Lambda_{0}^{2/3}/\tau^{1/3}+\Lambda_{0}/\sqrt{\Delta_{\nu}\tau})\), where we used \(\delta_{\lambda}\leq\left\|\mathbf{V}\right\|\leq\Delta_{\nu}/8\), which is a combination of Proposition L.3 and the assumption in the theorem statement. This yields the following error bound \[\epsilon_{A}\leq\mathcal{O}\left(|S|\left(\frac{1}{\tau}+\frac{\left\|\mathbf{H}\right\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\delta_{\lambda}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}\right)\right).\] (L.54) **Step 2.** For the original Hamiltonian \(\mathbf{H}\), we may repeat the above argument with trivial perturbation (\(\mathbf{V}=0\)) to get \[\left\|\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\theta_{-}(\nu)(\mathbf{A}_{\approx\nu}^{a})^{\dagger}\mathbf{A}_{\approx\nu}^{a}-\mathcal{L}^{\dagger}[\mathbf{H}]\right\|\leq\epsilon_{B}\] (L.55) for some \(\epsilon_{B}>0\). In more detail, we have \[\mathcal{L}^{\dagger}[\mathbf{H}] \overset{E_{7}}{\approx}\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\int_{-\infty}^{\infty}\theta_{-}(\nu)\hat{\mathbf{A}}_{\nu}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}_{\nu}^{a}(\omega)\mathrm{d}\omega\] (setting \(\mathbf{V}=0\) from above) \[=\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\theta_{-}(\nu)(\mathbf{A}_{\nu}^{a})^{\dagger}\mathbf{A}_{\nu}^{a}\quad\text{ (Proposition K.1 and }[\mathbf{A}_{\nu}^{a\dagger}\mathbf{A}_{\nu}^{a},\mathbf{H}]=0\text{ for each }\nu\in B(\mathbf{H})\text{)}\] \[\overset{E_{8}}{\approx}\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\theta_{-}(\nu)(\mathbf{A}_{\approx\nu}^{a})^{\dagger}\mathbf{A}_{\approx\nu}^{a}.\] (L.56) The second line is the operator Parseval identity, where the time evolution simplifies due to the commutativity \([\mathbf{A}_{\nu}^{a\dagger}\mathbf{A}_{\nu}^{a},\mathbf{H}]=0\). The error \(E_{7}\) can be bounded by the same bounds for \(\epsilon_{A}\) in Eq. (L.54) by setting \(\mathbf{V}=0\) (i.e., \(\mathbf{A}_{\approx\nu}^{a}\rightarrow\mathbf{A}_{\nu}^{a}\)). The last line is a brute-force rewriting of \(\mathbf{A}_{\nu}^{a}\) into \(\mathbf{A}_{\approx\nu}^{a}\), which acts on eigenstates of \(\mathbf{H}^{\prime}\) instead of \(\mathbf{H}\), with error \(E_{8}\) bounded by perturbation theory.
This rewriting allows us to prove Theorem 12 by directly applying Lemma L.2 between the following set of operators and their Fourier transforms \[\{\sqrt{|\theta_{-}(\nu)|}\mathbf{A}_{\approx\nu}^{a}\}_{a,\nu}\quad\text{and}\quad\{\sqrt{|\theta_{-}(\nu)|}\hat{\mathbf{A}}_{\approx\nu}^{a}(\omega^{\prime})\}_{a,\nu,\omega^{\prime}}\quad\text{for the perturbed Hamiltonian}\quad\mathbf{H}^{\prime}.\] We give an explicit error bound on \(E_{8}\) in Proposition L.6 (deferred to Appendix L.5), which yields \[E_{8}=\mathcal{O}\left(\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|\theta_{\max}\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}\right)=\mathcal{O}\left(|S|\,\theta_{\max}\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}\right),\] (L.57) where we used the bound \(\max_{\nu\in B(\mathbf{H})}|\theta_{-}(\nu)|=\theta_{\max}\) provided in the theorem statement. Collecting the errors, we bound \(\epsilon_{B}=E_{7}+E_{8}\leq\epsilon_{A}+E_{8}\). **Step 3.** Now we may finish the proof of Theorem 12 by applying Lemma L.2. First, Eq. (L.55) implies \[-\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\theta_{-}(\nu)(\mathbf{A}_{\approx\nu}^{a})^{\dagger}\mathbf{A}_{\approx\nu}^{a}+\epsilon_{B}\mathbf{I}\succeq-\mathcal{L}^{\dagger}[\mathbf{H}]\] \[\succeq r\mathbf{O}-\epsilon\mathbf{I}\] (by assumption) \[\succeq r\mathbf{O}^{\prime}-(r\|\mathbf{O}^{\prime}-\mathbf{O}\|+\epsilon)\mathbf{I}.\] (L.58) Similarly, Eq. (L.40) implies \[-\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]+\epsilon_{A}\mathbf{I} \succeq-\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\theta_{-}(\nu)\int_{-\infty}^{\infty}\hat{\mathbf{A}}_{\approx\nu}^{a}(\omega^{\prime})^{\dagger}\hat{\mathbf{A}}_{\approx\nu}^{a}(\omega^{\prime})\mathrm{d}\omega^{\prime}\] \[\succeq r\mathbf{O}^{\prime}-(\epsilon+\epsilon_{B}+r\left\|\mathbf{O}-\mathbf{O}^{\prime}\right\|)\mathbf{I}.\] (L.59) Note in the last step we used the assumption that \([\mathbf{O}^{\prime},\mathbf{H}^{\prime}]=0\) and applied Lemma L.2. Hence, we have shown that \(-\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\succeq r\mathbf{O}^{\prime}-\epsilon^{\prime}\mathbf{I}\), where \(\epsilon^{\prime}=\epsilon+\epsilon_{A}+\epsilon_{B}+r\|\mathbf{O}-\mathbf{O}^{\prime}\|\) can be bounded by \[\epsilon^{\prime}\leq\epsilon+\mathcal{O}\bigg{(}\left|S\right|\Big{(}\frac{1}{\tau}+\frac{\left\|\mathbf{H}\right\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}+\delta_{\lambda}+\theta_{\max}\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}+r\|\mathbf{O}-\mathbf{O}^{\prime}\|\Big{)}\bigg{)}.\] (L.60) **A simpler bound.** We now consider the special case of \(\mathbf{O}=\mathbf{I}-\mathbf{P}\), \(\mathbf{O}^{\prime}=\mathbf{I}-\mathbf{P}^{\prime}\) to derive a simpler bound as in the theorem statement. Note we have \(\|\mathbf{O}-\mathbf{O}^{\prime}\|=\|\mathbf{P}-\mathbf{P}^{\prime}\|\leq 8\|\mathbf{V}\|/\Delta_{E}\leq 8\|\mathbf{V}\|/\Delta_{\nu}\), using Lemma L.3 and the fact that the spectral gap is lower bounded by the Bohr-frequency gap, \(\Delta_{E}\geq\Delta_{\nu}\). Furthermore, generally \(\theta_{\max}=\|\theta_{-}\|_{\infty}=\mathcal{O}(\Lambda_{0})\) for our choice of \(\gamma_{\beta}(\omega)\) in Eq. (F.4). And we always have \(\delta_{\lambda}\leq\|\mathbf{V}\|\) by Proposition L.3. Plugging these into Eq.
(L.60), we have \[\epsilon^{\prime}\leq\epsilon+\mathcal{O}\bigg{(}\left|S\right|\Big{(}\frac{1}{\tau}+\frac{\|\mathbf{H}\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}+\|\mathbf{V}\|+(\Lambda_{0}+r)\frac{\|\mathbf{V}\|}{\Delta_{\nu}}\Big{)}\bigg{)}.\] (L.61) This concludes our proof of Theorem 12. ### Supplementary calculations In this section, we provide the missing calculations that prove the propositions used in the proof in the previous section. **Proposition L.4** (Bounds on the derivative).: _There exists an absolute constant \(C\) such that for any \(\beta,\Lambda_{0}\),_ \[\left\|\frac{\mathrm{d}}{\mathrm{d}\omega}\theta(\omega)\right\|_{\infty}=\mathcal{O}\left(\left\|\frac{\mathrm{d}}{\mathrm{d}\omega}\left(\frac{\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}}{1+\mathrm{e}^{\beta\omega}}\omega\right)\right\|_{\infty}\right)\leq C.\] (L.62) Proof.: By the product rule, \[\left|\frac{\mathrm{d}}{\mathrm{d}\omega}\left(\frac{\omega\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}}{1+\mathrm{e}^{\beta\omega}}\right)\right|=\left|\frac{\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}-\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}\omega^{2}/\Lambda_{0}^{2}}{1+\mathrm{e}^{\beta\omega}}-\frac{\mathrm{e}^{-\omega^{2}/2\Lambda_{0}^{2}}\beta\omega\mathrm{e}^{\beta\omega}}{(1+\mathrm{e}^{\beta\omega})^{2}}\right|\leq(const.)\] (L.63) using the changes of variables \(x=\beta\omega\) and \(y=\omega/\Lambda_{0}\) to obtain the absolute constant bound. **Proposition L.5**.: _In the prevailing notation,_ \[\left\|\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\mathbf{A}_{\nu}^{a\dagger}\mathbf{A}_{\nu}^{a}\right\|,\quad\left\|\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\mathbf{A}_{\approx\nu}^{a\dagger}\mathbf{A}_{\approx\nu}^{a}\right\|\leq\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|.\] (L.64) Proof.: We focus on \(\mathbf{A}_{\approx\nu}^{a}\) since \(\mathbf{A}_{\nu}^{a}\) is a special case. Resolve the identity by nearby energy subspaces, \[\mathbf{I}=\sum_{E}\mathbf{P}_{E}=\sum_{\bar{E}}\underbrace{\sum_{E\approx\bar{E}}\mathbf{P}_{E}}_{=:\mathbf{P}_{\approx\bar{E}}}\quad\text{such that}\quad\mathbf{A}_{\approx\nu}^{a}=\sum_{\bar{E}_{2}-\bar{E}_{1}=\nu}\mathbf{P}_{\approx\bar{E}_{2}}\mathbf{A}^{a}\mathbf{P}_{\approx\bar{E}_{1}}.\] (L.65) Now, we calculate \[\left\|\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}\mathbf{A}_{\approx\nu}^{a\dagger}\mathbf{A}_{\approx\nu}^{a}\right\| =\left\|\sum_{a\in S,\bar{E}_{2},\bar{E}_{1}}\mathbf{P}_{\approx\bar{E}_{1}}\mathbf{A}^{a\dagger}\mathbf{P}_{\approx\bar{E}_{2}}\mathbf{A}^{a}\mathbf{P}_{\approx\bar{E}_{1}}\right\|=\left\|\sum_{a\in S,\bar{E}_{1}}\mathbf{P}_{\approx\bar{E}_{1}}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\mathbf{P}_{\approx\bar{E}_{1}}\right\|\] \[=\max_{\bar{E}_{1}}\left\|\mathbf{P}_{\approx\bar{E}_{1}}\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\mathbf{P}_{\approx\bar{E}_{1}}\right\|\leq\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|.\] (L.66) The last line uses that the operator norm of a block-diagonal matrix equals the maximum among the blocks.
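The proof above is a pinching argument, and Proposition L.5 is easy to confirm with random data (our sketch, not from the paper): decomposing the jumps into Bohr-frequency components can only shrink the norm of the summed positive operator.

```python
import numpy as np

rng = np.random.default_rng(2)
E = np.array([0.0, 0.0, 1.0, 2.0])       # spectrum of H, working in its eigenbasis
As = [rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)) for _ in range(3)]

total = sum(A.conj().T @ A for A in As)  # sum_a A^a_dag A^a
split = np.zeros((4, 4), dtype=complex)
for A in As:
    for nu in {round(a - b, 9) for a in E for b in E}:   # Bohr frequencies
        Anu = np.where(np.isclose(E[:, None] - E[None, :], nu), A, 0)
        split += Anu.conj().T @ Anu      # sum_{a,nu} (A^a_nu)_dag A^a_nu
print(np.linalg.norm(split, 2), "<=", np.linalg.norm(total, 2))
```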
**Proposition L.6** (Jumps with perturbed Hamiltonian).: _In the prevailing notation, and for any function \(h(\nu)\), we have that_ \[\left\|\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}h(\nu)(\mathbf{A}_{\nu}^{a})^{\dagger}\mathbf{A}_{\nu}^{a}-\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}h(\nu)(\mathbf{A}_{\approx\nu}^{a})^{\dagger}\mathbf{A}_{\approx\nu}^{a}\right\|\leq\mathcal{O}\!\left(\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|\cdot\max_{\nu\in B(\mathbf{H})}|h(\nu)|\cdot\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}\right)\!.\] (L.67) Proof.: It suffices to set \(\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|=1\). Consider the operator Fourier transform with a smooth bump weight \(\left\|g\right\|_{2}=1\), \[\hat{\mathbf{A}}_{g,\mathbf{H}}^{a}(\omega)\quad\text{for}\quad\hat{g}(\omega)\propto\begin{cases}0&\text{if}\quad|\omega|\geq\frac{\Delta_{\nu}}{2}\\ \mathcal{O}(1)&\text{else,}\end{cases}\] (L.68) which automatically decoheres different Bohr-frequency blocks (i.e., there is no need to apply the secular approximation). Also, extend the function \(h\) locally, \[h(\omega):=h(\nu)\quad\text{if}\quad|\omega-\nu|\leq 2\left\|\mathbf{V}\right\|\quad\text{for}\quad\nu\in B(\mathbf{H}).\] (L.69) Then, \[\sum_{a\in S}\int_{-\infty}^{\infty}h(\omega)\hat{\mathbf{A}}_{g,\mathbf{H}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}_{g,\mathbf{H}}^{a}(\omega)\mathrm{d}\omega =\sum_{a\in S}\sum_{\nu,\nu^{\prime}\in B(\mathbf{H})}\int_{-\infty}^{\infty}h(\omega)(\mathbf{A}_{\nu^{\prime}}^{a})^{\dagger}\mathbf{A}_{\nu}^{a}\hat{g}^{*}(\omega-\nu^{\prime})\hat{g}(\omega-\nu)\mathrm{d}\omega\] \[=\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}h(\nu)(\mathbf{A}_{\nu}^{a})^{\dagger}\mathbf{A}_{\nu}^{a}\int_{-\infty}^{\infty}\left|\hat{g}(\omega-\nu)\right|^{2}\mathrm{d}\omega\] \[=\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}h(\nu)(\mathbf{A}_{\nu}^{a})^{\dagger}\mathbf{A}_{\nu}^{a}.\] (L.70) Now, we add the perturbation \(\mathbf{H}+\mathbf{V}\). The insight is that we can introduce an artificial Hamiltonian \[\bar{\mathbf{H}}:=\sum_{E\in\mathrm{spec}(\mathbf{H})}E\sum_{E^{\prime}\approx E}\mathbf{P}_{E^{\prime}}\quad\text{such that}\quad\left\|\bar{\mathbf{H}}-\mathbf{H}\right\|\leq 2\left\|\mathbf{V}\right\|\] (L.71) with exactly the same spectrum as the original Hamiltonian \(\mathbf{H}\), but with the eigenbasis according to the perturbed Hamiltonian \(\mathbf{H}^{\prime}\).
Then, the same argument with the artificial Hamiltonian implies \[\sum_{a\in S}\int_{-\infty}^{\infty}h(\omega)\hat{\mathbf{A}}_{g,\bar{\mathbf{H}}}^{a}(\omega)^{\dagger}\hat{\mathbf{A}}_{g,\bar{\mathbf{H}}}^{a}(\omega)\mathrm{d}\omega=\sum_{a\in S}\sum_{\nu\in B(\mathbf{H})}h(\nu)(\mathbf{A}_{\approx\nu}^{a})^{\dagger}\mathbf{A}_{\approx\nu}^{a}.\] (L.72) Lastly, we may bound the difference by the purification \[\left\|\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}_{g,\mathbf{H}}^{a}(\omega)\otimes\left|a\right\rangle\otimes\left|\omega\right\rangle\mathrm{d}\omega-\sum_{a\in S}\int_{-\infty}^{\infty}\hat{\mathbf{A}}_{g,\bar{\mathbf{H}}}^{a}(\omega)\otimes\left|a\right\rangle\otimes\left|\omega\right\rangle\mathrm{d}\omega\right\|\] \[=\left\|\sum_{a\in S}\int_{-\infty}^{\infty}\mathbf{A}_{\mathbf{H}}^{a}(t)\otimes\left|a\right\rangle\otimes g(t)\left|t\right\rangle\mathrm{d}t-\sum_{a\in S}\int_{-\infty}^{\infty}\mathbf{A}_{\bar{\mathbf{H}}}^{a}(t)\otimes\left|a\right\rangle\otimes g(t)\left|t\right\rangle\mathrm{d}t\right\|\quad\text{(Fourier Transform is unitary)}\] \[\leq\sqrt{\int_{-\infty}^{\infty}\left|2\left\|\mathrm{e}^{\mathrm{i}\mathbf{H}t}-\mathrm{e}^{\mathrm{i}\bar{\mathbf{H}}t}\right\|g(t)\right|^{2}\mathrm{d}t}=\mathcal{O}(\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}).\] (L.73) The factor of 2 is due to left and right Hamiltonian evolution, \[\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{A}^{a}\mathrm{e}^{-\mathrm{i}\mathbf{H}t}-\mathrm{e}^{\mathrm{i}\bar{\mathbf{H}}t}\mathbf{A}^{a}\mathrm{e}^{-\mathrm{i}\bar{\mathbf{H}}t}=\mathrm{e}^{\mathrm{i}\mathbf{H}t}\mathbf{A}^{a}(\mathrm{e}^{-\mathrm{i}\mathbf{H}t}-\mathrm{e}^{-\mathrm{i}\bar{\mathbf{H}}t})+(\mathrm{e}^{\mathrm{i}\mathbf{H}t}-\mathrm{e}^{\mathrm{i}\bar{\mathbf{H}}t})\mathbf{A}^{a}\mathrm{e}^{-\mathrm{i}\bar{\mathbf{H}}t}.\] (L.74) To evaluate the integral, we use that \(\left\|\mathrm{e}^{\mathrm{i}\mathbf{H}t}-\mathrm{e}^{\mathrm{i}\bar{\mathbf{H}}t}\right\|\leq\left\|\mathbf{H}-\bar{\mathbf{H}}\right\|\left|t\right|\leq 2\left\|\mathbf{V}\right\|\left|t\right|\) and that \(g(t)\) is rapidly decaying for large \(|t|\geq\frac{1}{\Delta_{\nu}}\). To conclude the proof, use the purification tricks (Lemma K.1). ### Monotonicity of gradient on a subspace For our proof that \(\mathsf{BQP}\)-hard Hamiltonians have no suboptimal local minima in Appendix J, we will need the following refinements of Theorem 12, where the gradient operator acts on a low-energy subspace with an excitation gap. Intuitively, we care only about the Bohr-frequency gap of \(\mathbf{H}\) restricted to the low-energy subspace \(\mathbf{Q}\) instead of the full Hilbert space; the gradient on that subspace should not be sensitive to the excited states above the excitation gap. **Corollary L.1** (Monotonicity of gradient on a subspace; Corollary H.1 restated).: _Consider a Hamiltonian \(\mathbf{H}=\sum_{\bar{E}}\bar{E}\mathbf{P}_{\bar{E}}\) and its perturbation \(\mathbf{H}^{\prime}:=\mathbf{H}+\mathbf{V}\). Let \(\mathbf{P}\) be the ground space projector for \(\mathbf{H}\) and \(\mathbf{P}^{\prime}\) be the corresponding perturbed eigensubspace of \(\mathbf{H}^{\prime}\). Let \(\mathbf{Q}\) be a low-energy eigensubspace projector of \(\mathbf{H}\) (i.e., \(\mathbf{Q}=\sum_{E\leq E_{\mathbf{Q}}}\mathbf{P}_{E}\) for \(E_{\mathbf{Q}}\in\text{Spec}(\mathbf{H})\)) with excitation gap \(\Delta_{\mathbf{Q}}\). Assume \(\frac{\left\|\mathbf{V}\right\|\left\|\mathbf{H}\right\|}{\Delta_{\mathbf{Q}}}\leq\frac{1}{144}\Delta_{\nu}\) where \(\Delta_{\nu}:=\min_{\nu_{1}\neq\nu_{2}\in B(\mathbf{H}|_{\mathbf{Q}})}\left|\nu_{1}-\nu_{2}\right|\) is the Bohr-frequency gap of \(\mathbf{H}\) within the subspace \(\mathbf{Q}\).
For any \(\beta,\tau>0\), let \(\mathcal{L}=\sum_{a\in S}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}},\mathcal{L}^{\prime}=\sum_{a\in S}\mathcal{L}_{a}^{\beta,\tau,\mathbf{H}^{\prime}}\) be thermal Lindbladians with jumps \(\{\mathbf{A}^{a}\}_{a\in S}\), where \(\left\|\mathbf{A}^{a}\right\|\leq 1\) and the transition weight \(\gamma_{\beta}(\omega)\) is given by Eq. (F.4). Then we have the monotone property that_ \[-\mathbf{Q}\mathcal{L}^{\dagger}[\mathbf{H}]\mathbf{Q}\succeq r\mathbf{Q}(\mathbf{I}-\mathbf{P})-\epsilon\mathbf{I}\quad\text{implies}\quad-\mathbf{Q}^{\prime}\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\mathbf{Q}^{\prime}\succeq r\mathbf{Q}^{\prime}(\mathbf{I}-\mathbf{P}^{\prime})-\epsilon^{\prime}\mathbf{I}\] (L.75) _where \(\mathbf{Q}^{\prime}\) projects onto the perturbed eigensubspace of \(\mathbf{H}^{\prime}\) identified with \(\mathbf{Q}\), and_ \[\epsilon^{\prime}\leq\epsilon+|S|\cdot\mathcal{O}\bigg{(}\frac{1}{\tau}+\frac{\left\|\mathbf{H}\right\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\mathbf{Q}}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}+\frac{\mathrm{e}^{-\beta\Delta_{\mathbf{Q}}/4}}{\beta}\] \[+\left(1+\frac{\Lambda_{0}}{\Delta_{\nu}}\right)\frac{\left\|\mathbf{V}\right\|\left\|\mathbf{H}\right\|}{\Delta_{\mathbf{Q}}}+r\Big{(}\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}+\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}\Big{)}\bigg{)}.\] (L.76) Proof.: The idea is that \(\mathbf{Q}^{\prime}\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\mathbf{Q}^{\prime}\) essentially depends only on the low-energy subspace \(\mathbf{Q}^{\prime}\) and the corresponding restricted transition \(\mathbf{Q}^{\prime}\mathbf{A}^{a}\mathbf{Q}^{\prime}\): \[\mathbf{Q}^{\prime}\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\mathbf{Q}^{\prime} \overset{E_{1}}{\approx}\sum_{a\in S}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\mathbf{Q}^{\prime}\hat{\mathbf{A}}^{a}(\omega^{\prime})^{\dagger}\hat{\mathbf{A}}^{a}(\omega^{\prime})\mathbf{Q}^{\prime}\mathrm{d}\omega^{\prime}\] (Prop. F.3 and Lemma L.1) \[\overset{E_{2}}{\approx}\sum_{a\in S}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\mathbf{Q}^{\prime}\hat{\mathbf{A}}^{a}(\omega^{\prime})^{\dagger}\mathbf{Q}^{\prime}\hat{\mathbf{A}}^{a}(\omega^{\prime})\mathbf{Q}^{\prime}\mathrm{d}\omega^{\prime}\] (excitation gap \(\Delta_{\mathbf{Q}^{\prime}}\) of \(\mathbf{Q}^{\prime}\)) \[=\sum_{a\in S}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\hat{\mathbf{R}}^{\prime a}(\omega^{\prime})^{\dagger}\hat{\mathbf{R}}^{\prime a}(\omega^{\prime})\mathrm{d}\omega^{\prime}\] (set \(\mathbf{R}^{\prime a}:=\mathbf{Q}^{\prime}\mathbf{A}^{a}\mathbf{Q}^{\prime}\) and use that \([\mathbf{Q}^{\prime},\mathbf{H}^{\prime}]=0\)) \[\overset{E_{3}}{\approx}\mathcal{L}_{\{\mathbf{R}^{\prime a}\}}^{\dagger\beta,\tau,\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}}[\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}]\] (unsimplify: Prop. F.3 and Lemma L.1) \[\overset{E_{4}}{\approx}\mathcal{L}_{\{\mathbf{R}^{a}\}}^{\dagger\beta,\tau,\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}}[\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}].\] (change the jumps to \(\mathbf{R}^{a}=\mathbf{Q}\mathbf{A}^{a}\mathbf{Q}\)) The approximation \(E_{2}\) inserts the low-energy projector \(\mathbf{Q}^{\prime}\).
To do so, we resolve the identity as \(\mathbf{I}=\mathbf{Q}^{\prime}+(\mathbf{I}-\mathbf{Q}^{\prime})\) and use that the cross term is suppressed by the excitation gap of \(\mathbf{Q}^{\prime}\): \[\sum_{a\in S}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\mathbf{Q}^{\prime}\hat{\mathbf{A}}^{a}(\omega^{\prime})^{\dagger}(\mathbf{I}-\mathbf{Q}^{\prime})\hat{\mathbf{A}}^{a}(\omega^{\prime})\mathbf{Q}^{\prime}\mathrm{d}\omega^{\prime}\] \[\overset{E_{21}}{\approx}\sum_{a\in S}\int_{-\infty}^{\infty}\theta(\omega^{\prime})\hat{\mathbf{S}}^{a}(\omega^{\prime})^{\dagger}\hat{\mathbf{S}}^{a}(\omega^{\prime})\mathrm{d}\omega^{\prime}\] (secular approximation for \((\mathbf{I}-\mathbf{Q}^{\prime})\mathbf{A}^{a}\mathbf{Q}^{\prime}\)) \[\overset{E_{22}}{\approx}0.\] (\(\mu\ll\Delta_{\mathbf{Q}}\), \(\Delta_{\mathbf{Q}}\beta\gg 1\)) That is, we need the excitation gap to be large so that \((\mathbf{I}-\mathbf{Q}^{\prime})\mathbf{A}^{a}\mathbf{Q}^{\prime}\) has a vanishing contribution to the gradient. These errors combine as \(E_{2}=E_{21}+E_{22}\), where \[E_{21} \leq 2\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|\left\|\theta\right\|_{\infty}\left\|\hat{f}_{\tau}\cdot(1-\hat{s}_{\mu})\right\|_{2}\left\|f_{\tau}\right\|_{2}=|S|\,\mathcal{O}(\frac{\Lambda_{0}}{\sqrt{\mu\tau}})\] (L.77) \[E_{22} \leq\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|\max_{\omega^{\prime}\geq\Delta_{\mathbf{Q}^{\prime}}-\mu}\left|\theta(\omega^{\prime})\right|\left\|\hat{f}_{\tau}\cdot\hat{s}_{\mu}\right\|_{2}^{2}=|S|\,\frac{\mathrm{e}^{-\beta(\Delta_{\mathbf{Q}^{\prime}}-\mu)/2}}{\beta}.\] (L.78) Thus, we consider a safe choice of \(\mu=\Delta_{\mathbf{Q}^{\prime}}/2\). Also, since the perturbation is small by assumption, \(\left\|\mathbf{V}\right\|\leq\frac{1}{144}\Delta_{\nu}\frac{\Delta_{\mathbf{Q}}}{\left\|\mathbf{H}\right\|}\leq\frac{1}{144}\Delta_{\mathbf{Q}}\), the excitation gap remains large: \(\Delta_{\mathbf{Q}^{\prime}}\geq\Delta_{\mathbf{Q}}-2\left\|\mathbf{V}\right\|\geq\Delta_{\mathbf{Q}}-\frac{1}{72}\Delta_{\mathbf{Q}}\geq\Delta_{\mathbf{Q}}/2\). The third line (after approximation \(E_{2}\)) formally disposes of the excited states of \(\mathbf{H}^{\prime}\) above \(E^{\prime}_{\mathbf{Q}^{\prime}}\) by defining a modified Hamiltonian \[\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}:=\mathbf{H}^{\prime}\mathbf{Q}^{\prime}+E^{\prime}_{\mathbf{Q}^{\prime}}(\mathbf{I}-\mathbf{Q}^{\prime})\] (L.79) for the Fourier transform \(\hat{\mathbf{R}}^{\prime a}(\omega^{\prime})\). This Hamiltonian is merely a proof artifact, and one can also set the energy of the excited subspace \(\mathbf{I}-\mathbf{Q}^{\prime}\) to be infinity. The error \(E_{3}\) is merely the cost of recasting this expression back into the form of an energy gradient. The approximation \(E_{4}\) changes the jumps, with norm difference bounded by \[\left\|\sum_{a\in S}\mathbf{R}^{\prime a}\otimes\left|a\right\rangle-\sum_{a\in S}\mathbf{R}^{a}\otimes\left|a\right\rangle\right\|\leq 2\sqrt{\left\|\sum_{a\in S}\mathbf{A}^{a\dagger}\mathbf{A}^{a}\right\|}\left\|\mathbf{Q}-\mathbf{Q}^{\prime}\right\|=\left|S\right|\mathcal{O}(\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}})\] (L.80) using the subspace perturbation bound \(\left\|\mathbf{Q}-\mathbf{Q}^{\prime}\right\|\leq 8\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}\) (Lemma L.3).
Since the (suitably normalized) gradient operator \(\frac{1}{2|S|\left\|\mathbf{O}\right\|}\cdot\mathcal{L}^{\dagger}[\mathbf{O}]\) can be block-encoded using \(\mathcal{O}(1)\) block-encodings of the jumps (Theorem 9, Proposition F.7, Proposition F.8), perturbation to the jumps propagates to the gradient operator by \[E_{4}=\mathcal{O}\left(\left\|\sum_{a}\mathbf{R}^{\prime a}\otimes\left|a\right\rangle-\sum_{a}\mathbf{R}^{a}\otimes\left|a\right\rangle\right\|\cdot\left\|\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}\right\|\right)=\left|S\right|\mathcal{O}\Big{(}\left\|\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}\right\|\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}\Big{)}.\] (L.81) To summarize, the above gives the bound \[\left\|\mathbf{Q}^{\prime}\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\mathbf{Q}^{\prime}-\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}}_{\{\mathbf{R}^{a}\}}[\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}]\right\|\leq\left|S\right|\cdot\mathcal{O}\left(\frac{1}{\tau}+\frac{\left\|\mathbf{H}\right\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\mathbf{Q}}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\mathbf{Q}}/4}}{\beta}+\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}\left\|\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}\right\|\right)\] (L.82) using \(\left\|\mathbf{H}^{\prime}\right\|\leq\left\|\mathbf{H}\right\|+\left\|\mathbf{V}\right\|\leq 2\left\|\mathbf{H}\right\|\) and \(\Delta_{\mathbf{Q}^{\prime}}\geq\Delta_{\mathbf{Q}}/2\). Similarly, \[\mathbf{Q}\mathcal{L}^{\dagger}[\mathbf{H}]\mathbf{Q}\overset{E_{5}}{\approx}\mathcal{L}^{\dagger\beta,\tau,\mathbf{H}_{\mathbf{Q}}}_{\{\mathbf{R}^{a}\}}[\mathbf{H}_{\mathbf{Q}}]\quad\text{for}\quad\mathbf{H}_{\mathbf{Q}}:=\mathbf{H}\mathbf{Q}+E_{\mathbf{Q}}(\mathbf{I}-\mathbf{Q})\] (L.83) with \(E_{5}\) also bounded by the RHS of Eq. (L.82) but with \(\left\|\mathbf{V}\right\|\to 0\). Now, we may use the monotonicity of gradient (Theorem 12) for the Hamiltonian pair \(\mathbf{H}_{\mathbf{Q}}\) and \(\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}\), jumps \(\{\mathbf{R}^{a}\}_{a\in S}\), with the characteristic Bohr-frequency gap \(\Delta_{\nu}=\min_{\nu_{1}\neq\nu_{2}\in B(\mathbf{H}_{\mathbf{Q}})}\left|\nu_{1}-\nu_{2}\right|\).
The modified Hamiltonian perturbation \(\left\|\mathbf{H}_{\mathbf{Q}}-\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}\right\|\) is bounded by the sum of the following two errors: \[\left\|\mathbf{H}\mathbf{Q}-\mathbf{H}^{\prime}\mathbf{Q}^{\prime}\right\|=\left\|\mathbf{H}\mathbf{Q}-\mathbf{H}\mathbf{Q}^{\prime}+\mathbf{H}\mathbf{Q}^{\prime}-\mathbf{H}^{\prime}\mathbf{Q}^{\prime}\right\|\leq 8\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}\left\|\mathbf{H}\right\|+\left\|\mathbf{V}\right\|\leq 9\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}\left\|\mathbf{H}\right\|,\] (L.84) \[\left\|E_{\mathbf{Q}}(\mathbf{I}-\mathbf{Q})-E^{\prime}_{\mathbf{Q}^{\prime}}(\mathbf{I}-\mathbf{Q}^{\prime})\right\| =\left\|E_{\mathbf{Q}}(\mathbf{I}-\mathbf{Q})-E_{\mathbf{Q}}(\mathbf{I}-\mathbf{Q}^{\prime})+E_{\mathbf{Q}}(\mathbf{I}-\mathbf{Q}^{\prime})-E^{\prime}_{\mathbf{Q}^{\prime}}(\mathbf{I}-\mathbf{Q}^{\prime})\right\|\] \[\leq 8E_{\mathbf{Q}}\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}+\left\|\mathbf{V}\right\|\leq 9\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}\left\|\mathbf{H}\right\|.\] (L.85) So \(\left\|\mathbf{H}_{\mathbf{Q}}-\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}\right\|\leq 18\|\mathbf{V}\|\|\mathbf{H}\|/\Delta_{\mathbf{Q}}\), which is less than \(\Delta_{\nu}/8\) by the assumption in the corollary statement, so we may apply Theorem 12. Note that the error due to perturbing \(\mathbf{Q}(\mathbf{I}-\mathbf{P})\) can be bounded directly by \[\left\|\mathbf{Q}^{\prime}(\mathbf{I}-\mathbf{P}^{\prime})-\mathbf{Q}(\mathbf{I}-\mathbf{P})\right\|\leq\left\|\mathbf{Q}^{\prime}-\mathbf{Q}\right\|+\left\|\mathbf{P}-\mathbf{P}^{\prime}\right\|=\mathcal{O}\Big{(}\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}+\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}\Big{)}.\] (L.86) Applying Theorem 12 with \(\mathbf{H}=\mathbf{H}_{\mathbf{Q}}\), \(\mathbf{H}^{\prime}=\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}\), \(\mathbf{O}=\mathbf{Q}(\mathbf{I}-\mathbf{P})\), and \(\mathbf{O}^{\prime}=\mathbf{Q}^{\prime}(\mathbf{I}-\mathbf{P}^{\prime})\), \(\delta_{\lambda}\leq\left\|\mathbf{H}_{\mathbf{Q}}-\mathbf{H}^{\prime}_{\mathbf{Q}^{\prime}}\right\|\), and \(\theta_{\max}=\mathcal{O}(\Lambda_{0})\), along with the additional approximation error in Eq. (L.82), we obtain the result as advertised in the corollary statement. **Corollary L.2** (Monotonicity of gradient on a subspace under off-block-diagonal perturbation; Corollary H.2 restated).: _In the setting of Corollary L.1, instead assume \(\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}\), \(\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}\leq(const.)\), and that the perturbation is off-block-diagonal, i.e., \(\mathbf{Q}\mathbf{V}\mathbf{Q}=(\mathbf{I}-\mathbf{Q})\mathbf{V}(\mathbf{I}-\mathbf{Q})=0\).
Then,_ \[-\mathbf{Q}\mathcal{L}^{\dagger}[\mathbf{H}]\mathbf{Q}\succeq r\mathbf{Q}(\mathbf{I}-\mathbf{P})-\epsilon\mathbf{I}\quad\text{implies}\quad-\mathbf{Q}^{\prime}\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]\mathbf{Q}^{\prime}\succeq r\mathbf{Q}^{\prime}(\mathbf{I}-\mathbf{P}^{\prime})-\epsilon^{\prime}\mathbf{I}\] (L.87) _where_ \[\epsilon^{\prime}\leq\epsilon+\left|S\right|\cdot\mathcal{O}\bigg{(}\frac{1}{\tau}+\frac{\|\mathbf{H}\|^{3/4}}{\tau^{1/4}}+\frac{\Lambda_{0}^{2/3}}{\tau^{1/3}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\nu}\tau}}+\frac{\Lambda_{0}}{\sqrt{\Delta_{\mathbf{Q}}\tau}}+\frac{\mathrm{e}^{-\beta\Delta_{\nu}/4}}{\beta}+\frac{\mathrm{e}^{-\beta\Delta_{\mathbf{Q}}/4}}{\beta}\\ +\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta_{\mathbf{Q}}}+\left\|\mathbf{H}_{\mathbf{Q}}\right\|\cdot\Big{(}\frac{\left\|\mathbf{H}_{\mathbf{Q}}\right\|\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}\Delta_{\nu}}+\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta_{\mathbf{Q}}\Delta_{\nu}}\Big{)}+r\Big{(}\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}+\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta_{\mathbf{Q}}\Delta_{\nu}}\Big{)}\bigg{)}.\] (L.88) Proof.: When the perturbation \(\mathbf{V}\) is off-diagonal in the energy eigenbasis of \(\mathbf{H}\), we can use tighter bounds on the changes in eigenvalues and eigensubspaces of \(\mathbf{H}^{\prime}\) from Lemma L.4, which implies \[\delta_{\lambda}=\max_{j}\left|E_{j}-E_{j}^{\prime}\right| =\mathcal{O}\Big{(}\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta_{\mathbf{Q}}}\Big{)} \text{(originally }\mathcal{O}(\left\|\mathbf{V}\right\|))\] \[\left\|\mathbf{H}_{\mathbf{Q}}-\mathbf{H}_{\mathbf{Q}^{\prime}}^{\prime}\right\| =\mathcal{O}\left(\left\|\mathbf{H}_{\mathbf{Q}}\right\|\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}+\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta_{\mathbf{Q}}}\right). \text{(originally }\mathcal{O}(\left\|\mathbf{V}\right\|))\] \[\left\|\mathbf{P}-\mathbf{P}^{\prime}\right\| \leq\mathcal{O}\Big{(}\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}+\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta_{\mathbf{Q}}\Delta_{\nu}}\Big{)}. \text{(originally }\mathcal{O}(\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}))\] The second and third lines are due to the rotation and then subspace perturbation (Lemma L.3). Essentially, this is because (1) all subspace rotations are small (\(\frac{\left\|\mathbf{V}\right\|}{\Delta_{\mathbf{Q}}}\ll 1\)) and (2) the energy perturbation is _smaller_ than the level spacing (\(\frac{\left\|\mathbf{V}\right\|^{2}}{\Delta_{\mathbf{Q}}}\ll\Delta_{\nu}\)). We then follow the same argument as in the proof of Corollary L.1 above with the improved bounds for Eqs. (L.84), (L.85), and (L.86). We can also use an improved bound \(\theta_{\max}\leq\left\|\mathbf{H}_{\mathbf{Q}}\right\|\). Together these improvements yield the better error bound on \(\epsilon^{\prime}\) as advertised. ### Example where perturbation kills energy gradient We present an example where, despite \(\left\|\mathbf{V}\right\|\ll\Delta_{\nu}\), the gradient is lost due to the perturbation. Therefore, the resulting change in gradient is not multiplicative (\(1-\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}\)), but merely additive. If the gradient is polynomially small, we need \(\frac{\left\|\mathbf{V}\right\|}{\Delta_{\nu}}\) to also be polynomially small to secure the gradient.
**Proposition L.7** (Perturbation kills the gradient).: _Let \(\mathbf{H}=\mathbf{Z}=\left|1\right\rangle\left\langle 1\right|-\left|-1\right\rangle \left\langle-1\right|\), \(\mathbf{A}=\mathbf{Z}+\epsilon\mathbf{X}\), and \(\mathbf{V}=\epsilon\mathbf{X}\). Then, for the \(\beta,\tau\to\infty\) heat bath Lindbladian, we have that_ \[\mathcal{L}^{\dagger}[\mathbf{H}]\preceq-2\epsilon^{2}(\mathbf{I}-\mathbf{P})\quad\text{but }\quad\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]=0.\] (L.89) Proof.: \[\mathcal{L}^{\dagger}[\mathbf{H}]=\epsilon^{2}\left(\left|1\right\rangle\left\langle -1\right|\mathbf{H}\left|-1\right\rangle\left\langle 1\right|-\frac{1}{2}\left|1 \right\rangle\left\langle 1\right|\mathbf{H}-\frac{1}{2}\mathbf{H}\left|1\right\rangle \left\langle 1\right|\right)=-2\epsilon^{2}\left|1\right\rangle\left\langle 1\right|.\] (L.90) But \(\mathcal{L}^{\prime\dagger}[\mathbf{H}^{\prime}]=0\) because \([\mathbf{H}^{\prime},\mathbf{A}]=0\), i.e., \(\mathbf{A}\) is diagonal in the new energy basis.
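This two-level example is easy to reproduce numerically. The sketch below (ours, not from the paper) implements the \(\beta,\tau\to\infty\) Davies energy gradient, assuming the limiting transition weight \(\gamma(\nu)=\mathds{1}(\nu<0)\) (our assumption for the \(\beta\to\infty\) limit of \(\gamma_{\beta}\); the value at \(\nu=0\) is irrelevant here since the \(\nu=0\) block commutes with the Hamiltonian):

```python
import numpy as np

def davies_grad(H, A, tol=1e-9):
    """Energy gradient L^dag[H] = sum_nu gamma(nu) * (A_nu^dag H A_nu
    - {A_nu^dag A_nu, H}/2), with gamma(nu) = 1(nu < 0) (beta, tau -> infinity)."""
    E, U = np.linalg.eigh(H)
    Ad, Hd = U.conj().T @ A @ U, np.diag(E)      # work in the eigenbasis of H
    out = np.zeros_like(Hd, dtype=complex)
    for nu in {round(a - b, 9) for a in E for b in E}:
        gamma = 1.0 if nu < -tol else (0.5 if abs(nu) <= tol else 0.0)
        Anu = np.where(np.isclose(E[:, None] - E[None, :], nu), Ad, 0)
        K = Anu.conj().T @ Anu
        out += gamma * (Anu.conj().T @ Hd @ Anu - 0.5 * (K @ Hd + Hd @ K))
    return U @ out @ U.conj().T

eps = 1e-2
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
G = davies_grad(Z, Z + eps * X)                  # unperturbed H = Z
Gp = davies_grad(Z + eps * X, Z + eps * X)       # H' = Z + V with V = eps X
print(np.round(G.real / eps**2, 6))              # [[-2, 0], [0, 0]], i.e., -2 eps^2 |1><1|
print(np.linalg.norm(Gp))                        # ~0: the perturbation kills the gradient
```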
2309.08970
Insights into electronic and transport properties of phosphorene nanorings in two perpendicular directions: Effects of circular and elliptical external potentials
In this work, we study the electronic and transport properties of phosphorene nanorings in two perpendicular directions (zigzag and armchair directions) in the presence of zigzag metallic source and drain leads. Our results are based on the non-equilibrium Green's function (NEGF) method and a five-parameter tight-binding (TB) approach. We investigate how system parameters affect the electronic transport. These parameters include the radius of the rings, the width of the leads, and the external potential. Our results show that for all configurations studied, a transport energy gap exists whose width can be tuned by the width of the leads and the radius of the nanoring. The transmission function of wider leads shows more sensitivity to the variation of the inner radius due to higher electronic states that can respond to smaller changes in the scattering region. In addition, the transport along the armchair direction is more susceptible to tuning than the transport along the zigzag direction. The effects of external potentials on the conductance are more pronounced than the geometrical parameters. In particular, the circular potential with an amplitude of 0.1 eV can widen the transport gap by about ~0.35 eV.
MohammadAmir Bazrafshan, Farhad Khoeini, Bartlomiej Szafran
2023-09-16T12:10:46Z
http://arxiv.org/abs/2309.08970v1
Insights into electronic and transport properties of phosphorene nanorings in two perpendicular directions: Effects of circular and elliptical external potentials ## Abstract In this work, we study the electronic and transport properties of phosphorene nanorings in two perpendicular directions (zigzag and armchair directions) in the presence of zigzag metallic source and drain leads. Our results are based on the non-equilibrium Green's function (NEGF) method and a five-parameter tight-binding (TB) approach. We investigate how system parameters affect the electronic transport. These parameters include the radius of the rings, the width of the leads, and the external potential. Our results show that for all configurations studied, a transport energy gap exists whose width can be tuned by the width of the leads and the radius of the nanoring. The transmission function of wider leads shows more sensitivity to the variation of the inner radius due to higher electronic states that can respond to smaller changes in the scattering region. In addition, the transport along the armchair direction is more susceptible to tuning than the transport along the zigzag direction. The effects of external potentials on the conductance are more pronounced than the geometrical parameters. In particular, the circular potential with an amplitude of 0.1 eV can widen the transport gap by about \(\sim\)0.35 eV. **Keywords:** nanoring, phosphorene, tight-binding approximation, Green's function, electronic transport. ## Introduction One of the most recent and intriguing allotropes of phosphorus is phosphorene, which is a two-dimensional (2D) material derived from black phosphorus in 2014 by mechanical exfoliation [1]. Phosphorene has a puckered honeycomb structure [2, 3, 4, 5] with sp\({}^{3}\) hybridized phosphorus atoms and exhibits many unique properties, such as high carrier mobility, anisotropic behavior, a tunable band gap, and high flexibility [6, 7, 8]. Phosphorene can be manufactured by various methods, such as mechanical exfoliation, liquid exfoliation, chemical vapor deposition, and molecular beam epitaxy [5]. Phosphorene has potential applications in various fields, such as energy storage [9], field-effect transistors [6, 10, 11], optoelectronics [12], and biosensors [13]. The geometry and shape of nanostructures are crucial for their physical properties [14, 15, 16, 17, 18]. For example, the zigzag edge geometry of a graphene nanoribbon makes it a magnetic nanostructure [19]. However, as technical issues are overcome, the fabrication of nanostructures with more precise dimensions becomes feasible. Phosphorene has been widely studied in the literature. It is a direct p-type semiconductor with a gap value of \(\sim\)1.5 eV [1, 20]. A density functional study reports that the direct band gap of bilayer phosphorene can vary from 0.78 to 1.04 eV depending on the stacking order [21]. Moreover, theoretical studies of zigzag phosphorene nanoribbons (ZPNRs) reveal that they are metallic [22], while armchair phosphorene nanoribbons (APNRs) are all semiconductors [6]. Furthermore, nanoring structures are helpful in studying quantum interference-related effects such as Aharonov-Bohm and Fano resonance [16, 23, 24, 25]. Ref. [26] reports the possible application of a phosphorene nanoring in sensing biomarker vapors of severe kidney disease. Fano resonance [25] has been studied in a system consisting of bilayer zigzag phosphorene nanoribbons connected to a bilayer phosphorene ring. 
In this work, we study the electronic transport properties of a circular phosphorene nanoring with an outer radius (R\({}_{\text{o}}\)) of 6 nm and a range of inner radii (R\({}_{\text{inner}}\)) from 2 to 5.5 nm. We show below that the effect of the outer radius of a circular nanodisk on the transport gap disappears for larger radii, which is a consequence of the semiconducting properties of bulk phosphorene. An anisotropy of the electronic structure and transport is embedded in the phosphorene crystal lattice. Therefore, for our study we consider two distinct configurations by connecting two leads in the zigzag and armchair directions. In a transport system, the leads provide/gather electrons to/from the device. Furthermore, we study the effect of lead width, which is determined by zigzag phosphorene nanoribbons in three different widths of 12, 16, and 20 atoms. Based on the work in Ref. [27], in smaller ZPNR widths, the degeneracy of metallic localized edge states can be lifted due to interactions between the edge states. The direct way to study the electronic properties of nanosystems and even large-scale structures is the tight-binding method. The tight-binding parameters can be obtained using a number of approaches. One of the most efficient and recent methods is the use of machine-learning interatomic potentials (MLPs). The MLPs can also be used to study the piezoelectric and mechanical properties of nanomaterials [28, 29]. We use the five-parameter TB model of Ref. [30], fitted to the energy bands near the Fermi energy. The model was introduced for bulk two-dimensional black phosphorus. However, it has been experimentally verified in systems with non-perfect lattices, in particular in crystals with vacancies [31]. To obtain the transport coefficient (transmission probability), the non-equilibrium Green's function is used. Our numerical results indicate that the way the leads are connected to the system has a pronounced impact on the electron transmission probability. Also, because of the difference between the leads and the device edges, the metallic states of the leads, located on the edges of the ZPNRs, cannot be transmitted through the device, which makes the transport system behave as a semiconductor. The transport energy gap is mainly determined by the bulk bands of the nanoribbons of the leads. In addition, wider leads can capture smaller changes in the device section. The manuscript is organized as follows. In the next section, the model and the method are presented. Results and discussion come in the third section, and finally, the results are summarized in the last section. **Model and Method:** In this work, we aim to investigate the electronic transport properties of phosphorene rings in two perpendicular directions (zigzag and armchair directions), in the presence of metallic zigzag nanoribbon leads, used as source and drain contacts. We have considered the model in such a way that the zigzag electrodes are connected to the two main edge geometries of the device, i.e., the armchair and zigzag sides of the ring. A schematic of the models studied is presented in Figure 1. In panel (a), the electron transport goes along the zigzag direction of the lattice, labeled as configuration Z (C\({}_{\text{Z}}\)). In panel (b), the transport direction is along the armchair direction, and we label it as configuration A (C\({}_{\text{A}}\)). The width of the leads is identified by the number of atoms across the width (W\({}_{\text{N}}\)). 
Three different lead widths, W\({}_{\text{N}}\)=12, 16, and 20 (N-ZPNR), were investigated in this work. The Hamiltonians are formed in the orthogonal one-orbital TB model with five parameters. It has been shown that this model successfully reproduces the electronic properties of phosphorene close to the Fermi energy [16, 30, 32]. We consider a system subject to an external potential that can be introduced, e.g., by the tip of a scanning probe microscope. The TB Hamiltonian reads: \[H=\sum_{i}(\varepsilon_{i}|i\rangle\langle i|+V_{i}|i\rangle\langle i|)+\sum_{i, j}(t_{i,j}|i\rangle\langle j|+H.c.), \tag{1}\] where \(\varepsilon_{i}\) is the on-site energy, \(V_{i}\) is the external potential on the \(i^{\rm th}\) atom, and \(t_{i,j}\) is the interatomic (hopping) parameter. The external potential is modeled by \(V_{i}=\rm V_{0}\left(\sqrt{\frac{x^{2}}{a}+\frac{y^{2}}{b}}-R_{o}\right)\), with \(a\) and \(b\) as control parameters related to the shape of the external potential. For \(\sqrt{\frac{x^{2}}{a}+\frac{y^{2}}{b}}>R_{o}\), the external potential term, \(V\), becomes zero. The profile of the external potential can be manipulated by the tip geometry, see [33]. We consider \(V_{0}=0.1\) eV and \(R_{o}\)=5.8 nm. The ratio \(\alpha=\frac{a}{b}\) is the parameter that determines the shape of the external potential. The TB parameters are adopted from [16, 32], with \(\varepsilon_{i}=0\) eV, and \(t_{1}=-1.22,t_{2}=3.665,t_{3}=-0.205,t_{4}=-0.105\), and \(t_{5}=-0.055\) eV for the five essential interactions. To obtain the electronic dispersion, one can solve the eigenvalue problem, as described in [18]. The TB Hamiltonians are then implemented in the NEGF formalism in order to study the electronic transport properties. The retarded Green's function can be evaluated as [34, 35]: \[G(E)=[(E+{\rm i}\eta){\bf I}-H_{C}-\varSigma_{SC}(E)-\varSigma_{DC}(E)]^{-1}, \tag{2}\] where \(E\) is the electron energy, \(\mathbf{I}\) is the identity matrix, \(\eta\) is an arbitrarily small positive number, \(H_{C}\) is the device Hamiltonian, and \(\Sigma_{SC(DC)}\) is the self-energy for the source (drain) lead. Details about this formalism can be found in [35, 36]. Figure 1: The studied model devices. (a) The two zigzag leads are connected to the device along each other in \(\rm C_{Z}\); (b) the case of two zigzag leads connected to the sides of the nanoring is labeled by \(\rm C_{A}\). The spectral density operator is given by: \[\Gamma_{S(D)}(E)=\mathrm{i}\big{[}\Sigma_{SC(DC)}(E)-\Sigma_{SC(DC)}(E)^{ \dagger}\big{]}. \tag{3}\] The electron transmission probability can be obtained as follows: \[T_{e}(E)=\mathrm{Trace}\big{[}\Gamma_{S}(E)G(E)\Gamma_{D}(E)G(E)^{\dagger} \big{]}. \tag{4}\] The transport energy gap is evaluated for each of the structures. The energy gap is calculated by finding the first nonzero transmission value on either side of zero energy (assuming \(E_{F}=0\) eV). Additionally, the energy gap of the isolated device is calculated by solving the eigenvalue problem of the device Hamiltonian: the first two significant energy differences are calculated, and the states close to these differences are classified as energy domains that determine the energy gap of the isolated device [38]. 
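To make the workflow of Eqs. (2)-(4) and the external-potential profile concrete, the following minimal NumPy sketch evaluates the transmission of a toy tight-binding chain. The six-site chain, the wide-band self-energies on the end sites, and the parameter values are illustrative stand-ins; the actual calculation uses the full phosphorene nanoring Hamiltonian and the surface Green's functions of the ZPNR leads.

```python
import numpy as np

def external_potential(x, y, V0=0.1, R_o=5.8, a=1.0, b=1.0):
    """On-site term V_i = V0*(sqrt(x^2/a + y^2/b) - R_o), zero outside, as in the text."""
    r = np.sqrt(x**2 / a + y**2 / b)
    return np.where(r > R_o, 0.0, V0 * (r - R_o))

def transmission(E, H_C, Sigma_S, Sigma_D, eta=1e-6):
    """Retarded Green's function (Eq. 2), spectral densities (Eq. 3), T_e(E) (Eq. 4)."""
    n = H_C.shape[0]
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H_C - Sigma_S - Sigma_D)
    Gamma_S = 1j * (Sigma_S - Sigma_S.conj().T)
    Gamma_D = 1j * (Sigma_D - Sigma_D.conj().T)
    return np.real(np.trace(Gamma_S @ G @ Gamma_D @ G.conj().T))

# Toy device: a 6-site chain with nearest-neighbor hopping t1; the leads enter
# through wide-band self-energies on the end sites.
t1, gamma, n = -1.22, 0.5, 6
H_C = t1 * (np.eye(n, k=1) + np.eye(n, k=-1)).astype(complex)
Sigma_S = np.zeros((n, n), complex); Sigma_S[0, 0] = -0.5j * gamma
Sigma_D = np.zeros((n, n), complex); Sigma_D[-1, -1] = -0.5j * gamma
for E in np.linspace(-2.0, 2.0, 5):
    print(f"E = {E:+.2f} eV,  T(E) = {transmission(E, H_C, Sigma_S, Sigma_D):.4f}")
```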
Besides, the local density of states (LDOS) for a given atom (indicated by index \(j\)) is obtained from the imaginary part of the Green's function [37]: \[\mathrm{LDOS}(E)_{j}=\frac{-1}{\pi}\Im\big{(}G(E)_{j,j}\big{)}. \tag{5}\] **Results and Discussion:** The electronic transmission coefficient and transport energy gap for various inner radii (from 2 to 5.5 nm with a step of 0.1 nm) of a phosphorene ring with an outer radius of 6 nm are investigated using the TB model. The outer radius was fixed at a value beyond which it has no pronounced effect on the transport gap. The effect of the outer radius is investigated for a nanodisk without the central opening (or R\({}_{\mathrm{inner}}\)=0) with the zigzag configuration and the lead width of 12 atoms (Figure 2), which shows that the transport energy gap is insensitive to larger radii. The results for other lead widths are presented in the Supplementary Information. Phosphorene is an intrinsic semiconductor, and the metallic behavior of the ZPNRs originates from the zigzag edge states. In circular geometries hosting various chiralities and in the limit of larger outer radii (where the transport energy gap converges to a value), the role of the outer edge in the electronic transport near the Fermi energy, or in energy ranges which are not captured by the electronic bands of the leads, is negligible. In the next step, in Figure 3, we plot the band structure and transmission spectrum as a function of energy for three different lead widths, together with the transmission spectrum for two different lead connections with an inner radius of 2 nm. Figure 3 shows the results for the zigzag phosphorene nanoribbons with W\({}_{12}\), W\({}_{16}\) and W\({}_{20}\) in (a), (b), and (c), respectively. As can be seen in Figure 3, with increasing width, the first two bulk bands (magenta bands) get closer to the Fermi energy. Therefore, the band gap converges to the intrinsic gap of phosphorene for wider ribbons. The transmission spectrum of both systems for all lead widths, i.e., W\({}_{12}\), W\({}_{16}\), and W\({}_{20}\) with R\({}_{\text{inner}}\) =2 nm, suggests that the transport energy gap is determined by the bulk bands of the ZPNRs; in other words, the transport energy gap can be tuned by the width of the ZPNR up to the limits set by the leads. In the smallest ZPNR, the first two bulk bands (marked by magenta color) are far from the Fermi energy. However, as the width increases, they get closer to the Fermi energy until they reach the intrinsic energy gap of phosphorene. The transport gap and its size are essential in determining some physical properties, such as the Seebeck coefficient [18, 39]. We plot the transmission spectrum over the whole energy range, but one should note that the five-parameter TB model is fitted such that only the energy bands close to the Fermi energy are reliable (the gray transparent shaded areas in Figure 3 give a schematic indication). As a general rule, we notice that the transmission coefficient is closer to the maximum value set by the zigzag leads in configuration Z than in configuration A for all widths, which is a sign of the intrinsic anisotropy of phosphorene. Figure 2: The effect of the outer radius (the case of R\({}_{\text{inner}}\)=0) on the transport energy gap for C\({}_{\text{Z}}\)W\({}_{12}\). In the next step, we studied the transport energy gap for the systems as a function of the inner radius. 
We find that the energy gap stays almost constant up to a certain inner radius (Figure 4 (a)), and then it changes drastically. Figure 3: The electronic band structure and transmission spectrum of the systems, (a) 12-ZPNR, (b) 16-ZPNR, and (c) 20-ZPNR. Secondary electronic bands (here bulk bands of the ZPNR) with respect to the Fermi energy are shown by magenta lines in the band structure plots. The transmission spectrum of the ribbons is shown in magenta, the zigzag configuration is shown in green, and the armchair configuration is illustrated in blue-magenta. As noted earlier, by increasing the width of the ZPNR, the bulk bands get close to the Fermi energy; the leads provide a broader energy range that can then be filtered by the finite conductance of the device. This explains why the difference between transport energy gaps for different lead widths is significant. As the lead width increases, the range of the transport gap modulation by the inner radius also increases, indicating that wider leads are advantageous for sensing applications. Therefore, in addition to the width of the leads, the inner radius of the nanoring can be used to tune the transport energy gap. Figure 4 (b) shows that the dependence of the energy gap in the C\({}_{\text{A}}\) configuration is close to that of the C\({}_{\text{Z}}\) system. Since the outer radius is constant, this close behavior indicates the dominant role of the inner edge. However, a slight difference can still be noticed between the panels of Figure 4 (a) and (b), which can be attributed to the response of the inner edge to different transport directions. The energy gap of the isolated ring is plotted in Figure 5, showing an increasing trend in the value of the gap as the inner radius increases. As discussed in Ref. [40], charges in armchair phosphorene nanoribbons are more localized in the central part of the ribbon, while in ZPNRs, they are localized on the edge. As one can see, by increasing the inner radius or, equivalently, reducing the ring width, the number of zigzag edge atoms increases, which in turn affects the energy gap. The energy levels, together with the square of the wavefunction (probability density) of an isolated phosphorene nanoring, without the connection to the leads, are shown in Figure 6 (a) and (b) for two radii of 2 and 5 nm, respectively. Figure 4: Transport energy gap as a function of inner radius for various lead widths for C\({}_{\text{Z}}\) (a), and C\({}_{\text{A}}\) (b). Figure 5: Isolated device (quantum ring) energy gap as a function of the inner radius. The outer radius is 6 nm. We emphasize that the energy gap of the quantum ring is different from that of the ring connected to leads, forming a transport system. The probability density is mapped onto the geometry of the quantum ring. We present probability densities for the energy range of the isolated device, including \(|\Psi_{1}|^{2}\), which is related to the state at the edge of the energy gap above the Fermi energy, \(|\Psi_{2}|^{2}\), which is related to a typical state within the energy gap, and \(|\Psi_{3}|^{2}\), which belongs to the state at the edge of the energy gap below the Fermi energy. The energy eigenvalues of confined states are the closest energies to the Fermi energy. The in-gap states are almost localized at the edges, similar to the low-energy edge states of a zigzag phosphorene nanoribbon. 
Comparing the cases of the inner radius of 2 and 5 nm shows that for the thinner ring (R\({}_{\text{inner}}\)=5 nm) the confined states get closer to each other and their distribution across the width of the ring becomes smoother along the edges. For a periodic system, like ZPNRs, these states can form nearly flat bands, see Figure 3. As the width decreases, their bands become more dispersive. The estimate of the isolated device energy gap (molecular energy gap) cannot be directly translated to that of the device used for transport, with the leads attached. In the NEGF method, leads are assumed to be infinite, i.e., the device equilibrates with the leads through the device-lead interfaces. The coupling of the quantum ring and electrodes can change the energy levels of the system. The ZPNR leads are metallic. However, the metallic edge states cannot propagate through the device, as the edge is modified within the quantum ring. In order to locate the areas within the ring that carry the current fed by the ZPNR leads, one can study the local density of states. The LDOS shows how many states are available for electrons at a particular energy. The results are presented in the Supplementary Information. Let us now study the tunability of the transport properties of the device with the external potential. The effect of an external potential (with \(V_{0}=0.1\) eV) for various \(\alpha\) is studied in Figure 7 and is more pronounced in C\({}_{\text{Z}}\) than in C\({}_{\text{A}}\). For an external potential with an elliptical shape stretched along the zigzag direction (\(\alpha\)=0.5, green line), the transport coefficient of the conduction band is close to unity. Figure 6: (a) and (b) show the energy levels of the device section for inner radii of 2 and 5 nm, together with a representation of \(|\Psi|^{2}\) for three types of states (see the text). In the valence band, the transmission value is more suppressed compared to the case with \(V\)=0 (magenta line). Moreover, the transport gap of C\({}_{\text{Z}}\) is more sensitive to this type of external potential than that of C\({}_{\text{A}}\); in particular, for \(\alpha\)=1, one can see that the transport gap extends by about 0.35 eV (for C\({}_{\text{A}}\) this change is about 0.04 eV). ## Conclusion We have studied the electronic transport properties of phosphorene nanorings with a fixed outer radius of 6 nm, and a range of inner radii of 2-5.5 nm. Two configurations were considered for attaching metallic ZPNR leads with widths of 12, 16, and 20 atoms to study the effect of lead width on the transport properties. Electronic transport properties were studied with the help of the five-parameter TB model implemented in the NEGF formalism. Based on the numerical results, the effect of the outer radius disappears at large radii, e.g., in the case of C\({}_{\text{Z}}\)W\({}_{12}\), for R\({}_{\text{o}}\geq\) 3.3 nm, the transport energy gap remains almost constant. The results show that all of the structures were semiconductors whose transport energy gaps are determined by (i) the inner radius of the ring, and (ii) the bulk bands of the lead nanoribbons on either the conduction or valence side of the Fermi energy, i.e., the bands associated with the bulk of the nanoribbon, not the edge. The transport energy gap shows more sensitivity for the case of wider leads due to their richer electronic configuration. This indicates that wider leads may be useful for sensing applications. 
Also, the intrinsic anisotropy of transport within phosphorene translates into a difference between the transmission spectra of the two lead configurations. The transmission coefficient suppression for C\({}_{\text{A}}\) is larger than for C\({}_{\text{Z}}\). The circular external potential (with \(V_{0}\) = 0.1 eV) widens the transport gap by about 0.35 eV and 0.04 eV in C\({}_{\text{Z}}\) and C\({}_{\text{A}}\), respectively. Figure 7: (a) Map of three external potentials added as on-site energy to a nanoring with an inner radius of 2 nm. Zero potential is marked by gray color and the maximum potential by the most intense red color. The transmission spectra for (b) W\({}_{12}\)C\({}_{\text{Z}}\) and (c) W\({}_{12}\)C\({}_{\text{A}}\), for the three external potentials shown in panel (a).
2304.01016
Quick Dense Retrievers Consume KALE: Post Training Kullback Leibler Alignment of Embeddings for Asymmetrical dual encoders
In this paper, we consider the problem of improving the inference latency of language model-based dense retrieval systems by introducing structural compression and model size asymmetry between the context and query encoders. First, we investigate the impact of pre and post-training compression on the MSMARCO, Natural Questions, TriviaQA, SQUAD, and SCIFACT, finding that asymmetry in the dual encoders in dense retrieval can lead to improved inference efficiency. Knowing this, we introduce Kullback Leibler Alignment of Embeddings (KALE), an efficient and accurate method for increasing the inference efficiency of dense retrieval methods by pruning and aligning the query encoder after training. Specifically, KALE extends traditional Knowledge Distillation after bi-encoder training, allowing for effective query encoder compression without full retraining or index generation. Using KALE and asymmetric training, we can generate models which exceed the performance of DistilBERT despite having 3x faster inference.
Daniel Campos, Alessandro Magnani, ChengXiang Zhai
2023-03-31T15:44:13Z
http://arxiv.org/abs/2304.01016v3
Quick Dense Retrievers Consume KALE: Post Training Kullback-Leibler Alignment of Embeddings for Asymmetrical dual encoders ###### Abstract In this paper, we consider the problem of improving the inference latency of language model-based dense retrieval systems by introducing structural compression and model size asymmetry between the context and query encoders. First, we investigate the impact of pre and post-training compression on the MSMARCO, Natural Questions, TriviaQA, SQUAD, and SCIFACT, finding that asymmetry in the dual-encoders in dense retrieval can lead to improved inference efficiency. Knowing this, we introduce _Kullback-Leibler Alignment of Embeddings_ (KALE), an efficient and accurate method for increasing the inference efficiency of dense retrieval methods by pruning and aligning the query encoder after training. Specifically, KALE extends traditional Knowledge Distillation after bi-encoder training, allowing for effective query encoder compression without full retraining or index generation. Using KALE and asymmetric training, we can generate models which exceed the performance of DistilBERT despite having 3x faster inference. ## 1 Introduction A bi-encoder-based retrieval, often called dense retrieval, is a retrieval function that leverages the vector representation of queries and documents as a proxy for relevance. Using two encoders, one for the query and one for the document, the input data is mapped into a common latent space where closeness becomes a proxy for relevance. Dense retrievers have become increasingly popular due to their ability to capture the semantic relationships between query and document terms. However, bi-encoder-based models can also be computationally expensive, particularly when dealing with large datasets. As a result, there has been a growing interest in methods for compressing these models to reduce their computational complexity without sacrificing performance. While the use of smaller models (Wang et al., 2020) has provided a path to improving model performance, compression cannot be adjusted to suit varying latency needs. In other words, a model must match latency requirements before it can be experimented with. Additionally, since bi-encoders require a complete index generation to evaluate performance iteratively, retraining them can be very expensive. Figure 1: QPS vs. recall at 100 on the NQ dataset when using KALE and asymmetric training. Using asymmetry and KALE, it is possible to 3x QPS with nearly no loss in accuracy and 4.5x with under 2% loss in accuracy. We calculate QPS as the mean number of queries per second with a batch size of 1 and a max sequence length of 32 on a T4 GPU. Impact on retrieval accuracy is measured by the relative drop in retrieval accuracy at 100. Seeing the bottleneck caused by trying to train compressed models for retrieval, we explore approaches to compress models after training. By doing so, it becomes cheaper to evaluate the impact of compression on retrieval and to generate variants of many sizes. In this paper, we explore the role of asymmetry in the size of query and document encoders that leverage language models. Through experiments on several benchmarks, we demonstrate that our approach can significantly reduce the number of parameters in the bi-encoder model without sacrificing performance. 
As shown in figure 1, the combination of asymmetric bi-encoders and post-training KALE allows for 3x more QPS than an uncompressed bi-encoder with less than 1% loss in accuracy and nearly 5x with less than 2%. Building on the favorable implications of asymmetry for efficient inference, we introduce a compression mechanism called **K**ullback-Leibler **A**lignment of **E**mbeddings (KALE). KALE uses an alignment of representations to compress models without requiring any form of retraining or index regeneration. To ground our approaches, we evaluate the effectiveness of KALE and asymmetry on several benchmark datasets and compare the results to existing efficient inference approaches. The following research questions drive our work: * Is the performance of dense retrieval methods more driven by the query or document encoder size? * Is it possible to compress query encoders without retraining and index regeneration? * How can dense retrieval asymmetry and post-training alignment be leveraged to improve query encoder latency? It is in answering these questions that we deliver the following contributions: * We present the first robust studies on the role of document-query encoder symmetry, demonstrating that the size of the document encoder dominates performance. * We introduce KALE, a post-training compression and alignment approach, and demonstrate its effectiveness. * We empirically demonstrate on various benchmarks how asymmetric compression can lead to 4.5x better QPS with 1% loss in recall accuracy at 100. ## 2 Related Work **Transformer Based Language Models** such as BERT Devlin et al. (2019) provide contextual language representations built on the Transformer architecture Vaswani et al. (2017) which can be specialized and adapted for specific tasks and domains Lee et al. (2020). Using contextual word representations, it becomes relatively easy to excel at a broad range of natural language processing tasks such as question answering, text classification, and sentiment analysis. **Bi-Encoders**, commonly called dual-encoders or dense retrievers, decompose ranking by leveraging the inner product of query and document representations to produce a relevance score for query-document pairs. While not as accurate as cross-encoders Reimers and Gurevych (2019), they are more efficient for inference and easier to deploy. Bi-encoder document representations are query invariant, allowing them to be pre-computed and loaded into an Approximate Nearest Neighbor (ANN) index such as FAISS Johnson et al. (2019). At runtime, a query is encoded into the latent space, and the \(k\) nearest documents are retrieved using a nearest neighbor algorithm such as HNSW Malkov and Yashunin (2016). Since the entire document index has already been created, the retrieval latency is limited to a single call of the query encoder. Bi-encoders commonly leverage LLMs such as BERT Devlin et al. (2019) to retrieve short passages of text, leading to the task descriptor of Dense Passage Retrievers (DPR) Karpukhin et al. (2020). Driven by their efficiency in deployment and relevance performance, DPR-based models have rapidly become the building blocks for systems doing product search Magnani et al. (2022), open domain question answering Karpukhin et al. (2020), and customer support Mesquita et al. (2022). **Efficient Inference** studies methods and models which decrease the model execution cost while minimizing the losses to model performance. Knowledge Distillation Hinton et al. 
(2015) is a training method where a model, called the _student_, learns to emulate a _teacher_ model, which is commonly larger or better performing than the _student_. Unstructured pruning removes individual weights or groups of weights in a model by applying a mask or setting the weight values to 0. When paired with a sparsity-aware inference engine, it is possible to gain 3-5x speedups in inference throughput with little to no loss in accuracy Kurtic et al. (2022). Structured pruning removes fundamental structural components in a language model, such as individual attention heads Voita et al. (2019) or entire model layers Sanh et al. (2019). Removing entire model layers is one of the most pervasive approaches, as latency gains are easy to realize, and pruning is straightforward. While their training regimes may differ, models like DistilBERT Sanh et al. (2019), TinyBERT Jiao et al. (2020), and MiniLM Wang et al. (2020) leverage structural pruning as a way of generating 2-10x speedups. Methods like quantization Pouransari and Tuzel (2020); Zafrir et al. (2019), early exiting Xin et al. (2020), or token pruning Kim et al. (2021) have been effective in other NLP tasks. Still, our work primarily focuses on structured pruning and its relationship with asymmetry. We leave studying the impacts of asymmetry on these compression methods to future work. **Asymmetrical deep learning** broadly refers to any non-uniformity in shape or attribute of models. Traditional modeling approaches favor uniformity, as it is preferable for optimization algorithms Mihaylova and Martins (2019), and using models for inference should match training as closely as possible Ranzato et al. (2015), since improvements in training loss during optimization result in improvements in model performance during inference. However, this does not account for cost or latency asymmetries during usage. Kasai et al. (2020) demonstrated how the sequence-to-sequence encoder depth dominates language model performance for machine translation. Tay et al. (2021) extend this work by finding a _Deep-Narrow_ configuration which shows that, for broad language modeling, it is possible to have 50% fewer parameters and 40% faster inference with no loss in accuracy. **Embedding Distillation** Concurrent to our work on bi-encoder compression, Kim et al. (2023) study how distillation in embeddings leads to general compression of bi-encoders and cross-encoders. Our work differs from theirs as we focus on the role of asymmetry between query and document encoders and how to leverage it for improved inference efficiency. ## 3 Method The use of representation models for retrieval begins with a document space \(d\) and a query space \(q\), each of which is generated by some model \(m\). Models do not need to share the same initialization, shape, or size, but their representation vectors must share the same dimension unless some projection is used. These two models learn a notion of relevance by training to minimize the distance of positive query-document pairs as shown in equation 1, where **x** is a query vector, **y** is a document vector, and \(\cdot\) denotes the dot product of the vectors. \[L=1-\frac{\textbf{x}\cdot\textbf{y}}{|\textbf{x}||\textbf{y}|} \tag{1}\] The query and document encoder models are commonly initialized with a pre-trained language model such as BERT. 
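As a concrete illustration, a minimal PyTorch sketch of the loss in equation 1 for a batch of positive pairs might look as follows; the batch size and embedding dimension are arbitrary stand-ins.

```python
import torch
import torch.nn.functional as F

def cosine_distance_loss(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Equation 1: L = 1 - (x.y)/(|x||y|), averaged over a batch of
    positive query/document embedding pairs of shape [batch, dim]."""
    return (1.0 - F.cosine_similarity(x, y, dim=-1)).mean()

# illustrative usage with random stand-in embeddings
x = torch.randn(8, 768)  # query vectors
y = torch.randn(8, 768)  # document vectors
print(cosine_distance_loss(x, y))
```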
Then, using pairs of labels for positive relevance scores for queries and documents, the models are trained to minimize the distance between queries and their relevant documents Karpukhin et al. (2020). While it is common practice to initialize the query encoder and document encoder with identical language models, this ignores the cost asymmetry of the usage patterns. The document encoder is usually only used once during a large-scale batch generation of the index. Index generation happens in a latency-insensitive environment and can easily leverage many GPUs and large batch sizes to improve efficiency. The query encoder runs every time a user issues a query, which can happen irregularly and sporadically. The query encoder responds to each user query independently. Thus, query encoders often use a batch size of 1 and commonly leverage small inference-optimized hardware like the T4 GPU or small CPUs. Since the document encoder does not run very often, any improvement in latency produces a single fixed gain utterly dependent on the corpus size and index refresh cycle. The query encoder's user-facing nature means latency improvements occur whenever a user queries. ### Role of model symmetry with Bi-encoders Since the query encoder runs many times online and the document encoder runs once, offline, we question: Is there some form of asymmetry between the query encoder and the document encoder that can be exploited? Do the two encoders need to be compressed symmetrically? To answer this question, we explore the impact on performance of pruning the query and document encoders on the NQ passage retrieval dataset (Kwiatkowski et al., 2019). Using a BERT-base uncased model with 12 transformer encoder layers, we generate structurally pruned models with 9, 6, 3, 2 and 1 layers, as sketched below. We also further pre-train the three- and six-layer models using knowledge distillation, represented as \(6_{KD}\) and \(3_{KD}\), from a 12-layer model on the Wikipedia-book corpus similar to distilBERT (Sanh et al., 2019). Then, using each of these models, we train dense retrieval models on the NQ passage retrieval dataset with variations of query and document models, resulting in 72 variants. With each of these models, we generate a full index and evaluate retrieval performance on the development portion of the dataset. We do not tune any parameters to avoid overfitting and to explore asymmetry without overoptimizing. Each model's retrieval accuracy is evaluated with retrieval sets of depth 20, 100, and 200. We compare the impact of varying the encoders to the uncompressed baseline and a distilBERT model (denoted by \(6_{db}\)). Looking at the impact of asymmetric compression as shown in table 1, we see that the impact of compression is more pronounced with a small recall set, as the retrieval accuracy impact at 20 is 3x that at 200. As shown in table 1, we observe major accuracy gains by fine-tuning the pruned models, with a 4% gap between \(6\) and \(6_{KD}\) and an 8% gap between \(3\) and \(3_{KD}\) for recall at 20 on the NQ dataset. 
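The structural pruning step referenced above can be sketched as follows with Hugging Face Transformers; keeping the bottom layers is one common choice, and the layer-selection policy here is an assumption rather than the paper's stated recipe.

```python
import torch
from transformers import AutoModel

def prune_to_layers(model_name: str = "bert-base-uncased", keep=(0, 1, 2)):
    """Structurally prune a BERT encoder down to the given transformer layers."""
    model = AutoModel.from_pretrained(model_name)
    model.encoder.layer = torch.nn.ModuleList(model.encoder.layer[i] for i in keep)
    model.config.num_hidden_layers = len(keep)
    return model

student = prune_to_layers(keep=(0, 1, 2))  # a 3-layer variant
print(student.config.num_hidden_layers)
```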
Looking at the impact of asymmetry of the depth of encoders, as shown in table 2 and figure 2, we find that the sizes of the query and document encoders cause similar impacts on retrieval accuracy. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline Layers enc & Top 20 & Impact & Top 100 & Impact & Top 200 & Impact \\ \hline 12 & 79.86\% & 0.00\% & 85.84\% & 0.00\% & 88.42\% & 0.00\% \\ \hline \(6_{db}\) & 73.88\% & -7.49\% & 84.74\% & -1.29\% & 87.26\% & -1.31\% \\ \hline 9 & 73.41\% & -8.08\% & 83.68\% & -2.51\% & 86.51\% & -2.16\% \\ \hline \(6_{KD}\) & 75.04\% & -6.04\% & 85.15\% & -0.80\% & 87.45\% & -1.10\% \\ \hline \(6\) & 71.69\% & -10.23\% & 83.30\% & -2.96\% & 86.04\% & -2.69\% \\ \hline \(3_{KD}\) & 73.32\% & -8.19\% & 83.43\% & -2.80\% & 86.20\% & -2.51\% \\ \hline 3 & 69.36\% & -16.20\% & 80.61\% & -6.09\% & 84.49\% & -4.45\% \\ \hline 2 & 66.87\% & -16.27\% & 80.42\% & -6.33\% & 86.38\% & -5.17\% \\ \hline 1 & 54.96\% & -31.18\% & 71.88\% & -16.26\% & 76.73\% & -13.22\% \\ \hline \end{tabular} \end{table} Table 1: Impact of structural pruning before fine-tuning on retrieval accuracy on the NQ passage retrieval dataset. Figure 2: Measuring the impact on recall at 20 on the NQ retrieval dataset by varying the number of transformer layers for the query encoder and document encoder. A retriever with 3 layers in the query encoder and 12 in the document encoder loses 11.9% of its retrieval accuracy and 12.55% when the sizes of the document encoder and query encoders are flipped. These asymmetric retrievers perform better than the symmetric 3-layer models, which lose 16.2%, highlighting the ability to improve retrieval performance through non-uniform compression. It is worth noting that having a larger document encoder is preferable to a larger query encoder, which supports the notion that the document encoder is more important than the query encoder (Li and Lin, 2021). Similar results can be seen with the introduction of fine-tuned three- and six-layer models as shown in table 6. Unsurprisingly, KD-optimized language models outperform non-distilled models, and any asymmetrical variant that leverages a distilled model outperforms the un-distilled variant. Without further optimization, a model with a distilled 3-layer query encoder and a 12-layer document encoder will outperform a model with symmetrical 6-layer models despite being 2x faster. ### Inference Benchmarks To evaluate the impact of structural pruning, we benchmark inference speeds of query encoding while varying the number of transformer layers. We perform benchmarking using an Intel Xeon Gold 6238R Processor and a T4 Nvidia GPU. For each model, we evaluate the performance on encoding 6500 queries with a batch size of one and a max context length of 32. For CPU inference, we evaluate the performance of models using the ONNX library 1, and for GPU inference, we evaluate native PyTorch inference. We repeat each run five times to ensure consistency and report the mean. Summary statistics can be found in table 3, and full results, including percentiles, standard deviations, and confidence intervals, can be found in the appendix. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Layers & size & composed size & method & QPS & Speedup \\ \hline 12 & 418 & 387 & GPU & 105.823 & 1.00 \\ \hline 9 & 337 & 512 & GPU & 139.494 & 1.23 \\ \hline 6 & 256 & 236 & GPU & 172.338 & 1.63 \\ \hline 3 & 175 & 161 & GPU & 299.43 & 2.83 \\ \hline 2 & 148 & 136 & GPU & 441.422 & 4.17 \\ \hline 1 & 121 & 111 & GPU & 660.04 & 6.24 \\ \hline 12 & 418 & 387 & CPU & 47.2728 & 1.00 \\ \hline 9 & 337 & 212 & CPU & 63.24 & 1.34 \\ \hline 6 & 256 & 236 & CPU & 90.366 & 1.91 \\ \hline 3 & 175 & 161 & CPU & 160.02 & 3.31 \\ \hline 2 & 148 & 136 & CPU & 229.666 & 4.86 \\ \hline 1 & 121 & 111 & CPU & 378.534 & 8.01 \\ \hline \end{tabular} \end{table} Table 3: Variation in model throughput according to the serving method and the number of transformer layers. Structural pruning can lead to a 6x and 8x performance increase on GPU and CPU, respectively, and pruning a model to 3 layers allows a CPU to offer better inference performance than the GPU. Table 4: Impact of structural pruning with and without KALE on Accuracy at 100 across various datasets. ## 4 KL Alignment of Embeddings While training asymmetric models can improve latency, it requires novel training regimes and experimentation, and existing workloads need to regenerate their entire index to take advantage of any inference speedups. Generation of the passage index can take longer than model training (Karpukhin et al., 2020), which makes regenerating a new index and retraining a model to meet changing latency requirements an inefficient experimentation pathway. Moreover, coupling asymmetry into training makes generating query encoder variants more difficult, as each encoder requires its own index and document encoder. Motivated by this bottleneck, we introduce **K**ullback-Leibler **A**lignment of **E**mbeddings (KALE), a simple method of improving bi-encoder latency by aligning the embeddings of compressed models. KALE is applied after model training and leverages large batch sizes to make compression **computationally inexpensive** and **independent of training**. On a single V100 GPU, KALE can produce a compressed query encoder in less than 5 minutes. First, a bi-encoder model trains with separate query and document encoders. When training is complete, the document encoder, \(e_{document}\), is frozen, and using the query encoder, \(e_{q}\), a structurally pruned copy, \(e_{q^{\prime}}\), is made. Then, using a sample of queries, the \(e_{q^{\prime}}\) model is fine-tuned to minimize the KL divergence of their query representations as shown in equation 2. While the KL divergence is a measure of differences in probability distributions, it has been applied successfully for representation alignment (Kim et al., 2023). To leverage it, we treat each of the representation vectors as a probability distribution over a set of logits. \[D_{\text{KL}}(e_{q^{\prime}}\parallel e_{q})=\sum_{x\in\mathcal{X}}e_{q^{ \prime}}(x)\log\left(\frac{e_{q^{\prime}}(x)}{e_{q}(x)}\right). \tag{2}\] We explored the use of various distance functions such as cosine similarity, Manhattan distance, and the KL divergence but found little success with any metric besides the KL divergence. 
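A minimal PyTorch sketch of the alignment objective in equation 2 is given below; it treats each embedding as a distribution over its coordinates via a softmax, uses the temperature of one and scaling factor of ten mentioned later in the text, and all names and the batch handling are illustrative.

```python
import torch
import torch.nn.functional as F

def kale_loss(student_emb, teacher_emb, temperature=1.0, scale=10.0):
    """KL(e_q' || e_q) from equation 2, with each embedding softmax-normalized
    so it can be treated as a probability distribution over its coordinates."""
    log_p = F.log_softmax(student_emb / temperature, dim=-1)  # pruned encoder e_q'
    log_q = F.log_softmax(teacher_emb / temperature, dim=-1)  # frozen encoder e_q
    return scale * (log_p.exp() * (log_p - log_q)).sum(dim=-1).mean()

# illustrative alignment step: only the pruned query encoder receives gradients,
# the frozen encoder's output carries no gradient
student_emb = torch.randn(32, 768, requires_grad=True)  # stand-in for e_q'(queries)
teacher_emb = torch.randn(32, 768)                      # stand-in for e_q(queries)
loss = kale_loss(student_emb, teacher_emb)
loss.backward()
print(loss.item())
```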
We believe the advantage of the KL divergence is due to us freezing the document representations; as a result, cosine distance allows the query embeddings to _drift_ more than probability distribution matching methods. To explore this further, we experiment with tuning the temperature for the KL divergence and adding a loss scaling factor, but find a temperature of one and a scaling factor of ten to be optimal. Additionally, we explored using a contrastive loss with random negatives and hard negatives mined from the trained encoder but found no positive impact for either method. We leave further exploration of training objective improvement for future work. ### Experimental Results We evaluate the effectiveness of KALE by taking uncompressed BERTBASE models and pruning them with and without KALE on a variety of well-established passage retrieval benchmarks. First, models are trained, and indexes are generated using un-optimized BERTBASE models. Next, the document encoders are frozen, and the query encoders are structurally pruned to have 9, 6, 3, 2, or 1 transformer layers. Finally, query encoders are aligned using KALE, and we compare the performance of compressed models by comparing the impact on retrieval accuracy at 20, 100, and 200. To aid reproducibility, each model is trained using the Tevatron (Gao et al., 2022) 2 library, which makes use of Hugging Face's Transformers to provide a simple interface for exploring neural ranking models. Our experiments focus on the plain BERTBASE-uncased 12-layer transformer model. While newer, more capable models exist, the unaltered BERT model is widely used in production workloads, which our experiments seek to emulate. Our work aims not to produce the highest possible retrieval accuracy for a dense encoder. Instead, our goal is to find the role of asymmetry in bi-encoder models. As a result, we leverage the well-established parameters in all of our experiments without using an advanced methodology like contrastive or curriculum learning. There are few parameters for using KALE, and we deliberately do not optimize on anything but the loss between \(e_{q}\) and \(e_{q^{\prime}}\). In general, higher degrees of pruning require longer training with smaller batches. Figure 3: Impact of structural pruning with and without KALE on the NQ, MSMARCO, TriviaQA, SciFact, and SQuAD passage retrieval datasets with recall set sizes of 20, 100, and 200. Across datasets, we see a consistent trend where KALE is effective, but most effective when the network is heavily pruned and recall set sizes are small. When the model is pruned to 2 or 1 layers with a recall set size of 20, the difference between using KALE or not can be up to 10 times the loss in recall accuracy. **Datasets** We use a wide variety of standard dense retrieval benchmarks, including MSMARCO V1.1 3(Campos et al., 2016), NQ Passage Ranking 4(Kwiatkowski et al., 2019), SciFact Passage Ranking 5(Wadden et al., 2020), TriviaQA Passage Ranking 6(Joshi et al., 2017), and SQuAD Passage Ranking 7(Rajpurkar et al., 2016). 
Footnote 3: [https://huggingface.co/datasets/Tevatron/msmarco-passage](https://huggingface.co/datasets/Tevatron/msmarco-passage) Footnote 4: [https://huggingface.co/datasets/Tevatron/wikipedia-nq](https://huggingface.co/datasets/Tevatron/wikipedia-nq) Footnote 5: [https://huggingface.co/datasets/Tevatron/scifact](https://huggingface.co/datasets/Tevatron/scifact) Footnote 6: [https://huggingface.co/datasets/Tevatron/wikipedia-trivia](https://huggingface.co/datasets/Tevatron/wikipedia-trivia) Footnote 7: [https://huggingface.co/datasets/Tevatron/wikipedia-squad](https://huggingface.co/datasets/Tevatron/wikipedia-squad) For each dataset, we evaluate performance by measuring the recall accuracy with retrieval depths of 20, 100, and 200. Additionally, for the MSMARCO dataset, we also report MRR@10; for SciFact, we also report NDCG@10 and RR@10. **Computational Experiments** Our experimentation on fine-tuning our compressed models uses a 16 GB V100 GPU. Experiments in bi-encoder model training leverage 1 V100 for MSMARCO and 4 for each other experiment. Due to the vast number of models and datasets we train on, each experiment happens with the same fixed seed. ### Evaluating KALE We compare the performance of using KALE for post-training compression in figure 3 across the five datasets and see a fairly consistent trend. When the recall set is small and the query encoders are pruned to a high degree, the impact of KALE is most visible, often driving over 50% improvements in retrieval accuracy. Additionally, using KALE allows the models to have a steady and gradual drop in recall accuracy relative to speedup instead of the sharp drop shown by the regular usage of structural pruning. Without KALE, post-training compression causes a 20-50% loss in retrieval accuracy. With the use of KALE, these losses are cut to 1-10%. In practice, this allows using one- or two-layer encoder models running with CPU-based inference with minor impacts on accuracy. We also notice a surprising performance improvement between 3- and 2-layer query encoders with and without KALE. We believe this shows the phenomenon studied elsewhere: the first and last layers do most of the work (Oh et al., 2022). ### Aiding Asymmetry with KALE Seeking to optimize compression further, we combine KALE with asymmetrical fine-tuning and evaluate the results similarly to our earlier experiments. Results on the impact of KALE and asymmetry on the five datasets for recall accuracy at 100 can be found in table 5, where \(3_{kd}-6_{kd}\) denotes a three-layer query encoder with a six-layer document encoder, and \(3_{kd}-3_{kd}\) denotes dual three-layer encoders. Full results and metrics for each task can be found in the appendix. First, it is immediately observable that post-training compression via KALE performs worse than models natively designed for that size. We believe this is due to the convergence of the KALE models to have _some distance_ from the uncompressed model because of dropout. We experimented with not using dropout in KALE, but model performance quickly suffered. Looking at the best retrieval accuracy vs. the model speedups shown in figure 4, we can see a substantial variation in the impact of compression across datasets. In tasks like SciFact, it is possible to get over 4x speedup while improving accuracy, while on tasks like SQuAD, even minor speedups lead to major losses in accuracy. We believe this variation is driven by the relative difficulty of each dataset, where easier tasks are more compressible than harder tasks. 
We believe these variations in results highlight the utility of post-training compression methods like KALE. Given the task variability in the impact of compression, iteration speed and cost are essential to effectively tuning model inference speed and accuracy. ## 5 Limitations While our work makes a broad study of how to improve model efficiency, our scope is limited. Our work is limited to the usage of BERT-base, and it is not clear how our compression approaches scale to more varied architectures like the sequence-to-sequence models used by DocT5 Lee et al. (2022), more optimized models like RoBERTa Liu et al. (2019), or compressed models like MiniLM Wang et al. (2020). ## 6 Conclusion and Future Work In this work, we have demonstrated how the use of asymmetry between the query and document encoders in bi-encoder models can be leveraged for improved inference efficiencies across CPUs and GPUs. Using our post-training compression framework, KALE, we can compress models up to 6x with little loss in accuracy. Compressing models without regenerating the document index or the document encoder makes it practical to have many query encoders tailored to each use case's latency needs. In the future, we wish to study how asymmetry in retrieval can be implemented with models which are widely different and may have different hidden sizes, such as using MiniLM for the query model and RoBERTa-Large for the document model.
2306.00217
FEED PETs: Further Experimentation and Expansion on the Disambiguation of Potentially Euphemistic Terms
Transformers have been shown to work well for the task of English euphemism disambiguation, in which a potentially euphemistic term (PET) is classified as euphemistic or non-euphemistic in a particular context. In this study, we expand on the task in two ways. First, we annotate PETs for vagueness, a linguistic property associated with euphemisms, and find that transformers are generally better at classifying vague PETs, suggesting linguistic differences in the data that impact performance. Second, we present novel euphemism corpora in three different languages: Yoruba, Spanish, and Mandarin Chinese. We perform euphemism disambiguation experiments in each language using multilingual transformer models mBERT and XLM-RoBERTa, establishing preliminary results from which to launch future work.
Patrick Lee, Iyanuoluwa Shode, Alain Chirino Trujillo, Yuan Zhao, Olumide Ebenezer Ojo, Diana Cuevas Plancarte, Anna Feldman, Jing Peng
2023-05-31T22:23:20Z
http://arxiv.org/abs/2306.00217v2
# FEED PETs: Further Experimentation and Expansion on the Disambiguation of Potentially Euphemistic Terms ###### Abstract Transformers have been shown to work well for the task of English euphemism disambiguation, in which a potentially euphemistic term (PET) is classified as euphemistic or non-euphemistic in a particular context. In this study, we expand on the task in two ways. First, we annotate PETs for vagueness, a linguistic property associated with euphemisms, and find that transformers are generally better at classifying vague PETs, suggesting linguistic differences in the data that impact performance. Second, we present novel euphemism corpora in three different languages: Yoruba, Spanish, and Mandarin Chinese. We perform euphemism disambiguation experiments in each language using multilingual transformer models mBERT and XLM-RoBERTa, establishing preliminary results from which to launch future work. ## 1 Introduction Detecting and interpreting figurative language is a rapidly growing area in Natural Language Processing (NLP) (Chakrabarty et al., 2022; Liu and Hwa, 2017). Unfortunately, little work has been done on euphemism processing. Euphemisms are expressions that soften the message they convey. They are culture-specific and dynamic: they change over time. Therefore, dictionary-based approaches are ineffective (Bertram, 1998; Holder, 2002; Rawson, 2003). Euphemisms are often ambiguous: their figurative and non-figurative interpretation is often context-dependent; see Table 1 for examples. Thus, existing work refers to these expressions as potentially euphemistic terms (PETs). State-of-the-art language models such as transformers perform well on many major NLP benchmarks. Recently, an attempt has been made to determine how these models perform in the euphemism disambiguation task (Lee et al., 2022), in which an input text is classified as containing a euphemism or not. The described systems report promising results; however, without further analysis and experimentation, it is unclear what transformers are capturing in order to perform the disambiguation, and the full extent of their ability in other languages. To address this, the present study describes two experiments to expand upon the euphemism disambiguation task. In the first, we investigate a pragmatic property of euphemisms, vagueness, and use human annotations to distinguish between PETs which are more vague (vague euphemistic terms, or VETs) versus less vague. We then experiment with transformers' abilities to disambiguate examples containing VETs versus non-VETs, and find that performance is generally higher for VETs. While we are unable to ascertain the exact reason for this discrepancy, we analyze the potential implications of the results and propose follow-up studies. In the second experiment, we create novel euphemism corpora for three other languages: Yoruba, (Latin American and Castilian) Spanish, and Mandarin Chinese. Similarly to the English data, examples are obtained using a seed list of PETs, and include both euphemistic and non-euphemistic instances. We run initial experiments using multilingual transformer models mBERT and XLM-RoBERTa, testing their ability to classify them. The results establish preliminary baselines from which to launch future multilingual and cross-lingual work in euphemism processing. ## 2 Previous Work In the past few years, there has been an interest in the NLP community in computational approaches to euphemisms. Felt and Riloff (2020) present the first effort to recognize euphemisms and dysphemisms (derogatory terms) using NLP. 
The authors use the term _x-phemisms_ to refer to both. They used a weakly supervised algorithm for semantic lexicon induction (Thelen and Riloff, 2002) to generate lists of near-synonym phrases for three sensitive topics (lying, stealing, and firing). The important product of this work is a gold-standard dataset of human x-phemism judgements showing that sentiment connotation and affective polarity are useful for identifying x-phemisms, but not sufficient. While the performance of Felt and Riloff (2020)'s system is relatively low and the range of topics is very narrow, this work inspired other research on euphemism detection. Thus, Zhu et al. (2021) define two tasks: 1) euphemism detection (based on the input keywords, produce a list of candidate euphemisms) and 2) euphemism identification (take the list of candidate euphemisms produced in (1) and output an interpretation). The authors selected sentences matched by a list of keywords, created masked sentences (masking the keywords in the sentences), and applied the masked language model proposed in BERT Devlin et al. (2018) to filter out generic (uninformative) sentences and then generated expressions to fill in the blank. These expressions are ranked by relevance to the target topic. Gavidia et al. (2022) present the first corpus of potentially euphemistic terms (PETs) along with example texts from the GloWbE corpus. They also present a subcorpus of texts where these PETs are not being used euphemistically. Gavidia et al. (2022) find that sentiment analysis on the euphemistic texts supports that PETs generally decrease negative and offensive sentiment. They observe cases of disagreement in an annotation task, where humans are asked to label PETs as euphemistic or not in a subset of our corpus text examples. The disagreement is attributed to a variety of potential reasons, including if the PET was a commonly accepted term (CAT). This work is followed by Lee et al. (2022), who present a linguistically driven proof of concept for finding potentially euphemistic terms, or PETs. Acknowledging that PETs tend to be commonly used expressions for a certain range of sensitive topics, they make use of distributional similarities to select and filter phrase candidates from a sentence and rank them using a set of simple sentiment-based metrics. With regards to the euphemism disambiguation task, in which terms are classified as euphemistic or non-euphemistic, a variety of BERT-based approaches featured in the 3rd Workshop on Figurative Language Processing have shown promising results. Keh et al. (2022) and Kesen et al. (2022) both show that supplying the classifier with information about the term itself, such as embeddings and its literal (non-euphemistic) meaning, significantly boosts performance, among other enhancements. In a zero-shot experiment, Keh (2022) shows that BERT can disambiguate PETs unseen during training (albeit at a lower success rate), suggesting that some form of general knowledge is learned, though it is unclear what. ## 3 VET Experiments In this section, we discuss the concept of Vague Euphemistic Terms (VETs), and subsequent experiments. The linguistics literature often describes euphemisms as either 'more ambiguous' or 'vaguer' than the non-euphemistic expressions they substitute Burridge (2012); Williamson (2002); Egre and Klinedinst (2011); Russell (1923); Di Carlo (2013). 
We understand ambiguity as a countable property: an expression can have a certain number of distinct senses. Vagueness, by contrast, is not countable: it involves a continuum of meaning, or theoretically an infinite number of interpretations. However, we note that these qualities are on a "spectrum", and may not be equal for all euphemisms. See below for examples of some euphemisms which may be considered to be VETs, and others, non-VETs: _VAGUE: The funds will be used to help <neutralize> threats to the operation and ensure our success._ (Counter? Peacefully or violently? Kill? Some other form of removing power?) _VAGUE: They were really starting to like each other, but did not know if they were ready to <go all the way> yet._ (Start dating? Have sexual intercourse? Begin or complete some other process?) _NONVAGUE: As part of their restructuring, the company will <lay off> part of their workforce by next week. NONVAGUE: There is always gossip about who <slept with> who on the front page of the magazine._ \begin{table} \begin{tabular}{l|l} Non-euphemistic & Euphemistic \\ \hline Asked to choose between jobs and the environment, & This summer, the budding talent agent was \\ a majority – at least in our warped, & between jobs and free to babysit pretty much \\ first-past-the-post system – will pick jobs. & any time. \\ \hline Managers and scientists switch between jobs in private & The couple say that they employ some great \\ industry and government in USA in a manner & baristas and are looking to train more as the \\ perhaps not yet noticeable in India. & business expands, they emphasise that it \\ & is a job offering a great career and not just \\ & for students and those between jobs. \\ \hline \hline \end{tabular} \end{table} Table 1: Euphemistic and non-euphemistic interpretations are context-sensitive. Ambiguity of _between jobs_ (Retrieved from the News on the Web Corpus, October 6, 2021). Additionally, Gavidia et al. (2022) and Lee et al. (2022) observed that there are different kinds of potentially euphemistic terms (PETs). One distinction they suggest is 'commonly accepted terms' (CATs), which are so commonly used in a particular domain that they may have less pragmatic purpose (intention to be vague/neutral/indirect/etc.) than other euphemisms. Some examples of PETs which may be CATs are "elderly", "same-sex", and "venereal disease". Humans may disagree on whether these terms are euphemistic in context, since CATs may be viewed as "default terms" rather than a deliberate attempt to be euphemistic. Notably, since many of the PETs under investigation are established expressions, we expect a fair amount to be non-vague; i.e., modern speakers of the language should precisely understand what the term means. The differences described above may be a factor in computational attempts to work with euphemisms; e.g., some examples may be harder to disambiguate. To investigate this, we assess transformers' performances on examples annotated to be "vague" versus those that are "non-vague". However, defining and determining the relative vagueness of an expression is not a trivial task. Below, we describe our methodology for obtaining vagueness labels, experimental results and follow-up analyses. ### Methodology #### 3.1.1 Vagueness Labels To examine correlations between model performance and vagueness, we first aim to label each PET with a binary label (0 for non-vague, and 1 for vague).
Existing computational methods for measuring vagueness are primarily lexically driven, using a dictionary of "vague terms", such as "approximately" or gradable adjectives like "tall" (Guelorget et al., 2021; Lebanoff and Liu, 2018), and do not fit our use case. Thus, we consider human-annotation approaches. However, in discussions with authors and annotators, we found that there was significant disagreement on what is meant by "vagueness", and how it should be defined for this task. Lacking clear instructions for explicitly annotating vagueness, we opted for an indirect annotation task. In this task, we asked annotators to replace the PET with a more direct paraphrase (if possible), and we use the similarities among annotators' paraphrases as a proxy for "vagueness". Intuitively, if annotators give dissimilar responses for a particular PET, then this indicates the PET is open to multiple interpretations, and thus a VET. \begin{table} \begin{tabular}{l|l} Non-euphemistic & Euphemistic \\ \hline pregnant woman & woman in a certain condition \\ aged care institution & home, hostel, house, cottage, village, residence \\ old age & certain age \\ false statements & alternative facts \\ war & special military operation/campaign \\ we have to change and do something we aren't used to & we must reach beyond our fears \\ being out of work & being in transition \\ a lack of consistent access to enough food for an active healthy life & food insecurity \\ prison & correctional facility \\ blind & visually challenged, visually impaired \\ \hline \end{tabular} \end{table} Table 2: Euphemisms are vaguer than the expressions they substitute. The way we computed the labels was as follows: 1. We supply annotators with a randomly selected example of each PET from the Euphemism Corpus; if a PET was ambiguous, both a euphemistic and a non-euphemistic example was supplied, resulting in an annotation task of 188 examples. A total of 6 linguistically-trained annotators were recruited. Annotators were then supplied with these instructions: _"For this task, you will read through text samples and decide how to paraphrase a certain word/phrase in the text. Each row will contain some text in the "text" column containing a particular word/phrase within angle brackets < >. In the "paraphrase" column, please try to replace the word/phrase with a more direct interpretation. If you can't think of one, then answer with the original word/phrase."_ 2. Sentence-BERT (Reimers and Gurevych, 2019) was then used to generate embeddings of the annotators' responses. The cosine similarities between the embeddings were computed for each example and acted as an automatic measure of similarity between responses. See Table 3 for sample responses and the respective cosine similarity scores between them. 3. While this transformer-based similarity score generally captured semantic similarity well for strong cases of similarity or dissimilarity (e.g., see rows 2 and 3 of Table 3), we found that there were several "borderline cases" in which the score did not accurately reflect the semantic similarity between responses. For instance, annotators sometimes "over-paraphrased" non-euphemistic examples, providing responses with significant lexical differences (e.g., the non-euphemistic usage of the word "expecting" was paraphrased as "expecting", "anticipating", "foreseeing", etc.), which led to a low cosine score despite the responses being semantically similar by human judgment.
Therefore, based on an examination of such borderline cases, we used the automatic method to assign a label of 0 (non-vague) to examples with a cosine score greater than 0.65, a label of 1 (vague) to examples with a score lower than 0.50, and manually annotated all examples in between. See Table 3 for sample responses, and the label they resulted in. 4. Lastly, these labels were generalized to the rest of the dataset under the assumption that euphemistic and non-euphemistic PETs are either vague or non-vague, regardless of context. For example, the euphemistic uses of "passed away" or "lay off" are usually non-vague, while "neutralize" and "special needs" are usually vague. Table 4 shows the final distribution of vagueness labels in our dataset when using this procedure. It should be noted that this is an experimental procedure for approximating human labels of vagueness, in lieu of a more established method. In particular, the generalization that all PETs are vague or not regardless of context is a strong assumption. We leave exploring alternate methods of annotating vagueness for future work. \begin{table} \begin{tabular}{|c|c|c|} \hline & **Vague** & **Non-Vague** \\ \hline **Euphemistic** & 408 & 975 \\ \hline **Non-Euphemistic** & 361 & 208 \\ \hline \end{tabular} \end{table} Table 4: Number of vague vs. non-vague examples in the dataset. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Text** & **Euph Label** & **Paraphrases** & **Cos Sim** & **Vague Label** \\ \hline The violent Indian & 1 & revolutionaries, reformers, & 0.53 & 1 \\ \textless{}Freedom Fighters\textgreater{} who & & anti-government activists, & & \\ fought the British were very & & insurrectionists, terrorists, & & \\ much this. [...] & & terrorists & & \\ \hline [...] He's \textless{}passed away\textgreater{} & 1 & dead, died, died, died, died, & 0.924 & 0 \\ but he started out as [...] & & died & & \\ \hline [...] were electrocuted for & 0 & smuggling, leaking, & 0.330 & 1 \\ \textless{}passing on\textgreater{} nuclear & & illegally spreading, giving, & & \\ information to Soviet & & passing on, giving away & & \\ Russia [...] & & & & \\ \hline At home, I wasn't allowed & 0 & an old enough age, a certain & 0.608 & 0 \\ to watch certain movies & & age, grown mature enough, & & \\ until I had reached \textless{}a & & maturity, adulthood, a & & \\ certain age\textgreater{}. [...] & & certain age & & \\ \hline \end{tabular} \end{table} Table 3: Sample of annotation results. The "Paraphrases" column shows the six annotators' responses, and the "Cos Sim" column shows the cosine similarity scores between embeddings of the responses. #### 3.1.2 Data and Model The euphemism dataset used for the experiments is the one created by Gavidia et al. (2022). A few modifications were made to several examples we believed to be misclassified. The final dataset contained 1952 examples, of which 1383 are euphemistic and 569 are non-euphemistic, spanning 128 different PETs. The model used for all experiments was RoBERTa-base (Liu et al., 2019). RoBERTa was fine-tuned on the data for 10 epochs, with a learning rate of 1e-5 and a batch size of 16; all other hyperparameters were at default values. Using the vagueness labels, we run classification tests in which RoBERTa is fine-tuned on both vague and non-vague examples, and then tested on both vague and non-vague examples. Then, we compute performance metrics separately for vague and non-vague examples in the test set for comparison.
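As a concrete illustration of this setup, the following is a minimal sketch of the fine-tuning configuration quoted above (10 epochs, learning rate 1e-5, batch size 16), assuming the Hugging Face transformers and datasets APIs; the two toy rows and the output directory name are illustrative placeholders rather than the actual corpus.

```python
# A minimal sketch of the RoBERTa fine-tuning setup described above,
# assuming Hugging Face transformers/datasets; the toy examples stand in
# for the 1952-example corpus (label 1 = euphemistic, 0 = non-euphemistic).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base",
                                                           num_labels=2)

raw = Dataset.from_dict({
    "text": ["This summer, the talent agent was <between jobs> and free to babysit.",
             "Managers switch <between jobs> in private industry and government."],
    "label": [1, 0],
})
ds = raw.map(lambda row: tok(row["text"], truncation=True,
                             padding="max_length", max_length=128))

args = TrainingArguments(output_dir="pet-disambiguation",  # illustrative name
                         num_train_epochs=10, learning_rate=1e-5,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=ds).train()
# Metrics can then be computed separately on the vague and non-vague slices
# of a held-out test set, as described above.
```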
In the training and test sets, the data was split as evenly as possible across all labels of interest to help eliminate the impact of class imbalance on output metrics. Specifically, samples were randomly selected using the size of the smallest subgroup (vague-euphemistic, non-vague-euphemistic, etc.), and then evenly distributed into training and test sets using an 80-20 split. For example, for the vagueness data shown in Table 4, 208 is the size of the smallest subgroup, so 208 examples were randomly selected from all other subgroups for a total of 832 examples (664 train and 168 test); i.e., there were equal amounts of vague-euphemistic, vague-non-euphemistic, etc. examples in both training and test sets. Additionally, the number of unique/ambiguous PETs was approximately the same in all data splits. ### Experimental Results and Observations Table 5 shows the results of the VET experiment, which are metrics (Macro-F1, Precision, and Recall) averaged across 10 different classification runs. As aforementioned, in order to look at the effect of vagueness, we compute metrics for vague and non-vague examples separately; the first row shows the average metrics for the vague test examples in each run, while the second row shows metrics for the non-vague test examples. We observe that the performances are better for the examples marked as vague, rather than non-vague, suggesting that this is a meaningful distinction between examples. As a consequence of the annotation procedure, the immediate conclusion is that examples containing non-vague PETs (i.e., those which annotators interpreted similarly) are somehow harder to classify, while those containing VETs are easier. However, a concrete explanation of this result remains elusive. An initial hypothesis was that non-vague PETs may be more likely to be PETs which annotators disagreed on in the original dataset (Gavidia et al., 2022), but this was not necessarily the case. An error analysis of the most frequently misclassified examples leads us to a potential cause for the comparatively poor performance of the non-vague examples. We noted that a significant proportion of misclassified examples were non-euphemistic examples (which had been consistently misclassified as euphemistic by BERT). PETs in these examples appeared to co-occur with a relatively high number of "sensitive words" - words relating to sensitive topics that people may typically use euphemisms for, such as death, politics, and so on. If certain "sensitive words" are typically associated with euphemistic examples, then examples where this is not the case may mislead the classifier. In an attempt to quantify this, we use the following procedure: 1. Using a list of sensitive topics previously used for euphemism work as a starting point (Lee et al., 2022), we come up with a "sensitive word list" comprising 22 words we believe to represent a range of "sensitive topics". See Appendix A for the full list. 2. For each example, we go through each word and compute the cosine similarity with the words in our "sensitive word list" using Word2Vec (Mikolov et al., 2013). For every comparison that yields a similarity score > 0.5, we add a point to this example's "sensitivity score". 3. We then isolate the examples which were misclassified 10 or more times in the experiments, and repeat the above.
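A minimal sketch of steps 1-2 of this procedure is given below, assuming gensim's downloadable Google News Word2Vec vectors; the short sensitive-word list here is illustrative only, since the actual 22-word list appears in Appendix A.

```python
# A sketch of the "sensitivity score" computation described above, assuming
# gensim's pretrained Google News Word2Vec model; the five words below are
# an illustrative stand-in for the paper's 22-word list (Appendix A).
import gensim.downloader as api

w2v = api.load("word2vec-google-news-300")
SENSITIVE_WORDS = ["death", "war", "sex", "prison", "unemployment"]

def sensitivity_score(text, threshold=0.5):
    score = 0
    for word in text.lower().split():
        if word not in w2v:
            continue
        # One point for every (word, sensitive word) pair whose cosine
        # similarity exceeds the threshold.
        score += sum(1 for s in SENSITIVE_WORDS
                     if w2v.similarity(word, s) > threshold)
    return score

example = "the company will lay off part of their workforce by next week"
raw = sensitivity_score(example)
print(raw, raw / len(example.split()))  # raw and length-normalized scores
```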
\begin{table} \begin{tabular}{|c|c|c|c|} \hline & **F1** & **P** & **R** \\ \hline Vague & 0.853 & 0.856 & 0.854 \\ \hline Non-vague & 0.793 & 0.805 & 0.795 \\ \hline \end{tabular} \end{table} Table 5: Results from the vagueness experiments. Table 6 below shows the results of this procedure. Each row shows a particular subgroup (e.g., the first row is for the euphemistic, vague examples), the number of examples in the subgroup, and the mean "sensitivity score" for examples in the subgroup. The last column shows the score normalized by the number of words in each example. The first 4 rows of the table show that for the full corpus, sensitivity scores are higher for euphemistic examples than for non-euphemistic, regardless of vagueness. This suggests that, although euphemisms are milder alternatives to sensitive words, they tend to co-occur with other sensitive words in the context. In contrast, we observe that this trend is reversed for the frequently misclassified examples (bottom 4 rows). That is, the misclassified euphemistic examples have an unusually low sensitivity score, while non-euphemistic examples have an unusually high score. If BERT has associated sensitive words with the euphemistic label, then it may be "confused" by non-euphemistic examples which have a high occurrence of them, and vice versa. Intuitively, we speculate that this happens more frequently with non-vague examples, because usage of a non-vague PET may correlate with decreased pragmatic intent. Overall, there appears to be a correlation between the sensitivity score and misclassified examples. Unfortunately, follow-up experiments involving model interpretability and ablation did not yield concrete results, so we cannot yet claim that BERT is "paying attention" to sensitive words. We leave a more comprehensive investigation to future work. However, the vagueness distinction between PETs indicates that there are linguistic differences between examples that have a concrete impact on model performance. Future work includes investigating other pragmatic features of euphemisms in a similar fashion, such as indirectness or politeness, and in other languages besides English. ## 4 Multilingual Experiments Euphemism disambiguation thus far has focused on American English. In this section, we describe euphemism disambiguation experiments run on multilingual data. For each of the different languages, native speakers and language experts created a list of PETs, collected example texts for each PET, and annotated each text for whether the PET was being used euphemistically given the context. We then test the classification abilities of multilingual transformer models. The results are intended to show whether multilingual transformer models have the potential to disambiguate euphemisms in languages other than English, and establish preliminary baselines for the task. ### Datasets The data collection and annotation for each language are described below. Note that, while inter-annotator agreement is reported by Gavidia et al. (2022), we did not have enough annotators to report agreement for each language. However, we assume that the agreement for other languages will be similar to American English, and leave more precise metrics for future work with more annotators. #### 4.1.1 Mandarin Chinese Euphemisms are widely used in Mandarin Chinese in both formal and informal contexts, and in spoken and written language. It has been a social norm to use euphemisms to express respect and sympathy, and also to avoid certain taboos and controversies.
For example, Chinese speakers are accustomed to using euphemisms to talk about topics such as death, sexual activities and disabilities, as explicit and direct narratives can be considered inappropriate or disrespectful. In collecting the PETs, terms used mainly in ancient Chinese were excluded since the corpus is contemporary. Also, the PETs were restricted to single words and multi-word expressions, rather than sentences [14]. The euphemistic terms are generated based on the language knowledge of the collector, who is a native speaker of Mandarin Chinese. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Euph** & **Vague** & **Data-** & **Size** & **Mean** & **Norm** \\ & & **set** & & **Score** & **Score** \\ \hline 1 & 1 & Full & 408 & **7.94** & **0.126** \\ 1 & 0 & Full & 975 & 7.78 & 0.13 \\ 0 & 1 & Full & 361 & 5.59 & 0.094 \\ 0 & 0 & Full & 208 & **5.56** & **0.095** \\ \hline 1 & 1 & Err & 21 & **3.57** & **0.056** \\ 1 & 0 & Err & 42 & 4.36 & 0.076 \\ 0 & 1 & Err & 45 & 7.09 & 0.114 \\ 0 & 0 & Err & 35 & **8.26** & **0.13** \\ \hline \end{tabular} \end{table} Table 6: Average sensitivity scores for each subgroup of the full corpus (top 4 rows) versus frequently misclassified examples (bottom 4 rows). For the source corpus, we referred to an online Chinese corpus made by Bright Xu (username: brightmart) on Github (brightmart, 2019). The particular corpus used was news2016zh, which consists of 2.5 million news articles from 63,000 media sources from 2014 to 2016, including title, keyword, summary and text body. See Table 7 for examples of Chinese PETs. For example, 方便 means "to use the bathroom / to relieve oneself" when used euphemistically, and means "convenient" when used non-euphemistically. #### 4.1.2 Spanish Spanish, a Romance language, is the second most spoken language in the world (Lewis, 2009). For the sake of building a wide and robust corpus, it was paramount to consider all the different dialects of Spanish. Some of the countries considered are: Equatorial Guinea, Puerto Rico, Argentina, Spain, Chile, Cuba, Mexico, Bolivia, Ecuador, Paraguay, Dominican Republic, Venezuela, Costa Rica, Colombia, Nicaragua, Honduras, Guatemala, Peru, El Salvador, Uruguay, and Panama. Euphemisms are highly used in Spanish on a daily basis. Topics related to politics, employment, sexual activities or even death are widely communicated with euphemistic terms. First, a list of potentially euphemistic terms (PETs) was created using a dictionary of euphemisms as the main reference (Garcia, 2000; Rodriguez and Estrada, 1999). For extracting PETs, we relied heavily on the Real Academia Espanola (Royal Spanish Academy)1. The corpus we collected contains sentences with PETs, the PET label (euphemistic/non-euphemistic), the data source and the country of origin. For example, "Pasar un buen rato", meaning "to have/spend a good time", can be used both euphemistically and non-euphemistically. This term could be used to express involvement in a sexual activity, or to spend a good time with a friend, family member or an acquaintance. Furthermore, the phrase "Dar a luz", meaning "to give birth", is another example that comprises both uses. Women naturally give birth to babies, but women can also give birth to wonderful ideas, as can any other human being. See more examples in Table 8. Footnote 1: https://apps2.rae.es/CORPES/view/inicioExterno.view #### 4.1.3 Yoruba Yoruba is one of the major languages of Nigeria, the most populous country on the African continent (Okanlawon, 2016).
With over 50 million language users as speakers, it is the third most spoken language in Africa (Shode et al., 2022). There are many different dialects of Yoruba spoken by Yoruba people in Nigeria, Benin, and Togo, all of which are tonal (change depending on tone) and agglutinative (words are made up of linearly sequential morphemes) in nature. Euphemisms are often used in everyday Yoruba language conversations. Speakers use them to communicate sensitive topics like death and physical or mental health in a more socially acceptable manner, and to show reverence for certain people or occupations, such as "elders of the night", which refers to witches and wizards, prostitutes, and so on. Euphemisms in Yoruba are used to soften the harshness of situations; to report the death of an individual, speakers of the language mostly use indirect or subtle sentences instead of saying it directly. In NLP research, Yoruba is considered a low-resourced language because of the limited availability of data in digital formats. There is no corpus dedicated to Yoruba euphemisms available online, so PETs were collected from different sources such as news websites like BBC Yoruba and Alaroye; religious sources including the Yoruba Bible, JW.org, and transcribed Muslim and Christian sermons; Yoruba Wikipedia; the Yoruba Web corpus (YorubaWaC); blogposts, journals, research works, books, Global Voices, Nigerian song lyrics, and texts written by Yoruba native speakers; and social media platforms such as tweets, Facebook public posts, and Nairaland. Some samples of PETs are listed in Table 9. Table 7: Examples of euphemistic and non-euphemistic sentences in Mandarin Chinese. \begin{table} \begin{tabular}{l|l} **Non-euphemistic** & **Euphemistic** \\ \hline Es perfecta para divertirse, pasar un buen rato y dejarte llevar por una historia sin mas pretension. / It is perfect to have some fun, have a good time and to let yourself be carried away by a story without further pretension. & Con el proposito evidente de pasar un buen rato con ella. La chica no era muy brillante, pero lo que le faltaba de inteligencia le sobraba en curvas. / With the clear purpose of having a good time with her. The girl was not that brilliant, but her curves overshadowed her intelligence. \\ \hline Que los pocos recursos disponibles estaban comprometidos para pagar las deudas ocultas. / That the few resources are destined to pay off the hidden debt. & Para que jóvenes de pocos recursos logren alcanzar su \\ \hline \end{tabular} \end{table} Table 8: Examples of euphemistic and non-euphemistic sentences in Spanish. \begin{table} \begin{tabular}{l|l} **Non-euphemistic** & **Euphemistic** \\ \hline Táwó, góbon Funké rá lálejó néé lána to wá láti lá Ekó. / Táaiwo, Funké's elder sibling saw her visitor who came from Lagos yesterday. & Obinrin tá kó ri alejó né. / The woman who does not see her menstruation. \\ \hline A kó góbotó dáke. / We should not be quiet. & E sara gíri, babá ti dáké. / Be brave, father is dead. \\ \hline \end{tabular} \end{table} Table 9: Examples of euphemistic and non-euphemistic sentences in Yoruba.
### Methodology From each language dataset, a maximum of 40 euphemistic and non-euphemistic examples per PET were randomly chosen to be in the experimental dataset. This was done in an effort to ensure an overall balance of PETs in the data and reduce skewed label proportions for each PET. We also include American English data, sampled in the same manner, to provide a basis of comparison. The final statistics for each dataset are shown in Table 10. We test three multilingual transformer models: mBERT (Devlin et al., 2018), and XLM-RoBERTa-base and XLM-RoBERTa-large (Conneau et al., 2020). The hyperparameters used were the same as those described in Section 3.1.2. A stratified 5-fold split is used to create 5 different train-test splits of each dataset, which includes every example while preserving the 80-20 ratio used in previous experiments. ### Results and Observations Table 11 shows the performance of each model. The metrics reported are macro-F1 (F1), precision (P), and recall (R), averaged across 5 experiments. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Language** & **Total Examples** & **Euph** & **Non-Euph** & **Total PETs** & **Always-Euph PETs** & **Ambiguous PETs** \\ \hline American English & 1952 & 1383 & 569 & 129 & 71 & 58 \\ \hline Mandarin Chinese & 1552 & 1134 & 418 & 70 & 46 & 24 \\ \hline Spanish & 961 & 564 & 397 & 80 & 33 & 47 \\ \hline Yoruba & 1942 & 1281 & 661 & 129 & 62 & 69 \\ \hline \end{tabular} \end{table} Table 10: Statistics of multilingual datasets used for euphemism disambiguation experiments. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline **Language** & \multicolumn{3}{c|}{**mBERT**} & \multicolumn{3}{c|}{**XLM-RoBERTa-base**} & \multicolumn{3}{c|}{**XLM-RoBERTa-large**} \\ \cline{2-10} & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** \\ \hline American English & 0.819 & 0.876 & 0.933 & 0.765 & 0.852 & 0.894 & 0.854 & 0.907 & 0.930 \\ \hline Mandarin Chinese & 0.901 & 0.952 & 0.938 & 0.884 & 0.921 & 0.960 & 0.952 & 0.967 & 0.982 \\ \hline Spanish & 0.747 & 0.781 & 0.816 & 0.765 & 0.799 & 0.819 & 0.776 & 0.813 & 0.826 \\ \hline Yoruba & 0.729 & 0.801 & 0.859 & 0.683 & 0.771 & 0.843 & 0.667 & 0.768 & 0.814 \\ \hline \end{tabular} \end{table} Table 11: Results of euphemism disambiguation experiments on the multilingual datasets. We note several things about the results: (1) All languages performed at least decently, indicating that multilingual BERT models pick up on something to disambiguate euphemisms in each language. (2) As expected, XLM-RoBERTa-large generally performed better than XLM-RoBERTa-base, which consistently performed worse than mBERT. (3) Because of differences in each language's dataset, the results are not directly comparable. We aim to make the experimental setup more consistent for future work, but some present inconsistencies include: * The Chinese data is the only one in which the PET is consistently "identified" (i.e. surrounded) by angle brackets <>, which the classifier may have used to its advantage. (Empirically, we notice that such "identifiers" improve performance.) * The proportion of non-euphemistic examples to the entire dataset was the smallest for Chinese (27%), followed by English (29%), Yoruba (34%) and Spanish (41%). This, along with the number of ambiguous PETs, may reflect the relative "difficulty" of disambiguation for each language. * While mBERT is pretrained on Yoruba data, the XLM-RoBERTa models are not.
Thus, any sort of disambiguation capabilities shown by the XLM-RoBERTa models are notable. ## 5 Conclusion and Future Work This study presents an expansion of the euphemism disambiguation task. We describe our method for annotating vagueness, and show that this kind of pragmatic distinction may reveal interesting trends in BERT's ability to perform NLU. Namely, BERT performs better for PETs labeled as VETs, which leads us to the preliminary finding that BERT may be associating the presence of "sensitive words" with euphemisms. Corroborating this result and exploring additional properties of euphemisms are left for future work. The multilingual results show that BERT models can already disambiguate euphemisms in multiple languages to some extent, and establish a baseline from which to improve results. While continuously expanding the multilingual corpora is a must, a number of modeling aspects can be investigated as well. For instance, error analyses can be run to reveal potential misclassification trends in each language, and data and modeling improvements that were shown to work for American English can be attempted on other languages. In general, such investigations may be used to suggest useful cross-lingual features for PET disambiguation, and more broadly, universal properties of euphemisms. ## Limitations Euphemisms are culture- and dialect-specific, and we do not necessarily investigate the full range of euphemistic terms and topics covered by our selected languages. Even for "English", for instance, we do not explore euphemisms unique to "British English", though that warrants a study of its own. Additionally, as aforementioned, differences in the multilingual dataset render the results not directly comparable. For example, there are few large, structured corpora of Yoruba, so the data was taken from a variety of sources, as opposed to the other languages. Additional limitations prevent some analyses, such as limited ability to identify the PET in Yoruba due to loss of diacritics. ## Ethics Statement The authors foresee no ethical concerns with the work presented in this paper. ## Acknowledgements This material is based upon work supported by the National Science Foundation under Grant numbers: 2226006 and 1704113.
2309.10681
Social Interactions Mediated by the Internet and the Big- Five: a Cross-Country Analysis
This study analyzes the possible relationship between personality traits, in terms of Big Five (extraversion, agreeableness, responsibility, emotional stability and openness to experience), and social interactions mediated by digital platforms in different socioeconomic and cultural contexts. We considered data from a questionnaire and the experience of using a chatbot, as a mean of requesting and offering help, with students from 4 universities: University of Trento (Italy), the National University of Mongolia, the School of Economics of London (United Kingdom) and the Universidad Cat\'olica Nuestra Se\~nora de la Asunci\'on (Paraguay). The main findings confirm that personality traits may influence social interactions and active participation in groups. Therefore, they should be taken into account to enrich the recommendation of matching algorithms between people who ask for help and people who could respond not only on the basis of their knowledge and skills.
Andrea Mercado, Alethia Hume, Ivanno Bison, Fausto Giunchiglia, Amarsanaa Ganbold, Luca Cernuzzi
2023-09-19T15:04:55Z
http://arxiv.org/abs/2309.10681v1
# Social Interactions Mediated by the Internet and the Big-Five: a Cross-Country Analysis ###### Abstract This study analyzes the possible relationship between personality traits, in terms of the Big Five (extraversion, agreeableness, responsibility, emotional stability and openness to experience), and social interactions mediated by digital platforms in different socioeconomic and cultural contexts. We considered data from a questionnaire and the experience of using a chatbot, as a means of requesting and offering help, with students from 4 universities: University of Trento (Italy), the National University of Mongolia, the School of Economics of London (United Kingdom) and the Universidad Catolica Nuestra Senora de la Asuncion (Paraguay). The main findings confirm that personality traits may influence social interactions and active participation in groups. Therefore, they should be taken into account to enrich the recommendations of algorithms that match people who ask for help with people who could respond, not only on the basis of their knowledge and skills. diversity, social interactions, personality, Big-Five, conversational bot ## 1 Introduction Among other diversity dimensions, personality traits may play a relevant role in social interactions mediated by technological platforms [1, 2, 3, 4, 5, 6, 7, 8]. Moreover, the socio-cultural context may influence this diversity [9]. For this study, we analyze 4 pilot experiments that were carried out in parallel with university students in different socioeconomic and cultural contexts; that is, at the University of Trento (Italy), the National University of Mongolia, the School of Economics of London (United Kingdom) and the Universidad Catolica Nuestra Senora de la Asuncion (Paraguay), using a self-reported questionnaire to begin modeling and analyzing diversity among students based on their social practices, competencies, knowledge and motivations. One of the survey dimensions focused on personality traits. Despite some criticism [13], we adopted the widely used Big-Five model [10]: Extraversion (E), Agreeableness (A), Conscientiousness (C), Neuroticism (N) and Openness (O). Complementarily, a Chatbot application allows participants to engage in social interactions by requesting and offering help, represented as questions and answers in the application. Thus, the main objective of the research is to analyze the role played by personality in social interaction mediated by a Chatbot. This could inform machine algorithms based on artificial intelligence for recommending persons who could offer better help. ## 2 Data from the pilots The full data collection process was identically applied in all four pilot sites. In order to standardize the tools and experiences, translation to English (for a normalization of the values) and localization work were necessary to adapt them to the sociolinguistic context of each site. The organizational details, as well as the ethical and legal aspects, are described in [11]. Participants were recruited through email invitations and classified according to the area of study: STEM or No-STEM. Finally, the collected data are anonymized by each institution and made available to WeNet collaborators to inform machine learning algorithms able to enhance interactions between students and contribute to the "Diversity Model". Among the almost 13 thousand responses to the survey, which yielded about 8500 complete psychosocial profiles, we invited the target population to participate in the Chatbot experience.
Users generate both questions and answers through interactions with other students of the same institution, and may also provide suggestions on a topic of interest or simple comments. The participation was voluntary, subject to the availability and interests of the users. The following table shows the participation in the Chatbot experience in the different sites. ## 3 Analysis of results The data obtained during the experiment were analyzed in relation to the personality traits of the Chatbot users according to the Big-Five, taking into account the length of questions and answers input by the participants, and the possible effect of other sociodemographic variables (sex, area of study, and site of the pilot). For the analysis, Spearman's rank correlation test was used, as in previous work [1], but this time in combination with multinomial regression. In Table 2, the correlation between the length of questions and answers and the Big-Five is shown; the analysis is done both by pilot and in total for the whole dataset. By looking into these data for each institution, some correlations can be found. However, it can also be noted that when analyzing the dataset in this more fragmented way, the values and signs of correlations sometimes change. This can be due to the size and composition of the samples, or other elements, like the translations, that can make the error more significant in the predictions based on these results. In this sense, it is also reasonable to assume that personality characteristics, and therefore the effects they may have on users' behavior, do not change across cultures. Hence, the correlations are also analyzed over the total number of users in the entire dataset. Thus, in general a negative correlation can be identified regarding the length of questions with Neuroticism; while the length of answers shows positive correlations with Extraversion, Agreeableness and Openness, and negative ones with Conscientiousness and Neuroticism. \begin{table} \begin{tabular}{l c c c|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c}{**UNITN**} & \multicolumn{3}{c}{**LSE**} & \multicolumn{3}{c}{**NUM**} & \multicolumn{3}{c}{**UC**} \\ \cline{2-13} & **P** & **Q** & **A** & **P** & **Q** & **A** & **P** & **Q** & **A** & **P** & **Q** & **A** \\ \hline Male & 14 & 78 & 443 & 5 & 15 & 117 & 8 & 56 & 432 & 10 & 85 & 234 \\ Female & 28 & 265 & 593 & 38 & 233 & 608 & 29 & 497 & 2481 & 10 & 119 & 310 \\ \hline STEM & 16 & 100 & 460 & 7 & 120 & 442 & 21 & 496 & 2379 & 15 & 73 & 196 \\ No-STEM & 26 & 243 & 576 & 36 & 128 & 283 & 16 & 57 & 534 & 5 & 131 & 348 \\ \hline **Total** & **42** & **343** & **1036** & **43** & **248** & **725** & **37** & **553** & **2913** & **20** & **204** & **544** \\ \hline \hline \end{tabular} \end{table} Table 1: Number of participants (P), questions (Q) and answers (A), disaggregated by area of study and sex. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & Total & \multicolumn{2}{c}{LSE} & \multicolumn{2}{c}{NUM} & \multicolumn{2}{c}{UC} & \multicolumn{2}{c}{UNITN} \\ & Corr. & p. & Corr. & p. & Corr. & p. & Corr. & p. & Corr. & p.
\\ \hline \multicolumn{9}{l}{_Question_} \\ E & 0.032 & 0.249 & **-0.109*** & **0.084** & **0.072*** & **0.088** & -0.013 & 0.852 & 0.044 & 0.456 \\ A & -0.041 & 0.139 & -0.033 & 0.599 & 0.043 & 0.310 & **-0.314*** & **0.000** & -0.033 & 0.574 \\ C & 0.020 & 0.474 & 0.078 & 0.217 & -0.009 & 0.835 & **-0.203*** & **0.003** & **0.104*** & **0.081** \\ N & **-0.155*** & **0.000** & -0.028 & 0.660 & **-0.111*** & **0.009** & **-0.290*** & **0.000** & -0.088 & 0.138 \\ O & -0.019 & 0.505 & -0.008 & 0.903 & -0.022 & 0.609 & 0.024 & 0.734 & -0.090 & 0.130 \\ Events & 1306 & 255 & 558 & 207 & 286 \\ \multicolumn{9}{l}{_Answer_} \\ E & **0.0930*** & **0.000** & -0.053 & 0.150 & -0.017 & 0.330 & **0.0929*** & **0.021** & **0.1713*** & **0.000** \\ A & **0.1216*** & **0.000** & **0.1840*** & **0.000** & 0.004 & 0.803 & **0.0671*** & **0.097** & 0.036 & 0.231 \\ C & **-0.0375*** & **0.005** & 0.005 & 0.891 & **-0.0340*** & **0.053** & **0.0712*** & **0.078** & 0.048 & 0.114 \\ N & **-0.0486*** & **0.000** & 0.007 & 0.845 & **-0.0626*** & **0.000** & **-0.1365*** & **0.001** & **-0.0895*** & **0.003** \\ O & **0.0969*** & **0.000** & 0.033 & 0.370 & 0.0722* & 0.000 & 0.002 & 0.968 & 0.0737* & 0.015 \\ Events & 5688 & 750 & 3223 & 614 & 1101 \\ \hline \hline \end{tabular} \end{table} Table 2: Spearman correlation between logarithmic length of questions and answers and Big-Five. \begin{table} \begin{tabular}{l c c c c} \hline \hline & Questions & Answers & \\ & Coef. & p. & Coef. & p. \\ \hline \multicolumn{9}{l}{Length of question} & -6.7488 & 0.463 \\ \hline \multicolumn{9}{l}{E} & 0.0968 & 0.134 & 0.0752 & 0.78 \\ \multicolumn{9}{l}{A} & **-0.2232** & **0.049** & **-1.4582** & **0.001** \\ Big-five & C & -0.0265 & 0.727 & 0.3643 & 0.26 \\ \multicolumn{9}{l}{N} & -0.0522 & 0.452 & 0.3468 & 0.242 \\ O & **0.1441** & **0.066** & **-0.5899** & **0.051** \\ \hline \multicolumn{9}{l}{E*lques} & -0.0541 & 0.343 \\ Big-five & A*lques & & & **0.3586** & **0** \\ Length of question & N*lques & & & **-0.0734** & 0.288 \\ \multicolumn{9}{l}{O*lques} & -0.1432 & **0.026** \\ \multicolumn{9}{l}{O*lques} & -0.2111 & **0.001** \\ \hline \multicolumn{9}{l}{pilot (Ref. UNITN)} \\ \multicolumn{9}{l}{LSE} & -6.5271 & 0.323 & **23.5338** & **0.003** \\ \multicolumn{9}{l}{NUM} & -1.4537 & 0.823 & **-21.096** & **0.008** \\ \multicolumn{9}{l}{UC} & 8.1407 & 0.278 & -4.06 & 0.649 \\ \hline \multicolumn{9}{l}{Sex (Ref. Male)} \\ \multicolumn{9}{l}{Female} & -3.9917 & 0.487 & 0.1042 & 0.988 \\ \hline \multicolumn{9}{l}{Dep. (Ref. STEM)} \\ \multicolumn{9}{l}{No-STEM} & 0.9058 & 0.867 & -5.2058 & 0.434 \\ \hline \multicolumn{9}{l}{Cons} & 72.5146 & 0.000 & 84.7309 & 0.043 \\ \multicolumn{9}{l}{Obs.} & 115 & 105 \\ \multicolumn{9}{l}{Events} & 1318 & 5386 \\ \hline \hline \end{tabular} \end{table} Table 3: Multilevel multinomial linear regression of question-and-answer length. To further analyze dimensions that could influence participants' level of participation, Table 3 shows a multilevel multinomial linear regression of question and answer length. As can be seen, sociodemographic variables like sex and area of study (i.e., STEM, No-STEM) appear to have no effect in predicting question and answer lengths. These results also seem to confirm that, ceteris paribus with respect to personality traits and sociodemographic characteristics, there is a possible effect of cultural differences between the pilots, but only for answer length and not for question length.
However, these differences in answers seem to be due to the English translation and not to real cultural differences. On the other hand, it is confirmed that personality does have a statistically significant effect. However, in order to better quantify this effect, the length of the answer is considered also with respect to the length of the question. In other words, when faced with banal, short questions such as "How are you?", we cannot expect very long answers, regardless of the personality of the respondents. By contrast, when faced with questions that give room for further elaboration of the answer, we can expect the effects of personality traits to emerge. In this sense, the results only show a positive effect of the personality traits Agreeableness and Openness, and a negative effect of Neuroticism, which affect the richness of the response. Finally, Figure 1 shows projections of linear predicted answer lengths by Agreeableness and Openness and question length. That is, as the question becomes more articulate (more characters), so does the answer for people with high Agreeableness and Openness. ## 4 Discussion and conclusions We have found that some personality traits of participants, modeled according to the Big-Five (such as Agreeableness and Openness to experience), influence the way they request help and/or contribute to other users through a Chatbot application. Moreover, other elements like sociodemographic variables appear to have no effect in predicting question and answer lengths. With regard to potential cultural differences affecting response length, the sample is too small for a definitive conclusion. Further analysis may shed more light on the role of personality in characterizing diversity as a factor to improve Internet-mediated social interactions in different contexts. ## Acknowledgements This research has received funding from the European Union's Horizon 2020 FET Proactive project "WeNet: Internet of us", Grant Agreement No: 823783. Figure 1: Predicted answer characters by Big-five (Agreeableness and Openness) and question length.
2302.14795
3D Coronary Vessel Reconstruction from Bi-Plane Angiography using Graph Convolutional Networks
X-ray coronary angiography (XCA) is used to assess coronary artery disease and provides valuable information on lesion morphology and severity. However, XCA images are 2D and therefore limit visualisation of the vessel. 3D reconstruction of coronary vessels is possible using multiple views, however lumen border detection in current software is performed manually resulting in limited reproducibility and slow processing time. In this study we propose 3DAngioNet, a novel deep learning (DL) system that enables rapid 3D vessel mesh reconstruction using 2D XCA images from two views. Our approach learns a coarse mesh template using an EfficientB3-UNet segmentation network and projection geometries, and deforms it using a graph convolutional network. 3DAngioNet outperforms similar automated reconstruction methods, offers improved efficiency, and enables modelling of bifurcated vessels. The approach was validated using state-of-the-art software verified by skilled cardiologists.
Kit Mills Bransby, Vincenzo Tufaro, Murat Cap, Greg Slabaugh, Christos Bourantas, Qianni Zhang
2023-02-28T17:46:25Z
http://arxiv.org/abs/2302.14795v1
# 3D coronary vessel reconstruction from Bi-plane angiography using graph convolutional networks ###### Abstract X-ray coronary angiography (XCA) is used to assess coronary artery disease and provides valuable information on lesion morphology and severity. However, XCA images are 2D and therefore limit visualisation of the vessel. 3D reconstruction of coronary vessels is possible using multiple views, however lumen border detection in current software is performed manually resulting in limited reproducibility and slow processing time. In this study we propose 3DAngioNet, a novel deep learning (DL) system that enables rapid 3D vessel mesh reconstruction using 2D XCA images from two views. Our approach learns a coarse mesh template using an EfficientB3-UNet segmentation network and projection geometries, and deforms it using a graph convolutional network. 3DAngioNet outperforms similar automated reconstruction methods, offers improved efficiency, and enables modelling of bifurcated vessels. The approach was validated using state-of-the-art software verified by skilled cardiologists. Kit Mills Bransby\({}^{1}\) Vincenzo Tufaro\({}^{1,2}\) Murat Cap\({}^{1,2}\) Greg Slabaugh\({}^{1}\) Christos Bourantas\({}^{1,2}\) Qianni Zhang\({}^{1}\)\({}^{1}\) Queen Mary University of London, United Kingdom \({}^{2}\) Department of Cardiology, Barts Health NHS Trust, London, United Kingdom deep learning, 3d reconstruction, angiography ## 1 Introduction XCA is a standard procedure in the assessment of coronary artery disease, where an injection of radiopaque contrast medium into vessels enables visualisation of lumen morphology [1]. Regions of narrowing (stenosis) caused by a build up of atherosclerotic plaque that restrict blood flow to the heart can be identified, aiding treatment such as stent placement. Quantitative coronary angiography (QCA) was introduced to provide precise quantification of plaque lesions and disease progression. However, QCA only uses a 2D representation of the lumen, and therefore limits visualisations in cases of vessel overlap and foreshortening where the full morphology of the vessel is either obscured or distorted [1]. In order to overcome these problems, 3D-QCA methods have been developed to reconstruct coronary artery segments in three dimensions using two or more angiographic views. 3D-QCA models are of high clinical relevance due to their application in computational fluid dynamics where the flow of blood through coronary vessels is simulated. These approaches allow for visualisation of vessel geometry and stenosis in 3D, however they are typically semi-autonomous, time-consuming, and require manual correction of vessel segmentation or multi-step input from clinicians. Bifurcation points where the vessel splits into multiple branches affect the flow and velocity of blood, however reconstructing main and side branches in a single 3D-QCA model has been a particular challenge in past investigations. Since the inception of the ShapeNet dataset [2], mesh deformation DL algorithms such as 3DR2N2 [3], Pixel2Mesh [4], Pixel2Mesh++ [5] have demonstrated state-of-the-art results for 3D reconstruction from 2D images. This is especially applicable in medical imaging where data is often limited to one or few views and has led to 3D reconstruction of the heart in HeartFFDNet [6], liver in Xray2Shape [7] and lungs in DeepOrganNet [8]. 
Such methods offer advantages over traditional approaches as they are able to reconcile surface morphology in information poor areas, have higher performance and are computationally efficient. Despite this, no attempts have been made to reconstruct coronary vessels using these advanced DL techniques. **Our Contribution.** Based on the challenges discussed, we propose and validate a novel deep learning methodology for 3D reconstruction of coronary artery segments called 3DAngioNet. To the best of our knowledge this is the first deep learning paper to automate coronary segment reconstruction with bifurcation points and without the need for clinical correction. ## 2 Method ### System Overview 3DAngioNet uses a coarse-to-fine approach based on three modules. Firstly, a Mesh Initialisation (MI) module creates a coarse mesh using an EfficientNetB3-UNet segmentation algorithm and back-projection from 2D to 3D using stereo geometry. Secondly, a Surface Refinement (SR) module adds fine-grained detail by deforming the initial mesh using image features sampled from an encoder using a graph convolutional network (GCN). Third, in cases of bifurcation an additional step is carried out where main and side branches are stitched together using a simple Boolean operation. The model takes a pair of angiographic images as input, with corresponding acquisition geometries and segment of interest (SOI) start and end points provided by an expert clinician, and outputs a 3D mesh surface. The full framework is presented in Fig. 1. ### Mesh Initialisation Module **2D Vessel Segmentation.** As segmentation of a specific SOI is required, there are a number of challenges to overcome when using deep learning segmentation algorithms. Firstly, as the SOI is similar to other vessels in shape, texture and pixel values, it is likely a DL segmentation algorithm will try to segment the full coronary tree resulting in a high number of false positives. This was confirmed experimentally and overcome by cropping the X-ray around the SOI. Despite this, overlapping vessels in the cropped patch may still restrict learning as the choice of vessel can be ambiguous. This was mitigated by rotating images so the start and end points of the SOI are fitted to the x-axis. This strategy ensures the algorithm learns
2309.10859
Systematics in the Cepheid and TRGB Distance Scales: Metallicity Sensitivity of the Wesenheit Leavitt Law
Using an updated and significantly augmented sample of Cepheid and TRGB distances to 28 nearby spiral and irregular galaxies, covering a wide range of metallicities, we have searched for evidence of a correlation of the zero-point of the Cepheid Period-Luminosity relation with HII region (gas-phase) metallicities. Our analysis, for the 21 galaxies closer than 12.5 Mpc, results in the following conclusions: (1) The zero points of the Cepheid and TRGB distance scales are in remarkably good agreement, with the mean offset in the zero points of the most nearby distance-selected sample being close to zero, Delta mod_o(Cepheid - TRGB) = -0.026 +\- 0.015 mag (for an I-band TRGB zero point of M_I = -4.05 mag); however, for the more distant sample, there is a larger offset between the two distance scales, amounting to -0.073 +/- 0.057 mag. (2) The individual differences, about that mean, have a measured scatter of +/- 0.068~mag. (3) We find no statistically significant evidence for a metallicity dependence in the Cepheid distance scale using the reddening-free W(V,VI) period-luminosity relation: Delta mod_o (Cepheid - TRGB) = -0.022 (+/- 0.015) \times ([O/H]-8.50) - 0.003 (+/- 0.007)
Barry F. Madore, Wendy L. Freedman
2023-09-19T18:08:47Z
http://arxiv.org/abs/2309.10859v1
# Systematics in the Cepheid and TRGB Distance Scales: Metallicity Sensitivity of the Wesenheit Leavitt Law ###### Abstract Using an updated and significantly augmented sample of Cepheid and TRGB distances to 28 nearby spiral and irregular galaxies, covering a wide range of metallicities, we have searched for evidence of a correlation of the zero-point of the Cepheid Period-Luminosity relation with HII region (gas-phase) metallicities. Our analysis, for the 21 galaxies closer than 12.5 Mpc, results in the following conclusions: (1) The zero points of the Cepheid and TRGB distance scales are in remarkably good agreement, with the mean offset in the zero points of the most nearby distance-selected sample being close to zero, \(\Delta\mu_{o}\)(Cepheid - TRGB) = - 0.026 \(\pm\) 0.015 mag (for an \(I\)-band TRGB zero point of \(M_{I}\) = - 4.05 mag); however, for the more distant sample, there is a larger offset between the two distance scales, amounting to - 0.073 \(\pm\) 0.057 mag. (2) The individual differences, about that mean, have a measured scatter of \(\pm 0.068\) mag. (3) We find no statistically significant evidence for a metallicity dependence in the Cepheid distance scale using the reddening-free W(V,VI) period-luminosity relation: \(\Delta\mu_{o}(\mathrm{Cepheid}-\mathrm{TRGB})=-0.022\ (\pm 0.015)\times([\mathrm{O/H}]-8.50)-0.003\ (\pm 0.007)\) galaxies: distances and redshifts, variables: Cepheids; distances ## 1 Introduction Cepheids have been used for more than a century in probing the scale size of the Universe, beginning most notably with Henrietta Leavitt's understated discovery of _"... a simple relation between the brightness of the variables..."_ (in the Small Magellanic Cloud) _"... and their periods"_ (Leavitt & Pickering 1912). What we now understand about Cepheids and their Period-Luminosity-Color relation has been guided by theory, and tested by methodical observational programs, involving calibration at the limit of the available detectors and telescopes. Much progress has been made by taking advantage of strides in detector technology (see, for instance, McGonegal et al. 1982 and Freedman et al. 2012 for the impact of the introduction of near- and mid-infrared detectors). The scatter in the PL relation is now better understood, where known contributing factors include the intrinsic color (Sandage, 1972) and extrinsic reddening (both total line-of-sight extinction and differential reddening). Metallicity effects can also, in principle, change the colors of individual Cepheids through wavelength-dependent differential line blanketing and re-radiation in their atmospheres (Breuval et al. 2021, 2022 and Owens et al. 2022). Changes to Cepheids' interior stellar structure, radii, luminosities and pulsation properties are also manifest in systematic changes to the finite width, position and slope of the Cepheid instability strip as a whole (see Madore & Freedman 1991, updated in Freedman & Madore 2023). Thirty years ago, in their paper illustrating the promise of the Tip of the Red Giant Branch (TRGB) method as a distance indicator for resolved galaxies, Lee et al. (1993) reported two things of note in the context of this current study: (1) The two independent distance scales (Cepheids and TRGB) _agreed systematically in zero point_ at the 4% level of accuracy. (2) The individual differences in the distances to the six galaxies in their study, that also had published HII-region metallicities, _showed no trend with metallicity_ (upper-right panel of their Figure 5).
The paper was exploratory, the samples were small, and all calibrations and tests for higher-order correlations were still in their infancy. Nevertheless, that study provided a novel, differential, distance-independent test for a metallicity dependence of the Cepheid PL relation. The TRGB method uses Population II red-giant branch (RGB) stars found in the dust-free, outer halos of most galaxies, regardless of Hubble type. In the \(I\) band the TRGB is seen as a sharp discontinuity in the marginalized luminosity function of the RGB population (da Costa & Armandroff 1990; Lee, Freedman & Madore 1993). Metallicity variations within these intrinsically low-metallicity populations manifest themselves through atmospheric line-blanketing effects that shift the color of the tip of the RGB to the red with increasing metallicity, but with only a slight, color-correlated decrease in the luminosity of the tip, that is well calibrated (e.g., Rizzi et al. 2007, Freedman 2021, Hoyt 2022) as measured in the \(I\) band. As such the \(I\)-band luminosity of the TRGB discontinuity has proven itself to be an excellent, easily identified, high-precision, and now widely used distance indicator. Importantly for this comparison, these two distance indicators (Cepheids and TRGB stars) are not only spatially independent of each other (one confined to the disk, the other, which can be specifically targeted in the halo), but they also have very few systematics in common in their calibration and/or in their application, when it comes to determining distances 1. With a predetermined choice of fields, the TRGB method is optimally applied to old, low-mass, low-metallicity stars in the low-density (widely separated and uncrowded), dust- and gas-free halos of galaxies. The Cepheid PL relation is applied to young, high-mass, medium- to high-metallicity stars in the densely-populated (crowded), dusty, gas-rich disks of galaxies. For a more detailed discussion and history of the TRGB method see the introduction to the paper by Madore et al. (2023). Footnote 1: The dominant shared systematic is the geometric distances to the LMC and the maser galaxy NGC 4258 which are used to set the absolute magnitude zero point for both the Cepheids and the TRGB. And, the same foreground (Milky Way) reddenings are used for both the TRGB and Cepheid extinctions. But it has to be emphasized that since we are dealing with the differences between these two methods for each galaxy those shared systematics largely cancel. Having two high-precision distance estimates to the same galaxies allows one to undertake direct, differential tests for systematic differences between the two methods. The first, and most obvious test is to look for any first-order offsets between the distances being measured by one method as compared (on a galaxy-by-galaxy basis) to distances being measured by the second method. Other tests can seek out higher-order correlations of these same differences as a function of other physical variables that might conceivably influence the luminosities of the stars involved. Some obvious parameters worth considering are: (a) the host galaxy type, (b) the absolute magnitude (mass) of the host galaxy, (c) the metallicity of disk gas, out of which the Cepheids were recently formed, (d) the surface brightness of the disk in which the Cepheids are found, and (e) the distance to the host galaxy, where its surface brightness can act as a proxy for crowding and confusion. 
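In practice, the simplest such test reduces to a straight-line fit of the per-galaxy modulus differences against one of these parameters. The sketch below, assuming only NumPy, fits the metallicity parameterization used in the abstract, \(\Delta\mu_{o}=\gamma\times([\mathrm{O/H}]-8.50)+c\); the input arrays are synthetic placeholders (drawn to mimic the quoted mean offset and scatter), not the measured values of Table 1.

```python
# Sketch of the differential zero-point test, assuming NumPy only; the data
# below are synthetic placeholders, not the Table 1 measurements.
import numpy as np

rng = np.random.default_rng(1)
oh = rng.uniform(7.8, 9.0, 21)        # HII-region oxygen abundances [O/H]
dmu = rng.normal(-0.026, 0.068, 21)   # Delta mu_o (Cepheid - TRGB), in mag

# Least-squares fit of dmu = gamma * ([O/H] - 8.50) + c; np.polyfit returns
# the slope first, then the intercept.
gamma, c = np.polyfit(oh - 8.50, dmu, 1)
resid = dmu - (gamma * (oh - 8.50) + c)
print(f"gamma = {gamma:+.3f} mag/dex, zero point = {c:+.3f} mag, "
      f"scatter = {resid.std(ddof=2):.3f} mag")
```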
Correlations of this sort were sought out in the first paper on this topic by Lee, Freedman & Madore (1993); and, as alluded to above, no significant correlations with any of these properties were found at that time. It is not our intention here to review all of the varied observational and theoretical tests that have been proposed and/or undertaken in attempting to find and calibrate a metallicity sensitivity of the Cepheid PL relation, but it needs to be noted that, while there have been many attempts and many calibrations, no definite consensus has been reached on the magnitude, or even the sign, of the effect. For instance, contrary to most of the negative empirical values being advocated, Romaniello et al. (2006) and Bono et al. (2008) published positive correlations. Coefficients of the simply parameterized trend \(\Delta\)(mag)[Cepheid] \(=\gamma\times\) [O/H] have ranged from declarations of a null detection (i.e., \(\gamma\) = 0.0 mag/dex) by Udalski et al. 2001 and again by Freedman & Madore 2011, and then similarly asserted more recently by Wielgorski et al. (2017); to moderate sensitivities of \(\gamma\sim-0.2\) mag/dex (Freedman & Madore 1990; Kennicutt et al. 1998; Sakai et al. 2004; Gieren et al. 2018; Breuval et al. 2021; Owens et al. 2022); and on up to significantly larger values, in the range of \(\gamma\) = -0.4 to -0.5 mag/dex (Efstathiou 2014; Clementini et al. 2021). Settling on a value for the metallicity sensitivity of the Cepheid PL relation has wider implications beyond the physics of the Cepheid PL relation itself. As calibrated by Cepheids, the derived value of the Hubble constant is dependent upon the degree of metallicity sensitivity of the Cepheid PL relation. As shown by Freedman et al. (2001) more than 20 years ago (and demonstrated most recently by Efstathiou 2014 and 2020), in going from no metallicity corrections at all to applying a steep dependence, such as the Efstathiou value of \(\gamma\) = -0.53 mag/dex, the derived value of the Hubble constant can drop from 73 km/s/Mpc to 67 km/s/Mpc. Compared to the "factor-of-two controversy" that raged at the end of the last century, this 10% difference might not seem so consequential, were it not for the recent (and totally independent) results on the Hubble constant inferred from modeling the cosmic microwave background radiation, obtained by the Planck satellite (Ade et al. 2016, Aghanim et al. 2020). The resulting "tension" between the very high-precision Planck value of \(H_{o}\) = 67.4 \(\pm\) 0.5 km/s/Mpc (Aghanim et al. 2020) and the more recent locally-determined values of \(H_{o}\sim\) 73 km/s/Mpc (using Cepheids: Freedman et al. 2012, Riess et al. 2022) and \(H_{o}\sim\) 70 km/s/Mpc (using the TRGB method: Freedman et al. 2019, 2020, 2021) now has implications not only for cosmology, but also for fundamental physics (see Freedman 2017, Verde et al. 2019 and references therein, for recent commentaries). In this paper, we revisit the question of the metallicity sensitivity of the Leavitt Law, based upon updated Cepheid and TRGB distances. Our goal is simply to incorporate the considerable amount of new data, both for Cepheids and for the TRGB, that has accumulated since the study of Lee et al. (1993) 30 years ago.

## 2 The TRGB vs Cepheid Distance Test Revisited

A search of the recent literature reveals that 28 nearby galaxies have had both TRGB and Cepheid distances measured to them. This is nearly twice as many galaxies as used, for example, by Kennicutt et al. (1998) or Sakai et al.
(2004), where this differential test was most recently undertaken 20 years ago. Moreover, many of those originally considered galaxies now have higher-precision data taken for them, both in terms of revised Cepheid distances, and also in much improved and homogeneously reduced TRGB distances (e.g., Tully et al. 2013, 2015) with a consistently adopted zero point of \(M_{I}\) = -4.05 mag (Freedman et al. 2019, 2020). The Cepheid distances used here are derived uniformly from W = V - R(V-I) (Madore 1982), a reddening-free version of the period-luminosity relation, the Wesenheit function. This function is expanded upon in Section 3, and a full description of this form of the PL relation and its history is given in the Appendix. These galaxies are listed in Table 1, where we give the host galaxy name, the TRGB distance modulus and its reference, followed by the Cepheid distance modulus and its reference. The last two columns contain the mean metallicity for the Cepheids in that galaxy and its primary reference. When multiple values were reported for any given method, preferential consideration was given to determinations that had the highest quoted internal precision, those that were most recently published, and/or those that were derived from homogeneous compilations. In many cases the adopted value is also (coincidentally) close to the median of the totality of values published to date.

Figure 1: The decrease over time of the apparent correlation of the Cepheid PL relation zero point with increasing metallicity. The top panel shows the data (in blue) replotted from Sakai et al. (2004), where an apparently strong negative correlation was reported. The middle plot recapitulates the Sakai plot, but only updating the TRGB moduli to their modern values. The bottom plot shows the results of updating both the Cepheid and TRGB distances. The blue line represents the fit to the blue (nearby-sample) data points; the red line is the fit for the total sample, including the most distant galaxies (red data points). The right panels show the marginalized cumulative probability density distribution (in blue) for the nearest-galaxy sample in the left panels. The shallower (black) dotted curves are the individually differenced moduli shown as unit-area Gaussians. Galaxies with distance moduli exceeding 30.5 mag (as partitioned off in Figure 3) are highlighted as larger red dots, showing once again that the most distant galaxies are preferentially also among the highest-metallicity galaxies in this sample.

Figure 2: Same as Figure 1 but with galaxies individually identified.

Figure 3: Differences between the Cepheid and TRGB distance moduli as a function of distance. The abrupt increase in scatter is marked by the broken vertical line at \(\mu_{o}\) = 30.5 mag. At nearer distances the scatter is extremely small (at the level of \(\pm\)0.068 mag), while beyond 12.5 Mpc the scatter is found to be at a level that is nearly a factor of two and a half times larger (\(\pm\)0.152 mag).
The average errors in the differences in distance moduli for the two (near and far) samples are 0.100 and 0.103 mag, respectively, which, when compared to the scatter of the data points around the mean (0.068 and 0.152 mag, respectively), suggests that the errors on the nearby galaxies' moduli are on average overestimated by about 47% (0.032 mag), and that the errors on the distant sample are underestimated by about 48% (0.049 mag). Solid horizontal lines show the mean offset from zero. Dashed horizontal lines are two-sigma bounds on the scatter in each of the two samples. Note that the more-distant sample of galaxies has both larger scatter in the sample, and systematically lower Cepheid distance moduli than their corresponding TRGB distance moduli.

Figures 1 and 2 contain our results in graphical form: (1) The upper panel reproduces the plot of differential distance moduli (in the sense Cepheid minus TRGB) as a function of host-galaxy HII-region abundances, as originally provided in Sakai et al. (2004). The only difference here is that, in these plots, we identify the individual galaxies so that one can more easily track the changes that occur in the plots following below it. The solid black line is the originally published fit to the data, where a metallicity dependence with a slope of \(\gamma=-0.24\) \(\pm\) 0.05 mag/dex was reported. The scatter around the fit is found to be \(\pm\)0.12 mag. (2) The middle panel shows the incremental effect, on the scatter and on the trend with metallicity, caused by updating to the most recently published TRGB distances for the same sample of galaxies discussed by Sakai et al. (2004). The result of this first step is already quite dramatic: in this updated version the slope of the relation has decreased by about a factor of six (from -0.24 to -0.04 mag/dex), while the scatter is unchanged, at \(\pm\)0.12 mag around the regression. (3) The lower panel includes two additional changes: (a) an updating of the TRGB and Cepheid distances, and (b) the augmentation of the sample of galaxies entering the test, going from 17 to 29 systems, which are also listed in Table 1. **In this final panel of Figure 1, there is now no evidence for any significant dependence of the Cepheid PL relation on metallicity.** The relation with metallicity is \(\Delta\mu_{o}(Cepheid-TRGB)=-0.046\ (\pm 0.019)\times([O/H]-8.50)-0.010\ (\pm 0.011)\) for the entire sample of 29 galaxies, and is statistically flat, \(\Delta\mu_{o}(Cepheid-TRGB)=-0.022\ (\pm 0.015)\times([O/H]-8.50)-0.003\ (\pm 0.007)\), for the subset of the 21 nearest galaxies having distance moduli less than 30.5 mag. Two of the lowest-metallicity galaxies (Sextans A and B) are also two of the systems that have the largest uncertainties on their distances. The plotted solutions are weighted fits, so the impact of these two galaxies on the final solution is appropriately down-weighted and taken into account. The scatter in this final plot is \(\pm\)0.068 mag for the nearest sample, and is discussed in more detail in Figure 3. There are several factors driving the change from the earlier plots. First, as can be seen in Figure 1, most of the suggestion of a gradient in Sakai et al. (2004) resulted from only two data points falling at the highest metallicities, M101 and NGC 3351. The trend was considerably weaker for galaxies with metallicities 12 + log(O/H) \(<\) 8.5 dex.
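For concreteness, the weighted regression behind these numbers can be sketched in a few lines of Python. This is a minimal illustration, not the code used for the paper: the input arrays are placeholders, and combining the Cepheid and TRGB modulus errors in simple quadrature is our assumption.

```python
import numpy as np

# Placeholder inputs: Cepheid and TRGB moduli with errors, and
# HII-region abundances 12 + log(O/H); not the adopted sample values.
mu_cep  = np.array([29.05, 30.01, 26.37, 24.62])
e_cep   = np.array([0.14, 0.08, 0.09, 0.09])
mu_trgb = np.array([29.07, 29.92, 26.43, 24.71])
e_trgb  = np.array([0.04, 0.05, 0.07, 0.04])
oh      = np.array([9.20, 9.24, 8.35, 8.82])

dmu = mu_cep - mu_trgb                   # Delta mu_o (Cepheid - TRGB), mag
w = 1.0 / np.hypot(e_cep, e_trgb) ** 2   # inverse-variance weights

# Weighted least squares for dmu = gamma * ([O/H] - 8.50) + c
A = np.column_stack([oh - 8.50, np.ones_like(oh)])
cov = np.linalg.inv(A.T @ (w[:, None] * A))
gamma, c = cov @ (A.T @ (w * dmu))
gamma_err, c_err = np.sqrt(np.diag(cov))
print(f"gamma = {gamma:+.3f} +/- {gamma_err:.3f} mag/dex, "
      f"zero point = {c:+.3f} +/- {c_err:.3f} mag")
```

Fed with the Table 1 moduli and abundances, a fit of this form should recover slopes and zero points of the kind quoted above.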
In the intervening time, improvements to both the TRGB and Cepheid distances have served to reduce the uncertainties in each method, and data for many more galaxies with larger abundances have been obtained.

## 3 Broader Implications and Extensions

If, as concluded above, the specific combination of the V and I bands in the form of W is insensitive to metallicity, it then follows that V and I are well-suited for estimating unbiased values of extinction and color excess (assuming, of course, that the interstellar extinction law, extrapolated beyond the VI wavelength range to the red, is universal). Macri et al. (2001) undertook the first direct test of this conjecture when the near-infrared camera NICMOS was installed on HST. They obtained H-band imaging of 70 Cepheids in 12 nearby galaxies chosen from the HST Key Project, asking the simple question: _For each of the galaxies, does an extrapolation of the standard Milky Way extinction curve, previously fit only to the VI Cepheid data, also fit the H-band data for those same galaxies and their Cepheids?_ The answer was that, to within the quoted uncertainties, the adopted interstellar extinction curve is indeed universal.2

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Galaxy & \(\mu\)(TRGB) & err. & Ref & \(\mu\)(Cepheid) & err. & Ref & Z & Ref \\ \hline LMC & 18.48 & 0.040 & 0 & 18.48 & 0.040 & 0 & 8.50 & 4 \\ SMC & 18.99 & 0.08 & 29 & 19.04 & 0.08 & 47 & 7.98 & 4 \\ NGC 0224 = M31 & 24.46 & 0.07 & 12 & 24.47 & 0.05 & 11 & 8.98 & 4 \\ NGC 0300 & 26.43 & 0.07 & 3 & 26.37 & 0.09 & 14,15 & 8.35 & 4 \\ NGC 0598 = M33 & 24.71 & 0.04 & 7 & 24.62 & 0.09 & 9 & 8.82 & 4 \\ NGC 1309 & 32.50 & 0.07 & 18 & 32.50 & 0.04 & 39 & 8.80 & 3 \\ NGC 1365 & 31.36 & 0.05 & 18 & 31.18 & 0.05 & 39 & 8.96 & 3 \\ NGC 1448 & 31.32 & 0.06 & 18 & 31.42 & 0.02 & 39 & 8.80 & 3 \\ NGC 3021 & 32.18 & 0.03 & 35 & 32.30 & 0.07 & 39 & 9.00 & 3 \\ NGC 3031 = M81 & 27.79 & 0.07 & 3,17 & 27.79 & 0.09 & 3,16 & 8.75 & 4 \\ NGC 3109 & 25.56 & 0.05 & 7 & 25.54 & 0.03 & 46 & 8.06 & 4 \\ NGC 3351 & 29.92 & 0.05 & 3 & 30.01 & 0.08 & 21 & 9.24 & 4 \\ NGC 3370 & 32.27 & 0.05 & 18 & 32.08 & 0.04 & 39 & 8.80 & 3 \\ NGC 3621 & 29.26 & 0.12 & 7 & 29.14 & 0.06 & 3 & 8.75 & 4 \\ NGC 3627 = M66 & 30.18 & 0.09 & 1 & 30.15 & 0.08 & 2 & 9.25 & 21 \\ NGC 4258 & 29.37 & 0.02 & 1 & 29.27 & 0.06 & 39 & 9.06 & 3 \\ NGC 4424 & 31.00 & 0.06 & 18 & 30.62 & 0.28 & 39 & 9.10 & 3 \\ NGC 4536 & 30.96 & 0.05 & 18 & 30.80 & 0.06 & 39 & 8.75 & 3 \\ NGC 5236 = M83 & 28.29 & 0.11 & 3 & 28.32 & 0.13 & 5 & 8.40 & 6 \\ NGC 5253 & 27.75 & 0.11 & 3,19 & 27.75 & 0.24 & 3 & 8.15 & 4 \\ NGC 5457 = M101 & 29.07 & 0.04 & 41 & 29.05 & 0.14 & 20 & 9.20 & 40 \\ NGC 5584 & 31.76 & 0.05 & 1 & 31.84 & 0.03 & 39 & 8.80 & 3 \\ NGC 6822 & 23.36 & 0.07 & 3 & 23.38 & 0.05 & 25 & 8.50 & 4 \\ IC 1613 & 24.31 & 0.05 & 28 & 24.29 & 0.03 & 26 & 7.86 & 4 \\ IC 4182 & 28.17 & 0.05 & 3,7 & 28.22 & 0.06 & 45 & 8.40 & 4 \\ Sextans A & 25.69 & 0.06 & 44 & 25.71 & 0.20 & 27 & 7.49 & 4 \\ Sextans B & 25.77 & 0.03 & 10 & 25.69 & 0.27 & 28 & 7.56 & 4 \\ WLM & 24.94 & 0.03 & 42 & 24.92 & 0.04 & 41 & 7.74 & 4 \\ \hline \end{tabular} References to Table 1: 0: Fiducial: Pietrzynski et al. (2019); 1: Jang & Lee (2017); 2: Gibson et al. (2000); 3: Tully et al. (2013); 4: Sakai et al. (2004); 5: Saha et al. (2006); 6: Bresolin et al. (2009); 7: Rizzi et al. (2007); 8: Dolphin & Kennicutt (2002); 9: Madore & Freedman (1991); 10: Jacobs et al. (2009); 11: Kochanek et al. (1997); 12: Conn et al. (2012); 13: Gieren et al. (2013); 14: Bono et al. (2010); 15: Gieren et al. (2005); 16: Kanbur et al.
(2003); 17: Radburn-Smith et al. (2011); 18: Freedman et al. (2019); 19: Tully et al. (2015); 20: Stetson et al. (1998); 21: Ferrarese et al. (2000); 22: Kennicutt et al. (1998); 23: Sabbi et al. (2018); 24: Beaton et al. (2019); 25: Rich et al. (2014); 26: Scowcroft et al. (2013); 27: Piotto et al. (1994); 28: Sakai et al. (1997); 29: Cioni et al. (2000); 30: Hoyt et al. (2019); 31: Gibson et al. (1999); 32: Hatt et al. (2018a); 33: Zaritsky et al. (1999); 34: Hatt et al. (2018b); 35: Jang et al. (2018); 36: Jang & Lee (2018); 37: Madore & Freedman (2020); 38: Riess et al. (2011); 39: Table 2, this paper; 40: Mager et al. (2013); 41: Beaton et al. (2019); 42: McQuinn et al. (2017); 43: Gieren et al. (2008); 44: Dolphin et al. (2003); 45: Freedman et al. (2001); 46: Pietrzynski et al. (2006); 47: Marconi et al. (2017). \end{table} Table 1: TRGB - Cepheid Metallicity-Sensitivity Calibration Sample

Interestingly, this conclusion implicitly validates using VI data (without the need for additional NIR imaging) to obtain reddening-corrected true distance moduli to extragalactic samples of Cepheids. Footnote 2: A few years later, this same conclusion was independently reached by Sakai et al. (2004), where they state: "_For some of the galaxies with ground-based distances, Cepheids are observed in more bands than V and I (for instance B and R, see Appendix A). Calculating a distance using multi-wavelength data sometimes leads to improvements over fits which only use V and I data, especially in the case of sparsely sampled PL relations. Distances using all available photometric bands are therefore listed in Table 3, Columns 5 and 6;_ **these agree identically to the distances tabulated in Columns 3 and 4, when only V and I data are available.**" (emphasis ours). We now ask: do the H-band observations of the Cepheids studied in the SHoES project (Riess et al. 2016) carry any sensitivity to the metallicity of those stars? SHoES was put together as a Cepheid-based Type Ia supernova calibration project, with the specific goal of addressing potential systematics in the Hubble constant, and of reducing the combined (systematic and statistical) uncertainties in its final error budget. They used Cepheids, pushing the calibration into the near-infrared, specifically introducing H-band observations and combining them with VI data. In introducing the SHoES sample into our analysis, we cannot take the new Cepheid distances to the supernova host galaxies without some slight modification. All of the Cepheid distances discussed so far are derived using the reddening-free period-luminosity relation, the W(V,VI) Wesenheit function. A full description of this form of the PL relation and its history is given in the Appendix. We emphasize that the SHoES team also used a Wesenheit function, but chose to form the reddening-free magnitude W(H,VI) by combining their H-band photometry with an appropriately scaled (V-I) color, whereas the Key Project used only W(V,VI). Accordingly, in this analysis we have calculated W(V,VI) PL relations for the SHoES galaxies and used these distances to compare with the TRGB distances available for the 9 of them with published TRGB distances so far. They are shown as red circles in the lower panel of Figures 1 and 2, and in the largest-distance portion of Figure 3. The multi-wavelength fits of the Macri et al. (2001) data to a Milky Way extinction curve are shown in Figures 4 and 5.
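The multi-wavelength extinction-curve fits shown in Figures 4 through 10 amount to a two-parameter least-squares problem: apparent distance moduli \(\mu_{\lambda}\) are fit by \(\mu_{\lambda}=\mu_{0}+E(B-V)\,R_{\lambda}\). A minimal Python sketch follows; the \(R_{\lambda}=A_{\lambda}/E(B-V)\) coefficients are approximate Cardelli-type values inserted for illustration only, and are not necessarily those adopted in the fits shown here.

```python
import numpy as np

# Approximate Galactic extinction coefficients A_lambda / E(B-V);
# illustrative Cardelli-law values, assumed rather than taken from the paper.
R_COEFF = {"V": 3.1, "I": 1.9, "H": 0.55}

def fit_extinction_curve(mu_app, bands=("V", "I", "H")):
    """Fit mu_lambda = mu0 + E(B-V) * R_lambda to apparent moduli."""
    r = np.array([R_COEFF[b] for b in bands])
    A = np.column_stack([np.ones_like(r), r])
    mu0, ebv = np.linalg.lstsq(A, np.asarray(mu_app, float), rcond=None)[0]
    return mu0, ebv

# Made-up apparent V, I, H distance moduli for a single galaxy:
mu0, ebv = fit_extinction_curve([31.56, 31.34, 31.10])
print(f"true modulus = {mu0:.2f} mag, E(B-V) = {ebv:.3f} mag")
```

The VI-only fits of the figures correspond to dropping the H entry, in which case the two parameters are determined exactly by the two bands.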
The SHoES data for Cepheids in the zero-point calibrating galaxy NGC 4258 are shown in Figure 6, where it is clear that the Cepheid distances derived from a variety of wavelength combinations have not yet convincingly converged, even for this relatively nearby, and very important, zero-point calibrating galaxy. Figures 7 through 10 plot the complete set of SHoES (2016) apparent distance moduli, fit here by a Milky Way extinction curve using all three (VIH) bands (the dashed black line), and then only the VI data (the solid blue line). The Wesenheit W(H,VI) true distance modulus, as published in Riess et al. (2016), is shown by the red arrow to the far left in each plot. Values of the resulting W(V,VI) and W(H,VI) distance moduli are given in Table 2. A comparison of the VI- and VIH-based distance moduli is shown in Figure 11. There is no indication of any trend with distance modulus. The observed scatter is \(\pm\)0.16 mag, which, if it is equally shared between the two methods, would suggest that the two Cepheid distance estimates are each good to \(\pm\)0.11 mag, or \(\pm\)6% in distance. The addition of the SHoES galaxies to Figures 1 and 2 has no obvious effect on the main conclusion of this paper, that there is little or no metallicity effect on Cepheid distances based on W(V,VI) PL relations. The only deviant point belongs to the galaxy NGC 4424, which has very low statistical weight overall, given that only 9 Cepheids were found in it in the SHoES survey. The main takeaway from Figure 2, however, is that the more distant SHoES galaxies have significantly larger scatter in their comparison with the TRGB distances (\(\pm\)0.152 mag for the distant sample, as opposed to \(\pm 0.066\) mag for the more nearby galaxies), and that there is an indication of an offset of about a tenth of a magnitude in the mean of the two samples.

Figure 4: Cepheid extinction-curve plots for Key Project galaxies (Macri et al. 2001) following the installation of NICMOS on HST. Solid blue lines are fits to the VI data points as published earlier in Freedman et al. (2001). Dashed black lines show the three-point fit to the VIH data, now including the NIR NICMOS observations. The horizontal dash-dot lines show the distance modulus published in Freedman et al. (2001). However, for IC 1613, in particular, the large open circles are NICMOS data, while additional data points are taken from Scowcroft et al. (2017). The solid horizontal line is the multi-wavelength true distance modulus fit, as given in Scowcroft et al.

Figure 5: Same as Figure 4.

Figure 6: Extinction-curve fits to VI (thick black line), VH (thin black line) and VIH (broken black line) data for Cepheids in the maser galaxy NGC 4258. The corresponding Cepheid true distance moduli, three in number, are given in blue. The TRGB true distance modulus is given in red. And the geometric/maser distance modulus to NGC 4258 is given in black. The total spread in true distance moduli is more than 0.2 mag.

Figure 7: Extinction-curve fits to VI and VIH data for SHoES galaxies hosting Type Ia SNe in Riess et al. (2016). The red arrow at the Y axis indicates the SHoES distance modulus. A fit to the VI data alone is shown by a solid blue line, while a fit to the combined VIH data is shown by a dashed black line. The corresponding true moduli and mean reddenings are explicitly given in the lower right corner of each plot, and given again in Table 2.

Figure 8: Same as for Figure 7.

Figure 9: Same as for Figure 7.
Figure 10: Same as for Figure 7.

Figure 11: Comparison of the W(VIH) and W(VI) distance moduli of SHoES galaxies. The W(VIH) moduli are plotted in the vertical direction; the W(VI) moduli are tracked along the horizontal direction. The mean offset between the two determinations is shown by the dashed blue line of unit slope. Two-sigma (solid, blue) lines, encompassing the scatter, flank that line and are separated by \(\pm 0.32\) mag. A thin black line shows the one-to-one correspondence had there been no zero-point offset.

## 4 Summary Discussion

We have revisited the differential (Cepheid comparison with the TRGB) test of the dependence of the Cepheid Leavitt law on metallicity. We find that for an updated sample of 29 nearby galaxies the TRGB and Cepheid distance scales are in good systematic agreement, with a mean offset of -0.034 \(\pm\) 0.007 mag (error on the mean) at 12 + log(O/H) = 8.50 dex. We note that this agreement lends added independent support to the value of the TRGB zero point of \(M_{I}\) = -4.05 mag recently derived by Freedman et al. (2019, 2020). The overall scatter in the differences between the paired measurements is \(\pm\)0.099 mag (giving a sigma on the mean of \(\pm\)0.022 mag). If these errors are equally shared between the two methods, then the errors on the separate Cepheid and TRGB distance moduli are themselves individually good to \(\pm\)0.070 mag (i.e., \(\pm\)3.5% precision in distance). However, in Figure 3 we note an abrupt and significant increase in the scatter between the two distance indicators, that being \(\pm\)0.068 mag up to a distance modulus of about 30.5 mag (12.5 Mpc) and rising to \(\pm\)0.152 mag thereafter. This effect was also seen in Figure 5 of Freedman et al. (2019). We do not definitively know the reason for this increased scatter; however, one possibility is crowding and confusion problems in the photometry of the Cepheids in these more distant systems, especially for the high-surface-brightness disk systems (see also the discussion in Wielgorski et al. 2017). The mean offset in the zero points of the most nearby distance-selected sample is close to zero, being \(\Delta\mu_{o}\)(Cepheid - TRGB) = -0.026 \(\pm\) 0.015 mag; however, for the more distant sample, there is a larger offset between the two distance scales, amounting to -0.073 \(\pm\) 0.057 mag.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Galaxy & \(\mu\)(VIH) & \(\sigma\)(mag) & \(\mu\)(VI) & \(\sigma\)(mag) \\ \hline NGC 1015 & 32.455 & \(\pm\)0.017 & 32.499 & \(\pm\)0.061 \\ NGC 1309 & 32.422 & 0.028 & 32.498 & 0.035 \\ NGC 1365 & 31.287 & 0.035 & 31.179 & 0.045 \\ NGC 1448 & 31.191 & 0.048 & 31.420 & 0.041 \\ NGC 2442 & 31.420 & 0.040 & 31.279 & 0.020 \\ NGC 3021 & 32.402 & 0.012 & 32.297 & 0.071 \\ NGC 3370 & 32.015 & 0.017 & 32.081 & 0.035 \\ NGC 3447 & 31.910 & 0.040 & 32.046 & 0.025 \\ NGC 3972 & 31.518 & 0.055 & 31.770 & 0.041 \\ NGC 3982 & 31.646 & 0.021 & 31.587 & 0.041 \\ NGC 4038 & 31.312 & 0.023 & 31.397 & 0.038 \\ NGC 4258 & 29.460 & 0.069 & 29.269 & 0.060 \\ NGC 4424 & 30.693 & 0.008 & 30.623 & 0.280 \\ NGC 4536 & 30.870 & 0.009 & 30.803 & 0.055 \\ NGC 4639 & 31.360 & 0.053 & 31.654 & 0.075 \\ NGC 5457 = M101 & 28.987 & 0.015 & 28.890 & 0.015 \\ NGC 5584 & 31.727 & 0.030 & 31.838 & 0.025 \\ NGC 5917 & 32.246 & 0.010 & 32.118 & 0.055 \\ NGC 7250 & 31.320 & 0.026 & 31.145 & 0.050 \\ UGC 9391 & 32.868 & 0.003 & 32.784 & 0.075 \\ \hline \end{tabular} \end{table} Table 2: Cepheid VIH and VI True Distance Moduli
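The summary statistics quoted above follow from elementary operations on the paired moduli; a short sketch, assuming only the 30.5 mag partition of Figure 3 and unweighted means:

```python
import numpy as np

def offset_stats(mu_cep, mu_trgb, split=30.5):
    """Mean offset, rms scatter, and error on the mean of
    Delta mu = mu(Cepheid) - mu(TRGB), split at mu(TRGB) = 30.5 mag."""
    mu_cep, mu_trgb = np.asarray(mu_cep), np.asarray(mu_trgb)
    dmu = mu_cep - mu_trgb
    stats = {}
    for name, sel in (("near", mu_trgb < split), ("far", mu_trgb >= split)):
        d = dmu[sel]
        stats[name] = {"mean": d.mean(),
                       "scatter": d.std(ddof=1),
                       "sigma_mean": d.std(ddof=1) / np.sqrt(d.size)}
    return stats
```

With the Table 1 moduli as input, the near-sample entries should reproduce values close to the \(\pm\)0.068 mag scatter and -0.026 mag mean offset quoted above.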
Based on a doubling of the sample of galaxies and updated distances used in this study, we find little evidence for any residual correlation of these differential distance moduli with Cepheid-related metallicities. ## 5 Acknowledgements We thank the _University of Chicago_ and the _Observatories of the Carnegie Institution for Science_ for their past and on-going support of our long-term research into the calibration and determination of the expansion rate of the Universe. Financial support for this work was provided in part by NASA through grant number HST-GO-13691.003-A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
2310.20621
Deepfake detection by exploiting surface anomalies: the SurFake approach
The ever-increasing use of synthetically generated content in different sectors of our everyday life, above all in media information, poses a strong need for deepfake detection tools in order to avoid the proliferation of altered messages. The process to identify manipulated content, in particular images and videos, is basically performed by looking for the presence of some inconsistencies and/or anomalies specifically due to the fake generation process. Different techniques exist in the scientific literature that exploit diverse ad-hoc features in order to highlight possible modifications. In this paper, we propose to investigate how deepfake creation can impact the characteristics that the whole scene had at the time of the acquisition. In particular, when an image (video) is captured, the overall geometry of the scene (e.g. surfaces) and the acquisition process (e.g. illumination) determine a univocal environment that is directly represented by the image pixel values; all these intrinsic relations are possibly changed by the deepfake generation process. By resorting to the analysis of the characteristics of the surfaces depicted in the image, it is possible to obtain a descriptor usable to train a CNN for deepfake detection: we refer to such an approach as SurFake. Experimental results carried out on the FF++ dataset for different kinds of deepfake forgeries and diverse deep learning models confirm that such a feature can be adopted to discriminate between pristine and altered images; furthermore, experiments show that it can also be combined with visual data to provide a certain improvement in terms of detection accuracy.
Andrea Ciamarra, Roberto Caldelli, Federico Becattini, Lorenzo Seidenari, Alberto Del Bimbo
2023-10-31T16:54:14Z
http://arxiv.org/abs/2310.20621v2
# Deepfake detection by exploiting surface anomalies: the SurFake approach

###### Abstract

The ever-increasing use of synthetically generated content in different sectors of our everyday life, above all in media information, poses a strong need for deepfake detection tools in order to avoid the proliferation of altered messages. The process to identify manipulated content, in particular images and videos, is basically performed by looking for the presence of some inconsistencies and/or anomalies specifically due to the fake generation process. Different techniques exist in the scientific literature that exploit diverse ad-hoc features in order to highlight possible modifications. In this paper, we propose to investigate how deepfake creation can impact the characteristics that the whole scene had at the time of the acquisition. In particular, when an image (video) is captured, the overall geometry of the scene (e.g. surfaces) and the acquisition process (e.g. illumination) determine a univocal environment that is directly represented by the image pixel values; all these intrinsic relations are possibly changed by the deepfake generation process. By resorting to the analysis of the characteristics of the surfaces depicted in the image, it is possible to obtain a descriptor usable to train a CNN for deepfake detection: we refer to such an approach as SurFake. Experimental results carried out on the FF++ dataset for different kinds of deepfake forgeries and diverse deep learning models confirm that such a feature can be adopted to discriminate between pristine and altered images; furthermore, experiments show that it can also be combined with visual data to provide a certain improvement in terms of detection accuracy.

## 1 Introduction

With the increase of false information spreading all over the media, trust in digital content is nowadays potentially compromised by the ease of creating fabricated facts. Information can further undergo multiple modifications before reaching a potential user. The latest advancements of AI, especially for image and video manipulation, increasingly foster the possibility of easily changing the meaning of the information being conveyed, since media content is likely to be exposed to variations of a different nature. Among the possible media manipulation approaches, Deepfakes are a very recent class of methods that can generate synthetic human images. Despite having been used with astonishing results for movie production in Hollywood, Deepfakes can also be easily used for malicious purposes, such as crafting highly realistic fake propaganda. Deepfake creation typically involves the use of deep learning to recreate some person imagery. Specifically, deep networks learn how to transfer or reenact facial expressions, as well as how to generate a proper imitation of a person's voice and inflection. Nowadays, several techniques can create real-looking content easily by using deep generative models, such as GAN-style architectures [3, 9, 11] and diffusion probabilistic models (DPMs) [15]. Deepfakes can be applied to different types of media, from images to videos; fake audio can also be created, e.g. through text-to-speech [17], by typing a new text, or by voice swapping [28]. Deepfakes for images and videos are mainly created by tampering with parts of the scene, for instance the faces of subjects present in the media. Various techniques have been designed to alter faces; some of them regard video applications, e.g. lip syncing [29, 35], where the audio is employed to reconstruct the mouth movements over the video frames.

Figure 1: An example of surface anomalies found in fake images. From left to right: the RGB (Red-Green-Blue) face, our proposed GSD (Global Surface Descriptor) feature and the logarithm of the GSD, used here for the sake of visualization to highlight the artifacts introduced by the manipulation.
Other general-purpose face manipulation techniques concern the visual alteration of the content, by changing the expressions or moving the face from a source image to a target one. In this paper, we focus on detecting face manipulations in images. Face manipulations [44] can basically be grouped into two categories: reenactment and swapping. Facial reenactment manipulates certain facial attributes or reenacts faces with deep learning methods while keeping the identity unchanged. Face swapping [8], instead, aims at replacing the face of a person in the reference image with the facial shape and features of another target subject, realistically changing the identity. According to this, it is straightforward to understand that deepfake detection is an urgent task, in order to prevent disinformation and avoid the diffusion of media showing people saying or doing things they never said or did. The fundamental idea behind deepfake detection is that a neural network, during the process of generating fake content, should leave a sort of trace that is embedded as a fingerprint in the manipulated image (video). Most of the existing approaches for deepfake detection try to recover this hidden pattern to reveal false content, and to do that they generally resort to the analysis of frames at the pixel level (RGB raw data). However, anomalies are not limited to RGB-level inconsistencies in the visual space; in fact, many fine details can be affected by the forgery without compromising the visual perception of the image itself. More precisely, the camera acquisition process is itself a signature that is incorporated into the image. We argue that such information about the acquisition moment could be altered by deepfakes, and this could be exploited for the detection task. Such information relates to an ensemble of specific characteristics of the scene, for instance the external illumination source, including lighting, shadows and reflections, which all impact the surfaces present in the environment and the objects at the time of image acquisition. Other relevant details regard the face pose, but even the camera parameters may be somehow embedded into the image and strictly related to the image capturing, e.g. lens distortions and intrinsic noise. In the literature, some works [23, 34] tackled the deepfake detection problem from different angles, leveraging geometrical aspects of facial landmarks, in which inconsistent or highly synthetic patterns are found in fakes generated by different forgery techniques. The proposed approach leverages the analysis of the features of the framed scene that are determined by the overall geometry of the scene itself (e.g. surfaces) and by the original image acquisition process (e.g. illumination, camera orientation). Differently from other research works that focus on identifying fakes by detecting specific patterns in terms of depth maps or face motion, we specifically deal with the surfaces present within the acquired scene by exploiting the modifications induced on the surface normals by the deepfake alteration.
A general visual example of this is provided in Figure 1, where it can be appreciated how a modification affecting just the mouth of the woman can also induce slight global variations in the other parts of the global surface descriptor (GSD) image. In summary, the main contributions are listed hereafter: 1. we propose to utilize surface geometry features of the acquired scene to highlight inconsistent patterns revealing fake images; 2. we study and evaluate to what extent such features can constitute by themselves an effective means to discriminate between pristine and fake content; 3. we conduct experiments on different kinds of forgeries and network architectures to verify that the newly proposed surface-based feature can be advantageously combined with RGB frames to obtain an improvement in terms of accuracy performance. The paper is organized as follows: after this introductory section, Section 2 describes the main related works, while Section 3 presents the proposed method. Section 4 is dedicated to the experimental results, and Section 5 draws conclusions and outlines possible future work.

## 2 Related Works

Deepfake detection is a recently raised problem of recognizing real versus tampered data, also in vision tasks, which is typically addressed as a binary classification problem. Nowadays, visual content is an informative medium that can be manipulated thanks to the usage of recent generative models [3, 11, 15, 9]. In particular, the possibility of manipulating human faces has attracted a lot of interest, both for entertainment and for malicious purposes. Several manipulation techniques can be used, either replacing faces to match that of another subject or simply altering facial expressions. Such manipulations are obtained, e.g., via CNNs [19], conditional GANs [27] or methods based on facial landmark alignment [8]. A plethora of implementations are also available [25]. Existing methods [26] for deepfake detection are designed to directly process entire videos or single frames, so as to discover whether faces have been manipulated. Several works exploit handcrafted features from artifacts and inconsistencies of the fake generation process. Xian et al. [41] proposed to preprocess images to remove low-level noise cues of GAN images, so that a forensic model is forced to learn more intrinsic features. This method achieves better generalization capability than previous deepfake methods [5, 42]. Amerini et al. [4] leveraged the optical flow in order to look at motion discrepancies, which are found across synthetically generated frames, and finally classify them as original or deceptive. Li et al. [21] utilized an advanced architecture named HRNet to detect the blending boundary of Deepfakes-manipulated images. Guo et al. [13] introduced a CNN model named SCnet to detect Glow-based facial forgery by learning high-level features through a hierarchical convolutional block. Zhao et al. [45] proposed to look at source-feature inconsistency within the forged image, with the hypothesis that a pristine image should contain the same source features across locations. Instead of learning GAN fingerprints on fakes [43] or visual self-inconsistencies via recorded photo metadata [16], Maiano et al. [24] exploited depth inconsistencies located inside tampered face images to detect manipulations. Other approaches, instead, utilize recurrent networks, e.g. RNNs or LSTMs, to look at visual artifacts within single video frames or temporal inconsistency across frames. Sabir et al.
[31] leveraged spatio-temporal features of video streams to detect deepfakes, as temporal coherence is not innate in deepfake generation. Guera et al. [12] extracted frame-level features with a CNN and fed them into an LSTM to create temporal sequence descriptors, which are finally used to classify videos as real or fake. Different methods have also been proposed to exploit visual or behavioral inconsistencies that are hardly removable when generating fakes. Based on the fact that a person in deepfakes blinks far less frequently than in untampered videos, Li et al. [22] proposed to crop the eye areas of video frames, extract features from them, and feed them into an LSTM to predict the probability of the eye being open or closed. Caldelli et al. [7] proposed to leverage the optical flow to account for facial motion, since artificial parts of the face contain some intrinsic dissimilarities with respect to natural expressions. Becattini et al. [6] found discrepancies in face alignment by looking at the head orientation, i.e. roll, pitch and yaw. Liang et al. [23] extract geometric facial features around the landmarks (e.g. spatial relationship, appearance, shape) to discriminate between pristine and manipulated regions. Such features are fed into a CNN-LSTM network; finally, a decoder learns to map low-level features to pixel-wise manipulation localizations, along with a softmax classifier to detect real and fake. Sun et al. [34] exposed abnormal facial movement patterns and time discontinuities by means of precise geometric features of facial landmarks, after a proper calibration step, which is performed through a Lucas-Kanade operation to track landmark points and merge the detection and the prediction using a Kalman filter. Differently, we do not make use of a segmentation face model [23], facial landmarks, or calibration steps [34]. We consider the Global Surface Descriptor (GSD) of the face, which is a feature describing the geometry of the face. However, in contrast to [6], which leverages head pose estimation with respect to the camera, we use a description of surface orientations at a pixel level by characterizing surface normals in a global up-right reference system that is inherently obtained along with the camera orientation.

## 3 The proposed method

In this section we introduce the proposed method, named _SurFake_, which exploits inconsistencies in the features of surfaces belonging to the acquired scene to perform deepfake detection. Our pipeline is organized into three steps, as depicted in Figure 2: first, we perform face detection on each video frame using dlib [18], obtaining a face crop with a fixed resolution of \(224\times 224\). Secondly, we run a pretrained _UpRightNet_ [40] on the face crops to extract features (see Section 3.1 for details) in order to get the _Global Surface Descriptor (GSD)_. Finally, the concatenation of the RGB face crop and the GSD feature, which constitutes a 6-channel tensor, is used to train a deep convolutional neural network to perform the binary classification task required to detect deepfakes.

Figure 2: Pipeline of SurFake for deepfake detection. After extracting the face crop from the image, we generate its Global Surface Descriptor (GSD) through UpRightNet [40] and scale the generated vector values in \([0,255]\) to obtain an RGB image. Then, we concatenate the face crop to the GSD feature along the channel dimension and pass the result as input to a classifier.
Finally, we train the classifier to distinguish whether the content is real or fake.

### The Global Surface Descriptor (GSD)

In order to extract subtle scene details, we approach deepfake detection from a new perspective. In contrast to looking for inconsistencies in the visual perception domain, we highlight anomalies by looking at the geometrical aspects related to the camera acquisition process. Such aspects permanently mark low-level peculiarities of the image without visually affecting the content. Therefore, tampered images may contain fine-grained distortions which do not stand out in the visible space. Typically, deepfakes depict a person in the foreground whose entire face (or some of its parts) has been tampered with. The idea behind our approach is to address forgery detection by focusing on the surface geometry of the face. To do that, we employ a deep learning model named _UpRightNet_ [40] to estimate the geometrical characteristics exhibited by the oval surface of the face, but also by other surfaces such as the chin, the nose, the eye sockets or any headgear. UpRightNet is a neural network that learns to estimate the 2DoF camera orientation, i.e. roll and pitch, from a single RGB image using intermediate representations, called surface frames, estimated in both the local camera and the global up-right coordinate systems. Suppose we predict the per-pixel surface normals of an indoor image in the camera perspective. Surface normals on the ground and other horizontal surfaces point in the same direction as the camera up vector; instead, walls and other vertical surfaces are perpendicular to the up vector. Thus, the camera orientation can be estimated by finding the vector which is most parallel to the ground normals and most perpendicular to the wall normals. UpRightNet solves the camera orientation problem by computing the rotation that best aligns the two estimated representations of the surface frames. A surface geometry frame \(\mathbf{F}(i)\) is estimated for each pixel \(i\) as a \(3\times 3\) matrix of mutually orthogonal unit vectors, namely normals, tangents and bitangents: \(\mathbf{F}(i)=[\mathbf{n}(i),\mathbf{t}(i),\mathbf{b}(i)]\) with \(\mathbf{n}(i),\mathbf{t}(i),\mathbf{b}(i)\in\mathbb{R}^{3}\). UpRightNet estimates two surface frames, one in the local camera coordinate system, \(\mathbf{F}^{c}(i)\), and one in the global up-right coordinate system, \(\mathbf{F}^{g}(i)\). In order to predict the roll and pitch of the camera, UpRightNet aligns the up-vector in the two representations, using the z-component of \(\mathbf{F}^{g}(i)\), i.e. \(\mathbf{f}^{g}_{z}(i)\in\mathbb{R}^{3}\). Such alignment is computed by learning weights to solve a constrained least-squares problem using ground-truth camera orientations. Since the feature \(\mathbf{f}^{g}_{z}(i)\) essentially provides a 3-channel global description of the surfaces belonging to the acquired scene, we consider it a good candidate to give evidence of a manipulation. Such a feature, denominated _Global Surface Descriptor (GSD)_, will be analysed in more depth in the next subsection.

### Analysis of Deep Geometric Representations

In this section we report on how UpRightNet features can be useful in the context of deepfake detection. In Figure 3 we depict an example of a pristine face crop along with the various face manipulation methods implemented in FaceForensics++ [30] (first row). We also show, in the second row, the corresponding GSD feature extracted by UpRightNet after passing the face image as input.
We also show, in the second Figure 3: Sample frames (first row) and the corresponding Global Surface Descriptors (second row) and \(\log(GSD)\) (third row) for each of the 5 different forgeries in FF++, from left to right: Real, DF, F2F, FSH, FS, NT [30]. The third row highlights how GSD is sensitive to forgeries. row the corresponding GSD feature extracted by UpRightNet after passing the face image in input. Finally, in the last row, we enhance the colorization of GSD in order to visually highlight how the GSD feature could be useful to detect anomalies by depicting the logarithm of the feature. In fact, upon a visual inspection, the GSD features may appear similar to each other for different faces, even appearing uniform regardless of the content. This is due to the fact that the faces are typically framed frontally in the global upright coordinate system. Similarly to global representations estimated for indoor images, the light green pixels stand for surfaces whose normals are perpendicular to the up-ward vector, e.g. the walls in a room, just like most of the face pixels too. There are also other parts of the image which are sometimes colored with shades of red and dark green, that encode surfaces parallel to the ground of a room. In our face domain, pixels with normals parallel to the ground are located at the top and at the bottom of the image. Therefore, the geometry estimated for face images is quite consistent with indoor environments. However, we would like to demonstrate how this geometry, i.e. our proposed GSD feature, is useful in our task. To this aim, we here highlight anomalies, by calculating the logarithm of each image pixel on the first of the 3 channels and we plot the results in the third row in Figure 3, for each face manipulations. First of all, we generally observe that some artifacts are added or exaggerated at the top of the image for all manipulations, which explains that any alteration produces unavoidable patterns (see the yellow color and the surrounding parts located at the top of the images in the third row, i.e. the forehead). In the face swapping approach the alterations are mainly reported around the outer facial landmarks for FS and FSH. Interestingly, the latter is also a more recent learning-based face replacing method than the well-known DF, where the face looks almost flat. As soon as facial expressions are tampered (i.e. by performing either F2F or NT) and small parts are faked, e.g. mouth or eyes, other relevant regions are also compromised, e.g. cheeks and hair. Although the GSD feature may look scarsely informative at once, we argue that such subtle details can come up more visible for a neural network trained to detect these synthetic patterns, in the sense of anomalies in the geometric estimation of the face. ## 4 Experimental results In this section, we will present the experimental results carried out in order to verify the effectiveness of the presented approach and, in particular, of the GSD feature. ### Implementation Details DatasetWe conduct experiments on FaceForensics++ (FF++) [30], one of the most widely used datasets for deepfake detection. It has collected 1000 original real videos from the internet and for each video 5 different forged versions are generated. 
## 4 Experimental results

In this section, we present the experimental results carried out in order to verify the effectiveness of the presented approach and, in particular, of the GSD feature.

### Implementation Details

**Dataset.** We conduct experiments on FaceForensics++ (FF++) [30], one of the most widely used datasets for deepfake detection. It collects 1000 original real videos from the internet, and for each video 5 different forged versions are generated. This dataset comprises two types of face manipulation techniques: face swapping, in which the face identity in the source image is replaced with the target one, and face reenactment, in which the facial expression in the source image is altered to match that of a target image, while maintaining the identity. FaceForensics++ includes three swapping methods, DeepFakes (DF) [1], FaceShifter (FSH) [20] and FaceSwap (FS) [2], and two reenactment methods, i.e. Face2Face (F2F) [38] and NeuralTextures (NT) [37]. In particular, two of these manipulations are computer-graphics-based approaches (Face2Face and FaceSwap), while the other three are learning-based approaches (DeepFakes, FaceShifter and NeuralTextures). Specifically, NeuralTextures operates by only altering the mouth region, i.e. the eye parts are unchanged. FaceShifter is a recent learning-based approach that generates high-fidelity, identity-preserving face swap results and is able, differently from the other two face swapping methods, to deal with facial occlusions using a double synthesis stage. Overall, there are 1000 forged videos for each face manipulation, for a total of 5000. The image resolutions vary across videos, from \(272\times 480\) up to \(1920\times 1080\). FaceForensics++ provides raw videos and two versions compressed using the H.264 codec, i.e. light compression (c23), which is nearly lossless, and heavy compression (c40). For all our experiments we choose the c23 videos, which are indicated by the dataset authors as HQ (high quality).

**Data preparation.** To reduce the computational burden and exclude redundancies, we sample one frame out of ten in each video sequence. For all our experiments we follow the 72:14:14 data split, for train, validation and test sets respectively, as indicated in [30], i.e. 720, 140 and 140 videos. Following [30], \(224\times 224\) face crops are obtained by first extracting a \(1.3\)-factor enlarged crop centered at the detected face in the input image and then scaling it to the fixed resolution. We consider face crops of this resolution as it is a standard image dimension that can be processed by most of the existing architectures. Since UpRightNet takes input images of \(288\times 384\) and generates outputs at the same resolution, we adapt face crops to this dimension in order to extract the Global Surface Descriptor (GSD) and then rescale it to \(224\times 224\), as done in [40]. Among the UpRightNet weights pretrained on InteriorNet and ScanNet (i.e. two indoor image datasets), we chose the former, since that model estimates a more coherent representation of the GSD than the latter, in accordance with the true geometry in the face domain in terms of normals perpendicular or parallel to the global up-right coordinate system (see Section 3.1).

**Architectures.** In order to classify images as real or fake, we train 4 different well-known and standard architectures: ResNet50 [14], MobileNetV2 [32], EfficientNet-B0 [36] and Xception [10], with weights pretrained on ImageNet. Since we are using networks pretrained on classical RGB images from ImageNet, we modify the first convolutional layer, which receives the input, to handle a different type of data, i.e. the 3-band GSD feature, and a different number of input channels, as our approach employs a total of 6: the first 3 channels for the RGB, concatenated with the 3 additional ones from the GSD.
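A minimal sketch of this 6-channel adaptation for ResNet50 follows (the other backbones are handled analogously). The weight names follow current torchvision, and the GSD-channel initialization anticipates the scheme described in the next paragraph; this is a sketch of one plausible implementation, not the authors' exact code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

def make_surfake_resnet50(num_classes=2):
    """ResNet50 adapted to the 6-channel RGB+GSD input (a sketch)."""
    model = resnet50(weights="IMAGENET1K_V1")
    old = model.conv1                       # Conv2d(3, 64, kernel_size=7, ...)
    new = nn.Conv2d(6, old.out_channels, kernel_size=old.kernel_size,
                    stride=old.stride, padding=old.padding, bias=False)
    with torch.no_grad():
        new.weight[:, :3] = old.weight      # keep the pretrained RGB filters
        # Initialize each GSD channel with the mean of the RGB filters,
        # the scheme (following [24]) described in the next paragraph.
        new.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)
    model.conv1 = new
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Usage: x = torch.cat([rgb, gsd], dim=1)   # (B, 6, 224, 224)
#        logits = make_surfake_resnet50()(x)
```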
In addition, to make the training more stable and to allow the model to converge faster, we adopt a proper weight initialization following [24]: we calculate the average of the three original input channels from the pretrained model and use it to initialize each of the GSD channels. We chose this initialization for all our experiments. Because each architecture is pretrained on ImageNet, we scale values in \([0,1]\) and then normalize with mean \([0.485,0.456,0.406]\) and standard deviation \([0.229,0.224,0.225]\) for ResNet50, MobileNetV2 and EfficientNet-B0. For Xception, we use the Pytorch implementation and the pretrained ImageNet weights from [33]. Since Xception accepts inputs at \(299\times 299\), we upscale our patches to fit that resolution, scale values in \([0,1]\) and normalize with mean and standard deviation both set to \([0.5,0.5,0.5]\). Note that we apply scaling and normalization to the input values for RGB and GSD separately.

**Training setting.** We implement SurFake in Pytorch. We model deepfake detection as a binary classification problem and train each classification network on an NVIDIA TITAN GTX. In particular, we use a standard cross-entropy loss with two classes, real and fake, for 30 epochs with batch size 32. We utilize SGD as optimizer with momentum \(0.9\), weight decay \(0.0001\) and learning rate \(0.001\).

### Analysis of the proposed GSD feature performance

In this section, experimental results evaluating the effectiveness of the proposed GSD feature are presented. To better understand its capacity to provide distinctiveness between real and deepfake images, we consider the activations obtained at the final layer of a classifier trained on the GSD, just before the output decision step. Therefore, we train MobileNetV2 to detect each face manipulation by using only our GSD feature as input, without RGB. MobileNetV2 contains an initial full convolution layer with 32 filters, followed by 19 residual bottleneck layers, and ends with a linear layer producing a 1280-dimensional feature. We then plot these activations by resorting to T-SNE [39].

\begin{table} \begin{tabular}{|c||c|c|c|c|c||c|} \hline & \multicolumn{5}{c||}{**FF++ forgeries**} & \\ \hline **Architectures** & DF & F2F & FSH & FS & NT & Avg \\ \hline \hline ResNet50 & 0.766 & 0.725 & 0.735 & 0.690 & 0.674 & 0.718 \\ MobileNetV2 & 0.800 & 0.773 & 0.756 & 0.764 & 0.713 & 0.761 \\ EfficientNet-B0 & 0.802 & 0.773 & 0.761 & 0.754 & 0.707 & 0.759 \\ Xception & 0.772 & 0.730 & 0.736 & 0.692 & 0.698 & 0.726 \\ \hline \end{tabular} \end{table} Table 1: Performance in terms of accuracy for the GSD feature on the test set with respect to the different network architectures.

Figure 4: T-SNE [39] plots of the GSD feature activations for real and fake samples of the test set for each of the different forgeries (MobileNetV2 architecture). Only a reduced number of samples is plotted for the sake of visibility.

Figure 5: ROC curve of GSD features for real and fake using MobileNetV2 as classifier. We also report the Area Under Curve (AUC) for each forgery.

As can be seen in Figure 4, for all the 5 manipulation techniques of the FF++ dataset it is possible to appreciate a certain separation between real samples (black dots) and fake ones (colored dots), which is coherent across all the different cases.
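A projection like the one in Figure 4 can be produced by collecting the penultimate (1280-dimensional) activations and handing them to scikit-learn. The sketch below assumes a torchvision-style MobileNetV2 (a `features` backbone followed by pooling and a classifier) already trained on the GSD inputs, and a matching test loader; it is an illustration, not the authors' plotting code.

```python
import numpy as np
import torch
from sklearn.manifold import TSNE

@torch.no_grad()
def tsne_embedding(model, loader, device="cuda"):
    """Collect penultimate-layer activations and project them with t-SNE.

    Assumes a torchvision-style MobileNetV2 (a `features` backbone followed
    by pooling and a classifier) trained on GSD inputs, and a test loader
    yielding (image, label) batches.
    """
    model.eval()
    feats, labels = [], []
    for x, y in loader:
        f = model.features(x.to(device))    # (B, 1280, h, w) feature maps
        f = f.mean(dim=(2, 3))              # global average pooling -> (B, 1280)
        feats.append(f.cpu().numpy())
        labels.append(y.numpy())
    feats = np.concatenate(feats)
    labels = np.concatenate(labels)
    emb = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(feats)
    return emb, labels  # scatter `emb`, colored by `labels`, to mimic Figure 4
```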
Even though the GSD features of real and fake faces often look uniform (see Figure 3), subtle differences can be perceived by a neural network while being almost invisible to the human eye. The T-SNE depicted in Figure 4 clearly demonstrates how relevant these GSD patterns are (similar representations can be obtained with other network architectures). Although we are processing patches that describe surfaces of faces rather than canonical RGB face images, a significant number of test samples are correctly separated in the projection space. We also plot the ROC curves on the test set for all the face manipulations detected using MobileNetV2, in Figure 5. We deduce that for all the forgery techniques the AUC (Area Under Curve) is around 0.85, which is quite interesting considering the scarce visible information carried by this feature. Similarly, we have made a quantitative evaluation of this phenomenon by computing the accuracy values for all five different distortions. To do that, we evaluate our approach, still using GSD features as the unique input to the classification network, and we report our results for all 4 architectures. As listed in Table 1, we can notice that in each case, and coherently across the different kinds of network architectures, an average accuracy of around \(0.75\) can be globally achieved. We observe that the GSD exhibits good distinctiveness in most of the manipulations for all the architectures, with high accuracy: e.g., DeepFakes (DF), Face2Face (F2F) and FaceShifter (FSH) are well detected. Performance on NeuralTextures (NT) is lower than on the other manipulations, possibly because NT deals with facial reenactment performed just around the mouth region [30]. Additionally, we report the ROC curves for each forgery for all the architectures in Figure 6, which highlights the efficacy of the GSD features in most cases, as the average Area Under Curve is above \(0.80\), except for NT, which gets an average of \(0.77\). We can also observe that EfficientNet-B0 and MobileNetV2 report higher AUC for all the forgeries.

### Composing GSD with RGB frames

Hereafter, we present the results obtained by composing the 3-channel GSD with the RGB frames that are usually adopted as the primary source of information in most deepfake detection methods. This has been done in order to understand whether the proposed GSD feature is able to provide an improvement in deepfake detection, thanks to the fact that it takes into account geometrical components related to the acquisition scene. In this case, the diverse network architectures have been trained by receiving as input a 6-channel tensor composed of the 3 RGB bands concatenated with the 3 GSD channels. The achieved performances in terms of detection accuracy are listed in Table 2. As can be seen by looking at the last column of the table, a general improvement is registered on average. Because the accuracy values are already quite high, the improvement is rather limited; what is interesting, however, is that it is consistent for all five forgeries and coherent across all the different network architectures that we considered. In particular, if we look at the NT case, which usually appears to be more difficult to treat, it is possible to appreciate that an overall improvement is achieved for all four networks.
It is worth pointing out that for the Xception model a substantially similar behavior is registered, and the GSD feature does not seem to bring a relevant advancement. Since Xception takes input images of \(299\times 299\) but our original patches are \(224\times 224\), both RGB and GSD have to be upscaled to the bigger resolution, which potentially adds interpolation artifacts. Indeed, ResNet50, which (like the other selected architectures) can directly take the original patches as input, and which has a different architecture from Xception but a comparable number of trainable parameters and ImageNet performance, obtains an overall improvement when RGB is concatenated with GSD. EfficientNet-B0, again with a different architecture but similar performance and one fifth of the trainable parameters of ResNet50 and Xception, gains even more, with an average accuracy across all forgery manipulations of 0.981 (i.e. \(+0.3\%\) in percentage notation). Overall, we notice that our approach benefits most when employing either EfficientNet-B0 or MobileNetV2. This is also confirmed by looking at the ROC curves in Figure 6, where our approach is \begin{table} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c||c|c|} \hline & \multicolumn{12}{c|}{**FF++ forgeries**} \\ \hline **Architectures** & \multicolumn{2}{c|}{DF} & \multicolumn{2}{c|}{F2F} & \multicolumn{2}{c|}{FSH} & \multicolumn{2}{c|}{FS} & \multicolumn{2}{c||}{NT} & \multicolumn{2}{c|}{Average} \\ \cline{2-13} & RGB & RGB+GSD & RGB & RGB+GSD & RGB & RGB+GSD & RGB & RGB+GSD & RGB & RGB+GSD & RGB & RGB+GSD \\ \hline \hline ResNet50 & 0.981 & **0.984** & 0.988 & **0.989** & **0.980** & 0.971 & 0.985 & **0.986** & 0.938 & **0.947** & 0.974 & **0.975** \\ MobileNetV2 & 0.987 & **0.992** & **0.990** & 0.989 & 0.985 & **0.989** & **0.990** & **0.990** & 0.958 & **0.966** & 0.982 & **0.985** \\ EfficientNet-B0 & 0.989 & **0.992** & 0.983 & **0.986** & **0.982** & **0.982** & 0.984 & **0.985** & 0.955 & **0.958** & 0.978 & **0.981** \\ Xception & **0.976** & 0.975 & **0.979** & **0.979** & **0.976** & **0.976** & 0.977 & **0.978** & **0.939** & **0.939** & **0.969** & **0.969** \\ \hline \end{tabular} \end{table} Table 2: Performance in terms of accuracy on the test set for the different architectures with respect to the FF++ forgeries for RGB and RGB+GSD cases. trained with the GSD features only. These two architectures show a superior trend of the ROC curves with respect to the others, with correspondingly higher Area Under Curve values for all the manipulations. Such behavior demonstrates that our proposed GSD feature, introduced in a classification network, can benefit the detection of deepfakes. ## 5 Conclusions In this paper we proposed a novel deepfake detection approach named SurFake, able to detect face manipulations at the frame level. To do that, we introduced the use of the Global Surface Descriptor (GSD) as a feature that accounts for the camera acquisition process, which permanently marks the image. In particular, we exploited the characteristics of the surfaces in which pixels belonging to horizontal or vertical areas of the image have a specific direction and intensity with respect to a global coordinate system. We tested SurFake with 4 different architectures on FaceForensics++, which contains 5 different face manipulations. 
We demonstrated that our proposed GSD features alone allow a classifier to reach around 75% accuracy on average; furthermore, we tested our proposed pipeline using RGB frames and GSD together as input and obtained an overall improvement, though limited, for all the diverse face manipulations. As future work, we plan to use larger cropped patches in order to possibly improve the effectiveness of GSD, and to apply data augmentation, e.g. random crops. We will investigate other geometric information estimated by the UpRightNet methodology, e.g. the local surface geometry, which is directly tied to the local coordinate system. Finally, we will carry out further experiments on other deepfake datasets.
2309.15921
Envelope Ejection and the Transition to Homologous Expansion in Common-Envelope Events
We conduct a long-timescale ($5000\,$d) 3-D simulation of a common-envelope event with a $2\,M_{\odot}$ red giant and a $1\,M_{\odot}$ main sequence companion, using the moving-mesh hydrodynamic solver MANGA. Starting with an orbital radius of $52\,R_{\odot}$, our binary shrinks to an orbital radius of $5\,R_{\odot}$ in $200\,$d. We show that over a timescale of about $1500\,$d, the envelope is completely ejected while $80$ per cent is ejected in about $400\,$d. The complete ejection of the envelope is solely powered by the orbital energy of the binary, without the need for late-time reheating from recombination or jets. Motivated by recent theoretical and observational results, we also find that the envelope enters a phase of homologous expansion about $550\,\rm d$ after the start of our simulation. We also run a simplified 1-D model to show that heating from the central binary in the envelope at late times does not influence the ejection. This homologous expansion of the envelope would likely simplify calculations of the observational implications such as light curves.
Vinaya Valsan, Sarah V. Borges, Logan Prust, Philip Chang
2023-09-27T18:00:41Z
http://arxiv.org/abs/2309.15921v1
# Envelope Ejection and the Transition to Homologous Expansion in Common-Envelope Events ###### Abstract We conduct a long-timescale (5000 d) 3-D simulation of a common-envelope event with a 2 \(M_{\odot}\) red giant and a 1 \(M_{\odot}\) main sequence companion, using the moving-mesh hydrodynamic solver Manga. Starting with an orbital radius of 52 \(R_{\odot}\), our binary shrinks to an orbital radius of 5 \(R_{\odot}\) in 200 d. We show that over a timescale of about 1500 d, the envelope is completely ejected while 80 per cent is ejected in about 400 d. The complete ejection of the envelope is solely powered by the orbital energy of the binary, without the need for late-time reheating from recombination or jets. Motivated by recent theoretical and observational results, we also find that the envelope enters a phase of homologous expansion about 550 d after the start of our simulation. We also run a simplified 1-D model to show that heating from the central binary in the envelope at late times does not influence the ejection. This homologous expansion of the envelope would likely simplify calculations of the observational implications such as light curves. keywords: binaries: close - hydrodynamics - stars: winds, outflows - methods: numerical ## 1 Introduction Common-envelope evolution (CEE; Paczynski, 1976) is believed to be responsible for the production of many close binary systems such as X-ray binaries, binary neutron stars, binary black holes, and white dwarf binaries including cataclysmic variables (for a review, see Ivanova et al., 2013). The physics and astrophysics of these systems have been a subject of continuous study for the last 50 yr. However, the complex physics of this process, which includes gravity, hydrodynamics, nuclear burning, recombination, and radiation, has precluded much analytic progress. In the last decade, advances in computing and algorithms have made high-resolution, long-timescale simulations of CEE possible. These long-timescale simulations have begun to unravel the relevant physics of CEE and its astrophysical impact. In particular, studies carried out by several groups on low-mass binaries such as an asymptotic giant branch (AGB) primary with a main-sequence (MS) companion star (Sand et al., 2020; Chamandy et al., 2020; Ondratschek et al., 2022) and a red-giant branch (RGB) primary with an MS star (Iaconi et al., 2019) have demonstrated that the envelope is completely ejected on sufficiently long timescales (years to decades). These groups differ in their conclusions of what physics is important. For instance, Iaconi et al. (2019), Sand et al. (2020), and Ondratschek et al. (2022) argue that recombination energy is crucial in envelope ejection, but Chamandy et al. (2020) argue otherwise. This difference is likely due to the limited number of cases studied and future studies will likely bring this physics into sharper focus. In addition, recent work by Iaconi et al. (2019) argues that the expansion of the envelope leads to homologous expansion on long timescales. This should not come as a surprise because any expansion leads to homologous expansion so long as the trajectories remain ballistic. However, if the envelope expands homologously, this helps to simplify the theory of CEE and would provide an easier means to compute observables. Recent observations have also hinted at the presence of homologously expanding ejecta in CEE and stellar merger events. 
For instance, the properties of observed CO emission in V4332 Sgr are well reproduced with the homologous expansion model (Kaminski et al., 2018). Additionally, Kaminski et al. (2020) showed that the observed properties of the molecular remnant of Nova 1670 (CK Vulpeculae) are satisfactorily reproduced by linear velocity fields. In this paper, we study the physics of the ejection of the envelope using long-timescale simulations. Starting with our recent work (Prust and Chang, 2019; Prust, 2020), we optimize our numerical techniques to allow for an order of magnitude increase in integration time. We show that for the case of a 2 \(M_{\odot}\) RGB and a 1 \(M_{\odot}\) MS companion, we achieve complete envelope ejection in 1500 d. We also show that the envelope enters a homologous phase early on (about 550 d) and that the morphology of the ejected material is roughly spherical. The paper is organized as follows. In Section 2, we discuss the numerical setup of our calculations. We follow Prust and Chang (2019) and Prust (2020), but describe a significant improvement in how we generate initial conditions, giving substantial speedups. In Section 3, we discuss the results of these simulations and show complete envelope ejection after 1500 d. We also demonstrate that the envelope enters a homologous phase early on and that it can be approximated as spherical. Motivated by these results, we develop a 1-D numerical model in Section 4 and compare these simplified calculations with the full 3-D calculation. In Section 5, we discuss the major theoretical and observational implications of our results and close in Section 6. ## 2 Numerical setup We use the moving-mesh hydrodynamic solver for ChaNGa, which we call manga (Chang et al., 2017; Prust and Chang, 2019), to study CEE. manga is a moving-mesh hydrodynamic simulation code based on the algorithms described by Springel (2010). A detailed description of manga is presented by Chang et al. (2017). Improvements such as the multiple time-stepping algorithms and integration of stellar equations of state are presented by Prust and Chang (2019). A discussion of its use for simulations of main sequence tidal disruption events is presented by Spaulding and Chang (2021). Finally, recent code improvements are discussed by Chang et al. (2020) for radiation hydrodynamics, Chang and Etienne (2020) for general relativistic hydrodynamics, Prust (2020) for moving-boundary conditions, and Prust & Chang (in preparation) for magnetohydrodynamics. We refer the interested reader to this literature for a detailed description of manga. ### Initial Conditions We use the same procedure as Prust and Chang (2019) to construct initial conditions. We evolve a \(2\,M_{\odot}\) star with MESA (Paxton et al., 2011, 2013, 2015, 2018, 2019; Jermyn et al., 2022) from the pre-main sequence to the red giant phase and stop when it reaches a radius of around \(52\,R_{\odot}\). As done by Prust and Chang (2019), we construct a star of mass \(2\,M_{\odot}\), whose entropy profile matches that of the original star. The core of the newly-constructed star is modelled as a dark matter particle with gravitational softening for computational expediency. We then map the radial profile of density and temperature to an unstructured particle (mesh-generating point) mesh. The simulation consists of 430K mesh-generating points, of which 80K model the star. The companion star, which is also modelled as a softened dark matter particle, is then placed at the surface of the red giant. 
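A purely schematic sketch of this mapping step follows; it is not the actual MANGA mesh generation, and the profile arrays, point count and sampling scheme are illustrative assumptions. Radii are drawn from the enclosed-mass distribution so that the sampled points trace the density profile.

```python
import numpy as np

def sample_star_points(r, rho, n_points=80_000, seed=42):
    """Draw point positions whose radial distribution follows rho(r).

    r, rho : 1-D arrays with the radial profile (e.g. from MESA).
    Schematic illustration only.
    """
    rng = np.random.default_rng(seed)
    dm = 4.0 * np.pi * r**2 * rho * np.gradient(r)    # shell masses
    cdf = np.cumsum(dm) / np.sum(dm)                  # enclosed-mass CDF
    radii = np.interp(rng.uniform(size=n_points), cdf, r)  # inverse-CDF sampling
    # isotropic random directions
    mu = rng.uniform(-1.0, 1.0, n_points)             # cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_points)
    sin_t = np.sqrt(1.0 - mu**2)
    return radii[:, None] * np.column_stack(
        (sin_t * np.cos(phi), sin_t * np.sin(phi), mu))
```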
We should note that placing the companion on the surface of the red giant as an initial condition is unrealistic for a couple of reasons. First, the red giant model used at the beginning of the simulation is only in hydrostatic equilibrium in isolation and thus does not account for the effects of the companion. Second, a more realistic scenario would involve the binary system evolving, allowing the red giant to slowly expand on a nuclear timescale until it fills its Roche lobe. At this point, it undergoes unstable mass transfer, driving the system into CEE. However, this more realistic scenario is not easily realizable in numerical simulation as it involves the slow evolution of the red giant on a thermal timescale and would demand a realistic treatment of nuclear burning and radiation. The computational costs of such a simulation would be prohibitive. Thus, we have simplified the initial conditions to the ones stated and anticipate that for the long-term evolution of the CEE event, the initial conditions do not play a large role. In this paper, we have made a number of modifications in an effort to reduce the computational cost. First, we use an adiabatic equation of state (\(\gamma=5/3\)) instead of a MESA equation of state. The simulation using the MESA equation of state is significantly more computationally expensive than the adiabatic case. The primary advantage of using the MESA equation of state is that it encodes additional information with regard to recombination energy, which may be important in ejecting the envelope for low-mass systems. Nevertheless, as we will demonstrate below, orbital energy alone achieves complete envelope ejection without the need for recombination energy. We also improved the grid generation for the tenuous atmosphere surrounding the stars. In particular, we increase the spacing between grid points exponentially outside the stars up to the final coarsest resolution instead of the power-law increase used previously by Prust and Chang (2019). This reduces the number of nearest-neighbour searches for mesh-generating points near the boundary of the star and atmosphere. We find that this improves the performance of the code by around a factor of 2. These reductions in computational costs and improved computing power allow us to run our simulation for 5000 d, which is longer than our previous simulations (Prust and Chang, 2019) by a factor of around 20. It is also in line with other recent long-timescale simulations (Iaconi et al., 2019; Sand et al., 2020). ## 3 Results We show a series of projected density plots in Fig. 1, projected along the axis of the orbital plane (z-axis, right column) and projected along a direction in the orbital plane (x-axis, left column) at 503 d, 1006 d and 2013 d. The '+' sign marks the centre of mass of the system. Initially, these projections demonstrate that the ejected matter is axisymmetric, but not spherical. The overall shape remains fairly constant, though it does become more spherical as it evolves. This constancy of the overall shape and evolution to greater spherical symmetry will be important in our discussion of homologous expansion and the spherical approximation. In Fig. 2, we plot the orbital separation, \(a\), between the centres of the stars. Each \(a\) in the plot is the average of \(a\) over a 14-d time interval. In the beginning, the two centres plunge toward one another in a period of rapid orbital decay. 
This starts to slow down and at 200 d, the orbital decay plateaus to an orbital separation of \(a\approx 5\,R_{\odot}\), which is similar to our previous result of \(a\approx 3.6\,R_{\odot}\) (Prust and Chang, 2019). For the \(i\)-th particle/mesh-generating point in the system, the total mechanical energy is defined as \[E_{\rm{mech,i}}=m_{\rm i}\left(\frac{1}{2}v_{\rm i}^{2}+\phi_{\rm i}\right), \tag{1}\] where \(m_{\rm i}\) is the total mass of the mesh-generating point, \(v_{\rm i}\) is the fluid velocity of the mesh-generating point relative to the bound centre of mass of the system, and \(\phi_{\rm i}\) is the gravitational potential. Particles with \(E_{\rm{mech,i}}<0\) are bound to the binary while those with \(E_{\rm{mech,i}}>0\) are unbound. As discussed by Prust and Chang (2019), we must carefully define the velocities relative to that of the centre of mass of the bound material. This involves an iterative computation to find the bound mass and the centre of mass velocity. The unbound mass fraction is then defined as the fractional mass of the material with positive total energy. In Fig. 3 we plot the unbound mass fraction, \(f_{\rm{unb}}\), as a function of time. Nearly all gas from the red giant is unbound after 1500 d and over 80 per cent by 400 d. At 250 d, the unbound mass fraction is substantially larger than in our previous result (Prust and Chang, 2019) (40 per cent vs. 10 per cent), considering just the mechanical energy. However, the equation of state is different between the two simulations (ideal gas vs MESA). In addition, envelope ejection occurs in the absence of additional late-time energy injection via hydrogen recombination and/or jets. Other work has also recently demonstrated complete envelope ejection on a time scale of about 1000 d, but these results can rely on additional late-time energy injection. Figure 1: Projection of density on to the \(x-z\) plane (left panel) and \(x-y\) plane (right panel) at different time slices (503 d, 1006 d, 2013 d) from the simulation. The ‘+’ sign marks the centre of mass of the system. ### Homologous Expansion Recently, Iaconi et al. (2019) showed in their long-timescale CEE simulations that the envelope ejection follows a homologous expansion approximation. In their work, they simulate the evolution of a \(0.88\,M_{\odot},83\,R_{\odot}\) RGB and a \(0.6\,M_{\odot}\) companion star and follow the system for about \(15\,\mathrm{yr}\). They show that the external layers of the envelope become homologous as soon as they are ejected, but that it takes about \(14\,\mathrm{yr}\) for the bulk of the unbound gas to enter homologous expansion. Motivated by this result, we investigate the onset of homologous expansion in our simulations. To begin, we recall that the distinguishing characteristic of homologous expansion is that the velocity follows a radial profile \(v\propto r\). In essence, this means that fluid elements are on ballistic trajectories with little or no interaction between fluid elements or external forces. As such, the radial position of a fluid element can then be written as \[r(t)=v_{\mathrm{r}}t_{\mathrm{h}}, \tag{2}\] where \(v_{\mathrm{r}}\) is the radial velocity of the fluid element and \(t_{\mathrm{h}}\) is the homologous expansion time. 
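As an aside, the iterative determination of the bound material described above can be written down compactly. The following Python sketch is purely illustrative (array names and the convergence criterion are assumptions, not the actual MANGA analysis code):

```python
import numpy as np

def unbound_fraction(m, v, phi, max_iter=100):
    """Illustrative sketch of the iterative bound-mass computation.

    m   : (N,) masses of the mesh-generating points
    v   : (N, 3) velocities in the simulation frame
    phi : (N,) gravitational potential at each point
    """
    bound = np.ones(len(m), dtype=bool)   # start by assuming all mass is bound
    for _ in range(max_iter):
        # centre-of-mass velocity of the currently bound material
        v_com = np.average(v[bound], axis=0, weights=m[bound])
        v_rel2 = np.sum((v - v_com) ** 2, axis=1)
        e_mech = m * (0.5 * v_rel2 + phi)  # equation (1)
        new_bound = e_mech < 0.0
        if np.array_equal(new_bound, bound):
            break                          # bound set has converged
        bound = new_bound
    return np.sum(m[~bound]) / np.sum(m)
```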
We note that while the formalism of homologous expansion is relatively simple and discussed widely in the literature, we discuss it here again to define it in terms of computational quantities like time, which is defined as zero at the beginning of a simulation and has no relation to the zero time in homologous expansion. Thus, we define \(t_{\mathrm{h}}=t-t_{0}\), where \(t\) is the time since the start of the simulation and \(t_{0}\) is some fitted time that defines the \(t=0\) point of homologous expansion. Indeed, \(t_{\mathrm{h}}\) is mapped exactly to the time in most discussions of homologous expansion. We can rewrite equation (2) as \[v_{\mathrm{r}}=\frac{r}{t-t_{0}}=\frac{r}{t_{\mathrm{h}}}, \tag{3}\] and by differentiating and integrating the above equation, we can write the position of a fluid particle at time \(t\) with respect to an initial time \(t_{\mathrm{i}}\) as \[r(t)=r_{\mathrm{i}}\frac{t-t_{0}}{t_{\mathrm{i}}-t_{0}}=r_{\mathrm{i}}\frac{t _{\mathrm{h}}}{t_{\mathrm{h,i}}}, \tag{4}\] where \(r_{\mathrm{i}}\) is the radial position at a time \(t_{\mathrm{i}}>t_{0}\). From equation (4), we now define a scaled radius with respect to the initial time given the current radius for any fluid element in the simulation \[r_{\mathrm{s}}(r,t)=r\frac{t_{\mathrm{h,i}}}{t_{\mathrm{h}}}=r\frac{t_{ \mathrm{i}}-t_{0}}{t-t_{0}}. \tag{5}\] In other words, \(r_{\mathrm{s}}\) maps the position of a fluid element, \(r\), at a particular time \(t\) to the position of a fluid element at the initial time \(t_{\mathrm{i}}\). Having defined the scaled radius, \(r_{\mathrm{s}}\), we also define the scaled density and velocity as \[\rho_{\mathrm{s}}(r,t) = \rho_{\mathrm{i}}(r_{\mathrm{s}}(r,t))\left(\frac{t_{\mathrm{h}}} {t_{\mathrm{h,i}}}\right)^{3} \tag{6}\] \[v_{\mathrm{r,s}}(r,t) = v_{\mathrm{r,i}}(r_{\mathrm{s}}(r,t)). \tag{7}\] The simple rescaling given in equations (6) and (7) is insufficient to describe the entire system. While it works for the expanding envelope, it does not describe the tenuous atmosphere. Toward that end, we define the radius of the envelope, \(R(t)\), which is the position of the outer boundary of the homologously expanding region inside which equations (5), (6) and (7) are valid. We define a dimensionless radius \(\eta\) as \[\eta=\frac{r}{R}, \tag{8}\] so that \(\eta=0\) at the centre of the CE and \(\eta=1\) at the envelope's outer boundary. From equation (4), we can estimate \(R\) for any given time with respect to the initial time \(t_{\mathrm{i}}\): \(R=R_{\mathrm{i}}(t_{\mathrm{h}}/t_{\mathrm{h,i}})\). We can then redefine scaled density, \(\rho_{\mathrm{s}}\), and scaled velocity, \(v_{\mathrm{s}}\), for regions inside and outside of the envelope. This gives \[\rho_{\mathrm{s}} = \begin{cases}\rho_{\mathrm{i}}(r_{\mathrm{s}}(r,t))\left(\frac{t_ {\mathrm{h}}}{t_{\mathrm{h,i}}}\right)^{3}&\text{if }\eta\leq 1\\ \rho_{\mathrm{b}}&\text{if }\eta>1\end{cases}\] \[v_{\mathrm{r,s}} = \begin{cases}v_{\mathrm{r,i}}(r_{\mathrm{s}}(r,t))&\text{if }\eta\leq 1 \\ 0&\text{if }\eta>1\end{cases} \tag{9}\] To show that our simulations follow the scaling defined by equations (5) and (9), we select \(t_{\mathrm{i}}\approx 800\,\mathrm{d}\) and fit \(t_{0}\approx 550\,\mathrm{d}\) so that the re-scaled radial velocities \(v_{\mathrm{r,s}}\) match one another between a few \(\times 10^{2}\,R_{\odot}\) and a few \(\times 10^{3}\,R_{\odot}\). We show this result in the top plot of Fig. 4. 
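A compact sketch of this rescaling and of the fit for \(t_{0}\) follows; the snapshot layout, the radial grid and the mismatch measure are illustrative assumptions rather than the actual analysis pipeline.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# `snapshots`: assumed dict mapping time t (in days) to (r, v_r) profile arrays,
# with the reference snapshot at t_i = 800 d; `times` is a list of epochs.
def rescaled_velocity(r_s, snapshots, t, t0, t_i=800.0):
    r, v_r = snapshots[t]
    # invert equation (5): the radius at time t that maps to scaled radius r_s
    return np.interp(r_s * (t - t0) / (t_i - t0), r, v_r)

def mismatch(t0, snapshots, times, r_s=np.geomspace(2e2, 3e3, 50)):
    profiles = [rescaled_velocity(r_s, snapshots, t, t0) for t in times]
    return np.sum(np.var(profiles, axis=0))  # spread between rescaled epochs

res = minimize_scalar(lambda t0: mismatch(t0, snapshots, times),
                      bounds=(0.0, 790.0), method="bounded")
t0_fit = res.x  # about 550 d in our simulation
```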
As this plot shows, the velocities for \(r_{\mathrm{s}}>\) a few \(\times 10^{2}\,R_{\odot}\) match each other for a number of different time steps. We also fit a power law between \(6\times 10^{2}\,R_{\odot}\) and \(5\times 10^{3}\,R_{\odot}\), and the resulting fit is \[v_{\rm r}=4.8\left(\frac{r_{\rm s}}{100\,R_{\odot}}\right)^{0.86}{\rm km\,s^{-1}}. \tag{10}\] Figure 3: Fraction of unbound mass, \(f_{\mathrm{unb}}\), as a function of time, \(t\). We only consider the mechanical energy in this case. Figure 2: Smoothed separation between the centre of mass of the red giant and the main-sequence star as a function of \(t\). There is an initial rapid plunge of the two centres toward each other, but this plateaus to about \(5\,R_{\odot}\) after \(200\,\mathrm{d}\). The radial power law exponent is about 1, which is consistent with homologous expansion. Thus we find that by 2000 d after the start of the simulation, homologous expansion is definitively reached. We also plot the scaled density \(\rho_{\rm s}\) in the bottom plot of Fig. 4. Similar to the behaviour of \(v_{\rm r}\), we observe that the \(\rho_{\rm s}\) match one another for a few different epochs when rescaled by \(r_{\rm s}\). This is expected in the case of homologous expansion, when the (scaled) density structure is frozen. In Fig. 5, we show that the fractional change in the absolute 3-D velocity of all fluid elements relative to their asymptotic (late-time) velocities approaches zero as the envelope evolves. The fractional change is computed based on the late-time velocity of each fluid element, defined as the average of the velocity between 2000 d and 2500 d. The dotted line in Fig. 5 represents the mean of the fractional change in the particle velocity. The shaded region represents the standard deviation from the mean. Thus, the velocities of fluid elements do not change by more than 5 per cent in either magnitude or direction after about 1500 d on average. This implies that the fluid elements are on ballistic trajectories. In addition, we plot the ratio of total thermal energy in the system to the total kinetic energy as a function of time in Fig. 6. The thermal energy is smaller than 5 per cent of the kinetic energy after 500 d. This thermal energy does not continue to drop through adiabatic expansion because we use a temperature floor in our simulations to maintain numerical stability. In any case, thermal energy is a negligible fraction of the energy budget of the system. Finally, for a homologously expanding system under adiabatic conditions, the average density scales like \[\bar{\rho}\propto t_{\rm h}^{-3}. \tag{11}\] This has also been previously demonstrated numerically by Iaconi et al. (2019) for their SPH simulations. Here we confirm the same result by plotting the average density of unbound particles (solid line) in the envelope as a function of \(t\) in Fig. 7. We also plot a \(t^{-3}\) power law fit (dashed line) that is fitted for \(t\in[500,2000]\) d. The average density from our simulation follows the \(t^{-3}\) power law. Figure 4: Radial velocity (top) and scaled density (bottom) as a function of scaled radius for different times (2000 d, 2500 d and 3000 d). We also show a power-law fit (solid black line) for \(v_{\rm r}\propto r_{\rm s}^{0.86}\) (top). Figure 5: The fractional change in the velocity of all particles compared to the final average velocity as a function of time. 
The dashed line represents the mean and the shaded area represents the standard deviation of the fractional change in the particle velocity. Figure 6: Ratio of thermal energy to kinetic energy as a function of time. ## 4 1-D Model Motivated by the results of the previous section, we now study a simplified 1-D spherically-symmetric model of the ejected envelope. We have developed a 1-D finite-volume spherically-symmetric hydrodynamics code in Python that uses an HLLE Riemann solver (Harten et al., 1983; Einfeldt, 1988) with piecewise-constant (first-order) reconstruction to study this ejected envelope1. Our models consist of 350 grid points that are logarithmically spaced, starting from \(r=3\,R_{\odot}\) to \(r=3\times 10^{7}\,R_{\odot}\), giving 50 grid points per decade. We set the origin to the centre of mass of the binary and use free (Neumann) boundary conditions on the inner and outer boundaries. While the discussion of the hydrodynamic equations can be widely found in the literature, we will briefly recap them here for completeness. These equations can be written in compact notation by introducing a state vector \(\mathcal{U}=(\rho,\rho v_{\rm r},\rho e)\) Footnote 1: This 1-D code will be shared on reasonable request to the corresponding author. \[\frac{\partial\mathcal{U}}{\partial t}+\frac{1}{r^{2}}\frac{\partial\mathcal{F}}{\partial r}=\mathbf{S}, \tag{12}\] where \(\mathcal{F}=(r^{2}\rho v_{\rm r},r^{2}(\rho v_{\rm r}v_{\rm r}+P),r^{2}(\rho e+P)v_{\rm r})\) is the flux function, \(\mathbf{S}=(0,-\rho\frac{GM_{\rm r}}{r^{2}},-\rho v_{\rm r}\frac{GM_{\rm r}}{r^{2}}+\mathcal{S}_{\rm h})\) is the source function, \(e\) is the specific energy, \(G\) is the gravitational constant and \(M_{\rm r}\) accounts for the mass of the central binary and the envelope enclosed within radius \(r\). The extra term, \(\mathcal{S}_{\rm h}\), is added to study the effect of the heating supplied to the envelope from the central binary. We discuss this below. For the initial conditions of the 1-D model, we take the fitted results from the 3-D numerical simulation at \(t=800\,\mathrm{d}\). The 1-D model computes 1000 outputs over 100 yr, each one separated by 0.1 yr. We present the results in Figs. 8 and 9 for 10, 20, 50, and 100 yr. These times are relative to the start of the 3-D simulation so that times in the two simulations can be directly compared. The scaled radius and density follow equations (5) and (6), respectively. We calculate the best fit of the linear part to get the power-law relation between \(v_{\rm r}\) and \(r_{\rm s}\). We fit each time step separately and average them to produce a best fit of \(v_{\rm r}\propto r_{\rm s}^{0.95}\). We note that the 1-D model is intentionally not constrained to adhere to the homologous expansion model, though we did use initial conditions that correspond to the beginning of the homologous phase. One feature that is observed in these 1-D models, but not in the full 3-D models, is evident in Fig. 9. Here, at radii below about a few \(\times 10\,R_{\odot}\), the radial velocity becomes negative. This manifests as the vertical rise in Fig. 9. This is due to the gravitational potential from the inner binary being much larger than the total energy of the envelope in this region. However, these negative radial velocities are not seen in the full 3-D models, though the data is quite noisy in this region (Fig. 4). This difference may be attributed to the periodic forcing of the orbiting binary on the gas in this inner region. 
To examine this, we develop a simple model of binary heating for this 1-D model. The heating from the central binary can be thought of as a periodic forcing from the binary driving a damped simple harmonic oscillator with a frequency equal to the epicyclic frequency, \(\kappa=\Omega\), where \(\Omega\) is the Keplerian orbital frequency. A discussion of this heating term is in Appendix A, but the resulting parameterized heating term is \[\mathcal{S}_{\rm h}=\lambda\Omega\frac{GM_{\rm bin}\rho}{r}\left(\frac{a_{\rm bin}}{r}\right)^{2}, \tag{13}\] Figure 8: Scaled density, \(\rho_{\rm s}\), as a function of scaled radius, \(r_{\rm s}\), from 1-D simulations, for 10 (red), 20 (blue), 50 (magenta), and 100 (yellow) yr and a heating parameter of \(\lambda=1\). Figure 7: Mean density of the envelope as a function of \(t\) (blue solid curve). The black dotted curve is the \(t^{-3}\) fit and is fitted for \(t\in[500,2000]\,\mathrm{d}\). Note that the average density profile follows a \(t^{-3}\) power-law closely. Figure 9: Radial velocity, \(v_{\rm r}\), as a function of scaled radius from 1-D simulations. The times and \(\lambda\) are the same as in Fig. 8. The solid black line is the best fit of \(v_{\rm r}\propto r_{\rm s}^{0.95}\). 
To examine this, we develop a simple model of binary heating for this 1-D model. The heating from the central binary can be thought of as a periodic forcing from the binary driving a damped simple harmonic oscillator with a frequency equal to the epicyclic frequency, \(\kappa=\Omega\), where \(\Omega\) is the Keplerian orbital frequency. A discussion of this heating term is in Appendix A, but the resulting parameterized heating term is \[\mathcal{S}_{\rm h}=\lambda\Omega\frac{GM_{\rm bin}\rho}{r}\left(\frac{a_{\rm bin }}{r}\right)^{2}, \tag{13}\] Figure 8: Scaled density, \(\rho_{\rm s}\) as a function of scaled radius, \(r_{\rm s}\) from 1-D simulations, for 10 (red), 20 (blue), 50 (magenta), and 100 (yellow) yr and a heating parameter of \(\lambda=1\). Figure 7: Mean density of the envelope as a function of \(t\) (blue solid curve). The black dotted curve is the \(t^{-3}\) fit and is fitted for \(r\in[500,2000]\,\mathrm{d}\). Note that the average density profile follows a \(t^{-3}\) power-law closely. Figure 9: Radial velocity, \(v_{\rm r}\), as a function of scaled radius from 1-D simulations. The times and \(\lambda\) are the same as in Fig.8. The solid black line is the best fit of \(v_{\rm r}\propto r_{\rm s}^{0.95}\). where \(\lambda\) is a free parameter, \(M_{\rm bin}\) is the binary mass and \(\sigma_{\rm bin}\) is the binary separation. Here, we set \(M_{\rm bin}=1.36\,M_{\odot}\), and \(a_{\rm bin}=5\,R_{\odot}\). Fig. 10 shows \(v_{\rm r}\) and \(\rho_{\rm s}\) at 10 yr for different heating parameters \(\lambda\). The heating term \(\mathcal{S}_{\rm h}\) impacts the inner envelope and has little impact in the outer regions (\(r_{\rm s}>10^{4}\,R_{\odot}\)). For the case of no heating (\(\lambda=0\), red line), the inner regions follow a free-fall inflow solution. For larger heating rates, this inflow is suppressed but not completely eliminated, though the infall region occurs at substantially smaller radii than is effectively probed by our 3-D simulations. In either case, the heating rate makes no impact on either the density or velocity profile at large radii. We thus conclude that binary heating or any other late-time heating has little effect on the ultimate expansion and ejection of the envelope, but is required to prevent the infall of the innermost envelope. In the 3-D simulations, angular momentum or turbulence of the gas close to the binary may play a similar role, but this is not well modelled in the 1-D simulation. In any case, it is evident that there is extra physics that is not entirely accounted for in the 1-D model that leads to homologous expansion (at smaller radii) in the 3-D model. ## 5 Discussion In this work, we study the long-timescale evolution of CEE and the homologous nature of envelope expansion. We simulate a common-envelope event using manga for a \(2\,M_{\odot}\) RGB and a \(1\,M_{\odot}\) MS star binary system. We show that nearly all gas from the red giant is unbound in 1500 d using an adiabatic equation of state and relying only on the release of orbital energy. This is in agreement with and in contrast to other work. For instance, Chamandy et al. (2020) evolved a binary system of an AGB + white dwarf or MS star system through 20 orbits using an adiabatic equation of state. In agreement with our findings, they show that the envelope unbinds at a constant rate and would become unbound in less than 10 yr. On the other hand, a number of others suggest that additional energy injection is necessary. Ondratschek et al. 
(2022) studied a binary system consisting of an AGB primary similar to Sand et al. (2020) with a white dwarf or an MS star such that the mass ratio is 0.25 using the OPAL equation of state (Rogers & Nayfonov, 2002). They find a complete envelope ejection in about 1000 d when considering thermal and ionization energy along with mechanical energy, and in about 3400 d when ignoring the thermal and ionization energy. Sand et al. (2020) studied the fraction of unbound mass in two different simulations, one using the ideal gas equation of state and the other using the OPAL equation of state (Rogers & Nayfonov, 2002) for a binary system with an AGB primary and a white dwarf or an MS companion star such that the mass ratio is 0.5. In the case of the ideal gas equation of state, only 20 per cent of the mass becomes unbound and the rate of mass ejection is slower if the internal energy is ignored. In their OPAL runs, 80 per cent of the mass is unbound in about 2500 d considering only the mechanical energy and 100 per cent is ejected in about 1000 d considering mechanical energy along with thermal and recombination energy. Finally, Iaconi et al. (2019) find that the envelope of a binary system with a 0.88 \(M_{\odot}\) RGB and a 0.6 \(M_{\odot}\) MS companion star is completely ejected in about 500 d when considering mechanical and recombination energies. One crucial difference between our work and that of Sand et al. (2020) and Ondratschek et al. (2022) is that the cores of the stars end up in a much tighter binary in our case. In particular, Sand et al. (2020) starts the binary at 236 \(R_{\odot}\) but ends up at 41 \(R_{\odot}\). In our case, we start at around 50 \(R_{\odot}\), but end up at 5 \(R_{\odot}\). Hence, our orbit shrinks by a factor of nearly 10 whereas the orbit of Sand et al. (2020) shrinks by a factor of about 5. The corresponding relative gravitational energy release is therefore a factor of 2 greater in our case. In addition, we also show that the envelope reaches homologous expansion starting at around 550 d. In comparison, Iaconi et al. (2019) showed that it takes about 5000 d for the bulk of the unbound gas to become homologously expanding, even though the external layers of the envelope become homologous as soon as they are ejected. This difference may be due to the analysis methodology. Namely, Iaconi et al. (2019) traced the ballistic trajectories of SPH particles whereas we looked at the velocities of fluid elements and fit the velocities and associated radii to a \(t_{\rm h}=0\) point. We are also simulating these events at substantially higher resolution through a combination of greater particle count and the use of a moving-mesh methodology. Inspired by the homologous expansion in our 3-D simulation, we also study a 1-D model for \(t>800\) d. Unsurprisingly, we find that initializing the 1-D simulation with the spherical approximation of the 3-D simulation data produces homologous expansion in the bulk of the envelope. However, the inner regions require some additional physics not modelled in the 1-D simulation to preclude fallback and, hence, the breaking of homology. Here, we attribute this additional physics to heating from the periodic forcing of the binary but note that turbulence or angular momentum may play the same role. The fact that simulated common-envelope events follow both a (roughly) spherical and homologous expansion approximation is likely useful for their theoretical and observational studies. 
First, CE events need not be numerically simulated for extremely long timescales. Instead, they only need to be simulated to the point where they begin homologous expansion, which occurs on a timescale of years as opposed to decades. This will result in significant computational savings and an associated expansion in the parameter space that can be explored. Figure 10: Scaled density (\(\rho_{\rm s}\)) and radial velocity (\(v_{\rm r}\)) as a function of scaled radius (\(r_{\rm s}\)), for 10 yr and different heating parameters \(\lambda\). Secondly, the fact that they obey both the spherical and homologous expansion approximations implies that radiation transfer codes that are used to calculate supernova light curves and spectra can be adapted to compute light curves and spectra from CEE events. Homologous expansion kinematic models are widely accepted as a good first-order approximation to model radiation transfer in supernova explosions (Röpke, 2005; Kerzendorf and Sim, 2014; Liu et al., 2018). These radiation transfer codes typically assume a spherical and homologous expansion profile for the ejecta to greatly speed up the calculation. This fact has already been utilized by Kaminski et al. (2018) to model the CO emission in V4332 Sgr as a homologously expanding bipolar flow. Similarly, Kaminski et al. (2020) show that the observed properties of the molecular remnant of Nova 1670 (CK Vulpeculae) are reproduced by assuming linear velocity fields. Finally, we note that we have neglected effects, such as magnetic fields (Ondratschek et al., 2022) and jets (see for instance Soker and Kaplan, 2021), that could cause the outflow to become more non-spherical at late times. The effect of jets in CEE is still unclear, as Zou et al. (2022) found that jets are quickly choked within the envelope. On the other hand, Ondratschek et al. (2022) showed that late-time jet-like outflows produce the bipolar morphology seen in many planetary nebula systems. These effects will be a topic of future work. ## 6 Conclusions In this work, we have analyzed the nature of expansion and the timescale of complete envelope ejection in common-envelope evolution. We used the moving-mesh hydrodynamic solver manga to perform a long-timescale simulation of a CE system involving a \(2\,M_{\odot}\) red giant and a \(1\,M_{\odot}\) main sequence star. We let the system evolve for around \(13\,\mathrm{yr}\). Starting at an orbital radius of \(52\,R_{\odot}\), the binary plateaus to an orbital radius of \(5\,R_{\odot}\) in \(200\,\mathrm{d}\). We observe that nearly all envelope material is unbound after \(1500\,\mathrm{d}\), and \(80\) per cent is unbound in \(400\,\mathrm{d}\). This timescale is similar to that found by others who also studied envelope ejection in low-mass binary systems. However, we find that there is no need for additional energy injection from recombination. Motivated by previous work by Iaconi et al. (2019), we also show that the envelope enters a phase of homologous expansion after \(550\,\mathrm{d}\). This is likely important for theoretical and observational work on CEE. First, one can have significant computational savings by doing numerical simulations up to the homologous start time and using the homologous expansion model afterwards. Secondly, the radiative transfer codes used for finding light curves and spectra for supernovae can be adapted for use in CEE simulations. 
Finally, we study the homologous expansion model in 1-D simulations using the power-law fits of the homologous phase as initial conditions. From this study, we found that periodic heating from the binary star at late times can affect the inner regions of the envelope but does not impact the homologous expansion. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author. ## Acknowledgements VV acknowledges support from the NSF through grants PHY-1912649 and PHY-2207728. SvB, LP, and PC acknowledge support from the NASA ATP program through NASA grant NNH17ZDA001N-ATP, the NSF through grant AST-2108269, and the UWM Research Assistance Fund. LP also acknowledges support from the UWM R1 Distinguished Dissertator Award, NSF through grant PHY-1748958 and a grant from the Simons Foundation (216179, LB). manga simulations were completed on the Mortimer HPC System at UWM, which was funded by the NSF Campus Cyberinfrastructure Award OAC-2126229 and UWM. We also use the yt software platform for the analysis of the data and generation of plots in this work (Turk et al., 2011).
2309.12082
Estimating Stable Fixed Points and Langevin Potentials for Financial Dynamics
The Geometric Brownian Motion (GBM) is a standard model in quantitative finance, but the potential function of its stochastic differential equation (SDE) cannot include stable nonzero prices. This article generalises the GBM to an SDE with polynomial drift of order q and shows via model selection that q=2 is most frequently the optimal model to describe the data. Moreover, Markov chain Monte Carlo ensembles of the accompanying potential functions show a clear and pronounced potential well, indicating the existence of a stable price.
Tobias Wand, Timo Wiedemann, Jan Harren, Oliver Kamps
2023-09-21T13:53:17Z
http://arxiv.org/abs/2309.12082v2
# Estimating Stable Fixed Points and Langevin Potentials for Financial Dynamics ###### Abstract The Geometric Brownian Motion (GBM) is a standard model in quantitative finance, but the potential function of its stochastic differential equation (SDE) cannot include stable nonzero prices. This article generalises the GBM to an SDE with polynomial drift of order \(q\) and shows via model selection that \(q=2\) is most frequently the optimal model to describe the data. Moreover, Markov chain Monte Carlo ensembles of the accompanying potential functions show a clear and pronounced potential well, indicating the existence of a stable price. Langevin Equation; Stochastic Differential Equation; Finance, Econophysics; Data-Driven Inference ## I Introduction Research on financial data with methods from physics, summarised as _Econophysics_, has led to a better understanding of statistical properties such as financial correlation matrices [1; 2; 3], scaling behaviours of empirical distributions [4], microscopic trader models [5; 6] and other phenomena [7; 8]. Differential equations such as the Brownian Motion and the Geometric Brownian Motion (GBM) in the Black-Scholes-Merton model have been an important tool to analyse financial data [9; 10; 11]. Econophysics contributed to these efforts via ordinary (ODE), stochastic (SDE) and partial differential equations (PDE) [12; 13; 14; 15; 16], and a recent empirical study modelled price time series with a harmonic oscillator ODE to reconcile the randomness of financial markets with the idea of a fair price [17]. The GBM, still widely used as a standard model for price time series, presents researchers with a subtle difficulty with regard to its interpretation: Its deterministic part implies either unlimited exponential growth or an exponential decline to a price of 0, as pointed out in [18; 19; 20]. While traditional finance models have tried to improve the GBM by changing its stochastic component, the deterministic part has largely been left unchanged (cf. the discussion in section 1 of [20]). While [20] used a constrained model with regularisation via strong prior information to fit parameters to their model, the goal of the present article is to estimate model parameters and to select the best model without any of these restrictions, i.e. letting the data speak for itself. The estimation of Langevin equations from data via the Kramers-Moyal coefficients [21; 22] sparked a family of methods to estimate nonparametric drift and diffusion coefficients to model the observed system as a stochastic differential equation, which have also been applied to financial data [23]. A particularly interesting expansion of this method is given by the maximum-likelihood framework (ML) in [24]: for each time step \(t_{i}\) and observed data \(x_{i}\), the transition likelihood \(L_{i}=p(x_{i+1}|x_{i})\) from \(x_{i}\) to \(x_{i+1}\) is calculated and the joint log-likelihood \(\mathcal{L}=\sum_{i}\log L_{i}\) is maximised by the estimation algorithm. This approach takes the inherent stochasticity of stochastic differential equations into account and can be performed with a parametric model to recover algebraic equations and increase interpretability. Similarly, the SINDy algorithm recovers a sparse functional form of the underlying algebraic equations by fitting the data to a candidate function library, but struggles with noisy and stochastic data [25; 26]. 
This article uses a combination of the ML framework of [24] with the candidate function library in [25] for a robust method to estimate stochastic differential equations from data similar to [27]. As an extension, the presented method can handle time series with non-constant time increments \(dt_{i}\neq dt_{j}\). We use stock market prices at daily and 30-minute intervals as described in section II to estimate their stochastic differential equations. In particular, we estimate the potential in which the dynamics take place to evaluate the stability of the dynamical process, with the overall goal of distinguishing between periods with a stable fixed point and unstable dynamics as explained in section III. The results for the different polynomial orders of the model and their implication for the stability are shown in section IV and discussed with respect to possible applications for risk assessment in section V. ## II Data We analyse stock market data from the companies listed in table 1 to cover a range of different business sectors. Our analysis covers two distinct market conditions: (i) a calm period from early 2019 through early 2020 which was characterised by low overall volatility and (ii) the Covid Selloff beginning March 2020 which was accompanied by a spike in market volatility. We analyse two sampling intervals: daily, end-of-day price changes (for which our data availability covers the whole of 2019 and 2020) and 30-minute intervals (for which our data is limited to the period between January 2019 up to and including July 2020). Note that we are directly analysing the price time series \(P_{t}\) instead of the returns \(r=\log(P_{t+1}/P_{t})\). Although analysis of the price data is also an important contribution to research [28], the returns are often chosen as an observable because of their stationary distribution which allows the application of several time series analysis methods. However, our focus is explicitly on the non-stationary behaviour of stock prices: We estimate the potential of the differential equation's dynamics for different time intervals to differentiate between those dynamics with and without a stable fixed point (cf. section III). Similarly, the work in [18; 19; 20] also uses prices to determine the position of the fixed points (or, equivalently, the wells of the potential): Although a return of 0 also indicates a fixed point, it is not clear whether the price associated with it is the same as in the previous time window under observation. In particular, the research in [18; 19; 20] stresses the important difference between fixed points at a nonzero price \(P>0\) (normal behaviour of a stock) and at a price of \(P=0\) (crash of the stock). Both phenomena correspond to a return of \(r=0\), but describe vastly different situations of the stock. **TAQ Database:** We use intraday data from the TAQ (Trade and Quote) database. To account for microstructure-related issues, such as the bid-ask bounce or infrequent trading, we rely upon quoted prices that we re-sample to a 30-minute frequency.1 For that, we first remove all crossed quotes, i.e., all quotes where the bid price exceeds the ask, require the bid-ask spread to be below 5$, and finally use the last valid available quote within every 30-minute interval.2 We further account for dividend payments and stock splits, which mechanically influence stock prices, and create a performance price index using quoted mid-prices. 
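A schematic pandas sketch of the quote filtering and resampling just described follows; the DataFrame layout (a DatetimeIndex with `bid` and `ask` columns) is an assumption for the illustration, not the actual TAQ schema.

```python
import pandas as pd

def resample_quotes(quotes: pd.DataFrame) -> pd.Series:
    """Filter NBBO quotes and build a 30-minute mid-price series."""
    quotes = quotes[quotes["bid"] <= quotes["ask"]]          # drop crossed quotes
    quotes = quotes[(quotes["ask"] - quotes["bid"]) < 5.0]   # spread below 5$
    mid = (quotes["bid"] + quotes["ask"]) / 2.0              # quoted mid-price
    # last valid quote in every 30-minute interval, forward-filled if empty
    return mid.resample("30min").last().ffill()
```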
Footnote 1: When we talk about quotes, we refer to the National Best Bid and Offer (NBBO) where the national best bid (offer) is the _best_ available quoted bid (offer) price across all U.S. exchanges. See [29] for an overview. Footnote 2: We forward fill quotes if there is no valid entry for a given time interval. However, this almost never happens for very liquid stocks such as those chosen in this paper. **CRSP:** We also consider lower-frequency, daily, data from the Center of Research in Security Prices (CRSP), which is one of the most used databases in economics and finance. We again calculate a performance price index for each stock using the daily holding period return provided by CRSP. Note that, while we use quoted mid-prices for the 30-minute high-frequency data, CRSP uses trade prices to calculate the holding period return. However, as the trading volume has increased considerably over the last decade, this should not be an issue [30]. ## III Theoretical background and model The standard stochastic differential equation to describe a stock price \(P\) is the Geometric Brownian Motion given by \[\frac{\mathrm{d}P}{\mathrm{d}t}=\mu P+\sigma P\epsilon \tag{1}\] with standard Gaussian noise \(\epsilon=\epsilon(t)\stackrel{{ iid}}{{\sim}}\mathcal{N}(0,1)\), constant drift \(\mu\) (typically \(\mu>0\)) and volatility \(\sigma\). As pointed out in [20], the physical interpretation as a particle's trajectory \(P\) in a potential \(V(P)\) transforms equation (1) to \[\frac{\mathrm{d}P}{\mathrm{d}t}=-\frac{\mathrm{d}V}{\mathrm{d}P}(P)+\sigma P \epsilon_{t}\quad\text{with}\quad V(P)=-\frac{\mu}{2}P^{2}+C \tag{2}\] for an arbitrary constant \(C\) (set to zero for simplicity). However, analysing this potential \(V\) in terms of its linear stability (cf. e.g. [31]) leads to the problematic result that the only fixed point of the dynamics with \(\frac{\mathrm{d}V}{\mathrm{d}P}(P_{0})=0\), namely \(P_{0}=0\), is an unstable fixed point for \(\mu>0\). Without a stable fixed point, trajectories are expected to diverge away from \(P_{0}=0\) towards infinity. As this is - at least for limited time scales - a highly unrealistic model, the authors of [18; 20] have suggested higher order polynomials in the potential \(V\) of (2). From the assumption that the rate of capital injection by investors should depend on the current market capitalisation, they derive a quartic potential \[V(P) = -P\left(\frac{\alpha_{1}}{2}P+\frac{\alpha_{2}}{3}P^{2}+\frac{ \alpha_{3}}{4}P^{3}\right)\text{ with}\] \[-\frac{\mathrm{d}V}{\mathrm{d}P}(P) = \alpha_{1}P+\alpha_{2}P^{2}+\alpha_{3}P^{3}, \tag{3}\] \begin{table} \begin{tabular}{c|c c} Company & Business Sector & Ticker \\ \hline Apple & Technology & AAPL \\ Citigroup & Banking & C \\ Walt Disney Co. & Media & DIS \\ Evergy Inc. & Energy & EVRG \\ General Electric & Industry & GE \\ Pfizer & Pharmaceuticals & PFE \\ Walmart Inc. & Retail & WMT \\ \end{tabular} \end{table} Table 1: The companies whose data has been analysed in our article. i.e. the drift term \(-\frac{\mathrm{d}V}{\mathrm{d}P}(P)\) is a polynomial of order \(q=3\). With a suitable choice of parameters \(\alpha\), this potential can adopt the shape of a double-well potential with stable fixed points at \(P_{0}=0\) and \(P_{1}>0\), thereby predicting both the presence of a bankruptcy state at the stable fixed point \(P_{0}=0\) and an additional stable state with nonzero price \(P_{1}>0\). In [20], however, major constraints on the parameters during the estimation process were necessary to achieve this. 
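To illustrate the fixed-point structure of the drift in equation (3), its roots can be computed numerically; the parameter values below are arbitrary examples chosen to produce a double-well shape, not estimated values.

```python
import numpy as np

# Fixed points of dP/dt = alpha1*P + alpha2*P^2 + alpha3*P^3 are the roots
# of the drift polynomial; the values below are illustrative only.
alpha1, alpha2, alpha3 = -1.0, 2.5, -1.0

# np.roots expects coefficients from the highest order downwards
fixed_points = np.roots([alpha3, alpha2, alpha1, 0.0])
print(np.sort(fixed_points.real))  # P = 0, 0.5 and 2 for these values

def V(P):
    """Quartic potential of equation (3)."""
    return -(alpha1 / 2 * P**2 + alpha2 / 3 * P**3 + alpha3 / 4 * P**4)

# For these values V has minima (stable prices) at P = 0 and P = 2
# and a maximum (unstable fixed point) at P = 0.5.
```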
### Numerical Implementation If a time series \((s_{n}(t_{n}))_{n}\) with observations \(s_{n}\) at time \(t_{n}\) has been recorded, a maximum likelihood approach can be used to estimate the most likely parameters \(\sigma\) and \(\alpha_{i}\) such that a stochastic differential equation according to equations (2) and (3) may have produced the observed time series. For any two adjacent points \(s_{n}(t_{n})\) and \(s_{n+1}(t_{n+1})\) and any given parameters \(\phi=(\sigma^{2},\alpha_{i})\), the likelihood \(L\) of observing the transition from \(s_{n}(t_{n})\) to \(s_{n+1}\) at \(t_{n+1}\) can be explicitly calculated as \[L(s_{n+1}|t_{n+1},s_{n},t_{n},\phi)=\left(2\pi\left(\sigma s_{n} \sqrt{t_{n+1}-t_{n}}\right)^{2}\right)^{-\frac{1}{2}} \tag{4}\] \[\cdot\exp\left(-\frac{\left(s_{n+1}-\left(s_{n}+\left(-\frac{ \mathrm{d}V}{\mathrm{d}P}(s_{n})\right)(t_{n+1}-t_{n})\right)\right)^{2}}{2(\sigma s _{n}\sqrt{t_{n+1}-t_{n}})^{2}}\right)\] or as the log likelihood \[\mathcal{L}(s_{n+1}|t_{n+1},s_{n},t_{n},\phi)=\log L(s_{n+1}|t_{n+1 },s_{n},t_{n},\phi) \tag{5}\] \[=-\frac{1}{2}\log\left(2\pi\left(\sigma s_{n}\sqrt{t_{n+1}-t_{n} }\right)^{2}\right)\] \[-\frac{\left(s_{n+1}-\left(s_{n}+\left(-\frac{\mathrm{d}V}{ \mathrm{d}P}(s_{n})\right)(t_{n+1}-t_{n})\right)\right)^{2}}{2(\sigma s_{n}\sqrt{ t_{n+1}-t_{n}})^{2}}.\] Because we assume Markovian dynamics, the complete log likelihood for the full observed time series is then simply the sum over the stepwise log likelihoods \[\mathcal{L}\left((s_{n})_{n}|(t_{n})_{n},\phi\right)=\sum_{i=0}^{n-1}\mathcal{ L}(s_{i+1}|t_{i+1},s_{i},t_{i},\phi). \tag{6}\] For given observations \((s_{i},t_{i})\), the log-likelihood \(\mathcal{L}\) can be maximised by varying the parameters \(\phi\) to estimate the optimal parameters \(\phi^{*}\). According to Bayes' Theorem and Bayesian Statistics [32], the likelihood of observing the measured data conditional on some parameter values \(L\left((s_{n}(t_{n}))_{n}|\phi\right)\) is combined with an a-priori distribution \(f_{prior}(\phi)\) to calculate a posterior distribution of the parameters given the observed data: \[f_{post}(\phi|(s_{n}(t_{n}))_{n})\propto f_{prior}(\phi)L\left((s_{n}(t_{n}))_{n} |\phi\right). \tag{7}\] For an uninformed flat prior, this transformation is mathematically trivial, but allows us to calculate \(f_{post}\) as a probability density of the parameters \(\phi\) conditional on the observed data. Hence, the distribution of the parameters \(\phi\) can be explored via Markov chain Monte Carlo (MCMC) methods (e.g. [33]) by drawing samples \((\phi^{(j)})_{j}\) from the posterior distribution as implemented in the Python package _emcee_ [34]. MCMC can uncover correlations between different parameters and also explore local maxima of the probability density. It therefore gives a more complete view of the underlying distribution than summary statistics like e.g. the mean or standard deviation. In particular, we will use the sampled parameters \((\phi^{(j)})_{j}\) to construct an ensemble of potentials \(V(P)\) and evaluate whether their shapes are roughly consistent with each other. ### Synthetic Data To test our method, synthetic time series \((s_{n})_{n}\) are simulated via the Euler-Maruyama scheme [35] as \[s_{n+1}=s_{n}+\left(-\frac{\mathrm{d}V}{\mathrm{d}P}(s_{n})\right)\left(t_{n+1}-t _{n}\right)+\sigma s_{n}\sqrt{t_{n+1}-t_{n}}\,\epsilon_{n} \tag{8}\] with \(\epsilon_{n}\overset{iid}{\sim}\mathcal{N}(0,1)\) for any parameters \(\alpha_{i}\) for the potential in (3). 
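A minimal sketch of the simulation scheme (8) and of the log-likelihood (5)-(6) for a polynomial drift of order \(q\) might look as follows; function and variable names are illustrative, not the actual implementation.

```python
import numpy as np

def drift(s, alpha):
    # -dV/dP = sum_{i=1}^{q} alpha_i * P^i, cf. equations (3) and (9)
    return sum(a * s ** (i + 1) for i, a in enumerate(alpha))

def simulate(s0, t, alpha, sigma, seed=0):
    """Euler-Maruyama scheme (8) on a (possibly non-equidistant) time grid t."""
    rng = np.random.default_rng(seed)
    s = np.empty(len(t))
    s[0] = s0
    for n in range(len(t) - 1):
        dt = t[n + 1] - t[n]
        s[n + 1] = (s[n] + drift(s[n], alpha) * dt
                    + sigma * s[n] * np.sqrt(dt) * rng.standard_normal())
    return s

def log_likelihood(s, t, alpha, sigma):
    """Sum of the stepwise Gaussian transition log-likelihoods (5)-(6)."""
    dt = np.diff(t)
    mean = s[:-1] + drift(s[:-1], alpha) * dt
    std = sigma * s[:-1] * np.sqrt(dt)
    resid = (s[1:] - mean) / std
    return np.sum(-0.5 * np.log(2.0 * np.pi * std**2) - 0.5 * resid**2)
```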
The following paragraphs discuss how well our model can then identify the underlying dynamics and parameters from the observed data \((s_{n},t_{n})\). Note that for the synthetic data, non-equidistant time steps have been used. #### iii.2.1 Estimating the Correct Order Generalising the potential from (3) to a potential with arbitrary polynomial order \(q\) leads to \[V(P)=-P\sum_{i=1}^{q}\frac{\alpha_{i}}{i+1}P^{i}. \tag{9}\] For given \(q\), random parameter values \(\alpha_{i}\) and a random noise level \(\sigma\) are sampled and the resulting time series is simulated with these parameters. If the sampled parameters result in numerical errors (i.e. a diverging time series), the time series is discarded from the ensemble. This is done repeatedly until the ensemble includes 100 time series with a length of 1000 time steps each. For those synthetic time series, the best order is estimated via the Akaike information criterion \(AIC\) [36] given by \[AIC=-2\mathcal{L}_{\mathrm{max}}+2(q+1) \tag{10}\] where \(q+1\) is the total number of model parameters (\(q\) monomials' prefactors \(\alpha_{i}\) and \(\sigma\)). Hence, \(q\) is varied, the maximum likelihood \(\mathcal{L}_{\mathrm{max}}\) for the chosen \(q\) is estimated and the resulting \(AIC\) is calculated. The model with the lowest \(AIC\) is chosen as the best model. The results in figure 1 show that for polynomial orders \(q=1\) (Geometric Brownian Motion), \(q=2\) and \(q=3\) (Halperin's suggestion as in equation (3)), the correct order is usually identified as such. An even higher order \(q=4\) shows very unreliable results, but will be included for completeness in the further data analysis. #### iii.2.2 Estimating the Parameters Instead of sampling repeated trajectories with different parameters, now for order \(q=3\), the parameters \(\phi=(\sigma^{2},\alpha_{1},\alpha_{2},\alpha_{3})\) are kept constant as \(\phi=(0.05,2,-1,0.01)\). 100 time series with 1000 time steps are sampled and their parameters are estimated by fitting a model with \(q=3\). Despite the constant parameters, the randomness of the \(\epsilon_{n}\) in equation (8) nevertheless ensures that the time series are different from each other. The histograms of the estimated parameters, their means and standard deviations are shown together with the true parameter values in figure 2. Note that the true parameter value is always within the one-standard-deviation interval around the mean and that the parameter \(\alpha_{3}\) has a distribution virtually indistinguishable from that of a parameter with mean zero: The parameter estimation correctly shows that \(\alpha_{3}\) is so low that it is a superfluous parameter for model inference. Note that if the same data is estimated by a model with \(q=2\), the results are fairly consistent with the depicted histograms. However, fitting the data to a model with \(q=4\) results in the one-standard-deviation interval of \(\alpha_{2}\) also containing the value 0, which is a consequence of overfitting the model. ## IV Results While [20] analyses time periods of one year to estimate parameters, we believe that because of the assumption of constant volatility in equation (4), it is prudent to restrict the data to shorter intervals of one trading month. Hence, we divide the given data into non-overlapping monthly intervals and estimate the polynomial order \(q\) of the underlying stochastic differential equation via the \(AIC\). 
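The order selection can be sketched by maximising the log-likelihood for each candidate order and comparing the resulting AIC values of (10); the optimiser and its starting values are illustrative choices, and the sketch reuses `log_likelihood` from above.

```python
import numpy as np
from scipy.optimize import minimize

def fit_order(s, t, q):
    """Maximise the log-likelihood over phi = (sigma, alpha_1, ..., alpha_q)."""
    def neg_ll(phi):
        sigma, alpha = phi[0], phi[1:]
        if sigma <= 0.0:
            return np.inf                  # sigma must stay positive
        return -log_likelihood(s, t, alpha, sigma)
    x0 = np.concatenate(([0.1], np.zeros(q)))   # illustrative start values
    res = minimize(neg_ll, x0, method="Nelder-Mead")
    return -res.fun, res.x

def select_order(s, t, q_max=4):
    """Return the order q with the smallest AIC, equation (10)."""
    aic = {}
    for q in range(1, q_max + 1):
        ll_max, _ = fit_order(s, t, q)
        aic[q] = -2.0 * ll_max + 2.0 * (q + 1)  # q alphas plus sigma
    return min(aic, key=aic.get)
```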
Note that the time difference between each observation is taken as a constant interval of 1 time step in trading days or 30-minute-steps, respectively, including the overnight return. ### Polynomial Orders The distributions of the estimated polynomial orders \(q\) are shown in figure 3: On both time scales and in all market periods, the order \(q=2\) is the most frequently estimated order with the GBM model at \(q=1\) being the second most frequent estimation. The suggestion \(q=3\) from [20] as well as the even higher-order \(q=4\) are only rarely estimated as the most accurate model. Interestingly, calm and turbulent periods (as defined in section II) show essentially identical distributions, whereas the order \(q=4\) seems to be a bit more frequently estimated for the shorter time scale of 30 minutes than for the daily data. Table 2 shows that there is high consistency between the estimated orders for both time intervals for orders \(q=1\) and \(q=2\), but increasing disagreements for orders \(q=3\) and \(q=4\). Overall, this suggests that a polynomial order of \(q=1\) and \(q=2\) can be a reasonable modelling assumption for the time series data and that the identification of these two orders is consistent for the two sampling intervals under consideration, whereas the choice of calm or turbulent periods does not seem to influence our results.

| Optimal Order (30 Min) \ Opt. Order (Daily) | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| 1 | 24 | 6 | 3 | 1 |
| 2 | 6 | 41 | 10 | 3 |
| 3 | 1 | 10 | 8 | 1 |
| 4 | 7 | 7 | 3 | 2 |

Table 2: Comparison of the estimated orders \(q\) for the same company and month with the price time series in daily and 30-minute-intervals.

Figure 1: For each true polynomial order \(q\), 100 trajectories are randomly sampled and then their best order is estimated. The histograms show that the method successfully estimates trajectories with order \(q=1,2,3\), but struggles with the higher order \(q=4\).

### Potentials For the optimal polynomial order \(q\), we then sampled the parameters \(\phi=(\sigma^{2},\alpha_{1},\ldots,\alpha_{q})\) from their posterior probability distribution to get an ensemble of parameters \((\phi^{(j)})_{j}\). From that, we calculate the corresponding ensemble of potentials \(V(P)\) according to (9) and plot them, their pointwise centred \(68\%\) and \(95\%\) credible intervals (CIs) and the potential corresponding to the maximum likelihood estimation. The zero horizontal is shown in these plots as the y-axis position of the potential at \(P=0\) to indicate where the potential is above or below the potential energy at the zero price (and if the price "particle" would therefore prefer or not prefer to be at the potential level of \(P=0\)). A couple of generic features can be observed for these potentials and do not depend on the chosen sampling rate: #### iv.2.1 Order 2 As the order \(q=2\) is the most frequently identified polynomial order according to the results in figure 3, it is quite insightful to focus on the associated potentials. They virtually always look like the potential depicted in figure 4 and show a potential well as a pronounced minimum. Close to this minimum, the \(68\%\) CI is usually also below \(0\) and sometimes (as depicted in figure 4) even the \(95\%\) CI. The MCMC-sampled potential ensembles thus support the existence of a potential well as they clearly show the potential well for a large majority of trajectories.
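A sketch of how such potential ensembles and pointwise credible intervals can be computed from posterior draws \((\phi^{(j)})_{j}\) (e.g. as returned by _emcee_); the array layout and percentile conventions are our own illustrative choices.

```python
import numpy as np

def potential(P, alpha):
    # V(P) = -P * sum_i alpha_i/(i+1) * P**i, equation (9)
    return -P * sum(a / (i + 1) * P**i for i, a in enumerate(alpha, start=1))

def potential_bands(samples, P):
    # samples: rows phi^(j) = (sigma^2, alpha_1, ..., alpha_q) from the MCMC chain
    V = np.array([potential(P, phi[1:]) for phi in samples])
    lo68, hi68 = np.percentile(V, [16, 84], axis=0)
    lo95, hi95 = np.percentile(V, [2.5, 97.5], axis=0)
    return (lo68, hi68), (lo95, hi95)
```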
Following the interpretation of the potential wells from [20], this supports the existence of a locally stable price within this potential minimum.

Figure 2: Parameter estimations for 100 trajectories with the same true parameters \(\phi=(\sigma^{2},\alpha_{i})\) as given by the solid lines. The interval of Mean \(\pm\) Standard Deviation of the estimated ensembles always includes the true parameters.

#### iv.2.2 Order 1: GBM The GBM with \(q=1\) is the second most frequently estimated order. As shown in the subplots \(a\) and \(b\) in figure 5, the MCMC samples show two general types of ensembles: in \(a\), the maximum likelihood estimation of the potential is always very close to 0 and the 68% CIs therefore envelope the zero horizontal. This makes it difficult to gauge a clear direction of the potential and hence of the movement of the price time series. Contrary to that, potential ensembles like in \(b\) have a clear direction. In \(b\), the potential is increasing for higher prices (and hence a restoring force pulls the price to the minimum at 0), but decreasing potential ensembles can also be found for other time intervals. That means some time intervals like \(b\) show a predominant direction of price movement, whereas others like \(a\) have no predominant direction, but rather a random movement. #### iv.2.3 Order 3 The potential with \(q=3\) is the one suggested in [20] and the typical shape of their MCMC samples is shown in subplot \(c\) of figure 5. Note that some of these potentials are also mirrored along the x-axis. Similar to the potentials in \(b\), they also show a predominant direction, but also often a bistable saddle point. Notably, they do not show the pronounced double-well potential predicted in [20]. #### iv.2.4 Order 4 Potentials with \(q=4\) usually correspond to very wide potential wells as can be seen in subplot \(d\) of figure 5 (compare e.g. its full width at half maximum to that of the potential in figure 4). In the depicted MCMC ensemble, the maximum likelihood estimation does not lie within the 68% credible interval. This indicates a multimodal posterior distribution and was found in surprisingly many time intervals. Similar to the ensemble shown in subfigure \(a\), these potentials also often envelope the x-axis with their 68% CIs and therefore show no clearly predominant direction.

Figure 4: MCMC-sampled potentials of order \(q=2\) for 30-minute-intervals for Pfizer in September 2019. The maximum likelihood estimation (Estimation) lies firmly within the ensemble of potentials and even the 68% credible interval shows a clear potential well.

Figure 3: Estimated polynomial orders \(q\) of the monthly real-world price time series of the companies in table 1 for the daily (left) and 30-minute-intervals (right). Differences between the calm (up to February 2020) and turbulent period (starting in March 2020) are only small.

## V Conclusion ### Summary We use a maximum likelihood estimation to analyse price time series of stocks. Via the \(AIC\) model selection, we find that a second order polynomial for the drift term often offers a very suitable description of the data. While the standard GBM model with a first order polynomial is not selected as frequently as the second order model, it still appears often enough to be considered a valid candidate model. Higher order polynomials are rarely estimated. Sampling the posterior density of the parameters via MCMC reveals that the second order polynomials' potentials show very pronounced potential wells (i.e.
stable minima) for nonzero prices which is mathematically impossible for the GBM's potential as pointed out in [20]. ### Discussion Our research question is heavily inspired by [20], but differs from it in one key factor: The model presented in [20] always has a drift polynomial of order \(q=3\) and uses the credit default swap rates (CDS) to estimate the probability of a considered company going bankrupt. This probability is then used as a constraint in the parameter estimation such that the jump from a potential well with nonzero price to another potential well at price zero (i.e. the stock collapsing) has a jump probability (Kramers's escape rate) which is quantified by the CDS. Thus, the work in [20] combines the price dynamics of stochastic differential equations with the CDS data as additional constraints to estimate a stochastic differential equation with a probability of the stock crashing. In contrast to this, our estimation scheme uses no additional constraints or external data, but purely the price time series. Our \(q=3\) estimations (subfigure \(c\) in figure 5) do not show the double-well potential postulated by [20]. However, our model selection via the \(AIC\) indicates that \(q=2\) is instead the most frequently observed polynomial degree and for \(q=2\), MCMC shows a clear potential well that is consistent for the whole MCMC ensemble. In short, because we do not use additional constraints, we cannot reproduce the double well potential with default probabilities, but instead show via our fully unconstrained approach that the potential wells arise naturally just from the price time series alone. Estimating potentials for stochastic financial dynamics and analysing the stability of their fixed points has also been done in [37; 38], but two key differences exist between them and our approach: While we estimate explicitly analytical potentials for the time series of individual stocks, the work in [37; 38] estimates potentials purely numerically without a closed-form analytical expression and does so for the collective market movement instead of treating individual assets.

Figure 5: MCMC-sampled potentials for 30-minute-intervals for Pfizer in time intervals with \(q=1\) (\(a\) and \(b\)), \(q=3\) (\(c\)) and \(q=4\) (\(d\)). Note that the maximum likelihood estimation lies within the 68% credible interval for all orders \(q\) except for \(q=4\).

Interestingly, [37; 38] observe transitions between the different minima of the potentials and therefore a non-stationary market behaviour, similar to our study, because the \(AIC\) selection means that the stocks are not described by the same polynomial degree \(q\) for all time series. Instead, our model selection implies that the potential itself is time-dependent. A possible explanation for this is that due to external effects, an order parameter such as the capital influx into the financial markets is changed. Then, the underlying potential might change due to these effects and e.g. experience a bifurcation, resulting in changing price dynamics. Whether a bifurcation is a suitable description of the dynamics under such conditions requires further analysis on the transitions between the different models. One can imagine e.g. that a price initially starts in a stable fixed point with \(q=2\) like in figure 4, but external news change the potential to that of subfigure \(b\) in figure 5 with \(q=1\).
Now, the price has a predominant direction of movement and is no longer experiencing a restoring force back to the price at the previous fixed point and can therefore explore new areas of the phase space (figure 6 illustrates the transition between different regimes). The market eventually manages to process the news and their implications and finally, the price reaches a new fixed point with \(q=2\). Thus, the price at the potential minimum can be interpreted as a fair price similar to the discussion in [17]. However, further research into the transition between the different potentials is necessary to verify this interpretation. Note that GBMs with potentials such as subfigure \(a\) in figure 5 have essentially no predominant direction of movement and show a random walk without a restoring force. This is a different behaviour from \(q=2\) which also does not show a predominant direction, but instead has such a restoring force that restricts the price to the potential well. One might have assumed that stable fixed points (\(q=2\)) should occur significantly less frequently during the turbulent period because of the overall instability of the market. But interestingly, our results do not seem to show a difference between the calm and turbulent market period (cf. figure 3), perhaps indicating that the market can quickly adjust to such turbulent behaviour. Finally, it is a reassuring result that the \(AIC\) selection still frequently suggests \(q=1\) (the standard GBM model) as the best polynomial order. The standard GBM model still appears rather frequently in our data and therefore nevertheless manages to provide a reasonably accurate model. ### Further Research and Applications As discussed in the previous subsection, our method can be used to distinguish between different regimes (stable fixed point or growth/decay) of the dynamics of a stock time series. One could use our methodology to continuously model a given time series, update it with new data and pay attention to when the potential is changing such that the system is transitioning from a stable (resilient) state to an unstable one or vice versa. This point of view can be used to judge the system's resilience against noise and anticipate critical transitions to a qualitatively new system behaviour [39; 27]. In the non-stationary system of a free market, such monitoring might support risk management decisions. While this article focused on the drift term like in [20], there are of course possible extensions of the diffusion/volatility that can be taken into account, too. Stochastic volatility and local volatility models have been widely accepted in finance [40; 41], but other modelling possibilities exist, too: While our article used the volatility parametrisation from the GBM in (1) via \(\sigma P\epsilon_{t}\), one can also imagine e.g. a polynomial model here given by \[\text{Diffusion}(P)=\sigma\left(\sum_{j}\beta_{j}P^{j}\right)\epsilon_{t}. \tag{11}\] However, from the authors' experience, the maximum likelihood estimation can become troublesome if the diffusion term has several free parameters as the estimator can then attempt to essentially attribute the whole observed dynamics to the diffusion. A strong regularisation might be necessary if one wishes to expand the diffusion model.
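For illustration, extending the log likelihood sketched earlier to the polynomial diffusion of equation (11) only changes the variance term; indexing \(\beta_{j}\) from \(j=0\) is an assumption on our part.

```python
import numpy as np

def log_likelihood_poly_diffusion(s, t, alpha, sigma, beta):
    # as log_likelihood above, but with Diffusion(P) = sigma * sum_j beta_j * P**j
    dt = np.diff(t)
    mean = s[:-1] + drift(s[:-1], alpha) * dt
    diffusion = sigma * sum(b * s[:-1]**j for j, b in enumerate(beta))
    var = diffusion**2 * dt
    return np.sum(-0.5 * np.log(2 * np.pi * var)
                  - (s[1:] - mean)**2 / (2 * var))
```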
A multi-stage estimation procedure might provide another alternative: First estimate the GBM model with drift parameters \(\phi_{D,1}\), then keep these parameters fixed to estimate the parameters \(\phi_{V,1}\) of a more complicated volatility model (e.g. Heston's stochastic volatility). Then keep the parameters \(\phi_{V,1}\) of the volatility model fixed and vary the drift parameters according to the scheme presented in our article in order to find the optimal order \(q\) and its associated parameters \(\phi_{D,2}\). For fixed order \(q\), iteratively use fixed \(\phi_{D,n}\) to estimate \(\phi_{V,n+1}\) and fixed \(\phi_{V,n+1}\) to estimate \(\phi_{D,n+1}\) until the parameter values converge. Developing and fine-tuning this procedure, however, is beyond the scope of the present work whose main aim was to investigate the existence of stable fixed points in the drift potential.

Figure 6: Evaluation of the different regimes for the dynamics of the Pfizer stock for 30-minute-intervals. The rare orders \(q=3\) and \(q=4\) have been summarised under the label "Noise", the label "Stable FP" indicates order \(q=2\) with a potential well and the GBM of order \(q=1\) is further split up into periods of growth (\(\uparrow\)), random stagnation (–) and decline (\(\downarrow\)): according to their MCMC-sampled potentials (cf. subfigures a and b in figure 5), a growth or decline is only assumed if the 68% CIs do not include the 0 horizontal.

Another model extension might be the incorporation of memory effects. Generalised versions of the Langevin equation include non-Markovian memory terms by e.g. an explicit memory kernel [42] or by assuming the existence of a second hidden process that has not been observed [43]. Such a hidden component might correspond to the traders' knowledge or belief which certainly influences the stock prices, but is not explicitly recorded. Although we believe that there is some virtue in having a simple model as evidenced by the widespread use of the GBM, a more complex analytical model than a polynomial approach can of course be used in the maximum likelihood framework to expand our rather simple model. Combining all those extensions and using a strict regularisation procedure to discard superfluous terms might ultimately help to develop a model that not only differentiates between the different regimes of stability (as shown in the present article), but also reproduces the well-known stylised facts from the empirical literature. Finally, it is worth pointing out that although we used equidistant time intervals between the observations of the data, the model has been tested on synthetic data with non-equidistant time intervals in section III.2. Such a situation arises naturally in the context of tick-by-tick data which is the highest resolution of trading data. Here, instead of sampling the price at a high frequency, every single trade is recorded at the exact time that it occurred. As the time between two subsequent trades can be arbitrarily short or long, the application of a robust method without the need for equidistant time steps might prove useful here. ###### Acknowledgements. The authors thank Tim Kroll (WWU Münster) for valuable discussion about the propagator and its technical implementation and the anonymous reviewers for their valuable advice. Tobias Wand is financed by the Studienstiftung des deutschen Volkes.
2302.14781
Time Series Anomaly Detection in Smart Homes: A Deep Learning Approach
Fixing energy leakage caused by different anomalies can result in significant energy savings and extended appliance life. Further, it assists grid operators in scheduling their resources to meet the actual needs of end users, while helping end users reduce their energy costs. In this paper, we analyze the patterns pertaining to the power consumption of dishwashers used in two houses of the REFIT dataset. Then two autoencoders (AEs) with 1D-CNN and TCN as backbones are trained to differentiate the normal patterns from the abnormal ones. Our results indicate that TCN outperforms CNN1D in detecting anomalies in energy consumption. Finally, the data from the Fridge_Freezer and the Freezer of house No. 3 in REFIT is also used to evaluate our approach.
Somayeh Zamani, Hamed Talebi, Gunnar Stevens
2023-02-28T17:26:27Z
http://arxiv.org/abs/2302.14781v1
# Time Series Anomaly Detection in Smart Homes: A Deep Learning Approach ###### Abstract. Fixing energy leakage caused by different anomalies can result in significant energy savings and extended appliance life. Further, it assists grid operators in scheduling their resources to meet the actual needs of end users, while helping end users reduce their energy costs. In this paper, we analyze the patterns pertaining to the power consumption of dishwashers used in two houses of the REFIT dataset. Then two autoencoders (AEs) with 1D-CNN and TCN as backbones are trained to differentiate the normal patterns from the abnormal ones. Our results indicate that TCN outperforms CNN1D in detecting anomalies in energy consumption. Finally, the data from the Fridge_Freezer and the Freezer of house No. 3 in REFIT is also used to evaluate our approach. Time series, Anomaly detection, Deep learning, Autoencoder, Temporal convolutional networks, Smart homes, Sustainability a temperature report of -30 degrees Celsius can be anomalous; however, during a cold season, such a report may be more common (Bowden et al., 2017). To this end, understanding the available data will provide a solid foundation for improving energy efficiency. For this purpose, there are thirty-one publicly available databases with several features, such as the geographical location, period of collection, number of monitored households, the sampling rate of collected data, and number of sub-metered appliances (Han et al., 2017). Regarding this, a valuable dataset is REFIT which includes cleaned electrical consumption in Watts for 20 households in the UK at both the aggregate and appliance levels (Krishnan et al., 2017). On the other hand, Pang (Pang, 2018) has provided a comprehensive overview of current anomaly detection methods to gain an important understanding of their inherent capabilities and limitations in addressing some largely unsolved challenges in anomaly detection. According to his study, Autoencoders, which are a subset of the generic normality feature learning category, aim to learn some low-dimensional feature representation space on which the given data instances can be well reconstructed. While this is a widely used method for data compression or dimension reduction, by using this method, the feature representations are enforced to learn important regularities of the data so that reconstruction errors are minimized. Consequently, anomalies are difficult to reconstruct from the resulting representations and are, therefore, subject to large reconstruction errors (Krishnan et al., 2017). ## 3. Methodology ### Dataset and preprocessing The REFIT Electrical Load Measurements dataset contains cleaned electrical consumption data in Watts for 20 households in the UK at both the aggregate and appliance level. The data is related to a period of two years comprising nine individual appliance measurements at 8-second intervals per house with 1,194,958,790 readings (Krishnan et al., 2017). The models proposed in this paper are trained using dishwasher data from houses No. 1 and 2. Furthermore, data from the Fridge_Freezer and the Freezer of house No. 3 is used to assess the effectiveness of our approach. To begin with, it is necessary to resample the data to convert it into equal time intervals \(r\). Then, using the following formula, the average sampling time \(\overline{t}\) of the REFIT data is used to fill in a limited number \(n\) of intervals with no data; the remaining empty intervals are substituted with zero. \[n=\left\lfloor\frac{4\,\overline{t}}{r}\right\rfloor \tag{1}\]
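As a minimal illustration of this preprocessing, a pandas sketch; the DataFrame layout, column name, and the 10-second target interval \(r\) (matching the \(320\times 1\) windows over 3200 seconds used below) are assumptions on our part.

```python
import pandas as pd

def preprocess(df, column, rule="10s"):
    # df: timestamp-indexed readings of one appliance at ~8-second intervals
    raw = df[column]
    t_bar = raw.index.to_series().diff().mean()   # average sampling time
    n = int(4 * t_bar / pd.Timedelta(rule))       # equation (1)
    resampled = raw.resample(rule).mean()
    # fill at most n consecutive empty intervals, substitute zero for the rest
    return resampled.ffill(limit=n).fillna(0.0)
```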
Additionally, for devices used according to users' needs, consumption data must first be differentiated. The power consumption pattern may include turning the device on and off several times per usage. The matching data is therefore combined into relevant signals. Also, due to the possibility of failure in some devices that can result in constant operation for an extended period, we assume a maximum period for a device to operate. ### The proposed models The development of time series anomaly detection algorithms has recently received considerable attention. Autoencoder-based approaches are often used to identify anomalous behavior by analyzing the reconstruction error of the data (Krishnan et al., 2017)(Pang et al., 2018)(Pang et al., 2018). Having learned a nonlinear transformation of the input data into a compressed representation, latent variables are used to reconstruct the original input. On the other hand, utilizing the convolution mechanism in sequential models is computationally optimal (Pang et al., 2018). Also, due to CNNs' equivariance properties and sparse interactions, they are translated from computer vision into the time domain using temporal convolutional networks (TCN) (Pang et al., 2018). In the following sections, we will describe how we used autoencoders (AEs) for time series data that utilize 1-dimensional CNNs and TCNs as building blocks to detect energy anomalies in the REFIT dataset. #### 3.2.1. CNN-based autoencoder (CNN-AE) We used TensorFlow to implement the architecture consisting of two smaller sequential models, an encoder and a decoder. Also, considering the speed of the model convergence, our CNN-based autoencoder comprises three Conv1D layers using the data of the households' dishwashers. Furthermore, a nonlinear ReLU activation function is used in each convolution layer. In this model, a standard rate of 0.2 is considered for the dropout layer to randomly remove 20% of the upper layer's units during learning. Figure 1. shows the layers and the number of input and output parameters of each. The input layer is \(320\times 1\) (3200 seconds), calculated according to the maximum operation time of the device.

Figure 1. The architecture of the CNN-based autoencoder (CNN-AE)
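The text specifies a three-layer Conv1D encoder with ReLU activations, dropout rate 0.2, and a \(320\times 1\) input, but not the filter counts, kernel sizes, or strides; the following Keras sketch therefore fills those in with illustrative values and mirrors the encoder with Conv1DTranspose layers, which is one common way to build the decoder, not necessarily the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn_ae(input_len=320):
    # encoder: three Conv1D layers with ReLU, each followed by 20% dropout
    inputs = tf.keras.Input(shape=(input_len, 1))
    x = layers.Conv1D(32, 7, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Dropout(0.2)(x)
    x = layers.Conv1D(16, 7, strides=2, padding="same", activation="relu")(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Conv1D(8, 7, strides=2, padding="same", activation="relu")(x)
    # decoder: mirror the encoder to reconstruct the 320 x 1 window
    x = layers.Conv1DTranspose(16, 7, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv1DTranspose(32, 7, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv1DTranspose(1, 7, strides=2, padding="same")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# trained on normal usage windows only, e.g. model.fit(windows, windows, ...)
```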
#### 3.2.2. TCN-based autoencoder (TCN-AE) The temporal convolutional network (TCN) combines simplicity with auto-regressive prediction, residual blocks, and a very long memory. In general, a TCN can be broken down into three components: a list of dilation rates \(D=\{q_{1},q_{2},...,q_{n_{r}}\}\), the number of filters \(n_{filters}\), and the kernel size \(k\), which is the same for all filters in a TCN (Zhou et al., 2019)(Zhou et al., 2019). Inspired by a classical (deep) autoencoder, the TCN autoencoder encodes sequences, along the temporal axis, of length \(T\) into a compressed representation of length \(T/s\) (where \(s\in\mathbb{Z}^{+}\)) and then tries to reconstruct the original sequence (Zhou et al., 2019).

Figure 2. The architecture of the TCN-based autoencoder (TCN-AE)

In Figure 2, each layer of the TCN-AE is described by its parameters within the box. TCN-AE receives a sequence \(x[n]\) of length \(T\) and dimensionality \(d\) as its input. Using a TCN, the encoder first processes input sequence \(x[n]\) of length \(T\) and dimension \(d\). Afterward, a one-dimensional convolutional layer with \(q=1\), \(k=1\), and \(n_{filters}=8\) is used to reduce the dimensionality of the TCN's output. As the last layer in the encoder, the temporal average pooling layer downsamples the series by a factor of \(s\). To do so, groups of size \(s\) are averaged along the time axis. In the decoder module, the downsampled sequence is returned to its original length by nearest neighbor interpolation (upsampling). Upsampled sequences are passed through a second TCN with independent weights parameterized similarly to the encoder-TCN. As a final step, the input sequence is reconstructed with a Conv1D layer that ensures that the dimensionality of the input is matched (by setting \(k=1\) and \(n_{filters}=d\)) (Zhou et al., 2019). As described in the next section, the input sequence and its reconstruction will be used for detecting anomalies after TCN-AE has been trained. ### Experimental results #### 3.3.1. Anomaly detection We compute a threshold value of \(2\sigma\) above the predicted value to measure the trend in electricity consumption over time, where \(\sigma\) is the standard deviation on the day before the actual moment (Zhou et al., 2019). An abnormal state is defined as a value exceeding the threshold for predicted electricity consumption at the actual moment. Equations (2) and (3) show the calculation of \(\sigma\) and \(Y_{threshold}\): \[\sigma=\sqrt{\frac{\sum_{i=1}^{n}(x_{i}-\overline{x})^{2}}{n}} \tag{2}\] \[Y_{threshold}=\overline{y}+2\sigma \tag{3}\] where \(\sigma\) is the standard deviation, \(Y_{threshold}\) is the threshold, \(\overline{y}\) is the predicted value, \(x_{i}\) is the electricity consumption, \(\overline{x}\) is the average electricity consumption, and \(n\) is the number of samples. The detection of a normal electricity usage pattern for the dishwasher is shown in Figure 3 (a), where the real-time threshold curve follows the sequence trend, indicating the model depicts the dishwasher's normal electricity usage. As can be seen in the figure, the real power consumption curve does not exceed the threshold range, which indicates a normal level of electricity consumption. As shown in Figure 3 (b), anomalous consumption patterns occur when actual values exceed the threshold. Consequently, the method can distinguish between normal and abnormal consumption behavior.
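To make the thresholding rule concrete, a minimal NumPy sketch of equations (2) and (3); interpreting "the day before the actual moment" as a trailing window of fixed length is our assumption.

```python
import numpy as np

def detect_anomalies(consumption, predicted, window):
    # flag samples exceeding prediction + 2 * (std of the previous day's window),
    # following equations (2) and (3)
    flags = np.zeros(len(consumption), dtype=bool)
    for i in range(window, len(consumption)):
        sigma = np.std(consumption[i - window:i])             # equation (2)
        flags[i] = consumption[i] > predicted[i] + 2 * sigma  # equation (3)
    return flags
```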
#### 3.3.2. Evaluation As Table 1. and Table 2. show, for both architectures, the best performance is obtained with the data division ratio of 8:2, and clearly, TCN-AE is more efficient than CNN-AE. Our unsupervised approach has also been evaluated using the data from the Fridge_Freezer and the Freezer of house No. 3 in REFIT. The results in Table 3. confirm the best division ratio of 8:2 and the higher performance of TCN compared to CNN1D. ## 4. Conclusion and future work This paper presents the starting point of our work on studying how applying deep learning algorithms and explainability would improve energy efficiency, environmental sustainability, and user adoption. In this regard, first, we preprocessed our data by resampling and differentiating each usage. Next, the extracted patterns of dishwasher usage in houses No. 1 and 2 of the REFIT dataset were analyzed. Two deep learning models, CNN-AE and TCN-AE, were then trained to detect abnormalities. While the TCN backbone performed better, we evaluated our models using the data from the refrigerators of house No. 3 in REFIT as well. Through the implementation of energy monitoring systems and the formulation of intelligent anomaly detection techniques, abnormal behaviors can be mitigated. This is possible, especially if user-centric explainable recommender systems are combined with anomaly detection modules. However, there is no proper labeled dataset available to develop accurate algorithms or detect different types of anomalies. Accordingly, we plan to run a laboratory to build the first appropriately labeled energy anomaly dataset. ## 5. Acknowledgement This research has been funded by the EU Horizon 2020 Marie Sklodowska-Curie International Training Network GECKO, Grant number 955422.

Figure 3. (a) Examples of the normal energy consumption of the dishwasher (b) Examples of the abnormal energy consumption of the dishwasher
2309.16582
Perverse coherent extensions on Calabi-Yau threefolds and representations of cohomological Hall algebras
For $Y\to X$ a toric Calabi-Yau threefold resolution and $M\in \mathrm{D}^b\mathrm{Coh}(Y)^T$ satisfying some hypotheses, we define a stack $\mathfrak{M}(Y,M)$ parameterizing _perverse coherent extensions_ of $M$, iterated extensions of $M$ and the compactly supported perverse coherent sheaves of Bridgeland. We define framed variants $\mathfrak{M}^{\mathrm{f}}(Y,M)$, prove that they are equivalent to stacks of representations of framed quivers with potential $(Q^{\mathrm{f}},W^{\mathrm{f}})$, and deduce natural monad presentations for these sheaves. Moreover, following Soibelman we prove that the homology $H_\bullet(\mathfrak{M}^{\mathrm{f},\zeta}(Y,M),\varphi_{W^{\mathrm{f}}})$ of the space of $\zeta$-stable, $\mathrm{f}$-framed perverse coherent extensions of $M$, with coefficients in the sheaf $\varphi_{W^{\mathrm{f}}}$ of vanishing cycles for $W^{\mathrm{f}}$, is a representation of the Kontsevich-Soibelman cohomological Hall algebra of $Y$. For $M=\mathcal{O}_Y[1]$, $\mathfrak{M}^{\mathrm{f}}(Y,M)$ is the stack of perverse coherent systems of Nagao-Nakajima, so $\mathbb{V}_Y^\zeta=H_\bullet(\mathfrak{M}^{\mathrm{f},\zeta}(Y,M),\varphi_{W^{\mathrm{f}}})$ is the DT/PT series of $Y$ for $\zeta=\zeta_{\mathrm{DT/PT}}$ by Szendroi and _loc. cit._, and we conjecture that $\mathbb{V}_Y^{\zeta_{\mathrm{NCDT}}}$ is the vacuum module for the quiver Yangian of Li-Yamazaki. For $M=\mathcal{O}_S[1]$ with $S\subset Y$ a divisor, $\mathfrak{M}^{\mathrm{f}}(Y,M)$ provides a definition in algebraic geometry for Nekrasov's spiked instanton variant of the ADHM construction, and analogous variants of the constructions of Kronheimer-Nakajima, Nakajima-Yoshioka, and Finkelberg-Rybnikov. We conjecture that $H_\bullet(\mathfrak{M}^{\mathrm{f},\zeta}(Y,M),\varphi_{W^{\mathrm{f}}})$ is the vacuum module of the vertex algebra $\mathbb{V}(Y,S)$ defined by the authors in a companion paper, generalizing the AGT conjecture to this setting. For $Y\to X=\{xy-z^mw^n\}$, this gives a geometric approach to the relationship between $W$-algebras and Yangians for affine $\mathfrak{gl}_{m|n}$.
Dylan Butson, Miroslav Rapcak
2023-09-28T16:39:18Z
http://arxiv.org/abs/2309.16582v1
Perverse coherent extensions on Calabi-Yau threefolds and representations of cohomological Hall algebras ###### Abstract. For \(Y\to X\) a toric Calabi-Yau threefold resolution and \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\) satisfying some hypotheses, we define a stack \(\mathfrak{M}(Y,M)\) parameterizing _perverse coherent extensions_ of \(M\), iterated extensions of \(M\) and the compactly supported perverse coherent sheaves of Bridgeland. We define framed variants \(\mathfrak{M}^{\mathrm{f}}(Y,M)\), prove that they are equivalent to stacks of representations of framed quivers with potential \((Q^{\mathrm{f}},W^{\mathrm{f}})\), and deduce natural monad presentations for these sheaves. Moreover, following Soibelman we prove that the homology \(H_{\bullet}(\mathfrak{M}^{\mathrm{f},\zeta}(Y,M),\varphi_{W^{\mathrm{f}}})\) of the space of \(\zeta\)-stable, \(\mathrm{f}\)-framed perverse coherent extensions of \(M\), with coefficients in the sheaf \(\varphi_{W^{\mathrm{f}}}\) of vanishing cycles for \(W^{\mathrm{f}}\), is a representation of the Kontsevich-Soibelman cohomological Hall algebra of \(Y\). For \(M=\mathcal{O}_{Y}[1]\), \(\mathfrak{M}^{\mathrm{f}}(Y,M)\) is the stack of perverse coherent systems of Nagao-Nakajima, so \(\mathbb{V}_{Y}^{\zeta}=H_{\bullet}(\mathfrak{M}^{\mathrm{f},\zeta}(Y,M),\varphi_{W^{\mathrm{f}}})\) is the DT/PT series of \(Y\) for \(\zeta=\zeta_{\mathrm{DT/PT}}\) by Szendroi and _loc. cit._, and we conjecture that \(\mathbb{V}_{Y}^{\zeta_{\mathrm{NCDT}}}\) is the vacuum module for the quiver Yangian of Li-Yamazaki. For \(M=\mathcal{O}_{S}[1]\) with \(S\subset Y\) a divisor, \(\mathfrak{M}^{\mathrm{f}}(Y,M)\) provides a definition in algebraic geometry for Nekrasov's spiked instanton variant of the ADHM construction, and analogous variants of the constructions of Kronheimer-Nakajima, Nakajima-Yoshioka, and Finkelberg-Rybnikov. We conjecture that \(H_{\bullet}(\mathfrak{M}^{\mathrm{f},\zeta}(Y,M),\varphi_{W^{\mathrm{f}}})\) is the vacuum module of the vertex algebra \(\mathbb{V}(Y,S)\) defined by the authors in a companion paper, generalizing the AGT conjecture to this setting. For \(Y\to X=\{xy-z^{m}w^{n}\}\), this gives a geometric approach to the relationship between \(W\)-algebras and Yangians for affine \(\mathfrak{gl}_{m|n}\). 
###### Contents * 1 Introduction * 1.1 Motivation from algebraic geometry * 1.2 Motivation from geometric representation theory * 1.3 Motivation from string theory * 1.4 Summary of results * 2 Preliminaries * 2.1 \(A_{\infty}\) algebras and their module categories * 2.2 Tilting objects and derived equivalences * 2.3 Perverse coherent sheaves and non-commutative resolutions * 2.4 Quivers, path algebras, and Koszul duality for \(A_{\infty}\) algebras * 2.5 Calabi-Yau structures and quivers with potential * 2.6 \(A_{\infty}\) categories and the twisted objects construction of Kontsevich * 3 Perverse coherent sheaves on Calabi-Yau threefolds and quivers with potential * 3.1 Overview of Section 3 * 3.2 Tilting objects on toric Calabi-Yau threefolds * 3.3 Koszul duality patterns in equivariant enumerative geometry * 3.4 Monad presentations from Koszul duality * 3.5 Generalization to the \(A_{\infty}\) case via twisted objects * 3.6 Moduli spaces of perverse coherent sheaves and quivers with potential * 3.7 Examples * 3.8 Koszul resolutions from Beilinson spectral sequences * 4 Perverse coherent extensions on Calabi-Yau threefolds and extended quivers * 4.1 Overview of Section 4 * 4.2 Bridgeland-Deligne t-structures * 4.3 Categories of perverse coherent extensions * 4.4 Monad presentations of perverse coherent extensions * 4.5 Moduli spaces of perverse coherent extensions and extended quivers * 4.6 Framing structures * 4.7 Examples * 5 Representations of cohomological Hall algebras from perverse coherent extensions * 5.1 Overview of Section 5 * 5.2 Vanishing cycles sheaves and cohomology * 5.3 Cohomological Hall algebras of quivers with potential and Calabi-Yau threefolds * 5.4 Representations of cohomological Hall algebras from perverse coherent extensions * 5.5 Enumerative invariants from perverse coherent extensions * 5.6 Towards representations of shifted quiver Yangians * 6 Yangians of threefolds and vertex algebras of divisors * 6.1 Perverse coherent systems, Donaldson-Thomas theory, and quiver Yangians * 6.2 Perverse coherent extensions of divisors, Vafa-Witten theory, and vertex algebras * 6.3 Towards isomorphisms between \(W\)-superalgebras and Yangians for affine \(\mathfrak{gl}_{m|n}\) ## 1. Introduction This paper is intended to contribute to a family of interconnected mathematical ideas at the intersections of geometric representation theory, enumerative algebraic geometry, low dimensional topology, and integrable systems, which follow predictions from supersymmetric quantum field theory and string theory. We are especially interested in a group of results following the conjectures of Alday-Gaiotto-Tachikawa [1], and their independent proof by Schiffmann-Vasserot [23], Maulik-Okounkov [17], and Braverman-Finkelberg-Nakajima [10]. The goal of this paper is to construct certain moduli spaces of sheaves on Calabi-Yau threefolds, and representations on their homology groups of algebras of Hecke modifications of these sheaves, which conjecturally identify with modules over certain vertex algebras and affine quantum groups, generalizing the AGT conjecture and several related predictions of string theory. 
In this introduction, we will explain these goals and the results of this paper as follows: * In Section 1.1, we explain the motivation from algebraic geometry: to construct moduli spaces of coherent sheaves on Calabi-Yau threefolds which are equivalent to spaces of representations of a framed quiver with potential via a monad description, and in particular give models for moduli spaces of instantons on divisors, as in Theorem A. * In Section 1.2, we explain the motivation from geometric representation theory: to construct representations on the cohomology groups of these moduli spaces of certain infinite dimensional associative algebras, as in Theorem B, and identify these with particular modules. * In Section 1.3, we explain the motivation from the relevant string theory constructions. * In Section 1.4, we give a concrete summary of the results of each section in this paper. ### Motivation from algebraic geometry The first goal of this paper is to construct certain moduli spaces of sheaves on Calabi-Yau threefolds \(Y\) which admit descriptions as moduli spaces of representations of quivers with potential. The motivating example is the space of perverse coherent systems on \(Y\) introduced by Nagao-Nakajima [11], and studied systematically in [20]. This space was defined to give a geometric interpretation to the results of Szendroi [22], who computed the Donaldson-Thomas (DT) invariants [15], Pandharipande-Thomas (PT) invariants [15], and generalizations thereof, for the resolved conifold \(Y=|\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)|\) using the moduli space of representations of a certain quiver with potential discovered in [16]. The definition of the moduli space of perverse coherent systems on \(Y\) is in terms of the notion of perverse coherent sheaves on \(Y\) introduced in [14], but we will begin by explaining the simplest example of such a moduli space, the case \(Y=\mathbb{C}^{3}\), for which the category of perverse coherent sheaves on \(Y\) is simply the usual abelian category of coherent sheaves. A (rank 1, compactly supported) perverse coherent system on \(\mathbb{C}^{3}\) is by definition a pair \((F,s)\) where \(F\in\operatorname{Coh}_{\operatorname{cs}}(\mathbb{C}^{3})\) is a compactly supported coherent sheaf on \(\mathbb{C}^{3}\) and \(s:\mathcal{O}_{\mathbb{C}^{3}}\to F\) is a map of coherent sheaves. The stack parameterizing these objects is evidently equivalent to that of finite dimensional representations of the quiver with potential Indeed, the critical locus equations for the potential \(W\) require that the endomorphisms \(B_{i}:V\to V\) commute, so that \(V\) defines a compactly supported coherent sheaf \(F\) on \(\mathbb{C}^{3}\), and thus we can identify the stack of representations of the underlying unframed quiver with potential with the stack of compactly supported coherent sheaves on \(\mathbb{C}^{3}\). Similarly, the image of the unit \(1\in\mathbb{C}\) under the map \(I:\mathbb{C}\to V\) determines the image of the unit section \(1\in\mathcal{O}_{\mathbb{C}^{3}}\) under a unique map \(s:\mathcal{O}_{\mathbb{C}^{3}}\to F\). 
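As a worked check of the last two sentences (the quiver diagram itself is not reproduced above), assume the standard potential for the framed three-loop quiver on \((V,B_{1},B_{2},B_{3},I)\), namely \(W=\operatorname{tr}\big(B_{3}[B_{1},B_{2}]\big)\); this is our reading, consistent with the constraints quoted in the text. Its cyclic partial derivatives are \[\partial_{B_{3}}W=[B_{1},B_{2}]\,\qquad\partial_{B_{1}}W=[B_{2},B_{3}]\,\qquad\partial_{B_{2}}W=[B_{3},B_{1}]\,\] so the critical locus is cut out precisely by the commutativity of the \(B_{i}\), exhibiting \(V\) as a finite dimensional module over \(\mathbb{C}[z_{1},z_{2},z_{3}]\), that is, a compactly supported coherent sheaf on \(\mathbb{C}^{3}\).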
The open substack of cyclic representations of this quiver with potential, which is precisely the stable locus with respect to a choice of King stability condition \(\zeta\), corresponds to the condition that the map \(s\) is surjective, so that the moduli space of \(\zeta\)-stable representations is equivalent to the Hilbert scheme \(\operatorname{Hilb}_{n}(\mathbb{C}^{3})\) of zero dimensional subschemes of \(\mathbb{C}^{3}\) of length \(n=\dim V\), noting that we have a short exact sequence \[\mathcal{I}\to\mathcal{O}_{\mathbb{C}^{3}}\xrightarrow{s}F\] so that the data of the coherent sheaf \(F\) and the map \(s\) are equivalent to that of an ideal sheaf \(\mathcal{I}\) in \(\mathcal{O}_{\mathbb{C}^{3}}\) of codimension \(n\). This moduli space is the simplest example of a DT theory moduli space, and is the prototypical example of the class of moduli spaces considered in this paper. Finally, we mention another equivalent interpretation of this moduli space, which is the one that our construction most naturally generalizes. In the derived category \(\operatorname{D}^{b}\!\operatorname{Coh}(\mathbb{C}^{3})\), we can rewrite the above exact triangle equivalently as \[F\to\mathcal{I}[1]\to\mathcal{O}_{\mathbb{C}^{3}}[1]\, \tag{1.1}\] which we interpret geometrically as follows: the ideal sheaves \(\mathcal{I}\) being parameterized by \(\operatorname{Hilb}_{n}(\mathbb{C}^{3})\), viewed as objects in the derived category \(\mathcal{I}[1]\in\operatorname{D}^{b}\!\operatorname{Coh}(\mathbb{C}^{3})\) concentrated in cohomological degree \(-1\), can equivalently be characterized as certain extensions of the object \(\mathcal{O}_{\mathbb{C}^{3}}[1]\) by a compactly supported coherent sheaf \(F\in\operatorname{Coh}_{\operatorname{cs}}(\mathbb{C}^{3})\) concentrated in cohomological degree \(0\). Even away from the stable locus, the data of the map \(s\) in the definition of perverse coherent system is evidently equivalent to a representative of the extension class under the identification \[\operatorname{Hom}^{1}(\mathcal{O}_{\mathbb{C}^{3}}[1],F)=\operatorname{Hom}^{ 0}(\mathcal{O}_{\mathbb{C}^{3}},F)\.\] In fact, there exist exotic t-structures on the triangulated category \(\operatorname{D}^{b}\!\operatorname{Coh}(\mathbb{C}^{3})\) with hearts containing both \(\operatorname{Coh}_{\operatorname{cs}}(\mathbb{C}^{3})\) and \(\mathcal{O}_{\mathbb{C}^{3}}[1]\), called perverse coherent t-structures in the sense of Deligne, as recalled in [1]. Thus, we can understand the moduli space of perverse coherent systems as simply describing the space of extensions between objects in a certain abelian category, and this is the perspective that we will generalize below. For more general threefolds \(Y\) on which the perverse coherent sheaves in the sense of Bridgeland are distinct from the usual coherent heart, this requires a new family of t-structures that combines these two notions, which we call Bridgeland-Deligne perverse coherent t-structures. The primary motivation for the generalization of the moduli space of perverse coherent systems on a threefold \(Y\) that we introduce in this paper is to construct moduli spaces of sheaves supported on algebraic surfaces \(S\) or divisors \(S\subset Y\) which are also equivalent to representations of framed quivers with potential. 
Perhaps the most famous example of any construction relating quiver representations and sheaves is that of Atiyah-Drinfeld-Hitchin-Manin [1], which provides an isomorphism between the moduli space of stable, framed instantons of rank \(r\) and charge \(n\) on \(\mathbb{R}^{4}\) and the space of stable representations of dimension \(\dim V=n\) of the quiver It is well known that in the case \(r=1\), the only non-trivial instantons are purely singular and thus the moduli space of these objects is equivalent to the Hilbert scheme \(\operatorname{Hilb}_{n}(\mathbb{C}^{2})\) after choosing a complex structure to identify \(\mathbb{R}^{4}\cong\mathbb{C}^{2}\). The induced identification between stable, rank \(1\) representations of the ADHM quiver of dimension \(n\) and points in \(\operatorname{Hilb}_{n}(\mathbb{C}^{2})\) is exactly in parallel with the \(\mathbb{C}^{3}\) case explained above: one can show that the stable locus in the stack of quiver representations is contained in the locus where the map \(J:V\to\mathbb{C}\) vanishes, after which the argument from the \(\mathbb{C}^{3}\) case above applies _mutatis mutandis_. It is not difficult to see that this construction can be recast in terms of sheaves on \(\mathbb{C}^{3}\) supported on \(\mathbb{C}^{2}\), and that the ADHM quiver admits a corresponding natural generalization to a quiver with potential. The Hilbert scheme of zero dimensional, length \(n\) subschemes of \(\mathbb{C}^{2}\) is by definition the quot scheme \(\operatorname{Quot}_{n}(\mathbb{C}^{2},\mathcal{O}_{\mathbb{C}^{2}})\) of length \(n\) quotients over \(\mathbb{C}^{2}\) of the structure sheaf \(\mathcal{O}_{\mathbb{C}^{2}}\), which can be equivalently interpreted as the quot scheme \(\operatorname{Quot}_{n}(\mathbb{C}^{3},\iota_{*}\mathcal{O}_{\mathbb{C}^{2}})\) of length \(n\) quotients over \(\mathbb{C}^{3}\) of the pushforward of the structure sheaf \(\mathcal{O}_{\mathbb{C}^{2}}\) along the inclusion \(\iota:\mathbb{C}^{2}\to\mathbb{C}^{3}\). Similarly, since the ADHM quiver is a doubled quiver in the sense of Nakajima [21], it admits a canonical enhancement to a tripled quiver with potential in the sense of Ginzburg [14], given by (1.2) These two reformulations evidently correspond to each other under an equivalence of the type explained above. Indeed, the critical locus equations for the potential \(W\) imply not only that \(B_{1},B_{2},I\) and \(J\) satisfy the ADHM relations, but additionally that \[[B_{1},B_{3}]=[B_{2},B_{3}]=0\qquad\text{and}\qquad B_{3}I=0\.\] Together, this implies that on the stable locus \(B_{3}\) vanishes identically on \(V\), and thus \(V\) should be interpreted as a compactly supported coherent sheaf on \(\mathbb{C}^{3}\) which is annihilated by the coordinate function \(z_{3}\) corresponding to \(B_{3}\), and thus supported on \(\mathbb{C}^{2}\subset\mathbb{C}^{3}\). In turn, the map \(\mathcal{O}_{\mathbb{C}^{3}}\to V\) corresponding to \(I\) evidently factors through a map \(s:\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}\to V\). In these terms, the goal of our construction is to systematically generalize the preceding example in which \(Y=\mathbb{C}^{3}\) and \(S=\mathbb{C}^{2}\) to give analogous models in algebraic geometry for moduli spaces of instantons on divisors \(S\) in Calabi-Yau threefolds \(Y\). However, in order to better motivate the details of our approach, we briefly discuss some of the subtleties which appear already in this simple setting and must be resolved in the general construction. 
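For reference, the critical locus claim above can be checked directly: assuming the potential of the tripled quiver is \(W=\operatorname{tr}\big(B_{3}([B_{1},B_{2}]+IJ)\big)\) (our normalization, consistent with the stated relations), the cyclic derivatives give \[\partial_{B_{3}}W=[B_{1},B_{2}]+IJ\,\quad\partial_{B_{1}}W=[B_{2},B_{3}]\,\quad\partial_{B_{2}}W=[B_{3},B_{1}]\,\quad\partial_{I}W=JB_{3}\,\quad\partial_{J}W=B_{3}I\,\] so that the critical locus equations are the ADHM relation \([B_{1},B_{2}]+IJ=0\) together with \([B_{1},B_{3}]=[B_{2},B_{3}]=0\), \(B_{3}I=0\), and \(JB_{3}=0\).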
One natural question when comparing with the example of perverse coherent systems on \(\mathbb{C}^{3}\) is to give an analogous geometric description of the full stack of representations of the quiver with potential of Equation 1.2. In particular, away from the stable locus the map \(J:V\to\mathbb{C}\) is not in general zero, so the moduli space is evidently parameterizing more than just general maps \(s:\mathcal{O}_{\mathbb{C}^{2}}\to V\), but there is not such an evident geometric interpretation for the map \(J\) as for \(I\). Moreover, this issue is even more apparent in rank \(r>1\), in which case the map \(J\) is not in general zero even on the stable locus. Further, the well-known description in algebraic geometry of the moduli space of stable representations of the ADHM quiver of higher rank, as in [16], requires a choice of projective compactification of \(S=\mathbb{C}^{2}\), but we would like to generalize this construction to surfaces \(S\) in threefolds \(Y\) for which there is not evidently a canonical choice of projective compactification. We now explain the resolution of these subtleties in the present setting: Even in the well-known setting of _loc. cit._, where the moduli space of stable representations of the usual ADHM quiver is identified with the moduli space of torsion free sheaves \(\mathcal{E}\) on \(\mathbb{P}^{2}\) equipped with a trivialization \(\varphi:\mathcal{E}|_{\mathbb{P}^{1}_{\infty}}\xrightarrow{\cong}\mathcal{O}_ {\mathbb{P}^{1}_{\infty}}\) on the line \(\mathbb{P}^{1}_{\infty}\subset\mathbb{P}^{2}\) at infinity, there is surprisingly little discussion in the literature of the full stack of representations. In fact, the only explicit reference we could find which gives a geometric description of the stack of representations of the ADHM quiver is in Section 5 of [1], where the result is attributed to Drinfeld. The description is as the stack of torsion free perverse coherent sheaves on \(\mathbb{P}^{2}\), equipped with a trivialization at infinity as above, where the former are defined to be complexes of coherent sheaves \[E=\left[E^{-1}[1]\xrightarrow{d}E^{0}\right]\, \tag{1.3}\] such that \(H^{-1}E\) is torsion free and \(H^{0}E\) has zero dimensional support. We now reinterpret this description analogously to the interpretation of perverse coherent systems in terms of extensions explained above. First, note that these complexes of sheaves are again in the heart of a perverse coherent t-structure on \(\mathbb{P}^{2}\) in the sense of Deligne, as the name suggests. For simplicity, we begin by considering the stable locus on which the differential \(d\) is surjective, so that the objects being parameterized are just the usual torsion free coherent sheaves \(H^{-1}E=\mathcal{E}[1]\) thought of as concentrated in cohomological degree \(-1\). It is well known that every such sheaf fits into a canonical short exact sequence \[\mathcal{E}\to(\mathcal{E}^{\vee})^{\vee}\to F\] where the reflexive hull \((\mathcal{E}^{\vee})^{\vee}\) determines a rank \(r\) vector bundle on \(\mathbb{P}^{2}\), and \(F\) is a coherent sheaf with zero dimensional support concentrated in cohomological degree \(0\). Note that \(F\) must be supported on the complement \(\mathbb{C}^{2}=\mathbb{P}^{2}\setminus\mathbb{P}^{1}_{\infty}\) of the line at infinity, by the existence of the isomorphism \(\varphi\), and similarly for \(H^{0}E\) in the unstable case. 
Thus, the restriction \(\tilde{\mathcal{E}}=\mathcal{E}|_{\mathbb{C}^{2}}\) and the restriction of the double dual \((\tilde{\mathcal{E}}^{\vee})^{\vee}\) also fit into a short exact sequence \[\tilde{\mathcal{E}}\to(\tilde{\mathcal{E}}^{\vee})^{\vee}\to F\.\] The rank \(r\) vector bundle \((\tilde{\mathcal{E}}^{\vee})^{\vee}\) on \(\mathbb{C}^{2}\) is necessarily trivializable, and the choice of trivialization at infinity of \(\mathcal{E}\) determines a trivialization at infinity for \((\mathcal{E}^{\vee})^{\vee}\) which extends uniquely to a trivialization of \((\tilde{\mathcal{E}}^{\vee})^{\vee}[1]\). Thus, the space of sheaves of the form \(\tilde{\mathcal{E}}[1]\) is identified with the space of extensions \[F\to\tilde{\mathcal{E}}[1]\to\mathcal{O}_{\mathbb{C}^{2}}^{\oplus r}[1]\, \tag{1.4}\] which is the desired analogue of Equation 1.1. In fact, the object \(\mathcal{O}_{\mathbb{C}^{2}}^{\oplus r}[1]\) is itself an iterated extension of the object \(\mathcal{O}_{\mathbb{C}^{2}}[1]\), so that we can further identify the space of sheaves \(\tilde{\mathcal{E}}[1]\) with the space of iterated extensions of compactly supported coherent sheaves \(F\in\operatorname{Coh}_{\mathrm{cs}}(\mathbb{C}^{3})\) with \(r\) copies of \(\mathcal{O}_{\mathbb{C}^{2}}[1]\), such that \(F\) is a subobject and the underlying iterated extension of \(\mathcal{O}_{\mathbb{C}^{2}}[1]\) is equipped with an isomorphism to the object \(\mathcal{O}_{\mathbb{C}^{2}}^{\oplus r}[1]\). The generalization of this description to the unstable locus simply corresponds to dropping the requirement that \(F\) occurs as a subobject in the iterated extension. For example, in the simplest case that \(F=\iota_{*}\mathcal{O}_{\mathrm{pt}}\) is given by the structure sheaf of a single reduced point in \(\mathbb{C}^{2}\), there is a unique non-trivial extension class (1.5) where the top row is given by the Koszul resolution of \(\mathcal{O}_{\mathrm{pt}}\). The totalization of this map of complexes provides a representative \(E\) of the extension class, and this complex of sheaves is the prototypical example of the desired geometric interpretation of a representation of the ADHM quiver with \(J\neq 0\). Equivalently, we can view this as a representative of the corresponding class in \(\operatorname{Hom}^{2}(\mathcal{O}_{\mathrm{pt}},\mathcal{O}_{\mathbb{C}^{2}})\), as \[\mathcal{O}_{\mathbb{C}^{2}}\to E^{-1}\to E^{0}\to\mathcal{O}_{\mathrm{pt}} \qquad\text{where}\qquad\begin{cases}E^{-1}&=\operatorname{coker}\left[ \mathcal{O}_{\mathbb{C}^{2}}\to\mathcal{O}_{\mathbb{C}^{2}}^{3}\right]\\ E^{0}&=\mathcal{O}_{\mathbb{C}^{2}}\end{cases}\,\] which is the corresponding prototypical example of a perverse coherent torsion free sheaf, in the sense of Equation 1.3, for which the differential \(d\) is not surjective. We can extend this argument to explicitly parameterize the entire moduli space of such sheaves in terms of representations of the ADHM quiver. 
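For concreteness, the Koszul resolution of \(\mathcal{O}_{\mathrm{pt}}\) invoked here, say for the origin in \(\mathbb{C}^{2}\) with coordinates \(z_{1},z_{2}\) and with one standard choice of signs, is \[0\to\mathcal{O}_{\mathbb{C}^{2}}\xrightarrow{\binom{-z_{2}}{\ z_{1}}}\mathcal{O}_{\mathbb{C}^{2}}^{\oplus 2}\xrightarrow{(z_{1},\,z_{2})}\mathcal{O}_{\mathbb{C}^{2}}\to\mathcal{O}_{\mathrm{pt}}\to 0\,\] and the representatives of the extension classes described below are obtained by adding differentials to (sums of) such resolutions.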
The corresponding prototypical extension of the form in Equation 1.4 is represented analogously in terms of the Koszul resolution of \(\mathcal{O}_{\mathrm{pt}}\) by the map of complexes (1.6) Moreover, note that any compactly supported coherent sheaf \(F\in\operatorname{Coh}_{\mathrm{cs}}(\mathbb{C}^{2})\) of length \(n\) is itself an iterated extension of the structure sheaves of \(n\) points in \(\mathbb{C}^{2}\), and again there are natural representatives of the two elementary non-trivial extensions in \(\operatorname{Hom}^{1}(\mathcal{O}_{\mathrm{pt}},\mathcal{O}_{\mathrm{pt}})\cong\mathbb{C}^{2}\) given by (1.7) Thus, we can express a general sheaf \(E\) which is isomorphic to an iterated extension of \(n\) copies of \(\mathcal{O}_{\mathrm{pt}}\) and \(r\) copies of \(\mathcal{O}_{\mathbb{C}^{2}}[1]\), and equipped with an isomorphism between the underlying iterated extension of \(\mathcal{O}_{\mathbb{C}^{2}}[1]\) and \(\mathcal{O}_{\mathbb{C}^{2}}^{\oplus r}[1]\), as a complex of the form (1.8) where \(V\) is an \(n\) dimensional vector space representing the multiplicity space of the copies of \(\mathcal{O}_{\mathrm{pt}}\) used in the construction. This is precisely the monad presentation of Equation 2.6 in [20] for arbitrary maps \(B_{1}\), \(B_{2}\), \(I\), and \(J\) satisfying the ADHM equations, and in particular we deduce that this moduli space of sheaves is equivalent to the full stack of representations of the ADHM quiver. The preceding arguments all apply analogously to identify the stack of representations of the quiver with potential of Equation 1.2 with that parameterizing complexes of coherent sheaves isomorphic to an iterated extension of \(\mathcal{O}_{\mathrm{pt}}\) and \(\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}[1]\) in an appropriate category of perverse coherent sheaves on \(\mathbb{C}^{3}\) in the sense of Deligne, equipped with an isomorphism of the underlying iterated extension of \(\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}[1]\) with \(\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}^{\oplus r}[1]\). This is the prototypical example of the spaces of coherent sheaves studied in this paper, which we call (rank \(r\), trivially framed) _perverse coherent extensions_ of \(\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}[1]\). We remark that the object which we have referred to as _the_ underlying iterated extension of \(\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}[1]\) is not in general well-defined, but we give general hypotheses sufficient to ensure that it is, which can be verified directly in the examples of interest; we will ignore this subtlety for the remainder of the introduction. The reinterpretation of the framing data as a choice of isomorphism between the underlying iterated extension of \(\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}[1]\) and the trivial extension \(\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}^{\oplus r}[1]\) may at first seem ad hoc, but in fact it immediately leads to a natural family of generalizations of the above correspondence, which play a crucial role in our desired applications in geometric representation theory. We define a framing structure \(\mathrm{f}\) to be a fixed iterated extension \(H_{\mathrm{f}}\) of \(r\) copies of \(\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}[1]\), and we can consider the analogous moduli space of sheaves but equipped with an isomorphism to \(H_{\mathrm{f}}\) instead of the trivial extension \(\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}^{\oplus r}[1]\), which we call \(\mathrm{f}\)-_framed_ perverse coherent extensions. 
In the present setting, such extensions are determined by a nilpotent Higgs field \(\phi_{\mathrm{f}}\) of rank \(r\) on \(\mathbb{C}^{2}\), and if we assume that \(\phi_{\mathrm{f}}\) is given by the constant nilpotent matrix \(A_{\mathrm{f}}\in\mathfrak{gl}_{r}\), the corresponding stack of \(\mathrm{f}\)-framed perverse coherent extensions of \(\iota_{*}\mathcal{O}_{\mathbb{C}^{2}}[1]\) is equivalent to that of representations of the modified quiver with potential (1.9) Arrows marked with a \(\bullet\) such as that labeled by \(A_{\mathrm{f}}\) in this quiver are fixed, in the sense that they are not part of the data required to specify a representation of the quiver, but are chosen as part of the input data for defining the quiver with potential itself, and modify the equations on the collection of linear maps corresponding to the unmarked arrows defining a quiver representation. This quiver with potential was studied recently in [10] in precisely the geometric representation theory context in which we are interested, and their results were our initial motivation for studying this generalized notion of framing. We can also consider framed perverse coherent extensions in the presence of more than one object at once, which provide analogous models in algebraic geometry for moduli spaces of instantons on divisors which are not necessarily irreducible. Indeed, considering the shifted structure sheaves of the three coordinate divisors in \(\mathbb{C}^{3}\) simultaneously, we obtain a description in algebraic geometry for the stack of representations of the spiked instanton variant of the ADHM quiver introduced in [20], the quiver with potential defined by (1.10) The above is the quiver with potential corresponding to the trivial framing structure, but for any choice of iterated extension of the framing objects there is a corresponding modification of the potential analogous to that in Equation 1.9 above. We now briefly describe our generalization of the three dimensional variants of the ADHM construction described above to a larger class of algebraic surfaces \(S\), or more generally divisors, in Calabi-Yau threefolds \(Y\). We must assume that \(Y\) occurs as a resolution \(f:Y\to X\) such that \(\dim f^{-1}(\{x\})\leq 1\) for each \(x\in X\) and \(f_{*}\mathcal{O}_{Y}\cong\mathcal{O}_{X}\), where we use \(f_{*}\) to denote the full derived push-forward functor, and for concreteness we restrict our attention to the case that \(X\) is an affine, toric threefold singularity with a unique \(T\)-fixed point \(x\in X\). The former conditions are precisely those under which the perverse coherent t-structure in the sense of Bridgeland is defined on \(\mathrm{D}^{b}\mathrm{Coh}(Y)\), and the basic idea is to modify the definitions above by replacing the compactly supported coherent sheaves \(\mathrm{Coh}_{\mathrm{cs}}(Y)\) on \(Y=\mathbb{C}^{3}\) used in the preceding constructions with the complexes of sheaves with compactly supported cohomology contained in the alternative heart \(\mathrm{PervCoh}_{\mathrm{cs}}(Y)\subset\mathrm{PervCoh}(Y)\). The category of compactly supported perverse coherent sheaves \(\mathrm{PervCoh}_{\mathrm{cs}}(Y)\) has a natural collection of simple generators, generalizing the role of the structure sheaves of points in \(\mathrm{Coh}_{\mathrm{cs}}(\mathbb{C}^{3})\) in the previous constructions. 
In the simplest example where the fibre \(C=f^{-1}(\{x\})\) over the fixed point \(x\in X\) is reduced and irreducible, so that it is isomorphic as a scheme to \(\mathbb{P}^{1}\), the compactly supported perverse coherent sheaves on \(Y\) are generated by the structure sheaves of points in \(Y\setminus C\) together with the objects \(F_{0}=\iota_{*}\mathcal{O}_{\mathbb{P}^{1}}\) and \(F_{1}=\iota_{*}\mathcal{O}_{\mathbb{P}^{1}}(-1)[1]\), where \(\iota:C\to Y\) is the inclusion of the exceptional curve. For simplicity, we restrict our attention to the formal completion \(\hat{Y}\) of \(Y\) along \(C\), for which the category is generated only by the latter two objects. We can again identify the moduli stack of objects in this category with the stack of representations of an unframed quiver with potential, in analogy with the description of \(\mathrm{Coh}_{\mathrm{cs}}(\mathbb{C}^{3})\) above, where the quiver is defined to have two vertices corresponding to the objects \(F_{0}\) and \(F_{1}\), and arrows determined by the non-trivial extension classes between these objects in the threefold \(Y\). For example, in the case that \(Y=|\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)|\), it is given by precisely the unframed variant of the quiver with potential used in our motivating example of the study of the DT theory of the conifold in [10], originally discovered in string theory by Klebanov-Witten [11]. We will comment further on the physical derivation below in Section 1.3, which explains the motivation from string theory for our constructions. A typical example is the description of the structure sheaf \(\mathcal{O}_{y}\) of a point \(y\) on the exceptional curve \(C\) as the quiver representation corresponding to an extension \[\mathcal{O}_{\mathbb{P}^{1}}\to\mathcal{O}_{y}\to\mathcal{O}_{\mathbb{P}^{1}}(-1)[1]\ ;\] this was one of the original motivations for the definition of the category of perverse coherent sheaves in [1], and can be understood as a reinterpretation in the derived category of the classical description of Beilinson [1] of coherent sheaves on \(\mathbb{P}^{1}\) as representations of a quiver. More generally, we consider the stack of iterated extensions of these objects together with a fixed number of copies of an auxiliary object \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)\), such as \(\mathcal{O}_{S}[1]\) for a reduced divisor \(S\subset Y\), equipped with an isomorphism of the underlying iterated extension of \(\mathcal{O}_{S}[1]\) with some fixed choice of such an extension. This is the general definition of the stack of perverse coherent extensions of \(M\), and under some hypotheses on the object \(M\), we prove that it is equivalent to the stack of representations of a framed quiver with potential, and that this equivalence is implemented explicitly by a monad presentation, as for the examples in the case \(Y=\mathbb{C}^{3}\) explained above. In the case \(Y=\widetilde{A_{m-1}}\times\mathbb{A}^{1}\to X=\{xy-z^{m}\}\times\mathbb{A}^{1}\), the product of \(\mathbb{A}^{1}\) with a resolution \(\widetilde{A_{m-1}}\) of the singularity \(A_{m-1}=\{xy-z^{m}\}\), and \(M=\mathcal{O}_{S}[1]\) for \(S=\widetilde{A_{m-1}}\), the resulting quiver is a framed, tripled variant of the affine Dynkin quiver of type \(A_{m-1}\). 
In the case \(m=2\) for example, with framing structure determined by a nilpotent endomorphism \(G_{\mathrm{f}}\in\mathfrak{gl}_{r}\), the resulting quiver with potential is given by the framed, tripled affine \(A_{1}\) quiver just described, with potential \[W_{M}^{\mathrm{f}}=E(BC-DA)+F(AD-CB)+EIJ-IG_{\mathrm{f}}J\.\] In the case \(G_{\mathrm{f}}=0\), this is precisely the three dimensional analogue of the Nakajima quiver variety given by the doubled affine Dynkin quiver, the moduli spaces of representations of which were identified with moduli spaces of instantons on \(\widetilde{A_{m-1}}\) in the results of Kronheimer-Nakajima [10]. We give a three dimensional variant of this identification which extends to the full stack of quiver representations, via a corresponding variant of the monad description of _loc. cit._. In the case \(Y=|\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)|\to X=\{xy-zw\}\) and \(M=\mathcal{O}_{S}[1]\) for \(S=|\mathcal{O}_{\mathbb{P}^{1}}(-1)|\), the quiver with potential is given by a framed variant of the conifold quiver of the preceding example, with potential \[W=ABCD-ADBC+IJC\.\] This is a three dimensional analogue of the quiver with relations studied in [11], generalizing the results of King's thesis [10] from vector bundles to torsion-free sheaves on \(|\mathcal{O}(-1)|\). However, we note that this example involves an additional subtlety which was absent in the preceding examples: since the potential has quartic terms rather than only cubic, the critical locus equations will have cubic terms rather than just quadratic, and thus the differentials in the monad description must have some quadratic dependence on the linear maps defining the quiver representation. Indeed, the three dimensional monad description produced by our construction in this case is given by a complex of the form (1.11), whose differentials depend quadratically on the linear maps defining the quiver representation. Evidently a monad description involving a non-linear dependence on the linear maps defining the quiver representation cannot be derived from the straightforward argument we gave to obtain the ADHM monad description in Equation 1.8 above, in which we added differentials to the Koszul resolutions of the constituent objects of the iterated extension corresponding to the representatives of the extension classes between the simple objects, as such differentials evidently depend linearly on the extension classes and in turn the linear maps in the quiver representation. In fact, we observe that the condition that the potential is cubic corresponds to the requirement that the path algebra of the resulting quiver with relations is Koszul. In general, the path algebra need not be Koszul and equivalently its Koszul dual algebra, which is given by the Ext algebra of the simple objects \(F_{i}\), will in general be an \(A_{\infty}\) algebra rather than a strict associative algebra. Following this observation, we use the analogy with the classical setting of [1] to construct the monad description as the image, under the derived equivalence of module categories between the Koszul dual algebras, of a combinatorial description of the corresponding heart of the category of modules over the Ext algebra. 
In the Koszul case, this heart is given by the category of linear complexes of projective modules, as explained in _loc. cit._, and in the general \(A_{\infty}\) case we generalize this description using the twisted objects construction proposed by Kontsevich in [10], and established carefully in [14], allowing us to deduce a general formula for the desired non-linear monad description of perverse coherent extensions. We summarize the above discussion by stating the first main theorem of the paper: Let \(Y\to X\) be a toric Calabi-Yau threefold resolution and \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\) an object in the coherent derived category, equivariant with respect to the structure torus \(T\), and satisfying some hypotheses as outlined above. Then given a choice of framing structure \(\mathrm{f}\), there exists an algebraic stack \(\mathfrak{M}^{\mathrm{f}}(Y,M)\) parameterizing \(\mathrm{f}\)-framed perverse coherent extensions of \(M\), and we have: _Theorem_ A (4.38).: There is a canonical framed quiver with potential \((Q_{M}^{\mathrm{f}},W_{M}^{\mathrm{f}})\) and an equivalence of algebraic stacks \[\mathfrak{M}(Q_{M}^{\mathrm{f}},W_{M}^{\mathrm{f}})\xrightarrow{\cong}\mathfrak{M}^{\mathrm{f}}(Y,M)\,\] where the induced equivalence of groupoids of \(\mathbb{K}\)-points is defined on objects by \[(V_{i},B_{ij})\mapsto\left(\tilde{H}:=\bigoplus_{i\in I_{M}}K(I_{i})\otimes V_{i}\,\ d_{B}:=\sum_{k\in\mathbb{Z},\ i,i_{2},...,i_{k-1},j\in I_{M}}K(\rho_{k}^{\Sigma\oplus\Sigma_{\infty}}(\cdot,b_{i,i_{2}},...,b_{i_{k-1},j})^{N})\otimes(B_{i,i_{2}}...B_{i_{k-1},j})\right)\.\] This general formula determines the monad descriptions of Equations 1.8 and 1.11, for example. The pair \((V_{i},B_{ij})\) denotes a representation of the framed quiver with potential \((Q_{M}^{\mathrm{f}},W_{M}^{\mathrm{f}})\), determined by vector spaces \(V_{i}\) and linear maps \(B_{ij}:V_{i}\to V_{j}\), the objects \(K(I_{i})\xrightarrow{\cong}F_{i}\) are canonical Koszul-type projective resolutions of the compactly supported simple objects \(F_{i}\in\mathrm{PervCoh}_{\mathrm{cs}}(Y)\), and \(d_{B}\) denotes the deformation of the differential determined by the quiver representation in terms of certain \(A_{\infty}\) module structure maps \((\rho_{k}^{\Sigma\oplus\Sigma_{\infty}})_{k\in\mathbb{Z}}\) for the Ext algebras \(\Sigma=\mathrm{Hom}(F,F)\), and the representatives of extension classes \(b_{i,j}\in\mathrm{Hom}^{1}(F_{i},F_{j})\) such as those of Equations 1.5, 1.6 and 1.7.

### Motivation from geometric representation theory

The primary motivation for the results of this paper, and in particular for the detailed study of the moduli spaces of coherent sheaves on threefolds described in the preceding section, is to generalize a conjecture from the seminal paper [1] of Alday-Gaiotto-Tachikawa (AGT) and its proof by Schiffmann-Vasserot in [17]. This generalization predicts relationships between the enumerative geometry of coherent sheaves on surfaces and threefolds, and the representation theory of certain vertex algebras and affine quantum groups, mediated by familiar mechanisms from geometric representation theory. To begin, we explain the geometric origin of the vector spaces on which we will define these representations. 
Recall that the original motivation for the definition of perverse coherent systems in [10] was to provide a geometric explanation for the computations in [11] of the DT invariants, PT invariants, and generalizations thereof, for the threefold \(Y=|\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)|\), in terms of the stack of representations of a quiver with potential. In fact, the computation was of the cohomological analogues of these invariants, as we now explain: For each choice of stability condition \(\zeta\), _loc. cit._ compute the homology of the space of \(\zeta\)-stable representations of the framed quiver with potential \((Q_{Y}^{\mathrm{f}},W_{Y}^{\mathrm{f}})\) determined by the object \(\mathcal{O}_{Y}[1]\) with its unique framing structure \(\mathrm{f}\), or equivalently the space of \(\zeta\)-stable, trivially framed, perverse coherent extensions of \(\mathcal{O}_{Y}[1]\), with coefficients in the sheaf of vanishing cycles \(\varphi_{W_{Y}^{\mathrm{f}}}\) for \(W_{Y}^{\mathrm{f}}\), \[\mathbb{V}_{Y}^{\zeta}=H_{\bullet}(\mathfrak{M}^{\mathrm{f},\zeta}(Y,\mathcal{O}_{Y}[1]),\varphi_{W_{Y}^{\mathrm{f}}})=\bigoplus_{\mathbf{d}\in\mathbb{N}^{Q_{Y}}}H_{\bullet}(\mathfrak{M}^{\mathrm{f},\zeta}_{\mathbf{d}}(Y,\mathcal{O}_{Y}[1]),\varphi_{W_{Y}^{\mathrm{f}}})\] where \(\mathfrak{M}^{\mathrm{f},\zeta}_{\mathbf{d}}(Y,\mathcal{O}_{Y}[1])\) denotes the connected component corresponding to representations of dimension \(\mathbf{d}=(d_{i})_{i\in I}\in\mathbb{N}^{Q_{Y}}\). Then, it was explained in _loc. cit._ that the corresponding generating functions of the Euler characteristics of the graded components \[\mathcal{Z}_{Y}^{\zeta}(\mathbf{q})=\sum_{\mathbf{d}\in\mathbb{N}^{Q_{Y}}}\mathbf{q}^{\mathbf{d}}\chi(H_{\bullet}(\mathfrak{M}^{\mathrm{f},\zeta}_{\mathbf{d}}(Y,\mathcal{O}_{Y}[1]),\varphi_{W_{Y}^{\mathrm{f}}}))\ \in\mathbb{Z}[\![\mathbf{q}]\!]\,\] for appropriate stability conditions \(\zeta_{\mathrm{DT}}\) and \(\zeta_{\mathrm{PT}}\), compute the DT series and PT series of \(Y\), \[\mathcal{Z}_{Y}^{\zeta_{\mathrm{DT}}}(\mathbf{q})=\mathcal{Z}_{Y}^{\mathrm{DT}}(\tilde{\mathbf{q}})\qquad\text{and}\qquad\mathcal{Z}_{Y}^{\zeta_{\mathrm{PT}}}(\mathbf{q})=\mathcal{Z}_{Y}^{\mathrm{PT}}(\tilde{\mathbf{q}})\,\] for some appropriate identifications of parameters. For example, in the case \(Y=\mathbb{C}^{3}\) the DT series is given by the MacMahon function \[\mathcal{Z}_{\mathbb{C}^{3}}^{\mathrm{DT}}(q)=\prod_{k=1}^{\infty}\frac{1}{(1-q^{k})^{k}}=M(q)\qquad\text{for}\qquad q=-q_{0}\,\] as conjectured in [14]; this computation follows from a fixed point counting argument by general results of [1], and was successively generalized in [11], [10], and [21]. For any object \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\) satisfying appropriate hypotheses and a fixed choice of framing structure \(\mathrm{f}\), we can consider analogous spaces of \(\zeta\)-stable representations of the associated framed quiver with potential, or equivalently \(\zeta\)-stable, \(\mathrm{f}\)-framed perverse coherent extensions of \(M\), and define the corresponding cohomological invariants \[\mathbb{V}^{\mathrm{f},\zeta}(M)=\bigoplus_{\mathbf{d}\in\mathbb{N}^{Q_{Y}}}\mathbb{V}^{\mathrm{f},\zeta}_{\mathbf{d}}(M)\qquad\mathbb{V}^{\mathrm{f},\zeta}_{\mathbf{d}}(M)=H_{\bullet}(\mathfrak{M}^{\mathrm{f},\zeta}_{\mathbf{d}}(Y,M),\varphi_{W_{M}^{\mathrm{f}}})\, \tag{1.12}\] and their associated generating functions. 
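For orientation, the coefficients of the MacMahon function count plane partitions, equivalently monomial ideals of finite colength in \(\mathbb{C}[x,y,z]\), which index the torus fixed points appearing in the fixed point counting argument just mentioned: \[M(q)=\sum_{n\geq 0}\mathrm{pp}(n)\,q^{n}=1+q+3q^{2}+6q^{3}+13q^{4}+\cdots\,\] where for example \(\mathrm{pp}(2)=3\) records the three monomial ideals of colength \(2\), given by a pair of boxes stacked along each of the three coordinate axes. 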
We will be especially interested in the case \(M=\mathcal{O}_{S}[1]\) for \(S\subset Y\) a divisor, as we have discussed in the preceding section. Let \(S\subset Y\) be an effective Cartier divisor, \(S^{\mathrm{red}}\) the underlying reduced scheme, and let \(S_{d}\) for \(d\in\mathfrak{D}_{S}\) denote the irreducible components of \(S^{\mathrm{red}}\), so that the semisimplification \(\mathcal{O}_{S^{\mathrm{red}}}^{\mathrm{ss}}\) of \(\mathcal{O}_{S^{\mathrm{red}}}\) is given by \[\mathcal{O}_{S^{\mathrm{red}}}^{\mathrm{ss}}=\bigoplus_{d\in\mathfrak{D}_{S}}\mathcal{O}_{S_{d}}\qquad\text{and we have}\qquad[S]=\sum_{d\in\mathfrak{D}_{S}}r_{d}[S_{d}]\] for some tuple \(\mathbf{r}_{S}=(r_{d})_{d\in\mathfrak{D}_{S}}\) of positive integers \(r_{d}\in\mathbb{N}\). Consider the space of perverse coherent extensions of \(\mathcal{O}_{S^{\mathrm{red}}}^{\mathrm{ss}}[1]\), and note that \(\mathcal{O}_{S}[1]\) is by definition an iterated extension of the objects \(\mathcal{O}_{S_{d}}[1]\) which each occur with multiplicity \(r_{d}\), so that the object \(\mathcal{O}_{S}[1]\) itself determines a framing structure \(\mathrm{f}_{S}\) of rank \(\mathbf{r}_{S}\). Thus, for each stability condition \(\zeta\) on the corresponding framed quiver with potential, we define the associated cohomological invariant \[\mathbb{V}_{S}^{\zeta}=\bigoplus_{\mathbf{d}\in\mathbb{N}^{Q_{Y}}}\mathbb{V}_{S,\mathbf{d}}^{\zeta}\qquad\mathbb{V}_{S,\mathbf{d}}^{\zeta}=H_{\bullet}(\mathfrak{M}_{\mathbf{d}}^{\mathrm{f}_{S},\zeta}(Y,\mathcal{O}_{S^{\mathrm{red}}}^{\mathrm{ss}}[1]),\varphi_{W^{\mathrm{f}_{S}}})\.\] For an appropriate choice of stability condition \(\zeta=\zeta_{\mathrm{VW}}\), we let \(\mathbb{V}_{S}=\mathbb{V}_{S}^{\zeta_{\mathrm{VW}}}\) and introduce the corresponding generating function \[\mathcal{Z}_{S}^{\mathrm{VW}}(\mathbf{q})=\sum_{\mathbf{d}\in\mathbb{N}^{Q_{Y}}}\mathbf{q}^{\mathbf{d}}\chi(H_{\bullet}(\mathfrak{M}_{\mathbf{d}}^{\mathrm{f}_{S},\zeta_{\mathrm{VW}}}(Y,\mathcal{O}_{S^{\mathrm{red}}}^{\mathrm{ss}}[1]),\varphi_{W^{\mathrm{f}_{S}}}))\ \in\mathbb{Z}[\![\mathbf{q}]\!]\,\] which we interpret as a local analogue of the Vafa-Witten partition function proposed in [10]. Indeed, in cases where \(S^{\mathrm{red}}\) is given by an irreducible smooth surface, this will be a generating function for the Euler characteristics of moduli spaces of instantons on \(S\), as we explain below. We also consider the analogous vector spaces \[\mathbb{V}_{S}^{0}=\bigoplus_{\mathbf{d}\in\mathbb{N}^{Q_{Y}}}\mathbb{V}_{S,\mathbf{d}}^{0}\qquad\mathbb{V}_{S,\mathbf{d}}^{0}=H_{\bullet}(\mathfrak{M}_{\mathbf{d}}^{0_{S},\zeta_{\mathrm{VW}}}(Y,\mathcal{O}_{S^{\mathrm{red}}}^{\mathrm{ss}}[1]),\varphi_{W^{0_{S}}})\,\] determined by the trivial framing structure \(0_{S}\) of rank \(\mathbf{r}_{S}\). We now consider the simplest example of this construction: let \(Y=\mathbb{C}^{3}\) and \(S=\mathbb{C}^{2}\), so that the corresponding quiver with potential is given by the three dimensional analogue of the ADHM quiver, as in Equation 1.2. One crucial aspect of the analogy between this quiver with potential and the usual ADHM quiver is that they are related by _dimensional reduction_, in the sense of Appendix A of [11] for example. 
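Schematically, and hedging on the precise conventions of Equation 1.2 (which we have not redisplayed here): the three dimensional potential is linear in the loop \(B_{3}\) transverse to the divisor, of the form \[W=B_{3}\left([B_{1},B_{2}]+IJ\right)\] up to signs and orderings, so that the critical locus equation obtained by differentiating in \(B_{3}\) is exactly the ADHM equation \([B_{1},B_{2}]+IJ=0\), and the vanishing cycle homology of the three dimensional stack reduces to the ordinary Borel-Moore homology of this critical locus in the remaining variables. 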
It is proved in _loc. cit._ that there is a natural isomorphism \[H_{\bullet}(\mathfrak{M}_{\mathbf{d}}(Q,W),\varphi_{W})\xrightarrow{\cong}H_{\bullet}(\mathfrak{M}_{\mathbf{d}}(\tilde{Q},R))\, \tag{1.13}\] between the homology of the stack of representations of the quiver with potential \((Q,W)\), with coefficients in the sheaf of vanishing cycles determined by \(W\), and the ordinary Borel-Moore homology of the corresponding dimensionally reduced quiver with relations \((\tilde{Q},R)\). In the case at hand, this implies we have isomorphisms \[\mathbb{V}_{\mathbb{C}^{2},n}=H_{\bullet}(\mathfrak{M}_{n}^{\mathrm{f},\zeta}(\mathbb{C}^{3},\mathcal{O}_{\mathbb{C}^{2}}[1]),\varphi_{W^{\mathrm{f}}})\xrightarrow{\cong}H_{\bullet}(\mathrm{Hilb}_{n}(\mathbb{C}^{2}))\,\] so that we can identify \[\mathbb{V}_{\mathbb{C}^{2}}=\bigoplus_{n\in\mathbb{N}}H_{\bullet}(\mathrm{Hilb}_{n}(\mathbb{C}^{2}))\qquad\text{and}\qquad\mathcal{Z}_{\mathbb{C}^{2}}^{\mathrm{VW}}(q)=\prod_{k=1}^{\infty}\frac{1}{1-q^{k}}=q^{\frac{1}{24}}\eta(q)^{-1}\,\] where the latter can again be computed directly by a fixed point counting argument. In fact, the preceding well known formula for the generating function of the Euler characteristics of Hilbert schemes of points on surfaces admits a more structured algebraic refinement, which was discovered independently by Grojnowski [12] and Nakajima [25]. It was observed in _loc. cit._ that there exist natural correspondences (1.14) \[\mathrm{Hilb}_{n}(\mathbb{C}^{2})\longleftarrow\mathrm{Hilb}_{n,n+k}(\mathbb{C}^{2})\longrightarrow\mathrm{Hilb}_{n+k}(\mathbb{C}^{2})\qquad\text{inducing operators}\qquad\alpha_{k}^{n}:H_{\bullet}(\mathrm{Hilb}_{n}(\mathbb{C}^{2}))\to H_{\bullet}(\mathrm{Hilb}_{n+k}(\mathbb{C}^{2}))\] for \(k\in\mathbb{Z}\) and \(n\in\mathbb{N}\) in the compatible range. Taking a sum over \(n\in\mathbb{N}\), we obtain operators \[\alpha_{k}=\sum_{n\in\mathbb{N}}\alpha_{k}^{n}:\mathbb{V}_{\mathbb{C}^{2}}\to\mathbb{V}_{\mathbb{C}^{2}}\, \tag{1.15}\] for each \(k\in\mathbb{Z}\) which satisfy the relations implicit in the following theorem: _Theorem 1.1_.: [10, 11] There exists a natural representation \[\mathcal{U}(\pi)\to\operatorname{End}(\mathbb{V}_{\mathbb{C}^{2}})\qquad\text{defined by}\qquad b_{k}\mapsto\alpha_{k}\,\] of the algebra of modes \(\mathcal{U}(\pi)\) of the Heisenberg vertex algebra \(\pi\) on the vector space \(\mathbb{V}_{\mathbb{C}^{2}}\), such that \(\mathbb{V}_{\mathbb{C}^{2}}\) is identified with the vacuum module \(\pi_{0}\) for the Heisenberg vertex algebra. In particular, note that as a corollary of the latter part of the statement, we have an identification \[P_{q}(\pi)=\mathcal{Z}_{\mathbb{C}^{2}}^{\operatorname{VW}}(q) \tag{1.16}\] of the Poincare polynomial of the vacuum module of the vertex algebra \(\pi\) and the rank \(1\) local Vafa-Witten invariant of \(\mathbb{C}^{2}\), which immediately implies the above computation of the latter. 
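As a sanity check on Equation 1.16: the torus fixed points of \(\mathrm{Hilb}_{n}(\mathbb{C}^{2})\) are the monomial ideals, indexed by partitions of \(n\), while the vacuum module \(\pi_{0}\) has a basis of monomials in the creation operators, also indexed by partitions, so both sides expand as \[\prod_{k=1}^{\infty}\frac{1}{1-q^{k}}=\sum_{n\geq 0}p(n)\,q^{n}=1+q+2q^{2}+3q^{3}+5q^{4}+\cdots\,\] with \(p(n)\) the number of partitions of \(n\). 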
The preceding theorem and the resulting identification of Equation 1.16 are the prototype of our desired results relating the enumerative geometry of sheaves on algebraic surfaces to the representation theory of vertex algebras, and the goal of this paper is to extend these to more general divisors \(S\) in Calabi-Yau threefolds \(Y\) corresponding to more interesting vertex algebras, and provide geometric constructions of certain classes of modules for these algebras. It will be important in the generalization to replace all of the above constructions by their equivariant analogues with respect to the action of the subtorus \(\check{T}\) of the toric structure torus \(T\) of the Calabi-Yau threefold \(Y\) which preserves the Calabi-Yau structure, as well as a subtorus \(T_{\mathrm{f}}\) of the group of symmetries of the framing structure \(\mathrm{f}\), the product of which we will denote by \(A=\check{T}\times T_{\mathrm{f}}\). In particular, we consider the vector spaces \(\mathbb{V}\) over the field of fractions \(F\) of \(H^{\bullet}_{A}(\operatorname{pt})\), and similarly with all of the linear algebraic constructions above. The analogous statement of Theorem 1.1 above gives an action of the family of vertex algebras over \(F\) given by the Heisenberg algebra at level \[\kappa=-\frac{1}{\hbar_{i}\hbar_{j}}\qquad\text{where}\qquad H^{\bullet}_{\check{T}}(\operatorname{pt})=\mathbb{K}[\hbar_{1},\hbar_{2},\hbar_{3}]/(\hbar_{1}+\hbar_{2}+\hbar_{3})\, \tag{1.17}\] and \(i\) and \(j\) are determined by the choice of toric divisor \(\mathbb{C}^{2}\) in \(\mathbb{C}^{3}\). In a sense, the first such generalization of this construction actually predated the above Theorem, and was also discovered by Nakajima in the seminal paper [11]: among many other influential results, this paper constructs an action of the universal enveloping algebra of a Kac-Moody Lie algebra \(\mathcal{U}(\widehat{\mathfrak{g}})\), which is indeed the algebra of modes of the closely related vertex algebra \(V_{\kappa}(\mathfrak{g})\), on the homology of the moduli space of instantons on the resolved du Val singularity of the corresponding ADE type; for concreteness we will restrict our attention to type \(A\) in the following. These results were another key step in the family of generalizations we propose in the present work, and in a sense we can understand the preceding example as simply the \(\mathfrak{gl}_{1}\) analogue of this more general construction, but this example is also slightly degenerate in the following sense: the action of the torus \(\check{T}\) on \(\widetilde{A_{m-1}}\) viewed as a toric divisor in \(\widetilde{A_{m-1}}\times\mathbb{A}^{1}\) factors through a one dimensional quotient, so there is not a parameterization of the generic level as in Equation 1.17. In fact, the level is determined by the rank of the instantons, which otherwise does not modify the resulting algebra. Another construction of an action of \(\mathcal{U}(V_{\kappa}(\mathfrak{g}))\) was given by Braverman in [1], on a moduli space of sheaves on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) with structure group corresponding to the ADE type, a reduction of structure to a Borel along \(\{0\}\times\mathbb{P}^{1}\), and trivialization along \((\{\infty\}\times\mathbb{P}^{1})\cup(\mathbb{P}^{1}\times\{\infty\})\). Although the level of the action was not explicitly computed in _loc. cit._, it was implicitly determined by the other results therein, and given by \[\kappa=-h^{\vee}-\frac{\hbar_{2}}{\hbar_{1}}\. \tag{1.18}\]
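Returning briefly to the presentation of \(H^{\bullet}_{\check{T}}(\operatorname{pt})\) in Equation 1.17: for \(Y=\mathbb{C}^{3}\) with structure torus \(T\cong(\mathbb{C}^{\times})^{3}\) and equivariant parameters \(\hbar_{1},\hbar_{2},\hbar_{3}\), the Calabi-Yau subtorus is \[\check{T}=\ker\big(T\xrightarrow{\ t_{1}t_{2}t_{3}\ }\mathbb{C}^{\times}\big)\,\] the stabilizer of the holomorphic volume form \(dx\wedge dy\wedge dz\), which has weight \(\hbar_{1}+\hbar_{2}+\hbar_{3}\); this is exactly the relation imposed in the quotient presentation of \(H^{\bullet}_{\check{T}}(\operatorname{pt})\) above. 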
Braverman's result would later be reinterpreted as a special case of a generalization of the AGT conjecture to subprincipal \(W\)-algebras, and we will give a geometric approach to the more general conjecture which can be understood as a reformulation and generalization of the results of _loc. cit._. A fundamental breakthrough in the study of analogous vertex algebra actions on moduli spaces of sheaves of higher rank came from physics in the paper [1] mentioned above. Among other observations which have not yet been as carefully codified into mathematics, the authors conjectured that the vertex algebra which acts analogously on the \(\check{T}\)-equivariant cohomology of the moduli space of instantons of rank \(r\) on \(\mathbb{C}^{2}_{xy}\) is given by the principal affine \(W\)-algebra \(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r})\), a quantum Hamiltonian reduction of the affine algebra \(V_{\kappa}(\mathfrak{gl}_{r})\), at level \(\kappa\) as in Equation 1.18, which we will refer to as the AGT conjecture. The AGT conjecture as stated in type A was proved independently by Schiffmann-Vasserot [13] and Maulik-Okounkov [14] using quite different constructions. This paper follows the approach of [13], but as we will later speculate it is possible that some ingredients of the approach of [14] will be useful in completing the proof of the general conjecture. Another distinct proof was given for general ADE type by Braverman-Finkelberg-Nakajima in [1]. We now state the result in terms of the notions we have introduced above: **Theorem 1.2**.: _[_13_, 14, 15_]_ _There exists a natural representation_ \[\mathcal{U}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))\to\operatorname{End}_{F}(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]})\,\] _such that \(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]}\) is identified with the universal Verma module \(\mathbb{M}_{r}\) for \(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r})\)._ In [16], the authors extended this result to give an analogous geometric construction of the vacuum module as the cohomology of the moduli space of stable representations of the variant of the ADHM construction in Equation 1.9 with \(A_{\mathrm{f}}\) given by a principal nilpotent, following many of the calculations previously established in [13]. Note that this is precisely the framing structure determined by the divisor \(r[\mathbb{C}^{2}]\), and thus in the notation we have introduced above, we have: **Theorem 1.3**.: _[_16_]_ _There exists a natural representation_ \[\mathcal{U}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))\to\operatorname{End}_{F}(\mathbb{V}_{r[\mathbb{C}^{2}]})\,\] _such that \(\mathbb{V}_{r[\mathbb{C}^{2}]}\) is identified with the vacuum module for \(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r})\)._ In particular, we obtain the identification \[P_{q}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))=\prod_{j=1}^{r}\prod_{k=0}^{\infty}\frac{1}{1-q^{k+j}}=\mathcal{Z}^{\mathrm{VW}}_{r[\mathbb{C}^{2}]}(q)\,\] where we have used the usual abuse of notation denoting the vacuum module by the underlying vertex algebra \(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r})\). These results are also closely related to the cohomological variant of [12] and similarly the results of [15] in their simpler specialization to the case \(n=0\). The formulation of Schiffmann-Vasserot is also analogously related to unpublished results of Feigin-Tsymbaliuk as described in [17], and in turn previous results of Feigin and collaborators in the context of quantum toroidal algebras, as we will explain in more detail below. 
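As a quick consistency check of the vacuum character formula just displayed (we caution that conventions for the grading shift vary between references): for \(r=1\) the principal \(W\)-algebra reduces to the Heisenberg algebra \(\pi\), and the formula becomes \[P_{q}(\pi)=\prod_{k=0}^{\infty}\frac{1}{1-q^{k+1}}=\prod_{n=1}^{\infty}\frac{1}{1-q^{n}}=\mathcal{Z}^{\mathrm{VW}}_{\mathbb{C}^{2}}(q)\,\] recovering the identification of Equation 1.16; more generally the \(j\)th factor records the modes of the weight \(j\) generating field of \(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r})\), which act on the vacuum in degrees \(n\geq j\). 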
The result was also generalized in [14] to the case that \(Y=\mathbb{C}^{3}\) and \(S\) is given by an arbitrary toric divisor \(S_{L,M,N}=L[\mathbb{C}^{2}_{yz}]+M[\mathbb{C}^{2}_{xy}]+N[\mathbb{C}^{2}_{xz}]\), again closely following the approach of [14], giving an action of the Gaiotto-Rapcak vertex algebras \(Y_{L,M,N}\) introduced in [11] on the moduli space of stable representations of the quiver with potential of Equation 1.10, such that it is identified with the analogue of the Verma module for this algebra. The proposal that there should be analogous vertex algebras corresponding to more general divisors \(S\) and threefolds \(Y\) was considered already in the original paper of Gaiotto-Rapcak [11], and explored in some detail in [12], but the analogue of the AGT conjecture in this setting was not known in general, as the relevant descriptions of moduli spaces generalizing the spiked instantons construction of Nekrasov to threefolds \(Y\) other than \(\mathbb{C}^{3}\) had not been constructed previously. There is also a closely related conjecture of Feigin-Gukov [15] that there should exist vertex operator algebras \(\operatorname{VOA}[M_{4},\mathfrak{gl}_{r}]\) associated to four-manifolds \(M_{4}\), analogously generalizing the AGT conjecture. This appears to coincide with the predictions of [11] discussed in the preceding paragraph in the case that the underlying reduced scheme \(S^{\operatorname{red}}\) is irreducible and smooth, \(M_{4}\) is the analytification of \(S^{\operatorname{red}}\), and \(r\in\mathbb{N}\) the multiplicity of \(S^{\operatorname{red}}\) in \(S\). However, a mathematical definition of these vertex algebras was also not known in general, neither for a divisor \(S\) nor for a four-manifold \(M_{4}\), and relatively few examples were known for \(r\geq 2\). In the companion paper [10], we give a general combinatorial construction of vertex algebras \(\mathbb{V}(Y,S)\) as the kernel of screening operators acting on lattice vertex algebras determined by the data of the GKM graph of \(Y\) and a Jordan-Holder filtration of \(\mathcal{O}_{S}\) with subquotients the structure sheaves \(\mathcal{O}_{S_{d}}\) of the divisors \(S_{d}\) occurring as irreducible, reduced components of \(S\). This construction reproduces many interesting vertex algebras, conjecturally including \(\mathcal{W}\) superalgebras \(\mathcal{W}_{f_{0},f_{1}}(\mathfrak{gl}_{m|n})\) and genus zero class S chiral algebras \(\mathbb{V}(\mathbb{P}^{1},\mathfrak{gl}_{m};f_{1},...,f_{k})\) with \(k\leq 2\) marked points, each in type A and for general nilpotents \(f_{i}\), and appears to satisfy the predictions of [11], [12], and [13]. In particular, we formulate the following analogue of the AGT conjecture in this setting: _Conjecture 1.4_.: There exists a natural representation \[\rho:\mathcal{U}(\mathbb{V}(Y,S))\to\operatorname{End}_{F}(\mathbb{V}_{S}) \tag{1.19}\] of the algebra of modes \(\mathcal{U}(\mathbb{V}(Y,S))\) of the vertex algebra \(\mathbb{V}(Y,S)\) on the vector space \(\mathbb{V}_{S}\), such that \(\mathbb{V}_{S}\) is identified with the vacuum module \(\mathbb{V}_{(Y,S)}\) for the vertex algebra \(\mathbb{V}(Y,S)\). As a consequence of this conjecture, we expect equality between the local Vafa-Witten series \(\mathcal{Z}^{\operatorname{VW}}_{S}(q)\) of the divisor \(S\) defined above and the Poincare series \(P_{q}\) of the vacuum module \(\mathbb{V}_{(Y,S)}\), that is \[\mathcal{Z}^{\operatorname{VW}}_{S}(q)=P_{q}(\mathbb{V}_{(Y,S)})\ \in\mathbb{Z}[\![q]\!]\. \tag{1.20}\]
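The simplest non-reduced instance of such a Jordan-Holder filtration: for \(S=2[\mathbb{C}^{2}]=\{x^{2}=0\}\subset\mathbb{C}^{3}\) we have \(S^{\mathrm{red}}=\mathbb{C}^{2}\) with multiplicity \(r=2\), and the filtration \((x)/(x^{2})\subset\mathcal{O}_{\mathbb{C}^{3}}/(x^{2})\) realizes \(\mathcal{O}_{S}\) as the non-trivial extension \[0\longrightarrow\mathcal{O}_{\mathbb{C}^{2}}\xrightarrow{\ \cdot x\ }\mathcal{O}_{S}\longrightarrow\mathcal{O}_{\mathbb{C}^{2}}\longrightarrow 0\,\] so that the framing structure \(\mathrm{f}_{S}\) determined by \(\mathcal{O}_{S}[1]\) differs from the trivial one \(0_{S}\), whose underlying object is the split extension \(\mathcal{O}_{\mathbb{C}^{2}}^{\oplus 2}[1]\); this is the geometric counterpart of the principal versus zero nilpotent framing matrices appearing in Theorems 1.2 and 1.3. 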
We have not been able to prove Conjecture 1.4 at the time of writing. However, we do provide a construction of the conjectural representation structure map of Equation 1.19 as a special case of Theorem B below, and we can verify directly that this satisfies the conjecture in the case that \(S\) is a smooth subvariety of \(Y\) and the corresponding moduli space parameterizes sheaves of rank \(1\) on \(S=S^{\operatorname{red}}\), as we discuss further below. It is not possible to directly verify that the resulting endomorphisms satisfy the relations of the algebra of modes \(\mathcal{U}(\mathbb{V}(Y,S))\) in general, as even in the case of rank \(r\) sheaves on \(\mathbb{C}^{2}\) considered in [14] this is not possible. However, the construction of the vertex algebras \(\mathbb{V}(Y,S)\) we give in [10], as the joint kernels of certain collections of screening operators, provides a family of compatible free field realizations of these vertex algebras which are in some sense defined to allow for a proof of Conjecture 1.4 generalizing that of [14], as we will explain further below. We hope to complete the proof following this approach in future work. We now describe the construction of the representation of \(\mathcal{U}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))\) from [13], and our generalization of the construction for arbitrary divisors \(S\) in threefolds \(Y\) of the class considered here. The construction of _loc. cit._ is in essence the same as the original construction of Nakajima defined above, but it proceeds in stages and constructs actions of several auxiliary algebras which are necessary to provide the abstract proof of the desired relations implicit in Theorem 1.2 above. From the geometric perspective we have advocated for in Section 1.1 above, the correspondences of Nakajima in Equation 1.14 above can be understood as adding \(k\) copies of the structure sheaf \(\mathcal{O}_{\mathrm{pt}}\) of a point in \(\mathbb{C}^{2}\) to an iterated extension of \(n\) copies of \(\mathcal{O}_{\mathrm{pt}}\) together with a single copy of \(\mathcal{O}_{\mathbb{C}^{2}}[1]\), or removing \(-k\) copies of \(\mathcal{O}_{\mathrm{pt}}\) if \(k<0\). In fact, the relations between these correspondences can be understood more primitively in the absence of the auxiliary copy of \(\mathcal{O}_{\mathbb{C}^{2}}[1]\). 
As we have explained in Section 1.1, the stack of compactly supported coherent sheaves of length \(n\) on \(\mathbb{C}^{2}\) is equivalent to that of \(n\)-dimensional representations of the unframed quiver with relations underlying the ADHM quiver, given by a single vertex \(Q_{\mathbb{C}^{2}}\) with two loops \(B_{1},B_{2}\) satisfying \([B_{1},B_{2}]=0\), so that we have \[\mathfrak{M}_{n}(\mathbb{C}^{2})=[C_{n}/\mathrm{GL}_{n}]\qquad\text{where}\qquad C_{n}=\{(B_{1},B_{2})\in\mathfrak{gl}_{n}^{\times 2}\ |\ [B_{1},B_{2}]=0\}\,\] and we denote the corresponding Borel-Moore homology groups by \[\mathcal{H}(\mathbb{C}^{2})=\bigoplus_{n\in\mathbb{N}}\mathcal{H}_{n}(\mathbb{C}^{2})=\bigoplus_{n\in\mathbb{N}}H_{\bullet}(\mathfrak{M}_{n}(\mathbb{C}^{2}))\.\] There are analogous correspondences between the spaces \(\mathfrak{M}_{n}(\mathbb{C}^{2})\), defined by the stacks of short exact sequences \(\mathfrak{M}_{k,l}(\mathbb{C}^{2})\) of representations of the unframed quiver of dimension \(k+l\), with a subobject of dimension \(k\) and quotient object of dimension \(l\), which induce maps \[p_{*}\circ q^{*}:\mathcal{H}_{k}(\mathbb{C}^{2})\otimes\mathcal{H}_{l}(\mathbb{C}^{2})\to\mathcal{H}_{k+l}(\mathbb{C}^{2})\qquad\text{and thus}\qquad m:\mathcal{H}(\mathbb{C}^{2})^{\otimes 2}\to\mathcal{H}(\mathbb{C}^{2})\,\] which defines an associative algebra structure on \(\mathcal{H}(\mathbb{C}^{2})\); the resulting algebra \(\mathcal{H}(\mathbb{C}^{2})\) is called the _preprojective cohomological Hall algebra_ of \(\mathbb{C}^{2}\). Further, the stacks of short exact sequences of iterated extensions of \(n+k\) copies of the structure sheaf of a point and a single copy of \(\mathcal{O}_{\mathbb{C}^{2}}[1]\), with subobject given by an iterated extension of \(k\) copies of \(\mathcal{O}_{\mathrm{pt}}\), define analogous correspondences (1.21) and induce a representation of \(\mathcal{H}(\mathbb{C}^{2})\) on \(\mathbb{V}_{\mathbb{C}^{2}}\), which we denote by \[\rho_{\mathbb{C}^{2}}:\mathcal{H}(\mathbb{C}^{2})\to\mathrm{End}_{F}(\mathbb{V}_{\mathbb{C}^{2}})\.\] From this perspective, the Nakajima operator \(\alpha_{-1}:\mathbb{V}_{\mathbb{C}^{2}}\to\mathbb{V}_{\mathbb{C}^{2}}\) of Equation 1.15 is given by the action of the fundamental class \([\mathfrak{M}_{1}(\mathbb{C}^{2})]\in\mathcal{H}_{1}(\mathbb{C}^{2})\) under this representation. More generally, the image of the spherical subalgebra \(\mathcal{SH}(\mathbb{C}^{2})\subset\mathcal{H}(\mathbb{C}^{2})\) generated by \(\mathcal{H}_{1}(\mathbb{C}^{2})\) includes the Nakajima operators \(\alpha_{k}\) for all \(k>0\), and we obtain an isomorphism \[\mathcal{U}(\pi)_{+}\xrightarrow{\cong}\rho_{\mathbb{C}^{2}}(\mathcal{SH}(\mathbb{C}^{2}))\,\] between the positive half \(\mathcal{U}(\pi)_{+}\) of the algebra of modes \(\mathcal{U}(\pi)\) of the Heisenberg vertex algebra \(\pi\), and the image of the spherical subalgebra \(\mathcal{SH}(\mathbb{C}^{2})\) under the above representation \(\rho_{\mathbb{C}^{2}}\). 
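In the smallest case \(n=1\) this description is completely explicit: the commutator condition is vacuous and \(\mathrm{GL}_{1}\) acts trivially by conjugation, so (suppressing grading conventions, which vary between references) \[\mathfrak{M}_{1}(\mathbb{C}^{2})=[\mathbb{C}^{2}/\mathrm{GL}_{1}]\qquad\text{and}\qquad\mathcal{H}_{1}(\mathbb{C}^{2})\cong H^{\mathrm{GL}_{1}}_{\bullet}(\mathbb{C}^{2})\,\] a free module of rank one over \(H^{\bullet}(B\mathrm{GL}_{1})\) generated by the fundamental class \([\mathfrak{M}_{1}(\mathbb{C}^{2})]\); the spherical subalgebra \(\mathcal{SH}(\mathbb{C}^{2})\) is by definition generated by this \(n=1\) graded component. 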
In fact, the analogous results for higher rank sheaves on \(\mathbb{C}^{2}\) were proved in [13], giving a representation \[\rho^{0}_{r[\mathbb{C}^{2}]}:\mathcal{H}(\mathbb{C}^{2})\to\operatorname{End}_{F}(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]})\qquad\text{inducing}\qquad\mathcal{U}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))_{+}\xrightarrow{\cong}\rho^{0}_{r[\mathbb{C}^{2}]}(\mathcal{SH}(\mathbb{C}^{2}))\.\] Similarly, the action of the opposite algebra \(\mathcal{SH}(\mathbb{C}^{2})^{\mathrm{op}}\) by adjoints with respect to the equivariant integration pairing on \(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]}\) determines the action of the negative half of the algebra, and this provides the construction of the representation structure map in Theorem 1.2 above. This approach was also generalized in [14] to the case of an arbitrary toric divisor \(S_{L,M,N}\) in \(Y=\mathbb{C}^{3}\), as mentioned above. The authors consider the analogous cohomological Hall algebra constructed from the homology of the stack of representations of the corresponding unframed quiver with potential, or equivalently the stack of compactly supported coherent sheaves on \(\mathbb{C}^{3}\), with coefficients in the sheaf of vanishing cycles \[\mathcal{H}(\mathbb{C}^{3})=\bigoplus_{n\in\mathbb{N}}\mathcal{H}_{n}(\mathbb{C}^{3})\qquad\mathcal{H}_{n}(\mathbb{C}^{3})=H_{\bullet}(\mathfrak{M}_{n}(\mathbb{C}^{3}),\varphi_{W_{\mathbb{C}^{3}}})\,\] which was introduced in general by Kontsevich-Soibelman in [14], where it was proved that the analogous correspondences always define an associative algebra structure in this setting. Moreover, it was proved in [14] that the analogous correspondences define a representation of \(\mathcal{H}(\mathbb{C}^{3})\) on the analogous homology groups \(\mathbb{V}_{S_{L,M,N}}\) of the moduli spaces of stable representations of the framed quiver with potential of Equation 1.10 with coefficients in the sheaf of vanishing cycles, and that the analogous extension to the opposite algebra by adjoints determines a representation of the algebra of modes \(\mathcal{U}(Y_{L,M,N})\) of the vertex algebra \(Y_{L,M,N}\) introduced in [10]. In particular, in the case that \(L=N=0\), this construction reduces exactly to the construction of [13] explained above: the dimensional reduction equivalence of Equation 1.13 extends to an identification of modules under an analogous isomorphism of associative algebras \(\mathcal{H}(\mathbb{C}^{3})\cong\mathcal{H}(\mathbb{C}^{2})\), which follows from the results of [12] and the appendix to [14]. In general, for a Calabi-Yau threefold resolution \(Y\to X\) satisfying the hypotheses outlined above, the construction of [14] again provides an associative algebra structure on the homology of the stack of representations of the corresponding unframed quiver with potential, or equivalently the stack of compactly supported perverse coherent sheaves in the sense of Bridgeland [1], with coefficients in the sheaf of vanishing cycles determined by the potential. We denote the resulting algebra by \(\mathcal{H}(Y)\) and call it the Kontsevich-Soibelman cohomological Hall algebra of \(Y\). 
For an object \(M\in\operatorname{D}^{b}\!\operatorname{Coh}(Y)^{T}\) satisfying the hypotheses required to define the stack \(\mathfrak{M}^{\mathrm{f}}(Y,M)\) of \(\mathrm{f}\)-framed perverse coherent extensions of \(M\), and a compatible stability condition \(\zeta\), recall from Equation 1.12 that we define the associated cohomological invariant by \[\mathbb{V}^{\mathrm{f},\zeta}(M)=\bigoplus_{\mathbf{d}\in\mathbb{N}^{Q_{Y}}}H_{\bullet}(\mathfrak{M}^{\mathrm{f},\zeta}_{\mathbf{d}}(Y,M),\varphi_{W^{\mathrm{f}}_{M}})\.\] We prove the following general result, following the approach of Soibelman from [11]: _Theorem_ B (5.6).: There exists a natural representation \[\rho_{M}:\mathcal{H}(Y)\to\operatorname{End}(\mathbb{V}^{\mathrm{f},\zeta}(M))\] of the Kontsevich-Soibelman cohomological Hall algebra \(\mathcal{H}(Y)\) on \(\mathbb{V}^{\mathrm{f},\zeta}(M)\). In particular, in the case that \(M=\mathcal{O}_{S^{\operatorname{red}}}^{\operatorname{ss}}[1]\) with framing structure \(\mathrm{f}\) of rank \(\mathbf{r}_{S}\) given by \(\mathrm{f}_{S}\) or \(0_{S}\), as defined above, we obtain: _Corollary 1.5_.: There exist natural representations \[\rho_{S}:\mathcal{H}(Y)\to\operatorname{End}_{F}(\mathbb{V}_{S})\qquad\text{and}\qquad\rho_{S}^{0}:\mathcal{H}(Y)\to\operatorname{End}_{F}(\mathbb{V}_{S}^{0})\.\] Moreover, this representation can be restricted to a spherical subalgebra \(\mathcal{SH}(Y)\subset\mathcal{H}(Y)\) or extended to a representation of the opposite algebra \(\mathcal{SH}^{\operatorname{op}}(Y)\) by adjoints, and our expectation is that this provides a construction of the desired representation of Conjecture 1.4, via an identification \[\mathcal{U}(\mathbb{V}(Y,S))_{+}\xrightarrow{\cong}\rho_{S}(\mathcal{SH}(Y))\.\] In particular, in the case \(S=r[\mathbb{C}^{2}]\) the representation \(\rho_{r[\mathbb{C}^{2}]}^{0}\) is that used in the proof of the AGT conjecture in [13], and the variant \(\rho_{r[\mathbb{C}^{2}]}\) is that used in [11], as described above. We emphasize one key feature of our construction which differs significantly from previous constructions of vertex algebras corresponding to general algebraic surfaces \(S\) in the spirit of the AGT conjecture: the moduli spaces \(\mathfrak{M}^{\mathrm{f},\zeta}(Y,\mathcal{O}_{S^{\operatorname{red}}}^{\operatorname{ss}}[1])\) typically parameterize spaces of sheaves on surfaces with components corresponding to a lattice of possible values of the first Chern class \(c_{1}(\mathcal{E})\) of the sheaves, in addition to subcomponents corresponding to the possible values of \(c_{2}(\mathcal{E})\) usually considered. Moreover, the correspondences used in the construction of the representation in Theorem B are interpreted geometrically as modifying the sheaves along curve classes within the divisor in a way that relates components of the moduli space with distinct values of \(c_{1}(\mathcal{E})\). Similarly, the construction of the vertex algebras \(\mathbb{V}(Y,S)\) in the companion paper [10], in the case that \(S\) is a smooth subvariety of \(Y\) and the corresponding moduli space is of sheaves of rank \(1\) on \(S=S^{\operatorname{red}}\), gives a lattice vertex algebra \(V_{H_{2}(S;\mathbb{Z})}\) tensored with the Heisenberg algebra \(\pi_{H_{0}(S)}\), where the components of the lattice extension correspond to components of the moduli space with distinct possible values of \(c_{1}(\mathcal{E})\), in contrast to [14] which constructs the Heisenberg subalgebra \(\pi_{H_{\bullet}(S)}\). 
In fact, a similar construction was outlined in the final chapter of [14] in the rank \(1\) case, and our construction of \(\mathbb{V}(Y,S)\) can be understood as providing a higher rank generalization of the proposal of _loc. cit._. As we mentioned following Conjecture 1.4 above, we can verify the conjecture by direct computation in the rank \(1\) case, as the relations in the algebra of modes of the lattice vertex algebra are given by explicit formulas; a proof of this will appear in future work. In higher rank, the construction of \(\mathbb{V}(Y,S)\) is again manifestly a lattice-type vertex algebra extension, conjecturally corresponding to Hecke modifications of higher rank sheaves along curve classes, of the \(W\)-algebras corresponding to modifications of the sheaves at points of the surface. This is in contrast to the construction of [15], which induces an action on a moduli space of sheaves with fixed first Chern class. Finally, we discuss the role of the affine Yangian \(\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})\) of \(\mathfrak{gl}_{1}\) in the proof of the AGT conjecture, which occurs in both [13] and [16], and its putative generalization in the present context. As we have explained, the action of the positive and negative halves of the algebra of modes \(\mathcal{U}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))\) on the module \(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]}\) are given by the image of \(\mathcal{SH}(\mathbb{C}^{3})\) and \(\mathcal{SH}(\mathbb{C}^{3})^{\operatorname{op}}\) under the representation \(\rho_{r[\mathbb{C}^{2}]}^{0}:\mathcal{H}(\mathbb{C}^{3})\to\operatorname{End}_{F}(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]})\) above, for each \(r\in\mathbb{N}\). It is proved in [13] that these extend to the action of an algebra which was later identified with the affine Yangian \(\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})\) of \(\mathfrak{gl}_{1}\). This algebra admits a triangular decomposition \[\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})=\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})_{-}\otimes\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})_{0}\otimes\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})_{+}\qquad\text{with}\qquad\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})_{+}\cong\mathcal{SH}(\mathbb{C}^{3})\qquad\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})_{-}\cong\mathcal{SH}(\mathbb{C}^{3})^{\mathrm{op}}\,\] such that the representations of \(\mathcal{SH}(\mathbb{C}^{3})\) and \(\mathcal{SH}(\mathbb{C}^{3})^{\mathrm{op}}\) above extend to define a representation \[\rho_{r[\mathbb{C}^{2}]}:\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})\to\operatorname{End}_{F}(\mathbb{V}_{r[\mathbb{C}^{2}]})\] for each \(r\in\mathbb{N}\), inducing a surjection \[\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})\to\mathcal{U}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))\,\] up to issues of completions which we will continue to ignore in the remaining discussion below. 
This additional observation played an important role in the proof of the AGT conjecture in [13], following a crucial suggestion of Nakajima: The basic idea is that the Verma modules \(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]}\) under consideration satisfy the factorization property \(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]}\cong\mathbb{V}^{\otimes r}_{\mathbb{C}^{2}}\) and there exists a compatible coproduct \(\Delta:\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})\to\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})^{\otimes 2}\), in the sense that for \(r=r_{1}+r_{2}\) the action on \(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]}\) is intertwined, via \(\Delta\) and the factorization isomorphism, with the tensor product of the actions on \(\mathbb{V}^{0}_{r_{1}[\mathbb{C}^{2}]}\) and \(\mathbb{V}^{0}_{r_{2}[\mathbb{C}^{2}]}\). Moreover, the image of the induced embedding \[W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r})\to W^{\kappa_{1}}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r_{1}})\otimes W^{\kappa_{2}}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r_{2}})\] is characterized by the kernel of a screening operator acting on the latter tensor product of vertex algebras. The proof of this fact uses the compatible structure of the Feigin-Frenkel resolutions of [11] to reduce to the case \(r_{1}=r_{2}=1\), for which the calculation can be directly checked, and this implies the action factors through \(\mathcal{U}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))\) in general; this is essentially the proof of Theorem 1.2 from [13]. A physical explanation of this mechanism was also given in [10]. In the case of a general threefold \(Y\), it is natural to expect that there exists a universal algebra \(\mathcal{Y}(Y)\) admitting an analogous triangular decomposition \[\mathcal{Y}(Y)=\mathcal{Y}(Y)_{-}\otimes\mathcal{Y}(Y)_{0}\otimes\mathcal{Y}(Y)_{+}\qquad\text{with}\qquad\mathcal{Y}(Y)_{+}\cong\mathcal{SH}(Y)\qquad\mathcal{Y}(Y)_{-}\cong\mathcal{SH}(Y)^{\mathrm{op}}\,\] such that there exist representations of \(\mathcal{Y}(Y)\) on the vector spaces \(\mathbb{V}_{S}\) factoring through those of Conjecture 1.4. Indeed, from the construction of the seminal paper [20] and its cohomological variant [21], together with the dimensional reduction isomorphisms between critical and preprojective cohomological Hall algebras of the type proved in [14] and the appendix to [15], it is natural to expect that in the case \(Y=Y_{m,0}=\widetilde{A_{m-1}}\times\mathbb{C}\) the corresponding algebra is given by the affine Yangian \(\mathcal{Y}(\widehat{\mathfrak{gl}}_{m})\) of \(\mathfrak{gl}_{m}\), in the sense defined for \(\mathfrak{sl}_{m}\) in [17]. In fact, based on considerations from string theory which we will discuss in Section 1.3 below, as well as the many existing results in mathematics we have mentioned, Costello [18] proposed a family of conjectures about geometric actions of affine Yangian type quantum groups in this setting. One observation of _loc. cit._ was that while the positive and negative parts \(\mathcal{Y}(Y)_{\pm}\) of the putative algebra are determined by the universal algebra \(\mathcal{SH}(Y)\), the structure of the algebra \(\mathcal{Y}_{M}(Y)\) and in particular its Cartan subalgebra \(\mathcal{Y}_{M}(Y)_{0}\) should, as the notation suggests, depend on the choice of object \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\), as well as the framing structure \(\mathrm{f}\), that determines the homology group \(\mathbb{V}^{\mathrm{f},\zeta}(M)\) on which we would like to construct the representation \[\rho_{M}:\mathcal{Y}_{M}(Y)\to\mathrm{End}_{F}(\mathbb{V}^{\mathrm{f},\zeta}(M))\. \tag{1.22}\] In particular, it was conjectured in _loc. cit._ that these variants correspond to shifted variants of affine Yangian type quantum groups, in the sense of [1] and [12]. 
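For the Heisenberg subalgebra this compatibility is transparent: the modes \(b_{k}\) of Theorem 1.1 carry the standard primitive coproduct \[\Delta(b_{k})=b_{k}\otimes 1+1\otimes b_{k}\,\] under which the factorization \(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]}\cong\mathbb{V}^{\otimes r}_{\mathbb{C}^{2}}\) is manifestly equivariant; we caution that on the remaining generators of \(\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})\) the coproduct is not primitive, and its construction is one of the substantive points of the references above. 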
In addition to the main result quoted in Theorem B above, we outline a construction of a representation of this larger algebra, following the approach of [13] and in turn [14]. We also attempt to explain the resulting computations in language closely paralleling that introduced in the string theory papers of Li-Yamazaki [15] and Galakhov-Li-Yamazaki [16], which appear to have provided computations that establish some of the conjectures stated here, and we discuss these relationships further in Section 1.3 below. We hope that this paper will help to translate the results of _loc. cit._ into geometric representation theory. Let us conclude this section of the introduction by mentioning the compatibility of the preceding approach with the definition of the vertex algebras \(\mathbb{V}(Y,S)\) given in the companion paper [11], towards the proof of Conjecture 1.4, as alluded to above. In the setting of Corollary 1.5, the conjectural extension of the representation \(\rho_{S}^{0}\) is expected to give a surjection \[\mathcal{Y}_{S}(Y)\to\mathcal{U}(\mathbb{V}(Y,S))\, \tag{1.23}\] induced by the representations of these algebras on the common vector space \(\mathbb{V}^{0}_{S}\). Further, we have \[\mathbb{V}^{0}_{S}\cong\bigotimes_{d\in\mathfrak{D}_{S}}\mathbb{V}^{\otimes r_{d}}_{S_{d}}\qquad\text{so that}\qquad\mathbb{V}^{0}_{S}\cong\mathbb{V}^{0}_{S_{1}}\otimes\mathbb{V}^{0}_{S_{2}}\,\] if \(S=S_{1}+S_{2}\). Moreover, if we pick a composition series of \(\mathcal{O}_{S}\), \(\mathcal{O}_{S_{1}}\) and \(\mathcal{O}_{S_{2}}\) in terms of the structure sheaves \(\mathcal{O}_{S_{d}}\) of the irreducible components \(S_{d}\) of \(S^{\mathrm{red}}\), compatible in the sense that \[\mathcal{O}_{S_{1}}\to\mathcal{O}_{S}\to\mathcal{O}_{S_{2}}\,\] defines a short exact sequence compatible with the filtrations, then we obtain an embedding of vertex algebras \[\mathbb{V}(Y,S)\to\mathbb{V}(Y,S_{1})\otimes\mathbb{V}(Y,S_{2})\, \tag{1.24}\] with image characterized by the kernel of a screening operator; this is essentially by definition in the construction of the companion paper [11]. Thus, it is natural to speculate that there exist coproduct maps intertwining the representations of Equation 1.23 with the factorizations above, such that the induced embedding of vertex algebras is given by that of Equation 1.24 above. These conjectural coproducts appear to be the affine analogues of the coproducts constructed in [1] and [12], and as we will explain in the final Section 6.3, the maps of the form of Equation 1.23 above for an appropriate choice of divisor in a threefold \(Y_{m,n}\) resolving the singularity \(X_{m,n}=\{xy-z^{m}w^{n}\}\) are expected to give a geometric construction of Brundan-Kleshchev type isomorphisms between affine \(W\)-superalgebras in type A and quotients of shifted Yangians for the affine superalgebras \(\widehat{\mathfrak{gl}}_{m|n}\). We hope to give a geometric construction of the coproducts and complete the proof of Conjecture 1.4 following this approach in future work.

### Motivation from string theory

The broader context for the results of this paper is a family of interconnected mathematical ideas at the intersections of enumerative algebraic geometry, low dimensional topology, geometric representation theory, and integrable systems, which follow predictions from supersymmetric quantum field theory and string theory. 
In particular, the conjectures of Alday-Gaiotto-Tachikawa from [1], their generalizations, and adaptations thereof in mathematics, are a central example of these ideas and are the primary motivation for the present paper, as we have explained in the preceding section. In this section, we will recall the well-known physical arguments behind the AGT conjecture, as well as its generalization in [11], [12], and [13], and give a cartoon of the physics argument to explain its connections with cohomological Hall algebras and affine Yangian type quantum groups, following the series of papers [14], [15], as well as [16], [17], and [18]. We will mention some further references which have influenced our thinking, but make no attempt to give systematic attributions for the physics arguments below, which we emphasize are only a cartoon to motivate the mathematical results of this paper, as well as the many further conjectures we state and various expected connections between them. The contemporary perspective on the AGT conjecture begins with the existence of a family of special quantum field theories in six real spacetime dimensions, admitting an ADE classification, and defined by having symmetries governed by the six dimensional \(\mathcal{N}=(2,0)\) superconformal algebra. The definition of these theories is poorly understood even by the standards of theoretical physics, but it is widely accepted that they exist based on an impressive web of compatible connections between better understood supersymmetric quantum field theories in lower dimensions, and corresponding relationships with topics in the pure mathematics canon from homological knot invariants [1] to the quantum geometric Langlands correspondence [1]. Indeed, the vertex algebras \(\mathbb{V}(Y,S)\) considered here, defined in the companion paper [10], are closely related to both these subjects. The six dimensional \(\mathcal{N}=(2,0)\) superconformal field theories admit a holomorphic-topological twist, a certain deformation of the quantum field theory defined on six dimensional manifolds that are the product of a smooth algebraic curve \(\Sigma\) and a four-manifold \(M_{4}\). This deformation depends only on the underlying holomorphic and topological structures of \(\Sigma\) and \(M_{4}\), respectively, so that any quantities computed in these theories can be understood as invariants of these objects. The AGT conjecture is an example of a general pattern of mathematical predictions arising in quantum field theory from the operation of _dimensional reduction_, a kind of integration or pushforward of quantum field theories along fibrations, which uses the quantum field theory on the total space to encode the geometry of the fibres into a quantum field theory on the base. The total integral or pushforward to a point of a quantum field theory determines concrete mathematical objects such as numbers or vector spaces, which define invariants of the mathematical input data as above, and this operation satisfies an analogue of Fubini's theorem that canonically identifies invariants computed in two distinct ways, by reducing from a product of spaces to the point in different orders. This often leads to identifications of pairs of numbers or vector spaces, as well as various related algebraic or categorical structures, of completely distinct mathematical origins. 
In the setting of the six dimensional \(\mathcal{N}=(2,0)\) theory of type \(\mathfrak{g}\) considered above, this implies a sort of commutativity of the following schematic diagram, which we explain below:

(1.25) \[\begin{array}{ccc}\text{6d }\mathcal{N}=(2,0)\text{ type }\mathfrak{g}\text{ on }\Sigma\times M_{4}&\longrightarrow&\text{4d class }\mathcal{S}(\Sigma)\text{ theory of type }\mathfrak{g}\text{ on }M_{4}\\ \big\downarrow&&\big\downarrow\\ \text{2d theory }T[M_{4},\mathfrak{g}]\text{ on }\Sigma&\longrightarrow&\text{invariants of }(\mathfrak{g},\Sigma,M_{4})\end{array}\]

The top horizontal arrow of the diagram in Equation 1.25 indicates dimensional reduction along \(\Sigma\), which takes the six dimensional \(\mathcal{N}=(2,0)\) superconformal field theory of type \(\mathfrak{g}\) we have considered above and produces a four dimensional \(\mathcal{N}=2\) superconformal field theory. This was discovered in the seminal paper of Gaiotto [16], which already explained many of the key structural features of the correspondence, including identifying the space of superconformal deformations of the four dimensional theory with the Deligne-Mumford compactification of the moduli space of curves, so that the action of the mapping class group determines dualities among these theories. For the holomorphic-topological twist of the six dimensional theory considered above, the reduction to four dimensions induces the topological twist of the corresponding four dimensional \(\mathcal{N}=2\) theories generalizing the one considered in the case of pure gauge theory by Witten in [17] to explain the Donaldson invariants [18]. Thus, we expect that the invariants of \(\mathfrak{g}\), \(\Sigma\) and \(M_{4}\) computed from this perspective should include a family of decorated Donaldson-type invariants of the four-manifold \(M_{4}\), determined by a choice of curve \(\Sigma\) and ADE type \(\mathfrak{g}\). 
The left vertical arrow of the diagram in Equation 1.25 indicates dimensional reduction along \(M_{4}\), and produces a two dimensional \(\mathcal{N}=(0,2)\) supersymmetric quantum field theory \(T[M_{4},\mathfrak{g}]\) on the Riemann surface \(\Sigma\), as was studied with similar applications in [19] and [17]. The holomorphic-topological twist of the six dimensional theory induces a holomorphic twist in two dimensions, so that the algebra of observables of this theory is expected to define a vertex operator algebra, denoted by \(\operatorname{VOA}[M_{4},\mathfrak{g}]\) in [19]. In this context, the natural invariant associated to an algebraic curve \(\Sigma\) by the vertex operator algebra \(\operatorname{VOA}[M_{4},\mathfrak{g}]\) is the space of conformal blocks on \(\Sigma\). Thus, the commutativity of the diagram in Equation 1.25 can be understood concretely as predicting a relationship between the Donaldson-type invariants of the four manifold \(M_{4}\), determined by a choice of curve \(\Sigma\) and ADE type \(\mathfrak{g}\), and the conformal blocks of the vertex operator algebra \(\operatorname{VOA}[M_{4},\mathfrak{g}]\) on \(\Sigma\). The exact mathematical statement is difficult to formulate in general, and does not appear in the literature to our knowledge, but seems to be of the following general form:

_Proposal 1.6_.: The partition function on \(M_{4}\) of the class \(\mathcal{S}(\Sigma)\) theory of type \(\mathfrak{g}\) as the curve \(\Sigma\) varies can be identified with a canonical section of the sheaf of conformal blocks of \(\operatorname{VOA}[M_{4},\mathfrak{g}]\) over \(\overline{\mathfrak{M}}_{g,n}\).

Several aspects of this phenomenon were discovered in the special case that \(\Sigma=E\) is an elliptic curve in the seminal paper of Vafa-Witten [20], prior to almost all of the preceding references. In this case, the class \(\mathcal{S}\) theory for \(\Sigma\) identifies with four dimensional \(\mathcal{N}=4\) superconformal gauge theory of type \(\mathfrak{g}\), where the parameter on the moduli space of elliptic curves determines the complexified gauge coupling \(\tau\) of the gauge theory, and the dualities corresponding to the mapping class group include the famous \(S\)-duality autoequivalence of four dimensional \(\mathcal{N}=4\) gauge theory, which maps \(\tau\mapsto-\frac{1}{\tau}\), generalizing the classical Montonen-Olive electromagnetic duality and providing a physical explanation for the quantum geometric Langlands correspondence [21]. Thus, Vafa-Witten explained that the partition functions \(\mathcal{Z}^{\operatorname{VW}}_{M_{4},\mathfrak{g}}(q)\) of four dimensional \(\mathcal{N}=4\) gauge theory in the Donaldson-Witten twist, given by the generating functions for Euler characteristics of components of the moduli spaces of instantons on four manifolds \(M_{4}\), should define modular forms. They also observed that from the perspective of the alternative reduction, these modular forms should correspond to the characters of modules \(\mathbb{V}_{M_{4},\mathfrak{g}}\) for the vertex algebra \(\operatorname{VOA}[M_{4},\mathfrak{g}]\), viewed as sections of the sheaf of conformal blocks over \(\overline{\mathfrak{M}}_{1,1}\). Concretely, they conjectured

\[\mathcal{Z}^{\operatorname{VW}}_{M_{4},\mathfrak{g}}(q)=P_{q}(\mathbb{V}_{M_{4},\mathfrak{g}})\ \in\mathbb{Z}[\![q]\!]\, \tag{1.26}\]

where \(P_{q}\) denotes the Poincare polynomial with respect to the conformal grading.
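As a simple illustration of Equation 1.26 in the abelian case, we record a standard fact going back to Göttsche and Nakajima; the identification of the relevant vertex algebra with the rank one Heisenberg algebra is the expected one, and is stated here only for orientation. For \(\mathfrak{g}=\mathfrak{gl}_{1}\) and \(M_{4}=\mathbb{C}^{2}\), the components of the instanton moduli space are the Hilbert schemes of points, and

\[\sum_{n\geq 0}\chi\big(\operatorname{Hilb}^{n}(\mathbb{C}^{2})\big)\,q^{n}=\prod_{n\geq 1}\frac{1}{1-q^{n}}\,\]

which, up to the overall prefactor \(q^{-c/24}\), is precisely the character of the Fock module for the rank one Heisenberg vertex algebra, acting on \(\bigoplus_{n}H_{\bullet}(\operatorname{Hilb}^{n}(\mathbb{C}^{2}))\) via Nakajima's correspondences.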
The analogue of Equation 1.26 for divisors \(S\) in Calabi-Yau threefolds \(Y\) is precisely the equality of Equation 1.20.

Another crucial insight leading to the AGT conjecture came in the context of four dimensional \(\mathcal{N}=2\) supersymmetric gauge theories. In the seminal papers [10] and [10], Seiberg and Witten explained that the low energy physics of \(\mathcal{N}=2\) theories on the Coulomb branch is controlled by certain complex integrable systems in terms of the _Seiberg-Witten prepotential_, a quantity related to the variation of Hodge structure on the cohomology of the fibres of the integrable system. In the celebrated paper [11], Nekrasov showed that the Seiberg-Witten prepotential of a four dimensional \(\mathcal{N}=2\) theory can be computed in terms of the instanton partition function on \(\mathbb{R}^{4}\), defined using equivariance with respect to an \(S^{1}\times S^{1}\) action. Nekrasov explained that this construction had a physical interpretation in terms of the \(\Omega\)_-background_, a notion of \(S^{1}\)-equivariance for quantum field theories with a localization mechanism analogous to equivariant cohomology, such that certain quantities in \(S^{1}\)-equivariant theories can be computed using induced quantum field theories on the \(S^{1}\)-fixed points, and the latter appear naturally quantized with respect to \(\hbar\in H^{\bullet}_{S^{1}}(\text{pt})\). This insight turned out to feature prominently in fascinating connections between supersymmetric quantum field theory and quantization of integrable systems, such as those of Nekrasov-Shatashvili [10], [11], and Nekrasov-Witten [10], closely related to our setting of interest. We note that a mathematical formulation of the \(\Omega\)-background construction for factorization algebras was given in Chapter 3 of the thesis [12] of the first author, following the preprints [12, 13].

In the context of the diagram in Equation 1.25, this insight implies that it is sensible to consider a local model for the four manifold given by \(M_{4}=\mathbb{R}^{4}_{\hbar_{1},\hbar_{2}}\), flat space \(\mathbb{R}^{4}\) in the \(\Omega\)-background induced by the \(S^{1}\times S^{1}\) action, and moreover the instanton counting invariants computed by the right hand side of the diagram in this case should be related to the quantization of the Seiberg-Witten integrable system. In the case of class \(\mathcal{S}\) theories of type \(\mathfrak{g}\) determined by \(\Sigma\), the relevant integrable system is the Hitchin system on \(T^{\vee}\operatorname{Bun}_{G}(\Sigma)\), the quantization of which plays a central role in the geometric Langlands correspondence, as described in [1].
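For orientation, we recall the prototypical formula behind the preceding discussion, stated up to standard conventions which vary by signs and normalizations in the literature. For pure \(\mathcal{N}=2\) gauge theory of rank \(r\), the instanton partition function is the generating function

\[Z(\varepsilon_{1},\varepsilon_{2},a;q)=\sum_{n\geq 0}q^{n}\int_{\mathfrak{M}(r,n)}1\,\]

where \(\mathfrak{M}(r,n)\) denotes the moduli space of rank \(r\) framed torsion-free sheaves on \(\mathbb{P}^{2}\) with \(c_{2}=n\), and the integral denotes equivariant integration with respect to the torus acting with parameters \((\varepsilon_{1},\varepsilon_{2},a)\); the Seiberg-Witten prepotential is then recovered in the limit \(\mathcal{F}=\lim_{\varepsilon_{1},\varepsilon_{2}\to 0}\varepsilon_{1}\varepsilon_{2}\log Z\).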
The key breakthrough of Alday-Gaiotto-Tachikawa in [1] was the realization that the vertex algebra that features on the left side of the diagram in Equation 1.25 in this context, which can in retrospect be denoted \(\operatorname{VOA}[\mathbb{R}^{4}_{\hbar_{1},\hbar_{2}},\mathfrak{g}]\) in the language of [10] we have followed above, is the closely related principal affine \(W\)-algebra \(W^{\kappa}_{\rho_{\text{prin}}}(\mathfrak{g})\) of type \(\mathfrak{g}\), that is

\[\operatorname{VOA}[\mathbb{R}^{4}_{\hbar_{1},\hbar_{2}},\mathfrak{g}]=W^{\kappa}_{\rho_{\text{prin}}}(\mathfrak{g})\qquad\text{where}\qquad\kappa=-h^{\vee}-\frac{\hbar_{2}}{\hbar_{1}}\.\]

Moreover, the authors explained the interpretation of the Nekrasov partition functions for class \(\mathcal{S}\) theories as conformal blocks of this vertex algebra on the corresponding curve, which was the original observation of the general type formulated in Proposal 1.6 above.

Shortly afterwards, it was explained by Gaiotto in [11] that there is also a sensible variant of the above conjectures which is essentially local on the curve \(\Sigma\). The restrictions of the observables of a conformal field theory to \(\mathbb{C}\) and to \(\mathbb{C}^{\times}\) are identified with a module \(\mathbb{V}_{M_{4},\mathfrak{g}}\) of the vertex algebra \(\operatorname{VOA}[M_{4},\mathfrak{g}]\) and with its associative algebra of modes \(\mathcal{U}(\operatorname{VOA}[M_{4},\mathfrak{g}])\), respectively; these are the Hilbert space and algebra of operators of a quantum mechanical system on \(\mathbb{R}\) given by reducing along \(S^{1}\) in the formal identification \(\mathbb{C}^{\times}=S^{1}\times\mathbb{R}\), as indicated by the bottom horizontal arrow in the following analogue of the diagram in Equation 1.25:

\[\begin{CD}\text{6d }\mathcal{N}=(2,0)\text{ type }\mathfrak{g}\text{ on }\mathbb{C}^{\times}\times M_{4}\overset{\int_{S^{1}}}{\rightsquigarrow}\text{5d }\mathcal{N}=2\text{ type }\mathfrak{g}\text{ on }\mathbb{R}\times M_{4}\\ @V{}V{\int_{M_{4}}}V@V{}V{\int_{M_{4}}}V\\ \text{2d theory on }\mathbb{C}^{\times}\text{ with observables }\mathcal{U}(\operatorname{VOA}[M_{4},\mathfrak{g}])\overset{\int_{S^{1}}}{\rightsquigarrow}\text{quantum mechanics on }\mathbb{R}\end{CD}\tag{1.27}\]

The analogous commutativity of the preceding diagram implies that the Hilbert space and algebra of operators of this quantum mechanical system can be computed in terms of the five dimensional \(\mathcal{N}=2\) supersymmetric gauge theory on \(\mathbb{R}\times M_{4}\) induced by the reduction on \(S^{1}\) of the six dimensional \(\mathcal{N}=(2,0)\) theory of type \(\mathfrak{g}\). The holomorphic topological twist of the latter induces a five dimensional analogue of the Donaldson-Witten twist, for which the Hilbert space for the theory reduced on a four manifold \(M_{4}\) is given by the homology of the moduli space of instantons on \(M_{4}\), and the algebra of observables is generated by instanton operators, which add or remove charge from instanton particles in the five dimensional gauge theory propagating along the \(\mathbb{R}\) direction. The preceding discussion can be formally summarized by the following:

**Proposition 1.7**.: There exists a natural representation

\[\mathcal{U}(\mathrm{VOA}[M_{4},\mathfrak{g}])\to\mathrm{End}(H_{\bullet}(\mathfrak{M}(M_{4},\mathfrak{g})))\,\]

of the algebra of modes \(\mathcal{U}(\mathrm{VOA}[M_{4},\mathfrak{g}])\) of the vertex algebra \(\mathrm{VOA}[M_{4},\mathfrak{g}]\) on the homology \(H_{\bullet}(\mathfrak{M}(M_{4},\mathfrak{g}))\) of the moduli space of \(\mathfrak{g}\)-instantons on \(M_{4}\), such that the latter is identified with the module \(\mathbb{V}_{M_{4},\mathfrak{g}}\).
This is a schematic generalization of the common mathematical statement of the AGT conjecture, such as that quoted in Theorem 1.2 above from [11], which corresponds to the special case that \(M_{4}=\mathbb{R}^{4}\) so that \(\mathrm{VOA}[M_{4},\mathfrak{g}]=W^{\kappa}_{\rho_{\mathrm{prin}}}(\mathfrak{g})\) and the relevant moduli space of instantons is given by the ADHM construction. In the abelian case, this generalization was essentially discovered by Nakajima, in the sense explained in [13] for example. Even the paper of Vafa-Witten [12] followed the results of Nakajima [13] in the relevant example.

Unfortunately, even after the inspiring new proposals for this program in [10], relatively few concrete examples of these vertex algebras were known in the non-abelian case for interesting four manifolds \(M_{4}\). However, an alternative approach in a closely related setting was considered previously by Gaiotto-Rapcak in the seminal paper [1], which allowed for more direct gauge theory computations of the resulting vertex algebras. The six dimensional \(\mathcal{N}=(2,0)\) theory for \(\mathfrak{gl}_{r}\) is essentially by definition the theory of quantum fluctuations of a stack of \(r\) distinct six dimensional extended objects called _M5-branes_, inside of an eleven dimensional spacetime supporting a mysterious variant of string theory called _M-theory_. It was observed in _loc. cit._ that there exist vertex algebras corresponding to two dimensional theories defined on \(\Sigma\) as above, constructed by reducing in the \(\Omega\)-background a configuration of M5 branes supported on a toric divisor \(S\) in a toric Calabi-Yau threefold \(Y\), rather than a four manifold \(M_{4}\). In summary, we consider

\[6\text{d }\mathcal{N}=(2,0)\text{ on }\mathbb{C}^{\times}\times S\quad\subset\quad\text{M-theory on }\mathbb{R}\times\mathbb{C}^{\times}\times\mathbb{C}\times Y \tag{1.28}\]

and perform the analogous reductions of Equation 1.27. In this setting, we note that rather than being labeled by an ADE type \(\mathfrak{g}\), the six dimensional \(\mathcal{N}=(2,0)\) theory occurs in type A, with rank on each reduced, irreducible component of \(S\) determined by the multiplicity of this component in \(S\). This is the motivation for the vertex algebras \(\mathbb{V}(Y,S)\) defined in the companion paper [1], for which the generalization of the AGT conjecture is our Conjecture 1.4 above.

The computations of Gaiotto-Rapcak [1], and their generalizations in Prochazka-Rapcak [21], used the type IIB duality frame in which the geometry of the threefold \(Y\) determines a \((p,q)\)-web of type II fivebranes, and the M5-branes wrapping toric divisors \(S\) in \(Y\) determine D3-branes stretched between the walls of the \((p,q)\)-web according to their position on the boundary of the moment polytope of the threefold \(Y\). In this setting, the configuration could be analyzed in terms of junctions of boundary conditions and domain walls between the four dimensional \(\mathcal{N}=4\) gauge theories on the D3 branes, giving concrete predictions for the vertex algebras in terms of constructions in the representation theory of affine Lie algebras, following [1]. This led to mathematical constructions of some examples of these algebras, such as in [13] and in turn [1], but the analogue of the AGT conjecture was not formulated in these examples.
Motivated by closely related ideas in string theory [14], Nekrasov discovered the quiver with potential of Equation 1.10 above, which appeared to describe moduli spaces of instantons supported on the toric divisors \(S_{L,M,N}\) in \(Y=\mathbb{C}^{3}\), a generalization of the ADHM construction that he called the space of _spiked instantons_[15]. Motivated by this, the results of [14] proved the generalization of the AGT conjecture in this setting, closely following the methods of [13], but using the moduli space of spiked instantons in place of the usual moduli space of instantons described by the ADHM construction. These results of [14] were the prototype for the generalization of the AGT conjecture stated in Conjecture 1.4, though we note that a description of the moduli space of spiked instantons in terms of algebraic geometry was not provided in _loc. cit._, nor was the generalization to divisors \(S\) in more interesting threefolds \(Y\).

Towards stating more refined predictions about this general correspondence, including understanding the ingredients of the proof of the AGT conjecture given in [13] and in turn [14], and how we should expect them to generalize in the case of \(\mathbb{V}(Y,S)\), we give a cartoon of the relevant string theory arguments, following a proposal of Costello developed in the series of papers [15], [16], and [17], and described in general in his lecture [16]; we also follow the more recent results of [13], [12] and [15]. The basic idea is to consider lifts of the compatible reductions of the diagram in Equation 1.27, along the inclusion of the six dimensional \(\mathcal{N}=(2,0)\) theory into M-theory as in Equation 1.28, which are summarized in the following diagram:

\[\begin{CD}\text{Twisted M-theory on }\mathbb{R}\times\mathbb{C}^{\times}\times\mathbb{C}\times Y\overset{\int_{S^{1}}}{\rightsquigarrow}\text{Twisted type IIA on }\mathbb{R}^{2}\times\mathbb{C}\times Y\\ @V{}V{\int_{Y}}V@V{}V{\int_{Y}}V\\ \text{5d Chern-Simons for }\mathfrak{g}_{Y}\text{ on }\mathbb{R}\times\mathbb{C}^{\times}\times\mathbb{C}\overset{\int_{S^{1}}}{\rightsquigarrow}\text{4d Chern-Simons for }\widehat{\mathfrak{g}}_{Y}\text{ on }\mathbb{R}^{2}\times\mathbb{C}\end{CD}. \tag{1.29}\]

The field theories in the diagram of Equation 1.27 all occur as theories of observables local to a defect supported along a submanifold of the spacetime underlying the corresponding object in the diagram of Equation 1.29.

We begin by describing the top horizontal arrow and the resulting expectations for computations in the twisted type IIA setting. The conjectural geometric description of the vacuum module \(\mathbb{V}_{S}\) of the vertex algebra \(\mathbb{V}(Y,S)\), in terms of the homology of the corresponding moduli space of instantons supported on the divisor \(S\), was derived in the context of the diagram of Equation 1.27 as the Hilbert space of the Donaldson-Witten twist of the five dimensional \(\mathcal{N}=2\) theory on \(\mathbb{R}\times M_{4}\). In fact, it was explained in [18] that the moduli spaces of instantons in gauge theories on stacks of relatively heavy, non-compact branes can be interpreted as moduli spaces of bound states of these fixed branes with the spectrum of infinitesimally massive, compactly supported branes. The reformulation of the ADHM construction explained in Section 1.1 and its generalization in Theorem A is a mathematical realization of this perspective, and provides a systematic approach to the computations of Aspinwall-Katz [1].
Similarly, in this context the representations of the Kontsevich-Soibelman cohomological Hall algebra \(\mathcal{H}(Y)\) constructed in Theorem B can be understood as defining the action of a universal BPS algebra associated to the type IIA string theory on \(Y\), induced by correspondences between moduli spaces of bound states of D-branes defined by adjoining additional compactly supported branes, as in the correspondence of Equation 1.21. Moreover, the choice of auxiliary brane \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)\) conjecturally determines an algebra \(\mathcal{Y}_{M}(Y)\) with triangular decomposition

\[\mathcal{Y}_{M}(Y)=\mathcal{SH}(Y)^{\mathrm{op}}\otimes\mathcal{H}^{0}_{M}\otimes\mathcal{SH}(Y)\qquad\text{and}\qquad\mathcal{Y}_{M}(Y)\to\mathrm{End}_{F}(\mathbb{V}_{M}) \tag{1.30}\]

a representation as in Equation 1.22, for which we outline a construction in Section 5.6 following [10]. This should be interpreted as a working definition of a mathematical avatar for the BPS algebras introduced by Li-Yamazaki in [11], and generalized in [12], under the name (shifted) _quiver Yangians_; these algebras were defined to act on the moduli spaces of bound states of D-branes in \(Y\), in precisely the way we have explained above, and we hope to provide a more careful statement of the relation to _loc. cit._ in future work.

In the case that the auxiliary branes are a configuration of D4 branes in the twisted IIA frame determined by an M5 brane wrapping the divisor \(S\), corresponding to \(M=\mathcal{O}_{S^{\rm red}}^{\rm ss}[1]\in\mathrm{D}^{b}\mathrm{Coh}(Y)\) and framing structure \(\mathrm{f}_{S}\) of rank \(\mathbf{r}_{S}\) as described above, we expect that the resulting representation of Equation 1.30 factors through a map

\[\mathcal{Y}_{S}(Y)\to\mathcal{U}(\mathbb{V}(Y,S))\, \tag{1.31}\]

as in Equation 1.23, from this universal algebra \(\mathcal{Y}_{S}(Y)\) to the algebra of modes \(\mathcal{U}(\mathbb{V}(Y,S))\).

We now consider the alternative order of reduction, indicated by the left vertical arrow in the diagram of Equation 1.29, which we expect to give an analogous description of these objects in terms of representation theory. A model for this reduction was calculated in [13] in a holomorphic-topological twist of M-theory, and in the presence of an \(\Omega\)-background acting by the rank two subtorus of the toric action of \(T\) on \(Y\) which preserves the Calabi-Yau structure. The resulting five dimensional theory was computed to be a non-commutative deformation of a five dimensional cousin of Chern-Simons theory of type \(\mathfrak{g}=\mathfrak{g}_{Y}\) determined by the threefold \(Y\). In the case that \(Y=Y_{m,n}\) is a resolution of \(X_{m,n}=\{xy-z^{m}w^{n}\}\), Costello conjectured that the corresponding Lie algebra was \(\mathfrak{g}_{Y_{m,n}}=\mathfrak{gl}_{m|n}\), and detailed calculations were done in the \(\mathfrak{gl}_{1}\) case in [10] and [11].

The intended implications of these results are informed by a general proposal of Costello originating in [13], which explains relationships between various classes of quantum groups and Chern-Simons type theories in various dimensions. In the classic paper on the physical origin of the Jones polynomial [14], Witten explains that the category of line operators in three dimensional Chern-Simons theory is identified with representations of the quantum enveloping algebra \(U_{q}\mathfrak{g}\).
Costello explained that there is an analogous relationship between line operators in a four dimensional holomorphic-topological variant of Chern-Simons theory and representations of the Yangian \(Y_{\hbar}\mathfrak{g}\). Moreover, he explained that the quantum groups \(U_{q}\mathfrak{g}\) and \(Y_{\hbar}\mathfrak{g}\) arising in these theories are the Koszul dual algebras \(\mathcal{A}|_{\ell}^{!}\) of the algebras of local operators \(\mathcal{A}\) of the quantum field theories restricted along a line \(\ell\), the locus along which the defects being classified are supported. These are the universal algebras controlling defects with a specified locus of support, in the sense that coupling a defect in the theory supported along the line \(\ell\) is equivalent to a map

\[\mathcal{A}|_{\ell}^{!}\to\mathcal{B} \tag{1.32}\]

to the putative algebra of observables \(\mathcal{B}\) on the defect, so that taking \(\mathcal{B}=\mathrm{End}(V)\) determines the correspondence between defects and representations of these algebras.

Further, the five dimensional variant of Chern-Simons theory, or equivalently four dimensional Chern-Simons theory in affine type, is meant to describe the observables of a quantum gravitational theory in the twisted context, as in the twisted holographic principle of Costello-Li [15], and thus the Koszul dual algebra is deformed by a twisted analogue of backreaction to an alternate algebra \(\widetilde{\mathcal{A}}(Y)|_{\ell_{M}}^{!}\), which depends on the precise defect supported on \(\ell\) corresponding to a configuration of branes determined by \(M\). Thus, the commutativity of the diagram of Equation 1.29 conjecturally induces an identification

\[\widetilde{\mathcal{A}}(Y)|_{\ell_{M}}^{!}\cong\mathcal{Y}_{M}(Y)\]

such that the maps of Equation 1.32 induce those of Equation 1.31 and more generally Equation 1.30. In the case that \(Y=Y_{m,n}\) so that \(\mathfrak{g}_{Y_{m,n}}=\mathfrak{gl}_{m|n}\) as described above, this implies a conjectural identification between the infinite dimensional associative algebras \(\mathcal{Y}_{M}(Y_{m,n})\) corresponding to various compatible objects \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y_{m,n})^{T}\) and various shifts of the affine Yangian for \(\mathfrak{gl}_{m|n}\). This is the motivation for Conjectures 6.7 and 6.31 in the main text, for example.

In this general context, Costello explained that the additional data on the algebra of observables \(\mathcal{A}\) forgotten in its restriction to \(\ell\), related to its factorization structure in the transverse directions to the support of the defect, determines the generalized commutativity data for the monoidal structure on the category of representations of the Koszul dual quantum group \(\mathcal{A}|_{\ell}^{!}\). We hope that this will lead to a better understanding of the relationship between the crucial coproducts on the affine Yangian of \(\mathfrak{gl}_{1}\) that are required for the proof of the AGT conjecture in [11], their proposed generalization discussed at the end of Section 1.3, and the more conceptual constructions of the R-matrix [15], coproduct [16], and primitives [14] in closely related contexts.

Finally, we mention that the main applications discussed in Sections 6.1 and 6.2, and especially their relationship outlined in Section 6.3, can indeed be understood in the context of the twisted holographic principle of Costello-Li [17] mentioned above. In particular, the isomorphism of Conjecture 6.33 is of the typical form of their general conjecture.
### Summary of results

We now give a concrete summary of the results of this paper:

In Section 2, we review some relevant preliminaries for later reference and in order to fix notation. We recommend the reader skip this section and return to it only later as necessary.

In Section 3, we explain the unframed analogue of Theorem A, a folklore theorem stating the equivalence between compactly supported perverse coherent sheaves on certain toric Calabi-Yau threefolds and finite dimensional representations of corresponding quivers with potential, which follows from results of Bridgeland [1] and Van den Bergh [1] together with some general facts about triangulated categories. We explain a formula for the natural monad presentation for these complexes of sheaves, using an analogy with the role of the Koszul duality derived equivalences of the BGG category \(\mathcal{O}\) in [1]. A more detailed overview is given in Section 3.1.

In Section 4, we generalize the results of the preceding section to describe analogous abelian categories of coherent sheaves generated by the compactly supported perverse coherent sheaves together with an auxiliary object \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\) contained in the heart of a compatible Bridgeland-Deligne perverse coherent t-structure. We describe the moduli spaces of objects in these categories, and framed variants thereof, in terms of spaces of representations of framed quivers with potential via a natural monad formalism, proving Theorem A. A more detailed overview is given in Section 4.1.

In Section 5, we recall the Kontsevich-Soibelman cohomological Hall algebra \(\mathcal{H}(Y)\) of a quiver with potential corresponding to a threefold \(Y\), and give the construction of the representation of \(\mathcal{H}(Y)\) on the homology \(\mathbb{V}^{\mathrm{f},\zeta}(M)\) of the moduli space of \(\zeta\)-stable, f-framed perverse coherent extensions of \(M\), proving Theorem B. A more detailed overview is given in Section 5.1.

In Section 6, we explain the primary intended applications of the results of the previous sections: In Section 6.1, we explain the relationship with perverse coherent systems of [17] and the results of [13], and for \(Y=Y_{m,n}\) state the conjectural relationship to the affine Yangian of \(\mathfrak{gl}_{m|n}\). In Section 6.2, we explain the local Vafa-Witten invariants of divisors \(S\subset Y\) defined by Theorem A, and the relationship of the representation of the cohomological Hall algebra constructed in Theorem B in this case to the generalization of the AGT conjecture and the vertex algebras \(\mathbb{V}(Y,S)\) defined in the companion paper [1]. In Section 6.3, we explain the relationship between these results and its potential application to an analogue of the Brundan-Kleshchev isomorphism between affine W-superalgebras and truncated shifted Yangians for affine \(\mathfrak{gl}_{m|n}\).

## 2. Preliminaries

In this section, we recall some preliminary results for later reference and in order to fix notation. We recommend the reader skip this section and return to it only later as necessary.

### \(A_{\infty}\) algebras and their module categories

Let \(S\) be a finite dimensional commutative \(\mathbb{K}\) algebra.

_Definition 2.1_.: A \(\mathbb{Z}\)-graded \(A_{\infty}\) algebra over \(S\) is a \(\mathbb{Z}^{2}\)-graded \(S\) module

\[A=\bigoplus_{i,j\in\mathbb{Z}}A^{i}_{j}[-i]\langle-j\rangle\qquad\text{together with}\qquad m_{n}:A^{\otimes n}\to A[2-n]\]

for \(n\geq 1\), satisfying the usual \(A_{\infty}\) relations.
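For the reader's convenience, we note that in one standard sign convention (that of Keller's surveys on \(A_{\infty}\) algebras, for instance) the \(A_{\infty}\) relations read, for each \(n\geq 1\),

\[\sum_{\begin{subarray}{c}r+s+t=n\\ s\geq 1\end{subarray}}(-1)^{r+st}\,m_{r+1+t}\big(\mathrm{id}^{\otimes r}\otimes m_{s}\otimes\mathrm{id}^{\otimes t}\big)=0\,\]

so that \(m_{1}\) is a differential, \(m_{1}\) is a derivation with respect to \(m_{2}\), and \(m_{2}\) is associative up to the homotopy \(m_{3}\).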
When \(A^{i}_{j}=0\) for \(i\neq 0\) or \(j\neq 0\), this is equivalent to a graded associative algebra structure, or usual \(A_{\infty}\) algebra, respectively, and when \(m_{n}=0\) for \(n\geq 3\) this is equivalent to a (graded) DG-algebra. We define morphisms of graded \(A_{\infty}\) algebras as usual, and assume that all \(A_{\infty}\) algebras and morphisms of such are graded and strictly unital, in the sense of [10]. In particular, all (strictly unital, graded) \(A_{\infty}\) algebras are equipped with a canonical strict (strictly unital) map \(u:S\to A\).

_Definition 2.2_.: An augmentation on a (strictly unital, \(\mathbb{Z}\)-graded) \(A_{\infty}\) algebra is a (strictly unital) map \(\epsilon:A\to S\) of \(A_{\infty}\) algebras over \(S\).

_Definition 2.3_.: A (right, \(\mathbb{Z}\)-graded) \(A_{\infty}\) module \(M\) over a \(\mathbb{Z}\)-graded \(A_{\infty}\) algebra \(A\) is a \(\mathbb{Z}^{2}\)-graded vector space

\[M=\bigoplus_{i,j\in\mathbb{Z}}M^{i}_{j}[-i]\langle-j\rangle\qquad\text{together with}\qquad\rho_{n}:M\otimes A^{\otimes n-1}\to M[2-n]\]

for \(n\geq 1\), satisfying the usual \(A_{\infty}\) relations.

We assume in addition that all \(A_{\infty}\) modules and morphisms of such are graded and strictly unital, and let \(\mathrm{D}_{\mathbb{Z}}(A)\) denote the triangulated category given by the derived category of (strictly unital, \(\mathbb{Z}\)-graded, right) \(A_{\infty}\) modules over \(A\) under (strictly unital) maps of \(A_{\infty}\) modules. In particular, if \(A\) is a plain (graded) associative algebra, this agrees with the usual derived category of (graded) \(A\) modules. Similarly, we let \(\mathrm{D}^{?}_{\mathbb{Z}}(A)\) for \(?=b,+,-\) denote the usual bounded variants, and \(\mathrm{D}^{?}(A)\) the usual derived category of (strictly unital, but not graded) \(A_{\infty}\) modules.

_Definition 2.4_.: A subcategory \(\mathcal{C}\subset\mathcal{D}\) of a triangulated category \(\mathcal{D}\) is called thick if it is a strictly full, triangulated subcategory which is closed under taking direct summands.

For any collection of objects \(\{M_{i}\}\) of \(\mathcal{D}=\mathrm{D}_{\mathbb{Z}}(A)\), the thick subcategory \(\mathrm{thick}(M_{i})\) generated by \(\{M_{i}\}\) is the minimal thick subcategory of \(\mathcal{D}\) containing all of the objects \(M_{i}\) and their graded shifts. In particular, we let \(\mathrm{D}_{\mathrm{perf}}(A)=\mathrm{thick}(A)\) denote the thick subcategory generated by the rank one free module and its graded shifts, and \(\mathrm{D}_{\mathrm{fd}}(A)\) the thick subcategory generated by modules \(M\in\mathrm{D}_{\mathbb{Z}}(A)\) with finite dimensional cohomology; the ungraded variants are denoted \(\mathrm{Thick}(M_{i})\), \(\mathrm{D}_{\mathrm{Perf}}(A)\), and \(\mathrm{D}_{\mathrm{Fd}}(A)\), respectively.

### Tilting objects and derived equivalences

Throughout this section, let \(\mathcal{D}\) be a \(\mathbb{K}\)-linear, algebraic triangulated category in the sense of Keller [10]. The primary examples of triangulated categories we consider in the present work all fall within this class.

_Definition 2.5_.: An object \(T\in\mathcal{D}\) is called a _tilting_ object if it satisfies the following conditions:

* \(T\) is compact, that is, \(\operatorname{Hom}_{\mathcal{D}}(T,\cdot):\mathcal{D}\to\operatorname{D}(\mathbb{K})\) preserves coproducts;
* \(T\) generates \(\mathcal{D}\), that is, \(\operatorname{Hom}_{\mathcal{D}}(T,C)=0\) implies \(C=0\).
If in addition \(T\) satisfies the condition that the DG associative algebra \(\operatorname{Hom}_{\mathcal{D}}(T,T)\) has cohomology concentrated in degree \(0\), then \(T\) is called a _classical_ tilting object. We will use the following result of Keller throughout:

_Theorem 2.6_.: [10] Let \(T\) be a tilting object in an algebraic triangulated category admitting set-indexed coproducts, and let \(\Lambda=\operatorname{Hom}_{\mathcal{D}}(T,T)\) be the DG associative algebra of (derived) endomorphisms of \(T\). Then there is a triangle equivalence

\[\Psi_{T}:\mathcal{D}\xrightarrow{\cong}\operatorname{D}(\Lambda)\]

such that composition with the forgetful functor \(\operatorname{ob}:\operatorname{D}(\Lambda)\to\operatorname{D}(\mathbb{K})\) is given by

\[\operatorname{ob}\circ\Psi_{T}=\operatorname{Hom}_{\mathcal{D}}(T,\cdot):\mathcal{D}\to\operatorname{D}(\mathbb{K})\,\]

and with inverse equivalence given by

\[(\cdot)\otimes_{\Lambda}T:\operatorname{D}(\Lambda)\to\mathcal{D}\.\]

Further, this restricts to define equivalences

\[\Psi_{T}:\operatorname{Triang}(T)\xrightarrow{\cong}\operatorname{Triang}(\Lambda)\qquad\text{and}\qquad\Psi_{T}:\operatorname{Thick}(T)\xrightarrow{\cong}\operatorname{D}_{\operatorname{Perf}}(\Lambda)\, \tag{2.1}\]

where \(\operatorname{Triang}(T)\) denotes the minimal triangulated subcategory of \(\mathcal{D}\) containing \(T\), similarly for \(\operatorname{Triang}(\Lambda)\), and \(\operatorname{Thick}(T)\) is the minimal thick subcategory of \(\mathcal{D}\) containing \(T\).

In summary, we obtain the mutually inverse triangle equivalences

\[\Psi_{T}=\operatorname{Hom}_{\mathcal{D}}(T,\cdot):\mathcal{D}\rightleftarrows\operatorname{D}(\Lambda):(\cdot)\otimes_{\Lambda}T\.\]

Further, we note these equivalences identify the object \(T\in\mathcal{D}\) with the rank \(1\) free module \(\Lambda\in\operatorname{D}(\Lambda)\).

The primary application of exceptional collections in the present work is the construction of well-behaved tilting objects, which uses the following result: Recall that a collection of objects \(T_{i}\)_classically generates_ a category \(\mathcal{C}\) if \(\operatorname{thick}(T_{i})=\mathcal{C}\).

_Theorem 2.7_.: [1] Let \(\mathcal{D}\) be a compactly generated triangulated category. Then a set of compact objects classically generates \(\mathcal{D}^{c}\) if and only if it generates \(\mathcal{D}\).

Let \(X\) be a smooth variety and \(E\) in \(\operatorname{D}^{b}\operatorname{Coh}(X)\) a classical generator. Then from the preceding theorem, we have:

_Corollary 2.8_.: \(E\) is a tilting object for \(\operatorname{DQCoh}(X)\).

Let \(\Lambda=\operatorname{Hom}_{\operatorname{D}^{b}\operatorname{Coh}(X)}(E,E)\) be the DG algebra of endomorphisms of \(E\). Applying the Morita theory from [10] recalled in Theorem 2.6, we have:

_Corollary 2.9_.: There are triangle equivalences

\[\operatorname{DQCoh}(X)\xrightarrow{\cong}\operatorname{D}(\Lambda)\qquad\text{and}\qquad\operatorname{D}^{b}\operatorname{Coh}(X)\xrightarrow{\cong}\operatorname{D}_{\operatorname{Perf}}(\Lambda)\]

intertwining the forgetful functor \(\operatorname{D}(\Lambda\text{-Mod})\to\operatorname{D}(\mathbb{K})\) with \(\operatorname{Hom}_{\operatorname{QCoh}(X)}(E,\cdot)\).

### Perverse coherent sheaves and non-commutative resolutions

In this section, we recall descriptions of categories of coherent sheaves in terms of non-commutative algebras, following [10], [11], and [12]. Throughout this section, let \(f:Y\to X\) be a projective map of Noetherian schemes satisfying the conditions

1. \(f_{*}\mathcal{O}_{Y}=\mathcal{O}_{X}\), and
2. the fibres of \(f\) are at most one dimensional.
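A guiding example of this setup, standard and of the type relevant to the toric threefolds considered in this paper, is the small resolution of the conifold

\[f:Y=\operatorname{Tot}\big(\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)\big)\longrightarrow X=\operatorname{Spec}\mathbb{K}[x,y,z,w]/(xy-zw)\,\]

which contracts the zero section \(C\cong\mathbb{P}^{1}\) to the singular point and is an isomorphism elsewhere. Here \(f_{*}\mathcal{O}_{Y}=\mathcal{O}_{X}\) holds since \(X\) is normal with rational singularities, and every fibre of \(f\) is either a point or the curve \(C\), so both conditions are satisfied; we will refer back to this example below.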
We recall the notation \(\underline{\operatorname{Hom}}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}(E,F)\in\operatorname{D}^{b}\operatorname{Coh}(Y)\) for the internal \(\operatorname{Hom}\) object in \(\operatorname{D}^{b}\operatorname{Coh}(Y)\). In particular, for any object \(E\in\operatorname{D}^{b}\operatorname{Coh}(Y)\), the internal endomorphism algebra

\[\underline{\Lambda}_{E}=\underline{\operatorname{Hom}}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}(E,E)\in\operatorname{Alg}_{\operatorname{Ass}}(\operatorname{D}^{b}\operatorname{Coh}(Y))\]

defines a DG associative algebra object internal to \(\operatorname{D}^{b}\operatorname{Coh}(Y)\), and we have

\[\underline{\operatorname{Hom}}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}(E,F)\in\underline{\Lambda}_{E}\text{-Mod}(\operatorname{D}^{b}\operatorname{Coh}(Y))\]

for each \(F\in\operatorname{D}^{b}\operatorname{Coh}(Y)\). Similarly, we have

\[f_{*}\underline{\Lambda}_{E}\in\operatorname{Alg}_{\operatorname{Ass}}(\operatorname{D}^{b}\operatorname{Coh}(X))\qquad\text{and}\qquad f_{*}\underline{\operatorname{Hom}}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}(E,F)\in(f_{*}\underline{\Lambda}_{E})\text{-Mod}(\operatorname{D}^{b}\operatorname{Coh}(X))\.\]

In this context the first main result of [12], following [10], is as follows:

_Theorem 2.10_.: [12] There is a vector bundle \(E\) on \(Y\) such that for \(\mathcal{A}=f_{*}\underline{\operatorname{Hom}}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}(E,E)\) there are mutually inverse equivalences of categories

\[f_{*}\underline{\operatorname{Hom}}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}(E,\cdot):\;\operatorname{D}^{b}\operatorname{Coh}(Y)\xleftarrow{\cong}\operatorname{D}^{b}\mathcal{A}\text{-Mod}\;:f^{-1}(\cdot)\otimes_{f^{-1}(\mathcal{A})}E\.\]

Note we are considering \(\mathcal{A}\in\operatorname{Alg}_{\operatorname{Ass}}(\operatorname{D}^{b}\operatorname{Coh}(X))\) as above and use the simplified notation \(\operatorname{D}^{b}\mathcal{A}\text{-Mod}:=\mathcal{A}\text{-Mod}(\operatorname{D}^{b}\operatorname{Coh}(X))\). We will give a concrete description of the vector bundle \(E\) and explain the details of this equivalence in the more specific setting of Section 3.3 below.

It is natural to ask what the standard \(t\)-structure on \(\operatorname{D}^{b}\mathcal{A}\text{-Mod}\) corresponds to on \(\operatorname{D}^{b}\operatorname{Coh}(Y)\). Towards giving a geometric description, we recall the Beilinson-Bernstein-Deligne theorem on gluing t-structures along recollements:

_Theorem 2.11_.: [10] Consider three triangulated categories with exact functors

\[\mathcal{D}_{F}\xrightarrow{i_{*}}\mathcal{D}\xrightarrow{j^{*}}\mathcal{D}_{U}\]

satisfying the following conditions:

1. there exist left and right adjoints \(i^{*},i^{!}\) and \(j_{!},j_{*}\) to both \(i_{*}\) and \(j^{*}\), respectively;
2. \(j^{*}\circ i_{*}=0\);
3. for each \(E\in\mathcal{D}\), there exist exact triangles

\[j_{!}j^{*}E\to E\to i_{*}i^{*}E\xrightarrow{[1]}\qquad\text{and}\qquad i_{*}i^{!}E\to E\to j_{*}j^{*}E\xrightarrow{[1]}\qquad\text{; and}\]

4. \(i_{*},j_{!},j_{*}\) are fully faithful.
Then for each pair of \(t\)-structures \((\mathcal{D}_{F}^{\leq 0},\mathcal{D}_{F}^{\geq 0})\) and \((\mathcal{D}_{U}^{\leq 0},\mathcal{D}_{U}^{\geq 0})\) on \(\mathcal{D}_{F}\) and \(\mathcal{D}_{U}\), we have that

\[\mathcal{D}^{\leq 0}:=\{E\in\mathcal{D}\,|\,j^{*}E\in\mathcal{D}_{U}^{\leq 0}\text{ and }i^{*}E\in\mathcal{D}_{F}^{\leq 0}\}\qquad\text{and}\qquad\mathcal{D}^{\geq 0}:=\{E\in\mathcal{D}\,|\,j^{*}E\in\mathcal{D}_{U}^{\geq 0}\text{ and }i^{!}E\in\mathcal{D}_{F}^{\geq 0}\}\]

define a \(t\)-structure on \(\mathcal{D}\).

_Definition 2.12_.: A _recollement_ of triangulated categories is a triple of triangulated categories with functors satisfying the hypotheses of the preceding theorem.

_Remark 2.13_.: The notation in the statement of Theorem 2.11 is motivated by the example of the recollement given by the derived categories of constructible sheaves on complementary closed and open embeddings \(i:F\to X\) and \(j:U=X\setminus F\to X\).

For our application, we take \(\mathcal{D}=\mathrm{D}^{b}\mathrm{Coh}(Y)\), \(\mathcal{D}_{U}=\mathrm{D}^{b}\mathrm{Coh}(X)\), and \(\mathcal{D}_{F}\) the full subcategory

\[\mathcal{C}=\{E\in\mathrm{D}^{b}\mathrm{Coh}(Y)\ |\ f_{*}E=0\}\.\]

It was observed in [10] that these categories constitute a recollement; we outline the proof to fix notation, and for completeness as it was omitted in _loc. cit._:

_Proposition 2.14_.: [10] The categories and functors

\[\mathcal{C}\xrightarrow{\iota}\mathrm{D}^{b}\mathrm{Coh}(Y)\xrightarrow{f_{*}}\mathrm{D}^{b}\mathrm{Coh}(X)\, \tag{2.2}\]

define a recollement of triangulated categories, where \(\iota:\mathcal{C}\to\mathrm{D}^{b}\mathrm{Coh}(Y)\) is the inclusion of the full subcategory.

Proof.: By definition, we have \(f_{*}\circ\iota=0\). The functor \(f_{*}=f_{!}\) by projectivity, and admits left and right adjoints \(f^{*}\) and \(f^{!}\), respectively, by coherent duality. Moreover, for each object \(E\in\mathrm{D}^{b}\mathrm{Coh}(X)\) we have

\[f_{*}f^{*}E\cong E\otimes f_{*}\mathcal{O}_{Y}\cong E\,\]

by the projection formula and the hypothesis that \(f_{*}\mathcal{O}_{Y}=\mathcal{O}_{X}\), and similarly \(f_{*}f^{!}E\cong E\). It follows similarly that \(f^{*}\) and \(f^{!}\) are fully faithful. Further, for each object \(E\in\mathrm{D}^{b}\mathrm{Coh}(Y)\) define

\[\tilde{C}_{E}=\mathrm{cone}\left[f^{*}f_{*}E\to E\right]\qquad\text{and}\qquad C_{E}=\mathrm{cocone}\left[E\to f^{!}f_{*}E\right]\]

so that we have exact triangles

\[f^{*}f_{*}E\to E\to\tilde{C}_{E}\xrightarrow{[1]}\qquad\text{and}\qquad C_{E}\to E\to f^{!}f_{*}E\xrightarrow{[1]}. \tag{2.3}\]

By the preceding paragraph, we have \(f_{*}C_{E}=f_{*}\tilde{C}_{E}=0\), so that \(C_{E},\tilde{C}_{E}\in\mathcal{C}\), and thus these exact triangles imply left and right admissibility, respectively, of the inclusion of the full subcategory \(\iota:\mathcal{C}\to\mathrm{D}^{b}\mathrm{Coh}(Y)\). In particular, the left and right adjoints

\[\iota^{L},\iota^{R}:\mathrm{D}^{b}\mathrm{Coh}(Y)\to\mathcal{C}\qquad\text{are defined by}\qquad E\mapsto\tilde{C}_{E},C_{E}\,\]

respectively, and by construction we have the desired exact triangles.
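As orientation for the shift performed next: in the constructible setting of Remark 2.13, gluing suitably shifted standard \(t\)-structures across the strata of a stratified space produces exactly the perverse \(t\)-structures of Beilinson-Bernstein-Deligne; the perverse coherent \(t\)-structure defined below arises in the same way from the recollement of Proposition 2.14, with the shift applied on the subcategory \(\mathcal{C}\).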
Fix an integer \(k\in\mathbb{Z}\) and consider the \(k^{th}\) shift of the standard \(t\)-structure on \(\mathcal{C}\) inherited as a subcategory of \(\mathcal{D}\):

\[\mathcal{C}^{k,\leq 0}:=\mathcal{C}^{\leq k}\qquad\text{and}\qquad\mathcal{C}^{k,\geq 0}:=\mathcal{C}^{\geq k}\.\]

_Definition 2.15_.: The perverse coherent \(t\)-structure is defined by

\[\mathrm{D}^{b}\mathrm{Coh}(Y)^{k,\leq 0}:=\{E\in\mathrm{D}^{b}\mathrm{Coh}(Y)\ |\ f_{*}E\in\mathrm{D}^{b}\mathrm{Coh}(X)^{\leq 0}\text{ and }\iota^{L}E\in\mathcal{C}^{k,\leq 0}\}\qquad, \tag{2.4}\]
\[\mathrm{D}^{b}\mathrm{Coh}(Y)^{k,\geq 0}:=\{E\in\mathrm{D}^{b}\mathrm{Coh}(Y)\ |\ f_{*}E\in\mathrm{D}^{b}\mathrm{Coh}(X)^{\geq 0}\text{ and }\iota^{R}E\in\mathcal{C}^{k,\geq 0}\}\qquad. \tag{2.5}\]

Note that this indeed defines a t-structure, by Proposition 2.14 and Theorem 2.11. Towards giving a more concrete description, we have:

_Proposition 2.16_.: For each \(E\in\mathrm{D}^{b}\mathrm{Coh}(Y)\), the conditions \(\iota^{L}E\in\mathcal{C}^{k,\leq 0}\) and \(\iota^{R}E\in\mathcal{C}^{k,\geq 0}\) are equivalent to the conditions

\[\operatorname{Hom}(E,C)=0\text{ for each }C\in\mathcal{C}^{>k}\qquad\text{and}\qquad\operatorname{Hom}(C,E)=0\text{ for each }C\in\mathcal{C}^{<k}\,\]

respectively.

Proof.: By definition \(\mathcal{C}^{\leq k}={}^{\perp}\mathcal{C}^{>k}\), so that \(\iota^{L}E\in\mathcal{C}^{k,\leq 0}\) if and only if \(\operatorname{Hom}(\iota^{L}E,C)=0\) for each \(C\in\mathcal{C}^{>k}\). Moreover, by construction we have \(\operatorname{Hom}(\tilde{C}_{E},C)=\operatorname{Hom}(E,C)\) for each object \(C\in\mathcal{C}\), since \(\mathcal{C}=(f^{*}\mathrm{D}^{b}\mathrm{Coh}(X))^{\perp}\) by adjunction. The proof of the equivalence of the two latter conditions follows by the dual argument.

In summary, we obtain the concrete description of the perverse coherent \(t\)-structure:

\[\mathrm{D}^{b}\mathrm{Coh}(Y)^{k,\leq 0}:=\{E\in\mathrm{D}^{b}\mathrm{Coh}(Y)\ |\ f_{*}E\in\mathrm{D}^{b}\mathrm{Coh}(X)^{\leq 0}\text{ and }\operatorname{Hom}(E,C)=0\text{ for each }C\in\mathcal{C}^{>k}\}\quad,\]
\[\mathrm{D}^{b}\mathrm{Coh}(Y)^{k,\geq 0}:=\{E\in\mathrm{D}^{b}\mathrm{Coh}(Y)\ |\ f_{*}E\in\mathrm{D}^{b}\mathrm{Coh}(X)^{\geq 0}\text{ and }\operatorname{Hom}(C,E)=0\text{ for each }C\in\mathcal{C}^{<k}\}\quad.\]

_Definition 2.17_.: The category of perverse coherent sheaves on \(f:Y\to X\) is defined by

\[\operatorname{PervCoh}^{k}(Y/X):=\mathrm{D}^{b}\mathrm{Coh}(Y)^{k,\leq 0}\cap\mathrm{D}^{b}\mathrm{Coh}(Y)^{k,\geq 0}\.\]

In particular, we let

\[\operatorname{PervCoh}(Y/X)=\operatorname{PervCoh}^{-1}(Y/X)\]

denote the category of perverse coherent sheaves for the integer \(k=-1\), and use the term perverse coherent sheaves on \(f:Y\to X\) to refer to this case, unless specified otherwise. Also, we will often drop the dependence on \(f\) and \(X\) from the notation and write simply

\[\operatorname{PervCoh}(Y):=\operatorname{PervCoh}(Y/X)\.\]

In this case, we obtain the following further simplification:

_Corollary 2.18_.: The category of perverse coherent sheaves on \(f:Y\to X\) is given by

\[\operatorname{PervCoh}(Y)=\{E\in\mathrm{D}^{[-1,0]}\mathrm{Coh}(Y)\ |\ R^{0}f_{*}H^{-1}E=0,\ R^{1}f_{*}H^{0}E=0,\ \operatorname{Hom}(H^{0}E,C)=0\text{ for any }C\in\mathcal{C}^{\heartsuit}\}\.\]

In particular, the right hand side is an abelian category.

Proof.: See Lemma 3.2 in [1].
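To illustrate Corollary 2.18, consider the small resolution \(f:Y\to X\) of the conifold recalled above, with exceptional curve \(C\cong\mathbb{P}^{1}\) contracted to the singular point \(x_{0}\in X\), and take \(E=\mathcal{O}_{C}(-1)[1]\). Then \(H^{-1}E=\mathcal{O}_{C}(-1)\) and \(H^{0}E=0\), and since

\[R^{i}f_{*}\mathcal{O}_{C}(-1)\ \cong\ H^{i}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(-1))\otimes\mathcal{O}_{x_{0}}=0\qquad\text{for }i=0,1\,\]

all of the conditions of Corollary 2.18 hold (the remaining conditions are vacuous since \(H^{0}E=0\)), so that \(\mathcal{O}_{C}(-1)[1]\in\operatorname{PervCoh}(Y)\); the same computation shows \(\mathcal{O}_{C}(-1)\in\mathcal{C}\). This standard example is recorded here only for illustration.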
We can now state the result of Bridgeland and Van den Bergh on the image of the heart \(\mathcal{A}\text{-Mod}\subset\mathrm{D}^{b}\mathcal{A}\text{-Mod}\) under the equivalence of Theorem 2.10:

_Theorem 2.19_.: [1, 1] The equivalence of Theorem 2.10 restricts to an equivalence of abelian categories

\[\operatorname{PervCoh}(Y)\xrightarrow{\cong}\mathcal{A}\text{-Mod}\.\]

### Quivers, path algebras, and Koszul duality for \(A_{\infty}\) algebras

_Definition 2.20_.: A finite, bigraded _quiver_ \(Q\) is a finite set of vertices \(V_{Q}\), a finite set of edges \(E_{Q}=\sqcup_{k,j\in\mathbb{Z}}(E_{Q})_{j}^{k}\), and source and target maps \(s,t:E_{Q}\to V_{Q}\). We write

\[E_{Q}(v,w)=\bigsqcup_{k,j\in\mathbb{Z}}(E_{Q})_{j}^{k}(v,w)\]

for the space of edges from \(v\) to \(w\), defined by \(s^{-1}(v)\cap t^{-1}(w)\); the elements of \((E_{Q})_{j}^{k}\) are called edges of _bidegree_ \((k,j)\), as they determine by definition the bidegree of the corresponding summand of the \(\mathbb{K}\)-span of the edge set

\[\mathbb{K}\langle E_{Q}\rangle:=\bigoplus_{k,j\in\mathbb{Z}}\mathbb{K}\langle E_{Q}\rangle_{j}^{k}[-k]\langle-j\rangle\,\]

where \(\mathbb{K}\langle E_{Q}\rangle_{j}^{k}=\mathbb{K}\langle(E_{Q})_{j}^{k}\rangle\) is the \(\mathbb{K}\)-span of the space of bidegree \((k,j)\) edges.

A _path_ \(p\) of length \(n\) in a quiver \(Q\) is a sequence of edges \(e_{n},...,e_{1}\in E_{Q}\) such that \(t(e_{i})=s(e_{i+1})\) for \(i=1,...,n-1\). The vertices \(s(e_{1})\) and \(t(e_{n})\) are the _source_ and _target_ of the path, and the bidegree of the path is the sum of the bidegrees of the edges in the corresponding sequence. We say that a quiver is _plain_ if its edge set is concentrated in bidegree \((0,0)\), _plain graded_ if its edge set is concentrated in bidegree \((0,\bullet)\) and _cohomologically graded_ if its edge set is concentrated in bidegree \((\bullet,0)\).

Concatenation of paths is defined as usual as the concatenation of the corresponding sequences, whenever it yields a new path. We also formally define the set of length zero paths to be the set of vertices of \(Q\), and define concatenation of a fixed path with one of length zero on the left or right only if the corresponding vertex is the source or target of that path, respectively, in which case it yields the same path.

_Definition 2.21_.: Let \(Q\) be a \(\mathbb{Z}^{2}\)-graded quiver. The path algebra \(\mathbb{K}Q\) is the unital associative bigraded \(\mathbb{K}\) algebra spanned over \(\mathbb{K}\) by paths of any length \(n\geq 0\), with product defined by

\[p\star q=\begin{cases}pq&\text{if the concatenation is defined, and}\\ 0&\text{otherwise.}\end{cases}\]

The above conventions for length zero paths imply that

\[\mathbb{K}Q=\otimes_{S}^{\bullet}\mathbb{K}\langle E_{Q}\rangle\qquad\text{where}\qquad S=\bigoplus_{v\in V_{Q}}\mathbb{K}_{v}\,\]

that is, the path algebra \(\mathbb{K}Q\) is the tensor algebra over the semi-simple base ring \(S\) of the \(S\)-bimodule \(\mathbb{K}\langle E_{Q}\rangle\) spanned by the set of edges with left and right module structures determined by concatenation on the left and right with length zero paths. Let \(\mathbb{K}Q_{(n)}\subset\mathbb{K}Q\) be the two-sided ideal spanned as a vector space by paths of length \(\geq n\).

_Definition 2.22_.: A (graded) _quiver with relations_ \((Q,R)\) is a plain (graded) quiver \(Q\) together with a two-sided ideal \(R\subset\mathbb{K}Q\) such that \(R\subset\mathbb{K}Q_{(2)}\).
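For example, the path algebra of the plain quiver with a single vertex and a single loop \(x\) is the polynomial algebra \(\mathbb{K}[x]\), while the path algebra of the quiver \(\bullet\to\bullet\) with two vertices and one edge is isomorphic to the algebra of upper triangular \(2\times 2\) matrices over \(\mathbb{K}\), with the two length zero paths corresponding to the diagonal idempotents. Imposing the relation ideal \(R=(x^{2})\) on the former yields the dual numbers \(\mathbb{K}[x]/(x^{2})\), in the notation for quivers with relations introduced next.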
The path algebra \(\mathbb{K}Q_{R}\) of a quiver with relations is defined by \(\mathbb{K}Q_{R}=\mathbb{K}Q/R\). A (graded) _DG quiver_ \((Q,d)\) is a cohomologically graded (bigraded) quiver \(Q\) together with an \(S\)-linear derivation \(d:\mathbb{K}Q\to\mathbb{K}Q[1]\) such that \(d^{2}=0\). The path algebra \(\mathbb{K}Q_{d}\) of a DG quiver \((Q,d)\) is the DG algebra defined by \((\mathbb{K}Q,d)\).

It is possible to describe the most general differential making the path algebra of a bigraded quiver into a graded DG quiver. In the most concrete terms, we see that a differential \(d:\mathbb{K}Q\to\mathbb{K}Q[1]\) is determined on generators by maps

\[d_{n}:\mathbb{K}\langle E_{Q}\rangle\to\otimes_{S}^{n}\mathbb{K}\langle E_{Q}\rangle[1]\.\]

These maps are equivalently defined in terms of \(\bar{A}=\mathbb{K}\langle E_{Q}\rangle^{\vee}[-1]\) by maps

\[m_{n}:\otimes_{S}^{n}\bar{A}\to\bar{A}[2-n]\,\]

and the condition that \(d^{2}=0\) is equivalent to the condition that the maps \(m_{n}\) satisfy the \(A_{\infty}\) relations. In summary, we obtain that a choice of differential making \(\mathbb{K}Q\) into a quasi-free graded DG algebra is equivalent to a graded \(A_{\infty}\) structure on \(A=S\oplus\bar{A}\) compatible with the natural \(S\)-augmentation.

_Remark 2.23_.: We will sometimes assume that the \(S\)-bimodule \(\mathbb{K}\langle E_{Q}\rangle\) is graded finite dimensional. Moreover, the differential \(d\) is in fact defined only on the completed tensor algebra unless the corresponding \(A_{\infty}\) algebra is finite, in the sense that the structure maps \(m_{n}\) vanish for \(n\) sufficiently large. These conditions will often hold in our applications of interest.

In fact, the preceding construction is a concrete manifestation of Koszul duality for \(A_{\infty}\) algebras, as we now explain:

_Definition 2.24_.: Let \(A\) be an \(A_{\infty}\) algebra over \(S\) with an augmentation \(\epsilon:A\to S\). The _Koszul dual_ of \(A\) over \(S\) is the DG associative algebra defined by

\[A^{!}=\operatorname{Hom}_{A}(S,S)\.\]

For any \(A_{\infty}\) algebra \(A\) with augmentation \(\epsilon:A\to S\) and augmentation ideal \(\bar{A}=\ker\epsilon\), there is a canonical free resolution of \(S\) as an \(A\) module, called the _Koszul resolution_, given by

\[\mathcal{K}^{\bullet}:=A\otimes_{S}\left(\otimes_{S}^{\bullet}\bar{A}[1]\right)=\left[\cdots\to A\otimes_{S}\bar{A}^{\otimes 2}[2]\to A\otimes_{S}\bar{A}[1]\to A\right]\,\]

with \(d:\mathcal{K}^{\bullet}\to\mathcal{K}^{\bullet}[1]\) defined on generators by the multiplication map

\[d|_{A\otimes_{S}\bar{A}}=m_{2}:A\otimes_{S}\bar{A}\to A\]

and its higher arity analogues. We obtain the following computation of the Koszul dual algebra:

_Proposition 2.25_.: The algebra \(A^{!}\) is presented by the quasi-free DG associative algebra

\[A^{!}=\left(\otimes_{S}^{\bullet}(\bar{A}[1])\right)^{\vee}\]

with underlying complete associative algebra free on \(\bar{A}^{\vee}[-1]\), and differential determined by the \(A_{\infty}\) structure via the equivalence explained above.
Proof.: We use the Koszul resolution to compute the underlying cochain complex

\[\operatorname{Hom}_{A}(S,S)\cong\operatorname{Hom}_{A}^{0}(\mathcal{K}^{\bullet},S)\cong\operatorname{Hom}_{S}^{0}(\otimes_{S}^{\bullet}\bar{A}[1],S)=\left(\otimes_{S}^{\bullet}(\bar{A}[1])\right)^{\vee}\,\]

as desired, and observe that the left module structure of \(A^{!}\) over itself as a tensor algebra is identified with the right module structure on \(\operatorname{Hom}_{A}(S,S)\) over itself given by precomposition with \(\operatorname{Hom}_{A}(\mathcal{K}^{\bullet},\mathcal{K}^{\bullet})\).

For example, for the dual numbers \(A=\mathbb{K}[\varepsilon]/(\varepsilon^{2})\) over \(S=\mathbb{K}\), the Koszul resolution is the \(2\)-periodic complex \(\cdots\xrightarrow{\varepsilon}A\xrightarrow{\varepsilon}A\), all higher products vanish, and one finds \(A^{!}\cong\mathbb{K}[\![x]\!]\) with \(x=\varepsilon^{\vee}[-1]\) in cohomological degree \(1\) and zero differential, recovering the classical computation \(\operatorname{Ext}^{\bullet}_{A}(\mathbb{K},\mathbb{K})=\mathbb{K}[x]\).

We now recall the primary statements of Koszul duality in the present setting, following the presentation of [10]:

_Theorem 2.26_.: [10] Let \(A\) be a strongly locally finite \(A_{\infty}\) algebra. Then \(A^{!}\) is strongly locally finite, and moreover there is a natural isomorphism of algebras \((A^{!})^{!}\cong A\).

_Remark 2.27_.: This result suggests the following strategy to describe a given DG algebra over \(S\) in terms of the path algebra of a DG quiver, or equivalently, to construct a quasi-free resolution of it: compute the Koszul dual algebra, use homotopy transfer to compute the \(A_{\infty}\) structure on its cohomology, and then construct the corresponding DG quiver \(Q\) according to the concrete description of Koszul duality above.

One also obtains an equivalence of categories of modules over Koszul dual algebras:

_Theorem 2.28_.: [10] Let \(A\) be a strongly locally finite, augmented \(A_{\infty}\) algebra over \(S\) and \(A^{!}\) its Koszul dual. Then there are mutually inverse equivalences of categories between \(\operatorname{thick}(S)\) and \(\operatorname{D}_{\operatorname{perf}}(A^{!})\), where \(\operatorname{thick}(S)\) denotes the thick subcategory of \(\operatorname{D}(A)\) generated by \(S\) and its graded shifts, and \(\operatorname{D}_{\operatorname{perf}}(A^{!})=\operatorname{thick}(A^{!})\) that generated by the rank \(1\) free module \(A^{!}\) and its graded shifts in \(\operatorname{D}(A^{!})\), as in the discussion following Definition 2.4.

In particular, note that if \(A\) is graded finite dimensional then \(\operatorname{thick}(S)=\operatorname{D}^{b}_{\operatorname{fd}}(A)\), and thus we obtain an equivalence of the latter with \(\operatorname{D}_{\operatorname{perf}}(A^{!})\).

_Remark 2.29_.: In the case that \(A\) is an augmented DG algebra, the above result holds under the weaker hypothesis that \(A^{!}\) is locally finite. These results follow from Theorems 5.7 and 5.4 of _loc. cit._, respectively, and the relevant definitions are recalled in Definition 2.1.

_Remark 2.30_.: Recall that for a plain (non DG) Koszul associative algebra \(\Lambda\), the Koszul dual algebra is concentrated in bi-degrees \((k,-k)\) for \(k\in\mathbb{N}\). Thus, in order to relate the preceding Theorem with more classical accounts of Koszul duality of plain graded algebras, it is necessary to apply a cohomological shearing operation, which we now explain following Section 7.2 of [10]: Let \(A\) be a graded DG associative algebra with zero differential, and underlying bi-graded vector space

\[A=\bigoplus_{k,j\in\mathbb{Z}}A^{k}_{j}[-k]\langle-j\rangle\.\]

Then the cohomological shear \(A^{\operatorname{sh}}\) of \(A\), defined by

\[A^{\operatorname{sh}}=\bigoplus_{k,j\in\mathbb{Z}}(A^{\operatorname{sh}})^{k}_{j}[-k]\langle-j\rangle\qquad\text{with}\qquad(A^{\operatorname{sh}})^{k}_{j}=A^{k+j}_{-j}\]

is again a graded associative algebra.
For example, if \(A\) is concentrated in bidegree \((k,-k)\), as we observed for the Koszul dual of a plain graded Koszul associative algebra, then \(A^{\operatorname{sh}}\) defines a plain graded associative algebra, concentrated in cohomological degree \(0\). Moreover, the analogous shear functor gives an equivalence \((\cdot)^{\operatorname{sh}}:\operatorname{D}(A)\xrightarrow{\cong} \operatorname{D}(A^{\operatorname{sh}})\) on categories of DG modules, defined by \[M=\bigoplus_{i,j\in\mathbb{Z}}M^{i}_{j}[-i]\langle-j\rangle\mapsto M^{ \operatorname{sh}}=\bigoplus_{i,j\in\mathbb{Z}}M^{i+j}_{-j}[-i]\langle-j \rangle\.\] If \(A\) is finite dimensional, it also restricts to an equivalence \((\cdot)^{\operatorname{sh}}:\operatorname{D}_{\operatorname{fd}}(A) \xrightarrow{\cong}\operatorname{D}_{\operatorname{fd}}(A^{\operatorname{sh}})\). _Warning 2.31_.: We will often describe algebras and their DG modules in terms of the cohomologically sheared conventions, but omit the superscript \((\cdot)^{\operatorname{sh}}\) by abuse of notation. Further, in this setting we will use superscripts and subscripts that differ from those above, to denote the decomposition of a plain (non DG) graded \(S\) algebra \[A=\bigoplus_{i,j\in I,\ k\in\mathbb{Z}}\ _{i}A^{k}_{j}\langle-k\rangle\qquad \text{where}\qquad_{i}A_{j}=S_{i}AS_{j}\] denote the components with respect to the idempotents of the semisimple base ring \(S=\oplus_{i}S_{i}\). ### Calabi-Yau structures and quivers with potential Let \(Q\) be a bigraded quiver as in Definition 2.20 above, \(\mathbb{K}Q\) its path algebra, and define \[\mathbb{K}Q_{\rm cyc}=\mathbb{K}Q/[\mathbb{K}Q,\mathbb{K}Q]\.\] Suppose that the edge space \(\mathbb{K}\langle E_{Q}\rangle\) of the underlying quiver \(Q\) admits a non-degenerate pairing \[(\cdot,\cdot):\mathbb{K}\langle E_{Q}\rangle^{\otimes 2}\to\mathbb{K}[1] \tag{2.6}\] such that

1. \((a,b)=(-1)^{|a||b|}(b,a)\), and
2. \((a,b)=0\) unless \(t(a)=s(b)\) and \(s(a)=t(b)\).

We also let \(\langle\cdot,\cdot\rangle:(\mathbb{K}\langle E_{Q}\rangle^{\vee})^{\otimes 2} \to\mathbb{K}[-1]\) denote the inverse pairing. _Definition 2.32_.: A _non-commutative, \((-1)\)-shifted symplectic form_ on the path algebra \(\mathbb{K}Q\) is a pairing \(\langle\cdot,\cdot\rangle:(\mathbb{K}\langle E_{Q}\rangle^{\vee})^{\otimes 2} \to\mathbb{K}[-1]\) given by the inverse of a pairing \((\cdot,\cdot):\mathbb{K}\langle E_{Q}\rangle^{\otimes 2}\to\mathbb{K}[1]\) as in Equation 2.6 above. For simplicity, we will simply refer to such a pairing as a symplectic form on the underlying \(\mathbb{Z}\)-graded quiver \(Q\). _Example 2.33_.: In the setting of Section 3.2 below, let \(\Sigma=\operatorname{Ext}^{\bullet}(F,F)\) and \(Q_{Y}\) the corresponding DG quiver with edge set \(\mathbb{K}\langle E_{Q_{Y}}\rangle=\bar{\Sigma}^{\vee}[-1]\) so that \(\Lambda=\mathbb{K}Q_{Y}\). Since \(Y\) is a Calabi-Yau threefold, \(\mathbb{K}Q_{Y}\) admits a symplectic form by restricting the Serre duality pairing \[\langle\cdot,\cdot\rangle:\operatorname{Ext}^{\bullet}(F,F)[1]\otimes \operatorname{Ext}^{\bullet}(F,F)[1]\to\mathbb{K}[-1]\,\] noting that the object \(F\) is compactly supported. For a \(\mathbb{Z}\)-graded quiver \(Q\) with symplectic form, there is a canonical non-commutative Poisson bracket \(\{\cdot,\cdot\}:\mathbb{K}Q^{\otimes 2}\to\mathbb{K}Q[1]\) of degree \(+1\), defined on generators by the pairing 2.6 and extended as a non-commutative biderivation. 
_Definition 2.34_.: A _quiver with potential_ is a DG quiver \((Q,d)\) in the sense of Definition 2.22, equipped with a symplectic form, and a potential \(W\in\mathbb{K}Q_{\rm cyc}\) such that \[d=\{W,\cdot\}\ \in\operatorname{Der}^{1}(\mathbb{K}Q)\.\] Note that the potential \(W\in\mathbb{K}Q_{\rm cyc}\) necessarily satisfies the master equation \(\{W,W\}=0\) since \(d^{2}=0\) by hypothesis. The following construction is well-known: _Example 2.35_.: Let \(\Sigma\) and \(Q_{Y}\) be as in Example 2.33 above. Then there is a canonical potential \(W\in\mathbb{K}Q_{\rm cyc}\), defined by \[W=\sum_{n\geq 1}\sum_{a_{1},...,a_{n+1}\in E_{Q}}\langle m_{n}(a_{1}^{\vee},..., a_{n}^{\vee}),a_{n+1}^{\vee}\rangle a_{1}\cdot...\cdot a_{n+1}\,\] such that \((Q_{Y},d,W)\) is a quiver with potential, where \(m_{n}:\Sigma^{\otimes n}\to\Sigma[2-n]\) denote the \(A_{\infty}\) structure maps on \(\Sigma\). ### \(A_{\infty}\) categories and the twisted objects construction of Kontsevich _Definition 2.36_.: An \(A_{\infty}\) category \(\mathcal{A}\) is a set \(\operatorname{ob}(\mathcal{A})\) together with * graded vector spaces \(\mathcal{A}(i,j)=\operatorname{Hom}_{\mathcal{A}}(i,j)\in\mathbb{K}\text{-Mod}_{ \mathbb{Z}}\) for each \(i,j\in\operatorname{ob}(\mathcal{A})\), and * maps of graded vector spaces \[m_{n}^{i_{0},...,i_{n}}:\mathcal{A}(i_{0},i_{1})\otimes_{\mathbb{K}}\ldots \otimes_{\mathbb{K}}\mathcal{A}(i_{n-1},i_{n})\to\mathcal{A}(i_{0},i_{n})[2-n]\] for each finite list of objects \(i_{0},...,i_{n}\in\operatorname{ob}(\mathcal{A})\), satisfying the natural generalization of the usual \(A_{\infty}\) relations. In particular, the \(A_{\infty}\) category structure maps for \(n=1\) endow each of the graded vector spaces \(\mathcal{A}(i,j)\) with the structure of a cochain complex; \(A_{\infty}\) categories are the natural non-strictly associative homotopical generalization of DG categories, analogous to the way in which \(A_{\infty}\) algebras generalize DG associative algebras. _Example 2.37_.: Let \(S=\oplus_{i\in I}S_{i}=\oplus_{i\in I}\mathbb{K}\) be a semisimple base ring and \(A\) an \(A_{\infty}\) algebra over \(S\). Then \(A\) can equivalently be considered as an \(A_{\infty}\) category \(\mathcal{A}\) with \[\operatorname{ob}(\mathcal{A})=I\qquad\text{and}\qquad\mathcal{A}(i,j)= \operatorname{Hom}_{\mathcal{A}}(i,j)=\ _{i}A_{j}\.\] For example, if \(A_{M}=\operatorname{Ext}_{\mathcal{D}}^{\bullet}(M,M)\) for some triangulated category \(\mathcal{D}\) and \(M=\oplus_{i\in I}M_{i}\in\mathcal{D}\), then the corresponding \(A_{\infty}\) category \(\mathcal{A}_{M}\) is given by \[\operatorname{Hom}_{\mathcal{A}_{M}}(i,j)=\operatorname{Ext}_{\mathcal{D}}^{ \bullet}(M_{i},M_{j})\,\] together with the natural higher multiplication maps induced by homotopy transfer. Similarly, generalizing the usual definition of a map of \(A_{\infty}\) algebras, we have: _Definition 2.38_.: Let \(\mathcal{A},\mathcal{B}\) be \(A_{\infty}\) categories. A functor \(f:\mathcal{A}\to\mathcal{B}\) is a map \(f:\operatorname{ob}(\mathcal{A})\to\operatorname{ob}(\mathcal{B})\) of object sets together with maps of graded vector spaces \[f_{n}^{i_{0},...,i_{n}}:\mathcal{A}(i_{0},i_{1})\otimes_{\mathbb{K}}\ldots \otimes_{\mathbb{K}}\mathcal{A}(i_{n-1},i_{n})\to\mathcal{B}(f(i_{0}),f(i_{n} ))[1-n]\] for each finite list of objects \(i_{0},...,i_{n}\in\operatorname{ob}(\mathcal{A})\), satisfying the natural generalization of the usual conditions defining a map of \(A_{\infty}\) algebras. 
Similarly, there is a notion of natural transformations of \(A_{\infty}\) functors, which makes the \(A_{\infty}\) functors \(f:\mathcal{A}\to\mathcal{B}\) into a DG category \(\operatorname{Fun}_{\infty}(\mathcal{A},\mathcal{B})\). In particular, taking \(\mathcal{B}\) to be the category DGVect of cochain complexes, we obtain the DG category of modules \(\operatorname{C}_{\infty}(\mathcal{A}):=\operatorname{Fun}_{\infty}( \mathcal{A},\operatorname{DGVect})\) over the \(A_{\infty}\) category \(\mathcal{A}\). Concretely, we have: _Definition 2.39_.: Let \(\mathcal{A}\) be an \(A_{\infty}\) category. An \(A_{\infty}\) module \(\mathcal{M}\in\operatorname{C}_{\infty}(\mathcal{A})\) over \(\mathcal{A}\) is given by * graded vector spaces \(\mathcal{M}(i)\in\mathbb{K}\text{-Mod}_{\mathbb{Z}}\) for each \(i\in\operatorname{ob}(\mathcal{A})\), and * maps of graded vector spaces (2.7) \[\rho_{n}^{i_{0},...,i_{n}}:\mathcal{A}(i_{0},i_{1})\otimes_{\mathbb{K}}\ldots \otimes_{\mathbb{K}}\mathcal{A}(i_{n-1},i_{n})\otimes\mathcal{M}(i_{0})\to \mathcal{M}(i_{n})[1-n]\] for each finite list of objects \(i_{0},...,i_{n}\in\operatorname{ob}(\mathcal{A})\), satisfying the natural generalization of the usual conditions defining an \(A_{\infty}\) module. _Remark 2.40_.: For simplicity, we will often denote the collection of maps of graded vector spaces of Equation 2.7 by simply \(\mathcal{A}^{\otimes n}\otimes\mathcal{M}\to\mathcal{M}[1-n]\). _Example 2.41_.: For each object \(j\in\mathcal{A}\), there is a module \(\mathcal{A}_{j}\in\mathrm{C}_{\infty}(\mathcal{A})\) defined concretely by \[\mathcal{A}_{j}(i)=\mathcal{A}(i,j)\] with module structure maps given by the structure maps for \(\mathcal{A}\). These modules are the natural generalization of the rank \(1\) free module over a usual \(A_{\infty}\) algebra, and in the context of Example 2.37 of a category \(\mathcal{A}\) determined by an algebra \(A\) over a semisimple base ring \(S=\oplus_{i\in I}\mathbb{K}\), the module \(\mathcal{A}_{j}\) is identified with \(A_{j}\), the rank \(1\) free module multiplied on the right by the \(j^{th}\) idempotent, under the equivalence \(\mathrm{C}_{\infty}(\mathcal{A})=\mathrm{C}_{\infty}(A)\). This construction defines a functor of \(A_{\infty}\) categories \[Y:\mathcal{A}\to\mathrm{C}_{\infty}(\mathcal{A})\qquad j\mapsto\left[ \mathcal{A}(\cdot,j):\mathcal{A}\to\mathrm{Vect}\right]\, \tag{2.8}\] called the Yoneda embedding for \(A_{\infty}\) categories. We now describe the twisted objects construction, which was first proposed in [10], generalizing the results of [1] to the \(A_{\infty}\) context, and was worked out in detail in [13]. The first step is to introduce the category of shifted objects \(\mathbb{Z}\mathcal{A}\) of \(\mathcal{A}\): _Definition 2.42_.: Let \(\mathcal{A}\) be an \(A_{\infty}\) category and define the \(A_{\infty}\) category \(\mathbb{Z}\mathcal{A}\) by \[\mathrm{ob}(\mathbb{Z}\mathcal{A})=\{(i,n)\ |\ i\in\mathrm{ob}(\mathcal{A}),n \in\mathbb{Z}\}\qquad\text{and}\qquad\mathbb{Z}\mathcal{A}((i,n),(j,m))= \mathcal{A}(i,j)[m-n]\] together with the natural extension of the \(A_{\infty}\) structure maps of \(\mathcal{A}\). _Warning 2.43_.: In the following, we will often omit the integer \(n\in\mathbb{Z}\) from the notation and write simply \(i\in\mathbb{Z}\mathcal{A}\) and \(\mathbb{Z}\mathcal{A}(i,j)\) with the associated integers left implicit. 
We now define the desired category \(\mathrm{Tw}\mathcal{A}\) of twisted objects in \(\mathcal{A}\): _Definition 2.44_.: [13] Let \(\mathcal{A}\) be an \(A_{\infty}\) category. The category \(\mathrm{Tw}\mathcal{A}\) of twisted objects over \(\mathcal{A}\) is the \(A_{\infty}\) category defined as follows: an object of the category \(\mathrm{Tw}\mathcal{A}\) is given by a finite collection of objects \[(i_{1},n_{1}),...,(i_{d},n_{d})\in\mathbb{Z}\mathcal{A}\] together with a degree zero element \[\delta=(\delta_{kl}\in\mathbb{Z}\mathcal{A}(i_{k},i_{l})[1]=\mathcal{A}(i_{k},i_{l})[n_{l}-n_{k}+1])_{k,l=1}^{d}\in\mathfrak{gl}_{d}(\mathbb{Z}\mathcal{A})[1]\] such that \(\delta_{kl}=0\) for \(k\leq l\) and moreover \(\delta\) satisfies the Maurer-Cartan equation \[\sum_{t\in\mathbb{N}}m_{t}^{\mathfrak{gl}_{d}(\mathbb{Z}\mathcal{A})}(\delta ^{\otimes t})=0\ \in\mathfrak{gl}_{d}(\mathbb{Z}\mathcal{A})\ ; \tag{2.9}\] here \(\mathfrak{gl}_{d}(\mathbb{Z}\mathcal{A})\) denotes the \(d\times d\) matrices with coefficients in \(\mathbb{Z}\mathcal{A}\) and \(m_{t}^{\mathfrak{gl}_{d}(\mathbb{Z}\mathcal{A})}\) the extension of the \(A_{\infty}\) structure maps on \(\mathcal{A}\) to \(\mathfrak{gl}_{d}(\mathbb{Z}\mathcal{A})\), given by tensoring with the usual associative product on \(\mathfrak{gl}_{d}\). The spaces of maps in \(\mathrm{Tw}\mathcal{A}\) are given by \[\mathrm{Tw}\mathcal{A}((i_{1},...,i_{d},\delta),(j_{1},...,j_{c},\eta))=\bigoplus _{k=1,...,d,\ l=1,...,c}\mathbb{Z}\mathcal{A}(i_{k},j_{l}) \tag{2.10}\] and the \(A_{\infty}\) structure maps are defined for each \(t\in\mathbb{N}\) by \[m_{t}^{\mathrm{Tw}\mathcal{A}}=\sum_{p,q\in\mathbb{N}}m_{t+p+q}^{\mathfrak{gl} (\mathbb{Z}\mathcal{A})}(\delta^{\otimes p}\otimes\mathrm{id}\otimes\eta^{ \otimes q})\, \tag{2.11}\] where \(m^{\mathfrak{gl}(\mathbb{Z}\mathcal{A})}_{t+p+q}\) denotes the analogous extension of the \(A_{\infty}\) structure maps on \(\mathbb{Z}\mathcal{A}\) to matrices with coefficients in \(\mathbb{Z}\mathcal{A}\), and we identify \(\mathrm{Tw}\mathcal{A}((i_{1},...,i_{d},\delta),(j_{1},...,j_{c},\eta))\) with the corresponding subspace of the space of \(d\times c\) matrices with coefficients in \(\mathbb{Z}\mathcal{A}\). There is a natural, strict functor of \(A_{\infty}\) categories \[Y_{1}:\mathcal{A}\to\mathrm{Tw}\mathcal{A}\qquad\text{defined by}\qquad i\mapsto((i,0),0)\,\] together with the canonical inclusions on \(\mathrm{Hom}\) spaces. 
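For orientation, note that when \(\mathcal{A}\) is in fact a DG category, so that \(m_{n}=0\) for \(n\geq 3\), the Maurer-Cartan equation 2.9 reduces to \(m_{1}(\delta)+m_{2}(\delta\otimes\delta)=0\), and the formula of Equation 2.11 for \(t=1\) reduces to \[m_{1}^{\mathrm{Tw}\mathcal{A}}(f)=m_{1}(f)+m_{2}(\delta\otimes f)+m_{2}(f\otimes\eta)\,\] the usual twisted differential on morphisms of one-sided twisted complexes. 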
Moreover, there is a functor of \(A_{\infty}\) categories \(Y_{2}:\mathrm{Tw}\mathcal{A}\to\mathrm{C}_{\infty}(\mathcal{A})\) defined by \[(i_{1},...,i_{d}\in\mathbb{Z}\mathcal{A},\ \delta=(\delta_{kl}\in\mathbb{Z} \mathcal{A}(i_{k},i_{l})[1])^{d}_{k,l=1})\mapsto\mathcal{A}^{\delta}_{i_{1},...,i_{d}}:=\left(\mathcal{A}_{i_{1},...,i_{d}}:=\bigoplus_{k=1,...,d} \mathcal{A}_{i_{k}}[n_{k}]\,\ (\rho_{t})_{t\in\mathbb{N}}\right) \tag{2.12}\] where \[\rho_{t}=\sum_{k\in\mathbb{N}}(-1)^{\frac{t(t-1)}{2}}\rho_{t,k}^{\mathcal{A}_ {i_{1},...,i_{d}}}(\mathrm{id}_{\mathcal{A}}^{\otimes t}\otimes\mathrm{id}_{ \mathcal{A}_{i_{1},...,i_{d}}}\otimes\delta^{\otimes k})\ :\mathcal{A}^{\otimes t}\otimes \mathcal{A}_{i_{1},...,i_{d}}\to\mathcal{A}_{i_{1},...,i_{d}}[1-t]\ ; \tag{2.13}\] here we let \(\mathcal{E}_{\mathcal{A}}\) denote the \(A_{\infty}\) category with objects and \(\mathrm{Hom}\) spaces the same as \(\mathrm{Tw}\mathcal{A}\) but with \(A_{\infty}\) structure maps given only by those for \(\mathfrak{gl}(\mathbb{Z}\mathcal{A})\), and note that \(\delta\in\mathcal{E}^{1}_{\mathcal{A}}((i_{1},...,i_{d}),(i_{1},...,i_{d}))\) and moreover that \(\mathcal{A}_{i_{1},...,i_{d}}\in\mathrm{C}_{\infty}(\mathcal{A},\mathcal{E}_ {\mathcal{A}})\) is naturally an \(A_{\infty}\) bimodule over \((\mathcal{A},\mathcal{E}_{\mathcal{A}})\) by Yoneda, with structure maps denoted \[\rho_{t,k}^{\mathcal{A}_{i_{1},...,i_{d}}}:\mathcal{A}^{\otimes t}\otimes \mathcal{A}_{i_{1},...,i_{d}}\otimes\mathcal{E}_{\mathcal{A}}^{\otimes k}\to \mathcal{A}_{i_{1},...,i_{d}}[1-t-k]. \tag{2.14}\] The main result about these constructions is the following: _Theorem 2.45_.: [11] The \(A_{\infty}\) Yoneda embedding of Equation 2.8 admits a factorization \[Y:\mathcal{A}\xrightarrow{\ Y_{1}\ }\mathrm{Tw}\mathcal{A}\xrightarrow{\ Y_{2}\ }\mathrm{C}_{\infty}(\mathcal{A})\,\] inducing an equivalence of triangulated categories \[H^{0}(Y_{2}):H^{0}(\mathrm{Tw}\mathcal{A})\to\mathrm{Triang}(\mathcal{A}_{i}) \tag{2.15}\] _where \(\mathrm{Triang}(\mathcal{A}_{i})\) denotes the triangulated subcategory of \(\mathrm{D}(A)\) generated by the objects \(\mathcal{A}_{i}\)._ Further, we introduce the following categories: _Definition 2.46_.: Let \(\mathcal{A}\) be an \(A_{\infty}\) category. The \(A_{\infty}\) category \(\mathrm{Tw}^{0}\mathcal{A}\) is defined as the full subcategory of \(\mathrm{Tw}\mathcal{A}\) on objects \(((i_{1},...,i_{d}),\delta)\in\mathrm{Tw}\mathcal{A}\) such that for each \(k=1,...,d\) the corresponding object \(i_{k}\) lies in \(\mathcal{A}\subset\mathbb{Z}\mathcal{A}\), that is, has associated integer \(n_{k}=0\). Concretely, \(\mathrm{Tw}^{0}\mathcal{A}\) is the full subcategory on objects whose image under the functor \(Y_{2}\) has underlying vector space given by a direct sum of the indecomposable summands of the rank \(1\) free module, that is \[\mathcal{A}_{i_{1},...,i_{d}}=\bigoplus_{k=1,...,d}\mathcal{A}_{i_{k}}.\] _Definition 2.47_.: Let \(M_{i}\in\mathcal{D}\) be a collection of objects in a triangulated category \(\mathcal{D}\). Then \(\mathrm{Filt}(M_{i})\) is defined as the full subcategory of \(\mathcal{D}\) on objects admitting a filtration with subquotients in the collection \(M_{i}\). 
Then in addition, we have _Corollary 2.48_.: [10] The equivalence of Equation 2.15 induces an equivalence of full subcategories \[H^{0}(\mathrm{Tw}^{0}\mathcal{A})\xrightarrow{\cong}\mathrm{Filt}(\mathcal{A} _{i}) \tag{2.16}\] Applying the Morita theory results of Section 2.2, we also obtain: _Corollary 2.49_.: [10] Let \(\mathcal{A}_{M}\) be the \(A_{\infty}\) category determined by the \(A_{\infty}\) algebra \(A_{M}=\mathrm{Ext}_{\mathcal{D}}^{\bullet}(M,M)\) for \(M=\oplus_{i\in I}M_{i}\in\mathcal{D}\) a direct sum of objects in a triangulated category \(\mathcal{D}\), as in Example 2.37. Then the composition of the equivalence of triangulated categories of Equation 2.15 above with that of Equation 2.1 gives an equivalence \[H^{0}(\mathrm{Tw}\mathcal{A}_{M})\xrightarrow{\cong}\mathrm{Triang}(M_{i}) \qquad\text{and in turn}\qquad H^{0}(\mathrm{Tw}^{0}\mathcal{A}_{M}) \xrightarrow{\cong}\mathrm{Filt}(M_{i})\.\] We now describe the corresponding graded variant of the twisted objects category and its functor to \(\mathrm{C}_{\infty}(\mathcal{A})\): _Definition 2.50_.: Let \(\mathcal{A}\) be a graded \(A_{\infty}\) category and define the graded \(A_{\infty}\) category \(\mathbb{Z}\mathcal{A}\) by \[\mathrm{ob}(\mathbb{Z}\mathcal{A})=\{(i,n,p)\ |\ i\in\mathrm{ob}(\mathcal{A}),n,p \in\mathbb{Z}\}\qquad\text{and}\qquad\mathbb{Z}\mathcal{A}((i,n,p),(j,m,q))= \mathcal{A}(i,j)[m-n]\langle q-p\rangle\] together with the natural extension of the graded \(A_{\infty}\) structure maps of \(\mathcal{A}\). _Warning 2.51_.: As above, we will often omit the integers \(n,p\in\mathbb{Z}\) from the notation and write simply \(i\in\mathbb{Z}\mathcal{A}\) and \(\mathbb{Z}\mathcal{A}(i,j)\) with the associated integers left implicit. Following Definition 2.44, we make the following definition: _Definition 2.52_.: Let \(\mathcal{A}\) be a graded \(A_{\infty}\) category. The category \(\mathrm{tw}\mathcal{A}\) of twisted objects over \(\mathcal{A}\) is the graded \(A_{\infty}\) category defined as follows: an object of the category \(\mathrm{tw}\mathcal{A}\) is given by a finite collection of objects \[(i_{1},n_{1},p_{1}),...,(i_{d},n_{d},p_{d})\in\mathbb{Z}\mathcal{A}\,\] together with a degree zero element \[\delta=(\ \delta_{kl}\in\mathbb{Z}\mathcal{A}(i_{k},i_{l})[1]=\ \mathcal{A}(i_{k},i_{l})[n_{l}-n_{k}+1]\langle p_{l}-p_{k}\rangle\ )^{d}_{k,l=1}\ \in\ \mathfrak{gl}_{d}(\mathbb{Z}\mathcal{A})[1]\] such that \(\delta_{kl}=0\) for \(k\leq l\) and moreover \(\delta\) satisfies the Maurer-Cartan equation \[\sum_{t\in\mathbb{N}}m_{t}^{\mathfrak{gl}_{d}(\mathbb{Z}\mathcal{A})}(\delta^ {\otimes t})=0\.\] The graded vector spaces of homomorphisms and \(A_{\infty}\) category structure maps are defined as in Equations 2.10 and 2.11, respectively. 
There is an analogous natural functor of graded \(A_{\infty}\) categories \(Y_{2}:\mathrm{tw}\mathcal{A}\to\mathrm{C}_{\infty}(\mathcal{A})\) to the category of graded \(A_{\infty}\) modules over \(\mathcal{A}\), defined by \[(i_{1},...,i_{d}\in\mathbb{Z}\mathcal{A},\ \delta=(\delta_{kl})^{d}_{k,l=1}) \mapsto\mathcal{A}^{\delta}_{i_{1},...,i_{d}}:=\left(\mathcal{A}_{i_{1},...,i_ {d}}:=\bigoplus_{k=1,...,d}\mathcal{A}_{i_{k}}[n_{k}]\langle p_{k}\rangle\,\ (\rho_{t})_{t \in\mathbb{N}}\right)\,\] as in Equation 2.12, where \[\rho_{t}=\sum_{k\in\mathbb{N}}(-1)^{\frac{t(t-1)}{2}}\rho_{t,k}^{\mathcal{A}_{i_{1 },\ldots,i_{d}}}(\operatorname{id}_{\mathcal{A}}^{\otimes t}\otimes \operatorname{id}_{\mathcal{A}_{i_{1},\ldots,i_{d}}}\otimes\delta^{\otimes k}) \ :\mathcal{A}^{\otimes t}\otimes\mathcal{A}_{i_{1},\ldots,i_{d}}\to \mathcal{A}_{i_{1},\ldots,i_{d}}[1-t]\, \tag{2.17}\] as in Equation 2.13, and \(\rho_{t,k}^{\mathcal{A}_{i_{1},\ldots,i_{d}}}:\mathcal{A}^{\otimes t}\otimes \mathcal{A}_{i_{1},\ldots,i_{d}}\otimes\mathcal{E}_{\mathcal{A}}^{\otimes k} \to\mathcal{A}_{i_{1},\ldots,i_{d}}[1-t-k]\) are the \(A_{\infty}\) category bimodule structure maps, as in Equation 2.14. _Remark 2.53_.: All statements in the preceding paragraph are given in terms of the natural, unsheared grading, in the sense of Remark 2.30. We will continue to work in these conventions throughout this section, explicitly adding the superscript \(\mathcal{A}^{\text{sh}}\) to denote the cohomologically sheared variants when necessary. We have the following natural analogue of Theorem 2.45 in this case: _Corollary 2.54_.: The functor \(Y_{2}:\operatorname{tw}\mathcal{A}\to\operatorname{C}_{\infty}(\mathcal{A})\) induces an equivalence of triangulated categories \[H^{0}(Y_{2}):H^{0}(\operatorname{tw}\mathcal{A})\to\operatorname{triang}( \mathcal{A}_{i})\, \tag{2.18}\] where we recall that \(\operatorname{triang}(\mathcal{A}_{i})\) denotes the minimal triangulated subcategory of \(\operatorname{D}_{\mathbb{Z}}(A)\) containing the objects \(\mathcal{A}_{i}\langle k\rangle\) for \(i\in I\) and \(k\in\mathbb{Z}\). _Definition 2.55_.: Let \(\mathcal{A}\) be a graded \(A_{\infty}\) category. The graded \(A_{\infty}\) category \(\operatorname{tw}^{0}\mathcal{A}\) is defined as the full subcategory on objects \(((i_{1},...,i_{d}),\delta)\in\operatorname{tw}\mathcal{A}\) such that each of the associated integers \(n_{k}=0\) for \(k=1,...,d\). Concretely, \(\operatorname{tw}^{0}\mathcal{A}\) is the full subcategory on objects whose image under the functor \(Y_{2}\) has underlying graded vector space given by a direct sum of graded shifts of the indecomposable summands of the rank \(1\) free module, that is \[\mathcal{A}_{i_{1},...,i_{d}}=\bigoplus_{k=1,...,d}\mathcal{A}_{i_{k}}\langle p _{k}\rangle\,\qquad\text{or equivalently}\qquad\mathcal{A}_{i_{1},...,i_{d}}^{ \text{sh}}=\bigoplus_{k=1,...,d}\mathcal{A}_{i_{k}}^{\text{sh}}[p_{k}] \langle-p_{k}\rangle,\] in the cohomologically sheared conventions. _Definition 2.56_.: Let \(M_{i}\in\mathcal{C}\) be a collection of objects in a mixed category \(\mathcal{C}\). The category \(\operatorname{flt}(M_{i})\) is defined as the full subcategory of \(\mathcal{C}\) on objects admitting a filtration with subquotients in the collection of objects \(M_{i}\langle k\rangle\in\mathcal{C}\) for \(i\in I\) and \(k\in\mathbb{Z}\). 
We have the following natural analogue of Corollary 2.48: _Corollary 2.57_.: The equivalence of Equation 2.18 induces an equivalence of the full subcategories \[H^{0}(\operatorname{tw}^{0}\mathcal{A})\xrightarrow{\cong}\operatorname{flt}( \mathcal{A}_{i})\.\] ## 3. Perverse coherent sheaves on Calabi-Yau threefolds and quivers with potential ### Overview of Section 3 In Section 3, we explain the proof of a folklore theorem relating coherent sheaves on a certain class of toric Calabi-Yau threefold resolutions \(Y\to X\) with representations of an unframed quiver with potential \((Q_{Y},W_{Y})\), which follows from results of Bridgeland [10] and Van den Bergh [11] together with some general facts about triangulated categories. This is a precursor to Theorem A from the introduction, and is simply the special case \(M=0\). In Section 3.2, we explain the basic hypotheses on the resolution \(Y\to X\) used throughout this paper, and two corresponding descriptions of the derived category of complexes of coherent sheaves induced by two natural collections of generators. In Section 3.3, we explain that the equivalence between these two descriptions is an example of Koszul duality analogous to that studied in [1], and deduce the corresponding equivalences of hearts following _loc. cit._ and the results of [10] and [1], assuming the tilting algebra is Koszul. In Section 3.4, we use this perspective to define a canonical monad presentation of the category of perverse coherent sheaves, and in Section 3.5, we generalize the results of the previous two sections to the case that the tilting algebra is not necessarily Koszul and correspondingly the Ext algebra of the simple objects is \(A_{\infty}\). In Section 3.6, we apply these results to deduce the desired precursor to Theorem A, and in Section 3.7 we explain the results in several concrete examples. Finally, in Section 3.8 we explain the relationship of this construction to the well-known arguments using the Beilinson spectral sequence induced by a resolution of the diagonal. ### Tilting objects on toric Calabi-Yau threefolds In this subsection, we fix the primary objects of interest and hypotheses required for the main results of the paper. Let \(X\) be an affine, toric Calabi-Yau threefold singularity, let \(T\) denote the associated torus, and let \(Y\stackrel{\pi}{\to}X\) be a toric resolution of singularities, such that

1. the fibres of \(\pi\) are of dimension \(\leq 1\), and
2. \(\pi_{*}\mathcal{O}_{Y}\cong\mathcal{O}_{X}\).

We assume that \(X\) has a (unique) \(T\)-fixed point \(x\in X(\mathbb{C})\), and let \(C=Y\times_{X}\{x\}\) be the scheme theoretic fibre of \(\pi\) over \(x\). Under these hypotheses, we have that \(H^{0}(C,\mathcal{O}_{C})=\mathbb{K}\), \(C\) is Cohen-Macaulay, and \(C_{\mathrm{red}}\) is either a point or an arithmetic genus zero union of projective lines with normal crossings intersections. We let \[C=\bigcup_{i\in I_{+}}C_{i}\] denote the decomposition of \(C\) into irreducible components \(C_{i}\), the index set of which we denote \(I_{+}\), and let \(\iota:C\to Y\) and \(\iota_{i}:C_{i}\to Y\) denote the inclusion maps. Although many statements hold without this hypothesis, we also assume

3. the multiplicity of each \(C_{i}\) in \(C\) is one.

Let \(\hat{X}=X_{x}^{\wedge}\) be the formal completion of \(X\) at \(x\), and \(\hat{Y}=Y_{C}^{\wedge}\) the formal completion of \(Y\) along \(C\). Then we have: _Proposition 3.1_.: There are natural isomorphisms \[\operatorname{Pic}(\hat{Y})\cong\operatorname{Pic}(C)\cong\mathbb{Z}^{I_{+}}\.\] Proof.: This follows from the proof of Lemma 3.4.3 in [1]. 
In particular, there exist line bundles \(\hat{E}_{i}\in\mathrm{Pic}(\hat{Y})\) for each \(i\in I_{+}\) with the property that \[\deg\iota_{i}^{*}\hat{E}_{j}=\delta_{ij}\.\] In addition, we let \(\hat{E}_{0}=\mathcal{O}_{\hat{Y}}\), \(I=I_{+}\cup\{0\}\) and \(\hat{E}=\oplus_{i\in I}\hat{E}_{i}\). We fix once and for all a choice of \(T\)-equivariant structure on \(\hat{E}_{i}\) for each \(i\in I\), and moreover we fix extensions to global line bundles \[E_{i}\in\mathrm{Pic}(Y)^{T}\qquad\text{for each $i\in I$, and}\qquad E=\bigoplus_{i\in I}E_{i}\ \in\mathrm{Coh}(Y)^{T}\,\] recalling that every line bundle on a smooth toric variety admits a \(T\)-equivariant structure. In Section 2.3, we recalled the existence of an exotic t-structure on \(\operatorname{D}^{b}\!\operatorname{Coh}(Y)\), defined in terms of \(\pi:Y\to X\) satisfying a subset of the hypotheses of the present setting. The heart of this t-structure was the abelian category \(\operatorname{PervCoh}(Y)=\operatorname{PervCoh}(Y/X)\) of perverse coherent sheaves on \(Y\) (relative to \(\pi:Y\to X\)), in the sense of Definition 2.17. _Theorem 3.2_.: [Ber04b] The object \(E\) is a projective generator for \(\operatorname{PervCoh}(Y)\), and similarly on \(\hat{Y}\) the object \(\hat{E}\) is a projective generator for \(\operatorname{PervCoh}(\hat{Y})\). Proof.: This is a special case of Proposition 3.2.5 and Theorem 3.5.5, respectively, in [Ber04b]. The object \(E\) is the vector bundle appearing in the statement of Theorem 2.10; it is evidently compact and thus defines a classical tilting object, in the sense of Definition 2.5, so that letting \[\Lambda=\operatorname{Hom}_{\operatorname{D}^{b}\!\operatorname{Coh}(Y)}(E,E) \qquad\text{and}\qquad\hat{\Lambda}=\operatorname{Hom}_{\operatorname{D}^{b} \!\operatorname{Coh}(\hat{Y})}(\hat{E},\hat{E}),\] we have: _Corollary 3.3_.: There are triangle equivalences \[\operatorname{DQCoh}(Y)\xrightarrow{\cong}\operatorname{D}(\Lambda)\qquad \text{and}\qquad\operatorname{D}^{b}\!\operatorname{Coh}(Y)\xrightarrow{ \cong}\operatorname{D}_{\operatorname{Perf}}(\Lambda)\] intertwining the forgetful functor \(\operatorname{D}(\Lambda)\to\operatorname{D}(\mathbb{K})\) with \(\operatorname{Hom}_{\operatorname{DQCoh}(Y)}(E,\cdot)\), and analogous equivalences for \(\hat{Y}\) and \(\hat{\Lambda}\). Proof.: The claim follows from applying the Morita theory from [Kel] recalled in Theorem 2.6 to the object \(E\in\operatorname{DQCoh}(Y)\), and similarly for \(\hat{E}\in\operatorname{DQCoh}(\hat{Y})\). The induced \(T\)-equivariant structure on \(E\) makes \(\Lambda\) into a trigraded algebra, and we let \(\operatorname{D}_{\operatorname{perf}}(\Lambda)\) denote the thick subcategory of the derived category \(\operatorname{D}_{\mathbb{Z}^{3}}(\Lambda)\) of trigraded modules generated by \(\Lambda\) and its shifts, generalizing the notation introduced in Section 2.1. Then we have: _Corollary 3.4_.: There are triangle equivalences \[\operatorname{DQCoh}(Y)^{T}\xrightarrow{\cong}\operatorname{D}_{\mathbb{Z}^{3 }}(\Lambda)\qquad\text{and}\qquad\operatorname{D}^{b}\!\operatorname{Coh}(Y)^{T} \xrightarrow{\cong}\operatorname{D}_{\operatorname{perf}}(\Lambda)\] intertwining the forgetful functor \(\operatorname{D}_{\mathbb{Z}^{3}}(\Lambda)\to\operatorname{D}(\mathbb{K})\) with \(\operatorname{Hom}_{\operatorname{DQCoh}(Y)}(E,\cdot)\). Proof.: The claim follows by including the gradings induced by equivariant structures in the equivalences of Corollary 3.3. 
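For instance, in the familiar example of the resolved conifold \(Y=\operatorname{Tot}_{\mathbb{P}^{1}}(\mathcal{O}(-1)\oplus\mathcal{O}(-1))\to X=\operatorname{Spec}\mathbb{K}[x_{1},x_{2},x_{3},x_{4}]/(x_{1}x_{2}-x_{3}x_{4})\), the fibre \(C=C_{1}\cong\mathbb{P}^{1}\) is irreducible, \(I=\{0,1\}\), and one may take \(E=\mathcal{O}_{Y}\oplus\mathcal{O}_{Y}(1)\) with \(\mathcal{O}_{Y}(1)\) the pullback of \(\mathcal{O}_{\mathbb{P}^{1}}(1)\). The resulting algebra \(\Lambda=\operatorname{End}(E)\) is the well-known conifold algebra: the path algebra of the quiver with two vertices, two arrows \(a_{1},a_{2}\) from one vertex to the other and two arrows \(b_{1},b_{2}\) back, subject to the relations generated by the potential \[W=a_{1}b_{1}a_{2}b_{2}-a_{1}b_{2}a_{2}b_{1}\.\] 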
We define a new family of objects \(F_{i}\in\operatorname{PervCoh}(Y)\) for each \(i\in I\) given by \[F_{0}=\iota_{*}\mathcal{O}_{C}\qquad\text{and}\qquad F_{i}=\iota_{i*} \mathcal{O}_{C_{i}}(-1)[1]\,\] for \(i\in I_{+}\). It is straightforward to check that these objects lie in the heart of the perverse coherent t-structure, by Corollary 2.18. _Proposition 3.5_.: The objects \(E_{i}\), \(F_{j}\) for \(i,j\in I\) satisfy \[\operatorname{Hom}_{\operatorname{D}^{b}\!\operatorname{Coh}(Y)}(E_{i},F_{j} )=\begin{cases}\mathbb{K}&\text{ if $i=j$, and}\\ 0&\text{ if $i\neq j$,}\end{cases}\] similarly for \(\hat{Y}\), and the objects \(F_{i}\) for \(i\in I\) are the unique simple objects in \(\operatorname{PervCoh}(\hat{Y})\). Proof.: This follows from the proof of Proposition 3.5.7 in [Ber04b]. We fix once and for all a \(T\)-equivariant structure on each object \(F_{i}\), compatible with those on \(E_{i}\) in the sense that the preceding proposition holds as graded vector spaces, that is, such that the one dimensional \(\operatorname{Hom}\) space is in graded degree zero. Further, we define \[F=\oplus_{i\in I}F_{i}\ \in\operatorname{D}^{b}\mathrm{Coh}(Y)^{T}. \tag{3.1}\] Note that by Proposition 3.5, the images of the objects \(F_{i}\) and \(F\in\operatorname{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(\hat{Y})\), and their \(T\)-equivariant enhancements in \(\operatorname{D}^{b}\mathrm{Coh}(Y)^{T}\), under the equivalences of Corollaries 3.3 and 3.4, respectively, define the one-dimensional simple modules \[S_{i}=\mathbb{K}\ \in\operatorname{D}_{\mathrm{Fd}}(\hat{\Lambda})\qquad\text{ and their direct sum}\qquad S=\oplus_{i}S_{i}\ \in\operatorname{D}_{\mathrm{Fd}}(\hat{\Lambda})\,\] and their analogues in \(\operatorname{D}_{\mathrm{fd}}(\Lambda)\). This allows us to deduce the following descriptions of the thick subcategories generated by \(F\): Let \(\mathrm{Coh}_{\mathrm{cs}}(Y)\) denote the full subcategory of compactly supported coherent sheaves, and \(\operatorname{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(Y)\) the derived category of complexes with compactly supported cohomology. _Proposition 3.6_.: The equivalences of Corollaries 3.3 and 3.4 induce equivalences \[\operatorname{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(\hat{Y})\cong\mathrm{Thick}(F) \overset{\cong}{\to}\operatorname{D}_{\mathrm{Fd}}(\hat{\Lambda})\qquad \text{and}\qquad\operatorname{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(Y)^{T}\cong \operatorname{thick}(F)\overset{\cong}{\to}\operatorname{D}_{\mathrm{fd}}( \Lambda)\,\] respectively, intertwining the forgetful functor \(\operatorname{D}(\Lambda)\to\operatorname{D}(\mathbb{K})\) with \(\operatorname{Hom}_{\operatorname{DQCoh}(Y)}(E,\cdot)\). Proof.: The image under the natural functor \(\operatorname{Hom}_{\operatorname{DQCoh}(\hat{Y})}(\hat{E},\cdot)\) of any coherent sheaf with proper support is finite dimensional, so the image of \(\operatorname{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(\hat{Y})\) under the equivalence of Corollary 3.3 is contained in \(\operatorname{D}_{\mathrm{Fd}}(\hat{\Lambda})\), and similarly for the graded case. The image of the thick subcategory \(\mathrm{Thick}(F)\) generated by \(F\) is identified with \(\mathrm{Thick}(S)\subset\operatorname{D}_{\mathrm{Fd}}(\hat{\Lambda})\), but this inclusion is an equivalence as every finite dimensional \(\hat{\Lambda}\) module admits a Jordan–Hölder filtration with simple subquotients, and the summands \(S_{i}\) of \(S\) are precisely the simple modules in \(\hat{\Lambda}\)-\(\operatorname{Mod}_{\mathrm{Fg}}\) by the proof of Proposition 3.5. 
It follows that \(\mathrm{Thick}(F)\) is equivalent to \(\operatorname{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(\hat{Y})\) and its image is equivalent to \(\operatorname{D}_{\mathrm{Fd}}(\hat{\Lambda})\), and similarly for the graded case. The object \(F=\oplus_{i\in I}F_{i}\in\operatorname{D}^{b}\mathrm{Coh}(Y)^{T}\) is evidently compact, so that letting \[\Sigma=\operatorname{Hom}_{\operatorname{D}^{b}\mathrm{Coh}(Y)}(F,F),\] equipped with the grading defined by the equivariant structure on \(F\), we have: _Corollary 3.7_.: There are equivalences of triangulated categories \[\operatorname{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(\hat{Y})\overset{\cong}{ \to}\operatorname{D}_{\mathrm{Perf}}(\Sigma)\qquad\text{and}\qquad \operatorname{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(Y)^{T}\overset{\cong}{\to} \operatorname{D}_{\mathrm{perf}}(\Sigma)\,\] intertwining the forgetful functor \(\operatorname{D}(\Sigma)\to\operatorname{D}(\mathbb{K})\) with \(\operatorname{Hom}_{\operatorname{DQCoh}(Y)}(F,\cdot)\). Proof.: This follows from the Morita theory from [Kel] recalled in Theorem 2.6 applied to the object \(F\), together with the identifications of \(\mathrm{Thick}(F)\) and \(\mathrm{thick}(F)\) with the compactly supported derived categories in Proposition 3.6. ### Koszul duality patterns in equivariant enumerative geometry The relationship between the two descriptions of \(\operatorname{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(Y)\) given in Proposition 3.6 and Corollary 3.7, in terms of modules over the associative algebras \(\Lambda\) and \(\Sigma\), is an example of Koszul duality. These two algebras are the analogues of the endomorphisms of the projective generator and the Ext algebra of the direct sum of simple modules in the BGG category \(\mathcal{O}\), respectively, whose Koszul duality was studied in the seminal paper of Beilinson-Ginzburg-Soergel [1]. Note that the module structure map for the object \(S\in\mathrm{D}_{\mathrm{Fd}}(\hat{\Lambda})\) (and its graded enhancement \(S\in\mathrm{D}_{\mathrm{fd}}(\Lambda)\)) defines an augmentation \(\epsilon:\hat{\Lambda}\to S\) of \(\hat{\Lambda}\) considered as an algebra over the base ring \(\hat{\Lambda}_{0}=S\) (and similarly a graded augmentation of \(\Lambda\)). Moreover, we can identify the Koszul dual algebras of \(\hat{\Lambda}\) and \(\Lambda\) with \(\Sigma\), as we have \[\hat{\Lambda}^{!}=\mathrm{Hom}_{\mathrm{D}(\hat{\Lambda})}(S,S)\cong\mathrm{ Hom}_{\mathrm{D}^{b}\mathrm{Coh}(\hat{Y})}(F,F)=\hat{\Sigma}\qquad\text{and} \qquad\Lambda^{!}\cong\Sigma\] similarly, where \(\hat{\Sigma}\) denotes \(\Sigma\) as a plain (ungraded) algebra while \(\Sigma\) denotes the algebra equipped with its natural trigrading defined by the \(T\)-equivariant structure on \(F\). In terms of this notation, we have: _Proposition 3.8_.: There are canonical quasi-isomorphisms \[\hat{\Sigma}^{!}\cong\hat{\Lambda}\qquad\text{and}\qquad\Sigma^{!}\cong \Lambda\.\] _Warning 3.9_.: We will continue to use the notation \(\Sigma\) for both the graded and ungraded algebra; the notation for the graded and ungraded derived categories already makes this distinction, and we will explicitly indicate the grading whenever there is an ambiguity. Proof.: In the graded case, both \(\Sigma\) and \(\Lambda\) are strongly locally finite, so that by Theorem 2.26 we have \(\Sigma^{!}\cong(\Lambda^{!})^{!}\cong\Lambda\). 
In the ungraded case, by Proposition 2.25 we have \[\Sigma^{!}\cong(\otimes_{S}^{\bullet}(\bar{\Sigma}[1]))^{\vee}\cong\otimes_{S }^{\bullet}(\bar{\Sigma}^{\vee}[-1])\,\] that is, the bar complex presentation of the Koszul dual of \(\Sigma\) (with the trivial grading) is given by the completion of the tensor algebra with respect to the natural augmentation ideal. In the graded case, the augmentation ideal agrees with the ideal \(\Lambda_{>0}\) of strictly positively graded elements, and thus the completion agrees with that induced by completion at the unique fixed point \(x\in X(\mathbb{C})\), as desired. Finally, we note that the \(T\)-equivariant structure on \(E\) can be chosen so that \(\Lambda\) is strongly locally finite, in the sense of Definition 2.1 of [10], as a trigraded algebra. We have: _Corollary 3.10_.: There are mutually inverse equivalences of categories \[\mathrm{Hom}_{\hat{\Lambda}}(S,\cdot):\;\mathrm{D}_{\mathrm{Fd}}(\hat{ \Lambda})\rightleftarrows\mathrm{D}_{\mathrm{Perf}}(\Sigma)\;:(\cdot)\otimes_{\Sigma}S, \tag{3.2}\] \[\mathrm{Hom}_{\Lambda}(S,\cdot):\;\mathrm{D}_{\mathrm{fd}}(\Lambda) \rightleftarrows\mathrm{D}_{\mathrm{perf}}(\Sigma)\;:(\cdot)\otimes_{\Sigma}S, \tag{3.3}\] \[(\cdot)\otimes_{\hat{\Lambda}}S:\;\mathrm{D}_{\mathrm{Perf}}(\hat{ \Lambda})\rightleftarrows\mathrm{D}_{\mathrm{Fd}}(\Sigma)\;:\mathrm{Hom}_{\Sigma}(S,\cdot),\text{ and} \tag{3.4}\] \[(\cdot)\otimes_{\Lambda}S:\;\mathrm{D}_{\mathrm{perf}}(\Lambda) \rightleftarrows\mathrm{D}_{\mathrm{fd}}(\Sigma)\;:\mathrm{Hom}_{\Sigma}(S,\cdot). \tag{3.5}\] Proof.: In the graded case, both \(\Sigma\) and \(\Lambda\) are strongly locally finite by construction, so that Theorem 2.28 applies with \(A=\Lambda\) or \(A=\Sigma\) to give the two claimed equivalences of categories of graded modules. The first equivalence in the ungraded case follows from applying the same result with the weaker hypotheses of Remark 2.29, noting that \(\Lambda\) is always a plain associative algebra by projectivity of \(E\) in \(\mathrm{PervCoh}(Y)\), and that \(\Sigma\) is finite dimensional and thus locally finite since \(F\) has proper support. The latter ungraded equivalence follows again by applying DG Morita theory to the object \(S\in\mathrm{D}_{\mathrm{Fd}}(\Sigma)\) since \(\hat{\Lambda}=\mathrm{Hom}_{\Sigma}(S,S)\) by Proposition 3.8 above. Note that we have an inclusion of full subcategories \[\mathrm{D}_{\mathrm{fd}}(\Lambda)\subset\mathrm{D}_{\mathrm{perf}}(\Lambda)\qquad \text{corresponding to}\qquad\mathrm{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(Y)^{T}\subset \mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\,\] and similarly \(\mathrm{D}_{\mathrm{perf}}(\Sigma)\subset\mathrm{D}_{\mathrm{fd}}(\Sigma)\), since \(\Sigma\) is finite dimensional. However, the equivalences of Equations 3.3 and 3.5 evidently do not make the natural diagram commute. This minor technical obstruction is resolved by introducing the Nakayama functor, the t-exact auto-equivalence defined by \[(\cdot)^{N}:=(\cdot)\otimes_{\Sigma}\mathrm{Hom}_{S}(\Sigma,S):\mathrm{D}_{ \mathrm{fd}}(\Sigma)\to\mathrm{D}_{\mathrm{fd}}(\Sigma)\, \tag{3.6}\] which has the property that it defines an equivalence between the full subcategories of (complexes of) projective and injective objects; see Proposition 7 of [10] and Lemma 4.5.6 of [11]. 
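For instance, on the rank \(1\) free module the definition gives \[\Sigma^{N}=\Sigma\otimes_{\Sigma}\operatorname{Hom}_{S}(\Sigma,S)\cong\operatorname{Hom}_{S}(\Sigma,S)=\Sigma^{\vee}\,\] so that the Nakayama functor sends each indecomposable projective \(\Sigma_{i}\) to its linear dual \(\Sigma_{i}^{\vee}\), the corresponding indecomposable injective; this is the prototypical instance of the exchange of projectives and injectives just described. 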
The main result of this subsection, which summarizes several of the results above and their compatibilities, is the following: _Theorem 3.11_.: The diagram of triangulated categories (3.7) has horizontal arrows given by mutually inverse triangle equivalences and vertical arrows given by inclusions of thick subcategories, and admits canonical commutativity data, and similarly for the diagram (3.8). To complete the proof, we need the following lemma: _Lemma 3.12_.: The diagrams of triangulated categories (3.9) admit canonical commutativity data. Proof.: The inverse equivalences commute up to the canonical natural isomorphism \[\mathrm{Hom}_{\Sigma}(S,\cdot)\circ N =\mathrm{Hom}_{\Sigma}(S,(\cdot)\otimes_{\Sigma}\mathrm{Hom}_{S}( \Sigma,S))\] \[\xrightarrow{\cong}(\cdot)\otimes_{\Sigma}\mathrm{Hom}_{\Sigma}(S, \mathrm{Hom}_{S}(\Sigma,S))\] \[\xrightarrow{\cong}(\cdot)\otimes_{\Sigma}\mathrm{Hom}_{S}(S \otimes_{\Sigma}\Sigma,S)\] \[\xrightarrow{\cong}(\cdot)\otimes_{\Sigma}S\.\] Proof.: (of Theorem 3.11) The arrows in the bottom rows define mutually inverse equivalences of categories by Corollaries 3.3 and 3.4, as well as Equations 3.4 and 3.5 of Corollary 3.10. Commutativity of the squares on the right in each diagram follows from the preceding lemma. This in turn implies commutativity of the squares on the left of each diagram, by Proposition 3.6 and Corollary 3.7. It follows that the vertical arrows must all be inclusions of thick subcategories. Thus, we have obtained three equivalent descriptions of the same triangulated category (3.10), where \(P_{i}=\Lambda_{i}\) is the projective \(\Lambda\) module given by the \(i^{th}\) direct summand of \(\Lambda\) corresponding to \(E_{i}\). By abuse of notation, we have also used \(S_{i}\) to denote the simple \(\Sigma\) module corresponding to \(P_{i}\) under the latter equivalence, since both underlying vector spaces are given by \(S_{i}=\mathbb{K}\) and their direct sum \(S=\oplus_{i}S_{i}\) is the common augmentation module used to define the Koszul duality equivalence. Finally, the object \[I_{i}=\Sigma_{i}^{N}=\operatorname{Hom}_{S}(\Sigma_{i},S)\] is the injective module given by the linear dual of \(\Sigma_{i}\), where the latter denotes the \(i^{th}\) direct summand of \(\Sigma\) in the decomposition \[\Sigma=\bigoplus_{i\in I}\Sigma_{i}=\bigoplus_{i\in I}\operatorname{Hom}_{ \Lambda}(S_{i},S)\.\] It is natural to ask which abelian subcategory of the derived category of \(\Sigma\) modules is equivalent to the usual heart of \(\operatorname{D}_{\operatorname{perf}}(\Lambda)\) under the Koszul duality equivalence, and in turn to perverse coherent sheaves on \(Y\) relative to \(X\) under the equivalence of [10, 11], recalled in Theorem 2.19 above. We now describe this category, following the general approach of [1, 1]. For the initial explanation, we will assume that \(\Lambda\) is Koszul, and thus \(\Sigma\) is a \((\mathbb{Z}\times\mathbb{Z}^{3})\)-graded associative algebra, with no non-trivial higher \(A_{\infty}\) multiplications. In the succeeding section, we generalize these results to the case where \(\Lambda\) is not necessarily Koszul and correspondingly \(\Sigma\) is a general \(\mathbb{Z}^{3}\)-graded \(A_{\infty}\) algebra. Also, for simplicity of notation, we will often restrict the grading along a cocharacter \(\mathbb{C}^{\times}\to T\) and state the results using the classical language of graded algebras, suppressing the trigrading unless it is necessary. _Warning 3.13_.: For the remainder of this section, we will assume that \(\Lambda\) is Koszul. 
_Warning 3.14_.: Throughout this paper, we will often restrict the trigrading along a cocharacter \(\mathbb{C}^{\times}\to T\) for notational simplicity. Note that under the Koszulity hypothesis of Warning 3.13, \(\Sigma\) is concentrated in bi-degree \((k,-k)\) for \(k\in\mathbb{N}\), and thus its cohomologically sheared avatar \(\Sigma^{\operatorname{sh}}\) is concentrated in cohomological degree \(0\), in keeping with Remark 2.30. The resulting algebra \(\Sigma^{\operatorname{sh}}\) is given by simply interpreting the cohomological degree of the above Ext algebra as an abstract graded degree, as in the classical setting of [1]. _Warning 3.15_.: In keeping with Remark 2.30, we will often express \(\Sigma\) and its DG modules in terms of the cohomologically sheared grading, but sometimes omit the superscript \((\cdot)^{\mathrm{sh}}\) by abuse of notation. First, recall that the grading on \(\Sigma\) endows the abelian category of finite dimensional graded \(\Sigma\) modules \(\Sigma\)-\(\mathrm{Mod}_{\mathrm{fd}}\) with the structure of a mixed category, in the following sense: _Definition 3.16_.: A _mixed category_ is an artinian category \(\mathcal{C}\) together with an integer-valued function called the _weight_ \(w:\mathrm{Irr}(\mathcal{C})\to\mathbb{Z}\) on the set of equivalence classes \(\mathrm{Irr}(\mathcal{C})\) of simple objects of \(\mathcal{C}\), such that for any two simple objects \(M,N\in\mathcal{C}\), we have \[w(M)\leq w(N)\qquad\text{implies}\qquad\mathrm{Ext}^{1}_{\mathcal{C}}(M,N)=0\.\] We recall that a category \(\mathcal{C}\) is called _artinian_ if it is abelian and every object \(M\in\mathcal{C}\) admits a finite filtration with simple subquotients. _Definition 3.17_.: A complex \(P^{\bullet}=\oplus_{i}P^{i}[-i]\) of objects in a mixed category \(\mathcal{C}\) is _linear_ if for each \(i\), the simple quotient of each indecomposable summand of \(P^{i}\) is pure of weight \(i\). Let \(\mathrm{LCP}(\mathcal{C})\) denote the full subcategory of \(D^{b}(\mathcal{C})\) of linear complexes of projective objects. In general, \(\mathrm{LCP}(\mathcal{C})\) is an abelian subcategory of \(\mathrm{D}(\mathcal{C})\), with simple objects given by indecomposable projectives of \(\mathcal{C}\) concentrated in a single cohomological degree \(j\). Defining the weight of such a simple object to be \(j\), the category \(\mathrm{LCP}(\mathcal{C})\) is a mixed category, with Tate twist given by \([-1]\langle 1\rangle\). For \(\mathcal{C}=A\)-\(\mathrm{Mod}_{\mathrm{fg}}\), the linearity condition of Definition 3.17 is equivalent to the condition that each term \(P^{i}=AP^{i}_{-i}\) is generated in degree \(-i\). Consider the full subcategories \(\mathrm{D}^{b}_{\mathrm{fg}}(A)^{\leq 0,g}\) and \(\mathrm{D}^{b}_{\mathrm{fg}}(A)^{\geq 0,g}\) of \(\mathrm{D}(A)\) on objects isomorphic to a complex of graded projective modules \(P^{\bullet}\) such that \(P^{i}\) is generated in degree \(\leq-i\) and \(\geq-i\), respectively. We have: _Theorem 3.18_.: [1, 13] The pair \(\mathrm{D}^{b}_{\mathrm{fg}}(A)^{\leq 0,g},\mathrm{D}^{b}_{\mathrm{fg}}(A)^{\geq 0,g}\) defines a t-structure on \(\mathrm{D}^{b}_{\mathrm{fg}}(A)\), with heart given by \(\mathrm{LCP}(A):=\mathrm{LCP}(A\text{-Mod}_{\mathrm{fg}})\). Moreover, the Koszul duality functor \(\mathrm{Hom}_{A}(S,\cdot):\mathrm{D}^{b}_{\mathrm{fd}}(A)\to\mathrm{D}_{ \mathrm{perf}}(A^{!})\) of Theorem 2.28 restricts to an equivalence of mixed categories \(K:A\)-\(\mathrm{Mod}_{\mathrm{fd}}\to\mathrm{LCP}(A^{!})\). 
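For instance, \(K\) sends the simple module \(S_{i}\in A\text{-}\mathrm{Mod}_{\mathrm{fd}}\) to \[K(S_{i})=\operatorname{Hom}_{A}(S,S_{i})\cong A^{!}_{i}\,\] the corresponding indecomposable projective \(A^{!}\) module concentrated in cohomological degree \(0\), which is a simple object of \(\mathrm{LCP}(A^{!})\), pure of weight \(0\). 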
In particular, note that the grading shift functor \(\langle-1\rangle\) on \(A\) modules corresponds to the simultaneous cohomological and grading shift functor \([-1]\langle 1\rangle\) on linear complexes of projectives. We now proceed to apply these results in our present context, to describe the desired t-structure on \(\mathrm{D}^{b}_{\mathrm{fd}}(\Sigma)\) corresponding to the perverse coherent t-structure on \(\mathrm{D}^{b}\mathrm{Coh}(Y)^{\mathbb{C}^{\times}}\). We define a complex of injective \(\Sigma\) modules to be linear if its corresponding dual complex of projective objects is linear, in the sense of Definition 3.17 above, and let \(\mathrm{LCI}(\Sigma):=\mathrm{LCP}(\Sigma)^{N}\) denote the full subcategory on bounded linear complexes of injective objects. In fact, in the situation at hand of the finite dimensional algebra \(\Sigma\) over the base ring \(S=\oplus_{i\in I}\mathbb{K}\), we have the following equivalent form of these definitions: _Definition 3.19_.: [1] Let \(M\in\Sigma\)-\(\mathrm{mod}_{\mathrm{fd}}\) be a finite dimensional \(\Sigma\) module and \(M=\oplus_{k}M_{k}\) a presentation of \(M\) as a direct sum of indecomposable summands \(M_{k}\). We define the category \(\mathrm{LC}_{\Sigma}(M)\) as the full subcategory of \(\mathrm{D}_{\mathrm{perf}}(\Sigma)\) on complexes \(N=(N^{\bullet},d)\) of \(\Sigma\) modules such that for each \(i\in\mathbb{Z}\), every indecomposable summand of \(N^{i}\) is of the form \(M_{k}\langle i\rangle\). In particular, letting \(P=\Sigma=\oplus_{i\in I}\Sigma_{i}\) and \(I=\Sigma^{\vee}=\oplus_{i\in I}\Sigma_{i}^{\vee}\), we have \[\mathrm{LCP}(\Sigma)=\mathrm{LC}_{\Sigma}(P)\qquad\text{and}\qquad\mathrm{LCI} (\Sigma)=\mathrm{LC}_{\Sigma}(I)\.\] We now state the main result of this subsection: _Theorem 3.20_.: Restriction to the hearts of the triangulated categories in the diagram of Equation 3.8 induces a commutative diagram of mixed categories, with horizontal arrows equivalences of mixed categories and vertical arrows inclusions of full mixed subcategories (3.11). Similarly, restriction to hearts in the diagram of Equation 3.7 induces the analogous diagram (3.12), where \(\operatorname{Filt}(\Sigma)\) is as in Definition 2.47. Proof.: These equivalences intertwine the heart of the perverse coherent t-structure on \(\operatorname{D}^{b}\!\operatorname{Coh}(Y)\) with the usual t-structure on \(\operatorname{D}(\Lambda)\) by the results of [10, 11] recalled in Section 2.3. In the graded case, this is intertwined with (the image under the derived Nakayama functor of) the category of linear complexes of projectives, by the results of [13, 1] recalled in Theorem 3.18 above. We recall some of the details of this equivalence, following _loc. cit._, in Section 3.4 below. We also remind the reader that this latter equivalence requires the additional assumption that \(\Lambda\) is Koszul, in keeping with Warning 3.13. The extension of this theorem to the general case is given in Theorem 3.27 below. The ungraded case follows from the proof of the extension of this result to the general, not necessarily Koszul case (again, we recall this is given in Theorem 3.27 below), together with Corollary 2.48. ### Monad presentations from Koszul duality As explained in Section 2.4, the description of \(\Lambda\) as the path algebra of the quiver \(Q_{Y}\) is equivalent to its description as the Koszul dual of the algebra \(\Sigma\). 
Thus, towards understanding various moduli stacks of coherent sheaves on \(Y\) in terms of stacks of representations of quivers, as outlined in the introductory section 1.1 above, we now describe more explicitly the correspondence between \(\Sigma\) modules, quiver representations, and coherent sheaves. The compositions of the equivalences of Equations 3.11 and 3.12 define functors \[\hat{K}(\cdot):=\operatorname{Hom}_{\Sigma}(S,\cdot)\otimes_{\hat{\Lambda}}\hat{E}: \operatorname{D}_{\operatorname{Fd}}(\Sigma)\to\operatorname{D}^{b}\! \operatorname{Coh}(\hat{Y})\qquad\text{and} \tag{3.13}\] \[K(\cdot):=\operatorname{Hom}_{\Sigma}(S,\cdot)\otimes_{\Lambda}E: \operatorname{D}_{\operatorname{fd}}(\Sigma)\to\operatorname{D}^{b}\! \operatorname{Coh}(Y)^{T} \tag{3.14}\] The complex of coherent sheaves \(K(M)\) corresponding to a \(\Sigma\) module \(M\) under this equivalence can be computed explicitly, as we now explain. _Warning 3.21_.: For concreteness, we give most of the exposition in this section in the graded case, noting the ungraded variant can be recovered by completing and forgetting the grading. Further, for notational simplicity we will restrict the trigrading along a cocharacter \(\mathbb{C}^{\times}\to T\), in keeping with Warning 3.14. To begin, recall that using the Koszul resolution \[\mathcal{K}^{\bullet}:=\otimes_{S}^{\bullet}(\bar{\Sigma}[1])\otimes_{S} \Sigma\xrightarrow{\cong}S\,\] we can compute the underlying bigraded vector space \[\operatorname{Hom}_{\Sigma}(S,M)\cong\operatorname{Hom}_{\Sigma}(\otimes_{S}^{ \bullet}(\bar{\Sigma}[1])\otimes_{S}\Sigma,M)\cong\operatorname{Hom}_{S}( \otimes_{S}^{\bullet}(\bar{\Sigma}[1]),M)\cong M\otimes_{S}\Lambda\,\] and thus the underlying cohomologically graded equivariant coherent sheaf as \[K(M)=\operatorname{Hom}_{\Sigma}(S,M)\otimes_{\Lambda}E\cong M\otimes_{S} \Lambda\otimes_{\Lambda}E\cong M\otimes_{S}E. \tag{3.15}\] Moreover, the differential \(d:M\otimes_{S}E\to M\otimes_{S}E[1]\) is given by \[d(m\otimes e)=d_{M}m\otimes e+\sum_{n}\rho_{n}^{\vee}(m)\cdot e\qquad\text{ where}\qquad\rho_{n}^{\vee}:M\to M\otimes_{S}(\bar{\Sigma}^{\vee}[-1])^{\otimes n}\subset M \otimes_{S}\Lambda \tag{3.16}\] is the dual of the \(A_{\infty}\) module structure map \(\rho_{n}:\Sigma^{\otimes n}\otimes M\to M[1-n]\) and \(\rho_{n}^{\vee}(m)\cdot e\) denotes the action of the second tensor factor \(\Lambda=\operatorname{Hom}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}(E,E)\) on \(E\). For example, if \(\Sigma\) is a strict, Koszul associative algebra and \(\rho=\rho_{1}:\Sigma\otimes M\to M\) is the structure map for a strict \(\Sigma\) module \(M\), the differential is defined by \[d(m\otimes e)=d_{M}m\otimes e+\sum_{\alpha}v_{\alpha}m\otimes v_{\alpha}^{ \vee}e\] where \(\{v_{\alpha}\}\) are a basis for the degree \(1\) component \(\Sigma^{1}\) of \(\Sigma\) and \(v_{\alpha}^{\vee}\) denote the dual basis for \((\Sigma^{1})^{\vee}=\Lambda^{1}\), which give generators for the quadratic dual algebra \(\Lambda\). 
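As a consistency check in this strict case, note that, ignoring the term involving \(d_{M}\), \[d^{2}(m\otimes e)=\sum_{\alpha,\beta}v_{\beta}v_{\alpha}m\otimes v_{\beta}^{\vee}v_{\alpha}^{\vee}e\,\] which vanishes because the canonical element \(\sum_{\alpha,\beta}v_{\beta}v_{\alpha}\otimes v_{\beta}^{\vee}v_{\alpha}^{\vee}\) of \(\Sigma^{2}\otimes_{S}\Lambda^{2}\) is zero: the quadratic relations of \(\Lambda\) are by definition the annihilator of those of \(\Sigma\), so the identity element of \((\Sigma^{1})^{\otimes 2}\otimes((\Sigma^{1})^{\otimes 2})^{\vee}\) maps to zero in the quotient. This is the classical Koszul complex argument. 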
_Remark 3.22_.: In terms of the cohomological sheared grading of Remark 2.30, the image of the module \(M\in\operatorname{D}(\Sigma^{\operatorname{sh}})\) under the Koszul duality equivalence is given by \[\operatorname{Hom}_{\Sigma}(S,M)\cong M\otimes_{S}\Lambda=\bigoplus_{k,j,l\in \mathbb{Z}}M_{j}^{k}\otimes_{S}\Lambda_{l}[-k-j]\langle-l+j\rangle\] as in the proof of Theorem 2.12.1 in [1] (where this object is denoted \(F(M)\)), so that in particular for a plain graded \(\Sigma\) module \(M\in\Sigma^{\operatorname{sh}}\text{-Mod}_{\operatorname{fg}}\) given by \[M=\bigoplus_{j\in\mathbb{Z}}M_{j}\langle-j\rangle\qquad\text{we obtain}\qquad K(M)=\bigoplus_{j\in\mathbb{Z}}M_{j}\otimes_{S}E[-j]\] as the underlying cohomologically graded coherent sheaf. _Warning 3.23_.: We will use the cohomologically sheared grading on \(\Sigma\) modules implicitly throughout the remainder of this section, omitting the superscript \((\cdot)^{\operatorname{sh}}\) by abuse of notation, in keeping with Warning 2.31. Similarly, the use of indices in subscripts and superscripts describing direct summands of modules will also be as in _loc. cit._, which in particular disagrees with their use in the preceding remark. It is clear that the simple modules \(S_{i}\) over \(\Sigma\) correspond to the projective \(\Lambda\) modules \(P_{i}\) and in turn the summands \(E_{i}\) of the tilting object, as required: \[K(S_{i})=\operatorname{Hom}_{\Sigma}(S,S_{i})\otimes_{\Lambda}E\cong S_{i} \otimes_{S}\Lambda\otimes_{\Lambda}E\cong P_{i}\otimes_{\Lambda}E\cong E_{i}\.\] More generally, any finite dimensional \(\Sigma\) module \(M\) admits a semi-simple composition series, that is, a filtration with semi-simple subquotients: \[0=M_{m+1}\subset M_{m}\subset\cdots\subset M_{1}\subset M_{0}=M\qquad\text{ such that}\qquad M_{k}/M_{k+1}=\bigoplus_{i_{k}\in I_{k}}S_{i_{k}}\langle-k\rangle\] for some finite multisets \(I_{k}\) of elements \(i_{k}\in I\) (allowing repetitions), where we have assumed for notational simplicity \(M\) is non-negatively graded; we summarize this situation by writing \[M=\left[\bigoplus_{i_{m}\in I_{m}}S_{i_{m}}\langle-m\rangle<\cdots<\bigoplus_{i_{ 1}\in I_{1}}S_{i_{1}}\langle-1\rangle<\bigoplus_{i_{0}\in I_{0}}S_{i_{0}} \right]\.\] Following [1], the image of such a module \(M\) under the Koszul duality equivalence is given by the corresponding complex of projective objects \[K(M)\cong\left[\bigoplus_{i_{0}\in I_{0}}E_{i_{0}}\to\bigoplus_{i_{1}\in I_{ 1}}E_{i_{1}}[-1]\to\cdots\to\bigoplus_{i_{m}\in I_{m}}E_{i_{m}}[-m]\right]. \tag{3.17}\] where each summand \(E_{i_{k}}\to E_{i_{k+1}}\) of the differential is determined by the class of the extension defining \(M\) in \(\operatorname{Ext}^{1}_{\Sigma}(S_{i_{k}}\langle-k\rangle,S_{i_{k+1}}\langle- k-1\rangle)\); the identification of \(\Lambda\) with the Koszul dual algebra of \(\Sigma\) gives an isomorphism \[\operatorname{Hom}_{\Sigma}(S,S)\cong\Lambda=\operatorname{Hom}_{\operatorname {D}^{b}\operatorname{Coh}(Y)}(E,E)\] under which the above extension class determines a map \(E_{i_{k}}\to E_{i_{k+1}}\). These maps can be understood concretely as follows: the failure of the extension to split is detected by an element of \(\Sigma\) sending \(S_{i_{k}}\) to \(S_{i_{k+1}}\langle-1\rangle\), or equivalently a non-trivial restriction of the module structure map \(\Sigma\otimes S_{i_{k}}\to S_{i_{k+1}}\langle-1\rangle\). 
This determines the desired differential by inducing the dual map \(S_{i_{k}}\to\Lambda^{1}\otimes S_{i_{k+1}}\) (after inverting the cohomological shearing of Remark 2.30) along the inclusion \(S\to\Lambda\) to give \[P_{i_{k}}\to\Lambda^{1}\otimes P_{i_{k+1}}\xrightarrow{m}P_{i_{k+1}}\qquad \text{or equivalently}\qquad E_{i_{k}}\to E_{i_{k+1}}\] under the equivalence \(\operatorname{D}_{\operatorname{perf}}(\Lambda)\cong\operatorname{D}^{b} \operatorname{Coh}(Y)^{\mathbb{C}^{\times}}\), where \(m:\Lambda^{1}\otimes P_{i_{k+1}}\to P_{i_{k+1}}\) is the \(\Lambda\) module structure map. Similarly, the higher arity structure maps of the \(A_{\infty}\) module determine terms in the differential given by the action of higher degree components \(\Lambda^{n}\) of \(\Lambda\). In particular, the composition series of the injective \(\Sigma\) modules \(I_{i}=\Sigma_{i}^{\vee}\), given by \[I_{i}=[I_{i}^{0}<I_{i}^{-1}\langle 1\rangle<\cdots<I_{i}^{-m_{i}}\langle m_{i}\rangle] \tag{3.18}\] in the cohomologically sheared notation (see Warning 3.23), determine canonical Koszul-type resolutions of the simple objects \(F_{i}\) in terms of the distinguished projective objects \(E_{i_{j}}\): \[K(I_{i})=\left[I_{i}^{-m_{i}}\otimes_{S}E[m_{i}]\to\cdots\to I_{i}^{-1}\otimes _{S}E[1]\to E_{i}\right]\xrightarrow{\cong}F_{i}. \tag{3.19}\] More generally, we obtain a canonical Koszul-type projective resolution of an arbitrary compactly supported, \(\mathbb{C}^{\times}\)-equivariant, perverse coherent sheaf \(H\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)^{\mathbb{C}^{\times}}\) in terms of the distinguished projective objects \(E_{i}\in\operatorname{Coh}(Y)^{\mathbb{C}^{\times}}\). Again, we begin with the simplifying assumption that \(\Lambda\) is Koszul, and correspondingly \(\Sigma\) has no non-trivial higher \(A_{\infty}\) multiplication maps. _Warning 3.24_.: Throughout the remainder of this section we assume that \(\Lambda\) is Koszul, as in Warning 3.13. The main result of this section, Proposition 3.26 below, is proved in the general case in Proposition 3.29. Under this assumption, the desired resolution of \(H\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)^{\mathbb{C}^{\times}}\) is simply defined by the image of the corresponding linear complex of injectives \(L\in\operatorname{LCI}(\Sigma)\) under the explicit model of the Koszul duality equivalence \(K:\operatorname{D}^{b}_{\operatorname{fd}}(\Sigma)\xrightarrow{\cong} \operatorname{D}^{b}\operatorname{Coh}(Y)^{\mathbb{C}^{\times}}\) explained above. Note that each term of the differential in a linear complex of injectives decomposes as a direct sum of maps \(I_{i}\to I_{j}\langle 1\rangle\) between the distinguished injectives, of abstract graded degree \(1\) by the linearity assumption. By Theorem 3.11, we have identifications \[{}_{i}\Sigma_{j}^{k}\cong\operatorname{Hom}_{\Sigma}^{0}(I_{i},I_{j}\langle k \rangle)\cong\operatorname{Hom}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}^{0} (K(I_{i}),K(I_{j})[k]) \tag{3.20}\] between the space \({}_{i}\Sigma_{j}^{k}\) dual to the degree \(1-k\) edges of the DG quiver \(Q_{Y}\) corresponding to \(Y\), the space of graded degree \(k\) maps between the injective \(\Sigma\) modules \(I_{i}\), and the space of cohomological degree \(k\) maps between the canonical Koszul resolutions of the simple coherent sheaves \(F_{i}\) from Equation 3.19. 
Concretely, an element \(\delta_{ij}\in{}_{i}\Sigma_{j}^{k}\) determines by multiplication a map \(d_{\delta_{ij}}:\Sigma_{i}\to\Sigma_{j}\langle k\rangle\) between the distinguished projective \(\Sigma\) modules, and in turn a dual map \[d_{\delta_{ij}}^{N}:I_{i}\to I_{j}\langle k\rangle\qquad\text{or equivalently}\qquad(d_{\delta_{ij}}^{N})^{l}:I_{i}^{l}\to I_{j}^{l+k}\qquad \text{for $l\in\mathbb{Z}$}\] in graded components. Finally, this determines a map \(K(d_{\delta_{ij}}^{N}):K(I_{i})\to K(I_{j})[k]\langle-k\rangle\) of the corresponding complexes of projectives of Equation 3.19, defined explicitly on each term by the induced maps \((d_{\delta_{ij}}^{N})^{l}\) (we omit the corresponding diagram). Thus, we can explicitly describe the canonical Koszul resolution of an arbitrary extension \[0\to F_{j}\to H\to F_{i}\to 0\] of two simple, compactly supported perverse coherent sheaves \(F_{i}\): it is the image of the linear complex of injectives \(I_{i}\xrightarrow{d_{\delta_{ij}}^{N}}I_{j}\langle 1\rangle\) under the explicit description of the Koszul duality functor above, which gives the totalization of the resulting double complex (diagram omitted).

More generally, for an arbitrary iterated extension of \(d\) simple objects \[H=[F_{i_{d}}\langle-d\rangle<\cdots<F_{i_{1}}\langle-1\rangle<F_{i_{0}}]\quad \in\operatorname{PervCoh}_{\operatorname{cs}}(Y)^{\mathbb{C}^{\times}}\,\] the image of the corresponding linear complex of injectives \[L=\big{[}I_{i_{0}}\to I_{i_{1}}[-1]\langle 1\rangle\to\cdots\to I_{i_{d-1}}[-(d-1)]\langle d-1\rangle\to I_{i_{d}}[-d] \langle d\rangle\big{]}\quad\in\operatorname{LCI}(\Sigma)\] under the Koszul duality equivalence is given by the totalization of the corresponding double complex of projectives (diagram omitted). The collection of all such iterated extensions \(H\) of the simple objects \(F_{i}\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)^{\mathbb{C}^{\times}}\) is parameterized by the space of finite dimensional graded \(\Lambda\) modules, or equivalently representations of the graded DG quiver \(Q_{Y}\) with path algebra \(\Lambda=\mathbb{K}Q_{Y}\), as we have seen in Theorem 3.11. The canonical resolutions constructed in the preceding discussion can thus also be understood in terms of quiver representations, which we now explain: The K-theory class of an object \(H\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)^{\mathbb{C}^{\times}}\) is determined by the multiplicities \(d_{i}\in\mathbb{N}\) of the simple factors \(F_{i}\) (and their graded shifts), and we can collect multiplicities in the object \(K(L)\) with respect to the variable \(i\in I\) to express the complex as \[K(L)=\left(\bigoplus_{i\in I}K(I_{i})\otimes V_{i}\,\ K(d_{\delta}^{N})\right) \qquad\text{with}\qquad K(d_{\delta}^{N}):=\sum_{k=1}^{d}K(d_{\delta_{i_{k-1}i_{k}}}^{N})\, \tag{3.21}\] where \(V_{i}\) is a (graded) vector space with \(\dim V_{i}=d_{i}\) for each \(i\in I\) and \(\sum_{i\in I}d_{i}=d+1\).
These multiplicities correspond to those of the simple \(\Lambda\) module factors \(S_{i}\) occurring in the corresponding quiver representation, so that the quiver representation corresponding to \(H\) is determined by a \(\mathbf{d}=(d_{i})_{i\in I}\) dimensional graded \(S\) module \[V=\bigoplus_{i\in V_{Q_{Y}}}V_{i}\qquad\text{together with}\qquad\rho: \mathbb{K}\langle E_{Q_{Y}}\rangle^{0}=(\Sigma^{1})^{\vee}\to\underline{ \operatorname{End}}(V) \tag{3.22}\] a map of graded \(S\) bimodules from the space of cohomological degree \(0\) edges of the quiver to the endomorphism algebra of \(V\), such that the induced map from the cohomological degree zero component of the quasi-free resolution of the path algebra \(\Lambda^{0}=\otimes_{S}^{\bullet}\mathbb{K}\langle E_{Q}\rangle^{0}\) to \(\underline{\operatorname{End}}_{S\text{-BiMod}}(V)\) takes the ideal of relations in the graded quiver \(Q_{Y}\) (or equivalently the image of the differential in the quasi-free resolution) to zero in \(\underline{\operatorname{End}}_{S\text{-BiMod}}(V)\). The map \(\rho\) of Equation 3.22 can equivalently be interpreted as an element \[\delta=\sum_{i,j\in I}b_{ij}\otimes B_{ij}\quad\in\quad\Sigma^{1}\otimes_{S} \underline{\operatorname{End}}_{S\text{-BiMod}}(V)=\bigoplus_{i,j\in I} \operatorname{Ext}^{1}(F_{i},F_{j})\otimes\operatorname{Hom}(V_{i},V_{j}). \tag{3.23}\]

_Warning 3.25_.: We use the abuse of notation \(b_{ij}\otimes B_{ij}\in\operatorname{Ext}^{1}(F_{i},F_{j})\otimes\operatorname{ Hom}(V_{i},V_{j})\) to denote a not necessarily pure tensor \[b_{ij}\otimes B_{ij}:=\sum_{\alpha}b_{ij}^{\alpha}\otimes B_{ij}^{\alpha} \qquad\text{with}\qquad b_{ij}^{\alpha}\in\operatorname{Ext}^{1}(F_{i},F_{j}), \ B_{ij}^{\alpha}\in\operatorname{Hom}(V_{i},V_{j})\] where the sum is over \(\alpha\in\mathcal{B}_{ij}\) the parameterizing set for a basis \(\{b_{ij}^{\alpha}\}\) for \(\operatorname{Ext}^{1}(F_{i},F_{j})\). We will often use the shorthand \(B_{ij}=(B_{ij}^{\alpha})\) and \(b_{ij}=(b_{ij}^{\alpha})\) to denote these collections of data.

Further, the element \(\delta\) determines a cohomological degree \(+1\) map \(d_{B}:\tilde{H}\to\tilde{H}[1]\) of the equivariant perverse coherent complex \[\tilde{H}:=\bigoplus_{i\in I}K(I_{i})\otimes V_{i}\ \in\operatorname{ PervCoh}_{\operatorname{cs}}(Y)^{\mathbb{C}^{\times}}\qquad\text{ or}\qquad\tilde{H}:=\bigoplus_{i\in I}\hat{K}(I_{i})\otimes V_{i}\ \in\operatorname{ PervCoh}_{\operatorname{cs}}(\hat{Y}) \tag{3.24}\] in the ungraded case, where \(K(I_{i})\) is the complex of Equation 3.19, via the isomorphism \[\operatorname{Ext}^{1}(F_{i},F_{j})\cong\operatorname{Hom}(K(I_{i}),K(I_{j})[ 1])\] of Equation 3.20. Moreover, the resulting differential is precisely \(K(d_{\delta}^{N})\) of Equation 3.21 above, so that we have the identification \[K(d_{\delta}^{N})=d_{B}:=\sum_{i,j\in I}K(b_{ij})\otimes B_{ij}\qquad\text{ with}\qquad K(b_{ij})\otimes B_{ij}:K(I_{i})\otimes V_{i}\to K(I_{j})\otimes V_{j}[1]. \tag{3.25}\] In fact, the relations required for the map \(\rho\) of Equation 3.22 to define a quiver representation are equivalent to the condition that \(d_{B}\) defines a differential deforming the complex \(\tilde{H}\) of Equation 3.24 to a projective resolution of the object \(H\):

_Proposition 3.26_.: Let \(\tilde{H}\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)^{T}\) be as in Equation 3.24 and fix \[B:=(B_{ij})_{i,j\in I}\qquad\text{with}\qquad B_{ij}=(B_{ij}^{\alpha}\in \operatorname{Hom}(V_{i},V_{j}))_{\alpha\in\mathcal{B}_{ij}}\.\] The following are equivalent:

* the induced map \(d_{B}:\tilde{H}\to\tilde{H}[1]\) of Equation 3.25 satisfies \(d_{B}^{2}=0\), and
* the induced map \(\rho:(\Sigma^{1})^{\vee}\to\underline{\operatorname{End}}_{S\text{-BiMod}}(V)\) of Equation 3.22 defines a representation of the graded DG quiver \(Q_{Y}\).

Further, when these conditions hold, the resulting complex \((\tilde{H},d_{B})\) is a projective resolution of the object \(H\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)^{T}\) corresponding to the quiver representation \(V\in\Lambda\text{-Mod}_{\operatorname{fd}}\). Similarly, for \(\tilde{H}\in\operatorname{PervCoh}_{\operatorname{cs}}(\hat{Y})\) as in Equation 3.24, the following are equivalent:

* the induced map \(d_{B}:\tilde{H}\to\tilde{H}[1]\) of Equation 3.25 satisfies \(d_{B}^{2}=0\), and
* the induced map \(\rho:(\Sigma^{1})^{\vee}\to\underline{\operatorname{End}}_{S\text{-BiMod}}(V)\) of Equation 3.22 defines a nilpotent representation of the DG quiver \(Q_{Y}\).

Further, when these conditions hold, the resulting complex \((\tilde{H},d_{B})\) is a projective resolution of the object \(H\in\operatorname{PervCoh}_{\operatorname{cs}}(\hat{Y})\) corresponding to the quiver representation \(V\in\Lambda\text{-Mod}_{\operatorname{fd}}\).

Proof.: This result is a special case of the more general result proved in Proposition 3.29 without the assumption that \(\Lambda\) is Koszul, in keeping with Proposition 3.28.

### Generalization to the \(A_{\infty}\) case via twisted objects

In this section, we use the twisted objects construction recalled in Section 2.6 to generalize the results of Theorem 3.20, which describes the induced equivalences between the hearts of the triangulated categories of Theorem 3.11, to the case where \(\Lambda\) is not Koszul and in turn \(\Sigma\) is a general \(A_{\infty}\) algebra with non-trivial higher multiplication maps. Similarly, we generalize the monad presentation given in Proposition 3.26 to the case where \(\Lambda\) is not Koszul.
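Before turning to the general \(A_{\infty}\) case, the following short computation (our own unpacking of the definitions, with Koszul signs suppressed) records why the two conditions in Proposition 3.26 match in the Koszul case:

```latex
% A sketch (not from the source; Koszul signs suppressed) expanding
% d_B^2 = 0 using d_B = \sum_{i,j} K(b_{ij}) \otimes B_{ij} of Equation 3.25:
\[
  d_B^2
  \;=\; \sum_{i,k\in I}\Big(\sum_{j\in I} K(b_{jk})\,K(b_{ij})\otimes B_{jk}B_{ij}\Big)
  \;\in\; \bigoplus_{i,k\in I}\operatorname{Hom}\big(K(I_i),K(I_k)[2]\big)
          \otimes\operatorname{Hom}(V_i,V_k)\ .
\]
% Since Hom(K(I_i),K(I_k)[2]) is identified with Ext^2(F_i,F_k) = {}_i\Sigma^2_k,
% which in the Koszul case is dual to the space of quadratic relations of
% \Lambda = KQ_Y, the vanishing of each component is exactly the statement that
% the linear maps B_{ij} satisfy those relations, i.e. that \rho defines a
% representation of the quiver Q_Y.
```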
Let \(\mathcal{D}=\mathrm{D}^{b}\mathrm{Coh}(Y)^{\mathbb{C}^{\times}}\), \(F=\oplus_{i\in I}F_{i}\) and consider the graded variant of the \(A_{\infty}\) category \(\mathcal{A}_{F}\) defined in Example 2.37, with objects \(i\in I\) and Hom spaces given by \[\mathcal{A}_{F}(i,j)=\mathrm{Ext}^{\bullet}(F_{i},F_{j})=\ _{i}\Sigma_{j}\.\] In this setting, an object of the category \(\mathrm{tw}\mathcal{A}_{F}\) is given by a finite collection of objects \[(i_{1},n_{1},p_{1}),...,(i_{d},n_{d},p_{d})\in\mathbb{Z}\mathcal{A}_{F}\,\] together with a degree zero element \[\delta=(\ \delta_{kl}\in\ _{i_{k}}\Sigma_{i_{l}}[n_{l}-n_{k}+1]\langle p_{l}-p_ {k}\rangle\ )_{k,l=1}^{d}\ \in\ \mathfrak{gl}_{d}(\mathbb{Z}\Sigma)[1]\] such that \(\delta_{kl}=0\) for \(k\leq l\) and moreover \(\delta\) satisfies the Maurer-Cartan equation \[\sum_{t\in\mathbb{N}}m_{t}^{\mathfrak{gl}_{d}(\mathbb{Z}\Sigma)}(\delta^{ \otimes t})=0\.\] By Corollary 2.54, we obtain that the functor \(Y_{2}:\mathrm{tw}\mathcal{A}_{F}\to\mathrm{C}_{\infty}(\Sigma)\) induces an equivalence of triangulated categories \[H^{0}(Y_{2}):H^{0}(\mathrm{tw}\mathcal{A}_{F})\to\mathrm{triang}(\Sigma_{i})\, \tag{3.26}\] where we recall that \(\mathrm{triang}(\Sigma_{i})\) denotes the minimal triangulated subcategory of \(\mathrm{D}_{\mathbb{Z}}(\Sigma)\) containing the objects \(\Sigma_{i}\langle k\rangle\) for \(i\in I\) and \(k\in\mathbb{Z}\). Similarly, we recall that \(\mathrm{tw}^{0}\mathcal{A}_{F}\) is the full subcategory on objects \((i_{1},...,i_{d},\delta)\) such that each of the associated integers \(n_{k}=0\) for \(k=1,...,d\), so that their image under the functor \(Y_{2}\) has underlying graded vector space \[\Sigma_{i_{1},...,i_{d}}=\bigoplus_{k=1,...,d}\Sigma_{i_{k}}\langle p_{k} \rangle\,\qquad\text{or equivalently}\qquad\Sigma_{i_{1},...,i_{d}}^{\mathrm{sh}}= \bigoplus_{k=1,...,d}\Sigma_{i_{k}}^{\mathrm{sh}}[p_{k}]\langle-p_{k}\rangle,\] in the cohomologically sheared conventions. We can also define the corresponding ungraded \(A_{\infty}\) category \(\mathcal{A}_{F}\), as before, and have the corresponding \(A_{\infty}\) category \(\mathrm{Tw}\mathcal{A}_{F}\) with objects given by a finite collection of objects \[(i_{1},n_{1}),...,(i_{d},n_{d})\in\mathbb{Z}\mathcal{A}_{F}\,\] together with a degree zero element \[\delta=(\ \delta_{kl}\in\ _{i_{k}}\Sigma_{i_{l}}[n_{l}-n_{k}+1]\ )_{k,l=1}^{d}\ \in\ \mathfrak{gl}_{d}(\mathbb{Z}\Sigma)[1]\] satisfying the analogous conditions. Similarly, we have the full subcategory \(\mathrm{Tw}^{0}\mathcal{A}_{F}\) on objects \((i_{1},...,i_{d},\delta)\), such that each of the associated integers \(n_{k}=0\) for \(k=1,...,d\), so that their image under the functor \(Y_{2}\) has underlying vector space \[\Sigma_{i_{1},...,i_{d}}=\bigoplus_{k=1,...,d}\Sigma_{i_{k}}\.\] We now establish the generalization of Theorem 3.20 to the case where \(\Lambda\) is not necessarily Koszul:

_Theorem 3.27_.: Restriction to the hearts of the triangulated categories in the diagram of Equation 3.8 induces mutually inverse equivalences of mixed categories relating \(\operatorname{PervCoh}_{\operatorname{cs}}(Y)^{\mathbb{C}^{\times}}\), \(\Lambda\text{-Mod}_{\operatorname{fd}}\), and \(H^{0}(\mathrm{tw}^{0}\mathcal{A}_{F})\) (we omit the corresponding diagram). Similarly, restriction to hearts in the diagram of Equation 3.7 induces the analogous mutually inverse equivalences in the ungraded case, with \(\mathrm{Tw}^{0}\mathcal{A}_{F}\) in place of \(\mathrm{tw}^{0}\mathcal{A}_{F}\) (diagram again omitted).

Proof.: The mutually inverse equivalences on the left of each diagram have already been established in Theorem 3.20, following Bridgeland [10] and Van den Bergh [11].
The equivalences on the right follow from the identifications \[\Lambda\text{-Mod}_{\text{fd}}\cong\text{filt}(S_{i})\xrightarrow{\cong}\text{ filt}(\Sigma_{i})\cong H^{0}(\text{tw}^{0}\mathcal{A}_{F})\,\] and similarly in the ungraded case \[\Lambda\text{-Mod}_{\text{fd}}\cong\text{Filt}(S_{i})\xrightarrow{\cong}\text {Filt}(\Sigma_{i})\cong H^{0}(\text{Tw}^{0}\mathcal{A}_{F})\,\] where the first isomorphism in each line follows from the existence of Jordan–Hölder filtrations, as in the proof of Proposition 3.6 for example, and the final isomorphism in each line follows from Corollaries 2.57 and 2.48, respectively. Finally, the middle equivalence in each line follows from the fact that triangulated functors preserve categories of iterated extensions, and we have seen that the given functor maps \(S_{i}\) to \(\Sigma_{i}\) (and, in the graded case, preserves grading shifts in the unsheared conventions).

We note that this equivalence is indeed a generalization of Theorem 3.20, in the sense that we have the following proposition:

_Proposition 3.28_.: Suppose \(\Lambda\) is a Koszul algebra. Then the cohomological grading shear equivalence of Remark 2.30 restricts to an equivalence of mixed categories \[H^{0}(\text{tw}^{0}\mathcal{A}_{F})\xrightarrow{\cong}\text{LCP}(\Sigma)\.\]

Proof.: For \(\Lambda\) a Koszul algebra, the corresponding Koszul dual algebra \(\Sigma\) is a plain, Koszul associative algebra, and thus the \(A_{\infty}\) bimodule structure maps \[\rho_{t,k}^{\Sigma_{i_{1},\ldots,i_{d}}}:\Sigma^{\otimes t}\otimes\Sigma_{i_{ 1},\ldots,i_{d}}\otimes\mathcal{E}_{\Sigma}^{\otimes k}\to\Sigma_{i_{1},\ldots,i_{d}}[1-t-k]\] of Equation 2.14 necessarily vanish for \(t\geq 2\) or \(k\geq 2\), as well as for \(t=k=1\) and \(t=k=0\). Thus, the only non-trivial structure maps for objects in \(H^{0}(\text{tw}^{0}\mathcal{A}_{F})\) are \(\rho_{1,0}\) and \(\rho_{0,1}\). The former is simply the usual \(\Sigma\) module structure map on the direct sum of projective modules \(\Sigma_{i_{1},\ldots,i_{d}}\), and the latter gives precisely the usual allowed linear differentials defining a linear complex of projectives.

Next, we describe the analogous monad formalism arising from Theorem 3.27, generalizing Proposition 3.26 to the \(A_{\infty}\) case. Given an object \((i_{1},...,i_{d},\delta)\in\text{tw}^{0}\mathcal{A}_{F}\), the corresponding object of \(\text{D}_{\text{fd}}(\Sigma)\) has underlying bigraded vector space given by \[\Sigma_{i_{1},\ldots,i_{d}}=\bigoplus_{k=1,\ldots,d}\Sigma_{i_{k}}\langle p_{ k}\rangle\cong\bigoplus_{i\in I}\Sigma_{i}\otimes V_{i}\] where each \(V_{i}\) is a graded vector space \[V_{i}=\bigoplus_{p\in\mathbb{Z}}V_{i,-p}\langle p\rangle\qquad\text{with} \qquad\dim V_{i,p}=\#\{k\in\{1,...,d\}\ |\ i_{k}=i\text{ and }p_{k}=p\}\.\] Note in particular that \(\dim\oplus_{i\in I}V_{i}=d\). Now, the degree zero element \(\delta\in\mathfrak{gl}_{d}(\mathbb{Z}\mathcal{A}_{F})[1]\) is in fact an element of the subspace \[\delta\in\operatorname{Hom}_{\operatorname{D}(\Sigma)}(\bigoplus_{i\in I} \Sigma_{i}\otimes V_{i},\bigoplus_{j\in I}\Sigma_{j}\otimes V_{j}[1])\subset \mathfrak{gl}_{d}(\Sigma)[1]\cong\mathfrak{gl}_{d}(\mathcal{A}_{F})[1]\] and thus admits a decomposition \[\delta=\sum_{i,j\in I}b_{ij}\otimes B_{ij}\quad\in\quad\bigoplus_{i,j\in I} {}_{i}\Sigma_{j}\otimes\operatorname{Hom}(V_{i},V_{j})[1]\, \tag{3.27}\] as in Equation 3.23, where we again use the abuse of notation \(b_{ij}\otimes B_{ij}\) to denote the not necessarily pure tensor, as in Warning 3.25.
The \(A_{\infty}\) module structure maps for \(\Sigma^{\delta}_{i_{1},...,i_{d}}\) are corrected from those of \(\Sigma_{i_{1},...,i_{d}}\), the underlying direct sum of rank \(1\) free modules, by the higher \((\mathcal{A}_{F},\mathcal{E}_{\mathcal{A}_{F}})\) bimodule structure maps evaluated on the tensor powers of \(\delta\), according to Equation 2.17. In particular, the underlying cochain complex differential is given by \[d_{\delta}=\sum_{k\in\mathbb{N}}\rho_{k}^{\Sigma}(\cdot,\delta^{\otimes k-1}) \quad\in\quad\operatorname{Hom}_{\operatorname{D}(\Sigma)}(\Sigma_{i_{1},...,i_{d}},\Sigma_{i_{1},...,i_{d}}[1])\] where \(\rho_{k}^{\Sigma}:\Sigma_{i_{1},...,i_{d}}\otimes\Sigma^{\otimes k-1}\to \Sigma_{i_{1},...,i_{d}}[1-k]\) denote the \(A_{\infty}\) module structure maps for \(\Sigma_{i_{1},...,i_{d}}\) over \(\mathcal{E}_{\mathcal{A}_{F}}\). Decomposing according to Equation 3.27, we have the analogous decomposition \[d_{\delta}=\sum_{k\in\mathbb{N},\ i,i_{2},...,i_{k-1},j\in I}m_{k}^{\Sigma}( \cdot,b_{i,i_{2}},...,b_{i_{k-1},j})\otimes(B_{i,i_{2}}\cdots B_{i_{k-1},j})\quad\in \quad\bigoplus_{i,j\in I}\operatorname{Hom}(\Sigma_{i},\Sigma_{j}[1])\otimes \operatorname{Hom}(V_{i},V_{j}) \tag{3.28}\] of the differential on \(\Sigma^{\delta}_{i_{1},...,i_{d}}\). Thus, the image of the Nakayama dual of \(\Sigma^{\delta}_{i_{1},...,i_{d}}\in\operatorname{D}_{\operatorname{perf}}(\Sigma)\) under the Koszul duality equivalence of Equation 3.14 is given by \[K(\Sigma^{\delta,N}_{i_{1},...,i_{d}})=\big{(}K(\Sigma^{N}_{i_{1},...,i_{d}}), K(d^{N}_{\delta})\big{)}\qquad\text{where}\qquad K(\Sigma^{N}_{i_{1},...,i_{d}})= \bigoplus_{i\in I}K(I_{i})\otimes V_{i}\, \tag{3.29}\] and where the differential \(K(d^{N}_{\delta}):K(\Sigma^{N}_{i_{1},...,i_{d}})\to K(\Sigma^{N}_{i_{1},...,i_ {d}})[1]\) is given by \[K(d^{N}_{\delta}):=\sum K(m_{k}^{\Sigma}(\cdot,b_{i,i_{2}},...,b_{i_{k-1},j})^ {N})\otimes(B_{i,i_{2}}\cdots B_{i_{k-1},j})\quad\in\quad\bigoplus_{i,j\in I} \operatorname{Hom}(K(I_{i}),K(I_{j})[1])\otimes\operatorname{Hom}(V_{i},V_{j} )\, \tag{3.30}\] where the sum is over the same index set as in Equation 3.28 above. We also introduce the notation \(d_{B}:\tilde{H}\to\tilde{H}[1]\), where \(\tilde{H}\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)^{\mathbb{C}^{ \times}}\) is as in Equation 3.24, for the map \[d_{B}=\sum_{k\in\mathbb{N},\ i,i_{2},...,i_{k-1},j\in I}K(m_{k}^{\Sigma}(\cdot, b_{i,i_{2}},...,b_{i_{k-1},j})^{N})\otimes(B_{i,i_{2}}\cdots B_{i_{k-1},j}). \tag{3.31}\] This is essentially the same as the map in Equation 3.30 above, but emphasizes that a differential of this functional form is determined uniquely by the choice of linear maps \(B=(B^{\alpha}_{i,j})\), generalizing the definition of \(d_{B}\) in Equation 3.25. We can now state the desired generalization of Proposition 3.26:

_Proposition 3.29_.: Let \(\tilde{H}\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)^{T}\) be as in Equation 3.24 and fix \[B:=(B_{ij})_{i,j\in I}\qquad\text{with}\qquad B_{ij}=(B^{\alpha}_{ij}\in \operatorname{Hom}(V_{i},V_{j}))_{\alpha\in\mathcal{B}_{ij}}\.\] The following are equivalent:

* the induced map \(d_{B}:\tilde{H}\to\tilde{H}[1]\) of Equation 3.31 satisfies \(d_{B}^{2}=0\), and
* the induced map \(\rho:(\Sigma^{1})^{\vee}\to\underline{\operatorname{End}}_{S\text{-BiMod}}(V)\) of Equation 3.22 defines a representation of the graded DG quiver \(Q_{Y}\).
Further, when these conditions hold, the resulting complex \((\tilde{H},d_{B})\) is a projective resolution of the object \(H\in\operatorname{PervCoh_{cs}}(Y)^{T}\) corresponding to the quiver representation \(V\in\Lambda\text{-Mod}_{\operatorname{fd}}\). Similarly, for \(\tilde{H}\in\operatorname{PervCoh_{cs}}(\hat{Y})\) as in Equation 3.24, the following are equivalent: * the induced map \(d_{B}:\tilde{H}\to\tilde{H}[1]\) of Equation 3.31 satisfies \(d_{B}^{2}=0\), and * the induced map \(\rho:(\Sigma^{1})^{\vee}\to\underline{\operatorname{End}}_{S\text{-BiMod}}(V)\) of Equation 3.22 defines a nilpotent representation of the DG quiver \(Q_{Y}\). Further, when these conditions hold, the resulting complex \((\tilde{H},d_{B})\) is a projective resolution of the object \(H\in\operatorname{PervCoh_{cs}}(\hat{Y})\) corresponding to the quiver representation \(V\in\Lambda\text{-Mod}_{\operatorname{fd}}\). Proof.: By definition, the putative differential \(d_{B}\) is uniquely determined by an element \(\delta\) as in Equation 3.27 by the formula for \(d_{\delta}\) given in Equation 3.28, and by the standard argument for \(A_{\infty}\) algebras the condition \(d_{\delta}^{2}=0\) is equivalent to the Maurer-Cartan equation for \(\delta\) of Equation 2.9. The Maurer-Cartan equation can be restated as a system of equations on the linear algebraic data \(B_{ij}\in\operatorname{Hom}(V_{i},V_{j})\) with coefficients determined by the higher products of the corresponding elements \(b_{ij}\in\Sigma[1]\). Indeed, for \(i,j\in I\), the corresponding component of the Maurer-Cartan equation on \(\delta\in\mathfrak{gl}_{d}(\mathcal{A}_{F})\) is given by \[0=\left(\sum_{t\in\mathbb{N}}m_{t}(\delta^{\otimes t})\right)_{ij}=\sum_{t\in \mathbb{N}}\sum_{i_{2},...,i_{t-1}\in I}m_{t}(b_{ii_{2}},...,b_{i_{t-1}j}) \otimes(B_{ii_{2}}\cdots B_{i_{t-1}j})\, \tag{3.32}\] understood as a polynomial equation on the linear maps \(B_{ij}\). Thus, we need to show that these equations are equivalent to the requirement that the collection of linear maps \(B_{ij}\) define a representation of the quiver \(Q_{Y}\), or equivalently a module over the path algebra \(\Lambda=\Sigma^{!}\). The element \(\delta\) is equivalent to the map of \(S\) bimodules \[\rho:(\Sigma^{1})^{\vee}\to\underline{\operatorname{End}}(V)\] as in Equation 3.22, and extends by the free-forgetful adjunction to define a map of algebras internal to \(S\)-BiMod \[r_{\rho}:\Lambda^{0}=\otimes_{S}^{\bullet}(\Sigma^{1})^{\vee}\to\underline{ \operatorname{End}}(V)\.\] The map \(r_{\rho}\) extends to define a module structure on \(V\) over the path algebra \(\Lambda\) of the DG quiver \(Q_{Y}\) if and only if the composition \[\Lambda^{-1}\overset{d_{\Lambda}}{\longrightarrow}\Lambda^{0}\overset{r_{ \rho}}{\longrightarrow}\underline{\operatorname{End}}(V)\] vanishes. 
The differential \(d_{\Lambda}\) is defined on the generators \[(\Sigma^{2})^{\vee}\subset\Lambda^{-1}=(\Sigma^{2})^{\vee}\otimes_{S}\otimes _{S}^{\bullet}(\Sigma^{1})^{\vee}\] of \(\Lambda^{-1}\) over \(\Lambda^{0}\), by the formula \[d_{\Lambda}=\sum_{t\in\mathbb{N}}m_{t}^{\vee}:(\Sigma^{2})^{\vee}\to\otimes_ {S}^{\bullet}(\Sigma^{1})^{\vee}\,\] that is, as the sum of the duals of the \(A_{\infty}\) multiplication maps \[m_{t}:\otimes_{S}^{t}\Sigma\to\Sigma[2-t]\.\] Thus, vanishing of the composition \[r_{\rho}\circ d_{\Lambda}=r_{\rho}\circ\sum_{t\in\mathbb{N}}m_{t}^{\vee}=\sum_{t \in\mathbb{N}}\rho^{\otimes t}\circ m_{t}^{\vee}\ :(\Sigma^{2})^{\vee}\to\underline{\mathrm{End}}(V)\] is equivalent under \(\mathrm{Hom}\)-tensor adjunction to the vanishing of the element \[\sum_{t\in\mathbb{N}}m_{t}(\delta^{\otimes t})=\sum_{t\in\mathbb{N}}\sum_{i_{1 },...,i_{t}\in I}m_{t}(b_{i_{1}i_{2}},...,b_{i_{t-1}i_{t}})\otimes(B_{i_{1}i_{2} }\cdots B_{i_{t-1}i_{t}})\ \in\Sigma^{2}\otimes_{S}\underline{\mathrm{End}}(V)\,\] which is evidently equivalent to the Maurer-Cartan equation given in Equation 3.32, as desired.

### Moduli spaces of perverse coherent sheaves and quivers with potential

In this section, we formulate and prove the first variant of Theorem A from the introduction. The categorical results discussed above determine a description of the moduli stacks of objects in the categories we have introduced above in terms of linear algebraic data. We begin by recalling the geometric description of the moduli stack of quiver representations and then state the main result establishing its equivalence with the moduli stack of perverse coherent sheaves. Let \(Q\) be a graded quiver with edge set \(E_{Q}\), vertex set \(V_{Q}\), and path algebra \[\mathbb{K}Q=\otimes_{S}^{\bullet}\mathbb{K}\langle E_{Q}\rangle\qquad\text{ where}\qquad S=\oplus_{i\in V_{Q}}S_{i}=\oplus_{i\in V_{Q}}\mathbb{K}\] as in Section 2.4.
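As a toy illustration of the path algebra as the tensor algebra of the edge bimodule over \(S\) (our own sketch; the quiver below is an arbitrary example, not one arising from a threefold), one can enumerate the basis of paths by composability:

```python
from itertools import product

# A toy illustration (ours): the free path algebra KQ has a basis of paths,
# i.e. composable words in the edges, with the vertices acting as idempotents.
edges = {"a": (0, 1), "b": (1, 0), "c": (1, 0)}  # hypothetical quiver: name -> (source, target)

def paths(length):
    out = []
    for word in product(edges, repeat=length):
        # consecutive edges must compose: target of one is source of the next
        if all(edges[word[i]][1] == edges[word[i + 1]][0] for i in range(length - 1)):
            out.append(''.join(word))
    return out

for n in range(1, 4):
    print(n, paths(n))  # e.g. length 2: ['ab', 'ac', 'ba', 'ca']
```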
For each dimension vector \(\mathbf{d}=(d_{i})\in\mathbb{N}^{V_{Q}}\), define \[X_{\mathbf{d}}(Q)=\bigoplus_{e\in E_{Q}}\mathrm{Hom}(\mathbb{K}^{d_{s(e)}}, \mathbb{K}^{d_{t(e)}})\qquad\text{and}\qquad G_{\mathbf{d}}(Q)=\prod_{i\in V_{ Q}}\mathrm{Gl}_{d_{i}}(\mathbb{K})\.\] The stack of representations \(\mathfrak{M}(Q)\) of \(Q\) is defined as the disjoint union of the quotient stacks \[\mathfrak{M}(Q)=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\mathfrak{M}_{ \mathbf{d}}(Q)\qquad\text{where}\qquad\mathfrak{M}_{\mathbf{d}}(Q)=\left[X_{ \mathbf{d}}(Q)/G_{\mathbf{d}}(Q)\right]\.\] Let \(\underline{\mathrm{End}}(\mathbb{K}^{\mathbf{d}}):=\mathrm{Hom}(\mathbb{K}^{ \mathbf{d}},\mathbb{K}^{\mathbf{d}})\in S\)-BiMod denote the matrix algebra on \(\mathbb{K}^{\mathbf{d}}=\oplus_{i\in V_{Q}}\mathbb{K}^{d_{i}}\) with its natural \(S\) bimodule structure, and note that there are canonical identifications \[X_{\mathbf{d}}(Q)\cong\mathrm{Hom}_{S\text{-BiMod}}(\mathbb{K}\langle E_{Q} \rangle,\underline{\mathrm{End}}(\mathbb{K}^{\mathbf{d}}))\cong\mathrm{Hom}_{ \mathrm{Alg}_{\mathrm{Ass}}(S\text{-BiMod})}(\mathbb{K}Q,\underline{\mathrm{ End}}(\mathbb{K}^{\mathbf{d}}))\,\] so that we have \[\mathfrak{M}_{\mathbf{d}}(Q)(\mathbb{K})\cong\{V\in S\text{-Mod}_{\mathrm{fd}},\ \varphi\in\mathrm{Hom}_{\mathrm{Alg}_{\mathrm{Ass}}(S\text{-BiMod})}(\mathbb{K}Q, \underline{\mathrm{End}}(V))\ |\ \dim V=\mathbf{d}\ \}\,\] that is, the groupoid of geometric points of \(\mathfrak{M}_{\mathbf{d}}\) is the maximal subgroupoid of modules over the free path algebra \(\mathbb{K}Q\) of dimension \(\mathbf{d}\) over \(S\).

Now, suppose \((Q,R)\) is a quiver with relations \(R\subset\mathbb{K}Q\), as in Definition 2.22, and define the closed subvariety \(Z_{\mathbf{d}}(Q,R)\subset X_{\mathbf{d}}(Q)\) by \[Z_{\mathbf{d}}(Q,R)=\{\varphi\in\mathrm{Hom}_{\mathrm{Alg}_{\mathrm{Ass}}(S \text{-BiMod})}(\mathbb{K}Q,\underline{\mathrm{End}}(\mathbb{K}^{\mathbf{d}})) \ |\ \varphi(R)=\{0\}\ \subset\underline{\mathrm{End}}(\mathbb{K}^{\mathbf{d}})\}\.\] Note that \(Z_{\mathbf{d}}(Q,R)\) is \(G_{\mathbf{d}}(Q)\)-invariant, as the condition that the corresponding map to \(\underline{\mathrm{End}}_{S\text{-BiMod}}(V)\) satisfies \(\varphi(R)=0\) is well-defined, independent of the choice of isomorphism \(V\cong\mathbb{K}^{\mathbf{d}}\). The stack of representations \(\mathfrak{M}(Q,R)\) of \((Q,R)\) is defined analogously as the disjoint union of the quotient stacks \[\mathfrak{M}(Q,R)=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\mathfrak{M}_{ \mathbf{d}}(Q,R)\qquad\text{where}\qquad\mathfrak{M}_{\mathbf{d}}(Q,R)=\left[Z _{\mathbf{d}}(Q,R)/G_{\mathbf{d}}(Q)\right]\] and we have the analogous description of the geometric points \[\mathfrak{M}_{\mathbf{d}}(Q,R)(\mathbb{K})=\{V\in S\text{-Mod}_{\mathrm{fd}},\ \varphi\in \operatorname{Hom}_{\operatorname{Alg}_{\operatorname{Ass}}(S\text{-BiMod})}( \mathbb{K}Q/R,\underline{\operatorname{End}}(V))\ |\ \dim V=\mathbf{d}\ \}\] as parameterizing the modules over the path algebra \(\mathbb{K}Q_{R}=\mathbb{K}Q/R\) of dimension \(\mathbf{d}\) over \(S\).

Finally, suppose \((Q,W)\) is a quiver with potential \(W\in\mathbb{K}Q_{\operatorname{cyc}}=\mathbb{K}Q/[\mathbb{K}Q,\mathbb{K}Q]\), as in Definition 2.34.
Then we obtain a canonical function \[\operatorname{Tr}(W)_{\mathbf{d}}:X_{\mathbf{d}}(Q)\to\mathbb{K}\qquad\text{ defined by}\qquad\varphi\mapsto\operatorname{Tr}_{\mathbb{K}^{\mathbf{d}}}(\varphi(W))\] where we identify \(\varphi\in X_{\mathbf{d}}(Q)\cong\operatorname{Hom}_{\operatorname{Alg}_{ \operatorname{Ass}}(S\text{-BiMod})}(\mathbb{K}Q,\underline{\operatorname{ End}}(\mathbb{K}^{\mathbf{d}}))\), and we define the closed subvariety \(Z_{\mathbf{d}}(Q,W)\subset X_{\mathbf{d}}(Q)\) by \[Z_{\mathbf{d}}(Q,W):=\operatorname{Crit}(\operatorname{Tr}(W)_{\mathbf{d}})= \Gamma_{d\operatorname{Tr}(W)_{\mathbf{d}}}\times_{T^{\vee}X_{\mathbf{d}}}X_{ \mathbf{d}}\.\] Again, \(Z_{\mathbf{d}}(Q,W)\) is \(G_{\mathbf{d}}\) invariant, since the function \(\operatorname{Tr}(W)_{\mathbf{d}}\) is itself \(G_{\mathbf{d}}\)-invariant, and the stack of representations of \((Q,W)\) is defined as the disjoint union of the quotient stacks \[\mathfrak{M}(Q,W)=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}} \mathfrak{M}_{\mathbf{d}}(Q,W)\qquad\text{where}\qquad\mathfrak{M}_{\mathbf{ d}}(Q,W)=\left[Z_{\mathbf{d}}(Q,W)/G_{\mathbf{d}}(Q)\right]\.\] There are also canonical derived enhancements of these stacks: letting \[Z_{\mathbf{d}}^{h}(Q,W):=\operatorname{dCrit}(\operatorname{Tr}(W)_{\mathbf{d} })=\Gamma_{d\operatorname{Tr}(W)_{\mathbf{d}}}\times_{T^{\vee}X_{\mathbf{d}}} ^{h}X_{\mathbf{d}}\,\] the derived stack of representations of the quiver with potential \((Q,W)\) is defined by \[\mathfrak{M}^{h}(Q,W)=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}} \mathfrak{M}_{\mathbf{d}}^{h}(Q,W)\qquad\text{where}\qquad\mathfrak{M}_{ \mathbf{d}}^{h}(Q,W)=\left[Z_{\mathbf{d}}^{h}(Q,W)/G_{\mathbf{d}}(Q)\right]\.\] There are subspaces \(X_{\mathbf{d}}^{\operatorname{nil}}(Q)\subset X_{\mathbf{d}}(Q)\) defined by \[X_{\mathbf{d}}^{\operatorname{nil}}(Q)=\{\varphi\in\operatorname{Hom}_{ \operatorname{Alg}_{\operatorname{Ass}}(S\text{-BiMod})}(\mathbb{K}Q, \underline{\operatorname{End}}(\mathbb{K}^{\mathbf{d}}))\ |\ \varphi(\mathbb{K}Q_{(n)})=\{0\}\ \subset \underline{\operatorname{End}}(\mathbb{K}^{\mathbf{d}})\text{ for }n\gg 0\}\,\] and in turn a substack \(\mathfrak{M}^{\operatorname{nil}}(Q)\subset\mathfrak{M}(Q)\) defined by \[\mathfrak{M}^{\operatorname{nil}}(Q)=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q }}}\mathfrak{M}_{\mathbf{d}}^{\operatorname{nil}}(Q)\qquad\text{where}\qquad \mathfrak{M}_{\mathbf{d}}^{\operatorname{nil}}(Q)=\left[X_{\mathbf{d}}^{ \operatorname{nil}}(Q)/G_{\mathbf{d}}(Q)\right]\.\] Similarly, there are also subvarieties \(Z_{\mathbf{d}}^{\operatorname{nil}}(Q,R)\subset Z_{\mathbf{d}}(Q,R)\) and \(Z_{\mathbf{d}}^{\operatorname{nil}}(Q,W)\subset Z_{\mathbf{d}}(Q,W)\) defined by \[Z_{\mathbf{d}}^{\operatorname{nil}}(Q,R)=Z_{\mathbf{d}}(Q,R)\times_{X_{ \mathbf{d}}(Q)}X_{\mathbf{d}}^{\operatorname{nil}}(Q)\qquad\text{and}\qquad Z _{\mathbf{d}}^{\operatorname{nil}}(Q,W)=Z_{\mathbf{d}}(Q,W)\times_{X_{\mathbf{ d}}(Q)}X_{\mathbf{d}}^{\operatorname{nil}}(Q)\,\] and in turn closed substacks \(\mathfrak{M}^{\operatorname{nil}}(Q,R)\) and \(\mathfrak{M}^{\operatorname{nil}}(Q,W)\subset\mathfrak{M}^{\operatorname{nil}} (Q)\) defined by \[\mathfrak{M}^{\operatorname{nil}}(Q,R)=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q }}}\mathfrak{M}_{\mathbf{d}}^{\operatorname{nil}}(Q,R)\qquad\text{where}\qquad \mathfrak{M}_{\mathbf{d}}^{\operatorname{nil}}(Q,R)=\left[Z_{\mathbf{d}}^{ \operatorname{nil}}(Q,R)/G_{\mathbf{d}}(Q)\right]\,\] and similarly \[\mathfrak{M}^{\operatorname{nil}}(Q,W)=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\mathfrak{M}_{\mathbf{d}}^{\operatorname{nil}}(Q,W)\qquad\text{where} \qquad\mathfrak{M}_{\mathbf{d}}^{\operatorname{nil}}(Q,W)=\left[Z_{\mathbf{d}}^{ \operatorname{nil}}(Q,W)/G_{\mathbf{d}}(Q)\right]\.\]
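To make \(\operatorname{Tr}(W)_{\mathbf{d}}\) and its critical locus concrete, here is a minimal numerical sketch (our own illustration, not from the source) for the quiver with one node and three loops \(x,y,z\) with potential \(W=x[y,z]\), relevant to \(Y=\mathbb{C}^{3}\) in Example 3.34 below; the critical points of \(\operatorname{Tr}(W)_{\mathbf{d}}\) are exactly the commuting triples of matrices:

```python
import numpy as np

# A minimal sketch (ours): for one vertex and three loops x, y, z with
# potential W = x[y, z] = xyz - xzy, Tr(W)_d on X_d(Q) = End(K^d)^3 is
# (x, y, z) |-> Tr(xyz - xzy); by cyclic invariance of the trace, its
# gradient is built from commutators, so Crit(Tr(W)_d) = commuting triples.

def trace_W(x, y, z):
    return np.trace(x @ y @ z - x @ z @ y)

def grad_trace_W(x, y, z):
    # d/dx Tr(xyz - xzy) = (yz - zy)^T, and cyclically for y and z.
    return (y @ z - z @ y).T, (z @ x - x @ z).T, (x @ y - y @ x).T

rng = np.random.default_rng(0)
d = 3

x, y, z = (rng.standard_normal((d, d)) for _ in range(3))
print(max(np.abs(g).max() for g in grad_trace_W(x, y, z)))  # nonzero: generic triple

x, y, z = (np.diag(rng.standard_normal(d)) for _ in range(3))
print(max(np.abs(g).max() for g in grad_trace_W(x, y, z)))  # 0.0: commuting triple
```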
We now recall a general construction of the moduli of objects in an abelian category, following for example Definition 7.8 of [1]. Throughout, we let \(\mathcal{C}\) be a locally Noetherian, cocomplete, \(\mathbb{K}\)-linear abelian category.

_Definition 3.30_.: Let \(R\) be a commutative \(\mathbb{K}\) algebra. The base change category \(\mathcal{C}_{R}\) of \(\mathcal{C}\) is defined to be the category of \(R\)-module objects internal to \(\mathcal{C}\), that is, pairs \((C,\xi_{C})\) of an object \(C\in\mathcal{C}\) together with a map of \(\mathbb{K}\) algebras \(\xi_{C}:R\to\operatorname{End}_{\mathcal{C}}(C)\).

_Definition 3.31_.: The moduli stack \(\mathfrak{M}_{\mathcal{C}}\) of objects of \(\mathcal{C}\) is defined by \[\mathfrak{M}_{\mathcal{C}}(R)=\{E\in\mathcal{C}_{R}\ \big{|}\ E\text{ is flat over $R$ and compact }\}\,\] for each commutative \(\mathbb{K}\) algebra \(R\).

It is shown in _loc. cit._ that \(\mathfrak{M}_{\mathcal{C}}\) indeed defines a stack in the big fppf topology on \(\mathbb{K}\) schemes. More generally, if \(\mathcal{C}\) is a non-cocomplete, locally Noetherian, \(\mathbb{K}\)-linear abelian category, we write \(\mathfrak{M}_{\mathcal{C}}:=\mathfrak{M}_{\operatorname{Ind}(\mathcal{C})}\), and we recall that the finitely presented (or equivalently, compact) objects of \(\operatorname{Ind}(\mathcal{C})\) are given by the objects of \(\mathcal{C}\), noting that \(\mathcal{C}\) is abelian and thus idempotent-complete. We can now state the main result of this subsection. For simplicity, we introduce the notation \[\mathfrak{M}(Y):=\mathfrak{M}_{\operatorname{PervCoh}_{\operatorname{cs}}(\hat{Y})}\.\] Then we have:

_Theorem 3.32_.: Let \(Y\xrightarrow{\pi}X\) be a toric resolution of singularities of an affine, toric, Calabi-Yau threefold, such that the fibres of \(\pi\) are of dimension \(\leq 1\) and \(\pi_{*}\mathcal{O}_{Y}\cong\mathcal{O}_{X}\), and let \((Q_{Y},W_{Y})\) be the associated quiver with potential. There is an equivalence of algebraic stacks \[\mathfrak{M}^{\operatorname{nil}}(Q_{Y},W_{Y})\xrightarrow{\cong}\mathfrak{M }(Y) \tag{3.33}\] where the induced equivalence of groupoids of \(\mathbb{K}\) points is defined on objects by \[(V_{i},B_{ij})\mapsto\left(\tilde{H}:=\bigoplus_{i\in I}K(I_{i})\otimes V_{i} \,\ d_{B}:=\sum_{k\in\mathbb{N},\ i,i_{2},...,i_{k-1},j\in I}K(m_{k}^{\Sigma}( \cdot,b_{i,i_{2}},...,b_{i_{k-1},j})^{N})\otimes(B_{i,i_{2}}\cdots B_{i_{k-1},j}) \right)\,\] in the notation of Section 3.4. Similarly, this induces an equivalence of the homotopy \(T\) fixed points \[\mathfrak{M}(Q_{Y},W_{Y})^{T}\xrightarrow{\cong}\mathfrak{M}_{\operatorname {PervCoh}_{\operatorname{cs}}(Y)^{T}}\.\] The main remaining ingredient of the proof is the following standard fact:

_Lemma 3.33_.: Let \(\Lambda=\mathbb{K}\langle Q\rangle_{R}\) be the path algebra of a quiver with relations \((Q,R)\) and \(\check{\Lambda}\) its completion with respect to the filtration by path length.
There are equivalences of algebraic stacks \[\mathfrak{M}(Q,R)\xrightarrow{\cong}\mathfrak{M}_{\Lambda\text{-Mod}_{ \operatorname{fd}}}\qquad\text{and}\qquad\mathfrak{M}^{\operatorname{nil}}(Q, R)\xrightarrow{\cong}\mathfrak{M}_{\check{\Lambda}\text{-Mod}_{\operatorname{fd}}}\.\]

Proof.: This is a standard result, but we give a proof of a generalization of it in Section 4; taking \(\mathbf{d}=(\mathbf{d}_{0},0)\in\mathbb{N}^{V_{Q}}\times\{0\}\subset\mathbb{N} ^{V_{Q_{M}}}\) in _loc. cit._ gives precisely this lemma, by commutativity of the diagram in Equation 4.13.

Proof.: (of Theorem 3.32) The definition of the moduli of objects functor is evidently natural with respect to equivalences of categories, so that we obtain a canonical equivalence of algebraic stacks \[\mathfrak{M}_{\check{\Lambda}\text{-Mod}_{\operatorname{fd}}}\xrightarrow{ \cong}\mathfrak{M}_{\operatorname{PervCoh}_{\operatorname{cs}}(\hat{Y})},\] by the equivalence of Theorem 3.20. Composing with the equivalence of algebraic stacks of Lemma 3.33 yields the desired equivalence of Equation 3.33, and the induced equivalence on groupoids of \(\mathbb{K}\) points is defined on objects by the claimed formula, by Proposition 3.29. Finally, we note that the quiver with relations determined by \(\Sigma\) is in fact a quiver with potential, by Example 2.35 above.

### Examples

In this section, we recall computations of the quivers with potential corresponding to Calabi-Yau threefolds in several concrete examples.

_Example 3.34_.: Let \(Y=X=\mathbb{C}^{3}\), so that we have a single projective object \(E=\mathcal{O}_{\mathbb{C}^{3}}\) and corresponding simple object \(F_{0}=\iota_{*}\mathcal{O}_{\mathrm{pt}}\). The algebra \(\Sigma\) is given by \[\Sigma\cong\mathrm{Sym}^{\bullet}(\mathbb{K}^{3}[-1])=\mathbb{K}\oplus\mathbb{ K}^{3}[-1]\oplus\mathbb{K}^{3}[-2]\oplus\mathbb{K}[-3]\] the symmetric algebra on \(3\) generators of degree \(1\), and the corresponding quiver with potential \((Q_{Y},W_{Y})\) is given by the quiver \(Q_{Y}\) with a single node and three loops \(x,y,z\), with potential \(W_{Y}=x[y,z]=xyz-xzy\) (we omit the picture of the quiver). Thus the path algebra \(\Lambda\) is given by the commutative algebra \(\mathbb{K}[x,y,z]=\mathcal{O}(\mathbb{C}^{3})\) of global functions on \(\mathbb{C}^{3}\). The single compactly supported simple object \(F=\iota_{*}\mathcal{O}_{\mathrm{pt}}\) corresponding to the one node of the quiver has projective resolution determined by the injective \(\Sigma\) module \(I=\Sigma^{\vee}\) with composition series \[I=\left[\mathbb{K}<\mathbb{K}^{3}[1]<\mathbb{K}^{3}[2]<\mathbb{K}[3]\right]\.\] There are three independent extension classes of the simple \(\Sigma\) module \(S=\mathbb{K}\) by itself, which correspond to multiplication by the coordinate functions \(x,y,z\), under the identification \[\mathrm{Ext}^{1}_{\Sigma}(S,S)\cong\Lambda^{1}=\mathbb{C}^{3}_{x,y,z}\subset \Lambda=\mathrm{Hom}(\mathcal{O},\mathcal{O})=\mathcal{O}(\mathbb{A}^{3})= \mathbb{K}[x,y,z]\,\] so that the resolution determined by the composition series of \(I\) as in Equation 3.19 is given in this example by \[K(I)=[\mathcal{O}\xrightarrow{\begin{pmatrix}-x\\ y\\ -z\end{pmatrix}}\mathcal{O}^{3}\xrightarrow{\begin{pmatrix}0&-z&-y\\ -z&0&x\\ y&x&0\end{pmatrix}}\mathcal{O}^{3}\xrightarrow{\begin{pmatrix}x&y&z \end{pmatrix}}\mathcal{O}]\xrightarrow{\cong}\iota_{*}\mathcal{O}_{\mathrm{pt} }=F\, \tag{3.34}\] which is simply the usual Koszul resolution of \(\iota_{*}\mathcal{O}_{\mathrm{pt}}\).
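As a quick consistency check on Equation 3.34 (our own verification, not part of the source), one can confirm symbolically that consecutive differentials compose to zero:

```python
import sympy as sp

# A minimal check (ours) that the differentials of the Koszul resolution
# in Equation 3.34 compose to zero.
x, y, z = sp.symbols('x y z')

d1 = sp.Matrix([[-x], [y], [-z]])                      # O -> O^3
d2 = sp.Matrix([[0, -z, -y], [-z, 0, x], [y, x, 0]])   # O^3 -> O^3
d3 = sp.Matrix([[x, y, z]])                            # O^3 -> O

assert d2 * d1 == sp.zeros(3, 1)
assert d3 * d2 == sp.zeros(1, 3)
print("d^2 = 0 for the Koszul resolution of O_pt on C^3")
```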
The generators \(b_{i}\in\mathrm{Ext}^{1}(F,F)\) for \(i=1,2,3\) correspond to the maps of complexes lifting multiplication by the coordinate functions \(x,y,z\) to the resolution \(K(I)\) (we omit the explicit diagrams), so that the monad formalism of Proposition 3.26, which determines the resolution \((\tilde{H},d)\) of an object \(H\in\operatorname{Coh}_{\operatorname{cs}}(\mathbb{C}^{3})\) in terms of the corresponding quiver representation \(V\), is given by the complex \[\tilde{H}=\big{[}\mathcal{O}\otimes V\to\mathcal{O}^{3}\otimes V\to\mathcal{O}^{3}\otimes V\to\mathcal{O}\otimes V\big{]} \tag{3.35}\] with differentials obtained from those of Equation 3.34 by deforming multiplication by the coordinate functions by the action of the loops, \(x\mapsto x-B_{1}\), \(y\mapsto y-B_{2}\), \(z\mapsto z-B_{3}\) (up to sign conventions), and with \(\dim V=n\) given by the coefficient of the \(K\) theory class of \(H\) in terms of the simple object \([H]=n[F]\), or equivalently the length of the corresponding pure sheaf of dimension \(0\).
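The following numerical sketch (ours, assuming the deformed-Koszul form of Equation 3.35 described above with the convention \(x\mapsto x-B_{1}\), etc.) illustrates Proposition 3.26 in this example: the deformed differential squares to zero precisely when the \(B_{i}\) commute, that is, when \((V,B_{i})\) is a \(\mathbb{K}[x,y,z]\) module.

```python
import numpy as np

# A numerical sketch (ours; assumes the deformed-Koszul form of (3.35) with
# x |-> x - B1, y |-> y - B2, z |-> z - B3): d^2 = 0 for the deformed complex
# exactly when B1, B2, B3 pairwise commute, matching Proposition 3.26 for C^3.

rng = np.random.default_rng(1)
n = 2
x0, y0, z0 = rng.standard_normal(3)  # a point of C^3; scalars are central

def d_squared_norm(B1, B2, B3):
    I = np.eye(n)
    X, Y, Z = x0 * I - B1, y0 * I - B2, z0 * I - B3
    O = np.zeros((n, n))
    d1 = np.block([[-X], [Y], [-Z]])
    d2 = np.block([[O, -Z, -Y], [-Z, O, X], [Y, X, O]])
    d3 = np.block([[X, Y, Z]])
    return max(np.abs(d2 @ d1).max(), np.abs(d3 @ d2).max())

B = [rng.standard_normal((n, n)) for _ in range(3)]
print(d_squared_norm(*B))                    # nonzero: generic B's do not commute

B = [np.diag(rng.standard_normal(n)) for _ in range(3)]
print(d_squared_norm(*B))                    # 0.0: commuting triple, so d^2 = 0
```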
_Example 3.35_.: Let \(Y=|\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}(-2)|\to X=Y^{ \operatorname{aff}}\cong\operatorname{Spec}\,\mathbb{C}[x_{1},x_{2},x_{3},x_{4} ]/(x_{1}x_{2}-x_{3}^{2})\) so that the simple objects in \(\operatorname{PervCoh}_{\operatorname{cs}}(Y)\) are given by \(F_{0}=\iota_{*}\mathcal{O}_{\mathbb{P}^{1}}\) and \(F_{1}=\iota_{*}\mathcal{O}_{\mathbb{P}^{1}}(-1)[1]\). The algebra \(\Sigma\) can be computed explicitly as follows: let \(\tilde{F}_{0}=\mathcal{O}_{\mathbb{P}^{1}}\) and \(\tilde{F}_{1}=\mathcal{O}_{\mathbb{P}^{1}}(-1)[1]\), and for simplicity we drop the dependence on \(\mathbb{P}^{1}\) from the notation. Then we have \[\Sigma=\bigoplus_{i,j=0,1}\ {}_{j}\Sigma_{i}\cong\bigoplus_{i,j=0,1}\operatorname{ Ext}^{\bullet}(F_{i},F_{j}\otimes\operatorname{Sym}^{\bullet}((\mathcal{O} \oplus\mathcal{O}(-2))[-1]))\,\] so that we have \[{}_{0}\Sigma_{0}:=\operatorname{Ext}^{\bullet}(\mathcal{O}(-1)[1],\mathcal{O}(-1 )[1]\otimes\operatorname{Sym}^{\bullet}((\mathcal{O}\oplus\mathcal{O}(-2))[- 1]))\] \[=\operatorname{Ext}^{0}(\mathcal{O},\mathcal{O})\oplus \operatorname{Ext}^{0}(\mathcal{O},\mathcal{O})[-1]\oplus\operatorname{Ext}^{ 1}(\mathcal{O},\mathcal{O}(-2))[-2]\oplus\operatorname{Ext}^{1}(\mathcal{O}, \mathcal{O}(-2))[-3]\] \[\cong\mathbb{K}\oplus\mathbb{K}[-1]\oplus\mathbb{K}[-2]\oplus \mathbb{K}[-3]\] \[{}_{1}\Sigma_{1}:=\operatorname{Ext}^{\bullet}(\mathcal{O},\operatorname{Sym}^{ \bullet}((\mathcal{O}\oplus\mathcal{O}(-2))[-1]))\] \[=\operatorname{Ext}^{0}(\mathcal{O},\mathcal{O})\oplus \operatorname{Ext}^{0}(\mathcal{O},\mathcal{O})[-1]\oplus\operatorname{Ext}^{ 1}(\mathcal{O},\mathcal{O}(-2))[-2]\oplus\operatorname{Ext}^{1}(\mathcal{O}, \mathcal{O}(-2))[-3]\] \[\cong\mathbb{K}\oplus\mathbb{K}[-1]\oplus\mathbb{K}[-2]\oplus \mathbb{K}[-3]\] \[{}_{1}\Sigma_{0}:=\operatorname{Ext}^{\bullet}(\mathcal{O}(-1)[1],\operatorname{ Sym}^{\bullet}((\mathcal{O}\oplus\mathcal{O}(-2))[-1]))\] \[\cong\operatorname{Ext}^{0}(\mathcal{O},\mathcal{O}(1))[-1] \oplus\operatorname{Ext}^{0}(\mathcal{O},\mathcal{O}(1))[-2]\] \[\cong\mathbb{K}^{2}[-1]\oplus\mathbb{K}^{2}[-2]\] \[{}_{0}\Sigma_{1}:=\operatorname{Ext}^{\bullet}(\mathcal{O},\mathcal{O}(-1)[1] \otimes\operatorname{Sym}^{\bullet}((\mathcal{O}\oplus\mathcal{O}(-2))[-1]))\] \[\cong\operatorname{Ext}^{1}(\mathcal{O},\mathcal{O}(-3))[-1] \oplus\operatorname{Ext}^{1}(\mathcal{O},\mathcal{O}(-3))[-2]\] \[\cong\mathbb{K}^{2}[-1]\oplus\mathbb{K}^{2}[-2]\] and the corresponding quiver with potential \((Q_{Y},W_{Y})\) is given by the quiver with two nodes, a single loop at each node, and two arrows in each direction between the nodes (we omit the picture).

The compactly supported objects \(F_{0}=\iota_{*}\mathcal{O}_{\mathbb{P}^{1}}\) and \(F_{1}=\iota_{*}\mathcal{O}_{\mathbb{P}^{1}}(-1)[1]\) corresponding to the two nodes of the quiver have projective resolutions determined by the injective \(\Sigma\) modules \[I_{0}=[S_{0}<S_{0}\oplus S_{1}^{\oplus 2}[1]<S_{0}\oplus S_{1}^{\oplus 2 }[2]<S_{0}[3]]\qquad I_{1}=[S_{1}<S_{1}\oplus S_{0}^{\oplus 2}[1]<S_{1}\oplus S_{0}^{\oplus 2 }[2]<S_{1}[3]]\.\] There is a single independent self extension class for each of the simple objects, both corresponding to the map of coherent sheaves \(y\), as well as two independent extension classes in each of the groups \(\operatorname{Ext}^{1}_{\Sigma}(S_{0},S_{1})\) and \(\operatorname{Ext}^{1}_{\Sigma}(S_{1},S_{0})\), which correspond to the maps of coherent sheaves \(1,z\) and \(x,xz\), respectively, under the identifications \[\operatorname{Ext}^{1}_{\Sigma}(S_{0},S_{0})\cong({}_{0}\Lambda_{0})^{1}\cong \operatorname{Hom}_{\mathbb{P}^{1}}(\mathcal{O}_{\mathbb{P}^{1}},\mathcal{O}_{ \mathbb{P}^{1}})=\mathbb{C}_{y}\subset\ {}_{0}\Lambda_{0}=\operatorname{Hom}_{Y}(\mathcal{O},\mathcal{O})\] \[\operatorname{Ext}^{1}_{\Sigma}(S_{1},S_{1})\cong({}_{1}\Lambda_{1})^{1}\cong \operatorname{Hom}_{\mathbb{P}^{1}}(\mathcal{O}_{\mathbb{P}^{1}},\mathcal{O}_{ \mathbb{P}^{1}})=\mathbb{C}_{y}\subset\ {}_{1}\Lambda_{1}=\operatorname{Hom}_{Y}(\mathcal{O},\mathcal{O})\] \[\operatorname{Ext}^{1}_{\Sigma}(S_{0},S_{1})\cong({}_{0}\Lambda_{1})^{1}\cong \operatorname{Hom}_{\mathbb{P}^{1}}(\mathcal{O}_{\mathbb{P}^{1}},\mathcal{O}_{ \mathbb{P}^{1}}(1))=\mathbb{C}^{2}_{1,z}\subset\ {}_{0}\Lambda_{1}=\operatorname{Hom}_{Y}(\mathcal{O},\mathcal{O}(1))\] \[\operatorname{Ext}^{1}_{\Sigma}(S_{1},S_{0})\cong({}_{1}\Lambda_{0})^{1}\cong \operatorname{Hom}_{\mathbb{P}^{1}}(\mathcal{O}_{\mathbb{P}^{1}},\mathcal{O}_{ \mathbb{P}^{1}}(1))=\mathbb{C}^{2}_{x,xz}\subset\ {}_{1}\Lambda_{0}=\operatorname{Hom}_{Y}(\mathcal{O}(1),\mathcal{O})\] where \(z\) denotes some choice of local coordinate on the base \(\mathbb{P}^{1}\) and \(x,y\) denote the linear coordinates on the fibres of \(\mathcal{O}(-2)\) and \(\mathcal{O}\), respectively. Thus, the resolutions of the simple objects \(F_{i}\) are given by \[K(I_{0})=[\mathcal{O}\xrightarrow{\begin{pmatrix}y\\ 1\\ z\end{pmatrix}}\mathcal{O}\oplus\mathcal{O}(1)^{2}\xrightarrow{\begin{pmatrix}0 &xz&-x\\ -z&0&y\\ 1&-y&0\end{pmatrix}}\mathcal{O}\oplus\mathcal{O}(1)^{2}\xrightarrow{\begin{pmatrix} y&x&xz\end{pmatrix}}\mathcal{O}]\xrightarrow{\cong}s_{*}\mathcal{O}_{\mathbb{P}^{1}}=F_{0}\] and \[K(I_{1})=[\mathcal{O}(1)\xrightarrow{\begin{pmatrix}y\\ x\\ xz\end{pmatrix}}\mathcal{O}(1)\oplus\mathcal{O}^{2}\xrightarrow{\begin{pmatrix}0 &z&-1\\ -xz&0&y\\ x&-y&0\end{pmatrix}}\mathcal{O}(1)\oplus\mathcal{O}^{2}\xrightarrow{\begin{pmatrix} y&1&z\end{pmatrix}}\mathcal{O}(1)]\xrightarrow{\cong}s_{*}\mathcal{O}_{\mathbb{P}^{1}}(-1)[1]=F_{1}\] The maps between these resolutions corresponding to the generators of \(\Sigma^{1}\) are given by explicit chain maps analogous to those of Example 3.34, which we omit here.
_Example 3.36_.: Let \(Y=|\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)|\to X=Y^{\operatorname{aff}}\cong\operatorname{Spec}\,\mathbb{C}[x_{1},x_{2},x_{3},x_{4}]/(x_{1}x_{4}-x_{2}x_{3})\) the resolved conifold, so that the simple objects in \(\operatorname{PervCoh}_{\operatorname{cs}}(Y)\) are given by \(F_{0}=s_{*}\mathcal{O}_{\mathbb{P}^{1}}\) and \(F_{1}=s_{*}\mathcal{O}_{\mathbb{P}^{1}}(-1)[1]\). Computing as in the preceding example, we have \[{}_{0}\Sigma_{0}:=\operatorname{Ext}^{\bullet}(\mathcal{O}(-1)[1],\mathcal{O}(-1)[1]\otimes\operatorname{Sym}^{\bullet}((\mathcal{O}(-1)\oplus\mathcal{O}(-1))[-1]))\] \[\cong\operatorname{Ext}^{0}(\mathcal{O},\mathcal{O})\oplus \operatorname{Ext}^{1}(\mathcal{O},\mathcal{O}(-2))[-3]\] \[\cong\mathbb{K}\oplus\mathbb{K}[-3]\] \[{}_{1}\Sigma_{1}:=\operatorname{Ext}^{\bullet}(\mathcal{O},\operatorname{Sym}^{ \bullet}((\mathcal{O}(-1)\oplus\mathcal{O}(-1))[-1]))\] \[\cong\operatorname{Ext}^{0}(\mathcal{O},\mathcal{O})\oplus \operatorname{Ext}^{1}(\mathcal{O},\mathcal{O}(-2))[-3]\] \[\cong\mathbb{K}\oplus\mathbb{K}[-3]\] \[{}_{0}\Sigma_{1}:=\operatorname{Ext}^{\bullet}(\mathcal{O},\mathcal{O}(-1)[1] \otimes\operatorname{Sym}^{\bullet}((\mathcal{O}(-1)\oplus\mathcal{O}(-1))[-1 ]))\] \[\cong\operatorname{Ext}^{1}(\mathcal{O},\mathcal{O}(-2))^{\oplus 2 }[-1]\oplus\operatorname{Ext}^{1}(\mathcal{O},\mathcal{O}(-3))[-2]\] \[\cong\mathbb{K}^{2}[-1]\oplus\mathbb{K}^{2}[-2]\]
\[{}_{1}\Sigma_{0}:=\operatorname{Ext}^{\bullet}(\mathcal{O}(-1)[1],\operatorname{ Sym}^{\bullet}((\mathcal{O}(-1)\oplus\mathcal{O}(-1))[-1]))\] \[\cong\operatorname{Ext}^{0}(\mathcal{O},\mathcal{O}(1))[-1]\oplus \operatorname{Ext}^{0}(\mathcal{O},\mathcal{O})^{\oplus 2}[-2]\] \[\cong\mathbb{K}^{2}[-1]\oplus\mathbb{K}^{2}[-2]\] Thus, the corresponding quiver with potential \((Q_{Y},W_{Y})\) is given by the quiver with two nodes, no loops, and two arrows in each direction between the nodes (we omit the picture).

The compactly supported objects \(F_{0}=s_{*}\mathcal{O}_{\mathbb{P}^{1}}\) and \(F_{1}=s_{*}\mathcal{O}_{\mathbb{P}^{1}}(-1)[1]\) corresponding to the two nodes of the quiver have projective resolutions determined by the injective \(\Sigma\) modules \[I_{0}=[S_{0}<S_{1}^{\oplus 2}[1]<S_{1}^{\oplus 2}[2]<S_{0}[3]]\qquad I_{1}=[S_{1}<S_{0}^{\oplus 2}[1]<S_{0}^{\oplus 2}[2]<S_{1}[3]]\.\] There are no non-trivial self extensions of the simple objects, while there are two independent extension classes in each of the groups \(\operatorname{Ext}^{1}_{\Sigma}(S_{0},S_{1})\) and \(\operatorname{Ext}^{1}_{\Sigma}(S_{1},S_{0})\), which correspond to the maps of coherent sheaves \(1,z\) and \(x,y\), respectively, under the identifications \[\operatorname{Ext}^{1}_{\Sigma}(S_{0},S_{1})\cong({}_{0}\Lambda_{1})^{1}\cong \operatorname{Hom}_{\mathbb{P}^{1}}(\mathcal{O}_{\mathbb{P}^{1}},\mathcal{O}_{ \mathbb{P}^{1}}(1))=\mathbb{C}^{2}_{1,z}\subset\ {}_{0}\Lambda_{1}=\operatorname{Hom}_{Y}(\mathcal{O},\mathcal{O}(1))\] \[\operatorname{Ext}^{1}_{\Sigma}(S_{1},S_{0})\cong({}_{1}\Lambda_{0})^{1}\cong \operatorname{Hom}_{\mathbb{P}^{1}}(\mathcal{O}_{\mathbb{P}^{1}},\mathcal{O}_{ \mathbb{P}^{1}}^{\oplus 2})=\mathbb{C}^{2}_{x,y}\subset\ {}_{1}\Lambda_{0}=\operatorname{Hom}_{Y}(\mathcal{O}(1),\mathcal{O})\] where \(z\) denotes some choice of local coordinate on the base \(\mathbb{P}^{1}\) and \(x,y\) denote the linear coordinates on the fibres of the rank two bundle. Thus, the resolutions of the simple objects \(F_{i}\) are given by \[K(I_{0})=[\mathcal{O}\xrightarrow{\begin{pmatrix}1\\ z\end{pmatrix}}\mathcal{O}(1)^{2}\xrightarrow{\begin{pmatrix}zy&-y\\ -zx&x\end{pmatrix}}\mathcal{O}(1)^{2}\xrightarrow{\begin{pmatrix}x&y \end{pmatrix}}\mathcal{O}]\xrightarrow{\cong}s_{*}\mathcal{O}_{\mathbb{P}^{1}}=F_ {0}\] \[K(I_{1})=[\mathcal{O}(1)\xrightarrow{\begin{pmatrix}x\\ y\end{pmatrix}}\mathcal{O}^{2}\xrightarrow{\begin{pmatrix}yz&-xz\\ -y&x\end{pmatrix}}\mathcal{O}^{2}\xrightarrow{\begin{pmatrix}1&z \end{pmatrix}}\mathcal{O}(1)]\xrightarrow{\cong}s_{*}\mathcal{O}_{\mathbb{P}^{1}}( -1)[1]=F_{1}\] The maps between these resolutions corresponding to the generators of \(\Sigma^{1}\) are given by explicit chain maps which we again omit, so that the monad formalism of Proposition 3.29 in this example determines the resolution \((\tilde{H},d)\) of an object \(H\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)\) in terms of the quiver representation \((V_{0},V_{1})\) by the differentials \[\begin{pmatrix}1&-B\\ z&-D\\ -A&x\\ -C&y\end{pmatrix}\quad\begin{pmatrix}zy-DC&CB-y&0&zB-D\\ DA-zx&x-BA&D-zB&0\\ 0&yA-xC&yz-CD&AD-xz\\ xC-yA&0&CB-y&x-AB\end{pmatrix}\quad\begin{pmatrix}x&y&B&D\\ A&C&1&z\end{pmatrix}\,\] where \(A,B,C,D\) denote the linear maps determined by the quiver representation \((V_{0},V_{1})\).
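As with Equation 3.34, one can verify symbolically (our own check, not part of the source) that the differentials of \(K(I_{0})\) and \(K(I_{1})\) above compose to zero:

```python
import sympy as sp

# A minimal check (ours) that consecutive differentials in the resolutions
# K(I_0) and K(I_1) of Example 3.36 compose to zero.
x, y, z = sp.symbols('x y z')

# K(I_0): O -> O(1)^2 -> O(1)^2 -> O
d1 = sp.Matrix([[1], [z]])
d2 = sp.Matrix([[z*y, -y], [-z*x, x]])
d3 = sp.Matrix([[x, y]])
assert d2 * d1 == sp.zeros(2, 1) and d3 * d2 == sp.zeros(1, 2)

# K(I_1): O(1) -> O^2 -> O^2 -> O(1)
e1 = sp.Matrix([[x], [y]])
e2 = sp.Matrix([[y*z, -x*z], [-y, x]])
e3 = sp.Matrix([[1, z]])
assert e2 * e1 == sp.zeros(2, 1) and e3 * e2 == sp.zeros(1, 2)
print("d^2 = 0 for both conifold resolutions")
```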
### Koszul resolutions from Beilinson spectral sequences

In this section, we explain the existence of the canonical Koszul resolutions of perverse coherent sheaves described in Equation 3.17, in terms of the Beilinson spectral sequence induced by a natural resolution of the diagonal in terms of the distinguished projective generators. Similar arguments feature in the proofs of some of the main results of [1], [1], [1], and [1], though in this section we follow most closely the explicit discussion in [12].

_Warning 3.37_.: The results of this section will not be used in the remainder of the text, so the proofs are omitted for brevity. We recommend [12] for statements of the results closest to the presentation here.

Throughout this section, let \(Y\) be a smooth variety, \(E\in\operatorname{D}^{b}\operatorname{Coh}(Y)\) a classical tilting object, and \(\Lambda=\operatorname{Hom}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}(E,E)\) the (classical) associative algebra of endomorphisms of \(E\). Then as above, we obtain inverse equivalences of triangulated categories \[\operatorname{Hom}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}(E,\cdot): \operatorname{D}^{b}\operatorname{Coh}(Y)\xrightarrow{\cong}\operatorname{D}_{ \operatorname{perf}}(\Lambda):(\cdot)\otimes_{\Lambda}E\,\] so that in particular we obtain a natural isomorphism \[\operatorname{Hom}_{\operatorname{D}^{b}\operatorname{Coh}(Y)}(E,H)\otimes_{ \Lambda}E\xrightarrow{\cong}H \tag{3.36}\] for each object \(H\in\operatorname{D}^{b}\operatorname{Coh}(Y)\). The notions of \(\Lambda\) modules in \(\operatorname{D}^{b}\operatorname{Coh}(Y)\) and their tensor products with objects in \(\operatorname{D}(\Lambda)\) occurring in the preceding expressions are defined because \(\operatorname{D}^{b}\operatorname{Coh}(Y)\) is tensored over \(\operatorname{D}(\mathbb{K})\). In general, the tensor product of two objects \(H_{1},H_{2}\in\Lambda\text{-Mod}(\operatorname{D}^{b}\operatorname{Coh}(Y))\) can be defined in terms of the induced algebra \(\Lambda_{\mathcal{O}_{Y}}=\Lambda\otimes_{\mathbb{K}}\mathcal{O}_{Y}\in \operatorname{Alg}_{\operatorname{Ass}}(\operatorname{D}^{b}\operatorname{Coh}(Y))\) by \[H_{1}\otimes_{\Lambda}H_{2}=H_{1}\otimes_{\Lambda_{\mathcal{O}_{Y}}}H_{2}\] where on the right hand side the objects \(H_{i}\) are interpreted in the usual sense of module objects over an algebra object internal to the same category \(\operatorname{D}^{b}\operatorname{Coh}(Y)\). Similarly, one defines \[H_{1}\boxtimes_{\Lambda}H_{2}=\pi_{1}^{*}H_{1}\otimes_{\Lambda_{\mathcal{O}_{Y \times Y}}}\pi_{2}^{*}H_{2}\quad\in\operatorname{D}^{b}\operatorname{Coh}(Y \times Y)\,\] which satisfies the usual identification \[\Delta^{*}(H_{1}\boxtimes_{\Lambda}H_{2})=H_{1}\otimes_{\Lambda}H_{2}\.\] Now, taking \(H=\mathcal{O}_{Y}\), the isomorphism of Equation 3.36 is given by \[E^{\vee}\otimes_{\Lambda}E\xrightarrow{\cong}\mathcal{O}_{Y}\qquad\text{ inducing a map}\qquad E^{\vee}\boxtimes_{\Lambda}E\to\Delta_{*}\mathcal{O}_{Y}\] under the \((\Delta^{*},\Delta_{*})\) adjunction. Moreover, we have:

_Proposition 3.38_.: [11] Let \(E\in\operatorname{Coh}(Y)\) be a classical tilting object which is in addition a vector bundle. Then the natural map \[E^{\vee}\boxtimes_{\Lambda}E\xrightarrow{\cong}\Delta_{*}\mathcal{O}_{Y}\] is an isomorphism.
Concretely, the resulting resolution of the diagonal can be computed explicitly as follows: given a projective resolution of \(\Lambda\) as a \((\Lambda,\Lambda)\) bimodule \[Q^{\bullet}=\left[\cdots\to Q^{i}\to\cdots\to Q^{-1}\to Q^{0}\right] \xrightarrow{\cong}\Lambda\,\] the exterior tensor product is computed by the complex \[E^{\vee}\boxtimes_{\Lambda}^{\bullet}E:=\left[\pi_{1}^{*}E^{\vee}\otimes_{ \Lambda_{\mathcal{O}_{Y\times Y}}}Q^{\bullet}_{\mathcal{O}_{Y\times Y}}\otimes _{\Lambda_{\mathcal{O}_{Y\times Y}}}\pi_{2}^{*}E\right]\xrightarrow{\cong}E^{ \vee}\boxtimes_{\Lambda}E. \tag{3.37}\] Now, we return to the setting of the preceding section, and suppose \(Y\) is a quiver-accessible variety with distinguished projective objects \(E_{i}\) and simple objects \(F_{i}\) in \(\operatorname{D}^{b}\operatorname{Coh}(Y)\) with corresponding \(\Lambda\) modules \(P_{i}\) and \(S_{i}\), so that \(\Lambda\) is equivalent to the path algebra of a DG quiver determined by \(\Sigma=\operatorname{Ext}^{\bullet}_{\operatorname{D}(\Lambda)}(S,S)\). In particular, we also assume that the objects \(E_{i}\) are vector bundles. In this setting, there is a natural projective resolution of \(\Lambda\) as a \((\Lambda,\Lambda)\) bimodule with \[Q^{k}=\bigoplus_{i,j\in V_{Q}}\Lambda S_{i}\otimes_{\mathbb{K}}(S_{i}\Sigma^{ k}S_{j})^{\vee}\otimes_{\mathbb{K}}S_{j}\Lambda\,\] where \(i,j\) runs over \(V_{Q}\) the index set of the distinguished projective objects, which is by definition the vertex set of the corresponding quiver \(Q\); the resolution is determined by the identifications \[\Lambda=\Lambda\otimes_{S}S\otimes_{\Lambda}S\otimes_{S}\Lambda\cong\Lambda \otimes_{S}\left((\otimes_{S}^{\bullet}\bar{\Lambda}[1])\otimes_{S}\Lambda \right)\otimes_{\Lambda}S\otimes_{S}\Lambda\cong\Lambda\otimes_{S}\Sigma^{ \vee}\otimes_{S}\Lambda\] where we have used the Koszul resolution of \(\Lambda\) together with the identification \((\otimes_{S}^{\bullet}\bar{\Lambda}[1])\cong\Sigma^{\vee}\). The resulting resolution of the diagonal in Equation 3.37 above is thus \[E^{\vee}\boxtimes_{\Lambda}^{\bullet}E=\left[\bigoplus_{i,j\in V_{Q}}\pi_{1}^{ \ast}E_{i}^{\vee}\otimes(S_{i}\Sigma^{\bullet}S_{j})^{\vee}_{\mathcal{O}_{Y \times Y}}\otimes\pi_{2}^{\ast}E_{j}\right]\xrightarrow{\cong}\Delta_{\ast} \mathcal{O}_{Y}\.\] Now, given an arbitrary object \(H\in\mathrm{PervCoh}_{\mathrm{cs}}(Y)^{\mathbb{C}^{\times}}\), we obtain a canonical resolution of \(H\) via the Beilinson spectral sequence associated to the above resolution of the diagonal: following the standard argument, we have a canonical isomorphism \[H\cong\pi_{2\ast}\left(\pi_{1}^{\ast}H\otimes\Delta_{\ast}\mathcal{O}_{Y} \right)\,\] and the Grothendieck spectral sequence for the composition of derived functors, computed using the resolution of the diagonal constructed above, yields the canonical resolution \[\left[\bigoplus_{i,j\in V_{Q}}\mathrm{Hom}_{\mathrm{D}^{b}\mathrm{Coh}(Y)}(E_ {i},H)\otimes_{\mathbb{K}}(S_{i}\Sigma^{\bullet}S_{j})^{\vee}\otimes_{\mathbb{ K}}E_{j}\right]\xrightarrow{\cong}H\, \tag{3.38}\] so that by construction, we have:

_Proposition 3.39_.: Under the equivalence \(\Lambda\text{-Mod}\xrightarrow{\cong}\mathrm{PervCoh}(Y)^{\mathbb{C}^{\times}}\), the resolutions of objects in \(\Lambda\text{-Mod}\) constructed in terms of Koszul duality, as in Equation 3.17, are canonically identified with those induced by the Beilinson spectral sequence, as in Equation 3.38.
## 4. Perverse coherent extensions on Calabi-Yau threefolds and extended quivers

### Overview of Section 4

Let \(\pi:Y\to X\) be as in Section 3.2. In this section, we extend the results of Theorems 3.11, 3.27, and 3.32 to describe certain categories generated by iterated extensions of compactly supported perverse coherent sheaves on \(Y\) together with an auxiliary object \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\) which does not necessarily have compact support. These descriptions are given in terms of finite rank representations of an _extended_ quiver, in a sense we will explain below. We recommend the reader consult the description of the motivation for these constructions given in Section 1.1 of the introduction prior to reading this section. Throughout this section, we suppose that \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\) satisfies the following hypotheses: 1. The algebra \(\mathrm{Ext}^{0}(M,M)\) is commutative, 2. \(\mathrm{Ext}^{i}(F,M)=0\) and \(\mathrm{Ext}^{i}(M,F)=0\) for all \(i\leq 0\) and \(F\in\mathrm{PervCoh}_{\mathrm{cs}}(Y)\), and 3. \(M\in\mathrm{PervCoh}^{\mathrm{p}}(Y)\) lies in the heart of an admissible Bridgeland-Deligne perverse coherent t-structure on \(Y\), in the sense of Definitions 4.10 and 4.11, and in particular \(\mathrm{Ext}^{i}(M,M)=0\) for all \(i<0\). In Section 4.2, we introduce the Bridgeland-Deligne perverse coherent t-structures used in the above hypotheses. In Sections 4.3, 4.4, and 4.5 we explain the analogues of the results of Sections 3.3, 3.4 and 2.4 in the setting of perverse coherent extensions. In Section 4.6, we define the notion of framing structures and prove Theorem A from the introduction. In Section 4.7, we explain many examples of these constructions relevant to the applications to representation theory in Section 6.

### Bridgeland-Deligne t-structures

To begin, we recall the classical theory of tilting \(t\)-structures by torsion theories. _Definition 4.1_.: Let \(\mathcal{B}\) be an abelian category. A torsion theory on \(\mathcal{B}\) is a pair \((\mathcal{T},\mathcal{F})\) of full subcategories of \(\mathcal{B}\) such that 1. \(\operatorname{Hom}(T,F)=0\) for each \(T\in\mathcal{T}\) and \(F\in\mathcal{F}\), and 2. for each \(E\in\mathcal{B}\) there is a short exact sequence \[0\to T\to E\to F\to 0\] with \(T\in\mathcal{T}\) and \(F\in\mathcal{F}\). Let \(\mathcal{D}\) be a triangulated category with \(t\)-structure \((\mathcal{D}^{\leq 0},\mathcal{D}^{\geq 0})\) and heart \(\mathcal{B}=\mathcal{D}^{\leq 0}\cap\mathcal{D}^{\geq 0}\). Then, given a torsion theory \((\mathcal{T},\mathcal{F})\) on \(\mathcal{B}\), define \[\mathcal{D}^{\leq 0}_{t}=\{E\in\mathcal{D}^{\leq 0}\ |\ H^{0}E\in\mathcal{T}\}\qquad\text{and}\qquad\mathcal{D}^{\geq 0}_{t}=\{E\in\mathcal{D}^{\geq-1}\ |\ H^{-1}E\in\mathcal{F}\}\.\] _Proposition 4.2_.: The pair \((\mathcal{D}^{\leq 0}_{t},\mathcal{D}^{\geq 0}_{t})\) defines a \(t\)-structure on \(\mathcal{D}\). Proof.: See for example Proposition 2.1 of [10].
The heart of this \(t\)-structure is given by \[\mathcal{B}_{t}:=\{E\in\mathcal{D}^{[-1,0]}\ |\ H^{0}E\in\mathcal{T},\ H^{-1}E\in\mathcal{F}\}\,\] or more concretely, if \(\mathcal{D}\) is the derived category of the abelian category \(\mathcal{B}\) with the standard \(t\)-structure, then \(\mathcal{B}_{t}\) is the full subcategory of two-term complexes of the form \[\mathcal{B}_{t}=\{E=\left[E^{-1}[1]\xrightarrow{\varphi}E^{0}\right]\ \big|\ E^{i}\in\mathcal{B},\ \text{coker}\,\varphi\in\mathcal{T},\ \ker\varphi\in\mathcal{F}\}\,.\] In particular, we have: _Corollary 4.3_.: \(\mathcal{B}_{t}\) is an abelian category. _Example 4.4_.: Let \(\mathcal{B}=\operatorname{Coh}(X)\) be the abelian category of coherent sheaves on \(X\) an irreducible algebraic variety of dimension \(d\), \(\mathcal{T}\) the full subcategory on objects with support of dimension \(\leq d-1\), and \(\mathcal{F}\) the full subcategory on torsion free coherent sheaves on \(X\). There are evidently no nonzero maps from objects of \(\mathcal{T}\) to objects of \(\mathcal{F}\), and for each \(E\in\operatorname{Coh}(X)\), the torsion filtration \[0\to T_{d-1}E\to E\to E/T_{d-1}E\to 0\] gives the required short exact sequence, where \(T_{d-1}E\) denotes the maximal subsheaf of \(E\) with support of dimension \(\leq d-1\). The resulting tilted category \(\mathcal{B}_{t}\) is given by \[\mathcal{B}_{t}=\{E=\left[E^{-1}[1]\xrightarrow{\varphi}E^{0}\right]\ \big|\ E^{i}\in\operatorname{Coh}(X),\ \dim\operatorname{supp}H^{0}E\leq d-1,\ H^{-1}E\text{ is torsion free }\}\.\] More generally, we have: _Example 4.5_.: Let \(\mathcal{B}=\operatorname{Coh}(X)\) as in the previous example, \(\mathcal{T}=\operatorname{Coh}_{\leq k}(X)\) the full subcategory on objects with support of dimension \(\leq k\), and \[\mathcal{F}=\operatorname{Coh}_{\geq k+1}(X):=\mathcal{T}^{\perp}\,\] the right orthogonal to \(\mathcal{T}\). Since \(\mathcal{B}\) is Noetherian and \(\mathcal{T}\) is closed under extensions and quotients, we have that \((\mathcal{T},\mathcal{F})\) is a torsion pair. The resulting tilted category \(\mathcal{B}_{t}\) is given by \[\mathcal{B}_{t}=\{E=\left[E^{-1}[1]\xrightarrow{\varphi}E^{0}\right]\ \big|\ E^{i}\in\mathrm{Coh}(X),\ \dim\mathrm{supp}\ H^{0}E\leq k,\ H^{-1}E\in\mathrm{Coh}_{\geq k+1}(X)\ \}\.\] Another variant of this construction is to use coherent sheaves supported along a particular subvariety: _Example 4.6_.: Let \(\mathcal{B}=\mathrm{Coh}(X)\) as in the previous example, and fix a closed subvariety \(Z\subset X\). Then let \(\mathcal{T}=\mathrm{Coh}_{Z}(X)\) denote the full subcategory on objects supported on \(Z\) and \(\mathcal{F}=\mathcal{T}^{\perp}\). Since \(\mathcal{B}\) is Noetherian and \(\mathcal{T}\) is closed under extensions and quotients, we have that \((\mathcal{T},\mathcal{F})\) is a torsion pair. The resulting tilted category \(\mathcal{B}_{t}\) is given by \[\mathcal{B}_{t}=\{E=\left[E^{-1}[1]\xrightarrow{\varphi}E^{0}\right]\ \big|\ E^{i}\in\mathrm{Coh}(X),\ \mathrm{supp}\ H^{0}E\subset Z,\ H^{-1}E\in\mathrm{Coh}_{Z}(X)^{\perp}\ \}\.\] In fact, the previous construction can be iterated to define a family of t-structures on the derived category of coherent sheaves; this construction is recorded in [1], following unpublished results of Deligne. Similar results appeared in [1] and [11], and we recommend the latter for a more explicit explanation of the geometric interpretation of the perversity function.
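Before recalling the general definition, it may help to spell out the simplest instance of the preceding examples (a standard illustration, not needed in the sequel): for \(X\) a smooth curve we have \(d=1\), and Example 4.4 tilts at the torsion pair (torsion sheaves, torsion-free sheaves). The tilted heart \(\mathcal{B}_{t}\) then contains the skyscraper sheaves \(\mathcal{O}_{p}\) in degree \(0\) and the shifts \(L[1]\) of line bundles \(L\), and every object \(E\in\mathcal{B}_{t}\) sits in a short exact sequence \[0\to H^{-1}(E)[1]\to E\to H^{0}(E)\to 0\] in \(\mathcal{B}_{t}\), with \(H^{0}(E)\) torsion and \(H^{-1}(E)\) torsion-free.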
We now recall the general definition: _Definition 4.7_.: Let \(X\) be a variety and \(X^{\mathrm{top}}\) denote the Zariski topological space underlying \(X\). A (monotone and comonotone) _perversity function_ \(\mathrm{p}:X^{\mathrm{top}}\to\mathbb{Z}\) is a function such that \[\mathrm{p}(y)\geq\mathrm{p}(x)\qquad\text{and}\qquad\mathrm{p}(y)\leq\mathrm{p}(x)+\dim(x)-\dim(y)\qquad\text{for any}\quad y\in\overline{\{x\}}\.\] Given a perversity function \(\mathrm{p}:X^{\mathrm{top}}\to\mathbb{Z}\), the _Deligne perverse coherent \(t\)-structure_ associated to \(\mathrm{p}\) is defined by \[\mathrm{D}^{b}\mathrm{Coh}(X)^{\mathrm{p},\leq 0}:=\{E\in\mathrm{D}^{b}\mathrm{Coh}(X)\ |\ \iota_{x}^{*}E\in\mathrm{D}^{\leq\mathrm{p}(x)}(\mathcal{O}_{x})\text{ for any }x\in X^{\mathrm{top}}\}\,\] \[\mathrm{D}^{b}\mathrm{Coh}(X)^{\mathrm{p},\geq 0}:=\{E\in\mathrm{D}^{b}\mathrm{Coh}(X)\ |\ \iota_{x}^{!}E\in\mathrm{D}^{\geq\mathrm{p}(x)}(\mathcal{O}_{x})\text{ for any }x\in X^{\mathrm{top}}\}\.\] Indeed, we have the following result: _Theorem 4.8_.: [1] _The pair \((\mathrm{D}^{b}\mathrm{Coh}(X)^{\mathrm{p},\leq 0},\mathrm{D}^{b}\mathrm{Coh}(X)^{\mathrm{p},\geq 0})\) defines a t-structure on \(\mathrm{D}^{b}\mathrm{Coh}(X)\)._ Proof.: See the corresponding theorem in [1]. We introduce the notation \(\mathrm{PervCoh}^{\mathrm{p}}(X)\) to denote the category of Deligne perverse coherent sheaves on \(X\) associated to the perversity function \(\mathrm{p}\). These t-structures are often called simply perverse coherent t-structures. We use the prefix Deligne to avoid confusion with the notion of perverse coherent sheaf defined by Bridgeland, recalled in Definition 2.15. In fact, the category of perverse coherent sheaves (in the sense of Bridgeland) can also be constructed by tilting the abelian category of coherent sheaves with respect to a torsion theory: _Example 4.9_.: Let \(f:Y\to X\) be as in Section 2.3 and let \(\mathcal{C}\) denote the full subcategory of \(\mathrm{Coh}(Y)\) on objects \(C\) such that \(f_{\ast}C=0\). Define full subcategories \(\mathcal{T}\) and \(\mathcal{F}\) of \(\mathrm{Coh}(Y)\) by \[\mathcal{T}=\{T\in\mathrm{Coh}(Y)\ |\ \mathbb{R}^{1}f_{\ast}T=0,\ \mathrm{Hom}(T,C)=0\text{ for any }C\in\mathcal{C}\ \}\] \[\mathcal{F}=\{F\in\mathrm{Coh}(Y)\ |\ \mathbb{R}^{0}f_{\ast}F=0\ \}\.\] From the first exact triangle of Equation 2.3, one can check that the pair \((\mathcal{T},\mathcal{F})\) defines a torsion theory on \(\mathrm{Coh}(Y)\), and the resulting tilted category is equivalent to the category of perverse coherent sheaves, \[\mathrm{Coh}(Y)_{t}=\mathrm{PervCoh}(Y)\,\] by Corollary 2.18. We can now introduce the desired \(t\)-structures, which will be used to define the categories of perverse coherent extensions. _Definition 4.10_.: Let \(f:Y\to X\) be as in Section 2.3, and let \(\mathrm{p}:X^{\mathrm{top}}\to\mathbb{Z}\) be a perversity function on \(X\). The _Bridgeland-Deligne perverse coherent t-structure_ associated to \(\mathrm{p}\) is defined by \[\mathrm{D}^{b}\mathrm{Coh}(Y)^{\mathrm{p},\leq 0}:=\{E\in\mathrm{D}^{b}\mathrm{Coh}(Y)\ |\ f_{*}E\in\mathrm{D}^{b}\mathrm{Coh}(X)^{\mathrm{p},\leq 0}\text{ and }\iota^{L}E\in\mathcal{C}^{-1,\leq 0}\}\, \tag{4.1}\] \[\mathrm{D}^{b}\mathrm{Coh}(Y)^{\mathrm{p},\geq 0}:=\{E\in\mathrm{D}^{b}\mathrm{Coh}(Y)\ |\ f_{*}E\in\mathrm{D}^{b}\mathrm{Coh}(X)^{\mathrm{p},\geq 0}\text{ and }\iota^{R}E\in\mathcal{C}^{-1,\geq 0}\}\. \tag{4.2}\]
The category of Bridgeland-Deligne perverse coherent sheaves on \(f:Y\to X\) with respect to the perversity function \(\mathrm{p}\) is the abelian category defined by \[\mathrm{PervCoh}^{\mathrm{p}}(Y/X)=\mathrm{D}^{b}\mathrm{Coh}(Y)^{\mathrm{p},\leq 0}\cap\mathrm{D}^{b}\mathrm{Coh}(Y)^{\mathrm{p},\geq 0}\.\] Note that this indeed defines a t-structure, by Proposition 2.14 and Theorem 2.11, combined with Theorem 4.8 above. As for the usual perverse coherent sheaves in the sense of Bridgeland, recalled in Definition 2.15, we will often drop the dependence on \(X\) from the notation and write simply \[\mathrm{PervCoh}^{\mathrm{p}}(Y):=\mathrm{PervCoh}^{\mathrm{p}}(Y/X)\.\] For our primary application of interest, it will be necessary to require the following additional hypothesis: _Definition 4.11_.: A Bridgeland-Deligne perverse coherent t-structure is called _admissible_ if the corresponding heart satisfies \[\mathrm{PervCoh}_{\mathrm{cs}}(Y)\subset\mathrm{PervCoh}^{\mathrm{p}}(Y)\,\] that is, it contains the category of compactly supported perverse coherent sheaves. _Example 4.12_.: Let \(X\) be an affine variety and define the perversity function \(\mathrm{p}:X^{\mathrm{top}}\to\mathbb{Z}\) by \(\mathrm{p}(x)=0\) for each closed point \(x\in X(\mathbb{K})\) and \(\mathrm{p}(y)=-1\) otherwise. Then \(\mathrm{PervCoh}^{\mathrm{p}}(X)=\mathrm{Coh}(X)_{t}\), the tilt of \(\mathrm{Coh}(X)\) with respect to the torsion theory \((\mathcal{T},\mathcal{F})\) defined by \(\mathcal{T}=\mathrm{Coh}_{\leq 0}(X)\) and \(\mathcal{F}=\mathcal{T}^{\perp}\), as in Example 4.5. Moreover, for \(f:Y\to X\) as in Section 2.3, the Bridgeland-Deligne perverse coherent t-structure determined by \(\mathrm{p}\) is admissible, and for any subvariety \(Z\subset Y\) of dimension \(\geq 1\) the object \(M=\mathcal{O}_{Z}[1]\) lies in \(\mathrm{PervCoh}^{\mathrm{p}}(Y)\). _Example 4.13_.: Let \(X\) be an affine variety and define the perversity function \(\mathrm{p}:X^{\mathrm{top}}\to\mathbb{Z}\) by \(\mathrm{p}(x)=0\) for each closed point \(x\in X(\mathbb{K})\) or one dimensional subvariety, and \(\mathrm{p}(y)=-1\) otherwise. Then \(\mathrm{PervCoh}^{\mathrm{p}}(X)=\mathrm{Coh}(X)_{t}\), the tilt of \(\mathrm{Coh}(X)\) with respect to the torsion theory \((\mathcal{T},\mathcal{F})\) defined by \(\mathcal{T}=\mathrm{Coh}_{\leq 1}(X)\) and \(\mathcal{F}=\mathcal{T}^{\perp}\), as in Example 4.5. We remark that a similar t-structure appeared in the proof of the main theorem of [11]. Moreover, for \(f:Y\to X\) as in Section 2.3, the Bridgeland-Deligne perverse coherent t-structure determined by \(\mathrm{p}\) is evidently admissible, and for any subvariety \(Z\subset Y\) of pure dimension \(\geq 2\) the object \(M=\mathcal{O}_{Z}[1]\) lies in \(\mathrm{PervCoh}^{\mathrm{p}}(Y)\); similarly, for any subvariety \(Z\) of pure dimension \(\leq 1\) the object \(\mathcal{O}_{Z}\) lies in \(\mathrm{PervCoh}^{\mathrm{p}}(Y)\).

### Categories of perverse coherent extensions

Let \(\operatorname{Thick}(F\oplus M)\) denote the thick subcategory of \(\operatorname{D}^{b}\mathrm{Coh}(\hat{Y})\) generated by the compactly supported simple objects \(F_{i}\) together with \(M\), and similarly for the equivariant analogues.
There are equivalent corresponding thick subcategories of \(\operatorname{D}_{\mathrm{perf}}(\hat{\Lambda})\) and \(\operatorname{D}_{\mathrm{Fd}}(\Sigma)\), and their graded analogues, and we will describe them (as well as their hearts with respect to certain t-structures) in terms of explicit extensions of canonical resolutions determined by complexes of \(\Sigma\) modules, generalizing the descriptions given in Sections 3.2, 3.3, and 3.4 of the images of the objects in \(\operatorname{D}^{b}\mathrm{Coh}_{\mathrm{cs}}(Y)\) (and in \(\operatorname{PervCoh}_{\mathrm{cs}}(Y)\)) under these equivalences. In order to analogously parameterize this larger class of extensions, and ultimately give descriptions of the moduli stacks of objects of these categories, it will be necessary to first understand the category in terms of Morita theory for the object \(F\oplus M\), which leads naturally to the extended quiver, as we now explain. To begin, in analogy with Corollary 3.7, letting \[\Sigma_{M}=\operatorname{Hom}(F\oplus M,F\oplus M)\qquad\text{and}\qquad\Sigma_{\infty}=\operatorname{Hom}(M,M)\] denote the (graded) DG associative algebras of endomorphisms of \(F\oplus M\) and \(M\), respectively, we have: _Corollary 4.14_.: There are equivalences of triangulated categories \[\operatorname{Thick}(F\oplus M)\xrightarrow{\cong}\operatorname{D}_{\mathrm{Perf}}(\Sigma_{M})\qquad\text{and}\qquad\operatorname{thick}(F\oplus M)\xrightarrow{\cong}\operatorname{D}_{\mathrm{perf}}(\Sigma_{M})\,\] intertwining the forgetful functor \(\operatorname{D}(\Sigma_{M})\to\operatorname{D}(\mathbb{K})\) with \(\operatorname{Hom}_{\operatorname{D}^{b}\mathrm{Coh}(Y)}(F\oplus M,\cdot)\), and similarly \[\operatorname{Thick}(M)\xrightarrow{\cong}\operatorname{D}_{\mathrm{Perf}}(\Sigma_{\infty})\qquad\text{and}\qquad\operatorname{thick}(M)\xrightarrow{\cong}\operatorname{D}_{\mathrm{perf}}(\Sigma_{\infty})\,\] intertwining the forgetful functor \(\operatorname{D}(\Sigma_{\infty})\to\operatorname{D}(\mathbb{K})\) with \(\operatorname{Hom}_{\operatorname{D}^{b}\mathrm{Coh}(Y)}(M,\cdot)\). Proof.: As in the analogous statement for \(F\), we apply the DG Morita theory of Keller [10] recalled in Theorem 2.6 to the thick subcategories generated by the objects \(F\oplus M\) and \(M\), respectively. Further, we let \(I_{M}=I\cup\{\infty\}\) and define \[S_{\infty}=\operatorname{Ext}^{0}(M,M)\ \in\operatorname{D}_{\mathrm{Fd}}(\Sigma_{M})\qquad\text{and}\qquad S_{M}=\bigoplus_{i\in I_{M}}S_{i}=S\oplus S_{\infty}\ \in\operatorname{D}_{\mathrm{Fd}}(\Sigma_{M})\,\] as well as their natural graded enhancements. Note that the module structure on \(S_{\infty}\) factors through the natural projection \(\Sigma_{M}\to\Sigma_{\infty}\), or equivalently is pulled back from a module \(S_{\infty}\in\operatorname{D}_{\mathrm{Fd}}(\Sigma_{\infty})\), for which we use the same notation by abuse, and similarly for the graded variants. Further, as in Section 3.3, we can introduce corresponding Koszul dual graded algebras \[\Lambda_{\infty}=\operatorname{Hom}_{\Sigma_{\infty}}(S_{\infty},S_{\infty})\qquad\text{and}\qquad\Lambda_{M}=\operatorname{Hom}_{\Sigma_{M}}(S_{M},S_{M})\,\] as well as their \(I\)-adically complete, ungraded variants \(\hat{\Lambda}_{\infty}\) and \(\hat{\Lambda}_{M}\).
As in Equation 3.2, we have induced presentations of \(\hat{\Lambda}_{\infty}\) and \(\hat{\Lambda}_{M}\) (and their graded variants) as quasi-free complete (or graded) DG associative algebras, with underlying cohomologically (and abstractly bi)graded associative algebras given by \[\Lambda_{M}\cong(\otimes_{S_{M}}^{\bullet}\bar{\Sigma}_{M}[1])^{\vee}\qquad\text{and}\qquad\Lambda_{\infty}\cong(\otimes_{S_{\infty}}^{\bullet}\bar{\Sigma}_{\infty}[1])^{\vee}\.\] We again let \(S_{M}\in\hat{\Lambda}_{M}\)-\(\operatorname{Mod}\) and similarly \(S_{M}\in\Lambda_{M}\)-\(\operatorname{Mod}_{\mathbb{Z}}\) denote the corresponding augmentation modules, and we make the following definition: _Definition 4.15_.: The derived category \(\mathrm{D}_{\mathrm{Fr}}(\hat{\Lambda}_{M})\) of finite rank (over \(S_{M}\)) \(\hat{\Lambda}_{M}\) modules is defined as the thick subcategory \(\mathrm{D}_{\mathrm{Fr}}(\hat{\Lambda}_{M})=\mathrm{Thick}(S_{M})\) generated by \(S_{M}\), and similarly for the category of plain finite rank modules \(\hat{\Lambda}_{M}\text{-}\mathrm{Mod}_{\mathrm{Fr}}=\mathrm{Filt}(S_{M})\). Their graded variants are defined similarly, as \(\mathrm{D}_{\mathrm{fr}}(\Lambda_{M})=\mathrm{thick}(S_{M})\) and \(\Lambda_{M}\text{-}\mathrm{Mod}_{\mathrm{fr}}=\mathrm{filt}(S_{M})\). We again have induced Koszul duality equivalences, which are the analogues for \(\Sigma_{M}\) and \(\Sigma_{\infty}\) of the equivalences relating \(\Sigma\) modules and \(\Lambda\) modules given in Corollary 3.10: _Corollary 4.16_.: There are mutually inverse equivalences of triangulated categories \[\mathrm{Hom}_{\hat{\Lambda}_{M}}(S_{M},\cdot):\;\mathrm{D}_{\mathrm{Fr}}(\hat{\Lambda}_{M})\xrightarrow{\cong}\mathrm{D}_{\mathrm{Perf}}(\Sigma_{M})\;:(\cdot)\otimes_{\Sigma_{M}}S_{M}, \tag{4.3}\] \[\mathrm{Hom}_{\Lambda_{M}}(S_{M},\cdot):\;\mathrm{D}_{\mathrm{fr}}(\Lambda_{M})\xrightarrow{\cong}\mathrm{D}_{\mathrm{perf}}(\Sigma_{M})\;:(\cdot)\otimes_{\Sigma_{M}}S_{M}, \tag{4.4}\] \[\mathrm{Hom}_{\hat{\Lambda}_{\infty}}(S_{\infty},\cdot):\;\mathrm{D}_{\mathrm{Fr}}(\hat{\Lambda}_{\infty})\xrightarrow{\cong}\mathrm{D}_{\mathrm{Perf}}(\Sigma_{\infty})\;:(\cdot)\otimes_{\Sigma_{\infty}}S_{\infty},\text{ and} \tag{4.5}\] \[\mathrm{Hom}_{\Lambda_{\infty}}(S_{\infty},\cdot):\;\mathrm{D}_{\mathrm{fr}}(\Lambda_{\infty})\xrightarrow{\cong}\mathrm{D}_{\mathrm{perf}}(\Sigma_{\infty})\;:(\cdot)\otimes_{\Sigma_{\infty}}S_{\infty}. \tag{4.6}\] Proof.: Follows from the proof of Corollary 3.10, _mutatis mutandis_. We now proceed to prove the generalization of Theorem 3.11 to the thick subcategory \(\mathrm{Thick}(F\oplus M)\), and its graded variant. To begin, we note _Proposition 4.17_.: There exist functors \[(\cdot)\otimes_{S_{M}}S:\mathrm{D}_{\mathrm{Perf}}(\Sigma_{M})\rightarrow\mathrm{D}_{\mathrm{Fd}}(\Sigma)\qquad\qquad(\cdot)\otimes_{S_{M}}S:\mathrm{D}_{\mathrm{perf}}(\Sigma_{M})\rightarrow\mathrm{D}_{\mathrm{fd}}(\Sigma)\] \[(\cdot)\otimes_{\hat{\Lambda}_{M}}\hat{\Lambda}:\mathrm{D}_{\mathrm{Fr}}(\hat{\Lambda}_{M})\rightarrow\mathrm{D}_{\mathrm{Perf}}(\hat{\Lambda})\qquad\qquad(\cdot)\otimes_{\Lambda_{M}}\Lambda:\mathrm{D}_{\mathrm{fr}}(\Lambda_{M})\rightarrow\mathrm{D}_{\mathrm{perf}}(\Lambda).\] Proof.: The codomains of the functors in the first line are as claimed, since \(\Sigma_{M}\otimes_{S_{M}}S=\mathrm{Hom}(F,F\oplus M)\) is finite dimensional over the base field, as \(F\) has compact support. The codomains of the functors on the second line are as claimed since we have \(\mathrm{D}_{\mathrm{Fr}}(\hat{\Lambda}_{M})\subset\mathrm{D}_{\mathrm{Perf}}(\hat{\Lambda}_{M})\), and similarly for the graded variant, and the functors are clearly defined on the latter categories and preserve perfect complexes.
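It may help to record what finite rank means concretely here (an unpacking of Definition 4.15, consistent with the descriptions of geometric points given in Section 4.5 below): an object of \(\hat{\Lambda}_{M}\text{-}\mathrm{Mod}_{\mathrm{Fr}}\) has underlying \(S_{M}\) module a direct sum of free modules of finite rank over the summands of \(S_{M}\), \[V\cong\bigoplus_{i\in I}S_{i}^{d_{i}}\oplus S_{\infty}^{d_{\infty}}=\mathbb{K}^{\mathbf{d}_{0}}\oplus S_{\infty}^{d_{\infty}}\,\] so that the simple summands \(S_{i}\) contribute finite dimensional vector spaces, while \(S_{\infty}\) contributes free modules of finite rank over the potentially infinite dimensional algebra \(S_{\infty}=\operatorname{Ext}^{0}(M,M)\).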
In terms of these functors, the desired generalization of Theorem 3.11 is the following: _Theorem 4.18_.: The diagram of triangulated categories (4.7) has horizontal arrows given by mutually inverse triangle equivalences and vertical arrows given by inclusions of thick subcategories, and admits canonical commutativity data, and similarly for the diagram (4.8). To complete the proof, we will need the following lemma: _Lemma 4.19_.: The diagrams of triangulated categories admit canonical commutativity data, where \((\cdot)^{N_{\Sigma_{M}}}:\operatorname{D_{\mathrm{Fr}}}(\Sigma_{M})\to\operatorname{D_{\mathrm{Fr}}}(\Sigma_{M})\) denotes the Nakayama functor for \(\Sigma_{M}\) and similarly for \(N_{\Sigma}\) and their graded variants. Proof.: First, note that we have the natural isomorphism \[S\otimes_{S_{M}}\Sigma_{M}^{N_{\Sigma_{M}}}\otimes_{S_{M}}S=S\otimes_{S_{M}}\operatorname{Hom}_{S_{M}}(\Sigma_{M},S_{M})\otimes_{S_{M}}S\cong\operatorname{Hom}_{S_{M}}(\Sigma_{M},S)\otimes_{S_{M}}S\cong\operatorname{Hom}_{S_{M}}(\operatorname{Hom}(F,F\oplus M),S)\otimes_{S_{M}}S\cong\operatorname{Hom}_{S_{M}}(\operatorname{Hom}(F,F),S)\cong\Sigma^{N_{\Sigma}}\.\] Now, it suffices to check the result on the generator \(\Sigma_{M}\) of \(\operatorname{D_{\mathrm{Perf}}}(\Sigma_{M})\); we have \[\Sigma_{M}\otimes_{S_{M}}S\otimes_{\Sigma}\Sigma^{N_{\Sigma}}=\Sigma_{M}\otimes_{S_{M}}S\otimes_{\Sigma}\left(S\otimes_{S_{M}}\Sigma_{M}^{N_{\Sigma_{M}}}\otimes_{S_{M}}S\right)\cong S\otimes_{S_{M}}\Sigma_{M}\otimes_{S_{M}}S\otimes_{\Sigma}\left(\Sigma_{M}^{N_{\Sigma_{M}}}\otimes_{S_{M}}S\right)\cong\Sigma_{M}^{N_{\Sigma_{M}}}\otimes_{S_{M}}S\,\] where the first equality follows from the preceding natural isomorphism. Proof.: (of Theorem 4.18) Note that we have already established the existence of the mutually inverse triangle equivalences in the bottom rows of the diagram in Theorem 3.11, and those in the top rows in Corollaries 4.14 and 4.16. Thus, it remains to check commutativity, which we now prove for the rightmost and outer squares, as this is sufficient: for the rightmost squares, applying the preceding lemma, we obtain a natural isomorphism between the composite functor around the square and \((\cdot)\otimes_{\hat{\Lambda}_{M}}\hat{\Lambda}\), as desired, and similarly for the graded case. Commutativity of the outer squares follows from the proof of Theorem 3.11, noting we have the natural isomorphism \[\operatorname{Hom}_{\hat{Y}}(F\oplus M,\cdot)\otimes_{S_{M}}S\cong\operatorname{Hom}_{\hat{Y}}(F,\cdot)\,\] and similarly for the graded case. We now describe the induced equivalences of the natural hearts of the categories in the top lines of the diagrams in Theorem 4.18, generalizing the descriptions of Theorems 3.20 and 3.27 to the extended case. Recall that by hypothesis \[M\in\operatorname{PervCoh}^{\operatorname{p}}(Y)=\operatorname{PervCoh}^{\operatorname{p}}(Y/X)\] is an object in the heart of \(\operatorname{D}^{b}\operatorname{Coh}(Y)\) with respect to an admissible Bridgeland-Deligne perverse coherent t-structure, in the sense of Definitions 4.10 and 4.11. Further, recall that the compactly supported perverse coherent sheaves \(F\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)\subset\operatorname{PervCoh}^{\operatorname{p}}(Y)\) are objects of this heart by the admissibility assumption.
Thus, we can define the strictly full subcategories \[\operatorname{PervCoh}^{\operatorname{p}}_{M}(Y):=\operatorname{Filt}(F\oplus M)\subset\operatorname{PervCoh}^{\operatorname{p}}(Y)\qquad\text{and} \tag{4.9}\] \[\operatorname{PervCoh}^{\operatorname{p}}_{M}(Y)^{T}:=\operatorname{filt}(F\oplus M)\subset\operatorname{PervCoh}^{\operatorname{p}}(Y)^{\mathbb{C}^{\times}}, \tag{4.10}\] on objects admitting a filtration with subquotients given by the direct summands of the object \(F\oplus M\) (and its graded shifts, in the graded case), as in Definitions 2.47 and 2.56, respectively. Further, let \(\mathcal{D}=\operatorname{D}^{b}\operatorname{Coh}(Y)^{T}\), \(F\oplus M=\oplus_{i\in I}F_{i}\oplus M\) and consider the graded variant of the \(A_{\infty}\) category \(\mathcal{A}_{F\oplus M}\) defined in Example 2.37, with objects \(i\in I_{M}=I\cup\{\infty\}\) and \(\operatorname{Hom}\) spaces given by \[\mathcal{A}_{F\oplus M}(i,j)=\ _{i}(\Sigma_{M})_{j}\qquad\text{for}\qquad i,j\in I_{M}.\] The analogue of Theorem 3.27 is given by the following: _Theorem 4.20_.: Restriction to the hearts of the triangulated categories in the diagram of Equation 4.8 induces mutually inverse equivalences of mixed categories \[\operatorname{PervCoh}^{\operatorname{p}}_{M}(Y)^{T}\xrightarrow{\cong}H^{0}(\operatorname{tw}^{0}\mathcal{A}_{F\oplus M})\xrightarrow{\cong}\Lambda_{M}\text{-Mod}_{\operatorname{fr}}\.\] Similarly, restriction to hearts in the diagram of Equation 4.7 induces \[\operatorname{PervCoh}^{\operatorname{p}}_{M}(\hat{Y})\xrightarrow{\cong}H^{0}(\operatorname{Tw}^{0}\mathcal{A}_{F\oplus M})\xrightarrow{\cong}\hat{\Lambda}_{M}\text{-Mod}_{\operatorname{Fr}}\.\] Proof.: The equivalences on the right follow from the identifications \[\Lambda_{M}\text{-Mod}_{\operatorname{fr}}=\operatorname{filt}(S_{M})\xrightarrow{\cong}\operatorname{filt}(\Sigma_{M})\cong H^{0}(\operatorname{tw}^{0}\mathcal{A}_{F\oplus M})\,\] and similarly in the ungraded case \[\hat{\Lambda}_{M}\text{-Mod}_{\operatorname{Fr}}=\operatorname{Filt}(S_{M})\xrightarrow{\cong}\operatorname{Filt}(\Sigma_{M})\cong H^{0}(\operatorname{Tw}^{0}\mathcal{A}_{F\oplus M})\,\] where the latter isomorphism in each line follows from Corollaries 2.57 and 2.48, respectively, and the middle equivalence in each line follows from the fact that triangulated functors preserve categories of iterated extensions, and we have seen that the given functor maps \(S_{M}\) to \(\Sigma_{M}\) (and, in the graded case, preserves grading shifts in the unsheared conventions). Similarly, the triangulated equivalence given by the composition of the two horizontal equivalences in the top lines of Equations 4.8 and 4.7 are given by the functor \(\operatorname{Hom}(F\oplus M,\cdot)\), which evidently induces equivalences \[\operatorname{PervCoh}_{M}^{\operatorname{p}}(Y)^{T}:=\operatorname{filt}(F\oplus M)\xrightarrow{\cong}\operatorname{filt}(\Sigma_{M})\cong H^{0}(\operatorname{tw}^{0}\mathcal{A}_{F\oplus M})\,\] and similarly in the ungraded case \[\operatorname{PervCoh}_{M}^{\operatorname{p}}(\hat{Y})=\operatorname{Filt}(F\oplus M)\xrightarrow{\cong}\operatorname{Filt}(\Sigma_{M})\cong H^{0}(\operatorname{Tw}^{0}\mathcal{A}_{F\oplus M})\,\] which again follows from the fact that triangulated functors preserve categories of iterated extensions, and \(\operatorname{Hom}(F\oplus M,\cdot)\) evidently maps \(F\oplus M\) to \(\Sigma_{M}\). We now recall several results about the compatibility between the equivalences of the preceding Theorem 4.18 and those of Corollary 3.7 and Theorem 3.11 describing \(\operatorname{Thick}(F)\), as well as the analogous descriptions of \(\operatorname{Thick}(M)\), and their graded variants.
To begin, note there are natural (graded) bimodules \[S\otimes_{S_{M}}\Sigma_{M}=\operatorname{Hom}(F\oplus M,F)\ \in\ (\Sigma,\Sigma_{M})\text{-BiMod}\qquad\text{and} \tag{4.11}\] \[S_{\infty}\otimes_{S_{M}}\Sigma_{M}=\operatorname{Hom}(F\oplus M,M)\ \in\ (\Sigma_{\infty},\Sigma_{M})\text{-BiMod}\, \tag{4.12}\] inducing functors \[(\cdot)\otimes_{\Sigma}(S\otimes_{S_{M}}\Sigma_{M}):\operatorname{D_{Perf}}(\Sigma)\to\operatorname{D_{Perf}}(\Sigma_{M})\qquad\text{and}\qquad(\cdot)\otimes_{\Sigma_{\infty}}(S_{\infty}\otimes_{S_{M}}\Sigma_{M}):\operatorname{D_{Perf}}(\Sigma_{\infty})\to\operatorname{D_{Perf}}(\Sigma_{M})\,\] and similarly for the graded variants. Further, note there are natural maps of algebras \[\varpi:\Lambda_{M}\to\Lambda\qquad\text{and}\qquad\varpi_{\infty}:\Lambda_{M}\to\Lambda_{\infty}\] and induced restriction functors on their module categories \[\varpi^{*}:\operatorname{D_{Fd}}(\hat{\Lambda})\to\operatorname{D_{Fr}}(\hat{\Lambda}_{M})\qquad\text{and}\qquad\varpi_{\infty}^{*}:\operatorname{D_{Fr}}(\hat{\Lambda}_{\infty})\to\operatorname{D_{Fr}}(\hat{\Lambda}_{M})\,\] and similarly for the graded variants. The main compatibility results are the following: _Proposition 4.21_.: The diagram of triangulated categories (4.13) has horizontal arrows given by mutually inverse triangle equivalences and vertical arrows given by inclusions of thick subcategories, and admits canonical commutativity data, and similarly for the diagram (4.14), as well as their graded variants (4.15) and (4.16). Moreover, we have the following induced equivalences of the natural hearts of these categories: _Corollary 4.22_.: Restriction to the hearts of the triangulated categories in the diagram of Equation 4.16 induces mutually inverse equivalences of mixed categories, and similarly restriction to hearts in the diagram of Equation 4.14 induces the corresponding equivalences in the ungraded case.

### Monad presentations of perverse coherent extensions

In this section, we generalize the construction of Section 3.4, and its generalization in Section 3.5, describing certain canonical resolutions of compactly supported perverse coherent sheaves in terms of representations of quivers, to describe objects of the category \(\operatorname{Filt}(F\oplus M)\) in terms of representations of extended quivers. The compositions of the equivalences of Equations 4.7 and 4.8 define triangle equivalences \[\hat{K}_{M}(\cdot):\operatorname{D}_{\operatorname{Perf}}(\Sigma_{M})\xrightarrow{\cong}\operatorname{Thick}(F\oplus M)\qquad\text{and}\qquad K_{M}(\cdot):\operatorname{D}_{\operatorname{perf}}(\Sigma_{M})\xrightarrow{\cong}\operatorname{thick}(F\oplus M)\,\] analogous to the functors of Equations 3.13 and 3.14. Our goal is to give an explicit presentation of the subcategory \(\operatorname{PervCoh}^{\operatorname{p}}_{M}(\hat{Y})\) as the image of the corresponding subcategory \(H^{0}(\operatorname{Tw}^{0}\mathcal{A}_{F\oplus M})\), and thus in terms of representations of the extended quiver \(Q_{M}\). One of the primary implications of commutativity in the statement of Theorem 4.18 is the following: _Corollary 4.23_.: There are canonical natural isomorphisms \[\hat{K}_{M}\cong\hat{K}\circ(\cdot)^{N}\circ(\cdot)\otimes_{S_{M}}S:\operatorname{D}_{\operatorname{Perf}}(\Sigma_{M})\to\operatorname{D}^{b}\mathrm{Coh}(\hat{Y})\qquad\text{and} \tag{4.17}\] \[K_{M}\cong K\circ(\cdot)^{N}\circ(\cdot)\otimes_{S_{M}}S:\operatorname{D}_{\operatorname{perf}}(\Sigma_{M})\to\operatorname{D}^{b}\mathrm{Coh}(Y)^{T} \tag{4.18}\] where \(\hat{K}\) and \(K\) are the functors of Equations 3.13 and 3.14.
Proof.: These natural isomorphisms are simply the commutativity data for the outer squares of Equations 4.7 and 4.8. Further, the preceding corollary induces the following equivalent presentations of these categories: _Corollary 4.24_.: The image of \(\mathrm{D}_{\mathrm{perf}}(\Sigma_{M})\) under \((\cdot)\otimes_{S_{M}}S:\mathrm{D}_{\mathrm{perf}}(\Sigma_{M})\to\mathrm{D}_{\mathrm{fd}}(\Sigma)\) is given by \[\mathrm{D}_{\mathrm{perf}}(\Sigma_{M})\xrightarrow{\cong}\mathrm{thick}(\Sigma\oplus\mathrm{Hom}(F,M))\subset\mathrm{D}_{\mathrm{fd}}(\Sigma)\,\] inducing an equivalence of mixed categories between the corresponding subcategories \[H^{0}(\mathrm{tw}^{0}(\mathcal{A}_{F\oplus M}))\xrightarrow{\cong}\mathrm{filt}_{\Sigma}(\Sigma\oplus\mathrm{Hom}(F,M))\.\] Similarly, the image of \(\mathrm{D}_{\mathrm{Perf}}(\Sigma_{M})\) under \((\cdot)\otimes_{S_{M}}S:\mathrm{D}_{\mathrm{Perf}}(\Sigma_{M})\to\mathrm{D}_{\mathrm{Fd}}(\Sigma)\) is given by \[\mathrm{D}_{\mathrm{Perf}}(\Sigma_{M})\xrightarrow{\cong}\mathrm{Thick}(\Sigma\oplus\mathrm{Hom}(F,M))\subset\mathrm{D}_{\mathrm{Fd}}(\Sigma)\,\] inducing an equivalence \[H^{0}(\mathrm{Tw}^{0}(\mathcal{A}_{F\oplus M}))\xrightarrow{\cong}\mathrm{Filt}_{\Sigma}(\Sigma\oplus\mathrm{Hom}(F,M))\.\] Proof.: This follows immediately, noting that the image of \(\Sigma_{M}\) is given by \[\Sigma_{M}\otimes_{S_{M}}S=\mathrm{Hom}(F\oplus M,F\oplus M)\otimes_{S_{M}}S\cong\mathrm{Hom}(F,F\oplus M)=\Sigma\oplus\mathrm{Hom}(F,M)\.\] Thus, it remains to understand the image of (the Nakayama dual of) \(\mathrm{filt}_{\Sigma}(\Sigma\oplus\mathrm{Hom}(F,M))\) under the functor \(K:\mathrm{D}_{\mathrm{perf}}(\Sigma)\to\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\) studied in Section 3.4, generalizing the calculation of the image of \(\mathrm{filt}_{\Sigma}(\Sigma)\) in _loc. cit._ (as well as its I-adically complete, ungraded variant). _Warning 4.25_.: For concreteness, we give most of the exposition in this section in the graded case, noting the ungraded variant can be recovered by completing and forgetting the grading. Further, for notational simplicity we will restrict the trigrading along a cocharacter \(\mathbb{C}^{\times}\to T\), in keeping with Warning 3.14. To begin, we let \[\Sigma_{\infty}=\mathrm{Hom}(F,M),\ I_{\infty}=\mathrm{Hom}(F,M)^{N}\ \in\mathrm{D}_{\mathrm{fd}}(\Sigma)\] and similarly for \(\Sigma_{\infty},I_{\infty}\in\mathrm{D}_{\mathrm{Fd}}(\Sigma)\), and suppose \(I_{\infty}\) has Jordan–Hölder series \[I_{\infty}=[I_{\infty}^{0}<I_{\infty}^{-1}\langle 1\rangle<\dots<I_{\infty}^{-m_{\infty}}\langle m_{\infty}\rangle]\, \tag{4.19}\] in the cohomologically sheared notation, in analogy with Equation 3.18. Then the image of \(I_{\infty}\) under the explicit presentation of the Koszul duality functor established in Section 3.4 is given by \[K(I_{\infty})=\left[I_{\infty}^{-m_{\infty}}\otimes_{S}E[m_{\infty}]\to\dots\to I_{\infty}^{-1}\otimes_{S}E[1]\to I_{\infty}^{0}\otimes_{S}E\right]\xrightarrow{\cong}M\,\] as in Equation 3.19.
Further, we have canonical identifications \[{}_{\infty}(\Sigma_{M})_{i}^{1}\cong\mathrm{Hom}_{\Sigma}(I_{\infty},I_{i}\langle 1\rangle)\cong\mathrm{Hom}_{\mathrm{D}^{b}\mathrm{Coh}(Y)}^{0}(K(I_{\infty}),K(I_{i})[1]) \tag{4.20}\] \[{}_{i}(\Sigma_{M})_{\infty}^{1}\cong\mathrm{Hom}_{\Sigma}(I_{i},I_{\infty}\langle 1\rangle)\cong\mathrm{Hom}_{\mathrm{D}^{b}\mathrm{Coh}(Y)}^{0}(K(I_{i}),K(I_{\infty})[1]) \tag{4.21}\] \[{}_{\infty}(\Sigma_{M})_{\infty}^{1}\cong\mathrm{Hom}_{\Sigma}(I_{\infty},I_{\infty}\langle 1\rangle)\cong\mathrm{Hom}_{\mathrm{D}^{b}\mathrm{Coh}(Y)}^{0}(K(I_{\infty}),K(I_{\infty})[1]) \tag{4.22}\] as in Equation 3.20, though we note that \(I_{\infty}\in\mathrm{D}_{\mathrm{fd}}(\Sigma)\) is not necessarily injective, so the Hom functors over \(\Sigma\) appearing in the preceding equations are not necessarily exact, in contrast to those in _loc. cit._ Given an object \((i_{1},...,i_{d},\delta)\in\mathrm{tw}^{0}\mathcal{A}_{F\oplus M}\), where we recall that each \(i_{k}\in I_{M}=I\cup\{\infty\}\), the corresponding object of \(\mathrm{D}_{\mathrm{fd}}(\Sigma)\) has underlying bigraded vector space given by \[\Sigma_{i_{1},...,i_{d}}=\bigoplus_{k=1,...,d}\Sigma_{i_{k}}\langle p_{k}\rangle\cong\bigoplus_{i\in I_{M}}\Sigma_{i}\otimes V_{i}\cong\left(\bigoplus_{i\in I}\Sigma_{i}\otimes V_{i}\right)\oplus(\Sigma_{\infty}\otimes V_{\infty})\] where each \(V_{i}\) is a graded vector space \[V_{i}=\bigoplus_{p\in\mathbb{Z}}V_{i,-p}\langle p\rangle\qquad\text{with}\qquad\dim V_{i,p}=\#\{k\in\{1,...,d\}\ |\ i_{k}=i\ \text{and}\ p_{k}=p\}\.\] Note in particular that \(\dim\oplus_{i\in I_{M}}V_{i}=d\). Now, the degree zero element \(\delta\in\mathfrak{gl}_{d}(\mathcal{A}_{F\oplus M})[1]\) is in fact an element of the subspace \[\delta\in\mathrm{Hom}_{\mathrm{D}(\Sigma_{M})}(\bigoplus_{i\in I_{M}}(\Sigma_{M})_{i}\otimes V_{i},\bigoplus_{j\in I_{M}}(\Sigma_{M})_{j}\otimes V_{j}[1])\subset\mathfrak{gl}_{d}(\Sigma_{M})[1]\cong\mathfrak{gl}_{d}(\mathcal{A}_{F\oplus M})[1]\] and thus admits a decomposition \[\delta=\sum_{i,j\in I_{M}}b_{ij}\otimes B_{ij}\quad\in\quad\bigoplus_{i,j\in I_{M}}\ {}_{i}(\Sigma_{M})_{j}\otimes\mathrm{Hom}(V_{i},V_{j})[1]\, \tag{4.23}\] as in Equation 3.27, where we again use the abuse of notation \(b_{ij}\otimes B_{ij}\) to denote a not necessarily pure tensor, as in Warning 3.25. The underlying cochain complex differential is given by \[d_{\delta}=\sum_{k\in\mathbb{Z}}\rho_{k}^{\Sigma\oplus\Sigma_{\infty}}(\cdot,\delta^{\otimes k-1})\quad\in\quad\mathrm{Hom}_{\mathrm{D}(\Sigma)}(\Sigma_{i_{1},...,i_{d}},\Sigma_{i_{1},...,i_{d}}[1])\] where \(\rho_{k}^{\Sigma\oplus\Sigma_{\infty}}:\Sigma_{i_{1},...,i_{d}}\otimes\Sigma_{M}^{\otimes k-1}\to\Sigma_{i_{1},...,i_{d}}[1-k]\) denote the image under \((\cdot)\otimes_{S_{M}}S\) of the \(A_{\infty}\) module structure maps for \((\Sigma_{M})_{i_{1},...,i_{d}}\) over \(\mathcal{A}_{F\oplus M}\). Decomposing according to Equation 4.23, we have the analogous decomposition \[d_{\delta}=\sum_{k\in\mathbb{Z},\ i,i_{2},...,i_{k-1},j\in I_{M}}\rho_{k}^{\Sigma\oplus\Sigma_{\infty}}(\cdot,b_{ii_{2}},...,b_{i_{k-1}j})\otimes(B_{ii_{2}}...B_{i_{k-1}j})\quad\in\quad\bigoplus_{i,j\in I_{M}}\mathrm{Hom}(\Sigma_{i},\Sigma_{j}[1])\otimes\mathrm{Hom}(V_{i},V_{j})\, \tag{4.24}\] of the differential on \(\Sigma_{i_{1},...,i_{d}}^{\delta}\).
Thus, the image of the Nakayama dual of \(\Sigma_{i_{1},...,i_{d}}^{\delta}\in\mathrm{D}_{\mathrm{perf}}(\Sigma)\) under the Koszul duality equivalence of Equation 3.14 is given by \[K(\Sigma_{i_{1},...,i_{d}}^{\delta,N})=\left(K(\Sigma_{i_{1},...,i_{d}}^{N}),K(d_{\delta}^{N})\right)\qquad\text{where}\qquad K(\Sigma_{i_{1},...,i_{d}}^{N})=\bigoplus_{i\in I_{M}}K(I_{i})\otimes V_{i}\, \tag{4.25}\] and where the differential \(K(d_{\delta}^{N}):K(\Sigma_{i_{1},...,i_{d}}^{N})\to K(\Sigma_{i_{1},...,i_{d}}^{N})[1]\) is given by \[K(d_{\delta}^{N}):=\sum K(\rho_{k}^{\Sigma\oplus\Sigma_{\infty}}(\cdot,b_{i,i_{2}},...,b_{i_{k-1},j})^{N})\otimes(B_{i,i_{2}}...B_{i_{k-1},j})\quad\in\quad\bigoplus_{i,j\in I_{M}}\mathrm{Hom}(K(I_{i}),K(I_{j})[1])\otimes\mathrm{Hom}(V_{i},V_{j})\, \tag{4.26}\] where the sum is over the same index set as in Equation 4.24 above. We now give a concrete description of the equivalence of this data with that of a representation of the extended quiver \(Q_{M}\), which is by definition given by a (finite rank, in the sense of Definition 4.15) module over the path algebra \(\Lambda_{M}=(\otimes_{S_{M}}^{\bullet}\bar{\Sigma}_{M}[1])^{\vee}\), as ensured by the equivalence of Theorem 4.20 between \(\mathrm{PervCoh}_{M}^{\mathrm{p}}(Y)^{T}\) and \(\Lambda_{M}\)-\(\mathrm{Mod}_{\mathrm{fr}}\). The K-theory class of an object \(H\in\operatorname{PervCoh}_{M}^{\mathrm{p}}(Y)^{T}\) is determined by the multiplicities \(d_{i}\in\mathbb{N}\) for \(i\in I\) of the simple factors \(F_{i}\) together with the multiplicity \(d_{\infty}\in\mathbb{N}\) of the auxiliary object \(M\). These multiplicities correspond to those of the finite dimensional simple \(\Lambda_{M}\) modules \(S_{i}\) for \(i\in I\) together with that of the remaining finite rank generator \(S_{\infty}\). The extended quiver representation corresponding to \(H\) is then determined by a \(\mathbf{d}=(d_{i})_{i\in I_{M}}\) dimensional \(S_{M}\) module \[V=\bigoplus_{i\in V_{Q_{M}}}V_{i}=V_{\infty}\oplus\bigoplus_{i\in V_{Q}}V_{i}\] together with a map \[\mathbb{K}\langle E_{Q_{M}}\rangle^{0}=(\Sigma_{M}^{1})^{\vee}\to\underline{\operatorname{End}}_{S_{M}\text{-BiMod}}(V) \tag{4.27}\] of \(S_{M}\) bimodules, such that the induced map from the cohomological degree zero component of the quasi-free resolution of the path algebra \(\Lambda_{M}^{0}=\otimes_{S_{M}}^{\bullet}\mathbb{K}\langle E_{Q_{M}}\rangle^{0}\) to \(\underline{\operatorname{End}}_{S_{M}\text{-BiMod}}(V)\) maps the ideal of relations in the quiver \(Q_{M}\) to zero. The map of Equation 4.27 can equivalently be interpreted as an element \[\delta=\sum_{i,j\in I_{M}}b_{ij}\otimes B_{ij}\quad\in\quad\Sigma_{M}^{1}\otimes\underline{\operatorname{End}}(V)=\bigoplus_{i,j\in I_{M}}\ {}_{i}(\Sigma_{M})_{j}\otimes\operatorname{Hom}(V_{i},V_{j})\,\] which determines a cohomological degree \(+1\) endomorphism \(d_{B}:\tilde{H}\to\tilde{H}[1]\) of the perverse coherent complex \[\tilde{H}:=\bigoplus_{i\in I_{M}}K(I_{i})\otimes_{S_{i}}V_{i}\ \in\operatorname{PervCoh}_{M}^{\mathrm{p}}(Y)^{\mathbb{C}^{\times}}\,\qquad\text{or}\qquad\tilde{H}:=\bigoplus_{i\in I_{M}}\hat{K}(I_{i})\otimes_{S_{i}}V_{i}\ \in\operatorname{PervCoh}_{M}^{\mathrm{p}}(\hat{Y}) \tag{4.28}\] in the ungraded case, by the formula \[d_{B}=\sum_{k\in\mathbb{Z},\ i,i_{2},...,i_{k-1},j\in I_{M}}K(\rho_{k}^{\Sigma\oplus\Sigma_{\infty}}(\cdot,b_{i,i_{2}},...,b_{i_{k-1},j})^{N})\otimes(B_{i,i_{2}}...B_{i_{k-1},j})\. \tag{4.29}\]
We can now state the desired generalization of Propositions 3.26 and 3.29: _Proposition 4.26_.: Let \(\tilde{H}\in\operatorname{PervCoh}_{M}^{\mathrm{p}}(Y)^{T}\) be as in Equation 4.28 and fix \[B:=(B_{ij})_{i,j\in I_{M}}\qquad\text{with}\qquad B_{ij}=(B_{ij}^{\alpha}\in\operatorname{Hom}(V_{i},V_{j}))_{\alpha\in\mathcal{B}_{ij}}\.\] The following are equivalent: * the induced map \(d_{B}:\tilde{H}\to\tilde{H}[1]\) of Equation 4.29 satisfies \(d_{B}^{2}=0\), and * the induced map \(\rho:(\Sigma_{M}^{1})^{\vee}\to\underline{\operatorname{End}}_{S_{M}\text{-BiMod}}(V)\) of Equation 4.27 defines a representation of the graded DG quiver \(Q_{M}\). Further, when these conditions hold, the resulting complex \((\tilde{H},d_{B})\) is a projective resolution of the object \(H\in\operatorname{PervCoh}_{M}^{\mathrm{p}}(Y)^{T}\) corresponding to the extended quiver representation \(V\in\Lambda_{M}\text{-Mod}_{\mathrm{fr}}\). Similarly, for \(\tilde{H}\in\operatorname{PervCoh}_{M}^{\mathrm{p}}(\hat{Y})\) as in Equation 4.28, the following are equivalent: * the induced map \(d_{B}:\tilde{H}\to\tilde{H}[1]\) of Equation 4.29 satisfies \(d_{B}^{2}=0\), and * the induced map \(\rho:(\Sigma_{M}^{1})^{\vee}\to\underline{\operatorname{End}}_{S_{M}\text{-BiMod}}(V)\) of Equation 4.27 defines a nilpotent representation of the DG quiver \(Q_{M}\). Further, when these conditions hold, the resulting complex \((\tilde{H},d_{B})\) is a projective resolution of the object \(H\in\operatorname{PervCoh}_{M}^{\mathrm{p}}(\hat{Y})\) corresponding to the extended quiver representation \(V\in\hat{\Lambda}_{M}\text{-Mod}_{\mathrm{Fr}}\). Proof.: The proof is the same as that of Proposition 3.29, _mutatis mutandis_.

### Moduli spaces of perverse coherent extensions and extended quivers

Let \(Q_{M}\) be an extended quiver, with vertex set \(V_{Q_{M}}=V_{Q}\cup\{\infty\}\) and edge set \(E_{Q_{M}}=E_{Q}\cup E_{\infty}\), where \(E_{\infty}\) denotes all edges of the extended quiver with \(\infty\) as at least one of its source and target vertices.
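Concretely (a summary of the preceding constructions, up to the conventions fixed in Equations 4.20–4.22): edges of \(Q_{M}\) between vertices \(i,j\in V_{Q_{M}}\) are indexed by a basis of \({}_{i}(\Sigma_{M})^{1}_{j}\), so the new edges in \(E_{\infty}\) correspond to classes in the \(\operatorname{Ext}^{1}\) groups between \(M\) and the simple objects \(F_{i}\), and the loops at \(\infty\) to classes in \(\operatorname{Ext}^{1}(M,M)\). For instance, in the situation of Example 4.33 below, where \(M=\mathcal{O}_{W}\) for \(W\subset\hat{Y}\) a smooth toric subvariety, the loops at \(\infty\) are indexed by a basis of \[{}_{\infty}\Sigma^{1}_{\infty}\cong\operatorname{Ext}^{1}(\mathcal{O}_{W},\mathcal{O}_{W})\cong H^{0}(W,N_{W/Y})\,\] as computed there.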
The free path algebra of the extended quiver is given by \[\mathbb{K}Q_{M}=\otimes_{S_{M}}^{\bullet}\mathbb{K}\langle E_{Q_{M}}\rangle\qquad\text{where}\qquad S_{M}=\oplus_{i\in V_{Q}}S_{i}\oplus S_{\infty}=\oplus_{i\in I}\mathbb{K}\oplus S_{\infty}\.\] For each dimension vector \(\mathbf{d}=(\mathbf{d}_{0},d_{\infty})\in\mathbb{N}^{V_{Q_{M}}}=\mathbb{N}^{V_{Q}}\times\mathbb{N}\) for the extended quiver, define \[X_{\mathbf{d}}(Q_{M})=\bigoplus_{i,j\in I_{M}}\ {}_{i}(\Sigma_{M})^{1}_{j}\otimes\operatorname{Hom}(\mathbb{K}^{d_{i}},\mathbb{K}^{d_{j}})\qquad\text{and}\qquad G_{\mathbf{d}}(Q_{M})=\prod_{i\in V_{Q_{M}}}\operatorname{Gl}_{d_{i}}(S_{i})\.\] The stack of representations \(\mathfrak{M}(Q_{M})\) of the free \(S_{M}\)-algebra on the extended quiver \(Q_{M}\) is defined as the disjoint union of the quotient stacks \[\mathfrak{M}(Q_{M})=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q_{M}}}}\mathfrak{M}_{\mathbf{d}}(Q_{M})\qquad\text{where}\qquad\mathfrak{M}_{\mathbf{d}}(Q_{M})=\left[X_{\mathbf{d}}(Q_{M})/G_{\mathbf{d}}(Q_{M})\right]\.\] _Remark 4.27_.: Note that for \(\mathbf{d}=(\mathbf{d}_{0},0)\in\mathbb{N}^{V_{Q_{M}}}=\mathbb{N}^{V_{Q}}\times\mathbb{N}\), we have tautological identifications \[\bigoplus_{i,j\in I}\ {}_{i}\Sigma^{1}_{j}\otimes\operatorname{Hom}(\mathbb{K}^{d_{i}},\mathbb{K}^{d_{j}})=\bigoplus_{e\in E_{Q}}\operatorname{Hom}(\mathbb{K}^{d_{s(e)}},\mathbb{K}^{d_{t(e)}})\qquad\text{and}\qquad\operatorname{Gl}_{d_{i}}(S_{i})=\operatorname{Gl}_{d_{i}}(\mathbb{K})\] for \(i\in V_{Q}\), so that in this case the definitions of \(X_{\mathbf{d}}(Q_{M}),\ G_{\mathbf{d}}(Q_{M})\) and \(\mathfrak{M}_{\mathbf{d}}(Q_{M})\) reduce to the standard definitions of \(X_{\mathbf{d}}(Q),\ G_{\mathbf{d}}(Q)\) and \(\mathfrak{M}_{\mathbf{d}}(Q)\) given in Section 2.4. Let \(\underline{\operatorname{End}}(S_{M}^{\mathbf{d}}):=\operatorname{Hom}(S_{M}^{\mathbf{d}},S_{M}^{\mathbf{d}})\in S_{M}\)-BiMod denote the matrix algebra on \[S_{M}^{\mathbf{d}}:=\bigoplus_{i\in V_{Q_{M}}}S_{i}^{d_{i}}=\mathbb{K}^{\mathbf{d}_{0}}\oplus S_{\infty}^{d_{\infty}}\] with its natural \(S_{M}\) bimodule structure, and note that there are canonical identifications \[X_{\mathbf{d}}(Q_{M})\cong\operatorname{Hom}_{S_{M}\text{-BiMod}}(\mathbb{K}\langle E_{Q_{M}}\rangle,\underline{\operatorname{End}}(S_{M}^{\mathbf{d}}))\cong\operatorname{Hom}_{\operatorname{Alg}_{\operatorname{Ass}}(S_{M}\text{-BiMod})}(\mathbb{K}Q_{M},\underline{\operatorname{End}}(S_{M}^{\mathbf{d}}))\,\] so that we have \[\mathfrak{M}_{\mathbf{d}}(Q_{M})(\mathbb{K})\cong\{V\in S_{M}\text{-Mod}_{\operatorname{Fr}},\ \varphi\in\operatorname{Hom}_{\operatorname{Alg}_{\operatorname{Ass}}(S_{M}\text{-BiMod})}(\mathbb{K}Q_{M},\underline{\operatorname{End}}(V))\ |\ \dim V=\mathbf{d}\ \}\,\] that is, the groupoid of geometric points of \(\mathfrak{M}_{\mathbf{d}}(Q_{M})\) is the maximal subgroupoid of the category of modules over the free path algebra \(\mathbb{K}Q_{M}\) with underlying \(S_{M}\) module a direct sum of free \(S_{i}\) modules of finite rank \(\mathbf{d}\).
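As a minimal sketch of these definitions (an illustrative special case with hypothetical input data, not one of the geometries studied in this paper): suppose the extended quiver has the single vertex \(\infty\) with \(S_{\infty}=\mathbb{K}[t]\) and a single loop at \(\infty\), so that \({}_{\infty}(\Sigma_{M})^{1}_{\infty}\cong S_{\infty}\). Then for \(\mathbf{d}=(d_{\infty})\) the definitions above give \[X_{\mathbf{d}}(Q_{M})\cong\mathfrak{gl}_{d_{\infty}}(\mathbb{K}[t])\qquad\text{and}\qquad\mathfrak{M}_{\mathbf{d}}(Q_{M})\cong[\mathfrak{gl}_{d_{\infty}}(\mathbb{K}[t])/\mathrm{Gl}_{d_{\infty}}(\mathbb{K}[t])]\,\] the stack of rank \(d_{\infty}\) free \(\mathbb{K}[t]\) modules equipped with an endomorphism, with \(\mathrm{Gl}_{d_{\infty}}(\mathbb{K}[t])\) acting by conjugation.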
Let \(R_{M}\subset\mathbb{K}Q_{M}\) be the ideal generated by the image of the Koszul differential \[d_{\Lambda_{M}}^{-1}:\Lambda_{M}^{-1}\to\Lambda_{M}^{0}=\mathbb{K}Q_{M}\,\] and for each \(\mathbf{d}\in\mathbb{N}^{V_{Q_{M}}}\) define the closed subvariety \(Z_{\mathbf{d}}(Q_{M},R_{M})\subset X_{\mathbf{d}}(Q_{M})\) by \[Z_{\mathbf{d}}(Q_{M},R_{M})=\{\varphi\in\operatorname{Hom}_{\operatorname{Alg}_{\operatorname{Ass}}(S_{M}\text{-BiMod})}(\mathbb{K}Q_{M},\underline{\operatorname{End}}(S_{M}^{\mathbf{d}}))\ |\ \varphi(R_{M})=\{0\}\ \subset\underline{\operatorname{End}}(S_{M}^{\mathbf{d}})\}=\operatorname{Hom}_{\operatorname{Alg}_{\operatorname{Ass}}(S_{M}\text{-BiMod})}(\Lambda_{M},\underline{\operatorname{End}}(S_{M}^{\mathbf{d}}))\.\] Note that \(Z_{\mathbf{d}}(Q_{M},R_{M})\) is \(G_{\mathbf{d}}(Q_{M})\) invariant, as the condition that the corresponding map to \(\underline{\operatorname{End}}(V)\) satisfies \(\varphi(R_{M})=\{0\}\) is well defined, independent of the choice of isomorphism \(V\cong S_{M}^{\mathbf{d}}\). The stack of representations \(\mathfrak{M}(Q_{M},R_{M})\) of the extended quiver \((Q_{M},R_{M})\) is defined analogously as the disjoint union of the quotient stacks \[\mathfrak{M}(Q_{M},R_{M})=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q_{M}}}}\mathfrak{M}_{\mathbf{d}}(Q_{M},R_{M})\qquad\text{where}\qquad\mathfrak{M}_{\mathbf{d}}(Q_{M},R_{M})=[Z_{\mathbf{d}}(Q_{M},R_{M})/G_{\mathbf{d}}(Q_{M})]\] and we have the analogous description of the geometric points \[\mathfrak{M}_{\mathbf{d}}(Q_{M},R_{M})(\mathbb{K})=\{V\in S_{M}\text{-Mod}_{\operatorname{Fr}},\,\varphi\in\operatorname{Hom}_{\operatorname{Alg}_{\operatorname{Ass}}(S_{M}\text{-BiMod})}(\mathbb{K}Q_{M}/R_{M},\underline{\operatorname{End}}(V))\mid\dim_{S_{M}}V=\mathbf{d}\,\}\] as parameterizing the modules over the path algebra \(\mathbb{K}Q_{M}/R_{M}=\Lambda_{M}\) with underlying \(S_{M}\) module a direct sum of free \(S_{i}\) modules of finite rank \(\mathbf{d}\).
There are subspaces \(X_{\mathbf{d}}^{\operatorname{nil}}(Q_{M})\subset X_{\mathbf{d}}(Q_{M})\) defined by \[X_{\mathbf{d}}^{\operatorname{nil}}(Q_{M})=\{\varphi\in\operatorname{Hom}_{\operatorname{Alg}_{\operatorname{Ass}}(S_{M}\text{-BiMod})}(\mathbb{K}Q_{M},\underline{\operatorname{End}}(S_{M}^{\mathbf{d}}))\mid\varphi((\mathbb{K}Q_{M})_{(n)})=\{0\}\ \subset\underline{\operatorname{End}}(S_{M}^{\mathbf{d}})\text{ for }n\gg 0\}\,,\] and in turn a substack \(\mathfrak{M}^{\operatorname{nil}}(Q_{M})\subset\mathfrak{M}(Q_{M})\) defined by \[\mathfrak{M}^{\operatorname{nil}}(Q_{M})=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q_{M}}}}\mathfrak{M}_{\mathbf{d}}^{\operatorname{nil}}(Q_{M})\qquad\text{where}\qquad\mathfrak{M}_{\mathbf{d}}^{\operatorname{nil}}(Q_{M})=\left[X_{\mathbf{d}}^{\operatorname{nil}}(Q_{M})/G_{\mathbf{d}}(Q_{M})\right]\.\] Similarly, there are closed subvarieties \(Z_{\mathbf{d}}^{\operatorname{nil}}(Q_{M},R_{M})\subset X_{\mathbf{d}}^{\operatorname{nil}}(Q_{M})\) defined by \[Z_{\mathbf{d}}^{\operatorname{nil}}(Q_{M},R_{M})=Z_{\mathbf{d}}(Q_{M},R_{M})\times_{X_{\mathbf{d}}(Q_{M})}X_{\mathbf{d}}^{\operatorname{nil}}(Q_{M})\] and in turn a closed substack \(\mathfrak{M}^{\operatorname{nil}}(Q_{M},R_{M})\subset\mathfrak{M}^{\operatorname{nil}}(Q_{M})\) defined by \[\mathfrak{M}^{\operatorname{nil}}(Q_{M},R_{M})=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q_{M}}}}\mathfrak{M}_{\mathbf{d}}^{\operatorname{nil}}(Q_{M},R_{M})\qquad\text{where}\qquad\mathfrak{M}_{\mathbf{d}}^{\operatorname{nil}}(Q_{M},R_{M})=\left[Z_{\mathbf{d}}^{\operatorname{nil}}(Q_{M},R_{M})/G_{\mathbf{d}}(Q_{M})\right]\,.\] We now state the main result of this section, which is the full statement of Theorem A from the introduction. As in Section 3.6, we introduce the notation \[\mathfrak{M}(Y,M):=\mathfrak{M}_{\operatorname{PervCoh}_{M}^{\mathrm{p}}(Y)}\,\] and we have: _Theorem 4.28_.: Let \(M\in\operatorname{PervCoh}^{\mathrm{p}}(Y)^{T}\) be an object satisfying the hypotheses in Section 4.1, and let \(Q_{M}\) be the associated extended quiver. There is an equivalence of algebraic stacks \[\mathfrak{M}^{\operatorname{nil}}(Q_{M},R_{M})\xrightarrow{\cong}\mathfrak{M}(Y,M) \tag{4.30}\] where the induced equivalence of groupoids of \(\mathbb{K}\) points is defined on objects by \[(V_{i},B_{ij})\mapsto\left(\tilde{H}:=\bigoplus_{i\in I_{M}}K(I_{i})\otimes_{S_{i}}V_{i}\,\ d_{B}:=\sum_{k\in\mathbb{Z},\ i,i_{2},...,i_{k-1},j\in I_{M}}K(\rho_{k}^{\Sigma\oplus\Sigma_{\infty}}(\cdot,b_{i,i_{2}},...,b_{i_{k-1},j})^{N})\otimes(B_{i,i_{2}}...B_{i_{k-1},j})\right)\,\] in the notation of Section 4.4. Similarly, this induces an equivalence of the \(T\) fixed points \[\mathfrak{M}(Q_{M},R_{M})^{T}\xrightarrow{\cong}\mathfrak{M}_{\operatorname{PervCoh}_{M}^{\mathrm{p}}(Y)^{T}}. \tag{4.31}\] The main remaining ingredient of the proof is the following lemma: _Lemma 4.29_.: Let \(\Lambda_{M}=\mathbb{K}\langle Q_{M}\rangle/R_{M}\) be the path algebra of the extended quiver \(Q_{M}\) and \(\hat{\Lambda}_{M}\) be its completion. There is an equivalence of algebraic stacks \[\mathfrak{M}^{\rm nil}(Q_{M},R_{M})\xrightarrow{\cong}\mathfrak{M}_{\hat{\Lambda}_{M}\text{-}{\rm Mod}_{\rm Fr}}\.\] Proof.: Let \(R\) be a commutative ring with \(T=\text{Spec }R\); recall that \(W=\text{Spf }S_{\infty}\subset\hat{X}\) denotes the formal subscheme of \(\hat{X}\) on which \(\pi_{*}M\) is supported, and fix \(\mathbf{d}\in\mathbb{N}^{V_{Q_{M}}}\).
The groupoid of \(R\) points of the component \(\mathfrak{M}_{\mathbf{d}}^{\rm nil}(Q_{M},R_{M})\) of the left hand side is given by \[\mathfrak{M}_{\mathbf{d}}^{\rm nil}(Q_{M},R_{M})(R)=[Z_{\mathbf{d}}^{\rm nil}(Q_{M},R_{M})/G_{\mathbf{d}}(Q_{M})](R)=\{P\in\text{Bun}_{G_{\mathbf{d}}(Q_{M})}(T)\,\ \varphi\in\Gamma(T,P\times_{G_{\mathbf{d}}(Q_{M})}X_{\mathbf{d}}^{\rm nil}(Q_{M}))\ |\ \varphi(R_{M}\otimes R)=\{0\}\ \}\.\] Note that the underlying \(G_{\mathbf{d}}(Q_{M})=\text{Gl}_{\mathbf{d}_{0}}(\mathbb{K})\times\text{Gl}_{d_{\infty}}(S_{\infty})\) bundle \(P\) over \(R\) is equivalent to a rank \(d_{i}\) vector bundle \(V_{i}\) over \(T\) for each \(i\in V_{Q}\) together with a rank \(d_{\infty}\) vector bundle \(V_{\infty}\) over \(T\times W\), trivializable over \(\{t\}\times W\) for each \(\mathbb{K}\) point \(t\in T(\mathbb{K})\). Under this identification, the section \(\varphi\) is equivalent to a collection of sections of the associated endomorphism bundle \[\varphi\in\bigoplus_{i,j\in I_{M}}\ {}_{i}(\Sigma_{M})_{j}^{1}\otimes_{S_{i}\otimes S_{j}^{\rm op}}\Gamma(T,\underline{\text{Hom}}(V_{i},V_{j})) \tag{4.32}\] satisfying the relations generating the ideal \(R_{M}\), such that for each \(\mathbb{K}\) point \(t\in T(\mathbb{K})\), the induced morphism of associative algebras \(\Lambda_{M}\to\underline{\text{End}}(V_{t})\) has nilpotent image, where \(V_{t}\) denotes the fibre of \(V=\oplus_{i\in I_{M}}V_{i}\) at \(t\). Alternatively, the groupoid of \(R\) points of \(\mathfrak{M}_{\hat{\Lambda}_{M}\text{-}{\rm Mod}_{\rm Fr}}\) is by definition given by \[\mathfrak{M}_{\hat{\Lambda}_{M}\text{-}{\rm Mod}_{\rm Fr}}(R)=\{V\in\text{Ind}(\hat{\Lambda}_{M}\text{-}{\rm Mod}_{\rm Fr})_{R}\ |\ V\text{ is flat over }R\text{ and compact }\}\,\] where we recall that \(\mathcal{C}_{R}\) denotes the category of \(R\) module objects internal to \(\mathcal{C}\), as in Definition 3.30. The compact objects of \(\text{Ind}(\hat{\Lambda}_{M}\text{-}{\rm Mod}_{\rm Fr})_{R}\) are \(\hat{\Lambda}_{M}\otimes_{\mathbb{K}}R\) modules with underlying \(S_{M}\) module given as a colimit of free modules of finite rank over \(S_{M}\), which are finitely generated as \(\hat{\Lambda}_{M}\otimes_{\mathbb{K}}R\) modules. These conditions together imply that the underlying \(S_{M}\otimes_{\mathbb{K}}R\) module is finitely generated. Given an object \(V\in\text{Ind}(\hat{\Lambda}_{M}\text{-}{\rm Mod}_{\rm Fr})_{R}\), let \(V=\oplus_{i\in V_{Q}}V_{i}\oplus V_{\infty}\) denote the natural decomposition as a module over \(S_{M}=S\oplus S_{\infty}\). If \(V\) is compact, then each \(V_{i}\) defines a finitely generated \(R\) module, and \(V_{\infty}\) a finitely generated \(S_{\infty}\otimes_{\mathbb{K}}R\) module such that \((V_{\infty})_{t}=V_{\infty}\otimes_{R}\mathbb{K}\) is projective for each \(\mathbb{K}\) point \(t\in T(\mathbb{K})\); the latter follows from the fact that the category of compact objects of the ind completion of a category \(\mathcal{C}\) is given by the Karoubi completion of \(\mathcal{C}\), and the Karoubi completion of the category of free \(S_{\infty}\) modules is the category of projective \(S_{\infty}\) modules. Moreover, since \(S_{\infty}\) is a local ring, projective and finitely generated implies free, so that \((V_{\infty})_{t}\) is free of finite rank over \(S_{\infty}\) for each \(\mathbb{K}\) point \(t\in T(\mathbb{K})\).
Finally, for \(V\in\text{Ind}(\hat{\Lambda}_{M}\text{-}{\rm Mod}_{\rm Fr})_{R}^{\rm c}\) compact, the condition that \(V\) be flat over \(R\) corresponds under the above identifications to the condition that each \(V_{i}\) is flat as an \(R\) module for \(i\in V_{Q_{M}}\). Thus, for each \(i\in V_{Q}\), \(V_{i}\) defines a finitely generated, projective \(R\) module or equivalently a vector bundle on \(T\) of finite rank \(d_{i}\). Similarly, \(V_{\infty}\) defines a vector bundle on \(T\times W\) of finite rank \(d_{\infty}\) such that \((V_{\infty})_{t}\) is trivializable for each \(t\in T(\mathbb{K})\). Thus, we have identified the \(S_{M}\otimes_{\mathbb{K}}R\) module underlying an object \(V\in\mathfrak{M}_{\hat{\Lambda}_{M}\text{-}{\rm Mod}_{\rm Fr}}(R)\) with the collection of vector bundles underlying an object in the groupoid of \(R\) points \(\mathfrak{M}_{\mathbf{d}}^{\rm nil}(Q_{M},I_{M})(R)\). The additional data required to define the \(\hat{\Lambda}_{M}\otimes R\) module structure on \(V\) is evidently equivalent to that of \(\varphi\) in Equation 4.32; this completes the proof of the lemma. We now conclude the proof of Theorem 4.28 using the lemma and explain some implications of the result: Proof.: (of Theorem 4.28) The definition of the moduli of objects functor is evidently natural with respect to equivalences of categories, so that we obtain a canonical equivalence of algebraic stacks \[\mathfrak{M}_{\hat{\Lambda}_{M}\text{-}\operatorname{Mod}_{\operatorname{Fr}}} \xrightarrow{\cong}\mathfrak{M}_{\operatorname{PervCoh}^{\operatorname{p}}_{M }(\hat{Y})},\] by the equivalence of Theorem 4.20. Composing with the equivalence of algebraic stacks of Lemma 4.29 yields the desired equivalence of Equation 4.30, and the induced equivalence on groupoids of \(\mathbb{K}\) points is defined on objects by the claimed formula, by Proposition 4.26. _Corollary 4.30_.: Let \(\mathbf{d}=(\mathbf{d}_{0},0)\in\mathbb{N}^{V_{Q_{M}}}=\mathbb{N}^{V_{Q}}\times \mathbb{N}\). There is a commutative diagram of equivalences of algebraic stacks where the horizontal arrows are given by the equivalences of Theorems 3.32 and 4.28, and the vertical arrows are given by the tautological identifications of Remark 4.27 and of the image of \(\operatorname{Filt}(F_{i})\subset\operatorname{Filt}(F_{i},M)\) as the full subcategory on objects with \(0\) factors of \(M\) in their composition series. Proof.: The result follows from commutativity of the diagram in Equation 4.13, together with the induced equivalences of hearts given in Theorems 3.20 and 4.20. 
For \(Q_{M}\) the extended quiver corresponding to the object \(M\in\operatorname{PervCoh}^{\operatorname{p}}(Y)^{T}\), let \(Q_{\infty}\) denote the full extended subquiver of \(Q_{M}\) on the single extended vertex \(\infty\in V_{Q_{M}}\), and similarly define \[X_{d_{\infty}}(Q_{\infty})=\ _{\infty}\Sigma_{\infty}^{1}\otimes\operatorname{End}(\mathbb{K}^{d_{\infty}})\qquad\text{and}\qquad\mathfrak{M}_{d_{\infty}}(Q_{\infty})=[X_{d_{\infty}}(Q_{\infty})/\mathrm{Gl}_{d_{\infty}}(S_{\infty})]\,\] as well as closed subvarieties \(Z_{d_{\infty}}(Q_{\infty},I_{\infty})\subset X_{d_{\infty}}(Q_{\infty})\) and corresponding closed substacks \(\mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})\subset\mathfrak{M}_{d_{\infty}}(Q_{\infty})\) by \[Z_{d_{\infty}}(Q_{\infty},I_{\infty})=X_{d_{\infty}}(Q_{\infty})\times_{X_{\mathbf{d}}(Q_{M})}Z_{\mathbf{d}}(Q_{M},R_{M})\qquad\text{and}\qquad\mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})=[Z_{d_{\infty}}(Q_{\infty},I_{\infty})/\mathrm{Gl}_{d_{\infty}}(S_{\infty})]\,,\] where \(\mathbf{d}=(\mathbf{0},d_{\infty})\in\mathbb{N}^{V_{Q_{M}}}=\mathbb{N}^{V_{Q}}\times\mathbb{N}\). _Remark 4.31_.: Note that for \(\mathbf{d}=(\mathbf{0},d_{\infty})\in\mathbb{N}^{V_{Q_{M}}}=\mathbb{N}^{V_{Q}}\times\mathbb{N}\), we have tautological identifications \[X_{\mathbf{d}}(Q_{M})=X_{d_{\infty}}(Q_{\infty})\qquad\text{and}\qquad G_{\mathbf{d}}(Q_{M})=\mathrm{Gl}_{d_{\infty}}(S_{\infty})\] so that in this case the definitions of \(Z_{\mathbf{d}}(Q_{M},R_{M})\) and \(\mathfrak{M}_{\mathbf{d}}(Q_{M},R_{M})\) reduce to the definitions of \(Z_{d_{\infty}}(Q_{\infty},I_{\infty})\) and \(\mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})\) given in the preceding paragraph. _Corollary 4.32_.: Let \(\mathbf{d}=(\mathbf{0},d_{\infty})\in\mathbb{N}^{V_{Q_{M}}}=\mathbb{N}^{V_{Q}}\times\mathbb{N}\). There is a commutative diagram of equivalences of algebraic stacks where the horizontal arrows are given by the equivalence of Theorem 4.28, and the vertical arrows are given by the tautological identifications of Remark 4.31 and of the image of \(\operatorname{Filt}(M)\subset\operatorname{Filt}(F_{i},M)\) as the full subcategory on objects with \(0\) factors of \(F_{i}\) in their composition series for each \(i\in V_{Q}\). Proof.: The result follows from commutativity of the diagram in Equation 4.14, together with the induced equivalences of hearts given in Corollary 4.22 and Theorem 4.20. _Example 4.33_.: Let \(W\subset\hat{Y}\) denote a smooth, toric subvariety and \(M=\mathcal{O}_{W}\), so that \(S_{\infty}=H^{0}(W,\mathcal{O}_{W})\). Then using the Koszul resolution, we can compute \[{}_{\infty}\Sigma^{1}_{\infty}\cong\operatorname{Ext}^{1}(\mathcal{O}_{W},\mathcal{O}_{W})\cong H^{0}(W,N_{W/Y})\,\] the space of global sections of the normal bundle \(N_{W/Y}\) to \(W\) in \(Y\). Moreover, the space \[Z_{d_{\infty}}(Q_{\infty},I_{\infty})\subset X_{d_{\infty}}(Q_{\infty})\cong H^{0}(W,N_{W/Y})\otimes\operatorname{End}(\mathbb{K}^{d_{\infty}})\] identifies with the space of global \(N_{W/Y}\)-twisted Higgs fields on the trivial bundle \(\mathcal{O}_{W}^{\oplus d_{\infty}}\).
Thus, the stack of nilpotent representations of the quiver with relations \((Q_{\infty},I_{\infty})\) is given by \[\mathfrak{M}_{d_{\infty}}^{\operatorname{nil}}(Q_{\infty},I_{\infty})=[Z_{d_{\infty}}(Q_{\infty},I_{\infty})/\mathrm{Gl}_{d_{\infty}}(S_{\infty})]\cong\operatorname{Higgs}_{d_{\infty}}^{\operatorname{nil}}(W,N_{W/Y})_{0}\,\] the stack of nilpotent \(N_{W/Y}\)-twisted Higgs bundles on \(W\) of rank \(d_{\infty}\) with underlying vector bundle isomorphic to the trivial bundle, though without such an isomorphism fixed. Thus, in this example, we find that Corollary 4.32 reduces to the relevant variant of the spectral correspondence, giving an equivalence between the above-described stack of Higgs bundles and the stack of coherent sheaves on \(\hat{Y}\) isomorphic to an iterated extension of the structure sheaf \(\mathcal{O}_{W}\).

### Framing structures

For an arbitrary element \[\delta=\sum_{i,j\in I_{M}}b_{ij}\otimes B_{ij}\quad\in\quad\Sigma^{1}_{M}\otimes\underline{\operatorname{End}}(V)=\bigoplus_{i,j\in I_{M}}\ {}_{i}\Sigma^{1}_{j}\otimes\operatorname{Hom}(V_{i},V_{j})\,\] consider the decomposition \(V=V_{\mathbf{0}}\oplus V_{\infty}:=(\oplus_{i\in V_{Q}}V_{i})\oplus V_{\infty}\) and the induced decomposition \[\delta=\ _{\mathbf{0}}\delta_{\mathbf{0}}+\ _{\infty}\delta_{\mathbf{0}}+\ _{\mathbf{0}}\delta_{\infty}+\ _{\infty}\delta_{\infty}\quad\in\quad\Sigma^{1}_{M}\otimes\underline{\operatorname{End}}(V)=\bigoplus_{i,j\in\{\mathbf{0},\infty\}}\ {}_{i}\Sigma^{1}_{j}\otimes\operatorname{Hom}(V_{i},V_{j})\.\] We now formulate an additional hypothesis on the object \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\), which will be necessary for the results of this section: we require that the \(i=j=\infty\) component of the Maurer-Cartan equation on \(\delta\) determined by \(\Sigma_{M}\), in the sense of Equation 2.9, satisfies \[\left(\sum_{t\in\mathbb{N}}m_{t}^{\mathfrak{gl}_{d}(\Sigma_{M})}(\delta^{\otimes t})\right)_{\infty}\ =\sum_{t\in\mathbb{N}}m_{t}^{\mathfrak{gl}_{d_{\infty}}(\,_{\infty}\Sigma_{\infty})}((_{\infty}\delta_{\infty})^{\otimes t})\ \in\mathfrak{gl}_{d_{\infty}}(_{\infty}\Sigma_{\infty})\, \tag{4.33}\] that is, the \((\infty,\infty)\) component of the Maurer-Cartan operator for \(\Sigma_{M}\) evaluated on \(\delta\) is equal to the Maurer-Cartan operator for \(\Sigma_{\infty}\) evaluated on the \((\infty,\infty)\) component of \(\delta\). _Proposition 4.34_.: Let \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\) satisfy the condition of Equation 4.33, in addition to the usual hypotheses introduced in Section 4.1. 
Then there exists a canonical map of stacks \[\mathfrak{M}(Q_{M},R_{M})\to\mathfrak{M}(Q_{\infty},I_{\infty})\qquad\text{defined by}\qquad(V_{i},B_{ij})_{i,j\in I_{M}}\mapsto(V_{\infty},B_{\infty\infty})\.\] Proof.: The projection map to the direct summand corresponding to \(i=j=\infty\) defines a canonical map \[X_{\mathbf{d}}(Q_{M})=\bigoplus_{i,j\in I_{M}}\ {}_{i}\Sigma^{1}_{j}\otimes\mathrm{Hom}(\mathbb{K}^{d_{i}},\mathbb{K}^{d_{j}})\to\ _{\infty}\Sigma^{1}_{\infty}\otimes\mathrm{End}(\mathbb{K}^{d_{\infty}})=X_{d_{\infty}}(Q_{\infty})\] which evidently induces a map on quotient stacks \[\mathfrak{M}_{\mathbf{d}}(Q_{M})\to\mathfrak{M}_{d_{\infty}}(Q_{\infty})\.\] It remains to check that the restriction of the projection map to \(Z_{\mathbf{d}}(Q_{M},R_{M})\subset X_{\mathbf{d}}(Q_{M})\) defines a map \[Z_{\mathbf{d}}(Q_{M},R_{M})\to Z_{d_{\infty}}(Q_{\infty},I_{\infty})\qquad\text{and thus a map}\qquad\mathfrak{M}_{\mathbf{d}}(Q_{M},R_{M})\to\mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})\,\] as desired, on the corresponding closed substacks of the above quotient stacks. The left hand side of Equation 4.33 is one of the generators of the defining ideal of the subvariety \(Z_{\mathbf{d}}(Q_{M},R_{M})\subset X_{\mathbf{d}}(Q_{M})\), and thus the equality with the right hand side, which generates the ideal defining \(Z_{d_{\infty}}(Q_{\infty},I_{\infty})\subset X_{d_{\infty}}(Q_{\infty})\), implies the result. Under the hypotheses of the preceding proposition, we also have: _Corollary 4.35_.: There exists a canonical map of stacks \[\mathfrak{M}^{\mathrm{nil}}(Q_{M},R_{M})\to\mathfrak{M}^{\mathrm{nil}}(Q_{\infty},I_{\infty})\qquad\text{defined by}\qquad(V_{i},B_{ij})_{i,j\in I_{M}}\mapsto(V_{\infty},B_{\infty\infty})\.\] Proof.: The projection map \(X_{\mathbf{d}}(Q_{M})\to X_{d_{\infty}}(Q_{\infty})\) evidently maps \(X_{\mathbf{d}}^{\mathrm{nil}}(Q_{M})\) to \(X_{d_{\infty}}^{\mathrm{nil}}(Q_{\infty})\), and the result follows readily from the proof of Proposition 4.34. We have the following equivalent geometric form of the corollary: _Corollary 4.36_.: There exists a canonical map of stacks \[\mathfrak{M}(Y,M)\to\mathfrak{M}_{\mathrm{Filt}(M)}\] defined by forgetting the compactly supported composition factors and remembering only the underlying iterated extension of \(M\). Proof.: This is equivalent to the statement of the preceding corollary, under the identifications of Theorem 4.28 and Corollary 4.32. Note that while the operation on complexes of coherent sheaves defining the map is evidently not well-defined in general, the geometric content of Proposition 4.34 above is that this operation is well defined under the given hypotheses. 
_Definition 4.37_.: A _framing structure_ for \(M\) of rank \(d_{\infty}\) is a \(\mathbb{K}\)-point \[\mathrm{f}\in Z_{d_{\infty}}(Q_{\infty},I_{\infty})(\mathbb{K})\.\] Given a framing structure \(\mathrm{f}\) for \(M\) of rank \(d_{\infty}\), we define the stack \(\mathcal{F}_{\mathrm{f}}\) over \(\mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})\) by \[\mathcal{F}_{\mathrm{f}}=[\{\mathrm{f}\}/G_{\mathrm{f}}]\cong[\mathbb{O}_{\mathrm{f}}/\mathrm{Gl}_{d_{\infty}}(S_{\infty})]\ \to\ \mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})\,\] where \(G_{\mathrm{f}}=\mathrm{Stab}_{\mathrm{Gl}_{d_{\infty}}(S_{\infty})}(\mathrm{f})\) denotes the subgroup of \(\mathrm{Gl}_{d_{\infty}}(S_{\infty})\) stabilizing the point \(\mathrm{f}\), and \(\mathbb{O}_{\mathrm{f}}\subset Z_{d_{\infty}}(Q_{\infty},I_{\infty})\) denotes the orbit of \(\mathrm{f}\) under \(\mathrm{Gl}_{d_{\infty}}(S_{\infty})\). Similarly, we define the stack \(\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}\) over \(\mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})\) by \[\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}=[\{\mathrm{f}\}/G_{\mathrm{f}}^{\mathrm{c}}]\cong[\mathbb{O}_{\mathrm{f}}^{\mathrm{c}}/\mathrm{Gl}_{d_{\infty}}(\mathbb{K})]\ \to\ \mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})\,\] where \(G_{\mathrm{f}}^{\mathrm{c}}=\mathrm{Stab}_{\mathrm{Gl}_{d_{\infty}}(\mathbb{K})}(\mathrm{f})\) denotes the subgroup of \(\mathrm{Gl}_{d_{\infty}}(\mathbb{K})\) stabilizing the point \(\mathrm{f}\), and \(\mathbb{O}_{\mathrm{f}}^{\mathrm{c}}\subset Z_{d_{\infty}}(Q_{\infty},I_{\infty})\) denotes the orbit of \(\mathrm{f}\) under \(\mathrm{Gl}_{d_{\infty}}(\mathbb{K})\). Moreover, for any \(\mathbf{d}=(d_{i})\in\mathbb{N}^{V_{Q}}\), we define the stack of \(\mathcal{F}_{\mathrm{f}}\)-framed representations of \(Q_{M}\) of dimension \(\mathbf{d}\) by \[\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}}(Q_{M},R_{M})=\mathfrak{M}_{(\mathbf{d},d_{\infty})}(Q_{M},R_{M})\times_{\mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})}\mathcal{F}_{\mathrm{f}}\] and define the full moduli stack of \(\mathcal{F}_{\mathrm{f}}\)-framed representations of \(Q_{M}\) by \[\mathfrak{M}^{\mathcal{F}_{\mathrm{f}}}(Q_{M},R_{M})=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}}(Q_{M},R_{M})\,\] as usual. 
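For instance, for the trivial framing structure \(\mathrm{f}=0\), which appears in several of the examples below, the orbits \(\mathbb{O}_{\mathrm{f}}\) and \(\mathbb{O}_{\mathrm{f}}^{\mathrm{c}}\) reduce to the single point \(\{0\}\) and the stabilizers are the full groups, so that \[\mathcal{F}_{\mathrm{f}}=[\mathrm{pt}/\mathrm{Gl}_{d_{\infty}}(S_{\infty})]\qquad\text{and}\qquad\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}=[\mathrm{pt}/\mathrm{Gl}_{d_{\infty}}(\mathbb{K})]\,\] as in Example 4.39 below. 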
Similarly, we define the stack of \(\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}\)-framed representations of \(Q_{M}\) by \[\mathfrak{M}^{\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}}(Q_{M},R_{M})=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}}(Q_{M},R_{M})\qquad\text{where}\qquad\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}}(Q_{M},R_{M})=\mathfrak{M}_{(\mathbf{d},d_{\infty})}(Q_{M},R_{M})\times_{\mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})}\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}\.\] More explicitly, we introduce the space \[X_{\mathbf{d}}^{\mathrm{f}}(Q_{M})=X_{\mathbf{d}}(Q_{M})\times_{X_{d_{\infty}}(Q_{\infty})}\{\mathrm{f}\}\cong\bigoplus_{i,j\in V_{Q}}{}_{i}\Sigma_{j}^{1}\otimes\mathrm{Hom}(\mathbb{K}^{d_{i}},\mathbb{K}^{d_{j}})\oplus\bigoplus_{i\in V_{Q}}{}_{i}\Sigma_{\infty}^{1}\otimes\mathrm{Hom}(\mathbb{K}^{d_{i}},\mathbb{K}^{d_{\infty}})\oplus\bigoplus_{j\in V_{Q}}\ {}_{\infty}\Sigma_{j}^{1}\otimes\mathrm{Hom}(\mathbb{K}^{d_{\infty}},\mathbb{K}^{d_{j}})\] as well as the closed subvariety \(Z_{\mathbf{d}}^{\mathrm{f}}(Q_{M},R_{M})\subset X_{\mathbf{d}}^{\mathrm{f}}(Q_{M})\), defined by \[Z_{\mathbf{d}}^{\mathrm{f}}(Q_{M},R_{M})=Z_{\mathbf{d}}(Q_{M},R_{M})\times_{Z_{d_{\infty}}(Q_{\infty},I_{\infty})}\{\mathrm{f}\}\.\] Then the moduli stacks of \(\mathcal{F}_{\mathrm{f}}\)-framed and \(\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}\)-framed representations of \(Q_{M}\) of dimension \(\mathbf{d}\) are given by \[\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}}(Q_{M},R_{M})=\left[Z_{\mathbf{d}}^{\mathrm{f}}(Q_{M},R_{M})/(G_{\mathbf{d}}(Q_{Y})\times G_{\mathrm{f}})\right]\qquad\text{and}\qquad\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}}(Q_{M},R_{M})=\left[Z_{\mathbf{d}}^{\mathrm{f}}(Q_{M},R_{M})/(G_{\mathbf{d}}(Q_{Y})\times G_{\mathrm{f}}^{\mathrm{c}})\right]\,\] respectively. 
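Note that since \(\mathrm{Gl}_{d_{\infty}}(\mathbb{K})\subset\mathrm{Gl}_{d_{\infty}}(S_{\infty})\) we have \(G_{\mathrm{f}}^{\mathrm{c}}\subset G_{\mathrm{f}}\), so that the explicit descriptions above induce a canonical map \[\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}}(Q_{M},R_{M})\to\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}}(Q_{M},R_{M})\] given by passing to the quotient by the larger group. 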
Finally, we define the stack of f-framed representations of \(Q_{M}\) by \[\mathfrak{M}^{\mathrm{f}}(Q_{M},R_{M})=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\mathfrak{M}_{\mathbf{d}}^{\mathrm{f}}(Q_{M},R_{M})\qquad\text{where}\qquad\mathfrak{M}_{\mathbf{d}}^{\mathrm{f}}(Q_{M},R_{M})=\mathfrak{M}_{(\mathbf{d},d_{\infty})}(Q_{M},R_{M})\times_{\mathfrak{M}_{d_{\infty}}(Q_{\infty},I_{\infty})}\{\mathrm{f}\}\] and the analogous concrete description is simply given by \[\mathfrak{M}_{\mathbf{d}}^{\mathrm{f}}(Q_{M},R_{M})=\left[Z_{\mathbf{d}}^{\mathrm{f}}(Q_{M},R_{M})/G_{\mathbf{d}}(Q_{Y})\right]\.\] We can also define the corresponding equivalent geometric moduli stacks \[\mathfrak{M}^{\mathrm{f}}(Y,M)=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\mathfrak{M}_{\mathbf{d}}^{\mathrm{f}}(Y,M)\qquad\text{where}\quad\mathfrak{M}_{\mathbf{d}}^{\mathrm{f}}(Y,M)=\mathfrak{M}_{(\mathbf{d},d_{\infty})}(Y,M)\times_{\mathfrak{M}_{\mathrm{Filt}(M),d_{\infty}}}\{\mathrm{f}\}\, \tag{4.34}\]
\[\mathfrak{M}^{\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}}(Y,M)=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}}(Y,M)\quad\text{where}\quad\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}}(Y,M)=\mathfrak{M}_{(\mathbf{d},d_{\infty})}(Y,M)\times_{\mathfrak{M}_{\mathrm{Filt}(M),d_{\infty}}}\mathcal{F}_{\mathrm{f}}^{\mathrm{c}}\, \tag{4.35}\]
\[\mathfrak{M}^{\mathcal{F}_{\mathrm{f}}}(Y,M)=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}}(Y,M)\quad\text{where}\quad\mathfrak{M}_{\mathbf{d}}^{\mathcal{F}_{\mathrm{f}}}(Y,M)=\mathfrak{M}_{(\mathbf{d},d_{\infty})}(Y,M)\times_{\mathfrak{M}_{\mathrm{Filt}(M),d_{\infty}}}\mathcal{F}_{\mathrm{f}}\. \tag{4.36}\]
We can now give the framed analogue of Theorem 4.28, which establishes a special case of Conjecture 10.3.5 a) in [11]: _Theorem 4.38_.: Let \(M\in\operatorname{PervCoh}^{\operatorname{p}}(Y)^{T}\) be as in Theorem 4.28 and let \(\operatorname{f}\) be a framing structure for \(M\). There is a canonical framed quiver with potential \((Q_{M}^{\operatorname{f}},W_{M}^{\operatorname{f}})\) and an equivalence of algebraic stacks \[\mathfrak{M}(Q_{M}^{\operatorname{f}},W_{M}^{\operatorname{f}})\xrightarrow{\cong}\mathfrak{M}^{\operatorname{f}}(Y,M)\, \tag{4.37}\] where the induced equivalence of groupoids of \(\mathbb{K}\) points is defined on objects by \[(V_{i},B_{ij})\mapsto\left(\tilde{H}:=\bigoplus_{i\in I_{M}}K(I_{i})\otimes_{S_{i}}V_{i}\,\ d_{B}:=\sum_{k\in\mathbb{Z},\ i,i_{2},...,i_{k-1},j\in I_{M}}K(\rho_{k}^{\Sigma_{\oplus}\Sigma_{\infty}}(.,b_{i,i_{2}},...,b_{i_{k-1},j})^{N})\otimes(B_{i,i_{2}}...B_{i_{k-1},j})\right)\,\] in the notation of Section 4.4. Proof.: By Theorem 4.28 and Corollary 4.32, the desired result is equivalent to an equivalence of algebraic \(\mathbb{K}\)-stacks \(\mathfrak{M}(Q_{M}^{\operatorname{f}},W_{M}^{\operatorname{f}})\xrightarrow{\cong}\mathfrak{M}^{\operatorname{f}}(Q_{M},R_{M})\) which we now construct. 
The underlying quiver \(Q_{M}^{\operatorname{f}}\) is essentially the same as the extended quiver \(Q_{M}\), but with the extended node corresponding to the object \(M\) considered as a framing node, and the edges from the extended node to itself excluded from the edge set; more formally, we define the set of internal vertices by \(V_{Q_{M}^{\operatorname{f}}}=V_{Q_{Y}}\), and define the \(\mathbb{K}\)-linear span of the edge set to be \[\mathbb{K}\langle E_{Q_{M}^{\operatorname{f}}}\rangle:=\bigoplus_{i,j\in V_{Q}}\ {}_{i}\Sigma_{j}^{\vee}[-1]\oplus\bigoplus_{i\in V_{Q}}\ {}_{i}\Sigma_{\infty}^{\vee}[-1]\oplus\bigoplus_{j\in V_{Q}}\ {}_{\infty}\Sigma_{j}^{\vee}[-1]\.\] Note that we evidently have an identification of vector spaces \[X_{\mathbf{d}}(Q_{M}^{\operatorname{f}}):=\operatorname{Hom}_{S\text{-BiMod}}(\mathbb{K}\langle E_{Q_{M}^{\operatorname{f}}}\rangle,\underline{\operatorname{End}}(\mathbb{K}^{\mathbf{d}}))\cong X_{\mathbf{d}}^{\operatorname{f}}(Q_{M})\.\] Thus, it remains to check that the closed subvariety \(Z_{\mathbf{d}}^{\operatorname{f}}(Q_{M},R_{M})\subset X_{\mathbf{d}}^{\operatorname{f}}(Q_{M})\) is given by the critical locus of a potential. In fact, the potential is given by a slight variant of the formula given in Example 2.35, as follows: we fix a decomposition \[\operatorname{f}=\sum_{\alpha}f_{\alpha}\otimes F_{\alpha}\ \in\ _{\infty}\Sigma_{\infty}^{1}\otimes\operatorname{End}(\mathbb{K}^{d_{\infty}})\] in terms of which the potential (which is evidently independent of this choice of decomposition) is given by \[W_{M}^{\operatorname{f}}=\sum_{n\geq 1}\sum_{a_{1},...,a_{n+1}\in E_{Q_{M}^{\operatorname{f}}}\cup\{f_{\alpha}\}}\langle m_{n}^{\Sigma_{M}}(a_{1}^{\vee},...,a_{n}^{\vee}),a_{n+1}^{\vee}\rangle a_{1}\cdot...\cdot a_{n+1}\,\] where we note that \(E_{Q_{M}^{\operatorname{f}}}\cup\{f_{\alpha}\}\subset E_{Q_{M}}\), so that their duals define elements on which the \(A_{\infty}\) multiplication \(m_{n}^{\Sigma_{M}}:\Sigma_{M}^{\otimes n}\to\Sigma_{M}[2-n]\) can be evaluated. We then interpret the variables \(a_{i}\) corresponding to edges \(a_{i}\in E_{Q_{M}^{\operatorname{f}}}\subset E_{Q_{M}}\) as the usual defining variables in the potential, while the variables \(a_{i}=f_{\alpha}\in\ _{\infty}\Sigma_{\infty}^{1}\) are defined to evaluate to the fixed linear maps \(F_{\alpha}\) when defining the induced potential \(W_{M,\mathbf{d}}^{\operatorname{f}}:X_{\mathbf{d}}^{\operatorname{f}}(Q_{M})\to\mathbb{K}\) for any \(\mathbf{d}\in\mathbb{N}^{V_{Q}}\). In analogy with Example 2.35, the critical locus equations induce the Maurer-Cartan equations for \(\Sigma_{M}\), with the modification of the variables corresponding to \(\{f_{\alpha}\}\) inducing the specialization at \(\operatorname{f}\in Z_{d_{\infty}}(Q_{\infty},I_{\infty})\), so that we have \[Z_{\mathbf{d}}^{\operatorname{f}}(Q_{M},R_{M})=\operatorname{crit}(W_{M,\mathbf{d}}^{\operatorname{f}})\subset X_{\mathbf{d}}^{\operatorname{f}}(Q_{M})\,\] as desired. Finally, we give a brief explanation of the geometric interpretation of these stacks. By Corollary 4.36, we have the following modular descriptions of the spaces of \(\mathbb{K}\) points: * For \(\mathfrak{M}^{\mathsf{f}}(Y,M)\), a perverse coherent extension equipped with an isomorphism from the underlying iterated extension of \(M\) to that determined by \(\mathsf{f}\). 
* For \(\mathfrak{M}^{\mathcal{F}_{\mathsf{f}}^{\mathsf{c}}}_{\mathbf{d}}(Y,M)\), a perverse coherent extension equipped with an isomorphism from the underlying iterated extension of \(M\) to a complex of sheaves of the form \((K(I_{\infty})\otimes_{\mathbb{K}}W,\ d)\) where \(W\) is a \(\mathbb{K}\) vector space of dimension \(d_{\infty}\) and \(d\) an auxiliary differential which is conjugate under \(\mathrm{Gl}(W)\) to the differential determined by \(\mathsf{f}\). * For \(\mathfrak{M}^{\mathcal{F}_{\mathsf{f}}}_{\mathbf{d}}(Y,M)\), a perverse coherent extension such that the underlying iterated extension of \(M\) is isomorphic to that determined by \(\mathsf{f}\). We now explain the distinction between these variants in the simplest examples, though we note that several more involved examples are given in Section 4.7 below. We fix the dimension vector to be \(\mathbf{d}=\mathbf{0}\in\mathbb{N}^{V_{Q}}\), so that by Corollary 4.32, we have \[\mathfrak{M}_{(\mathbf{0},d_{\infty})}(Y,M)\cong\mathfrak{M}_{\mathrm{Filt}(M),d_{\infty}}\,\] that is, the (unframed) moduli of perverse coherent extensions is given by the moduli stack of objects in the full subcategory on objects admitting a filtration with subquotients isomorphic to \(M\) of total multiplicity \(d_{\infty}\). In turn, this implies that the framed moduli spaces are simply given by \[\mathfrak{M}^{\mathsf{f}}_{\mathbf{0}}(Y,M)\cong\{\mathsf{f}\}\,\qquad\mathfrak{M}^{\mathcal{F}_{\mathsf{f}}^{\mathsf{c}}}_{\mathbf{0}}(Y,M)\cong\mathcal{F}_{\mathsf{f}}^{\mathsf{c}}\,\ \text{and}\qquad\mathfrak{M}^{\mathcal{F}_{\mathsf{f}}}_{\mathbf{0}}(Y,M)\cong\mathcal{F}_{\mathsf{f}}\.\] We now explain these isomorphisms explicitly in some special cases: _Example 4.39_.: Let the framing object be \(M=\mathcal{O}_{\hat{Y}}\) the structure sheaf of \(\hat{Y}\). Since the structure sheaf does not admit any non-trivial extensions with itself, the only possible framing structure is given by \(\mathsf{f}=0\), and \(\mathfrak{M}_{(\mathbf{0},d_{\infty})}(Y,M)\) is equivalent to the moduli stack of objects which are isomorphic to \(\mathcal{O}^{\oplus d_{\infty}}_{\hat{Y}}\). Thus, in this case, we have that \(\mathfrak{M}^{\mathcal{F}_{\mathsf{f}}}_{(\mathbf{0},d_{\infty})}(Y,M)\cong\mathfrak{M}_{(\mathbf{0},d_{\infty})}(Y,M)\) is given by the moduli stack of sheaves which are isomorphic to \(\mathcal{O}^{\oplus d_{\infty}}_{\hat{Y}}\), which is evidently equivalent to \(\mathcal{F}_{\mathsf{f}}=\big{[}\mathrm{pt}/\mathrm{Gl}_{d_{\infty}}(\mathcal{O}_{\hat{Y}})\big{]}\). Similarly, we have that \(\mathfrak{M}^{\mathcal{F}_{\mathsf{f}}^{\mathsf{c}}}_{\mathbf{0}}(Y,M)\) is the moduli stack of sheaves equipped with an isomorphism to \(\mathcal{O}_{\hat{Y}}\otimes_{\mathbb{K}}W\) for \(W\) a \(\mathbb{K}\) vector space of dimension \(d_{\infty}\), which is evidently equivalent to \[\mathcal{F}_{\mathsf{f}}^{\mathsf{c}}=[\mathrm{pt}/\mathrm{Gl}_{d_{\infty}}(\mathbb{K})]\ \.\] Finally, we have that \(\mathfrak{M}^{\mathsf{f}}_{\mathbf{0}}(Y,M)\) is the moduli stack of sheaves equipped with an isomorphism to \(\mathcal{O}^{\oplus d_{\infty}}_{\hat{Y}}\), which is evidently equivalent to just the single point \(\{\mathcal{O}^{\oplus d_{\infty}}_{\hat{Y}}\}\) corresponding to \(\mathsf{f}=0\).

### Examples

In this section, we explain several examples of the construction of Theorem 4.38 and their relationships with previously known constructions of moduli spaces of coherent sheaves in terms of quivers. 
_Example 4.40_.: The most basic family of examples of perverse coherent extensions is the case when the extending object is given by \(M=\mathcal{O}_{Y}[1]\), the structure sheaf of the threefold shifted down in cohomological degree by one. As we explain in Section 6.1 below, in this case the definition of a perverse coherent extension is essentially equivalent to that of a perverse coherent system in the sense of Nagao-Nakajima [10], and the resulting quivers generalize those studied in _loc. cit._ in the case of the resolved conifold following [11]. Indeed, for \(M=\mathcal{O}_{Y}[1]\), we have \[{}_{\infty}\Sigma^{1}_{\infty}=0\qquad{}_{\infty}\Sigma^{1}_{j}=0\qquad{}_{i}\Sigma^{1}_{\infty}=\begin{cases}\mathbb{K}&\text{if $i=0$}\\ \{0\}&\text{otherwise}\end{cases}\] independent of the threefold \(Y\). Thus, the extended quivers are given by adding a single node and a single arrow from that node to the zeroth node of the quiver \(Q_{Y}\). In the cases of Examples 3.34, 3.35, and 3.36, the extended quivers \(Q_{M}\) are thus given by respectively. Since \({}_{\infty}\Sigma^{1}_{\infty}=0\), for each \(d_{\infty}\in\mathbb{N}\) there is only a single framing structure \(\mathrm{f}=0\in Z_{d_{\infty}}(Q_{\infty},I_{\infty})=\{0\}\), and the corresponding framed quivers with potential \((Q_{M}^{\mathrm{f}},W_{M}^{\mathrm{f}})\) are given by (4.38) together with the same potentials as in Examples 3.34, 3.35, and 3.36. The \(\Sigma\)-module \(I_{\infty}=S_{0}\) is given by a single copy of the simple module \(S_{0}\) in degree \(1\), and thus the corresponding projective resolution is the trivial one \(K(I_{\infty})=\mathcal{O}_{Y}[1]=M\). The representative \(i:K(I_{\infty})\to K(I_{0})\) of \({}_{0}\Sigma^{1}_{\infty}=\mathbb{K}\) is induced by the constant map \(\mathcal{O}\to\mathcal{O}\); for example, in the case of the resolved conifold, following Example 3.36, it is given by Thus, the monad presentation for perverse coherent systems induced by Proposition 4.26 is given by modifying that for compactly supported perverse coherent sheaves by a single summand of the form \(\mathcal{O}\otimes V_{\infty}\) in cohomological degree \(-1\) and a single additional auxiliary differential from degree \(-1\) to degree \(0\) given by \(I\); for example, in the case of the resolved conifold, following Example 3.36, it is given by \[\begin{pmatrix}1&-B\\ z&-D\\ -A&x\\ -C&y\end{pmatrix}\qquad\begin{pmatrix}zy-DC&CB-y&0&zB-D\\ DA-zx&x-BA&D-zB&0\\ 0&yA-xC&yz-CD&AD-xz\\ xC-yA&0&CB-y&x-AB\\ 0&0&0&0\end{pmatrix}\quad\text{.}\] \[\begin{matrix}\mathcal{O}\otimes V_{0}&&\mathcal{O}(1)^{2}\otimes V_{0}&&\mathcal{O}(1)^{2}\otimes V_{0}&&\mathcal{O}\otimes V_{0}\\ \oplus&\longrightarrow&\oplus&\longrightarrow&\oplus\\ \mathcal{O}(1)\otimes V_{1}&&\mathcal{O}^{2}\otimes V_{1}&&\mathcal{O}^{2}\otimes V_{1}&&\mathcal{O}(1)\otimes V_{1}\\ &&\oplus&\\ &&\mathcal{O}\otimes V_{\infty}\end{matrix}\quad\text{.} \tag{4.39}\] 
Then we have \[{}_{\infty}\Sigma^{1}_{0}=\mathbb{K}\qquad\ {}_{0}\Sigma^{1}_{\infty}=\mathbb{K}\qquad{}_{\infty}\Sigma^{1}_{\infty}=S_{\infty}\] so that the extended quiver \(Q_{M}\) is given by The equivalence classes of constant framing structures f on \(M\) of rank \(d_{\infty}\) are given by conjugacy classes of matrices \(A_{\text{f}}\in\mathfrak{gl}_{d_{\infty}}\), and the corresponding framed quiver with potential is given by \[Q^{\text{f}}_{M}= \tag{4.40}\] In particular, taking \(A_{\text{f}}=0\), we recover the standard quiver with potential giving the 'tripled' or 'three dimensional' variant of the ADHM construction, which corresponds under dimensional reduction to the usual ADHM quiver. More generally, \((Q^{\text{f}}_{M},W^{\text{f}}_{M})\) gives the quiver with potential studied in [13]. These relationships are discussed further in Section 3.2 below. The \(\Sigma\) module \(I_{\infty}\) corresponding to \(\mathcal{O}_{\mathbb{C}^{2}}[1]\) is given by \[I_{\infty}=[S_{0}\langle 1\rangle<S_{0}\langle 2\rangle]\qquad\text{so that}\qquad K(I_{\infty})=\left[\mathcal{O}[2]\stackrel{{ z}}{{\to}}\mathcal{O}[1]\right]\stackrel{{\cong}}{{\to}}\mathcal{O}_{\mathbb{C}^{2}}[1]\,\] and we can choose representatives of the auxiliary elements of \(\Sigma^{1}\) according to so that the monad presentation induced by Proposition 4.26 is given by \[\begin{pmatrix}B_{1}-x\\ y-B_{2}\\ B_{3}-z\\ -J\end{pmatrix}\qquad\begin{pmatrix}0&B_{3}-z&B_{2}-y&0\\ B_{3}-z&0&x-B_{1}&0\\ y-B_{2}&x-B_{1}&0&I\\ 0&0&J&A_{\rm f}\end{pmatrix}\qquad\begin{pmatrix}x-B_{1}&y-B_{2}&z-B_{3}&I\end{pmatrix}\qquad.\] \[\mathcal{O}\otimes V\ \longrightarrow\ \begin{matrix}\mathcal{O}^{3}\otimes V\\ \oplus\\ \mathcal{O}\otimes V_{\infty}\end{matrix}\qquad\longrightarrow\qquad\begin{matrix}\mathcal{O}^{3}\otimes V\\ \oplus\\ \mathcal{O}\otimes V_{\infty}\end{matrix}\qquad\longrightarrow\qquad\mathcal{O}\otimes V \tag{4.41}\] _Example 4.42_.: Generalizing the previous example, let \(Y=X=\mathbb{C}^{3}=\operatorname{Spec}\,\mathbb{C}[x,y,z]\), with \(M=(\mathcal{O}_{\mathbb{C}^{2}_{xy}}\oplus\mathcal{O}_{\mathbb{C}^{2}_{xz}}\oplus\mathcal{O}_{\mathbb{C}^{2}_{yz}})[1]\), and consider the simplest framing structure \({\rm f}=0\). Then the framed quiver with potential \((Q^{\rm f}_{M},W^{\rm f}_{M})\) is given by \[Q^{\rm f}_{M}=\] the quiver introduced in [11] to describe the moduli space of spiked instantons. Thus, the moduli stack of perverse coherent extensions provides a model for this moduli space defined purely in terms of algebraic geometry. There is also an interesting space of framing structures in this family of examples. With \(V^{1}_{\infty}=\{0\}\) for simplicity, a framing structure \({\rm f}\) is determined by a tuple of linear maps \(A_{\rm f}=(A_{2},A_{3},A_{2}^{3},A_{3}^{2})\) and the corresponding framed quiver with potential is given by (4.42) The monad presentations for these generalizations can be computed analogously. _Example 4.43_.: Let \(Y=|\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}(-2)|\to X=\operatorname{Spec}\mathbb{C}[x_{1},x_{2},x_{3},x_{4}]/(x_{1}x_{2}-x_{3}^{2})\), following Example 3.35, and let \(M=\mathcal{O}_{W}[1]\) for \(W=|\mathcal{O}_{\mathbb{P}^{1}}(-2)|\). 
Then we have \[{}_{0}\Sigma^{1}_{\infty}=\mathbb{K}\qquad{}_{\infty}\Sigma^{1}_{0}=\mathbb{K}\qquad{}_{1}\Sigma^{1}_{\infty}=\{0\}\qquad{}_{\infty}\Sigma^{1}_{1}=\{0\}\qquad{}_{\infty}\Sigma^{1}_{\infty}=S_{\infty}\] so that the extended quiver \(Q_{M}\) is given by Again, a constant framing structure f is given by an endomorphism \(G_{\mathrm{f}}\), and the resulting framed quiver with potential \((Q_{M}^{\mathrm{f}},W_{M}^{\mathrm{f}})\) is given by In particular, for \(G_{\mathrm{f}}=0\) the resulting quiver is the framed and tripled, in the sense of Ginzburg [10], affine type \(\mathrm{A}_{1}\) Dynkin quiver, and corresponds under dimensional reduction to the framed and doubled, in the sense of Nakajima [10], affine type \(\mathrm{A}_{1}\) Dynkin quiver. _Example 4.44_.: Let \(Y=|\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}(-2)|\to X=\operatorname{Spec}\,\mathbb{C}[x_{1},x_{2},x_{3},x_{4}]/(x_{1}x_{2}-x_{3}^{2})\), following Example 3.35, and let \(M=\mathcal{O}_{W}[1]\) for \(W=|\mathcal{O}_{\mathbb{P}^{1}}|\). 
Then we have \[{}_{0}\Sigma_{\infty}^{1}=\mathbb{K}\qquad{}_{\infty}\Sigma_{0}^{1}=\{0\}\qquad{}_{1}\Sigma_{\infty}^{1}=\{0\}\qquad{}_{\infty}\Sigma_{1}^{1}=\mathbb{K}^{2}\qquad{}_{\infty}\Sigma_{\infty}^{1}=\{0\}\] so that the extended quiver \(Q_{M}\) is given by \[Q_{M}=\] The only framing structure is the trivial one \(\mathrm{f}=0\) since \({}_{\infty}\Sigma_{\infty}^{1}=\{0\}\), and the corresponding framed quiver with potential \((Q_{M}^{\mathrm{f}},W_{M}^{\mathrm{f}})\) is given by \[Q_{M}^{\mathrm{f}}=\] This quiver with potential corresponds under dimensional reduction to a framed variant of the product of the standard Beilinson quiver for \(\mathbb{P}^{1}\) with the Jordan quiver, with relations \[\begin{cases}EB-BF+IJ_{2}&=0\\ DF-ED+IJ_{1}&=0\ .\end{cases}\] The \(\Sigma\) module \(I_{\infty}\) corresponding to \(M=\mathcal{O}_{|\mathcal{O}_{\mathbb{P}^{1}}|}[1]\) is given by \[I_{\infty}=\begin{bmatrix}S_{0}\langle 1\rangle<S_{1}^{\oplus 2}\langle 2\rangle<S_{0}\langle 3\rangle\end{bmatrix}\qquad\text{so that}\qquad K(I_{\infty})=\begin{bmatrix}\mathcal{O}[3]\xrightarrow{\begin{pmatrix}-z\\ 1\end{pmatrix}}\mathcal{O}(1)^{2}[2]\xrightarrow{\begin{pmatrix}x&xz\end{pmatrix}}\mathcal{O}[1]\end{bmatrix}\xrightarrow{\cong}\mathcal{O}_{|\mathcal{O}_{\mathbb{P}^{1}}|}[1]\,\] we can choose representatives \(j_{1},j_{2}\in\ _{\infty}\Sigma^{1}_{1}\) of the auxiliary elements of \(\Sigma^{1}\) as maps of the corresponding complexes, and the potential is given by \[W_{M}^{\mathrm{f}}=E(BC-DA)+F(AD-CB)+IJ_{1}A+IJ_{2}C\;.\] _Example 4.45_.: Let \(Y=|\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}(-2)|\to X\) be as in the preceding example, and let \(M=\mathcal{O}_{W}[1]\) for \(W=\mathcal{O}_{\mathbb{P}^{1}}|_{z=0}\oplus\mathcal{O}_{\mathbb{P}^{1}}(-2)|_{z=0}\cong\mathbb{A}^{2}\) the fibre over \(z=0\). For the trivial framing structure \(G_{\rm f}=0\), the corresponding framed quiver with potential is given by This quiver with potential corresponds under dimensional reduction to the principal chainsaw quiver of [11] for \(\rho=\rho_{\rm prin}\) a principal nilpotent, given by The \(\Sigma\) module \(I_{\infty}\) corresponding to \(M=\mathcal{O}_{W}[1]\) is given by \[I_{\infty}=[S_{0}\langle 1\rangle<S_{1}\langle 2\rangle]\qquad\text{so that}\qquad K(I_{\infty})=\Big{[}\mathcal{O}[2]\xrightarrow{z}\mathcal{O}(1)[1]\Big{]}\xrightarrow{\cong}\mathcal{O}_{W}[1]\] and we can choose representatives of the auxiliary elements of \(\Sigma^{1}\) according to so that the monad presentation induced by Proposition 4.26 is given by \[\begin{pmatrix}y-E&0\\ 1&B\\ z&D\\ 0&y-F\\ A&x\\ C&xz\\ J&0\end{pmatrix}\qquad\begin{pmatrix}0&xz&-x&0&D&-B&0\\ -z&0&y-E&-D&0&0&0\\ 1&E-y&0&B&0&0&0\\ 0&C&-A&0&z&-1&0\\ -C&0&0&-xz&0&y-F&0\\ A&0&0&x&F-y&0&I\\ 0&0&J&0&0&0&0\end{pmatrix}\qquad\begin{pmatrix}y-E&xz&0&B&D&0&0\\ 0&A&C&y-F&1&z&I\end{pmatrix}\qquad,\] with middle terms containing the summands \((\mathcal{O}\oplus\mathcal{O}(1)^{2})\otimes V_{0}\), \(\mathcal{O}(1)\otimes V_{1}\), and \(\mathcal{O}(1)\otimes V_{\infty}\). _Example 4.46_.: Let \(Y=|\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}(-2)|\to X=\text{Spec }\mathbb{C}[x_{1},x_{2},x_{3},x_{4}]/(x_{1}x_{2}-x_{3}^{2})\), following Example 3.35, and let \(M=\mathcal{O}_{W_{0}}[1]\oplus\mathcal{O}_{W_{1}}[1]\) for \(W_{0}=\mathcal{O}_{\mathbb{P}^{1}}|_{z=0}\oplus\mathcal{O}_{\mathbb{P}^{1}}(-2)|_{z=0}\cong\mathbb{A}^{2}\) and 
\(W_{1}=|\mathcal{O}_{\mathbb{P}^{1}}|\). Then, generalizing the previous two examples, the extended quiver is given by In particular, consider the framing structure f given by \(G=H=0\) but with \(K_{\text{f}}\) potentially non-trivial. Then the corresponding framed quiver with potential is given by (4.43) This quiver with potential corresponds under dimensional reduction to the quiver In particular, if \(d_{\infty_{1}}\leq d_{\infty_{0}}\) and \(K\) is injective, we conjecture that for an appropriate choice of stability conditions, the third relation gives an equivalence between the space of stable representations of this quiver with relations and that of the quiver (4.44) where \(\tilde{d}_{\infty_{0}}=\dim\tilde{V}_{\infty_{0}}=d_{\infty_{0}}-d_{\infty_{1}}\); the latter quiver with relations is precisely the chainsaw quiver of [10] for \(\rho\) a nilpotent in \(\mathfrak{gl}_{n}\) for \(n=d_{\infty_{1}}+\tilde{d}_{\infty_{0}}\) with Jordan blocks of size \(d_{\infty_{1}}\) and \(\tilde{d}_{\infty_{0}}\), and this is a special case of Conjecture 6.27. The monad presentation in this example is a straightforward generalization of those in the preceding two examples. _Example 4.47_.: Let \(Y=|\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)|\to X=\text{Spec }\mathbb{C}[x_{1},x_{2},x_{3},x_{4}]/(x_{1}x_{2}-x_{3}x_{4})\), following Example 3.36, and let \(M=\mathcal{O}_{|\mathcal{O}_{\mathbb{P}^{1}}(-1)|}[1]\). Then we have \[{}_{0}\Sigma^{1}_{\infty}=\mathbb{K}\qquad{}_{\infty}\Sigma^{1}_{0}=\{0\}\qquad{}_{1}\Sigma^{1}_{\infty}=\{0\}\qquad{}_{\infty}\Sigma^{1}_{1}=\mathbb{K}\qquad{}_{\infty}\Sigma^{1}_{\infty}=\{0\}\] so that the extended quiver \(Q_{M}\) is given by The only framing structure is the trivial one \(\mathrm{f}=0\) since \(\ _{\infty}\Sigma^{1}_{\infty}=\{0\}\), and the corresponding framed quiver with potential \((Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\) is given by \[Q^{\mathrm{f}}_{M}=\] 
The monad presentation induced by Proposition 4.26 is then given by \[\begin{pmatrix}1&-B\\ z&-D\\ -A&x\\ -C&y\\ 0&J\end{pmatrix}\qquad\begin{pmatrix}zy-DC&CB-y&0&zB-D&0\\ DA-zx&x-BA&D-zB&0&I\\ 0&yA-xC&yz-CD&AD-xz&0\\ xC-yA&0&CB-y&x-AB&0\\ 0&0&0&J&0\end{pmatrix}\qquad\begin{pmatrix}x&y&B&D&I\\ A&C&1&z&0\end{pmatrix}\qquad.\] \[\begin{matrix}\mathcal{O}\otimes V_{0}&&\mathcal{O}(1)^{2}\otimes V_{0}&&\mathcal{O}(1)^{2}\otimes V_{0}&&\mathcal{O}\otimes V_{0}\\ \oplus&\longrightarrow&\oplus&\longrightarrow&\oplus&\longrightarrow&\oplus\\ \mathcal{O}(1)\otimes V_{1}&&\mathcal{O}^{2}\otimes V_{1}&&\mathcal{O}^{2}\otimes V_{1}&&\mathcal{O}(1)\otimes V_{1}\\ &&\oplus&&\oplus&&\\ &&\mathcal{O}\otimes V_{\infty}&&\mathcal{O}\otimes V_{\infty}&&\end{matrix}\qquad.\] _Example 4.48_.: Let \(Y=X=\mathbb{C}^{3}\) and let \(M=\mathcal{O}_{\mathbb{C}}[1]\) the shifted structure sheaf of the coordinate subspace \(\mathbb{C}_{x}\subset\mathbb{C}^{3}\). Then we have \[{}_{\infty}\Sigma_{0}^{1}=\mathbb{K}^{2}\qquad\ {}_{0}\Sigma_{\infty}^{1}=\mathbb{K}\qquad\ {}_{\infty}\Sigma_{\infty}^{1}=S_{\infty}^{2}\] so that the extended quiver \(Q_{M}\) is given by In particular, for the trivial framing structure \(\mathrm{f}=0\), the corresponding framed quiver with potential \((Q_{M}^{\mathrm{f}},W_{M}^{\mathrm{f}})\) is given by \[Q_{M}^{\mathrm{f}}=\] This quiver with potential is precisely the one introduced by Davison-Ricolfi [11] in their study of the local DT-PT correspondence. There are also generalizations analogous to those of Example 4.42, with corresponding quivers given by where we have for simplicity only given a representative example, analogous to that of Equation 4.42, and we have omitted the calculation of the potential. 
_Example 4.49_.: We can also consider the case where \[M=\mathcal{O}_{\mathbb{C}^{3}}[1]\oplus\mathcal{O}_{\mathbb{C}_{x}}\oplus\mathcal{O}_{\mathbb{C}_{y}}\oplus\mathcal{O}_{\mathbb{C}_{z}}\,\] using the \(t\)-structure of Example 4.13. It is a straightforward generalization of the above computations to check that the resulting extended quiver is essentially that of [10], which appears to have been introduced to describe the DT topological vertex. In particular, the framing vertex has four components \[V_{\infty}=V_{\infty}^{0}\oplus V_{\infty}^{x}\oplus V_{\infty}^{y}\oplus V_{\infty}^{z}\] corresponding to the four summands of \(M\) above. In this example the moduli stack of framed perverse coherent extensions, rather than parameterizing maps from \(\mathcal{O}_{\mathbb{C}^{3}}\) to an iterated extension of \(\mathcal{O}_{\mathrm{pt}}\), parameterizes maps from \(\mathcal{O}_{\mathbb{C}^{3}}\) to an iterated extension of \(\mathcal{O}_{\mathrm{pt}}\) together with a fixed underlying iterated extension of \(\mathcal{O}_{\mathbb{C}_{x}}\), \(\mathcal{O}_{\mathbb{C}_{y}}\), and \(\mathcal{O}_{\mathbb{C}_{z}}\). Indeed, we conjecture that for appropriate stability conditions and framing structure, the fixed points of the resulting moduli space are indexed by the set of plane partitions with fixed asymptotics \(\alpha,\beta,\gamma\), where each of the partitions is determined by the framing structure at the framing vertex \(i\), given by the pair \(A_{1}^{i},A_{2}^{i}\) of fixed endomorphisms of \(V_{\infty}^{i}\), for \(i=x,y,z\) respectively. As we will see, this is closely related to their occurrence in the construction of [11]. _Example 4.50_.: Similarly, we can also consider the analogous computation in the setting of Example 4.41, taking \[M=\mathcal{O}_{\mathbb{C}_{xy}^{2}}[1]\oplus\mathcal{O}_{\mathbb{C}_{xz}^{2}}[1]\oplus\mathcal{O}_{\mathbb{C}_{yz}^{2}}[1]\oplus\mathcal{O}_{\mathbb{C}_{x}}\oplus\mathcal{O}_{\mathbb{C}_{y}}\oplus\mathcal{O}_{\mathbb{C}_{z}}\,\] and again using the \(t\)-structure of Example 4.13. In particular, the framing vertex has six components \[V_{\infty}=V_{\infty}^{xy}\oplus V_{\infty}^{xz}\oplus V_{\infty}^{yz}\oplus V_{\infty}^{x}\oplus V_{\infty}^{y}\oplus V_{\infty}^{z}\] corresponding to the summands of \(M\) above. This gives rise to the following apparently new framed quiver with potential (4.46) which we have drawn in the case \(V_{\infty}^{xz}=V_{\infty}^{yz}=V_{\infty}^{z}=0\), for simplicity. Again, in this example the moduli stack of framed perverse coherent extensions parameterizes, rather than just maps from a fixed iterated extension of \(\mathcal{O}_{\mathbb{C}^{2}}\) to an iterated extension of \(\mathcal{O}_{\mathrm{pt}}\), maps from the same source to an iterated extension of \(\mathcal{O}_{\mathrm{pt}}\) together with a fixed underlying iterated extension of \(\mathcal{O}_{\mathbb{C}_{x}}\), \(\mathcal{O}_{\mathbb{C}_{y}}\), and \(\mathcal{O}_{\mathbb{C}_{z}}\), determined by the framing structure. 
Indeed, generalizing the previous example, we conjecture that for appropriate stability conditions, the fixed points of the resulting moduli space are indexed by the set of plane partitions with fixed asymptotics \(\alpha,\beta,\gamma\) and a pit at location \((m,n)\), in the sense of [1]. Here the partitions \(\alpha,\beta,\gamma\) are those of length \(\dim V_{\infty}^{i}\) determined by the pair \(A_{1}^{i},A_{2}^{i}\) of endomorphisms of \(V_{\infty}^{i}\) given by the framing structure at the framing vertex \(i\), for \(i=x,y,z\) respectively, while \(\dim V_{\infty}^{xz}=m\), \(\dim V_{\infty}^{yz}=n\), and \(V_{\infty}^{xy}=0\), and \(A^{xz}\) and \(A^{yz}\) are given by principal nilpotents in \(\mathfrak{gl}_{m}\) and \(\mathfrak{gl}_{n}\), respectively. For \(n=0\), these appear to be the cohomological variants of the modules constructed in [11].

## 5. Representations of cohomological Hall algebras from perverse coherent extensions

### Overview of Section 5

In Section 5, we recall the construction of the Kontsevich-Soibelman cohomological Hall algebra from [13], and prove Theorem B from the introduction, following the approach of [14]. In Section 5.2 we recall some necessary facts about sheaves of vanishing cycles, and in Section 5.3 we recall the construction of the cohomological Hall algebra. In Section 5.4 we prove Theorem B, and in Section 5.5 we explain the interpretation of the vector spaces underlying these representations as cohomological enumerative invariants. In Section 5.6, we outline an extension of the construction of the representation of Theorem B to a representation of a larger algebra which appears to agree with the shifted quiver Yangian of [10].

### Vanishing cycles sheaves and cohomology

In this section, we review the formalism of nearby and vanishing cycles, and recall some of their key properties that we use in what follows. Let \(X\) be a smooth variety, \(W:X\to\mathbb{A}^{1}\) a regular function, and let \(X_{0}=X\times_{\mathbb{A}^{1}}\{0\}\), \(U=X\times_{\mathbb{A}^{1}}\mathbb{G}_{m}\), \(\tilde{\mathbb{G}}_{m}\) the universal cover of \(\mathbb{G}_{m}\), and \(\tilde{U}=U\times_{\mathbb{G}_{m}}\tilde{\mathbb{G}}_{m}\), so that the diagram of Equation 5.1 is commutative, with all squares Cartesian. The nearby cycles functor is defined as \[\psi_{W}:=\iota_{0}^{*}j_{*}\tilde{\pi}_{*}\tilde{\pi}^{*}:\mathrm{D}_{c}(U)\rightarrow\mathrm{D}_{c}(X_{0})\.\] Note that the unit of the \((\pi^{*},\pi_{*})\) adjunction for \(\pi=j\circ\tilde{\pi}:\tilde{U}\to X\) defines a natural transformation \[\iota^{*}A\rightarrow\psi_{W}j^{*}A\,\] the cone on which extends to define the vanishing cycles functor \[\phi_{W,0}:\mathrm{D}_{c}(X)\rightarrow\mathrm{D}_{c}(X_{0})\qquad\qquad A\mapsto\phi_{W,0}(A)\cong\mathrm{Cone}\left[\iota^{*}A\rightarrow\psi_{W}j^{*}A\right]\.\] In fact, it is convenient to consider the cohomological shifts \(\psi_{W}^{p}=\psi_{W}[-1],\phi_{W,0}^{p}=\phi_{W,0}[-1]\), as these both preserve the hearts of the respective perverse t-structures, that is, they restrict to functors \[\psi_{W}^{p,0}:\mathrm{Perv}(U)\rightarrow\mathrm{Perv}(X_{0})\qquad\text{and}\qquad\phi_{W,0}^{p,0}:\mathrm{Perv}(X)\rightarrow\mathrm{Perv}(X_{0})\.\] Let \(Z=\mathrm{Crit}(W):=\Gamma_{dW}\times_{T^{\vee}X}X\to X\) denote the (classical) critical locus of \(W\), that is, the (classical) vanishing locus of \(dW\in\Omega^{1}(X)\). 
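For instance, for \(X=\mathbb{A}^{n}\) and \(W=\sum_{i=1}^{n}x_{i}^{2}\) a nondegenerate quadratic function, we have \(Z=\{0\}\), and the standard Milnor fibre computation for an ordinary double point gives \[\phi_{W,0}^{p}(\underline{\mathbb{Q}}_{X}[n])\cong\underline{\mathbb{Q}}_{0}\,\] the skyscraper sheaf at the origin; this basic example illustrates the general fact, recalled below, that the vanishing cycle sheaves are supported on the critical locus. 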
Note \(W\) is constant on each of the finitely many irreducible components of \(Z\), so that \(W(Z)\subset\mathbb{A}^{1}\) is a finite set of points, and we have \[Z=\bigsqcup_{c\in W(Z)}Z_{c}\qquad\text{where}\qquad Z_{c}=Z\times_{\mathbb{A}^{1}}\{c\}\] so that we have the following commutative diagram, with all squares Cartesian: For each \(c\in W(Z)\subset\mathbb{A}^{1}\), we consider the corresponding shifted vanishing cycles functor for the function \(W-c\), denoted \(\phi_{W,c}^{p}:=\phi_{W-c,0}^{p}:\mathrm{D}(X)\rightarrow\mathrm{D}(X_{c})\), and define \[\phi_{W,c}^{p,0}:\mathrm{Perv}(X)\rightarrow\mathrm{Perv}(X_{c})\qquad\text{and}\qquad\varphi_{W,c}=\phi_{W,c}^{p,0}(\underline{\mathbb{Q}}_{X}[\dim X])\ \in\mathrm{Perv}(X_{c})\.\] In fact, the support of \(\varphi_{W,c}\) is contained in \(Z_{c}\subset X_{c}\), and thus we also define the internal variant of the \(c\) component of the vanishing cycle sheaf \[\tilde{\varphi}_{W,c}=i_{c}^{*}\varphi_{W,c}\in\mathrm{Perv}(Z_{c})\] as the image of \(\varphi_{W,c}\) under the equivalence \(\mathrm{Perv}(X_{c})_{Z_{c}}\cong\mathrm{Perv}(Z_{c})\). Finally, we define the total intrinsic variant of the vanishing cycle sheaf \[\tilde{\varphi}_{W}:=\bigoplus_{c\in W(Z)}\tilde{\varphi}_{W,c}\quad\in\quad\mathrm{Perv}(Z)=\bigoplus_{c\in W(Z)}\mathrm{Perv}(Z_{c})\,\] and similarly its extrinsic analogue, which will be the primary object of interest \[\varphi_{W}:=\iota_{*}\tilde{\varphi}_{W}\ \in\mathrm{Perv}(X)\.\] More generally, we define the total extrinsic vanishing cycles functor \[\phi_{W}=\bigoplus_{c\in W(Z)}\iota_{c*}\circ\phi_{W,c}^{p}:\mathrm{D}(X)\rightarrow\mathrm{D}(X)\,\] which in particular reproduces the preceding object as \(\varphi_{W}=\phi_{W}(\underline{\mathbb{Q}}_{X}[\dim X])\). We define the compactly supported vanishing cycles cohomology of \(X\), and its perverse shifted analogue, by \[H^{\bullet}_{c}(X,\phi_{W}\underline{\mathbb{Q}}_{X})=p_{!}\phi_{W}p^{*}\mathbb{Q}\qquad\qquad H^{\bullet}_{c}(X,\varphi_{W})=p_{!}\phi_{W}p^{*}\mathbb{Q}[\dim X]\,\] where \(p:X\to\mathrm{pt}\). It satisfies the following functoriality properties: _Proposition 5.1_.: Let \(W:Y\to\mathbb{A}^{1}\) be a regular function and \(f:X\to Y\) a proper map of smooth varieties. Then \(f\) induces a pullback map \[f^{*}:H^{\bullet}_{c}(Y,\phi_{W}\underline{\mathbb{Q}}_{Y})\to H^{\bullet}_{c}(X,\phi_{W\circ f}\underline{\mathbb{Q}}_{X}). \tag{5.2}\] Similarly, a smooth map \(f:X\to Y\) of smooth varieties induces a pushforward map \[f_{*}:H^{\bullet}_{c}(X,\phi_{W\circ f}\underline{\mathbb{Q}}_{X})\to H^{\bullet}_{c}(Y,\phi_{W}\underline{\mathbb{Q}}_{Y})[-2d]\, \tag{5.3}\] where \(d=\dim X-\dim Y\) denotes the relative dimension of \(f\). Proof.: Recall that for any map of smooth varieties, there is a natural transformation \[\phi_{W}\to f_{*}\phi_{W\circ f}f^{*}\ ;\] see for example Equation 20 in [10]. This induces the desired map as \[p_{Y!}\phi_{W}p^{*}_{Y}\mathbb{Q}\to p_{Y!}f_{*}\phi_{W\circ f}f^{*}p^{*}_{Y}\mathbb{Q}\cong p_{X!}\phi_{W\circ f}p^{*}_{X}\mathbb{Q}\,\] where we have used properness of \(f\) to identify \(f_{*}=f_{!}\). Similarly, for \(f\) a smooth map of smooth varieties, the induced natural transformation \[f^{*}\phi_{W}\xrightarrow{\cong}\phi_{W\circ f}f^{*}\] is an isomorphism; see for example Equation 23 in _loc. cit._. 
This induces the desired map as \[p_{X!}\phi_{W\circ f}p^{*}_{X}\mathbb{Q}\cong p_{Y!}f_{!}\phi_{W\circ f}f^{*}p^{*}_{Y}\mathbb{Q}\xrightarrow{\cong}p_{Y!}f_{!}f^{*}\phi_{W}p^{*}_{Y}\mathbb{Q}\to p_{Y!}\phi_{W}p^{*}_{Y}\mathbb{Q}[-2d]\] where we have used the counit \(f_{!}f^{!}\to\mathrm{id}\) of the \((f_{!},f^{!})\) adjunction, as well as the identification \(f^{*}=f^{!}[-2d]\) which follows from smoothness of \(f\). In particular, we also obtain: _Corollary 5.2_.: For \(f:X\to Y\) an affine fibration of relative dimension \(d\), the induced pushforward map of Equation 5.3 is an isomorphism, so that there is an inverse isomorphism \[f_{*}^{-1}:H^{\bullet}_{c}(Y,\phi_{W}\underline{\mathbb{Q}}_{Y})\xrightarrow{\cong}H^{\bullet}_{c}(X,\phi_{W\circ f}\underline{\mathbb{Q}}_{X})[2d]. \tag{5.4}\] Proof.: If \(f\) is an affine fibration, the counit of the \((f_{!},f^{!})\) adjunction is an isomorphism, and thus by the proof of the preceding proposition the induced map is an isomorphism.

### Cohomological Hall algebras of quivers with potential and Calabi-Yau threefolds

Throughout this section, let \((Q,W)\) be a quiver with potential, and recall from Section 3.6 the moduli stacks of representations of the underlying quiver \[\mathfrak{M}_{\mathbf{d}}(Q)=[X_{\mathbf{d}}(Q)/G_{\mathbf{d}}(Q)]\qquad\text{where}\qquad X_{\mathbf{d}}(Q)=\prod_{e\in E_{Q}}\mathrm{Hom}(\mathbb{K}^{d_{s(e)}},\mathbb{K}^{d_{t(e)}})\qquad G_{\mathbf{d}}(Q)=\prod_{i\in V_{Q}}\mathrm{Gl}_{d_{i}}(\mathbb{K})\] for each dimension vector \(\mathbf{d}\in\mathbb{N}^{V_{Q}}\); for simplicity, we will often omit \(Q\) from the notation when there is only a single quiver under consideration, as in the remainder of this subsection. Given \(\mathbf{a},\mathbf{b}\in\mathbb{N}^{V_{Q}}\), we define \[\mathfrak{M}_{\mathbf{a},\mathbf{b}}=[X_{\mathbf{a},\mathbf{b}}/G_{\mathbf{a},\mathbf{b}}]\qquad\text{where}\qquad\begin{cases}X_{\mathbf{a},\mathbf{b}}=\left\{\varphi\in X_{\mathbf{a}+\mathbf{b}}\ \big{|}\ \varphi\big{(}\bigoplus_{e\in E_{Q}}\mathbb{K}^{a_{s(e)}}\big{)}\subset\bigoplus_{e\in E_{Q}}\mathbb{K}^{a_{t(e)}}\right\}\\ G_{\mathbf{a},\mathbf{b}}=\left\{g\in G_{\mathbf{a}+\mathbf{b}}\ \big{|}\ g\big{(}\bigoplus_{i\in V_{Q}}\mathbb{K}^{a_{i}}\big{)}\subset\bigoplus_{i\in V_{Q}}\mathbb{K}^{a_{i}}\right\}\.\end{cases}\] Equivalently, these spaces are defined component-wise as \[X_{\mathbf{a},\mathbf{b}}=\prod_{e\in E_{Q}}\operatorname{Hom}_{a_{s(e)},a_{t(e)}}(\mathbb{K}^{a_{s(e)}+b_{s(e)}},\mathbb{K}^{a_{t(e)}+b_{t(e)}})\qquad\text{and}\qquad G_{\mathbf{a},\mathbf{b}}=\prod_{i\in V_{Q}}\operatorname{Gl}_{a_{i},b_{i}}(\mathbb{K})\] where \(\operatorname{Hom}_{m_{1},m_{2}}(\mathbb{K}^{m_{1}+n_{1}},\mathbb{K}^{m_{2}+n_{2}})\) denotes the space of block upper triangular linear maps, that is, satisfying \(\varphi(\mathbb{K}^{m_{1}})\subset\mathbb{K}^{m_{2}}\), and \(\operatorname{Gl}_{m,n}(\mathbb{K})\subset\operatorname{Gl}_{m+n}(\mathbb{K})\) is the parabolic subgroup preserving the flag \(\{0\}\subset\mathbb{K}^{m}\subset\mathbb{K}^{m+n}\). Geometrically, \(\mathfrak{M}_{\mathbf{a},\mathbf{b}}\) is the moduli stack of short exact sequences of representations of the quiver \(Q\) with subobject and quotient object of dimensions \(\mathbf{a}\) and \(\mathbf{b}\), respectively. 
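For later dimension counts, we record the elementary identities \[\dim\operatorname{Hom}_{m_{1},m_{2}}(\mathbb{K}^{m_{1}+n_{1}},\mathbb{K}^{m_{2}+n_{2}})=m_{2}m_{1}+m_{2}n_{1}+n_{2}n_{1}\qquad\text{and}\qquad\dim\operatorname{Gl}_{m,n}(\mathbb{K})=m^{2}+mn+n^{2}\,\] which follow immediately from the block descriptions above, since a block upper triangular map is determined by its two diagonal blocks together with the off-diagonal block in \(\operatorname{Hom}(\mathbb{K}^{n_{1}},\mathbb{K}^{m_{2}})\); these identities account for the relative dimensions entering the cohomological degree shifts in the construction of the cohomological Hall algebra below. 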
In particular, note that we have a closed embedding \(X_{\mathbf{a},\mathbf{b}}\to X_{\mathbf{a}+\mathbf{b}}\) by definition, as well as a smooth map \(X_{\mathbf{a},\mathbf{b}}\to X_{\mathbf{a}}\times X_{\mathbf{b}}\) given by forgetting the extension data. These maps define a correspondence of algebraic \(\mathbb{K}\)-stacks \[\mathfrak{M}_{\mathbf{a}}\times\mathfrak{M}_{\mathbf{b}}\;\longleftarrow\;\mathfrak{M}_{\mathbf{a},\mathbf{b}}\;\longrightarrow\;\mathfrak{M}_{\mathbf{a}+\mathbf{b}}\,\] inducing an associative product on the Borel-Moore homology of \(\mathfrak{M}\) with coefficients in the sheaf of vanishing cycles determined by \(W\), as we now explain. We recall the definition of the cohomological Hall algebra associated to the quiver with potential \((Q,W)\) following [14]. The underlying graded vector space is given by \[\mathcal{H}(Q,W)=\bigoplus_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\mathcal{H}_{\mathbf{d}}(Q,W)\qquad\mathcal{H}_{\mathbf{d}}(Q,W)=H_{\bullet}^{G_{\mathbf{d}}(Q)}(X_{\mathbf{d}}(Q),\varphi_{W_{\mathbf{d}}})\] where \(H_{\bullet}^{G}(X,\varphi):=H_{G,c}^{\bullet}(X,\varphi)^{\vee}\) denotes the equivariant Borel-Moore homology with coefficients in the sheaf of vanishing cycles, defined as the dual of the compactly supported cohomology with coefficients in the sheaf of vanishing cycles, and we recall that the natural perverse grading shift in the equivariant case is given by \[H_{G,c}^{\bullet}(X,\varphi)=H_{G,c}^{\bullet}(X,\phi_{W}\underline{\mathbb{Q}}_{X})[\dim X-\dim G]\.\] Again, for simplicity, we will often omit \((Q,W)\) from the notation when there is only a single quiver with potential under consideration. The cohomological Hall algebra is not a graded algebra in the usual sense: rather than being an associative algebra object internal to the category \(\operatorname{Vect}_{\mathbb{N}^{V_{Q}}}\) of \(\mathbb{N}^{V_{Q}}\)-graded complexes with its usual monoidal structure, it is an associative algebra in this category with respect to the usual monoidal structure twisted by a cohomological degree shift, defined by \[(\bigoplus_{\mathbf{a}\in\mathbb{N}^{V_{Q}}}V_{\mathbf{a}})\otimes^{\mathrm{tw}}(\bigoplus_{\mathbf{b}\in\mathbb{N}^{V_{Q}}}W_{\mathbf{b}}):=\bigoplus_{\mathbf{d}\in\mathbb{N}^{V_{Q}}}\bigoplus_{\mathbf{a}+\mathbf{b}=\mathbf{d}}V_{\mathbf{a}}\otimes W_{\mathbf{b}}[\chi(\mathbf{b},\mathbf{a})-\chi(\mathbf{a},\mathbf{b})]\,\] as explained in Section 2.7 of [14] or 3.1 of [15], where we have used the bilinear form \(\chi:\mathbb{Z}^{V_{Q}}\times\mathbb{Z}^{V_{Q}}\to\mathbb{Z}\) defined by \[\chi(\mathbf{a},\mathbf{b})=\sum_{i\in V_{Q}}a_{i}b_{i}-\sum_{e\in E_{Q}}a_{s(e)}b_{t(e)}\.\] For instance, for the quiver with a single vertex and a single loop the form \(\chi\) vanishes identically, so that the twist is trivial. In particular, note that \(\chi(\mathbf{a},\mathbf{a})=\dim G_{\mathbf{a}}-\dim X_{\mathbf{a}}\), since \(\dim G_{\mathbf{a}}=\sum_{i\in V_{Q}}a_{i}^{2}\) and \(\dim X_{\mathbf{a}}=\sum_{e\in E_{Q}}a_{s(e)}a_{t(e)}\), and similarly \[\chi(\mathbf{a},\mathbf{b})=\dim G_{\mathbf{a},\mathbf{b}}-\dim G_{\mathbf{a}}\times G_{\mathbf{b}}-\dim X_{\mathbf{a},\mathbf{b}}+\dim X_{\mathbf{a}}\times X_{\mathbf{b}}. \tag{5.5}\] We now recall the construction of the cohomological Hall algebra of a quiver with potential introduced by Kontsevich-Soibelman in [11]. _Theorem 5.3_.: [11] Let \((Q,W)\) be a quiver with potential. The Kontsevich-Soibelman cohomological Hall algebra defines an object \[\mathcal{H}(Q,W)\in\operatorname{Alg}_{\operatorname{Ass}}(\operatorname{Vect}_{\mathbb{N}^{V_{Q}}}^{\otimes\operatorname{tw}})\,\] that is, \(\mathcal{H}(Q,W)\) admits a canonical (twisted \(\mathbb{N}^{V_{Q}}\)-graded) associative algebra structure. 
Proof.: The associative product map \(m:\mathcal{H}\otimes^{\operatorname{tw}}\mathcal{H}\to\mathcal{H}\) on the cohomological Hall algebra \(\mathcal{H}=\mathcal{H}(Q,W)\) is defined in components by \[m_{\mathbf{a},\mathbf{b}}:\mathcal{H}_{\mathbf{a}}\otimes\mathcal{H}_{ \mathbf{b}}\to\mathcal{H}_{\mathbf{a}+\mathbf{b}}[\chi(\mathbf{a},\mathbf{b} )-\chi(\mathbf{b},\mathbf{a})]\.\] The components \(m_{\mathbf{a},\mathbf{b}}\) will be constructed in terms of the equivalent dual maps \[m_{\mathbf{a},\mathbf{b}}^{\vee}:H_{G_{\mathbf{a}+\mathbf{b}},c}^{\bullet}(X_ {\mathbf{a}+\mathbf{b}},\varphi_{W_{\mathbf{a}+\mathbf{b}}})\to H_{G_{\mathbf{a }},c}^{\bullet}(X_{\mathbf{a}},\varphi_{W_{\mathbf{a}}})\otimes H_{G_{\mathbf{b }},c}^{\bullet}(X_{\mathbf{b}},\varphi_{W_{\mathbf{b}}})[\chi(\mathbf{a}, \mathbf{b})-\chi(\mathbf{b},\mathbf{a})]\,\] or after accounting explicitly for the perverse shifts, \[m_{\mathbf{a},\mathbf{b}}^{\vee}:H_{G_{\mathbf{a}+\mathbf{b}},c}^{\bullet}(X_ {\mathbf{a}+\mathbf{b}},\phi_{W_{\mathbf{a}+\mathbf{b}}}\underline{\mathbb{Q} }_{X_{\mathbf{a}+\mathbf{b}}})\to H_{G_{\mathbf{a}},c}^{\bullet}(X_{ \mathbf{a}},\phi_{W_{\mathbf{a}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}}}) \otimes H_{G_{\mathbf{b}},c}^{\bullet}(X_{\mathbf{b}},\phi_{W_{\mathbf{b}}} \underline{\mathbb{Q}}_{X_{\mathbf{b}}})[2\chi(\mathbf{a},\mathbf{b})]\, \tag{5.6}\] where the claimed cohomological degree shift follows from the equality \[\chi(\mathbf{a}+\mathbf{b},\mathbf{a}+\mathbf{b})-\chi(\mathbf{a},\mathbf{a} )-\chi(\mathbf{b},\mathbf{b})+\chi(\mathbf{a},\mathbf{b})-\chi(\mathbf{b}, \mathbf{a})=2\chi(\mathbf{a},\mathbf{b})\.\] We now proceed with the construction of the desired maps: There is a restriction of equivariance map \[H_{G_{\mathbf{a}+\mathbf{b}},c}^{\bullet}(X_{\mathbf{a}+\mathbf{b}},\phi_{W_ {\mathbf{a}+\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}+\mathbf{b}}}) \to H_{G_{\mathbf{a},\mathbf{b}},c}^{\bullet}(X_{\mathbf{a}+\mathbf{b}},\phi_ {W_{\mathbf{a}+\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}+\mathbf{b}}})\] given by the proper pullback map of Equation 5.2 induced by the map \[X_{\mathbf{a}+\mathbf{b}}/G_{\mathbf{a},\mathbf{b}}\to X_{\mathbf{a}+\mathbf{ b}}/G_{\mathbf{a}+\mathbf{b}}\] which is proper and schematic, as its fibres are isomorphic to the partial flag variety \(G_{\mathbf{a}+\mathbf{b}}/G_{\mathbf{a},\mathbf{b}}\). 
Similarly, pullback along the closed embedding \(X_{\mathbf{a},\mathbf{b}}\to X_{\mathbf{a}+\mathbf{b}}\) gives a map \[H_{G_{\mathbf{a},\mathbf{b}},c}^{\bullet}(X_{\mathbf{a}+\mathbf{b}},\phi_{W_{\mathbf{a}+\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}+\mathbf{b}}})\to H_{G_{\mathbf{a},\mathbf{b}},c}^{\bullet}(X_{\mathbf{a},\mathbf{b}},\phi_{W_{\mathbf{a},\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a},\mathbf{b}}})\.\] Next, there is a further restriction of equivariance map \[H_{G_{\mathbf{a},\mathbf{b}},c}^{\bullet}(X_{\mathbf{a},\mathbf{b}},\phi_{W_{\mathbf{a},\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a},\mathbf{b}}})\to H_{G_{\mathbf{a}}\times G_{\mathbf{b}},c}^{\bullet}(X_{\mathbf{a},\mathbf{b}},\phi_{W_{\mathbf{a},\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a},\mathbf{b}}})[2(\dim G_{\mathbf{a},\mathbf{b}}-\dim G_{\mathbf{a}}\times G_{\mathbf{b}})]\] given by the inverse pushforward map, as in Equation 5.4, induced by the affine fibration \[X_{\mathbf{a},\mathbf{b}}/(G_{\mathbf{a}}\times G_{\mathbf{b}})\to X_{\mathbf{a},\mathbf{b}}/G_{\mathbf{a},\mathbf{b}}\] with fibres given by the unipotent group \(G_{\mathbf{a},\mathbf{b}}/(G_{\mathbf{a}}\times G_{\mathbf{b}})\). There is also a pushforward map \[H_{G_{\mathbf{a}}\times G_{\mathbf{b}},c}^{\bullet}(X_{\mathbf{a},\mathbf{b}},\phi_{W_{\mathbf{a},\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a},\mathbf{b}}})\to H_{G_{\mathbf{a}}\times G_{\mathbf{b}},c}^{\bullet}(X_{\mathbf{a}}\times X_{\mathbf{b}},\phi_{W_{\mathbf{a}}\boxtimes W_{\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}}\times X_{\mathbf{b}}})[-2(\dim X_{\mathbf{a},\mathbf{b}}-\dim X_{\mathbf{a}}\times X_{\mathbf{b}})]\] induced as in Equation 5.3 by the smooth map \[X_{\mathbf{a},\mathbf{b}}/(G_{\mathbf{a}}\times G_{\mathbf{b}})\to(X_{\mathbf{a}}\times X_{\mathbf{b}})/(G_{\mathbf{a}}\times G_{\mathbf{b}})\.\] Composing the above sequence of maps with the Thom-Sebastiani isomorphism \[H_{G_{\mathbf{a}}\times G_{\mathbf{b}},c}^{\bullet}(X_{\mathbf{a}}\times X_{\mathbf{b}},\phi_{W_{\mathbf{a}}\boxtimes W_{\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}}\times X_{\mathbf{b}}})\cong H_{G_{\mathbf{a}},c}^{\bullet}(X_{\mathbf{a}},\phi_{W_{\mathbf{a}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}}})\otimes H_{G_{\mathbf{b}},c}^{\bullet}(X_{\mathbf{b}},\phi_{W_{\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{b}}})\,\] gives the desired map of Equation 5.6, where the correct shift follows from the equality of Equation 5.5. Associativity of the above product follows from a standard argument; see for example Section 2.3 of [11].

With this construction in hand, we make the following definition:

_Definition 5.4_.: Let \(Y\xrightarrow{\pi}X\) be a resolution of singularities of a toric Calabi-Yau threefold satisfying the hypotheses of Section 3.2. The cohomological Hall algebra of \(Y\) is defined by \[\mathcal{H}(Y):=\mathcal{H}(Q_{Y},W_{Y})\ \in\operatorname{Alg}_{\operatorname{Ass}}(\operatorname{Vect}_{\mathbb{N}^{V_{Q}}}^{\otimes\operatorname{tw}})\,\] as constructed in the preceding Theorem, where \((Q_{Y},W_{Y})\) is the quiver with potential associated to \(Y\) as in Theorem 3.32.

### Representations of cohomological Hall algebras from perverse coherent extensions

In this section, we prove Theorem B from Section 1.2 of the introduction, following the results outlined in Section 4 of [10].
Let \(M\in\operatorname{PervCoh}^{\operatorname{p}}(Y)^{T}\) satisfy the hypotheses of Section 4.1, with \(Q_{M}\) the associated extended quiver, and let \(\mathrm{f}\) be a framing structure for \(M\) of rank \(d_{\infty}\) with \((Q_{M}^{\operatorname{f}},W_{M}^{\operatorname{f}})\) the associated framed quiver with potential, as in Theorem 4.38. Recall that by the construction in _loc. cit._ we have \[X_{\mathbf{d}}(Q_{M}^{\operatorname{f}})=\bigoplus_{i,j\in V_{Q}}{}_{i}\Sigma_{j}^{1}\otimes\operatorname{Hom}(\mathbb{K}^{d_{i}},\mathbb{K}^{d_{j}})\oplus\bigoplus_{i\in V_{Q}}{}_{i}\Sigma_{\infty}^{1}\otimes\operatorname{Hom}(\mathbb{K}^{d_{i}},\mathbb{K}^{d_{\infty}})\oplus\bigoplus_{j\in V_{Q}}{}_{\infty}\Sigma_{j}^{1}\otimes\operatorname{Hom}(\mathbb{K}^{d_{\infty}},\mathbb{K}^{d_{j}})\] for each \(\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}\), as well as the closed subvariety \[Z_{\mathbf{d}}(Q_{M}^{\operatorname{f}})=\operatorname{Crit}(W_{M,\mathbf{d}}^{\operatorname{f}})\subset X_{\mathbf{d}}(Q_{M}^{\operatorname{f}})\qquad\text{invariant under}\qquad G_{\mathbf{d}}(Q_{Y})=\prod_{i\in V_{Q_{Y}}}\operatorname{Gl}_{d_{i}}(\mathbb{K})\.\] Throughout this section, we will drop the dependence on the choice of quiver \(Q_{Y}\) and extended quiver \(Q_{M}\) from the notation, and write simply \[X_{\mathbf{d}}^{\operatorname{f}}=X_{\mathbf{d}}(Q_{M}^{\operatorname{f}})\qquad X_{\mathbf{d}}=X_{\mathbf{d}}(Q_{Y})\qquad\text{and}\qquad G_{\mathbf{d}}=G_{\mathbf{d}}(Q_{Y})\.\] Further, we fix a choice of stability condition \(\zeta\in\mathbb{R}^{V_{Q}}\) for the framed quiver with potential \((Q_{M}^{\operatorname{f}},W_{M}^{\operatorname{f}})\) and let \(X_{\mathbf{d}}^{\operatorname{f},\zeta}\subset X_{\mathbf{d}}^{\operatorname{f}}\) denote the open subvariety of \(\zeta\)-stable points.
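For orientation, we record the standard special case \(Y=\mathbb{A}^{3}\) with \(M=\mathcal{O}_{Y}[1]\) and \(d_{\infty}=1\), anticipating Example 6.3 below; we assume here the identification of \((Q_{Y},W_{Y})\) with the quiver with one vertex, three loops \(B_{1},B_{2},B_{3}\), and potential \(\operatorname{Tr}(B_{1}[B_{2},B_{3}])\). In this case a framed representation of dimension \(n\) is a tuple \[(B_{1},B_{2},B_{3},v)\in\operatorname{End}(\mathbb{K}^{n})^{\oplus 3}\oplus\operatorname{Hom}(\mathbb{K},\mathbb{K}^{n})\,\] the potential is independent of the framing arrow, and \[Z_{n}(Q^{\mathrm{f}}_{M})=\{(B_{1},B_{2},B_{3},v)\ |\ [B_{1},B_{2}]=[B_{2},B_{3}]=[B_{3},B_{1}]=0\}\,\] while for a suitable \(\zeta\) the stable locus \(X^{\mathrm{f},\zeta}_{n}\) consists of those tuples for which \(v\) is a cyclic vector for the subalgebra generated by the \(B_{i}\). Note also that for this quiver the Euler form of Section 5.3 is \(\chi(a,b)=ab-3ab=-2ab\), which is symmetric, so that the twist \(\chi(\mathbf{b},\mathbf{a})-\chi(\mathbf{a},\mathbf{b})\) vanishes in this example.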
As in the preceding Section 5.3, for \(\mathbf{a},\mathbf{b}\in\mathbb{N}^{V_{Q}}\), we let \(\mathbf{a}^{0}=(\mathbf{a},0),\mathbf{b}^{\operatorname{f}}=(\mathbf{b},d_{\infty})\in\mathbb{N}^{V_{Q_{M}^{\operatorname{f}}}}\) and define \[\mathfrak{M}_{\mathbf{a},\mathbf{b}}^{\operatorname{f}}=[X_{\mathbf{a},\mathbf{b}}^{\operatorname{f}}/G_{\mathbf{a},\mathbf{b}}]\qquad\text{where}\qquad\begin{cases}X_{\mathbf{a},\mathbf{b}}^{\operatorname{f}}=\big\{\varphi\in X_{\mathbf{a}+\mathbf{b}}^{\operatorname{f}}\ \big|\ \varphi\big(\bigoplus_{e\in E_{Q_{M}^{\operatorname{f}}}}\mathbb{K}^{a^{0}_{s(e)}}\big)\subset\bigoplus_{e\in E_{Q_{M}^{\operatorname{f}}}}\mathbb{K}^{a^{0}_{t(e)}}\big\}\\ G_{\mathbf{a},\mathbf{b}}=\big\{g\in G_{\mathbf{a}+\mathbf{b}}\ \big|\ g\big(\bigoplus_{i\in V_{Q_{Y}}}\mathbb{K}^{a_{i}}\big)\subset\bigoplus_{i\in V_{Q_{Y}}}\mathbb{K}^{a_{i}}\big\}\end{cases}\] Equivalently, these spaces are defined component-wise as \[X_{\mathbf{a},\mathbf{b}}^{\operatorname{f}}=\prod_{e\in E_{Q_{M}^{\operatorname{f}}}}\operatorname{Hom}_{a^{0}_{s(e)},a^{0}_{t(e)}}(\mathbb{K}^{a^{0}_{s(e)}+b^{\operatorname{f}}_{s(e)}},\mathbb{K}^{a^{0}_{t(e)}+b^{\operatorname{f}}_{t(e)}})\qquad\text{and}\qquad G_{\mathbf{a},\mathbf{b}}=\prod_{i\in V_{Q_{Y}}}\operatorname{Gl}_{a_{i},b_{i}}(\mathbb{K})\] where \(\operatorname{Hom}_{m_{1},m_{2}}(\mathbb{K}^{m_{1}+n_{1}},\mathbb{K}^{m_{2}+n_{2}})\) denotes the space of block upper triangular linear maps, that is, those satisfying \(\varphi(\mathbb{K}^{m_{1}})\subset\mathbb{K}^{m_{2}}\), and \(\operatorname{Gl}_{m,n}(\mathbb{K})\subset\operatorname{Gl}_{m+n}(\mathbb{K})\) is the parabolic subgroup preserving the flag \(\{0\}\subset\mathbb{K}^{m}\subset\mathbb{K}^{m+n}\).

Geometrically, \(\mathfrak{M}^{\rm f}_{\bf a,b}\) is the moduli stack of short exact sequences of representations of the framed quiver \(Q^{\rm f}_{M}\) with sub object and quotient object of dimension \({\bf a}^{0}\) and \({\bf b}^{\rm f}\), respectively. Note that the subobject has framing dimension \(0\), and is thus equivalent to a representation of the unframed quiver \(Q_{Y}\). In particular, note that we again have a closed embedding \(X^{\rm f}_{\bf a,b}\to X^{\rm f}_{\bf a+b}\) by definition, as well as a smooth map \(X^{\rm f}_{\bf a,b}\to X_{\bf a}\times X^{\rm f}_{\bf b}\) given by forgetting the extension data. These maps define a correspondence of algebraic \(\mathbb{K}\)-stacks \[\mathfrak{M}_{\mathbf{a}}\times\mathfrak{M}^{\mathrm{f}}_{\mathbf{b}}\xleftarrow{\ \mathrm{q}\ }\mathfrak{M}^{\mathrm{f}}_{\mathbf{a},\mathbf{b}}\xrightarrow{\ \mathrm{p}\ }\mathfrak{M}^{\mathrm{f}}_{\mathbf{a}+\mathbf{b}}\.\] Analogously to the previous section, this correspondence will induce the desired module structure on the Borel-Moore homology of \(\mathfrak{M}^{\rm f,\zeta}\) with coefficients in the sheaf of vanishing cycles determined by \(W^{\rm f}\), but we must modify the correspondence to ensure it preserves the moduli spaces of \(\zeta\)-stable representations. We define subschemes \(X^{\rm f,\zeta_{\bf b}}_{\bf a,b},X^{\rm f,\zeta_{\bf a+b}}_{\bf a,b},X^{\rm f,\zeta}_{\bf a,b}\subset X^{\rm f}_{\bf a,b}\) by requiring that the evident pullback squares are Cartesian: that is, \(X^{\rm f,\zeta_{\bf b}}_{\bf a,b}\) is the preimage of \(X_{\bf a}\times X^{\rm f,\zeta}_{\bf b}\) under the forgetful map, \(X^{\rm f,\zeta_{\bf a+b}}_{\bf a,b}\) is the preimage of \(X^{\rm f,\zeta}_{\bf a+b}\) under the closed embedding, and \(X^{\rm f,\zeta}_{\bf a,b}\) is their intersection. Note that the subschemes \(X^{\rm f,\zeta_{\bf b}}_{\bf a,b},X^{\rm f,\zeta_{\bf a+b}}_{\bf a,b},X^{\rm f,\zeta}_{\bf a,b}\subset X^{\rm f}_{\bf a,b}\) are all open, by base change, and we have that \(X^{\rm f,\zeta}_{\bf a,b}=X^{\rm f,\zeta_{\bf a+b}}_{\bf a,b}\times_{X^{\rm f}_{\bf a,b}}X^{\rm f,\zeta_{\bf b}}_{\bf a,b}\).
Similarly, the maps \[X^{\rm f,\zeta_{\bf b}}_{\bf a,b}\to X_{\bf a}\times X^{\rm f,\zeta}_{\bf b}\qquad\text{and}\qquad X^{\rm f,\zeta_{\bf a+b}}_{\bf a,b}\to X^{\rm f,\zeta}_{\bf a+b}\] are smooth and a closed embedding, respectively, by base change. In order to complete the construction, we will need to make an additional hypothesis on the stability condition \(\zeta\):

_Definition 5.5_.: A stability condition \(\zeta\) is called _compatible_ with \(M\) if the composition \[[X^{\rm f,\zeta}_{\bf a,b}/G_{\bf a,b}]\to[X^{\rm f,\zeta_{\bf a+b}}_{\bf a,b}/G_{\bf a,b}]\to[X^{\rm f,\zeta}_{\bf a+b}/G_{\bf a,b}]\] is proper for each \({\bf a,b}\in\mathbb{N}^{V_{Q_{Y}}}\).

We now construct the desired representations \(\mathbb{V}=\mathbb{V}^{\rm f,\zeta}(M)\) defined by an object \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\) together with a choice of framing structure \({\rm f}\) and a compatible stability condition \(\zeta\). The underlying graded vector space is given by \[\mathbb{V}^{\rm f,\zeta}(M)=\bigoplus_{{\bf d}\in\mathbb{N}^{V_{Q}}}\mathbb{V}^{\rm f,\zeta}_{\bf d}(M)\qquad\text{with}\qquad\mathbb{V}^{\rm f,\zeta}_{\bf d}(M)=H^{G_{\bf d}(Q)}_{\bullet}(X^{\zeta}_{\bf d}(Q^{\rm f}_{M}),\varphi_{W^{\rm f}_{M,\bf d}})\,\] where we recall that \(H^{G}_{\bullet}(X,\varphi):=H^{\bullet}_{G,c}(X,\varphi)^{\vee}\) denotes the equivariant Borel-Moore homology with coefficients in the sheaf of vanishing cycles, defined as the dual of the compactly supported cohomology with coefficients in the sheaf of vanishing cycles, as in Section 5.3. The main result of this section is the following, the proof of which closely follows that of Theorem 5.3 above from [11]. Let \(Y\to X\) be as in Section 3.2, let \(M\in\mathrm{D}^{b}\mathrm{Coh}(Y)^{T}\) satisfy the hypotheses of Sections 4.1 and 4.6, let \(\mathrm{f}\in Z_{d_{\infty}}(Q_{M},R_{M})(\mathbb{K})\) be a framing structure on \(M\), and let \(\zeta\) be a compatible stability condition. Then we have the following result, which establishes Theorem B from the introduction:

_Theorem 5.6_.: There exists a natural (twisted \(\mathbb{N}^{V_{Q}}\)-graded) representation \[\rho_{M}:\mathcal{H}(Y)\to\mathrm{End}_{\mathrm{Vect}_{\mathbb{N}^{V_{Q}}}}(\mathbb{V}^{\mathrm{f},\zeta}(M))\] of the cohomological Hall algebra \(\mathcal{H}(Y)\) on \(\mathbb{V}^{\mathrm{f},\zeta}(M)\).
Proof.: The module structure map \(\rho:\mathcal{H}\otimes^{\mathrm{tw}}\mathbb{V}\to\mathbb{V}\) for \(\mathbb{V}=\mathbb{V}^{\mathrm{f},\zeta}(M)\) over the cohomological Hall algebra \(\mathcal{H}=\mathcal{H}(Y)\) is defined in components by \[\rho_{\mathbf{a},\mathbf{b}}:\mathcal{H}_{\mathbf{a}}\otimes\mathbb{V}_{\mathbf{b}}\to\mathbb{V}_{\mathbf{a}+\mathbf{b}}[\chi(\mathbf{a},\mathbf{b})-\chi(\mathbf{b},\mathbf{a})]\.\] The components \(\rho_{\mathbf{a},\mathbf{b}}\) will be constructed in terms of the equivalent dual maps \[\rho_{\mathbf{a},\mathbf{b}}^{\vee}:H^{\bullet}_{G_{\mathbf{a}+\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}},\varphi_{W^{\mathrm{f}}_{\mathbf{a}+\mathbf{b}}})\to H^{\bullet}_{G_{\mathbf{a}},c}(X_{\mathbf{a}},\varphi_{W_{\mathbf{a}}})\otimes H^{\bullet}_{G_{\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{b}},\varphi_{W^{\mathrm{f}}_{\mathbf{b}}})[\chi(\mathbf{a},\mathbf{b})-\chi(\mathbf{b},\mathbf{a})]\,\] or after accounting explicitly for the perverse shifts, \[\rho_{\mathbf{a},\mathbf{b}}^{\vee}:H^{\bullet}_{G_{\mathbf{a}+\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{a}+\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}}})\to H^{\bullet}_{G_{\mathbf{a}},c}(X_{\mathbf{a}},\phi_{W_{\mathbf{a}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}}})\otimes H^{\bullet}_{G_{\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta}_{\mathbf{b}}})[2\chi(\mathbf{a},\mathbf{b})]\, \tag{5.7}\] where the claimed cohomological degree shift again follows from the equality \[\chi(\mathbf{a}+\mathbf{b},\mathbf{a}+\mathbf{b})-\chi(\mathbf{a},\mathbf{a})-\chi(\mathbf{b},\mathbf{b})+\chi(\mathbf{a},\mathbf{b})-\chi(\mathbf{b},\mathbf{a})=2\chi(\mathbf{a},\mathbf{b})\.\] We now proceed with the construction of the desired maps: There is again a restriction of equivariance map \[H^{\bullet}_{G_{\mathbf{a}+\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{a}+\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}}})\to H^{\bullet}_{G_{\mathbf{a},\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{a}+\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}}})\] given by the proper pullback map of Equation 5.2 induced by the map \[X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}}/G_{\mathbf{a},\mathbf{b}}\to X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}}/G_{\mathbf{a}+\mathbf{b}}\] which is proper and schematic, as its fibres are isomorphic to the partial flag variety \(G_{\mathbf{a}+\mathbf{b}}/G_{\mathbf{a},\mathbf{b}}\).
Similarly, pullback along the map \([X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}}/G_{\mathbf{a},\mathbf{b}}]\to[X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}}/G_{\mathbf{a},\mathbf{b}}]\), which is proper by the compatibility of \(\zeta\), gives a map \[H^{\bullet}_{G_{\mathbf{a},\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{a}+\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}}})\to H^{\bullet}_{G_{\mathbf{a},\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{a},\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}}})\.\] Next, there is again a further restriction of equivariance map \[H^{\bullet}_{G_{\mathbf{a},\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{a},\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}}})\to H^{\bullet}_{G_{\mathbf{a}}\times G_{\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{a},\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}}})[2(\dim G_{\mathbf{a},\mathbf{b}}-\dim G_{\mathbf{a}}\times G_{\mathbf{b}})]\] given by the inverse pushforward map, as in Equation 5.4, induced by the affine fibration \[X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}}/(G_{\mathbf{a}}\times G_{\mathbf{b}})\to X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}}/G_{\mathbf{a},\mathbf{b}}\] with fibres given by the unipotent group \(G_{\mathbf{a},\mathbf{b}}/(G_{\mathbf{a}}\times G_{\mathbf{b}})\). There is also a pushforward map \[H^{\bullet}_{G_{\mathbf{a}}\times G_{\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{a},\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}}})\to H^{\bullet}_{G_{\mathbf{a}}\times G_{\mathbf{b}},c}(X^{\mathrm{f},\zeta_{\mathbf{b}}}_{\mathbf{a},\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{a},\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta_{\mathbf{b}}}_{\mathbf{a},\mathbf{b}}})\] induced as in Equation 5.3 by the map \(X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}}\to X^{\mathrm{f},\zeta_{\mathbf{b}}}_{\mathbf{a},\mathbf{b}}\), which is an open immersion and thus smooth.
Similarly, there is again a pushforward map \[H^{\bullet}_{G_{\mathbf{a}}\times G_{\mathbf{b}},c}(X^{\mathrm{f},\zeta_{\mathbf{b}}}_{\mathbf{a},\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{a},\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta_{\mathbf{b}}}_{\mathbf{a},\mathbf{b}}})\to H^{\bullet}_{G_{\mathbf{a}}\times G_{\mathbf{b}},c}(X_{\mathbf{a}}\times X^{\mathrm{f},\zeta}_{\mathbf{b}},\phi_{W_{\mathbf{a}}\boxtimes W^{\mathrm{f}}_{\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}}\times X^{\mathrm{f},\zeta}_{\mathbf{b}}})[-2(\dim X^{\mathrm{f}}_{\mathbf{a},\mathbf{b}}-\dim X_{\mathbf{a}}\times X^{\mathrm{f}}_{\mathbf{b}})]\] induced by the smooth map \[X^{\mathrm{f},\zeta_{\mathbf{b}}}_{\mathbf{a},\mathbf{b}}/(G_{\mathbf{a}}\times G_{\mathbf{b}})\to(X_{\mathbf{a}}\times X^{\mathrm{f},\zeta}_{\mathbf{b}})/(G_{\mathbf{a}}\times G_{\mathbf{b}})\.\] Composing the above sequence of maps with the Thom-Sebastiani isomorphism \[H^{\bullet}_{G_{\mathbf{a}}\times G_{\mathbf{b}},c}(X_{\mathbf{a}}\times X^{\mathrm{f},\zeta}_{\mathbf{b}},\phi_{W_{\mathbf{a}}\boxtimes W^{\mathrm{f}}_{\mathbf{b}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}}\times X^{\mathrm{f},\zeta}_{\mathbf{b}}})\cong H^{\bullet}_{G_{\mathbf{a}},c}(X_{\mathbf{a}},\phi_{W_{\mathbf{a}}}\underline{\mathbb{Q}}_{X_{\mathbf{a}}})\otimes H^{\bullet}_{G_{\mathbf{b}},c}(X^{\mathrm{f},\zeta}_{\mathbf{b}},\phi_{W^{\mathrm{f}}_{\mathbf{b}}}\underline{\mathbb{Q}}_{X^{\mathrm{f},\zeta}_{\mathbf{b}}})\,\] gives the desired map of Equation 5.7, where the correct shift follows from the equality of Equation 5.5. That these maps define a module structure, that is, the compatibility of \(\rho\) with the product \(m\), follows from a standard argument similar to that establishing the associativity of the product on the cohomological Hall algebra; see for example Section 2.3 of [10].

### Enumerative invariants from perverse coherent extensions

The vector spaces \(\mathbb{V}^{\mathrm{f},\zeta}(M)\) underlying the modules constructed in the preceding section should be understood as a family of categorified enumerative invariants of the threefold \(Y\), which are heuristically given by counting the number of \(\zeta\)-stable iterated extensions of the object \(M\) by compactly supported perverse coherent sheaves. We now formally define the corresponding enumerative invariants and recall their relation with geometry, which follows from the seminal results of [1] and [11].
Let \(M\in\mathrm{PervCoh}^{\mathrm{p}}(Y)^{T}\), let \(\mathrm{f}\) be a framing structure for \(M\) of rank \(d_{\infty}\) with \((Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\) the associated framed quiver with potential, and let \(\zeta\) be a stability condition as in the preceding section. Recall that the graded vector space underlying \(\mathbb{V}^{\mathrm{f},\zeta}(M)\) is given by \[\mathbb{V}^{\mathrm{f},\zeta}(M)=\bigoplus_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}\mathbb{V}^{\mathrm{f},\zeta}_{\mathbf{d}}(M)\qquad\text{with}\qquad\mathbb{V}^{\mathrm{f},\zeta}_{\mathbf{d}}(M)=H^{G_{\mathbf{d}}(Q)}_{\bullet}(X^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M}),\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}})\.\] We define the generating functional for the corresponding enumerative invariants by \[\mathcal{Z}^{\mathrm{f},\zeta}_{M}(\mathbf{q})=\sum_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}\mathbf{q}^{\mathbf{d}}\chi(\mathbb{V}^{\mathrm{f},\zeta}_{\mathbf{d}}(M))\quad\in\mathbb{Z}[\![\mathbf{q}]\!]\, \tag{5.8}\] where we introduce the shorthand notation \(\mathbf{q}^{\mathbf{d}}=\prod_{i\in V_{Q_{Y}}}q^{d_{i}}_{i}\) and \(\mathbb{Z}[\![\mathbf{q}]\!]=\mathbb{Z}[\![q_{i}]\!]_{i\in V_{Q_{Y}}}\), and \(\chi\) denotes the Euler characteristic of the cohomologically graded vector space. Now, suppose that the stability condition \(\zeta\) is chosen so that there are no strictly semi-stable points and \(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M})=X^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M})/G_{\mathbf{d}}(Q)\) is a smooth scheme for each \(\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}\), so that the function \(W^{\mathrm{f}}_{M,\mathbf{d}}:X^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M})\to\mathbb{K}\) descends to a regular function \(\overline{W}^{\mathrm{f}}_{M,\mathbf{d}}:\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M})\to\mathbb{K}\), and we have that \[\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})=\left[\mathrm{Crit}(W^{\mathrm{f}}_{M,\mathbf{d}}|_{X^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M})})/G_{\mathbf{d}}(Q_{Y})\right]=\mathrm{Crit}(\overline{W}^{\mathrm{f}}_{M,\mathbf{d}})\subset\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M})\,\] that is, the moduli space \(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\) of \(\zeta\)-stable representations of the quiver with potential \((Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\) is a global critical locus of a regular function on a smooth scheme. In particular, we have \[\mathbb{V}^{\mathrm{f},\zeta}_{\mathbf{d}}(M)=H^{G_{\mathbf{d}}(Q)}_{\bullet}(X^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M}),\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}})=H_{\bullet}(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M}),\varphi_{\overline{W}^{\mathrm{f}}_{M,\mathbf{d}}})\. \tag{5.9}\]
Moreover, the results of [1] imply that \(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\) admits a symmetric obstruction theory inducing a virtual fundamental class \([\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})]^{\mathrm{vir}}\in H_{\bullet}(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M}))\) and corresponding invariants \[\tilde{\mathcal{Z}}^{\mathrm{f},\zeta}_{M}(\mathbf{q})=\sum_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}\mathbf{q}^{\mathbf{d}}\int_{[\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})]^{\mathrm{vir}}}1\in\mathbb{Z}[\![\mathbf{q}]\!]\,\] which heuristically count the number of points in \(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\), and these invariants can be computed as \[\int_{[\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})]^{\mathrm{vir}}}1=\chi(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M}),\nu^{\zeta}_{M,\mathbf{d}}):=\sum_{n\in\mathbb{Z}}n\ \chi((\nu^{\zeta}_{M,\mathbf{d}})^{-1}\{n\})\,\] the weighted Euler characteristic of \(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\) with respect to a constructible function \[\nu^{\zeta}_{M,\mathbf{d}}:\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\to\mathbb{Z}\.\] Moreover, it is shown in _loc. cit._ that in the case of a critical locus, this constructible function can be computed locally in terms of the Euler characteristic of the Milnor fibre of the potential, inducing an identification \[\chi(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M}),\nu^{\zeta}_{M,\mathbf{d}})=\chi(\mathbb{V}^{\mathrm{f},\zeta}_{\mathbf{d}}(M))\qquad\text{and thus}\qquad\mathcal{Z}^{\mathrm{f},\zeta}_{M}=\tilde{\mathcal{Z}}^{\mathrm{f},\zeta}_{M}\.\] We now recall some additional hypotheses that allow for a concrete combinatorial approach to calculating these invariants, and which are valid in many examples of interest. Let \(\tilde{T}\subset T\) denote the maximal subtorus of \(T\) which preserves the Calabi-Yau structure on \(Y\), and recall that there is a natural action of \(T\) on \(\mathfrak{M}^{\zeta,\mathrm{f}}_{\mathbf{d}}(Q_{M})\) induced by the action on the underlying threefold \(Y\), such that the subtorus \(\tilde{T}\) preserves the potential \(W^{\mathrm{f}}_{M}\) and thus naturally acts on the moduli space \(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\). Additionally, let \(T_{\mathrm{f}}\) be a maximal torus of \(G^{\mathrm{c}}_{\mathrm{f}}=\mathrm{Stab}_{\mathrm{Gl}_{d_{\infty}}(\mathbb{K})}(\mathrm{f})\), the subgroup of \(\mathrm{Gl}_{d_{\infty}}(\mathbb{K})\) stabilizing the point \(\mathrm{f}\in Z_{d_{\infty}}(Q_{M},R_{M})\), and let \(A=\tilde{T}\times T_{\mathrm{f}}\), which acts naturally on \(X^{\mathrm{f}}_{\mathbf{d}}(Q_{M})\) such that the potential \(W^{\mathrm{f}}_{M,\mathbf{d}}\) is \(A\)-invariant and this action commutes with that of \(G_{\mathbf{d}}\). Thus, \(A\) also acts naturally on \(Z^{\mathrm{f}}_{\mathbf{d}}(Q_{M},R_{M})\), as well as the quotient stacks \(\mathfrak{M}^{\mathrm{f}}_{\mathbf{d}}(Q_{M})\) and \(\mathfrak{M}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\), and similarly we obtain actions on \(\mathfrak{M}^{\mathrm{f},\zeta}_{\mathbf{d}}(Q_{M})\) and \(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\) if \(A\) preserves \(\zeta\)-stability.
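To illustrate the choice of Calabi-Yau subtorus: if \(Y=\mathbb{C}^{3}\) with \(T=(\mathbb{C}^{\times})^{3}\) acting by scaling the coordinates, the holomorphic volume form \(dx\wedge dy\wedge dz\) has weight \((1,1,1)\), so that \[\tilde{T}=\{(t_{1},t_{2},t_{3})\in T\ |\ t_{1}t_{2}t_{3}=1\}\cong(\mathbb{C}^{\times})^{2}\,\] consistent with the relation \(\hbar_{1}+\hbar_{2}+\hbar_{3}=0\) imposed on \(H^{\bullet}_{A}(\mathrm{pt})\) in the next section.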
Throughout the remainder of this section, we make the following hypothesis:

_Hypothesis 5.7_.: We assume that the \(A\)-fixed subvariety \(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})^{A}\) is a finite set \(\mathfrak{n}_{\mathbf{d}}\) of isolated fixed points.

For simplicity of presentation in the following corollary, we also assume that for each \(\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}\), the dimensions of the tangent spaces to the \(A\)-fixed points in the component \(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})\) are all of the same parity \(k_{\mathbf{d}}\in\mathbb{Z}/2\mathbb{Z}\), with corresponding sign \((-1)^{k_{\mathbf{d}}}\). Under the preceding hypotheses, we can compute the above enumerative invariants using the localization results of [1], so that we obtain:

_Corollary 5.8_.: The above generating function is given by \[\mathcal{Z}_{M}^{\mathrm{f},\zeta}(\mathbf{q})=\sum_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}\mathbf{q}^{\mathbf{d}}(-1)^{k_{\mathbf{d}}}\ |\mathfrak{M}_{\mathbf{d}}^{\zeta}(Q_{M}^{\mathrm{f}},W_{M}^{\mathrm{f}})^{A}|\quad\in\mathbb{Z}[\![\mathbf{q}]\!]\,\] where \(|\mathfrak{M}_{\mathbf{d}}^{\zeta}(Q_{M}^{\mathrm{f}},W_{M}^{\mathrm{f}})^{A}|\in\mathbb{Z}\) denotes the cardinality of the set of \(A\)-fixed points.

Proof.: Under the given hypotheses, this follows from Theorem 3.4 and Corollaries 3.5 and 3.6 of _loc. cit._.

In fact, our description of the set of \(T\)-fixed points induced by the equivalence of Equation 4.31 together with Theorem 4.20 gives a natural enumeration of the fixed points in terms of objects in \(\mathrm{filt}(\Sigma_{M})\), or equivalently in the category of twisted objects \(H^{0}(\mathrm{tw}^{0}(\mathcal{A}_{F\oplus M}))\).

### Towards representations of shifted quiver Yangians

In this section, we outline the construction of an extension of the geometric action of \(\mathcal{H}(Y)\) on \(\mathbb{V}^{\zeta}(M)\) to an action of a larger associative algebra, called a shifted quiver Yangian in [10, 11], following the approach of [12], which builds in turn on [13], [14] and [15]. Let \(F\) denote the field of fractions of the polynomial ring \[H_{A}^{\bullet}(\mathrm{pt})=\mathbb{K}[\hbar_{1},\hbar_{2},\hbar_{3},\mathbf{x}]/(\hbar_{1}+\hbar_{2}+\hbar_{3})\,\] where we recall that \(A=\tilde{T}\times T_{\mathrm{f}}\) is the product of \(\tilde{T}\subset T\), the maximal subtorus of \(T\) which preserves the Calabi-Yau structure on \(Y\), and \(T_{\mathrm{f}}\), a maximal torus of \(G_{\mathrm{f}}^{\mathrm{c}}=\mathrm{Stab}_{\mathrm{Gl}_{d_{\infty}}(\mathbb{K})}(\mathrm{f})\), and we have introduced the identifications \(H_{T_{\mathrm{f}}}^{\bullet}(\mathrm{pt})=\mathbb{K}[\mathbf{x}]:=\mathbb{K}[x_{j}]_{j=1,\dots,\mathrm{rk}(T_{\mathrm{f}})}\) and \(H_{T}^{\bullet}(\mathrm{pt})=\mathbb{K}[\hbar_{1},\hbar_{2},\hbar_{3}]\) such that the Calabi-Yau structure is of weight \((1,1,1)\) in the corresponding grading. Throughout this section, all constructions are given over the base ring \(F\) by default, though this will occasionally be suppressed in the notation. In particular, we let \[\mathcal{H}=\mathcal{H}(Q_{Y},W_{Y})^{A}\in\mathrm{Alg}_{\mathrm{Ass}}(\mathrm{Vect}_{F,\mathbb{N}^{V_{Q}}}^{\otimes\mathrm{tw}})\qquad\text{and}\qquad\mathbb{V}^{\zeta}=\mathbb{V}^{\zeta}(M)^{A}\in\mathcal{H}\text{-Mod}(\mathrm{Vect}_{F,\mathbb{N}^{V_{Q}}}^{\otimes\mathrm{tw}})\] denote the localized, \(A\)-equivariant analogues of the objects introduced under this notation in sections 5.3 and 5.4, respectively.
In particular, for each \(\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}\) the underlying vector spaces of the degree \(\mathbf{d}\) components are given by \[\mathcal{H}_{\mathbf{d}}=H_{\bullet}^{G_{\mathbf{d}}\times A}(X_{\mathbf{d}},\varphi_{W_{\mathbf{d}}})\otimes_{H_{A}^{\bullet}(\mathrm{pt})}F\qquad\text{and}\qquad\mathbb{V}^{\zeta}_{\mathbf{d}}=H_{\bullet}^{G_{\mathbf{d}}\times A}(X_{\mathbf{d}}^{\mathrm{f},\zeta},\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}})\otimes_{H_{A}^{\bullet}(\mathrm{pt})}F\.\] We define the universal Cartan algebra \(\mathcal{H}^{0}\) as the polynomial algebra \[\mathcal{H}^{0}=\mathcal{H}^{0}(Y)=F[\phi_{n}^{i}]_{n\in\mathbb{N}}^{i\in I}\qquad\text{and let}\qquad\phi^{i}(z)=1+\sigma_{3}\sum_{n=0}^{\infty}\phi_{n}^{i}z^{-n-1}\ \in\mathcal{H}^{0}[\![z^{-1}]\!]\] be a formal generating functional for the variables \(\phi_{n}^{i}\), which we will use as a shorthand to write equations involving these variables. Recall that the analogue of the Chern polynomial in this setting defines a map \[c_{1/z}:K_{G_{\mathbf{d}}\times A}(X_{\mathbf{d}}^{\mathrm{f},\zeta})\to\mathrm{End}_{H_{G_{\mathbf{d}}\times A}^{\bullet}(\mathrm{pt})}(H_{\bullet}^{G_{\mathbf{d}}\times A}(X_{\mathbf{d}}^{\mathrm{f},\zeta},\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}})\otimes_{H_{A}^{\bullet}(\mathrm{pt})}F)[\![z^{-1}]\!]\ ;\] it can be defined for example on vector bundles \(E\in K_{G_{\mathbf{d}}\times A}(X_{\mathbf{d}}^{\mathrm{f},\zeta})\) by \[c_{1/z}(E)=\sum_{n=0}^{\mathrm{rk}(E)}c_{n}(E)z^{-n}:=z^{-\mathrm{rk}(E)}\mathrm{Eu}(E\otimes q)=\prod_{i=1}^{\mathrm{rk}(E)}\Big(1+\frac{e_{i}}{z}\Big)\,\] with the convention \(c_{0}(E)=1\), where \(e_{1},\dots,e_{\mathrm{rk}(E)}\in H^{G_{\mathbf{d}}\times A}_{\bullet}(X^{\mathrm{f},\zeta}_{\mathbf{d}}(Q_{M}),\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}})\) denote the Chern roots of \(E\) and \[\mathrm{Eu}(E\otimes q)=s^{*}s_{*}\ \in\mathrm{End}_{H^{\bullet}_{G_{\mathbf{d}}\times A\times\mathbb{C}^{\times}}(\mathrm{pt})}(H^{G_{\mathbf{d}}\times A\times\mathbb{C}^{\times}}_{\bullet}(X^{\mathrm{f},\zeta}_{\mathbf{d}},\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}}))\cong\mathrm{End}_{H^{\bullet}_{G_{\mathbf{d}}\times A}(\mathrm{pt})}(H^{G_{\mathbf{d}}\times A}_{\bullet}(X^{\mathrm{f},\zeta}_{\mathbf{d}},\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}}))[z]\] denotes the \(G_{\mathbf{d}}\times A\times\mathbb{C}^{\times}\)-equivariant Euler class, defined in terms of the zero section \(s:X^{\mathrm{f},\zeta}_{\mathbf{d}}\to|E\otimes q|\), where \(\mathbb{C}^{\times}\) acts trivially on \(X^{\mathrm{f},\zeta}_{\mathbf{d}}\), \(q\) denotes its standard one dimensional, weight one representation, and we identify \(H^{\bullet}_{\mathbb{C}^{\times}}(\mathrm{pt})\cong\mathbb{C}[z]\). For each \(\mathbf{d}\in\mathbb{N}^{V_{Q}}\), there are \(G_{\mathbf{d}}\times A\)-equivariant tautological bundles \(\mathcal{V}^{i}_{\mathbf{d}}\) on \(X^{\mathrm{f},\zeta}_{\mathbf{d}}\) of rank \(d_{i}\) for each \(i\in I_{M}\) which define classes in \(K_{G_{\mathbf{d}}\times A}(X^{\mathrm{f},\zeta}_{\mathbf{d}})\), in terms of which we can define the canonical classes \(\mathcal{F}^{i}_{\mathbf{d}}\) by \[\mathcal{F}^{i}_{\mathbf{d}}={}_{i}\Sigma\otimes_{S_{M}}\mathcal{V}_{\mathbf{d}}=\bigoplus_{j\in I_{M}}{}_{i}\Sigma_{j}\otimes\mathcal{V}^{j}_{\mathbf{d}}\ \in K_{G_{\mathbf{d}}\times A}(X^{\mathrm{f},\zeta}_{\mathbf{d}})\] for each \(i\in I_{M}=V_{Q_{Y}}\cup\{\infty\}\), where we view \(\Sigma\) with its cohomological and \(T\) gradings as a complex of representations of \(A\).
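As a simple sanity check on these conventions, note that for a line bundle \(L\) the definition reduces to \(c_{1/z}(L)=1+c_{1}(L)z^{-1}\), while for an honest bundle \(E\) of rank \(r\) with Chern roots \(e_{1},\dots,e_{r}\) we have \[c_{-1/z}(E)=\prod_{k=1}^{r}\Big(1-\frac{e_{k}}{z}\Big)=1-c_{1}(E)z^{-1}+c_{2}(E)z^{-2}-\dots\,\] a polynomial in \(z^{-1}\) with constant term \(1\), and hence invertible over \(F[\![z^{-1}]\!]\); this is the shape of the operators through which the generating functionals \(\phi^{i}(z)\) will act below.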
We now define an action \(\mathcal{H}^{0}\to\mathrm{End}_{F}(\mathbb{V}^{\zeta})\) of the universal Cartan algebra \(\mathcal{H}^{0}\) on \(\mathbb{V}^{\zeta}\) in components \[\mathcal{H}^{0}\to\mathrm{End}_{F}(H^{G_{\mathbf{d}}\times A}_{\bullet}(X^{\mathrm{f},\zeta}_{\mathbf{d}},\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F)\,\] by the action of the generators \(\phi^{i}_{n}\) on \(\mathbb{V}^{\zeta}_{\mathbf{d}}(M)\), according to the formula \[\phi^{i}(z)\mapsto c_{-1/z}(\mathcal{F}^{i}_{\mathbf{d}})\ \in\mathrm{End}_{F}(H^{G_{\mathbf{d}}\times A}_{\bullet}(X^{\mathrm{f},\zeta}_{\mathbf{d}},\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F)[\![z^{-1}]\!]\,\] for each \(i\in V_{Q_{Y}}\). Recall that the representation structure map \(\mathcal{H}\to\mathrm{End}(\mathbb{V}^{\zeta})\) constructed in Theorem 5.6 is defined in terms of the maps \[X_{\mathbf{a}}\times X^{\mathrm{f},\zeta}_{\mathbf{b}}\xleftarrow{\ \mathrm{q}\ }X^{\mathrm{f},\zeta}_{\mathbf{a},\mathbf{b}}\xrightarrow{\ \mathrm{p}\ }X^{\mathrm{f},\zeta}_{\mathbf{a}+\mathbf{b}}\] of the correspondence of Section 5.4. The relation between the actions of \(\mathcal{H}\) and \(\mathcal{H}^{0}\) on \(\mathbb{V}^{\zeta}\) is determined by the difference \[\mathrm{p}^{*}\mathcal{F}^{i}_{\mathbf{a}+\mathbf{b}}-\mathrm{q}^{*}(\mathcal{O}_{X_{\mathbf{a}}}\boxtimes\mathcal{F}^{i}_{\mathbf{b}})=\mathrm{q}^{*}(\mathcal{G}^{i}_{\mathbf{a}}\boxtimes\mathcal{O}_{X^{\mathrm{f},\zeta}_{\mathbf{b}}})\] in terms of a class \(\mathcal{G}^{i}_{\mathbf{a}}\in K_{G_{\mathbf{a}}}(X_{\mathbf{a}})\). In particular, we assume that the Chern polynomial of \(\mathcal{G}^{i}_{\mathbf{a}}\) is given by \[c_{-1/z}(\mathcal{G}^{i}_{\mathbf{a}})=P^{i}_{\mathbf{a}}(\hbar_{1},\hbar_{2},\mathbf{x},e_{k},z)^{-}\quad\in\mathrm{End}_{H^{\bullet}_{G_{\mathbf{a}}\times A}(\mathrm{pt})}(\ H^{G_{\mathbf{a}}\times A}_{\bullet}(X_{\mathbf{a}},\varphi_{W_{\mathbf{a}}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F)[\![z^{-1}]\!]\,\] the power series expansion in \(z^{-1}\) of a rational function \[P^{i}_{\mathbf{a}}=P^{i}_{\mathbf{a}}(z)=P^{i}_{\mathbf{a}}(\hbar_{1},\hbar_{2},\mathbf{x},e_{k},z)\ \in F(\{e_{k}\}_{k=1}^{\mathrm{rk}(\mathcal{V}^{i}_{\mathbf{a}})},z)\] with coefficients in the Chern roots \(e_{k}\) of \(\mathcal{V}_{\mathbf{a}}\) over the base field \(F\). Thus, we similarly define an action \(\mathcal{H}^{0}\to\mathrm{End}_{F}(\mathcal{H})\) of the universal Cartan algebra \(\mathcal{H}^{0}\) on the cohomological Hall algebra \(\mathcal{H}\) in components \[\mathcal{H}^{0}\to\mathrm{End}_{H^{\bullet}_{G_{\mathbf{d}}\times A}(\mathrm{pt})}(H^{G_{\mathbf{d}}\times A}_{\bullet}(X_{\mathbf{d}},\varphi_{W_{\mathbf{d}}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F)\qquad\text{by}\qquad\phi^{i}(z)\mapsto c_{-1/z}(\mathcal{G}^{i}_{\mathbf{d}})\, \tag{5.10}\] and we define the _extended cohomological Hall algebra_ of \(Y\) as the semidirect product algebra \[\mathcal{H}^{\geq 0}=\mathcal{H}^{0}\ltimes\mathcal{H}\] with respect to this action. Then we obtain the following Proposition:

_Proposition 5.9_.: The object \(\mathbb{V}^{\zeta}\) admits a canonical (twisted \(\mathbb{N}^{V_{Q}}\)-graded) module structure over the extended cohomological Hall algebra \(\mathcal{H}^{\geq 0}\), that is, there is a natural algebra map \[\mathcal{H}^{\geq 0}\to\operatorname{End}_{F}(\mathbb{V}^{\zeta})\,\] extending the representation of \(\mathcal{H}^{0}\) in Equation 5.10 and that of \(\mathcal{H}\) from Theorem 5.6.

Proof.: By the construction of \(\mathcal{G}^{i}_{\mathbf{a}}\), the proof is a straightforward generalization of that of Proposition 6.2.2 in [10].
We define the subspace of spherical generators \(\mathcal{H}^{1}\subset\mathcal{H}\) by \[\mathcal{H}^{1}=\bigoplus_{|\mathbf{d}|=1}\mathcal{H}_{\mathbf{d}}=\bigoplus_{i\in V_{Q_{Y}}}\mathcal{H}_{\mathbf{1}_{i}}\] where \(\mathbf{1}_{i}\in\mathbb{N}^{V_{Q_{Y}}}\) denotes the \(i^{th}\) standard basis vector, and the spherical subalgebra \[\mathcal{SH}=\mathcal{SH}(Y)=\langle\mathcal{H}^{1}\rangle\subset\mathcal{H}\,\] as the subalgebra generated by this subspace over the base ring (in the present setting given by \(F\)), as well as \(\mathcal{SH}^{\geq 0}=\mathcal{H}^{0}\ltimes\mathcal{SH}\). Note that we have an induced action \[\mathcal{SH}\to\operatorname{End}_{F}(\mathbb{V}^{\zeta})\, \tag{5.11}\] and similarly for \(\mathcal{SH}^{\geq 0}\). Let \(e_{i}=c_{1}(\mathcal{V}^{i}_{\mathbf{1}_{i}})\in\mathcal{H}_{\mathbf{1}_{i}}=H^{G_{\mathbf{1}_{i}}\times A}_{\bullet}(X_{\mathbf{1}_{i}},\varphi_{W_{\mathbf{1}_{i}}})\) denote the first Chern class of the one dimensional tautological bundle \(\mathcal{V}^{i}_{\mathbf{1}_{i}}\), and define the formal generating functional \[e^{i}(z)=\sum_{n=0}^{\infty}e^{i}_{n}z^{-n-1}\in\mathcal{SH}[\![z^{-1}]\!]\qquad\text{where}\qquad e^{i}_{n}=(e_{i})^{n}\in\mathcal{SH}\.\] The induced relations in \(\mathcal{SH}^{\geq 0}\) between the elements \(e^{j}_{m}\) and \(\phi^{i}_{n}\) are determined by the action of Equation 5.10 in the case \(\mathbf{a}=\mathbf{1}_{j}\), and the rational function \[P^{i}_{j}=P^{i}_{j}(z)=P^{i}_{\mathbf{1}_{j}}(\hbar_{1},\hbar_{2},\mathbf{x},e_{j},z)\ \in F(e_{j},z)\] is called the _bond factor_ in [10], where it is denoted \(\varphi^{(i\Rightarrow j)}(z)\). The definition of the action in Equation 5.10 determines relations between the generators summarized in terms of the generating functions as \[\phi^{i}(z)e^{j}(w)=e^{j}(w)\phi^{i}(z)P^{i}_{j}(z-w)\quad\in\mathcal{SH}^{\geq 0}(\!(z^{-1})\!)[\![w^{-1}]\!]. \tag{5.12}\] For the remainder of this section, we will additionally assume that the \(A\)-fixed subvariety \(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}}_{M},W^{\mathrm{f}}_{M})^{A}\) is given by a set \(\mathfrak{n}_{\mathbf{d}}\) of isolated fixed points, as in Hypothesis 5.7.
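As an illustration of the expected form of these structure functions, consider \(Y=\mathbb{C}^{3}\), where \(Q_{Y}\) has a single vertex with three loops; in the quiver Yangian literature (see [10]) the corresponding bond factor is \[\varphi(z)=\frac{(z+\hbar_{1})(z+\hbar_{2})(z+\hbar_{3})}{(z-\hbar_{1})(z-\hbar_{2})(z-\hbar_{3})}\,\qquad\hbar_{1}+\hbar_{2}+\hbar_{3}=0\,\] the structure function of the affine Yangian of \(\mathfrak{gl}_{1}\). We record this only for orientation, since the normalization of \(P^{i}_{j}\) in our conventions may differ from that of _loc. cit._ by overall signs and shifts of the arguments.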
In particular, under this assumption the pullback map induced as in Equation 5.2 gives an isomorphism \[\mathbb{V}^{\zeta}_{\mathbf{d}}\cong H^{A}_{\bullet}(\mathfrak{M}^{\mathrm{f},\zeta}_{\mathbf{d}},\varphi_{\overline{W}^{\mathrm{f}}_{\mathbf{d}}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\xrightarrow{\iota^{*}}H^{A}_{\bullet}(\mathfrak{n}_{\mathbf{d}},\varphi_{\overline{W}^{\mathrm{f}}_{\mathbf{d}}\circ\iota})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F=\bigoplus_{\lambda\in\mathfrak{n}_{\mathbf{d}}}H^{A}_{\bullet}(\mathrm{pt}_{\lambda})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\,\] where \(\iota:\mathfrak{n}_{\mathbf{d}}\to\mathfrak{M}^{\mathrm{f},\zeta}_{\mathbf{d}}\) denotes the inclusion of the fixed locus, for each \(\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}\), and letting \(F_{\lambda}\) denote a copy of the base field \(F\), we have an identification \[\bigoplus_{\lambda\in\mathfrak{n}_{\mathbf{d}}}F_{\lambda}\stackrel{{\cong}}{{\rightarrow}}\bigoplus_{\lambda\in\mathfrak{n}_{\mathbf{d}}}H^{A}_{\bullet}(\operatorname{pt}_{\lambda})\otimes_{H^{\bullet}_{A}(\operatorname{pt})}F\qquad\text{defined by}\qquad P\mapsto P\cap[\operatorname{pt}_{\lambda}]\,\] for each \(P\in F_{\lambda}\), so that we obtain a natural basis for the module \(\mathbb{V}_{\mathbf{d}}^{\zeta}\) given by \[\mathbb{V}_{\mathbf{d}}^{\zeta}=\bigoplus_{\lambda\in\mathfrak{n}_{\mathbf{d}}}F_{\lambda}\qquad\text{and thus}\qquad\mathbb{V}^{\zeta}=\bigoplus_{\lambda\in\mathfrak{n}}F_{\lambda}\qquad\text{for}\qquad\mathfrak{n}=\bigsqcup_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}\mathfrak{n}_{\mathbf{d}}\.\] In particular, there is a natural pairing \[(\cdot,\cdot):\mathbb{V}^{\zeta}\otimes_{F}\mathbb{V}^{\zeta}\to F\qquad\text{defined by}\qquad([\operatorname{pt}_{\lambda}],[\operatorname{pt}_{\mu}])=\delta_{\lambda,\mu}\mathrm{Eu}_{A}(T_{\lambda})\, \tag{5.13}\] where \(T_{\lambda}\) denotes the tangent space to \(\mathfrak{M}_{\mathbf{d}}^{\zeta}(Q_{M}^{\mathrm{f}},W_{M}^{\mathrm{f}})\) at the fixed point \(\lambda\in\mathfrak{n}\) and \(\mathrm{Eu}_{A}\) denotes the \(A\)-equivariant Euler class. Let \(\mathcal{SH}^{\mathrm{op}}\) denote the opposite algebra of \(\mathcal{SH}\) and note that we have the following proposition:

_Proposition 5.10_.: There exists a natural right action \[\mathcal{SH}^{\mathrm{op}}\rightarrow\operatorname{End}_{F}(\mathbb{V}^{\zeta})\qquad\text{defined by}\qquad f=e^{\mathrm{op}}\mapsto\rho(e)^{*} \tag{5.14}\] for each \(f=e^{\mathrm{op}}\in\mathcal{SH}^{\mathrm{op}}\), where \(\rho:\mathcal{SH}\rightarrow\operatorname{End}_{F}(\mathbb{V}^{\zeta})\) is the representation of Theorem 5.6 and \((\cdot)^{*}:\operatorname{End}_{F}(\mathbb{V}^{\zeta})\rightarrow\operatorname{End}_{F}(\mathbb{V}^{\zeta})\) denotes the adjoint with respect to the pairing of Equation 5.13 above.

Proof.: This follows immediately from the proof of Proposition 3.6 of [13], _mutatis mutandis_.

In particular, we define the formal generating functional \[f^{i}(z)=\sum_{n=0}^{\infty}f^{i}_{n}z^{-n-1}\in\mathcal{SH}^{\mathrm{op}}[\![z^{-1}]\!]\qquad\text{where}\qquad f^{i}_{n}=(e^{i}_{n})^{\mathrm{op}}\in\mathcal{SH}^{\mathrm{op}}\,\] and note that the induced relations between the generators \(f^{j}_{m}\) and \(\phi^{i}_{n}\) are summarized in terms of the generating functions as \[\phi^{i}(z)f^{j}(w)=f^{j}(w)\phi^{i}(z)P^{i}_{j}(z-w)^{-1}\quad\in\mathcal{SH}^{\geq 0}(\!(z^{-1})\!)[\![w^{-1}]\!]. \tag{5.15}\]
Finally, we consider the action by the endomorphisms \[\tilde{\psi}^{i,j}_{n,m}=[\rho(e^{i}_{n}),\rho(f^{j}_{m})]\ \in\operatorname{End}_{F}(\mathbb{V}^{\zeta})\.\] We assume that these endomorphisms vanish for \(i\neq j\), and moreover that for \(i=j\) the resulting endomorphism \(\tilde{\psi}^{i}_{n+m}\) depends only on the sum \(m+n\) and is determined by \[\tilde{\psi}^{i}(z):=z^{-s_{i}}+\sum_{n=s_{i}}^{\infty}\tilde{\psi}^{i}_{n}z^{-n-1}=Q^{i}_{M}(z)c_{-1/z}(\mathcal{F}^{i}_{\mathbf{d}})\quad\in\ z^{-s_{i}}\mathrm{End}_{H^{\bullet}_{G_{\mathbf{d}}\times A}(\operatorname{pt})}(H^{G_{\mathbf{d}}\times A}_{\bullet}(X^{\mathrm{f},\zeta}_{\mathbf{d}},\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}})\otimes_{H^{\bullet}_{A}(\operatorname{pt})}F)[\![z^{-1}]\!]\,\] in terms of a rational function \[Q^{i}_{M}=Q^{i}_{M}(z)=Q^{i}_{M}(\hbar_{1},\hbar_{2},\mathbf{x},e_{i},z)\ \in F(e_{i},z) \tag{5.16}\] with coefficients in the first Chern class \(e_{i}\) of \(\mathcal{V}_{\mathbf{1}_{i}}\) over the base field \(F\), where the map is given by taking the power series expansion at \(z=\infty\). The rational function \(Q^{i}_{M}\in F(e_{i},z)\) encodes the shift of the associated shifted quiver Yangian and is called the ground state charge function in [10], where it is denoted \({}^{\#}\psi^{(i)}_{0}(z)\). Typically, \(Q^{i}_{M}\) admits a factorization of the form \[Q^{i}_{M}(z)=\frac{\prod_{l=1}^{s_{i}^{+}}(z-q^{+}_{i,l}(\hbar_{1},\hbar_{2},\mathbf{x},e_{i}))}{\prod_{l=1}^{s_{i}^{-}}(z-q^{-}_{i,l}(\hbar_{1},\hbar_{2},\mathbf{x},e_{i}))}\] and we call the integer \(s_{i}=s_{i}^{+}-s_{i}^{-}\) the shift of the corresponding root \(i\in I\) for the object \(M\). We define the shifted Cartan algebra \(\mathcal{H}^{0}_{M}\) as the polynomial algebra \[\mathcal{H}^{0}_{M}=\mathcal{H}^{0}_{M}(Y)=F[\psi^{i}_{n}]^{i\in I}_{n\in\mathbb{Z}_{\geq s_{i}}}\qquad\text{and let}\qquad\psi^{i}(z)=z^{-s_{i}}+\sum_{n=s_{i}}^{\infty}\psi^{i}_{n}z^{-n-1}\quad\in z^{-s_{i}}\mathcal{H}^{0}_{M}[\![z^{-1}]\!]\] be the corresponding formal generating functional for the variables \(\psi^{i}_{n}\).
Moreover, we obtain a natural action \(\mathcal{H}^{0}_{M}\to\operatorname{End}_{F}(\mathbb{V}^{\zeta})\) of the shifted Cartan algebra \(\mathcal{H}^{0}_{M}\) on \(\mathbb{V}^{\zeta}\) defined in components \[\mathcal{H}^{0}_{M}\to\operatorname{End}_{H^{\bullet}_{G_{\mathbf{d}}\times A}(\operatorname{pt})}(H^{G_{\mathbf{d}}\times A}_{\bullet}(X^{\mathrm{f},\zeta}_{\mathbf{d}},\varphi_{W^{\mathrm{f}}_{M,\mathbf{d}}})\otimes_{H^{\bullet}_{A}(\operatorname{pt})}F)\qquad\text{by}\qquad\psi^{i}(z)\mapsto\tilde{\psi}^{i}(z)=Q^{i}_{M}(z)c_{-1/z}(\mathcal{F}^{i}_{\mathbf{d}})\. \tag{5.17}\] By construction, the relations between the images \(\tilde{e}^{i}_{n}=\rho(e^{i}_{n})\) of the generators \(e^{i}_{n}\) and the images \(\tilde{f}^{j}_{m}\) of the generators \(f^{j}_{m}\) under the right action of Equation 5.14 are summarized in terms of generating functions as \[[\tilde{e}^{i}(z),\tilde{f}^{j}(w)]=\delta_{i,j}\frac{\tilde{\psi}^{i}(z)-\tilde{\psi}^{i}(w)}{z-w}\quad\in\operatorname{End}_{F}(\mathbb{V}^{\zeta})(\!(z^{-1})\!)[\![w^{-1}]\!]\.\] We now recall the definition of (shifted) quiver Yangians given in [11] (and [10]):

_Definition 5.11_.: The _shifted quiver Yangian_ \(\mathcal{Y}_{M}=\mathcal{Y}_{M}(Y)\) is the associative algebra over \(F\) generated by elements \(e^{i}_{n},f^{i}_{n}\) and \(\psi^{i}_{m}\) for \(i\in I\), \(n\in\mathbb{N}\), and \(m\in\mathbb{Z}_{\geq s_{i}}\), subject to the relations \[\psi^{i}(z)e^{j}(w)=e^{j}(w)\psi^{i}(z)P^{i}_{j}(z-w)\quad\in\mathcal{Y}_{M}(\!(z^{-1})\!)[\![w^{-1}]\!]\tag{5.18}\] \[\psi^{i}(z)f^{j}(w)=f^{j}(w)\psi^{i}(z)P^{i}_{j}(z-w)^{-1}\quad\in\mathcal{Y}_{M}(\!(z^{-1})\!)[\![w^{-1}]\!]\tag{5.19}\] \[[e^{i}(z),f^{j}(w)]=\delta_{i,j}\frac{\psi^{i}(z)-\psi^{i}(w)}{z-w}\quad\in\mathcal{Y}_{M}[\![z^{-1},w^{-1}]\!]\tag{5.20}\] \[\psi^{i}(z)\psi^{j}(w)=\psi^{j}(w)\psi^{i}(z)\quad\in\mathcal{Y}_{M}[\![z^{-1},w^{-1}]\!]\tag{5.21}\] \[e^{i}(z)e^{j}(w)=e^{j}(w)e^{i}(z)P^{i}_{j}(z-w)\quad\in\mathcal{Y}_{M}[\![z^{-1},w^{-1}]\!]\tag{5.22}\] \[f^{i}(z)f^{j}(w)=f^{j}(w)f^{i}(z)P^{i}_{j}(z-w)^{-1}\quad\in\mathcal{Y}_{M}[\![z^{-1},w^{-1}]\!]\tag{5.23}\] together with the Serre relations between the various \(e^{i}_{n}\) generators, and similarly between the \(f^{i}_{n}\).

Under the above hypotheses, we make the following conjecture:

_Conjecture 5.12_.: The maps of Equations 5.11, 5.14 and 5.17 define a representation \[\mathcal{Y}_{M}(Y)\to\operatorname{End}_{F}(\mathbb{V}^{\zeta}(M)^{A})\] of the shifted quiver Yangian \(\mathcal{Y}_{M}(Y)\) on \(\mathbb{V}^{\zeta}(M)^{A}\).

In particular, it is implicit in the conjecture that the restriction of the representation to the positive half \(\mathcal{Y}^{+}(Y,M)\) agrees with the restriction to the spherical subalgebra \(\mathcal{SH}\) of the representation of \(\mathcal{H}\) constructed in Theorem 5.6, as this is the definition of the representation in Equation 5.11. In the case \(Y=\mathbb{C}^{3}\), the above conjecture holds in the sense that the above construction is equivalent to that of [13], which was also obtained in unpublished work of Feigin-Tsymbaliuk recalled in [14] and [15], although the shift is trivial in all of these cases. Several other examples with non-trivial shifts were also explored in [15] in this framework. Moreover, the cohomological analogue of the recent results of [13], together with the above construction, appears to imply the above conjecture, but we leave a detailed discussion of this for future work.

## 6. Yangians of threefolds and vertex algebras of divisors
### Perverse coherent systems, Donaldson-Thomas theory, and quiver Yangians

In this section, we let \(M=\mathcal{O}_{Y}[1]\) be the structure sheaf of the threefold \(Y\) shifted down in cohomological degree by \(1\). As we observed in Example 4.40, the extension group \({}_{\infty}\Sigma^{1}_{\infty}=\operatorname{Ext}^{1}(\mathcal{O}_{Y}[1],\mathcal{O}_{Y}[1])=0\) is trivial, so that the only possible framing structure for each rank \(d_{\infty}=r\) is the trivial one \(\operatorname{f}_{r}=\{0\}\). Moreover, the notion of \(\mathcal{F}^{\operatorname{cs}}_{\operatorname{f}_{r}}\)-framed perverse coherent extension is closely related to the notion of perverse coherent system, in the sense of [12], as we explain below. We begin by recalling the definition of perverse coherent systems from _loc. cit._:

_Definition 6.1_.: A _perverse coherent system_ on \(Y\) is a tuple \((H,W,s)\) where \(H\in\operatorname{PervCoh}(Y)\) is a perverse coherent sheaf and \[s:\mathcal{O}_{Y}\otimes W\to H\,\] is a map of complexes of quasicoherent sheaves. A perverse coherent system on \(Y\) is called _compactly supported_ if the underlying perverse coherent sheaf satisfies \(H\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)\).

Let \(\mathfrak{M}_{\overline{\operatorname{Per}}_{\operatorname{cs}}(Y)}\) denote the moduli stack of compactly supported perverse coherent systems on \(Y\), and note there is a natural decomposition \[\mathfrak{M}_{\overline{\operatorname{Per}}_{\operatorname{cs}}(Y)}=\bigsqcup_{r\in\mathbb{N}}\mathfrak{M}_{\overline{\operatorname{Per}}_{\operatorname{cs}}(Y)}^{r}\,\] where \(\mathfrak{M}_{\overline{\operatorname{Per}}_{\operatorname{cs}}(Y)}^{r}\) denotes the component for which the vector space \(W\) is of dimension \(r\in\mathbb{N}\). Recall that we observed in Example 4.40 that the extension group \[{}_{\infty}\Sigma^{1}_{\infty}=\operatorname{Ext}^{1}(\mathcal{O}_{Y}[1],\mathcal{O}_{Y}[1])=0\,\] so that for each rank \(d_{\infty}=r\in\mathbb{N}\) there is a unique trivial framing structure \(\operatorname{f}_{r}=\{0\}\). We can now state the desired equivalence:

_Proposition 6.2_.: There is an equivalence of algebraic \(\mathbb{K}\)-stacks \[\mathfrak{M}^{\mathcal{F}^{\operatorname{cs}}_{\operatorname{f}_{r}}}(Y,\mathcal{O}_{Y}[1])\xrightarrow{\cong}\mathfrak{M}_{\overline{\operatorname{Per}}_{\operatorname{cs}}(Y)}^{r}\,\] where we recall that \(\mathcal{F}^{\operatorname{cs}}_{\operatorname{f}_{r}}\) is as in Equation 4.35.

Proof.: By definition, the moduli stack \(\mathfrak{M}^{\mathcal{F}^{\operatorname{cs}}_{\operatorname{f}_{r}}}(Y,\mathcal{O}_{Y}[1])\) parameterizes perverse coherent extensions of \(\mathcal{O}_{Y}[1]\) equipped with an isomorphism of the underlying iterated extension of \(\mathcal{O}_{Y}[1]\) with \(\mathcal{O}_{Y}\otimes W[1]\) for \(W\) a vector space of rank \(r\), by Example 4.39.
Further, recall from Example 4.40 that we observed the extension groups between \(\mathcal{O}_{Y}[1]\) and the simple objects \(F_{i}\in\operatorname{PervCoh}_{\operatorname{cs}}(Y)\) are given by \[{}_{j}\Sigma^{1}_{\infty}=\operatorname{Ext}^{1}(F_{j},\mathcal{O}_{Y}[1])=0\qquad{}_{\infty}\Sigma^{1}_{j}=\operatorname{Ext}^{1}(\mathcal{O}_{Y}[1],F_{j})=\begin{cases}\mathbb{K}&\text{if }j=0\\ \{0\}&\text{otherwise}\end{cases}\,\] where the only non-trivial extension class is given by \[\mathcal{O}_{C}\to\mathcal{I}_{C}[1]\to\mathcal{O}_{Y}[1]\qquad\text{induced by}\qquad\mathcal{I}_{C}[1]\to\mathcal{O}_{Y}[1]\to\mathcal{O}_{C}[1]\.\] Thus, the only possible extensions of \(\mathcal{O}_{Y}[1]\) by the generators \(F_{i}\) are given by maps \(\mathcal{O}_{Y}\to F_{0}\). Similarly, note that in this case the potential \(W^{\mathrm{f}_{r}}_{M}\) is independent of the framing arrows, so there is a canonical map of \(\mathbb{K}\)-stacks \[\mathfrak{M}^{\mathcal{F}^{\mathrm{cs}}_{\mathrm{f}_{r}}}(Y,\mathcal{O}_{Y}[1])\to\mathfrak{M}(Y)\,\] which corresponds to taking the subobject given by the underlying iterated extension of the generators \(F_{i}\in\mathrm{PervCoh}_{\mathrm{cs}}(Y)\). For each compactly supported perverse coherent sheaf \(H\in\mathfrak{M}(Y)(\mathbb{K})\), the observations above imply that the only possible perverse coherent extension with subobject \(H\) is given by a map from the underlying iterated extension of \(\mathcal{O}_{Y}\), which has a fixed isomorphism with the object \(\mathcal{O}_{Y}\otimes W\), to the object \(H\). This provides the desired map \(s:\mathcal{O}_{Y}\otimes W\to H\).

As a corollary of Proposition 6.2 together with Theorem 4.28, we obtain that there is a canonical monad presentation for perverse coherent systems, given by that of Example 4.40. In the case of the resolved conifold studied in [10] and [11], the monad presentation is that of Equation 4.39, though the results also apply in the more general setting studied in [10]. Indeed, the corresponding framed quivers with potential \((Q^{\mathrm{f}_{r}}_{\mathcal{O}_{Y}[1]},W^{\mathrm{f}_{r}}_{\mathcal{O}_{Y}[1]})\) in this case are precisely those studied in _loc. cit._.
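Heuristically, and without attempting to fix the precise chamber here, this equivalence recovers familiar objects in the simplest cases: for \(r=1\) and a stability condition in the Donaldson-Thomas chamber, a stable compactly supported perverse coherent system should be a surjection \(s:\mathcal{O}_{Y}\to\mathcal{O}_{Z}\) onto the structure sheaf of a compactly supported subscheme \(Z\subset Y\), with kernel the ideal sheaf \(\mathcal{I}_{Z}\), so that the corresponding moduli spaces recover the classical Donaldson-Thomas moduli of ideal sheaves, in keeping with the identification of generating functions recalled below.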
For each choice of compatible stability condition \(\zeta\in\mathbb{R}^{V_{Q_{Y}}}\), the corresponding homology module \(\mathbb{V}^{\zeta}_{Y}:=\mathbb{V}^{\zeta}(\mathcal{O}_{Y}[1])\) defined in Section 5.4 is given by \[\mathbb{V}^{\zeta}_{Y}=\bigoplus_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}H^{G_{\mathbf{d}}(Q_{Y})\times A}_{\bullet}(X^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}_{r}}_{\mathcal{O}_{Y}[1]}),\varphi_{W^{\mathrm{f}_{r}}_{\mathcal{O}_{Y}[1]}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\, \tag{6.1}\] or equivalently, under the hypotheses of Section 5.5, which hold in our examples of interest, \[\mathbb{V}^{\zeta}_{Y}=\bigoplus_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}H^{A}_{\bullet}(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}_{r}}_{\mathcal{O}_{Y}[1]}),\varphi_{\overline{W}^{\mathrm{f}_{r}}_{\mathcal{O}_{Y}[1]}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\.\] We also define the corresponding generating functional, as in Equation 5.8, by \[\mathcal{Z}^{\zeta}_{Y}(\mathbf{q})=\sum_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}\mathbf{q}^{\mathbf{d}}\chi(\mathbb{V}^{\zeta}_{\mathbf{d}}(\mathcal{O}_{Y}[1]))\ \in\mathbb{Z}[\![\mathbf{q}]\!]\,\] where we recall the shorthand notation \(\mathbf{q}^{\mathbf{d}}=\prod_{i\in V_{Q_{Y}}}q^{d_{i}}_{i}\) and \(\mathbb{Z}[\![\mathbf{q}]\!]=\mathbb{Z}[\![q_{i}]\!]_{i\in V_{Q_{Y}}}\). The results of [11] and [10], in particular Proposition 3.17 and Theorem 3.18 of the latter, imply that for stability conditions \(\zeta_{\mathrm{DT}},\zeta_{\mathrm{PT}},\zeta_{\mathrm{NCDT}}\) in appropriately chosen chambers, the corresponding homology modules are given by the (categorified) Donaldson-Thomas series [13], Pandharipande-Thomas series [13], and non-commutative Donaldson-Thomas series [10], that is, we have \[\mathcal{Z}^{\zeta_{\mathrm{DT}}}_{Y}=\mathcal{Z}^{\mathrm{DT}}_{Y}\qquad\mathcal{Z}^{\zeta_{\mathrm{PT}}}_{Y}=\mathcal{Z}^{\mathrm{PT}}_{Y}\qquad\mathcal{Z}^{\zeta_{\mathrm{NCDT}}}_{Y}=\mathcal{Z}^{\mathrm{NCDT}}_{Y}\,\] where the latter are defined as the generating functions of the corresponding enumerative invariants of the threefold \(Y\). In fact, it was proved in [11] using the results of [13] that these invariants can be computed via equivariant localization in terms of signed counts of fixed points, so that we have \[\mathcal{Z}_{Y}^{\zeta}(\mathbf{q})=\sum_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}\mathbf{q}^{\mathbf{d}}(-1)^{k_{\mathbf{d}}}|\mathfrak{M}_{\mathbf{d}}^{\zeta}(Q_{\mathcal{O}_{Y}[1]}^{\mathrm{f}_{r}},W_{\mathcal{O}_{Y}[1]}^{\mathrm{f}_{r}})^{A}|\,\] in keeping with Corollary 5.8, which was written following _loc. cit._.

_Example 6.3_.: Let \(Y=\mathbb{A}^{3}\) and \(r=1\), so that the framed quiver with potential \((Q_{\mathcal{O}_{\mathbb{A}^{3}}}^{\mathrm{f}_{1}},W_{\mathcal{O}_{\mathbb{A}^{3}}}^{\mathrm{f}_{1}})\) is given by that on the left in Equation 4.38. It is well known that for a generic stability condition \(\zeta\), the corresponding moduli space of \(\zeta\)-stable representations such that \(\dim V_{0}=n\) is given by \[\mathfrak{M}_{n}^{\zeta}(Q_{\mathcal{O}_{\mathbb{A}^{3}}}^{\mathrm{f}_{1}},W_{\mathcal{O}_{\mathbb{A}^{3}}}^{\mathrm{f}_{1}})=\mathrm{Hilb}_{n}(\mathbb{A}^{3})\,\] the Hilbert scheme of points on \(\mathbb{A}^{3}\).
The set of \(T\)-fixed points \(\mathfrak{n}_{n}\) of \(\mathrm{Hilb}_{n}(\mathbb{A}^{3})\) is labeled by the set of plane partitions of \(n\), so that we have \[\mathbb{V}_{\mathbb{C}^{3}}^{\zeta}\cong\bigoplus_{n\in\mathbb{N}}\bigoplus_{\lambda\in\mathfrak{n}_{n}}F_{\lambda}\qquad\text{where}\qquad F_{\lambda}=F\] is a copy of the base field \(F\) for each \(n\in\mathbb{N}\) and \(\lambda\in\mathfrak{n}_{n}\). Further, the results of [1] in this case imply that the DT series is given by \[\mathcal{Z}_{\mathbb{C}^{3}}^{\zeta}(q_{0})=\prod_{k=1}^{\infty}\frac{1}{(1-(-q_{0})^{k})^{k}}\quad\in\mathbb{Z}[\![q_{0}]\!]\.\] We also introduce notation for the MacMahon function \[M(x,q)=\prod_{k=1}^{\infty}\frac{1}{(1-xq^{k})^{k}}\qquad\text{and}\qquad M(q)=M(1,q)=\prod_{k=1}^{\infty}\frac{1}{(1-q^{k})^{k}}\,\] so that we can write simply \[\mathcal{Z}_{\mathbb{C}^{3}}^{\zeta}(q)=M(q)\qquad\text{for}\qquad q=-q_{0}\.\]

_Example 6.4_.: The approach to studying the DT series of more general threefolds via quivers with potential was pioneered by Szendroi in the seminal paper [10] in the case \(Y_{1,1}=|\mathcal{O}_{\mathbb{P}^{1}}(-1)^{\oplus 2}|\), and the framed quiver with potential considered in _loc. cit._ is that on the right in Equation 4.38. In this case, it was conjectured in _loc. cit._ based on extensive computational evidence, and proved in [20], that the non-commutative DT series of \(Y_{1,1}\) is given by \[\mathcal{Z}_{Y_{1,1}}^{\mathrm{NCDT}}(q_{0},q_{1})=\prod_{k=1}^{\infty}\frac{(1+q_{0}^{k}(-q_{1})^{k-1})^{k}(1+q_{0}^{k}(-q_{1})^{k+1})^{k}}{(1-q_{0}^{k}(-q_{1})^{k})^{2k}}\.\] If we let \(q=-q_{0}q_{1}\) and \(x=q_{1}\) then this expression can equivalently be written \[\mathcal{Z}_{Y_{1,1}}^{\mathrm{NCDT}}(q,x)=\prod_{k=1}^{\infty}\frac{(1-x^{-1}q^{k})^{k}(1-xq^{k})^{k}}{(1-q^{k})^{2k}}=M(1,q)^{2}M(x^{-1},q)^{-1}M(x,q)^{-1}. \tag{6.2}\]

_Example 6.5_.: Similar results were obtained in [1] for toric quotient singularities. In particular, for \(Y_{2,0}=|\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}(-2)|\) with corresponding framed quiver with potential that in the middle of Equation 4.38, the formula for the non-commutative DT series from _loc. cit._ is given by \[\mathcal{Z}_{Y_{2,0}}^{\mathrm{NCDT}}(q,x)=\prod_{k=1}^{\infty}\frac{1}{(1-q^{k})^{2k}(1-x^{-1}q^{k})^{k}(1-xq^{k})^{k}}=M(1,q)^{2}M(x^{-1},q)M(x,q)\, \tag{6.3}\] where we again let \(q=-q_{0}q_{1}\) and \(x=q_{1}\). More generally, for \(Y_{m,0}=\tilde{A}_{m-1}\times\mathbb{A}^{1}\), the analogous formula from _loc. cit._ is given by \[\mathcal{Z}^{\mathrm{NCDT}}_{Y_{m,0}}(q,x_{i})=M(1,q)^{m}\prod_{1\leq a\leq b\leq m-1}M(x_{[a,b]}^{-1},q)M(x_{[a,b]},q)\, \tag{6.4}\] where we let \(q=-q_{0}q_{1}\dots q_{m-1}\), \(x_{i}=q_{i}\) for \(i=1,\dots,m-1\), and \(x_{[a,b]}=x_{a}x_{a+1}\dots x_{b}\).

The cohomological Hall algebra of Kontsevich-Soibelman [14] was defined so that it naturally acts on the cohomology of DT theory type moduli spaces (see for example the discussion in [15] and the review [16]), and in this case the results of Section 5.4 simply reproduce this fact:

_Corollary 6.6_.: There exists a natural representation \[\rho_{Y}:\mathcal{H}(Y)\to\mathrm{End}_{F}(\mathbb{V}_{Y}^{\zeta}) \tag{6.5}\] of the cohomological Hall algebra \(\mathcal{H}(Y)\) on the categorified DT-type module \(\mathbb{V}_{Y}^{\zeta}\) of Equation 6.1, for each compatible stability condition \(\zeta\in\mathbb{R}^{V_{Q_{Y}}}\).
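As a numerical consistency check on Example 6.3, note that expanding the MacMahon function gives \[M(q)=\prod_{k=1}^{\infty}\frac{1}{(1-q^{k})^{k}}=1+q+3q^{2}+6q^{3}+13q^{4}+\dots\,\] whose coefficients count plane partitions: for instance, the three plane partitions of \(2\) consist of a box at the origin together with a single box in one of the three coordinate directions, matching the term \(3q^{2}\) and hence the three monomial ideals \((x^{2},y,z),(x,y^{2},z),(x,y,z^{2})\) of colength \(2\) corresponding to the torus fixed points of \(\mathrm{Hilb}_{2}(\mathbb{A}^{3})\).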
The extension of the action of Corollary 6.6 to a larger associative algebra containing \(\mathcal{H}(Y)\) as a positive subalgebra in some triangular decomposition was widely anticipated in this setting. The basic idea was already present in the original papers of Lusztig [13] and Ringel [14], and was proposed explicitly in the context of cohomological DT theory by Soibelman [15] (see also Section 7.3 of the review [16]). Moreover, by the _dimensional reduction_ equivalence between the critical and preprojective cohomological Hall algebra in some examples, established in [11] and the appendix to [14], the results of [13], [15] and more recently [17] and [17] suggested a relationship between \(\mathcal{H}(Y_{m,0})\) and affine Yangian type quantum groups. Motivated by related considerations in string theory developed in the series of papers [15], [15], and [15], the following conjecture was formulated by Costello. _Conjecture 6.7_.: [15] Let \(Y_{m,n}\to X_{m,n}\) be a resolution of the affine, toric, Calabi-Yau singularity \(X_{m,n}=\{xy-z^{m}w^{n}\}\). Then there exists a natural representation \[\rho:\mathcal{Y}_{-\delta}(\widehat{\mathfrak{gl}}_{m|n})\to\mathrm{End}_{F}(\mathbb{V}_{Y_{m,n}}^{\zeta_{\mathrm{NCDT}}})\,\] of the \(-\delta\) shifted affine Yangian \(\mathcal{Y}_{-\delta}(\widehat{\mathfrak{gl}}_{m|n})\) of \(\mathfrak{gl}_{m|n}\) on \(\mathbb{V}_{Y_{m,n}}^{\zeta_{\mathrm{NCDT}}}\), inducing an isomorphism \[\rho(\mathcal{Y}_{-\delta}(\widehat{\mathfrak{gl}}_{m|n})_{+})\xrightarrow{\cong}\rho_{Y_{m,n}}(\mathcal{SH}(Y_{m,n}))\,\] and such that \(\mathbb{V}_{Y_{m,n}}^{\zeta_{\mathrm{NCDT}}}\) is identified with the vacuum module \(V_{m,n}\) for \(\mathcal{Y}_{-\delta}(\widehat{\mathfrak{gl}}_{m|n})\). Our expectation is that the proof of this conjecture will follow from Conjecture 5.12 together with the identification \[\mathcal{Y}(Y_{m,n}):=\mathcal{Y}_{\mathcal{O}[1]}(Y_{m,n})\cong\mathcal{Y}_{-\delta}(\widehat{\mathfrak{gl}}_{m|n})\,\] so that the induced isomorphism of the positive half with \(\mathcal{SH}\) follows by construction: \[\mathcal{Y}_{-\delta}(\widehat{\mathfrak{gl}}_{m|n})_{+}\cong\mathcal{Y}(Y_{m,n})_{+}:=\mathcal{SH}(Y_{m,n})\.\] Indeed, our understanding is that some form of this conjecture is essentially proved along these lines in [1] and [1], and we hope that the present paper will help to develop a more robust translation of their work to the language of geometric representation theory. The conjecture was also checked for \(\mathfrak{gl}_{1}\) in [14], following several related results, using precisely the approach described here. Indeed, Section 5.6 was written following _loc. cit._, as well as the references therein and the many others indicated here. Closely related results were also obtained in [11], [12] and [11]. There are also several more conceptual approaches to understanding the appearance of Yangians in this setting, including [10] and [13]. We hope to better understand the results presented here in such terms, but this is not addressed in the present work. As a consequence of Conjecture 6.7, we expect equality between the non-commutative DT series of the threefold \(Y_{m,n}\) and the Poincare series of the vacuum module \(V_{m,n}\) for \(\mathcal{Y}_{-\delta}(\widehat{\mathfrak{gl}}_{m|n})\): _Corollary 6.8_.: There is a natural grading on \(V_{m,n}\) such that \[\mathcal{Z}^{\text{NCDT}}_{Y_{m,n}}(q)=P_{q}(V_{m,n})\ \in\mathbb{Z}[\![q]\!]\,\] where \(P_{q}\) denotes the Poincare series. 
Indeed, in the case \(n=0\), the PBW theorem for the affine Yangian of \(\mathfrak{sl}_{m}\) proved in [10] gives a filtration on \(\mathcal{Y}(\widehat{\mathfrak{sl}}_{m})\) such that we have identifications of the associated graded \[\text{gr }\mathcal{Y}(\widehat{\mathfrak{sl}}_{m})\cong\text{Sym}^{\bullet}(\widehat{\mathfrak{sl}}_{m}[u^{\pm 1},v])\qquad\text{and}\qquad\text{gr }\mathcal{Y}_{\delta}(\widehat{\mathfrak{sl}}_{m})\cong\text{Sym}^{\bullet}(\mathfrak{sl}_{m}[t_{1},t_{2}])\,\] where \(\widehat{\mathfrak{sl}}_{m}[u^{\pm 1},v]\) denotes the universal central extension of \(\mathfrak{sl}_{m}[u^{\pm 1},v]\) and \(\mathfrak{sl}_{m}[t_{1},t_{2}]\) denotes the subalgebra spanned by polynomials in \(t_{1}=u\) and \(t_{2}=u^{-1}v\) with coefficients in \(\mathfrak{sl}_{m}\), so that the corresponding associated graded of the vacuum module \(V^{\mathfrak{sl}}_{m,0}\) for \(\mathfrak{sl}_{m}\) is given by \[\text{gr }V^{\mathfrak{sl}}_{m,0}\cong\bigotimes_{r=0}^{\infty}\text{Sym}^{\bullet}(\widehat{\mathfrak{sl}}_{m}[u^{\pm 1}])\otimes_{\text{Sym}^{\bullet}(u^{-r}\mathfrak{sl}_{m}[u])}\mathbb{C}\,\] the Poincare polynomial of which is given by \[P_{q}(V^{\mathfrak{sl}}_{m,0})=\prod_{r=0}^{\infty}\prod_{k=1}^{\infty}\frac{1}{(1-q^{k+r})^{m^{2}-1}}=\left(\prod_{k=1}^{\infty}\frac{1}{(1-q^{k})^{k}}\right)^{m^{2}-1}=M(q)^{m^{2}-1}\.\] Similarly, this implies that in the \(\mathfrak{gl}_{m}\) case we have \[P_{q}(V_{m,0})=M(1,q)^{m^{2}}\, \tag{6.6}\] which evidently matches the formula from Equation 6.4 specialized to \(x_{i}=1\). Moreover, the additional refinement given by the variables \(\{x_{i}\}\) agrees with the refinement by the grading on \(V_{m,0}\) under the Cartan \(\mathfrak{h}\subset\mathfrak{sl}_{m}\). More generally, we conjecture that the modules induced by the construction of Theorem 5.6, and its extension outlined in Section 5.6, in the case of Example 4.49 lead to a family of modules \[\mathbb{V}^{\zeta}_{\alpha,\beta,\gamma}=\bigoplus_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}H^{A}_{\bullet}(\mathfrak{M}^{\zeta}_{\mathbf{d}}(Q^{\mathrm{f}_{\alpha,\beta,\gamma}}_{M}),\varphi_{\overline{W}^{\mathrm{f}_{\alpha,\beta,\gamma}}_{M,\mathbf{d}}})\otimes_{H^{\bullet}_{A}(\text{pt})}F\] over the affine Yangian of \(\mathfrak{gl}_{1}\), analogous to those constructed in the closely related setting of modules over the quantum toroidal algebra in [11], where \(\mathrm{f}_{\alpha,\beta,\gamma}\) is the framing structure on the extended quiver described in Example 4.49. Their natural analogue in this setting is the following: _Conjecture 6.9_.: There exists a natural representation \[\rho:\mathcal{Y}_{-\delta}(\widehat{\mathfrak{gl}}_{1})\to\text{End}_{F}(\mathbb{V}^{\zeta_{\text{NCDT}}}_{\alpha,\beta,\gamma})\] such that \(\mathbb{V}^{\zeta_{\text{NCDT}}}_{\alpha,\beta,\gamma}\) is identified with the cohomological variant of the MacMahon module \(\mathcal{M}_{\alpha,\beta,\gamma}(u)\) constructed in [11].

### Perverse coherent extensions of divisors, Vafa-Witten theory, and vertex algebras

In this section, we explain the application of our results in the case that \(M\) is given by the structure sheaf of a divisor. 
Let \(S\) be an effective toric Cartier divisor in \(Y\), \(S^{\rm red}\) the underlying reduced divisor, and \(\mathfrak{D}_{S}\) its set of irreducible components, so that we have \[S^{\rm red}=\bigcup_{d\in\mathfrak{D}_{S}}S_{d}\qquad\quad\mathcal{O}_{S^{\rm red}}^{\rm ss}=\bigoplus_{d\in\mathfrak{D}_{S}}\mathcal{O}_{S_{d}}\qquad\text{and}\qquad[S]=\sum_{d\in\mathfrak{D}_{S}}r_{d}[S_{d}]\] for some multiplicities \(r_{d}\in\mathbb{N}\) defined for each \(d\in\mathfrak{D}_{S}\). Let \(Q_{\mathcal{O}_{S^{\rm red}}^{\rm ss}[1]}\) denote the extended quiver corresponding to the object \(M=\mathcal{O}_{S^{\rm red}}^{\rm ss}[1]\) and note that the data of a Jordan-Holder filtration on \(\mathcal{O}_{S}\) with subquotients given by the objects \(\mathcal{O}_{S_{d}}\) determines a framing structure \({\rm f}_{S}\) of rank \({\bf r}_{S}=(r_{d})_{d\in\mathfrak{D}_{S}}\), where we identify \(\mathfrak{D}_{S}\) with the set of framing nodes of the extended quiver \(Q_{S}\). Thus, we can consider the stack of \({\rm f}_{S}\)-framed perverse coherent extensions of \(\mathcal{O}_{S^{\rm red}}^{\rm ss}[1]\) \[\mathfrak{M}(Y,S):=\mathfrak{M}^{{\rm f}_{S}}(Y,\mathcal{O}_{S^{\rm red}}^{\rm ss}[1])\qquad\text{as well as}\qquad\mathfrak{M}^{0}(Y,S):=\mathfrak{M}^{0_{S}}(Y,\mathcal{O}_{S^{\rm red}}^{\rm ss}[1])\,\] the stack of trivially framed perverse coherent extensions of \(\mathcal{O}_{S^{\rm red}}^{\rm ss}[1]\) of rank \({\bf r}_{S}\), and their corresponding framed quivers with potential \((Q^{{\rm f}_{S}},W^{{\rm f}_{S}})\) and \((Q^{0_{S}},W^{0_{S}})\). We have seen in Examples 4.41 to 4.47 that certain stable loci \(\mathfrak{M}^{\zeta}(Y,S)\) or \(\mathfrak{M}^{0,\zeta}(Y,S)\) in these stacks provide models in algebraic geometry for moduli spaces of framed instantons on \(S^{\rm red}\) of rank \({\bf r}_{S}\), that is, of rank \(r_{d}\) on the irreducible component \(S_{d}\) for each \(d\in\mathfrak{D}_{S}\). Indeed, this gives the desired generalization of the ADHM construction described in Section 1.1 of the introduction. In particular, in the case \(Y=\mathbb{C}^{3}\) and \(S=S_{r_{1},r_{2},r_{3}}=r_{1}[\mathbb{C}_{xy}^{2}]+r_{2}[\mathbb{C}_{xz}^{2}]+r_{3}[\mathbb{C}_{yz}^{2}]\), applying Theorem 4.38 as in Example 4.42 implies the desired description in algebraic geometry of the stack \(\mathfrak{M}_{\rm Nek}^{r_{1},r_{2},r_{3}}(\mathbb{C}^{3})\) of rank \({\bf r}=(r_{1},r_{2},r_{3})\) representations of the spiked instantons quiver with potential studied in [21]: _Theorem 6.10_.: There is an equivalence of algebraic stacks \[\mathfrak{M}_{\rm Nek}^{r_{1},r_{2},r_{3}}(\mathbb{C}^{3})\xrightarrow{\cong}\mathfrak{M}^{0}(\mathbb{C}^{3},S_{r_{1},r_{2},r_{3}})\.\] Similarly, the special cases of this construction in Examples 4.43, 4.47, and 4.46 give 'three dimensional' variants of the corresponding cases of constructions of [15], [15], and [17], respectively, analogous to the spiked instantons variant of the ADHM construction. In general, in the case \(Y=Y_{m,0}\), this construction conjecturally gives the analogous variant of the relationship between rank \(m\) parabolic torsion-free sheaves on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and chainsaw quivers from _loc. cit._. For divisors \(S\) in the spaces \(Y_{m,n}\), we obtain a generalization of this variant of their construction to \(\mathfrak{gl}_{m|n}\). We discuss examples of this form in detail following Conjecture 6.27 below, and in the succeeding Section 6.3. 
For each choice of compatible stability condition \(\zeta\in\mathbb{R}^{V_{Q_{Y}}}\), the corresponding homology module \(\mathbb{V}^{\zeta}_{S}=\mathbb{V}^{{\rm f}_{S},\zeta}(Y,\mathcal{O}_{S^{\rm red}}^{\rm ss}[1])\) defines a model for a rank \({\bf r}_{S}\) cohomological Vafa-Witten type series of \(S^{\rm red}\), given by \[\mathbb{V}_{S}^{\zeta}=\bigoplus_{{\bf d}\in\mathbb{N}^{V_{Q_{Y}}}}H_{\bullet}^{G_{\bf d}(Q_{Y})\times A}(X_{\bf d}^{\zeta}(Q^{{\rm f}_{S}}),\varphi_{W^{{\rm f}_{S}}})\otimes_{H_{A}^{\bullet}({\rm pt})}F\, \tag{6.7}\] or equivalently, again under the hypotheses of Section 5.5 which hold in our examples of interest, \[\mathbb{V}_{S}^{\zeta}=\bigoplus_{{\bf d}\in\mathbb{N}^{V_{Q_{Y}}}}H_{\bullet}^{A}(\mathfrak{M}_{\bf d}^{\zeta}(Q^{{\rm f}_{S}}),\varphi_{\overline{W}^{{\rm f}_{S}}})\otimes_{H_{A}^{\bullet}({\rm pt})}F\.\] We also define the corresponding generating functional, as in Equation 5.8, by \[\mathcal{Z}^{\zeta}_{S}(\mathbf{q})=\sum_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}\mathbf{q}^{\mathbf{d}}\chi(\mathbb{V}^{\mathrm{f}_{S},\zeta}_{\mathbf{d}}(\mathcal{O}^{\mathrm{ss}}_{S^{\mathrm{red}}}[1]))\ \in\mathbb{Z}[\![\mathbf{q}]\!]\,\] where we recall the shorthand notation \(\mathbf{q}^{\mathbf{d}}=\prod_{i\in V_{Q_{Y}}}q_{i}^{d_{i}}\) and \(\mathbb{Z}[\![\mathbf{q}]\!]=\mathbb{Z}[\![q_{i}]\!]_{i\in V_{Q_{Y}}}\). In particular, we define \[\mathbb{V}_{S}=\mathbb{V}^{\mathrm{f}_{S},\zeta_{\mathrm{VW}}}(\mathcal{O}^{\mathrm{ss}}_{S^{\mathrm{red}}}[1])\qquad\text{and}\qquad\mathcal{Z}^{\mathrm{VW}}_{S}(\mathbf{q})=\sum_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}\mathbf{q}^{\mathbf{d}}\chi(\mathbb{V}_{S,\mathbf{d}})\ \in\mathbb{Z}[\![\mathbf{q}]\!]\,\] for \(\zeta=\zeta_{\mathrm{VW}}\) the Vafa-Witten stability condition, as well as \[\mathbb{V}^{0}_{S}=\mathbb{V}^{0_{S},\zeta}(\mathcal{O}^{\mathrm{ss}}_{S^{\mathrm{red}}}[1])\qquad\text{and}\qquad\mathcal{Z}^{0,\zeta}_{S}(\mathbf{q})=\sum_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}\mathbf{q}^{\mathbf{d}}\chi(\mathbb{V}^{0}_{S,\mathbf{d}})\ \in\mathbb{Z}[\![\mathbf{q}]\!]\,\] corresponding to the trivial framing structure \(0_{S}\) of rank \(\mathbf{r}_{S}\). Note that again the hypotheses necessary to calculate the generating functional in terms of fixed point counts as in Corollary 5.8 will hold in the main examples of interest, as we will use below: _Example 6.11_.: Let \(Y=\mathbb{C}^{3}\) and \(S^{\mathrm{red}}=\mathbb{C}^{2}_{xy}\) as in Example 4.41. For \(S=S^{\mathrm{red}}\) and generic \(\zeta\), the moduli space of \(\zeta\)-stable, trivially framed perverse coherent extensions of \(M=\mathcal{O}^{\mathrm{ss}}_{S^{\mathrm{red}}}[1]\) of rank \(1\) and dimension \(\mathbf{d}=n\) corresponds under dimensional reduction to the Hilbert scheme of \(n\) points on \(\mathbb{C}^{2}\), so that we have \[\mathbb{V}_{\mathbb{C}^{2}}=\bigoplus_{n\in\mathbb{N}}H^{A}_{\bullet}(\mathrm{Hilb}_{n}(\mathbb{C}^{2}))\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\cong\bigoplus_{n\in\mathbb{N}}\bigoplus_{\lambda\in\mathfrak{J}_{n}}F_{\lambda} \tag{6.8}\] where \(\mathfrak{J}_{n}\) denotes the set of \(A\)-fixed points of \(\mathrm{Hilb}_{n}(\mathbb{C}^{2})\), which is in bijection with the set of partitions \(\lambda\) of \(n\), and \(F_{\lambda}\) denotes a copy of the field of fractions of \(H^{\bullet}_{A}(\mathrm{pt})\). It follows that the corresponding generating function is given by \[\mathcal{Z}^{\mathrm{VW}}_{\mathbb{C}^{2}}(q)=\prod_{k=1}^{\infty}\frac{1}{1-q^{k}}=\eta(q)^{-1}. \tag{6.9}\] 
More generally, for \(S=r[\mathbb{C}^{2}]\) the module for the trivial framing \(\mathrm{f}=0\) is given by \[\mathbb{V}^{0}_{r[\mathbb{C}^{2}]}\cong\bigoplus_{n\in\mathbb{N}}H^{A}_{\bullet}(M(r,n))\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\cong\bigoplus_{n\in\mathbb{N}}\bigoplus_{n_{1}+\ldots+n_{r}=n}\bigotimes_{j=1}^{r}H^{A}_{\bullet}(\mathrm{Hilb}_{n_{j}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\cong\bigotimes_{j=1}^{r}\mathbb{V}_{\mathbb{C}^{2}} \tag{6.10}\] and thus the generating function for the trivially framed extensions is given by that for \(r\)-tuples of partitions of total size \(n\): \[\mathcal{Z}^{0}_{r[\mathbb{C}^{2}]}(q)=\prod_{j=1}^{r}\prod_{k=1}^{\infty}\frac{1}{1-q^{k}}=\eta(q)^{-r}. \tag{6.11}\] For \(\mathrm{f}=\mathrm{f}_{S}\), again following Example 4.41, the fixed points of the moduli spaces are restricted from arbitrary \(r\)-tuples of partitions to _nested partitions_, those which satisfy the additional requirement that each partition is a strict subset of the previous; this was studied in [10] in precisely this setting and our examples of this form were inspired by their work. Under the identification \[\mathbb{V}^{0}_{r[\mathbb{C}^{2}]}\cong\bigoplus_{n\in\mathbb{N}}\bigoplus_{n_{1}+\ldots+n_{r}=n}\bigotimes_{j=1}^{r}\left(\bigoplus_{\lambda_{j}\in\mathfrak{n}_{n_{j}}}F_{\lambda_{j}}\right)\cong\bigoplus_{n\in\mathbb{N}}\bigoplus_{n_{1}+\ldots+n_{r}=n}\bigoplus_{\lambda_{1},\ldots,\lambda_{r};\lambda_{j}\in\mathfrak{n}_{n_{j}}}F_{\lambda_{1},\ldots,\lambda_{r}}\] the module \(\mathbb{V}_{r[\mathbb{C}^{2}]}\) is given by the submodule \[\mathbb{V}_{r[\mathbb{C}^{2}]}\cong\bigoplus_{n\in\mathbb{N}}\bigoplus_{n_{1}+\ldots+n_{r}=n}\bigoplus_{\lambda_{1}\geq\ldots\geq\lambda_{r};\lambda_{j}\in\mathfrak{n}_{n_{j}}}F_{\lambda_{1},\ldots,\lambda_{r}}\qquad\text{and thus}\qquad\mathcal{Z}^{\mathrm{VW}}_{r[\mathbb{C}^{2}]}(q)=\prod_{j=1}^{r}\prod_{k=1}^{\infty}\frac{1}{1-q^{j+k}}. \tag{6.12}\] _Example 6.12_.: For \(Y=\mathbb{C}^{3}\) and \(S=S_{M,N,0}=M[\mathbb{C}^{2}_{xy}]+N[\mathbb{C}^{2}_{yz}]\), as in the special case of Example 4.42 corresponding to the quiver in Equation 4.42, we expect that the set of \(A\)-fixed points of \(\mathcal{M}(Y,S)\) is in bijection with the set of plane partitions with a pit at location \((M,N)\) and trivial asymptotics, in the sense of [1]. _Example 6.13_.: More generally, let \(Y=\mathbb{C}^{3}\), \(S=S_{M,N,0}=M[\mathbb{C}^{2}_{xy}]+N[\mathbb{C}^{2}_{yz}]\) and \[M=\mathcal{O}^{\mathrm{ss}}_{S^{\mathrm{red}}}[1]\oplus\mathcal{O}_{\mathbb{C}_{x}}\oplus\mathcal{O}_{\mathbb{C}_{y}}\oplus\mathcal{O}_{\mathbb{C}_{z}}\,\] following Example 4.50, with the framing structure given by that described in _loc. cit._, and we expect the fixed points of the resulting moduli space to correspond to plane partitions with a pit at location \((M,N)\) and asymptotics \(\alpha,\beta,\gamma\) determined as in _loc. cit._. A special case of the resulting family of quivers is given by the quiver of Equation 4.46. _Example 6.14_.: Let \(Y=|\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)|\) and \(S=|\mathcal{O}_{\mathbb{P}^{1}}(-1)|\) as in Example 4.47. 
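Before taking up Example 6.14 in detail, we record a quick numerical check of Equations 6.10 and 6.11: the coefficient of \(q^{n}\) in \(\prod_{k\geq 1}(1-q^{k})^{-r}\) should count \(r\)-tuples of partitions of total size \(n\). The following is a minimal self-contained Python sketch, with the partition counter implemented by brute-force enumeration rather than by any generating function identity:

```python
# Check Equations 6.10 and 6.11: the coefficient of q^n in
# prod_{k>=1} (1-q^k)^{-r} should count r-tuples of partitions of total
# size n, matching the fixed-point description of V^0_{r[C^2]}.
N, r = 10, 3

def num_partitions(n, max_part):
    """Count partitions of n with all parts of size at most max_part."""
    if n == 0:
        return 1
    return sum(num_partitions(n - part, part)
               for part in range(min(n, max_part), 0, -1))

p = [num_partitions(n, n) for n in range(N + 1)]  # 1, 1, 2, 3, 5, 7, ...

# number of r-tuples of partitions of total size n: iterated convolution
tuples = [1] + [0] * N
for _ in range(r):
    tuples = [sum(tuples[i] * p[n - i] for i in range(n + 1))
              for n in range(N + 1)]

# coefficients of prod_{k=1}^{N} 1/(1-q^k)^r, truncated at order N
series = [1] + [0] * N
for k in range(1, N + 1):
    geom = [1 if i % k == 0 else 0 for i in range(N + 1)]
    for _ in range(r):
        series = [sum(series[i] * geom[n - i] for i in range(n + 1))
                  for n in range(N + 1)]

assert tuples == series
print(tuples)  # begins 1, 3, 9, 22, 51, ... for r = 3
```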
Returning to Example 6.14, the vanishing cycle cohomology defining \(\mathbb{V}_{S}\) is equivalent to the ordinary Borel-Moore homology of the dimensional reduction, and thus by localization we have an isomorphism \[\mathbb{V}_{|\mathcal{O}_{\mathbb{P}^{1}}(-1)|}\cong\bigoplus_{k\in\mathbb{Z}}\bigoplus_{n\in\mathbb{N}}\bigoplus_{n_{0}+n_{1}=n}\bigoplus_{\lambda_{0}\in\mathfrak{n}_{n_{0}},\lambda_{1}\in\mathfrak{n}_{n_{1}}}F_{k,\lambda_{0},\lambda_{1}} \tag{6.13}\] by Proposition 3.2 and Theorem 3.4 in [15], where \(\mathfrak{n}_{n}\) denotes the set of partitions of \(n\). Similarly, applying Corollary 5.7 of [15] in the limit \(m=\infty\) to each term \(k=c_{1}\in\mathbb{Z}\), the partition function is given by the following formula of Corollary 3.15 in [15]: \[\mathcal{Z}^{\mathrm{VW}}_{|\mathcal{O}_{\mathbb{P}^{1}}(-1)|}(q)=\sum_{k\in\mathbb{Z}}q^{\frac{k^{2}}{2}}\prod_{j=1}^{\infty}\frac{1}{(1-q^{j})^{2}}. \tag{6.14}\] The representation of the cohomological Hall algebra \(\mathcal{H}(Y)\) on \(\mathbb{V}_{S}\) constructed in Theorem 5.6, and its extension to the action of the shifted quiver Yangian \(\mathcal{Y}_{M}(Y)\) outlined in Section 5.6, were essentially defined to generalize the construction of Grojnowski [11] and Nakajima [14] of the Heisenberg algebra action on the Hilbert scheme of points, following the Schiffmann-Vasserot proof [11] of the AGT conjecture [1] and their respective generalizations in [10] and [12]. We note that these results are also closely related to those obtained in unpublished work of Feigin-Tsymbaliuk, as explained in [13], and in turn to many results in the setting of quantum toroidal algebras such as [14] and [1]; we will elaborate on this relationship further in Section 6.3 below. The proposal that there should be analogous vertex algebras corresponding to more general divisors \(S\) and threefolds \(Y\) was considered already in the original paper of Gaiotto-Rapcak [12], and explored in some detail in [15], but the analogue of the AGT conjecture in this setting was not known in general, as the relevant moduli spaces generalizing the spiked instantons construction of Nekrasov to threefolds \(Y\) other than \(\mathbb{C}^{3}\) had not been constructed previously. There is also a closely related conjecture of Feigin-Gukov [13] that there should exist vertex operator algebras \(\operatorname{VOA}[M_{4},\mathfrak{gl}_{r}]\) associated to four-manifolds \(M_{4}\), analogously generalizing the AGT conjecture. This appears to coincide with the predictions of [13] discussed in the preceding paragraph in the case that the underlying reduced scheme \(S^{\operatorname{red}}\) is irreducible and smooth, \(M_{4}\) is the analytification of \(S^{\operatorname{red}}\), and \(r\in\mathbb{N}\) the multiplicity of \(S^{\operatorname{red}}\) in \(S\). However, a mathematical definition of these vertex algebras was also not known in general, for either a divisor \(S\) or a four-manifold \(M_{4}\), and relatively few examples were known in the non-abelian case. In the companion paper [1], we give a general combinatorial construction of vertex algebras \(\mathbb{V}(Y,S)\) as the kernel of screening operators acting on lattice vertex algebras determined by the data of the GKM graph of \(Y\) and a Jordan-Holder filtration of \(\mathcal{O}_{S}\) with subquotients structure sheaves \(\mathcal{O}_{S_{d}}\) of the divisors \(S_{d}\) occurring as irreducible, reduced components of \(S\). 
This construction appears to satisfy the predictions of [13], [14], and [13], and in particular we formulate the following analogue of the AGT conjecture in this setting: _Conjecture 6.15_.: There exists a natural representation \[\rho:\mathcal{U}(\mathbb{V}(Y,S))\to\operatorname{End}_{F}(\mathbb{V}_{S})\] of the algebra of modes \(\mathcal{U}(\mathbb{V}(Y,S))\) of the vertex algebra \(\mathbb{V}(Y,S)\) on \(\mathbb{V}_{S}\), inducing an isomorphism \[\rho(\mathcal{U}(\mathbb{V}(Y,S))_{+})\xrightarrow{\cong}\rho_{S}(\mathcal{SH}(Y))\] and such that \(\mathbb{V}_{S}\) is identified with the vacuum module for the vertex algebra \(\mathbb{V}(Y,S)\). While we are not currently able to give a proof in general, note that the conjectural identification with the action of \(\mathcal{SH}(Y)\) implies that Theorem 5.6 together with Proposition 5.10 provide the desired construction of the action of the algebra of modes. We expect that the proof will follow from the compatibility between the free field realizations of \(\mathbb{V}(Y,S)\) used in their construction in [1], and a family of coproducts on the corresponding shifted quiver Yangians \(\mathcal{Y}_{S}(Y)\), under equivalences of the type conjectured in the following Section 6.3. We hope to explore this further in future work. As a consequence of Conjecture 6.15, we expect equality between the Vafa-Witten-type series \(\mathcal{Z}_{S}^{\operatorname{VW}}(q)\) of the divisor \(S\) defined above and the Poincare series \(P_{q}\) of the vacuum module \(\mathbb{V}_{(Y,S)}\) of the vertex algebra \(\mathbb{V}(Y,S)\). _Corollary 6.16_.: There is a natural grading on \(\mathbb{V}_{(Y,S)}\) such that \[\mathcal{Z}_{S}^{\operatorname{VW}}(q)=P_{q}(\mathbb{V}_{(Y,S)})\ \in\mathbb{Z}[\![q]\!]\.\] Indeed, the above conjecture and corollary can be verified directly in several of the simplest examples, following the previous work on the subject mentioned above: _Example 6.17_.: In the case \(Y=\mathbb{C}^{3}\) and \(S=\mathbb{C}^{2}\), the constructions we have outlined reduce to the original constructions of Grojnowski [10] and Nakajima [16]. The vertex algebra \[\mathbb{V}(\mathbb{C}^{3},\mathbb{C}^{2})=\pi_{k}\] is simply the Heisenberg algebra \(\pi_{k}\) at level \(k=-\frac{1}{\hbar_{1}\hbar_{2}}\), the module \(\mathbb{V}_{\mathbb{C}^{2}}\) of Equation 6.8 is evidently isomorphic to the standard Fock module, and the vacuum character is given by \(\eta(q)^{-1}\), which is indeed the Vafa-Witten-type series for \(\mathbb{C}^{2}\) of Equation 6.9 above. _Example 6.18_.: More generally, the constructions of _loc. cit._ generalize to smooth surfaces \(S\) to give the Heisenberg algebra \(\pi(S)\) on the cohomology \(H^{\bullet}_{A}(S;\mathbb{C})\) over the base field \(F\); we have included \(A\)-equivariance following [26] in keeping with our present setting. Thus, the vacuum module of \(\pi(S)\) is given by \[\pi_{S}\cong\operatorname{Sym}^{\bullet}_{F}(z^{-1}(H^{A}_{\bullet}(S)\otimes_{H^{\bullet}_{A}(\operatorname{pt})}F)[z^{-1}])\qquad\text{so that}\qquad P_{q}(\pi_{S})=\eta(q)^{-\chi(S)}\,\] by the formula of [10]. Indeed, we show in [11] that the Heisenberg algebra \(\pi(S)\) canonically embeds in the vertex algebra \(\mathbb{V}(Y,S)\), but for \(H_{2}(S;\mathbb{Z})\neq 0\) it is a strict subalgebra. In fact, we explain in _loc. 
cit._ that in the rank \(1\) case, corresponding to our assumption that \(S\) is a smooth algebraic surface and in particular reduced when considered as a divisor, the vertex algebra \(\mathbb{V}(Y,S)\) is simply the lattice vertex algebra generated by \(H_{2}(S;\mathbb{Z})\) equipped with the negative of the intersection pairing, tensored with the Heisenberg algebra generated by \(H_{0}(S)\), \[\mathbb{V}(Y,S)=V_{H_{2}(S;\mathbb{Z})}\otimes\pi_{H^{0}(S;\mathbb{C})}\.\] The additional generators of the lattice vertex algebra correspond to the Hecke modifications of the sheaves along the compact curve classes in \(H_{2}(S;\mathbb{Z})\), as in the construction outlined in the final chapter of [26]. This is also related to the results of [26] and [11]. We note that the construction of Negut [24] provides a higher rank analogue of the construction of Grojnowski-Nakajima, generalizing Example 6.20 below to more general surfaces \(S\), but the resulting vertex algebras analogously fail to include the lattice-type extensions corresponding to Hecke modifications along curve classes, which our construction conjecturally provides. _Example 6.19_.: As a special case of the preceding Example 6.18, let \(Y=|\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)|\) and \(S=|\mathcal{O}_{\mathbb{P}^{1}}(-1)|\) following Examples 6.14 and in turn 4.47. Indeed, we have an isomorphism between the vacuum module of Equation 6.13 and that of a canonically normalized rank \(1\) lattice vertex algebra \(V_{\mathbb{Z}}\) tensored with a Heisenberg algebra, \[\mathbb{V}_{|\mathcal{O}_{\mathbb{P}^{1}}(-1)|}\cong V_{\mathbb{Z}}\otimes\pi\qquad\text{and in particular}\qquad P_{q}(V_{\mathbb{Z}}\otimes\pi)=\sum_{k\in\mathbb{Z}}q^{\frac{k^{2}}{2}}\prod_{j=1}^{\infty}\frac{1}{(1-q^{j})^{2}}=\mathcal{Z}_{|\mathcal{O}_{\mathbb{P}^{1}}(-1)|}^{\operatorname{VW}}(q)\,\] so that we obtain equality of the Poincare polynomial of the vacuum module with the Vafa-Witten type series of \(|\mathcal{O}_{\mathbb{P}^{1}}(-1)|\) given in Equation 6.14. Moreover, we have checked by direct calculation that the conjecture holds following our proposed construction in this example, using results of [23], [23]; this result will appear in future work. _Example 6.20_.: For \(Y=\mathbb{C}^{3}\) and \(S=r[\mathbb{C}^{2}]\), as in Example 6.11, the definition of the vertex algebra \(\mathbb{V}(Y,S)\) from [11] is simply the Feigin-Frenkel free field realization of the principal affine \(W\)-algebra of \(\mathfrak{gl}_{r}\), so that we have \[\mathbb{V}(\mathbb{C}^{3},r[\mathbb{C}^{2}])=W^{\kappa}_{f_{\rm prin}}(\mathfrak{gl}_{r})\cong W^{\kappa}_{f_{\rm prin}}(\mathfrak{sl}_{r})\otimes\pi\,\] where we use the notation \(W^{\kappa}_{f_{\rm prin}}(\mathfrak{gl}_{r})\) to denote the vertex algebra over the field of fractions of \(H^{\bullet}_{\tilde{T}}(\operatorname{pt})\) defined by \[\kappa=-h^{\vee}-\frac{\hbar_{2}}{\hbar_{1}}\.\] In this case, our proposed construction was defined to reproduce that of [23] used in the proof of the AGT conjecture [1], and moreover the variant thereof studied in [10] which produces the vacuum module, as in the statement of Conjecture 6.15. 
In [23] the authors prove the following theorem: _Theorem 6.21_.: [13] There exists a natural representation \[\mathcal{U}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))\to\mathrm{End}( \mathbb{V}^{0}_{r[\mathbb{C}^{2}]})\,\] such that \(\mathbb{V}^{0}_{r[\mathbb{C}^{2}]}\) is identified with the universal Verma module \(\mathbb{M}_{r}\) for \(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r})\). In [12], the authors extend these results to give a geometric construction of the vacuum module as the cohomology of the moduli space of stable representations of a variant of the ADHM construction, for which the quiver is precisely that corresponding to taking \(A_{\mathrm{f}}\) equal to a principal nilpotent in Equation 4.40 of Example 4.41. Thus, in the notation we have introduced, they prove: _Theorem 6.22_.: [12] There exists a natural representation \[\mathcal{U}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))\to\mathrm{End} (\mathbb{V}_{r[\mathbb{C}^{2}]})\,\] such that \(\mathbb{V}_{r[\mathbb{C}^{2}]}\) is identified with the vacuum module for \(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r})\). In particular, we have the identification \[P_{q}(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r}))=\prod_{j=1}^{r}\prod_ {k=1}^{\infty}\frac{1}{1-q^{k+j}}=\mathcal{Z}^{\mathrm{VW}}_{r[\mathbb{C}^{2} ]}(q)\,\] as desired, where we recall that \(\mathbb{V}_{r[\mathbb{C}^{2}]}\) and \(\mathcal{Z}^{\mathrm{VW}}_{r[\mathbb{C}^{2}]}(q)\) are given by the expressions of Equation 6.12, and we have used the usual abuse of notation denoting the vacuum module by the underlying vertex algebra \(W^{\kappa}_{f_{\mathrm{prin}}}(\mathfrak{gl}_{r})\). These results are also closely related to the cohomological variant of [10], as well as [1] in the case \(n=0\). In particular, note that the fixed point counting underlying the decomposition of \(\mathbb{V}_{r[\mathbb{C}^{2}]}\) in Equation 6.12 is precisely that considered in _loc. cit._. _Example 6.23_.: More generally, for \(Y=\mathbb{C}^{3}\) and \(S=S_{M,N,0}=M[\mathbb{C}^{2}_{xy}]+N[\mathbb{C}^{2}_{yz}]\) as in Example 6.12, the vertex algebras \(\mathbb{V}(\mathbb{C}^{3},S_{M,N,0})\) are the Gaiotto-Rapcak \(Y\) algebras \(Y_{N,0,M}\) of [10], and the construction from [10] reduces to their definition in [11]. Moreover, _loc. cit._ established the variant of Conjecture 6.15 proved in [13], related to the Verma module rather than the vacuum, for \(Y=\mathbb{C}^{3}\) and arbitrary \(S_{M,N,L}\). Their construction was a significant source of inspiration for the results of this paper. Analogous results were also established in the setting of quantum toroidal algebras for \(L=0\) in [1]. Following Example 6.12, the corresponding quivers are given by the special case of Example 4.42 in Equation 4.42, with \(A_{3}^{2}=A_{2}^{3}=0\) and \(A_{2},A_{3}\) principal nilpotents in \(\mathfrak{gl}_{M}\) and \(\mathfrak{gl}_{N}\), respectively, and the fixed points are in correspondence with the plane partitions with a pit at location \((M,N)\) of [1], with trivial asymptotics \(\alpha=\beta=\gamma=\mathcal{O}\). 
More generally, in analogy with Conjecture 6.9, we expect that the modules induced by the construction of Theorem 5.6 in the case of Example 6.13 where \(S=S_{M,N,0}=M[\mathbb{C}^{2}_{xy}]+N[\mathbb{C}^{2}_{yz}]\) define certain canonical families of modules \[\mathbb{V}^{M,N}_{\alpha,\beta,\gamma}=\bigoplus_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}H^{A}_{\bullet}(\mathfrak{M}^{\mathrm{f}_{S;\alpha,\beta,\gamma},\zeta_{\mathrm{VW}}}_{\mathbf{d}}(Q_{M}),\varphi_{\overline{W}^{\mathrm{f}_{S;\alpha,\beta,\gamma}}_{M,\mathbf{d}}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\] over the vertex algebras \(\mathbb{V}(\mathbb{C}^{3},S_{M,N,0})\), with fixed point bases enumerated by plane partitions with a pit and fixed asymptotics as in [1], determined by the framing structure \(\mathrm{f}_{S;\alpha,\beta,\gamma}\) as in Example 4.50. In particular, we make the following conjecture: _Conjecture 6.24_.: There exists a natural representation \[\rho:\mathcal{U}(\mathbb{V}(\mathbb{C}^{3};S_{M,N,0}))\to\operatorname{End}_{F}(\mathbb{V}_{\alpha,\beta,O}^{M,N})\] of the algebra \(\mathcal{U}(\mathbb{V}(\mathbb{C}^{3};S_{M,N,0}))\) on \(\mathbb{V}^{M,N}_{\alpha,\beta,O}\), identifying it with the module \(\mathcal{N}_{\alpha,\beta,O}^{M,N}\) of _loc. cit._. We now describe some applications of our results to examples of divisors \(S\) in a resolution \(Y_{m,n}\) of \(X_{m,n}=\{xy-z^{m}w^{n}\}\). Let \(\mu\) and \(\nu\) be partitions of length \(m\) and \(n\), respectively, \[\mu=\{\mu_{1}\geq\ldots\geq\mu_{m}\geq 0\}\qquad\text{and}\qquad\nu=\{\nu_{1}\geq\ldots\geq\nu_{n}\geq 0\}\,\] and define the corresponding lists of integers \[\mathbf{M}=(M_{i})_{i=0}^{m-1}\quad\mathbf{N}=(N_{j})_{j=0}^{n-1}\qquad\text{where}\qquad M_{i}=\sum_{k=i+1}^{m}\mu_{k}\quad N_{j}=\sum_{k=j+1}^{n}\nu_{k}\] for \(i=0,...,m-1\) and \(j=0,...,n-1\); we also write just \[M=M_{0}=\sum_{k=1}^{m}\mu_{k}\qquad\text{and}\qquad N=N_{0}=\sum_{k=1}^{n}\nu_{k}\.\] We define \(S_{\mu,\nu}\) as the toric divisor corresponding to the labeling of the faces of the moment polytope of \(Y_{m,n}\) by the integers \(M_{i}\) and \(N_{j}\) depicted in Figure 1 below in the case \(m=3,n=2\). We also write simply \(S_{\mu}\) or \(S_{\nu}\) if \(n=0\) or \(m=0\), respectively. In this setting, following physical predictions from [11], we conjecture in [11] that the corresponding vertex algebra is given by the affine \(W\)-algebra of \(\mathfrak{gl}_{M|N}\) corresponding to the nilpotents \(f_{\mu}\) and \(f_{\nu}\) in \(\mathfrak{gl}_{M}\) and \(\mathfrak{gl}_{N}\), respectively, determined by \(\mu\) and \(\nu\):

Figure 1. The resolution \(Y_{3,2}\to X_{3,2}=\{xy-z^{3}w^{2}\}\), the toric divisor \(S_{\mu,\nu}\), and the compact toric curve classes \(C_{i}\) with their corresponding simple roots \(\alpha_{i}\) of \(\mathfrak{gl}_{3|2}\)
_Conjecture 6.25_.: [10] There is an isomorphism of vertex algebras \[W^{\kappa}_{f_{\mu},f_{\nu}}(\mathfrak{gl}_{M|N})\xrightarrow{\cong}\mathbb{V}(Y_{m,n},S_{\mu,\nu})\.\] In fact, we have a proof of this conjecture in the case \(n=0\), which will appear in future work: _Theorem 6.26_.: There is an isomorphism of vertex algebras \[W^{\kappa}_{f_{\mu}}(\mathfrak{gl}_{M})\xrightarrow{\cong}\mathbb{V}(Y_{m,0},S_{\mu})\.\] The quivers that arise in this setting are generalizations of the variant of the \(\mathfrak{gl}_{2}\) chainsaw quiver considered in Example 4.46, and in general we conjecture there exists an appropriate framing structure and dimensional reduction relating the resulting module with the ordinary Borel-Moore homology of the moduli space of representations of the corresponding chainsaw quiver. We now formally state this conjecture: Let \(\mathfrak{M}^{\mu}_{\mathbf{d}}(Q^{\mathrm{Ch}}_{m},I^{\mathrm{Ch}}_{m})\) denote the stack of \(\mathbf{d}\)-dimensional representations of the \(\mathfrak{gl}_{m}\) chainsaw quiver as in [10], with fixed framing dimension given by the integers \(\mu_{i}\), so that following the conjectures of Kanno-Tachikawa [10] and in turn [1], we expect that \[\mathbb{M}_{f_{\mu}}\xrightarrow{\cong}\bigoplus_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}H^{A}_{\bullet}(\mathfrak{M}^{\mu,\zeta}_{\mathbf{d}}(Q^{\mathrm{Ch}}_{m},I^{\mathrm{Ch}}_{m}))\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\, \tag{6.15}\] that is, the \(A\)-equivariant Borel-Moore homology of the moduli space of stable representations of the \(\mathfrak{gl}_{m}\) chainsaw quiver \((Q^{\mathrm{Ch}}_{m},I^{\mathrm{Ch}}_{m})\) of framing dimension \(\mu\) is identified with the universal Verma module \(\mathbb{M}_{f_{\mu}}\) for the affine \(W\)-algebra \(W_{f_{\mu}}(\mathfrak{gl}_{M})\) associated to the nilpotent \(f_{\mu}\) in \(\mathfrak{gl}_{M}\). Let \(\mathrm{f}_{0}\) be the framing structure for \(S_{\mu}\) of rank \(\mathbf{M}\) generalizing that of Equation 4.44, that is, such that the map \(V_{\infty_{i}}\to V_{\infty_{i-1}}\) is injective for each \(i=1,...,m-1\) and all other framing endomorphisms are zero. We denote the corresponding module by \[\mathbb{V}^{\mathrm{f}_{0}}_{S_{\mu}}=\bigoplus_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}H^{A}_{\bullet}(\mathfrak{M}^{\mathrm{f}_{0},\zeta_{\mathrm{VW}}}_{\mathbf{d}}(Y_{m,0},S_{\mu}),\varphi_{\overline{W}^{\mathrm{f}_{0}}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\.\] Following the equivalence outlined at the end of Example 4.46, we make the following conjecture: _Conjecture 6.27_.: There is a canonical isomorphism \[H^{A}_{\bullet}(\mathfrak{M}^{\mathrm{f}_{0},\zeta}_{\mathbf{d}}(Y_{m,0},S_{\mu}),\varphi_{\overline{W}^{\mathrm{f}_{0}}})\cong H^{A}_{\bullet}(\mathfrak{M}^{\mu,\zeta}_{\mathbf{d}}(Q^{\mathrm{Ch}}_{m},I^{\mathrm{Ch}}_{m}))\.\] As explained in _loc. cit._, the expectation is that this follows from dimensional reduction, in the sense of Theorem A.1 of [11], for example, together with an equivalence given by passing to cokernels of the injective maps determined by the framing structure. 
In particular, note that the dimensions of the framing vector spaces in the two quiver descriptions of the above equivalence differ in the following way: \(\mathfrak{M}^{\mathrm{f}_{0},\zeta}(Y_{m,0},S_{\mu})\) parameterizes representations of framing dimension \(\mathbf{M}\) while \(\mathfrak{M}^{\mu,\zeta}(Q^{\mathrm{Ch}}_{m},I^{\mathrm{Ch}}_{m})\) parameterizes representations of framing dimension \(\mu\), noting that \[\mu_{i}=M_{i-1}-M_{i}\.\] This is the generalization of the relation outlined between the framing vector spaces \(V_{\infty_{0}}\) and \(\tilde{V}_{\infty_{0}}\) in the quivers of Equations 4.43 and 4.44, respectively. In summary, we expect that the preceding Conjecture 6.27 together with an identification of the form in Equation 6.15 imply that the module constructed by the framing structure \(\mathrm{f}_{0}\) above is identified with the universal Verma module \(\mathbb{M}_{f_{\mu}}\) of the corresponding \(W\)-algebra: _Conjecture 6.28_.: There exists a natural representation \[\mathcal{U}(W^{\kappa}_{f_{\mu}}(\mathfrak{gl}_{M}))\to\operatorname{End}_{F}(\mathbb{V}^{\mathrm{f}_{0}}_{S_{\mu}})\,\] such that \(\mathbb{V}^{\mathrm{f}_{0}}_{S_{\mu}}\) is identified with the universal Verma module \(\mathbb{M}_{f_{\mu}}\). The preceding equivalence is the analogue of Theorem 6.21 recalled from [11], and is closely related to several existing mathematical generalizations of the AGT conjecture, such as [1], [10], [11], [12], and [8]. However, we have not considered the modules corresponding to the framing structures that feature in Conjecture 6.15. Indeed, the framing structure \(\mathrm{f}_{S_{\mu}}\) can be described by adding to \(\mathrm{f}_{0}\) an additional principal nilpotent \(G_{\mathrm{f}}\) in \(\mathfrak{gl}_{M_{0}}\) acting at the framing vertex corresponding to \(\mathbb{C}^{2}\); the resulting modification of the framed quiver with potential in Equation 4.43, which corresponds to the framing structure \(\mathrm{f}_{0}\), is depicted in Figure 2. We recall that the module associated to the divisor \(S\) which features in Conjecture 6.15 is defined in terms of the corresponding quiver with potential as \[\mathbb{V}_{S_{\mu}}=\mathbb{V}^{\mathrm{f}_{S_{\mu}},\zeta_{\mathrm{VW}}}_{S_{\mu}}=\bigoplus_{\mathbf{d}\in\mathbb{N}^{V_{Q_{Y}}}}H^{G_{\mathbf{d}}(Q_{Y})\times A}_{\bullet}(X^{\zeta_{\mathrm{VW}}}_{\mathbf{d}}(Q^{\mathrm{f}_{S_{\mu}}}),\varphi_{W^{\mathrm{f}_{S_{\mu}}}})\otimes_{H^{\bullet}_{A}(\mathrm{pt})}F\.\] The statement of _loc. cit._ in this setting, combined with Theorem 6.26, gives the following conjecture: _Conjecture 6.29_.: There exists a natural representation \[\mathcal{U}(W^{\kappa}_{f_{\mu}}(\mathfrak{gl}_{M}))\to\operatorname{End}_{F}(\mathbb{V}_{S_{\mu}})\,\] such that \(\mathbb{V}_{S_{\mu}}\) is identified with the vacuum module \(W^{\kappa}_{f_{\mu}}(\mathfrak{gl}_{M})\). 
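In passing, the dimension bookkeeping between the two framing conventions above is elementary but easy to get wrong, so we record it concretely. The following minimal Python sketch (the helper names are ours, purely for illustration) implements \(M_{i}=\sum_{k>i}\mu_{k}\) and the inverse relation \(\mu_{i}=M_{i-1}-M_{i}\):

```python
# Convert between a partition mu = (mu_1 >= ... >= mu_m >= 0) and the
# framing ranks M = (M_0, ..., M_{m-1}) with M_i = mu_{i+1} + ... + mu_m,
# so that mu_i = M_{i-1} - M_i (with the convention M_m = 0).
def ranks_from_partition(mu):
    return [sum(mu[i:]) for i in range(len(mu))]  # M_i for i = 0..m-1

def partition_from_ranks(M):
    M = list(M) + [0]  # append M_m = 0
    return [M[i - 1] - M[i] for i in range(1, len(M))]

mu = [3, 2, 2, 1]                 # a partition of length m = 4
M = ranks_from_partition(mu)      # [8, 5, 3, 1]
assert partition_from_ranks(M) == mu
assert M[0] == sum(mu)            # M = M_0 is the total rank
```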
More generally, following Conjecture 6.25, we make the analogous conjectures in the \(\mathfrak{gl}_{m|n}\) case: _Conjecture 6.30_.: There exist natural representations \[\mathcal{U}(W^{\kappa}_{f_{\mu},f_{\nu}}(\mathfrak{gl}_{M|N}))\to\operatorname{End}_{F}(\mathbb{V}^{0}_{S_{\mu,\nu}})\qquad\text{and}\qquad\mathcal{U}(W^{\kappa}_{f_{\mu},f_{\nu}}(\mathfrak{gl}_{M|N}))\to\operatorname{End}_{F}(\mathbb{V}_{S_{\mu,\nu}})\,\] such that \(\mathbb{V}^{0}_{S_{\mu,\nu}}\) and \(\mathbb{V}_{S_{\mu,\nu}}\) are identified with the Verma and vacuum modules of \(W^{\kappa}_{f_{\mu},f_{\nu}}(\mathfrak{gl}_{M|N})\).

Figure 2. The toric divisor \(S_{\mu}\) in \(Y_{2,0}\) and framed quiver with potential \((Q^{\mathrm{f}_{S_{\mu}}},W^{\mathrm{f}_{S_{\mu}}})\)

### Towards isomorphisms between \(W\)-superalgebras and Yangians for \(\widehat{\mathfrak{gl}}_{m|n}\)

As at the end of the preceding section, let \(Y_{m,n}\to X_{m,n}\) be a resolution of the affine, toric, Calabi-Yau singularity \(X_{m,n}=\{xy-z^{m}w^{n}\}\). In this section, we explain the analogue of Conjecture 6.7 for \(\mathbb{V}_{S}\), the module constructed from a divisor \(S\) in \(Y_{m,n}\) in the preceding section, and moreover explain its expected relationship with Conjecture 6.15. This gives a geometric approach to prove a variant of the main theorem of [1]: an isomorphism between affine \(W\)-superalgebras and truncated shifted Yangians of \(\widehat{\mathfrak{gl}}_{m|n}\). To begin, recall from Proposition 3.1 that \[\operatorname{Pic}(\hat{Y})\xrightarrow{\cong}\mathbb{Z}^{I_{+}}\qquad\text{by}\qquad\mathcal{L}\mapsto(\deg\iota_{i}^{*}\mathcal{L})_{i\in I_{+}}\,\] where \(I_{+}\) is the index set of the irreducible components \(C_{i}\) of the fibre \(C\) of the resolution over the \(T\)-fixed point \(x\in X\), and \(\iota_{i}:C_{i}\to Y\) are their inclusions. For \(Y=Y_{m,n}\), we thus have identifications of the Picard group of \(\hat{Y}_{m,n}\) with the root lattice of \(\mathfrak{sl}_{m|n}\) and of the index set \(I_{+}\) with the set of positive simple roots for \(\mathfrak{sl}_{m|n}\), \[\operatorname{Pic}(\hat{Y}_{m,n})\cong Q_{\mathfrak{sl}_{m|n}}\qquad\text{and}\qquad I_{+}\cong\Pi_{\mathfrak{sl}_{m|n}}\.\] For example, in Figure 1 we have depicted the four compact toric curve classes \(C_{i}\) in \(Y_{3,2}\) and the corresponding positive simple roots \(\alpha_{i}\) in \(\mathfrak{sl}_{3|2}\). Following [1], the shifted Yangian is defined in terms of a square matrix \(\sigma\) of size \(|\Pi|+1\) called the _shift matrix_, which is uniquely determined by its entries \(s_{i,i+1}\) and \(s_{i+1,i}\) immediately above and below the diagonal, for \(i=1,...,|\Pi|\). Now, given a divisor \(S\) in \(Y_{m,n}\), we define the corresponding shifts of real roots as the intersection numbers \[s_{i+1,i}=\langle[C_{i}],S\rangle\qquad\text{for}\qquad i=1,...,|\Pi_{\mathfrak{sl}_{m|n}}|=m+n-1\,\] where we have assumed for simplicity that all these numbers are non-negative, as will be the case for \(S_{\mu,\nu}\) as long as \(\mu_{m}\geq\nu_{1}\). 
We let \(\sigma_{S}\) denote the induced shift matrix, and \(\mathcal{Y}_{\sigma_{S}}(\widehat{\mathfrak{gl}}_{m|n})\) the corresponding shifted Yangian of \(\widehat{\mathfrak{gl}}_{m|n}\) with respect to the shift of real roots determined by \(\sigma_{S}\); for example, the divisor \(S_{\mu,\nu}\) in \(Y_{3,2}\) from the example of Figure 1 induces the shift matrix given by \[\sigma_{S_{\mu,\nu}}=\left[\begin{array}{ccc|cc}0&0&0&0&0\\ \mu_{1}-\mu_{2}&0&0&0&0\\ \mu_{1}-\mu_{3}&\mu_{2}-\mu_{3}&0&0&0\\ \hline\mu_{1}-\nu_{1}&\mu_{2}-\nu_{1}&\mu_{3}-\nu_{1}&0&0\\ \mu_{1}-\nu_{2}&\mu_{2}-\nu_{2}&\mu_{3}-\nu_{2}&\nu_{1}-\nu_{2}&0\end{array}\right]\,.\] We can now state the analogue of Conjecture 6.7 of Costello [11] for the modules \(\mathbb{V}_{S}\) corresponding to divisors \(S\) in \(Y_{m,n}\): _Conjecture 6.31_.: There exists a natural representation \[\rho:\mathcal{Y}_{\sigma_{S}}(\widehat{\mathfrak{gl}}_{m|n})\to\operatorname{End}_{F}(\mathbb{V}_{S})\,\] of the \(\sigma_{S}\) shifted affine Yangian \(\mathcal{Y}_{\sigma_{S}}(\widehat{\mathfrak{gl}}_{m|n})\) of \(\mathfrak{gl}_{m|n}\) on \(\mathbb{V}_{S}\), inducing an isomorphism \[\rho(\mathcal{Y}_{\sigma_{S}}(\widehat{\mathfrak{gl}}_{m|n})_{+})\xrightarrow{\cong}\rho_{S}(\mathcal{SH}(Y_{m,n}))\.\] As for Conjecture 6.7, we expect that the proof of the preceding conjecture will follow from Conjecture 5.12 together with an identification \[\mathcal{Y}_{S}(Y_{m,n}):=\mathcal{Y}_{\mathcal{O}_{S}[1]}(Y_{m,n})\cong\mathcal{Y}_{\sigma_{S}}(\widehat{\mathfrak{gl}}_{m|n})\.\] In fact, for general \(Y\) and \(S\), we expect that this action factors as a surjection \[\mathcal{Y}_{S}(Y)\to\mathcal{U}(\mathbb{V}(Y,S))\,\] or a map with dense image, though we will ignore this subtlety in what follows, composed with the representation \(\mathcal{U}(\mathbb{V}(Y,S))\to\operatorname{End}_{F}(\mathbb{V}_{S})\) of Conjecture 6.15. In particular, we conjecture that for \(Y=Y_{m,n}\) and \(S=S_{\mu,\nu}\) this induces a surjection \[\mathcal{Y}_{\sigma_{S_{\mu,\nu}}}(\widehat{\mathfrak{gl}}_{m|n})\to\mathcal{U}(\mathbb{V}(Y_{m,n},S_{\mu,\nu})) \tag{6.16}\] with kernel given by the \(\ell^{th}\)-truncation ideal for \(\ell=\mu_{1}\), so that we have \[\mathcal{Y}_{\sigma_{S_{\mu,\nu}}}^{\ell}(\widehat{\mathfrak{gl}}_{m|n})\xrightarrow{\cong}\mathcal{U}(\mathbb{V}(Y_{m,n},S_{\mu,\nu}))\.\] In summary, together with Conjecture 6.25, we obtain: _Conjecture 6.32_.: There is a canonical isomorphism \[\mathcal{Y}_{\sigma_{S_{\mu,\nu}}}^{\ell}(\widehat{\mathfrak{gl}}_{m|n})\xrightarrow{\cong}\mathcal{U}(W_{f_{\mu},f_{\nu}}^{\kappa}(\mathfrak{gl}_{M|N}))\, \tag{6.17}\] where \(\mu,\nu\) are the unique dominant coweights such that \(\sigma=\sigma_{S_{\mu,\nu}}\) and \(\ell=\mu_{1}\). This is precisely the analogue for \(\widehat{\mathfrak{gl}}_{m|n}\) of the main theorem of [1], which was originally proved for the Yangian of \(\mathfrak{gl}_{N}\). Some results along these lines have been obtained recently in [13], [14] and [15] in the case that \(\sigma=0\) and \(\mu\) and \(\nu\) are given by rectangular partitions, but the statement of the conjecture for general shift appears to be new. There is also an example in the non-rectangular case given in [12], but given the apparent absence of a shift of real roots in this result, we are unsure whether the map constructed in _loc. cit._ has an analogous geometric origin. 
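To make the combinatorics of the shift matrix concrete: in the displayed example, the entry in row \(i\) and column \(j\) below the diagonal is \(c_{j}-c_{i}\), where \(c=(\mu_{1},...,\mu_{m},\nu_{1},...,\nu_{n})\) is the concatenated coweight. The following Python sketch records this pattern; the function name is ours, and the rule is read off from the displayed \(5\times 5\) example rather than from a general definition, so it should be taken only as a reading aid for that example:

```python
# Build the lower-triangular shift matrix sigma_{S_{mu,nu}} of size m+n,
# with entry (i, j) for i > j given by c_j - c_i, where c = mu + nu,
# matching the displayed example for Y_{3,2} (mu of length 3, nu of length 2).
def shift_matrix(mu, nu):
    c = list(mu) + list(nu)
    size = len(c)
    return [[c[j] - c[i] if i > j else 0 for j in range(size)]
            for i in range(size)]

# spot-check against the displayed matrix, with mu_m >= nu_1 as assumed above
mu, nu = (5, 4, 2), (2, 1)
sigma = shift_matrix(mu, nu)
assert sigma[1][0] == mu[0] - mu[1]   # row 2, column 1: mu_1 - mu_2
assert sigma[3][2] == mu[2] - nu[0]   # row 4, column 3: mu_3 - nu_1
assert sigma[4][3] == nu[0] - nu[1]   # row 5, column 4: nu_1 - nu_2
```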
We also propose the existence of a natural limiting vertex algebra, generalizing the usual \(W_{1+\infty}\) algebra, as follows: let \(\sigma\) be a shift matrix for \(\mathfrak{gl}_{m|n}\) and let \(\mu^{1},\nu^{1}\) be minimal in the dominance order among the coweights corresponding to \(\sigma\) as in Conjecture 6.32, or equivalently those such that \(\ell^{1}=\mu^{1}_{1}\) is minimal, where we have assumed \(\mu_{m}\geq\nu_{1}\) for simplicity. More generally, let \(\mu^{r}\) and \(\nu^{r}\) denote the \(r^{th}\) smallest coweights corresponding to \(\sigma\), and let \(M_{r}=|\mu^{r}|\) and \(N_{r}=|\nu^{r}|\). Then we propose to consider the large \(r\) limit algebra, conjecturally enhanced to include \(r\) as a parameter, \[W_{1+\infty,\sigma}^{\kappa}(\mathfrak{gl}_{m|n}):=\ ``\ \lim_{r\to\infty}W_{f_{\mu^{r}},f_{\nu^{r}}}^{\kappa}(\mathfrak{gl}_{M_{r}|N_{r}})\ "\,\] which defines a \(\sigma\)-shifted variant of the \(\mathfrak{gl}_{m|n}\)-extended \(W_{1+\infty}\) vertex algebra, which was introduced in [10] for \(n=0\), and studied further and generalized to \(n\neq 0\) in [1], [1], [2], [3] and [1]. In terms of this vertex algebra, the preceding conjecture implies the following, which has an interpretation in string theory as an example of the twisted holographic principle of Costello-Li [13] for M5 branes in the \(\Omega\)-background, following the approach of [10]: _Conjecture 6.33_.: There is a canonical isomorphism of associative algebras \[\mathcal{Y}_{\sigma}(\widehat{\mathfrak{gl}}_{m|n})\xrightarrow{\cong}\mathcal{U}(W_{1+\infty,\sigma}^{\kappa}(\mathfrak{gl}_{m|n}))\,\] inducing isomorphisms as in Equation 6.17 for each compatible \(\ell\in\mathbb{N}\), \(\mu\), \(\nu\), and an identification \[V_{m,n;\sigma}\xrightarrow{\cong}W_{1+\infty,\sigma}^{\kappa}(\mathfrak{gl}_{m|n})\] between the vacuum module \(V_{m,n;\sigma}\) of the \(\sigma\)-shifted affine Yangian of \(\widehat{\mathfrak{gl}}_{m|n}\) and that of \(W_{1+\infty,\sigma}^{\kappa}(\mathfrak{gl}_{m|n})\). We hope to provide a proof of this conjecture in future work, using the geometric approach developed here. In what follows, we explain the conjecture in the case of \(\mathfrak{gl}_{1}\), for which it follows from [13], and provide some numerical evidence for the general statement. In particular, we note the following corollary of the preceding conjecture, which we verify holds in the examples below: _Corollary 6.34_.: Let \(\sigma\) be a shift matrix for \(\mathfrak{gl}_{m|n}\), and for each \(r\in\mathbb{N}\) let \(\mu^{r}\) and \(\nu^{r}\) be the dominant coweights defined as above. Then \[P_{q}(V_{m,n;\sigma})=\lim_{r\to\infty}P_{q}(W^{\kappa}_{f_{\mu^{r}},f_{\nu^{r}}}(\mathfrak{gl}_{M_{r}|N_{r}})). \tag{6.18}\] Note that the choice of shift matrix \(\sigma\) together with \(r\in\mathbb{N}\) determines not only the dominant coweights \(\mu^{r}\) and \(\nu^{r}\) but an enhancement of them to a _pyramid_, in the sense of [1]. This determines a good grading with respect to which we define \(W^{\kappa}_{f_{\mu^{r}},f_{\nu^{r}}}(\mathfrak{gl}_{M_{r}|N_{r}})\) and the Poincare polynomial on the right hand side of Equation 6.18. We now proceed to the discussion of examples: _Example 6.35_.: Let \(m=1\) and \(n=0\), so that \(Y=Y_{1,0}=\mathbb{C}^{3}\). 
In this case, there are no real roots with which to shift the Yangian, and the isomorphism \[\mathcal{Y}(\widehat{\mathfrak{gl}}_{1})\xrightarrow{\cong}\mathcal{U}(W^{\kappa}_{1+\infty}(\mathfrak{gl}_{1}))\] was explained in [1], following the proof of the AGT conjecture from [13] and the unpublished results of Feigin-Tsymbaliuk explained in [16]. Moreover, the results of [13] and their generalizations in [16], which we recall establish special cases of Conjecture 6.15 as outlined in Examples 6.20 and 6.23, respectively, also prove the special cases of Conjecture 6.31 in these examples. Further, in these cases, the proofs of the two conjectures are related precisely as described in the discussion following _loc. cit._. In particular, for each \(r\in\mathbb{N}\) we can compute the Poincare polynomial of the vacuum module of \(\mathbb{V}(\mathbb{C}^{3},r[\mathbb{C}^{2}])=W^{\kappa}_{f_{\rm prin}}(\mathfrak{gl}_{r})\) following the spectral sequence argument from Section 15.2.9 of [10], as we recall below; this is summarized in Figure 3.

Figure 3. The toric divisor \(r[\mathbb{C}^{2}]\) in \(Y_{1,0}=X_{1,0}=\mathbb{C}^{3}\), with associated pyramid \(\pi\), induced grading \(\Gamma\), and Poincare polynomial \(P_{q}\) of the vacuum module of \(W^{\kappa}_{f_{\rm prin}}(\mathfrak{gl}_{r})\)

The computation of _loc. cit._ proceeds as follows: given the divisor \(S=r[\mathbb{C}^{2}]\), consider the corresponding pyramid \(\pi\) as explained above, the grading \(\Gamma\) induced by \(\pi\), and compute the degree with respect to \(\Gamma\) of the highest weights of each of the irreducible representations occurring in the decomposition of the adjoint of \(\mathfrak{gl}_{r}\) under the \(\mathfrak{sl}_{2}\) embedding, representatives of which are written in bold in Figure 4. Each such highest weight can be lifted to a field of \(W^{\kappa}(\mathfrak{gl}_{r})\) of the corresponding conformal weight, and it is proved in _loc. cit._ that these fields form a set of strong generators, in terms of which we can compute the Poincare polynomial of the vacuum module, usually called the _character_ of the vertex algebra. In general, the character of \(W^{\kappa}_{f_{\rm prin}}(\mathfrak{gl}_{r})\) computed as outlined above is given by \[P_{q}(W^{\kappa}_{f_{\rm prin}}(\mathfrak{gl}_{r}))=\prod_{j=0}^{r-1}\prod_{k=1}^{\infty}\frac{1}{1-q^{j+k}}\,\] and we find a direct verification of Corollary 6.34 above: \[\lim_{r\to\infty}P_{q}(W^{\kappa}_{f_{\rm prin}}(\mathfrak{gl}_{r}))=\lim_{r\to\infty}\prod_{j=0}^{r-1}\prod_{k=1}^{\infty}\frac{1}{1-q^{j+k}}=\prod_{k=1}^{\infty}\frac{1}{(1-q^{k})^{k}}=M(q)=P_{q}(V_{1,0})\,\] where we have used the equality of Equation 6.6 above in the case \(m=1\). _Example 6.36_.: Let \(m=2\) and \(n=0\), so that \(Y=Y_{2,0}\to X_{2,0}=\{xy-z^{2}\}\times\mathbb{C}\). The corresponding lattice of real roots is given by \(\operatorname{Pic}(\hat{Y}_{2,0})\cong Q_{\mathfrak{sl}_{2}}\cong\mathbb{Z}\), and to begin we suppose the shift is trivial, that is, \(\sigma=\begin{bmatrix}0&0\\ 0&0\end{bmatrix}\). For each \(r\in\mathbb{N}\), there is a dominant coweight \(\mu^{r}\) corresponding to \(\sigma\) determined as above, and the corresponding divisor \(S_{\mu^{r}}\) is pictured on the left of Figure 4 below. Moreover, the corresponding pyramid \(\pi\) and good grading \(\Gamma\), together with the character of the vacuum module for the \(W\)-algebra \(W_{f_{\mu^{r}}}(\mathfrak{gl}_{2r})\) determined by this grading, are pictured on the right. 
Figure 4. The toric divisor \(S_{\mu^{r}}\) in \(Y_{2,0}\to X_{2,0}=\{xy-z^{2}\}\times\mathbb{C}\) for \(\sigma=\begin{bmatrix}0&0\\ 0&0\end{bmatrix}\), with associated pyramid \(\pi\), induced grading \(\Gamma\), and Poincare polynomial \[P_{q}(W_{f_{\mu^{r}}}(\mathfrak{gl}_{2r}))=\prod_{j=0}^{r-1}\prod_{k=1}^{\infty}\frac{1}{(1-q^{k+j})^{4}}\] of the vacuum module of \(W_{f_{\mu^{r}}}(\mathfrak{gl}_{2r})\)

For example, for \(r=1\) the coweight \(\mu^{1}\) corresponds to the trivial nilpotent \(f_{\mu^{1}}=0\), so that the corresponding \(W\)-algebra is the affine Kac-Moody algebra \(V^{\kappa}(\mathfrak{gl}_{2})\), and the conjectural map \(\mathcal{Y}(\widehat{\mathfrak{gl}}_{2})\to\mathcal{U}(V^{\kappa}(\mathfrak{gl}_{2}))\cong\mathcal{U}(\widehat{\mathfrak{gl}}_{2})\) of Equation 6.16 is simply the evaluation homomorphism constructed in [10]. 
The vertex algebra \(V^{\kappa}(\mathfrak{gl}_{2})\) is strongly generated by a collection of fields of conformal weight \(1\) enumerated by any basis of the underlying Lie algebra, and thus the character of the vacuum module is given by \(\eta(q)^{-4}\). Evidently this agrees with the \(r=1\) case of the general formula \[P_{q}(W_{f_{\mu^{r}}}(\mathfrak{gl}_{2r}))=\prod_{j=0}^{r-1}\prod_{k=1}^{\infty}\frac{1}{(1-q^{k+j})^{4}}\] from Figure 4. Moreover, we can again verify Corollary 6.34 directly in this case: \[\lim_{r\to\infty}P_{q}(W_{f_{\mu^{r}}}(\mathfrak{gl}_{2r}))=\lim_{r\to\infty}\prod_{j=0}^{r-1}\prod_{k=1}^{\infty}\frac{1}{(1-q^{k+j})^{4}}=\prod_{k=1}^{\infty}\frac{1}{(1-q^{k})^{4k}}=M(q)^{4}=P_{q}(V_{2,0}),\] where again we have used the equality of Equation 6.6.

_Example 6.37_.: Let \(m=2\) and \(n=0\), so that \(Y=Y_{2,0}\to X_{2,0}=\{xy-z^{2}\}\times\mathbb{C}\), as in the preceding example, and suppose the shift matrix is given by \(\sigma_{1}=\begin{bmatrix}0&0\\ 1&0\end{bmatrix}\). Again, the sequence of toric divisors, and their corresponding pyramids, good gradings, and vacuum module characters are pictured in Figure 5 below. For example, in the case \(r=2\) the coweight \(\mu^{2}\) corresponds to the subregular nilpotent \(f_{\mu^{2}}\in\mathfrak{gl}_{3}\), so that the corresponding vertex algebra is given by the Bershadsky-Polyakov \(W^{(2)}_{3}\) algebra, the subregular \(W\)-algebra for \(\mathfrak{sl}_{3}\), tensored with a Heisenberg algebra. It is well known that the \(W^{(2)}_{3}\) vertex algebra is strongly generated by four fields \(J,L,G^{+}\) and \(G^{-}\) of conformal weights \(1\), \(2\), \(\frac{3}{2}\) and \(\frac{3}{2}\), respectively.

Figure 5. The toric divisor \(S_{\mu^{r}}\) in \(Y_{2,0}\) for \(\sigma=\begin{bmatrix}0&0\\ 1&0\end{bmatrix}\), with associated pyramid \(\pi\), induced grading \[\Gamma=\begin{bmatrix}\mathbf{0}&\mathbf{2}&\mathbf{4}&\dots&\mathbf{2r-2}&\mathbf{2}&\mathbf{4}&\mathbf{6}&\dots&\mathbf{2r-2}\\ -2&0&2&\dots&2r-4&0&2&4&\dots&2r-4\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 4-2r&6-2r&8-2r&\dots&2&6-2r&8-2r&10-2r&\dots&2\\ 2-2r&4-2r&6-2r&\dots&0&4-2r&6-2r&8-2r&\dots&0\end{bmatrix},\] and Poincare polynomial \[P_{q}(W_{f_{\mu^{r}}}(\mathfrak{gl}_{2r-1}))=\prod_{j=0}^{r-1}\prod_{k=1}^{\infty}\frac{(1-q^{k})(1-q^{k+(r-1)})^{2}}{(1-q^{k+j})^{4}}\] of the vacuum module of \(W_{f_{\mu^{r}}}(\mathfrak{gl}_{2r-1})\).

In fact, the traditional conformal weights for the Bershadsky-Polyakov algebra correspond to the Dynkin grading on \(W_{f_{\mu^{2}}}(\mathfrak{sl}_{3})\), while with respect to the good grading induced by the pyramid above, the fields \(G^{+}\) and \(G^{-}\) are of degree \(1\) and \(2\). Thus, including the auxiliary Heisenberg generator, the character of \(\mathbb{V}(Y_{2,0},S_{\mu^{2}})=W_{f_{\mu^{2}}}(\mathfrak{gl}_{3})\) is given by \[P_{q}(W_{f_{\mu^{2}}}(\mathfrak{gl}_{3}))=\prod_{k=1}^{\infty}\frac{1}{(1-q^{k})^{3}(1-q^{k+1})^{2}},\] in agreement with the general formula given in Figure 5 above.
We can again verify Corollary 6.34 directly in this case: \[\lim_{r\to\infty}P_{q}(W_{f_{\mu^{r}}}(\mathfrak{gl}_{2r-1}))=\lim_{r\to\infty}\prod_{j=0}^{r-1}\prod_{k=1}^{\infty}\frac{(1-q^{k})(1-q^{k+(r-1)})^{2}}{(1-q^{k+j})^{4}}=\prod_{k=1}^{\infty}\frac{(1-q^{k})}{(1-q^{k})^{4k}}=P_{q}(V_{2,0;\sigma_{1}}).\]

_Example 6.38_.: Let \(m=2\) and \(n=0\), so that \(Y=Y_{2,0}\to X_{2,0}=\{xy-z^{2}\}\times\mathbb{C}\), as in the preceding two examples, and suppose now that the shift matrix is given by \(\sigma_{2}=\begin{bmatrix}0&0\\ 2&0\end{bmatrix}\). The sequence of toric divisors, and their corresponding pyramids, good gradings, and vacuum module characters are pictured in Figure 6 below. We can yet again verify Corollary 6.34 directly in this case: \[\lim_{r\to\infty}P_{q}(W_{f_{\mu^{r}}}(\mathfrak{gl}_{2r-1}))=\lim_{r\to\infty}\prod_{j=0}^{r-1}\prod_{k=1}^{\infty}\frac{(1-q^{k})(1-q^{k+1})(1-q^{k+(r-1)})^{2}}{(1-q^{k+j})^{4}(1-q^{k+r})^{2}}=\prod_{k=1}^{\infty}\frac{(1-q^{k})(1-q^{k+1})}{(1-q^{k})^{4k}}=P_{q}(V_{2,0;\sigma_{2}}).\]

Figure 6. The toric divisor \(S_{\mu^{r}}\) in \(Y_{2,0}\) for \(\sigma=\begin{bmatrix}0&0\\ 2&0\end{bmatrix}\), with associated pyramid \(\pi\), induced grading \(\Gamma\), and Poincare polynomial \(P_{q}\) of the vacuum module of \(W_{f_{\mu^{r}}}(\mathfrak{gl}_{2r})\)

_Example 6.39_.: Let \(m=n=1\) so that \(Y_{1,1}=|\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)|\to X_{1,1}=\{xy-zw\}\), so that the lattice of real (super) roots is given by \(\operatorname{Pic}(Y_{1,1})\cong Q_{\mathfrak{sl}_{1|1}}\cong\mathbb{Z}\), and we consider the case that the shift is trivial, so that \(\sigma=\begin{bmatrix}0&0\\ 0&0\end{bmatrix}\). As in Example 6.36, in the unshifted case with \(r=1\) the corresponding \(W\)-algebra is the affine Kac-Moody vertex superalgebra \(V^{\kappa}(\mathfrak{gl}_{1|1})\), which is strongly generated by two odd and two even fields all of conformal weight \(1\), so that the character of the vacuum module is given by \[P_{q}(V^{\kappa}(\mathfrak{gl}_{1|1}))=\prod_{k=1}^{\infty}\frac{(1+q^{k})^{2}}{(1-q^{k})^{2}}.\] More generally, the vacuum character of the rectangular \(W\)-superalgebra which corresponds to the divisor \(S_{\mu^{r},\nu^{r}}\) via Conjecture 6.30 is computed analogously, as summarized in Figure 7: \[P_{q}(W_{f_{\mu^{r}},f_{\nu^{r}}}(\mathfrak{gl}_{r|r}))=\prod_{j=0}^{r-1}\prod_{k=1}^{\infty}\frac{(1+q^{k+j})^{2}}{(1-q^{k+j})^{2}}.\] Again, we can verify Corollary 6.34 directly in this case: \[\lim_{r\to\infty}P_{q}(W_{f_{\mu^{r}},f_{\nu^{r}}}(\mathfrak{gl}_{r|r}))=\lim_{r\to\infty}\prod_{j=0}^{r-1}\prod_{k=1}^{\infty}\frac{(1+q^{k+j})^{2}}{(1-q^{k+j})^{2}}=\prod_{k=1}^{\infty}\frac{(1+q^{k})^{2k}}{(1-q^{k})^{2k}}=P_{q}(V_{1|1}),\] in agreement with the computations of the character of the \(\mathcal{N}=2\) superconformal \(W_{1+\infty}\) algebras studied in [1] and [1]. More generally, the computations analogous to those of Examples 6.37 and 6.38 can evidently be checked similarly for divisors on \(Y_{1,1}\) with non-trivial shift.

Figure 7. The toric divisor \(S_{\mu^{r},\nu^{r}}\) in \(Y_{1,1}\) for \(\sigma=\begin{bmatrix}0&0\\ 0&0\end{bmatrix}\), with pyramid \(\pi\), induced grading \(\Gamma\), and Poincare polynomial \(P_{q}\) of the vacuum module of \(W_{f_{\mu^{r}},f_{\nu^{r}}}(\mathfrak{gl}_{r|r})\)
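These limit computations all rest on the same stabilisation: in the finite-\(r\) products, the factor \(1\pm q^{m}\) occurs with exponent proportional to \(\min(m,r)\), which equals \(m\) as soon as \(r>m\). This can be checked order by order with truncated power series; the following Python sketch (the truncation order \(N\) and all helper names are our own, and the overall prefactor powers of \(q\) are ignored) verifies the unshifted \(\mathfrak{gl}_{2r}\) and \(\mathfrak{gl}_{r|r}\) limits:

```python
# Order-by-order check of the limit formulas for the vacuum characters:
#   prod_{j<r} prod_{k>=1} (1-q^{k+j})^{-4}               -->  prod_m (1-q^m)^{-4m} = M(q)^4
#   prod_{j<r} prod_{k>=1} (1+q^{k+j})^2 (1-q^{k+j})^{-2} -->  prod_m (1+q^m)^{2m} (1-q^m)^{-2m}
N = 12  # compare q-expansions up to and including q^N

def mul(a, b):
    """Product of two power series mod q^(N+1), stored as coefficient lists."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

def one_plus(m):
    """1 + q^m, truncated."""
    s = [0] * (N + 1)
    s[0] = 1
    if m <= N:
        s[m] = 1
    return s

def inv_one_minus(m):
    """1/(1-q^m) = 1 + q^m + q^{2m} + ..., truncated."""
    s = [0] * (N + 1)
    for i in range(0, N + 1, m):
        s[i] = 1
    return s

def prod(factors):
    out = [1] + [0] * N
    for f in factors:
        out = mul(out, f)
    return out

def char_gl2r(r):
    """P_q(W_{f_{mu^r}}(gl_{2r})), truncated (factors with k+j > N equal 1)."""
    return prod([inv_one_minus(k + j) for j in range(r)
                 for k in range(1, N + 1 - j) for _ in range(4)])

def char_glrr(r):
    """P_q(W_{f_{mu^r}, f_{nu^r}}(gl_{r|r})), truncated."""
    fs = []
    for j in range(r):
        for k in range(1, N + 1 - j):
            fs += [one_plus(k + j)] * 2 + [inv_one_minus(k + j)] * 2
    return prod(fs)

macmahon4 = prod([inv_one_minus(m) for m in range(1, N + 1) for _ in range(4 * m)])
super_lim = prod([one_plus(m) for m in range(1, N + 1) for _ in range(2 * m)]
                 + [inv_one_minus(m) for m in range(1, N + 1) for _ in range(2 * m)])

assert char_gl2r(N) == macmahon4  # exponents min(m, r) have saturated at r = N
assert char_glrr(N) == super_lim
print(macmahon4)                  # first coefficients of M(q)^4
```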
2305.19564
About Decisiveness of Dynamic Probabilistic Models
Decisiveness of infinite Markov chains with respect to some (finite or infinite) target set of states is a key property that allows to compute the reachability probability of this set up to an arbitrary precision. Most of the existing works assume constant weights for defining the probability of a transition in the considered models. However numerous probabilistic modelings require (dynamic) weights that depend on both the current state and the transition. So we introduce a dynamic probabilistic version of counter machine (pCM). After establishing that decisiveness is undecidable for pCMs even with constant weights, we study the decidability of decisiveness for subclasses of pCM. We show that, without restrictions on dynamic weights, decisiveness is undecidable with a single state and single counter pCM. On the contrary with polynomial weights, decisiveness becomes decidable for single counter pCMs under mild conditions. Then we show that decisiveness of probabilistic Petri nets (pPNs) with polynomial weights is undecidable even when the target set is upward-closed unlike the case of constant weights. Finally we prove that the standard subclass of pPNs with a regular language is decisive with respect to a finite set whatever the kind of weights.
Alain Finkel, Serge Haddad, Lina Ye
2023-05-31T05:18:12Z
http://arxiv.org/abs/2305.19564v1
# About Decisiveness of Dynamic Probabilistic Models

###### Abstract

Decisiveness of infinite Markov chains with respect to some (finite or infinite) target set of states is a key property that allows to compute the reachability probability of this set up to an arbitrary precision. Most of the existing works assume constant weights for defining the probability of a transition in the considered models. However numerous probabilistic modelings require (dynamic) weights that depend on both the current state and the transition. So we introduce a dynamic probabilistic version of counter machine (pCM). After establishing that decisiveness is undecidable for pCMs even with constant weights, we study the decidability of decisiveness for subclasses of pCM. We show that, without restrictions on dynamic weights, decisiveness is undecidable with a single state and single counter pCM. On the contrary with polynomial weights, decisiveness becomes decidable for single counter pCMs under mild conditions. Then we show that decisiveness of probabilistic Petri nets (pPNs) with polynomial weights is undecidable even when the target set is upward-closed unlike the case of constant weights. Finally we prove that the standard subclass of pPNs with a regular language is decisive with respect to a finite set whatever the kind of weights.

infinite Markov chain, reachability probability, decisiveness

Two approaches have been followed for computing reachability probabilities in infinite Markov chains. The first one is to consider the Markov chains associated with a particular class of probabilistic models (like pPDA or pPN) and some specific target sets, and to exploit the properties of these models to design a CRP algorithm. For instance, in [8] the authors exhibit a PSPACE algorithm for pPDA and PTIME algorithms for single-state pPDA and for one-counter automata. The second one consists in exhibiting a property of Markov chains that yields a generic algorithm for solving the CRP problem, and then looking for models that generate Markov chains fulfilling this property. _Decisiveness_ of Markov chains is such a property, and it has been shown that pLCS are decisive and that probabilistic Petri nets (pPN) are decisive when the target set is upward-closed [1].

**Two limits of the previous approaches.** In most of the works, the probabilistic models associate a constant (also called _static_) weight with each transition and get transition probabilities by normalizing these weights among the enabled transitions in the current state (except for some semantics of pLCS like in [17], where transition probabilities depend on the state due to the possibility of message losses). This makes it impossible to model phenomena like congestion in networks (resp. performance collapse in distributed systems), where the number of messages (resp. processes) exceeding some threshold leads to an increasing probability of message arrivals (resp. process creations) before message departures (resp. process terminations). In order to handle them, one needs to consider _dynamic_ weights, i.e., weights depending on the current state.

Generally, given some probabilistic model and some kind of target set of states, it may occur that some instances of the model are decisive and some others are not. This raises the issue of the decidability status of the decisiveness problem. Interestingly, the decidability of the decisiveness property has only been studied, and shown decidable, for pPDA with constant weights [15].
**Our contributions.**
* In order to unify our analysis of decisiveness, we introduce a dynamic probabilistic version of counter machines (pCM), and we first establish that decisiveness is undecidable for pCMs even with constant weights.
* Then we study the decidability of decisiveness for one-counter pCMs. We show that, without restrictions on dynamic weights, decisiveness is undecidable for one-counter pCMs even with a single state. On the contrary, with polynomial weights, decisiveness becomes decidable for a large subclass of one-counter pCMs, called probabilistic homogeneous machines (pHM).
* Then we show that decisiveness of probabilistic Petri nets (pPNs) with polynomial weights is undecidable when the target set is finite or upward-closed (unlike the case of constant weights). Finally, we prove that the standard subclass of pPNs with a regular language is decisive with respect to a finite set, whatever the kind of weights.
* Some of our results are not only technically involved but contain new ideas. In particular, the proof of undecidability of decisiveness for pPNs with polynomial weights with respect to a finite or upward closed set is based on an original weak simulation of CM. Similarly, the model of pHM can be viewed as a dynamic extension of quasi-birth-death processes, well known in the performance evaluation field [5].

**Organisation.** Section 2 recalls decisive Markov chains, presents the classical algorithm for solving the CRP problem, and shows that decisiveness is closely related to recurrence of Markov chains. In Section 3, we introduce pCMs and show that decisiveness is undecidable for static pCMs. In Section 4, we study the decidability status of decisiveness for probabilistic one-counter pCMs and, in Section 5, the decidability status of decisiveness for pPNs. Finally, in Section 6 we conclude and give some perspectives on this work. All missing proofs can be found in the Appendix.

## 2 Decisive Markov chains

As usual, \(\mathbb{N}\) and \(\mathbb{N}^{*}\) denote respectively the set of non-negative integers and the set of positive integers. The notations \(\mathbb{Q}\), \(\mathbb{Q}_{\geq 0}\) and \(\mathbb{Q}_{>0}\) denote the set of rationals, non-negative rationals and positive rationals. Let \(F\subseteq E\). When there is no ambiguity about \(E\), \(\overline{F}\) will denote \(E\setminus F\).

### Markov chains: definitions and properties

**Notations.** A set \(S\) is _countable_ if there exists an injective function from \(S\) to the set of natural numbers: hence it could be finite or countably infinite. Let \(S\) be a countable set of elements called states. Then \(Dist(S)=\{\Delta:S\rightarrow\mathbb{Q}_{\geq 0}\mid\sum_{s\in S}\Delta(s)=1\}\) is the set of _rational distributions_ over \(S\). Let \(\Delta\in Dist(S)\); then \(Supp(\Delta)=\Delta^{-1}(\mathbb{Q}_{>0})\). Let \(T\subseteq S\); then \(S\setminus T\) will also be denoted \(\overline{T}\).

**Definition** ((Effective) Markov chain).: _A Markov chain \(\mathcal{M}=(S,p)\) is a tuple where:_
* _\(S\) is a countable set of states;_
* _\(p\) is the transition function from \(S\) to \(Dist(S)\)._
_When for all \(s\in S\), \(Supp(p(s))\) is finite and the function \(s\mapsto p(s)\) is computable, one says that \(\mathcal{M}\) is effective._

When \(S\) is countably infinite, we say that \(\mathcal{M}\) is _infinite_ and we sometimes identify \(S\) with \(\mathbb{N}\). We also denote \(p(s)(s^{\prime})\) by \(p(s,s^{\prime})\) and \(p(s,s^{\prime})>0\) by \(s\xrightarrow{p(s,s^{\prime})}s^{\prime}\).
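To fix ideas, an effective Markov chain can be rendered in a few lines of code. The sketch below is our own encoding (the weight functions `f` and `g` are an arbitrary choice of computable positive functions, anticipating the chain \(\mathcal{M}_{1}\) of Figure 1 below): \(p\) is a computable map from a state to a finite rational distribution.

```python
from fractions import Fraction

def p(s):
    """An effective Markov chain on the naturals: p maps a state to a finite
    rational distribution over its successors.  From i > 0 one moves right
    with weight f(i) and left with weight g(i); from 0 one moves to 1."""
    f = lambda i: i * i + 1   # computable positive weights (our choice)
    g = lambda i: 2
    if s == 0:
        return {1: Fraction(1)}
    total = f(s) + g(s)
    return {s + 1: Fraction(f(s), total), s - 1: Fraction(g(s), total)}

for s in range(6):            # each p(s) is a finite rational distribution
    dist = p(s)
    assert sum(dist.values()) == 1 and all(pr > 0 for pr in dist.values())
```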
A Markov chain is also viewed as a transition system whose transition relation \(\rightarrow\) is defined by \(s\to s^{\prime}\) if \(p(s,s^{\prime})>0\). Let \(\mathcal{M}_{1}\) be the Markov chain of Figure 1. In any state \(i>0\), the probability of going to the "right" is \(p(i,i+1)=\frac{f(i)}{f(i)+g(i)}\) and of going to the "left" is \(p(i,i-1)=\frac{g(i)}{f(i)+g(i)}\). In state \(0\), one goes to \(1\) with probability \(1\). \(\mathcal{M}_{1}\) is effective if the functions \(f\) and \(g\) are computable.

Figure 1: A pCM and its Markov chain \(\mathcal{M}_{1}\), with \(0<f(n)\) and \(0<g(n)\) for all \(n\in\mathbb{N}\).

We denote by \(\rightarrow^{*}\) the reflexive and transitive closure of \(\rightarrow\), and we say that \(s^{\prime}\) is _reachable from \(s\)_ if \(s\rightarrow^{*}s^{\prime}\). We say that a subset \(A\subseteq S\) is _reachable_ from \(s\) if some \(s^{\prime}\in A\) is reachable from \(s\), and we denote this by \(s\rightarrow^{*}A\). Let us remark that every finite path of \(\mathcal{M}\) can be extended into (at least) one infinite path. Given an initial state \(s_{0}\), the _sampling_ of a Markov chain \(\mathcal{M}\) is an _infinite random sequence of states_ (i.e., a path) \(\sigma=s_{0}s_{1}\ldots\) such that for all \(i\geq 0\), \(s_{i}\to s_{i+1}\). As usual, the corresponding \(\sigma\)-algebra is generated by the finite prefixes of infinite paths, and the probability of a measurable subset \(\Pi\) of infinite paths, given an initial state \(s_{0}\), is denoted \(\mathbf{Pr}_{\mathcal{M},s_{0}}(\Pi)\). In particular, denoting by \(s_{0}\ldots s_{n}S^{\omega}\) the set of infinite paths with \(s_{0}\ldots s_{n}\) as prefix, \(\mathbf{Pr}_{\mathcal{M},s_{0}}(s_{0}\ldots s_{n}S^{\omega})=\prod_{0\leq i<n}p(s_{i},s_{i+1})\).

**Notations.** From now on, **G** (resp. **F**, **X**) denotes the always (resp. eventual, next) operator of LTL, and **E** the existential operator of CTL\({}^{*}\)[4]. Let \(A\subseteq S\). We say that \(\sigma\) _reaches_ \(A\) if \(\exists i\in\mathbb{N}\ s_{i}\in A\) and that \(\sigma\) _visits_ \(A\) if \(\exists i>0\ s_{i}\in A\). The probability that starting from \(s_{0}\), the path \(\sigma\) reaches (resp. visits) \(A\) will be denoted by \(\mathbf{Pr}_{\mathcal{M},s_{0}}(\mathbf{F}A)\) (resp. \(\mathbf{Pr}_{\mathcal{M},s_{0}}(\mathbf{XF}A)\)).

The next definition states qualitative and quantitative properties of a Markov chain.

**Definition** (Irreducibility, recurrence, transience).: _Let \(\mathcal{M}=(S,p)\) be a Markov chain and \(s\in S\). Then:_
* _\(\mathcal{M}\) is irreducible if for all \(s,s^{\prime}\in S\), \(s\rightarrow^{*}s^{\prime}\);_
* _\(s\) is recurrent if \(\mathbf{Pr}_{\mathcal{M},s}(\mathbf{XF}\{s\})=1\), otherwise \(s\) is transient._

The next proposition states that in an irreducible Markov chain, all states are in the same category [18].

**Proposition**.: _Let \(\mathcal{M}=(S,p)\) be an irreducible Markov chain and \(s,s^{\prime}\in S\). Then \(s\) is recurrent if and only if \(s^{\prime}\) is recurrent._

Thus an irreducible Markov chain will be said to be transient or recurrent depending on the category of its states (all states are in the same category). In the remainder of this section, we will relate this category with techniques for computing reachability probabilities.

\(\mathcal{M}_{1}\) of Figure 1 is clearly irreducible. Let us define \(p_{n}=\frac{f(n)}{f(n)+g(n)}\).
Then (see Proposition 2.2 in the Appendix), \(\mathcal{M}_{1}\) is recurrent if and only if \(\sum_{n\in\mathbb{N}}\prod_{1\leq m<n}\rho_{m}=\infty\) with \(\rho_{m}=\frac{1-p_{m}}{p_{m}}\); when it is transient, the probability that starting from \(i\) the random path reaches \(0\) is equal to \(\frac{\sum_{n\geq i}\prod_{1\leq m<n}\rho_{m}}{\sum_{n\in\mathbb{N}}\prod_{1\leq m<n}\rho_{m}}\).

### Decisive Markov chains

One of the goals of the quantitative analysis of infinite Markov chains is to approximately compute reachability probabilities. Let us formalize it. Given a finite representation of a subset \(A\subseteq S\), one says that this representation is _effective_ if one can decide the membership problem for \(A\). With a slight abuse of language, we identify \(A\) with any effective representation of \(A\).

**The Computation of Reachability Probability (CRP) problem**
* Input: an effective Markov chain \(\mathcal{M}\), an (initial) state \(s_{0}\), an effective subset of states \(A\), and a rational \(\theta>0\).
* Output: an interval \([low,up]\) such that \(up-low\leq\theta\) and \(\mathbf{Pr}_{\mathcal{M},s_{0}}(\mathbf{F}A)\in[low,up]\).

In finite Markov chains, there is a well-known algorithm for computing exactly the reachability probabilities in polynomial time [4]. In infinite Markov chains, there are (at least) two possible research directions: (1) either using the specific features of a formalism to design such a CRP algorithm [15], (2) or requiring a supplementary property on Markov chains in order to design an "abstract" algorithm, then verifying that, given a formalism, this property is satisfied, and finally transforming this algorithm into a concrete one. The _decisiveness_-based approach follows the second direction [1]. In words, decisiveness w.r.t. \(s_{0}\) and \(A\) means that almost surely the random path \(\sigma\) starting from \(s_{0}\) will reach \(A\) or some state \(s^{\prime}\) from which \(A\) is unreachable.

**Definition**.: _A Markov chain \(\mathcal{M}\) is decisive w.r.t. \(s_{0}\in S\) and \(A\subseteq S\) if:_ \[\mathbf{Pr}_{\mathcal{M},s_{0}}(\mathbf{G}(\overline{A}\cap\mathbf{E}\mathbf{F}A))=0\]

Then, under the hypotheses of decisiveness w.r.t. \(s_{0}\) and \(A\) and decidability of the reachability problem w.r.t. \(A\), Algorithm 1 solves the CRP problem. Let us explain Algorithm 1. If \(A\) is unreachable from \(s_{0}\), it returns the singleton interval \([0,0]\). Otherwise it maintains a lower bound \(pmin\) (initially 0) and an upper bound \(pmax\) (initially 1) of the reachability probability and builds some prefix of the infinite execution tree of \(\mathcal{M}\). It also maintains the probability to reach a vertex in this tree. There are three possible cases when examining the state \(s\) associated with the current vertex along a path of probability \(q\): (1) \(s\in A\), and then the lower bound is incremented by \(q\); (2) \(A\) is unreachable from \(s\), and then the upper bound is decremented by \(q\); (3) otherwise the prefix of the tree is extended by the successors of \(s\). The lower bound always converges to the searched probability, while due to the decisiveness property the upper bound also converges to it, ensuring termination of the algorithm.

**Proposition** ([1]).: Algorithm 1 terminates and computes an interval of length at most \(\theta\) containing \(\mathbf{Pr}_{\mathcal{M},s_{0}}(\mathbf{F}A)\) when applied to a decisive Markov chain \(\mathcal{M}\) w.r.t.
\(s_{0}\) and \(A\) with a decidable reachability problem w.r.t. \(A\).

Algorithm 1 can be applied to probabilistic Lossy Channel Systems (pLCS) since they are decisive (Corollary 4.7 in [1]; see [3] for the first statement) and reachability is decidable in LCS [2]. It can also be applied to pVASSs w.r.t. upward closed sets, because Corollary 4.4 in [1] states that pVASSs are decisive w.r.t. _upward closed sets_. Observe that these results hold due to restrictions on transition probabilities that we will discuss later on.

**Observations.** The test \(pmin=0\) is not necessary, but adding it avoids returning \(0\) as a lower bound, which would be inaccurate since entering this loop means that \(A\) is reachable from \(s_{0}\). Extractions from the front are performed in a way that ensures the execution tree will be covered (for instance by a breadth-first exploration).

Let \(\mathcal{M}\) be a Markov chain. One denotes by \(Post^{\star}_{\mathcal{M}}(A)\) the set of states that can be reached from some state of \(A\), and by \(Pre^{\star}_{\mathcal{M}}(A)\) the set of states that can reach \(A\). While decisiveness has been used in several contexts, including uncountable probabilistic systems [6], its relation with standard properties of Markov chains has not been investigated. This is the goal of the next definition and proposition.

**Definition**.: _Let \(\mathcal{M}\) be a Markov chain, \(s_{0}\in S\) and \(A\subseteq S\) such that \(s_{0}\not\in A\cup\overline{Pre^{\star}_{\mathcal{M}}(A)}\). The Markov chain \(\mathcal{M}_{s_{0},A}=(S_{s_{0},A},p_{s_{0},A})\) is defined as follows:_
* _\(S_{s_{0},A}\) is the union of (1) the smallest set containing \(s_{0}\) and such that for all \(s\in S_{s_{0},A}\) and \(s^{\prime}\not\in A\cup\overline{Pre^{\star}_{\mathcal{M}}(A)}\) with \(s\to s^{\prime}\), one has \(s^{\prime}\in S_{s_{0},A}\), and (2) \(\{s_{\perp}\}\), where \(s_{\perp}\) is a new state;_
* _for all \(s,s^{\prime}\neq s_{\perp}\), \(p_{s_{0},A}(s,s^{\prime})=p(s,s^{\prime})\) and \(p_{s_{0},A}(s,s_{\perp})=\sum_{s^{\prime}\in A\cup\overline{Pre^{\star}_{\mathcal{M}}(A)}}p(s,s^{\prime})\);_
* _\(p_{s_{0},A}(s_{\perp},s_{0})=1\)._

The equivalence between decisiveness of \(\mathcal{M}\) w.r.t. \(s_{0}\in S\) and \(A\subseteq S\) and recurrence of \(\mathcal{M}_{s_{0},A}\) allows one to apply standard criteria for recurrence in order to check decisiveness. For instance, we will use the criterion for the Markov chain of Figure 1 in Section 4.

**Proposition 9**.: _Let \(\mathcal{M}=(S,p)\) be a Markov chain, \(s_{0}\in S\) and \(A\subseteq S\) such that \(s_{0}\not\in A\cup\overline{Pre^{*}_{\mathcal{M}}(A)}\). Then \(\mathcal{M}_{s_{0},A}\) is irreducible. Furthermore \(\mathcal{M}\) is decisive w.r.t. \(s_{0}\) and \(A\) if and only if \(\mathcal{M}_{s_{0},A}\) is recurrent._

Proof.: Let \(s\in S_{s_{0},A}\setminus\{s_{\perp}\}\). Then \(s\) is reachable from \(s_{0}\), and \(A\) is reachable from \(s\) in \(\mathcal{M}\), implying that \(s\rightarrow^{*}s_{\perp}\) in \(\mathcal{M}_{s_{0},A}\) (using a shortest path for reachability). Since \(s_{\perp}\to s_{0}\), \(s_{\perp}\rightarrow^{*}s\). Thus \(\mathcal{M}_{s_{0},A}\) is irreducible. \(\mathcal{M}_{s_{0},A}\) is recurrent iff \(\mathbf{Pr}_{\mathcal{M}_{s_{0},A},s_{\perp}}(\mathbf{XF}\{s_{\perp}\})=1\) iff \(\mathbf{Pr}_{\mathcal{M}_{s_{0},A},s_{0}}(\mathbf{F}\{s_{\perp}\})=1\) iff \(\mathbf{Pr}_{\mathcal{M},s_{0}}(\mathbf{F}(A\cup\overline{Pre^{*}_{\mathcal{M}}(A)}))=1\) iff \(\mathcal{M}\) is decisive w.r.t. \(s_{0}\) and \(A\).
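To make Algorithm 1 concrete, here is a minimal Python sketch (our encoding: the membership and reachability tests are passed as oracle functions, and `front` plays the role of the execution-tree frontier). It is run on the instance of \(\mathcal{M}_{1}\) with constant weights \(f=1\) and \(g=2\), which is recurrent by the criterion above (\(\rho_{m}=2\) for all \(m\)), hence decisive w.r.t. \(\{0\}\) with true reachability probability \(1\):

```python
from collections import deque
from fractions import Fraction

def algorithm1(p, s0, in_A, can_reach_A, theta):
    """Sketch of Algorithm 1: bracket Pr[F A] within theta for a decisive,
    effective Markov chain.  p(s) returns the finite successor distribution;
    in_A and can_reach_A are the (assumed decidable) oracles."""
    if not can_reach_A(s0):
        return Fraction(0), Fraction(0)
    pmin, pmax = Fraction(0), Fraction(1)
    front = deque([(s0, Fraction(1))])       # breadth-first exploration
    while pmax - pmin > theta or pmin == 0:
        s, q = front.popleft()
        if in_A(s):
            pmin += q                        # these paths surely reach A
        elif not can_reach_A(s):
            pmax -= q                        # these paths surely miss A
        else:
            for t, pr in p(s).items():       # extend the execution tree
                front.append((t, q * pr))
    return pmin, pmax

# M_1 with f = 1, g = 2: irreducible, so A = {0} is reachable from everywhere,
# and rho_m = 2 gives recurrence, hence decisiveness and Pr[F {0}] = 1.
p = lambda s: ({1: Fraction(1)} if s == 0
               else {s + 1: Fraction(1, 3), s - 1: Fraction(2, 3)})
low, up = algorithm1(p, 1, lambda s: s == 0, lambda s: True, Fraction(1, 4))
print(low, up)  # an interval of length <= 1/4 containing the true value 1
```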
## 3 Probabilistic counter machines

We now introduce _probabilistic Counter Machines (pCM)_ in order to study the decidability of the decisiveness property w.r.t. several relevant subclasses of pCM.

**Definition 10** (pCM).: _A probabilistic counter machine (pCM) is a tuple \(\mathcal{C}=(Q,P,\Delta,W)\) where:_
* \(Q\) _is a finite set of control states;_
* \(P=\{p_{1},\ldots,p_{d}\}\) _is a finite set of counters (also called places);_
* \(\Delta=\Delta_{0}\uplus\Delta_{1}\) _where_ \(\Delta_{0}\) _is a finite subset of_ \(Q\times P\times\mathbb{N}^{d}\times Q\) _and_ \(\Delta_{1}\) _is a finite subset of_ \(Q\times\mathbb{N}^{d}\times\mathbb{N}^{d}\times Q\)_;_
* \(W\) _is a computable function from_ \(\Delta\times\mathbb{N}^{d}\) _to_ \(\mathbb{N}^{*}\)_._

**Notations**.: A transition \(t\in\Delta_{0}\) is denoted \(t=(q_{t}^{-},p_{t},\mathbf{Post}(t),q_{t}^{+})\) and also \(q_{t}^{-}\xrightarrow{p_{t},\mathbf{Post}(t)}q_{t}^{+}\). A transition \(t\in\Delta_{1}\) is denoted \(t=(q_{t}^{-},\mathbf{Pre}(t),\mathbf{Post}(t),q_{t}^{+})\) and also \(q_{t}^{-}\xrightarrow{\mathbf{Pre}(t),\mathbf{Post}(t)}q_{t}^{+}\). Let \(t\) be a transition of \(\mathcal{C}\). Then \(W(t)\) is the function from \(\mathbb{N}^{d}\) to \(\mathbb{Q}_{>0}\) defined by \(W(t)(\mathbf{m})=W(t,\mathbf{m})\). A polynomial is _positive_ if all its coefficients are non-negative and there is a positive constant term. When for all \(t\in\Delta\), \(W(t)\) is a positive polynomial whose variables are the counters, we say that \(\mathcal{C}\) is a _polynomial_ pCM.

A _configuration_ of \(\mathcal{C}\) is an item of \(Q\times\mathbb{N}^{d}\). Let \(s=(q,\mathbf{m})\) be a configuration and \(t=(q_{t}^{-},p_{t},\mathbf{Post}(t),q_{t}^{+})\) be a transition in \(\Delta_{0}\). Then \(t\) is _enabled_ in \(s\) if \(\mathbf{m}(p_{t})=0\) and \(q=q_{t}^{-}\); its _firing_ leads to the configuration \((q_{t}^{+},\mathbf{m}+\mathbf{Post}(t))\). Let \(t=(q_{t}^{-},\mathbf{Pre}(t),\mathbf{Post}(t),q_{t}^{+})\in\Delta_{1}\). Then \(t\) is _enabled_ in \(s\) if \(\mathbf{m}\geq\mathbf{Pre}(t)\) and \(q=q_{t}^{-}\); its _firing_ leads to the configuration \(s^{\prime}=(q_{t}^{+},\mathbf{m}-\mathbf{Pre}(t)+\mathbf{Post}(t))\). One denotes the configuration change by \(s\xrightarrow{t}s^{\prime}\). One denotes by \(En(s)\) the set of transitions enabled in \(s\), and sets \(Weight(s)=\sum_{t\in En(s)}W(t,\mathbf{m})\).

Let \(\sigma=t_{1}\dots t_{n}\) be a sequence of transitions. We define the enabling and the firing of \(\sigma\) by induction. The empty sequence is always enabled in \(s\) and its firing leads to \(s\). When \(n>0\), \(\sigma\) is enabled if \(s\xrightarrow{t_{1}}s_{1}\) and \(t_{2}\dots t_{n}\) is enabled in \(s_{1}\); the firing of \(\sigma\) leads to the configuration reached by \(t_{2}\dots t_{n}\) from \(s_{1}\). A configuration \(s\) is _reachable_ from some \(s_{0}\) if there is a firing sequence \(\sigma\) that reaches \(s\) from \(s_{0}\). When \(Q\) is a singleton, one omits the control states in the definition of transitions and configurations.

We now provide the semantics of a pCM as a countable Markov chain. Let \(\mathcal{C}\) be a pCM. Then the Markov chain \(\mathcal{M}_{\mathcal{C}}=(S,p)\) is defined by:
* \(S=Q\times\mathbb{N}^{d}\);
* For all \(s=(q,\mathbf{m})\in S\), if \(En(s)=\emptyset\) then \(p(s,s)=1\). Otherwise for all \(s^{\prime}\in S\): \[p(s,s^{\prime})=Weight(s)^{-1}\sum_{s\xrightarrow{t}s^{\prime}}W(t,\mathbf{m})\]

**Notation.** Let \(s\in S\).
For the sake of clarity, \(\mathbf{Pr}_{\mathcal{C},s}\) will denote \(\mathbf{Pr}_{\mathcal{M}_{\mathcal{C}},s}\).

For establishing the undecidability results, we will reduce an undecidable problem related to _counter programs_, which are a variant of CM. Let us recall that a _\(d\)-counter program_ \(\mathcal{P}\) is defined by a set of \(d\) counters \(\{c_{1},\dots,c_{d}\}\) and a set of \(n+1\) instructions labelled by \(\{0,\dots,n\}\), where for all \(i<n\), the instruction \(i\) is of type
* either (1) \(c_{j}\gets c_{j}+1;\mathbf{goto}\ i^{\prime}\) with \(1\leq j\leq d\) and \(0\leq i^{\prime}\leq n\),
* or (2) **if** \(c_{j}>0\) **then** \(c_{j}\gets c_{j}-1;\mathbf{goto}\ i^{\prime}\), **else goto** \(i^{\prime\prime}\) with \(1\leq j\leq d\) and \(0\leq i^{\prime},i^{\prime\prime}\leq n\),
and the instruction \(n\) is **halt**. The program starts at instruction \(0\) and halts if it reaches the instruction \(n\). The halting problem for two-counter programs asks, given a two-counter program \(\mathcal{P}\) and initial values of counters, whether \(\mathcal{P}\) eventually halts. It is undecidable [19].

We introduce a subclass of two-counter programs that we call _normalized_. A normalized two-counter program \(\mathcal{P}\) starts by resetting its counters and, on termination, resets its counters before halting.

**Normalized two-counter program.** The first two instructions of a normalized two-counter program reset counters \(c_{1},c_{2}\) as follows:
* \(0\): **if** \(c_{1}>0\) **then** \(c_{1}\gets c_{1}-1;\mathbf{goto}\ 0\) **else goto** \(1\)
* \(1\): **if** \(c_{2}>0\) **then** \(c_{2}\gets c_{2}-1;\mathbf{goto}\ 1\) **else goto** \(2\)
The last three instructions of a normalized two-counter program are:
* \(n\!-\!2\): **if** \(c_{1}>0\) **then** \(c_{1}\gets c_{1}-1;\mathbf{goto}\ n\!-\!2\) **else goto** \(n\!-\!1\)
* \(n\!-\!1\): **if** \(c_{2}>0\) **then** \(c_{2}\gets c_{2}-1;\mathbf{goto}\ n\!-\!1\) **else goto** \(n\)
* \(n\): **halt**
For \(1<i<n-2\), the labels occurring in instruction \(i\) belong to \(\{0,\dots,n-2\}\). In a normalized two-counter program \(\mathcal{P}\), given any initial values \(v_{1},v_{2}\), \(\mathcal{P}\) halts with \(v_{1},v_{2}\) if and only if \(\mathcal{P}\) halts with initial values \(0,0\). Moreover, when \(\mathcal{P}\) halts, the values of the counters are null. The halting problem for normalized two-counter programs is also undecidable (see Lemma 25 in the Appendix).

We now show that decisiveness is undecidable even for _static_ pCMs, i.e., those with only _static_ weights: for all \(t\in\Delta\), \(W(t)\) is a constant function.

**Theorem 12**.: _Decisiveness w.r.t. a finite set is undecidable in (static) pCMs._

## 4 Probabilistic safe one-counter machines

We now study decisiveness for pCMs that only have one counter, denoted \(c\). We also restrict \(\Delta_{1}\): a one-counter pCM is _safe_ if for all \(t\in\Delta_{1}\), \((\mathbf{Pre}(t),\mathbf{Post}(t))\in\{1\}\times\{0,1,2\}\). In words, in a safe one-counter pCM, a transition of \(\Delta_{1}\) requires the counter to be positive and may leave it unchanged, increment it, or decrement it by one unit.

### One-state and one-counter pCM

We first prove that decisiveness is undecidable for the probabilistic version of one-state and one-counter machines. Then we show how to restrict the weight functions and \(\Delta_{1}\) such that this property becomes decidable. Both proofs make use, in an implicit way, of the relationship between decisiveness and recurrence stated in Proposition 9.
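The reset behaviour of a normalized program is easy to exercise with a toy interpreter. The sketch below is our own encoding (0-indexed counters, and a fuel bound since halting is of course undecidable in general); it runs the two reset instructions on arbitrary initial values:

```python
def run(prog, c, fuel=10_000):
    """Tiny interpreter for two-counter programs.  Instructions are encoded
    as ('inc', j, goto), ('test', j, goto_pos, goto_zero) and ('halt',).
    Returns the final counters, or None when the fuel budget is exhausted."""
    i = 0
    while fuel > 0:
        fuel -= 1
        ins = prog[i]
        if ins[0] == 'halt':
            return c
        if ins[0] == 'inc':
            c[ins[1]] += 1
            i = ins[2]
        elif c[ins[1]] > 0:       # ('test', j, goto_pos, goto_zero)
            c[ins[1]] -= 1
            i = ins[2]
        else:
            i = ins[3]
    return None

# The reset prologue of a normalized program: instructions 0 and 1 empty the
# counters, so any initial values lead to the same continuation.
prologue = [('test', 0, 0, 1), ('test', 1, 1, 2), ('halt',)]
assert run(prologue, [5, 3]) == [0, 0]
```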
**Theorem 13**.: _The decisiveness problem for safe one-counter pCMs is undecidable, even with a single state._

Proof.: We reduce Hilbert's tenth problem to the decisiveness problem. Let \(P\in\mathbb{Z}[X_{1},\ldots X_{k}]\) be an integer polynomial with \(k\) variables. This problem asks whether there exist \(n_{1},\ldots,n_{k}\in\mathbb{N}\) such that \(P(n_{1},\ldots,n_{k})=0\). We define \(\mathcal{C}\) as follows. There are two transitions, both in \(\Delta_{1}\):
* _dec_ with \(\mathbf{Pre}(dec)=1\) and \(\mathbf{Post}(dec)=0\);
* _inc_ with \(\mathbf{Pre}(inc)=0\) and \(\mathbf{Post}(inc)=1\).
The weight of \(dec\) is the constant function \(1\), i.e., \(W(dec,n)=f(n)=1\), while the weight of \(inc\) is defined by the following (non-polynomial) function: \[W(inc,n)=g(n)=\min(P^{2}(n_{1},\ldots,n_{k})+1\mid n_{1}+\ldots+n_{k}\leq n)\] This function is obviously computable. Let us study the decisiveness of \(\mathcal{M}_{\mathcal{C}}\) w.r.t. \(s_{0}=1\) and \(A=\{0\}\). Observe that \(\mathcal{M}_{\mathcal{C}}\) is the Markov chain \(\mathcal{M}_{1}\) of Figure 1. Let us recall that in \(\mathcal{M}_{1}\), the probability to reach \(0\) from \(i\) is \(1\) iff \(\sum_{n\in\mathbb{N}}\prod_{1\leq m<n}\rho_{m}=\infty\), and otherwise it is equal to \(\frac{\sum_{i\leq n}\prod_{1\leq m<n}\rho_{m}}{\sum_{n\in\mathbb{N}}\prod_{1\leq m<n}\rho_{m}}\) with \(\rho_{m}=\frac{1-p_{m}}{p_{m}}\).

\(\bullet\) Assume there exist \(n_{1},\ldots,n_{k}\in\mathbb{N}\) s.t. \(P(n_{1},\ldots,n_{k})=0\). Let \(n_{0}=n_{1}+\cdots+n_{k}\). Thus for all \(n\geq n_{0}\), \(W(inc,n)=1\), which implies that \(p_{\mathcal{C}}(n,n-1)=p_{\mathcal{C}}(n,n+1)=\frac{1}{2}\). Thus, due to the results on \(\mathcal{M}_{1}\), from any state \(n\) one reaches \(0\) almost surely, and so \(\mathcal{M}_{\mathcal{C}}\) is decisive.

\(\bullet\) Assume there do not exist \(n_{1},\ldots,n_{k}\in\mathbb{N}\) s.t. \(P(n_{1},\ldots,n_{k})=0\). Then for all \(n\in\mathbb{N}\), \(W(inc,n)\geq 2\), implying that in \(\mathcal{M}_{1}\), \(\rho_{n}\leq\frac{1}{2}\). Thus \(\mathcal{M}_{\mathcal{C}}\) is not decisive.

Due to the negative result for single-state and single-counter pCMs stated in Theorem 13, it is clear that one must restrict the possible weight functions.

**Theorem 14**.: _The decisiveness problem w.r.t. \(s_{0}\) and finite \(A\) for a polynomial safe one-counter pCM \(\mathcal{C}\) with a single state is decidable in linear time._

### Homogeneous one-counter machines

Let \(\mathcal{C}\) be a one-counter safe pCM. For all \(q\in Q\), let \(S_{q,1}=\sum_{t=(q,\mathbf{Pre}(t),\mathbf{Post}(t),q_{t}^{+})\in\Delta_{1}}W(t)\) and let \(\mathbf{M}_{\mathcal{C}}\) be the \(Q\times Q\) matrix defined by: \[\mathbf{M}_{\mathcal{C}}[q,q^{\prime}]=\frac{\sum_{t=(q,\mathbf{Pre}(t),\mathbf{Post}(t),q^{\prime})\in\Delta_{1}}W(t)}{S_{q,1}}\] (thus \(\mathbf{M}_{\mathcal{C}}[q,q^{\prime}]\) is a function from \(\mathbb{N}\) to \(\mathbb{Q}_{\geq 0}\)).

**Definition** (pHM).: _A probabilistic homogeneous machine (pHM) is a probabilistic safe one-counter machine \(\mathcal{C}=(Q,\Delta,W)\) where:_
* _For all \(t\in\Delta\), \(W(t)\) is a positive polynomial in \(\mathbb{N}[X]\);_
* _For all \(q,q^{\prime}\in Q\), \(\mathbf{M}_{\mathcal{C}}[q,q^{\prime}]\) is constant._

Observe that by definition, in a pHM, \(\mathbf{M}_{\mathcal{C}}\) is a transition matrix. For instance, if the two \(\Delta_{1}\) transitions outgoing from \(q\) lead to \(q^{\prime}\) and \(q^{\prime\prime}\) and both have weight \(X^{2}+X+1\), then \(\mathbf{M}_{\mathcal{C}}[q,q^{\prime}]=\mathbf{M}_{\mathcal{C}}[q,q^{\prime\prime}]=\frac{X^{2}+X+1}{2(X^{2}+X+1)}=\frac{1}{2}\).
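The homogeneity condition is easy to test mechanically for polynomial weights: the ratio of two polynomials is constant exactly when the numerator is a scalar multiple of the denominator. A small sketch (our encoding, coefficients listed from the constant term up) reproduces the \(\frac{1}{2}\) computation above:

```python
from fractions import Fraction

def constant_ratio(num, den):
    """Return c if num(X)/den(X) = c for all X, i.e. num = c * den as
    polynomials (coefficient lists, constant term first); otherwise None."""
    num = [Fraction(a) for a in num]
    den = [Fraction(a) for a in den]
    while num and num[-1] == 0:
        num.pop()
    while den and den[-1] == 0:
        den.pop()
    if not num:
        return Fraction(0)
    if len(num) != len(den):
        return None
    c = num[-1] / den[-1]
    return c if num == [c * a for a in den] else None

# Two Delta_1 transitions out of q, both of weight X^2 + X + 1:
# M_C[q, q'] = (X^2+X+1) / (2X^2+2X+2) = 1/2, so the pHM condition holds.
w = [1, 1, 1]       # X^2 + X + 1
s_q1 = [2, 2, 2]    # S_{q,1} = 2(X^2 + X + 1)
assert constant_ratio(w, s_q1) == Fraction(1, 2)
```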
The family \((r_{q})_{q\in Q}\) of the next proposition is independent of the function \(W\) and is associated with the qualitative behaviour of \(\mathcal{C}\).

**Proposition**.: _Let \(\mathcal{C}\) be a pHM. Then one can compute in polynomial time a family \((r_{q})_{q\in Q}\) such that for all \(q\), \(r_{q}\in\{0,\ldots,|Q|-1\}\cup\{\infty\}\), and \(Q\times\{0\}\) is reachable from \((q,k)\) iff \(k\leq r_{q}\)._

**Theorem**.: _Let \(\mathcal{C}\) be a pHM such that \(\mathbf{M}_{\mathcal{C}}\) is irreducible. Then the decisiveness problem of \(\mathcal{C}\) w.r.t. \(s_{0}=(q,n)\in Q\times\mathbb{N}\) and \(A=Q\times\{0\}\) is decidable in polynomial time._

Proof.: With the notations of the previous proposition, assume that there exist \(q\) with \(r_{q}<\infty\) and \(q^{\prime}\) with \(r_{q^{\prime}}=\infty\). Since \(\mathbf{M}_{\mathcal{C}}\) is irreducible, there is a sequence of transitions in \(\Delta_{1}\), \(q_{0}\xrightarrow{1,v_{1}}q_{1}\cdots\xrightarrow{1,v_{m}}q_{m}\), with \(q_{0}=q\) and \(q_{m}=q^{\prime}\). Let \(sv=\min(\sum_{i\leq j}(v_{i}-1)\mid j\leq m)\) and pick some \(k>\max(r_{q},-sv)\). Then there is a path in \(\mathcal{M}_{\mathcal{C}}\) from \((q,k)\) to \((q^{\prime},k+\sum_{i\leq m}(v_{i}-1))\), which yields a contradiction since \((q,k)\) cannot reach \(Q\times\{0\}\) while \((q^{\prime},k+\sum_{i\leq m}(v_{i}-1))\) can reach it. Thus either (1) for all \(q\in Q\), \(r_{q}<\infty\), or (2) for all \(q\in Q\), \(r_{q}=\infty\).

\(\bullet\) First assume that for all \(q\in Q\), \(r_{q}<\infty\). Thus for all \(k>r_{q}\), \((q,k)\) cannot reach \(Q\times\{0\}\), and thus \(\mathcal{C}\) is decisive w.r.t. \((q,k)\) and \(Q\times\{0\}\). Now consider a configuration \((q,k)\) with \(k\leq r_{q}\). By definition there is a positive probability, say \(p_{(q,k)}\), to reach \(Q\times\{0\}\) from \((q,k)\). Let \(p_{\min}=\min(p_{(q,k)}\mid q\in Q\wedge k\leq r_{q})\). Then for all \((q,k)\) with \(k\leq r_{q}\), there is a probability at least \(p_{\min}\) to reach either \(Q\times\{0\}\) or \(\{(q,k)\mid q\in Q\wedge k>r_{q}\}\) by a path of length \(\ell=\sum_{q\in Q}(r_{q}+1)\). This implies that after \(n\ell\) transitions the probability to have reached either \(Q\times\{0\}\) or \(\{(q,k)\mid q\in Q\wedge k>r_{q}\}\) is at least \(1-(1-p_{\min})^{n}\). Thus \(\mathcal{C}\) is decisive w.r.t. \((q,k)\) and \(Q\times\{0\}\). Summarizing, for all \((q,k)\), \(\mathcal{C}\) is decisive w.r.t. \((q,k)\) and \(Q\times\{0\}\).

\(\bullet\) Now assume that for all \((q,k)\in Q\times\mathbb{N}\), \(Q\times\{0\}\) is reachable from \((q,k)\). Thus the decisiveness problem boils down to the almost sure reachability of \(Q\times\{0\}\). Since the target of decisiveness is \(Q\times\{0\}\), we can arbitrarily set up the outgoing transitions of these states (i.e., \(\Delta_{0}\)) without changing the decisiveness problem. So we choose these transitions and associated probabilities as follows. For all \(q,q^{\prime}\) such that \(\mathbf{M}_{\mathcal{C}}[q,q^{\prime}]>0\), there is a transition \(t=q\stackrel{{c,0}}{{\longrightarrow}}q^{\prime}\) with \(W(t)=\mathbf{M}_{\mathcal{C}}[q,q^{\prime}]\). Since \(\mathbf{M}_{\mathcal{C}}\) is irreducible, there is a unique invariant distribution \(\pi_{\infty}\) (i.e., \(\pi_{\infty}\mathbf{M}_{\mathcal{C}}=\pi_{\infty}\)) fulfilling \(\pi_{\infty}(q)>0\) for all \(q\in Q\). Let \((Q_{n},N_{n})_{n\in\mathbb{N}}\) be the stochastic process defined by \(\mathcal{M}_{\mathcal{C}}\) with \(N_{0}=k\) for some \(k\) and, for all \(q\in Q\), \(\mathbf{Pr}(Q_{0}=q)=\pi_{\infty}(q)\).
Due to the invariance of \(\pi_{\infty}\) and the choice of transitions for \(Q\times\{0\}\), one gets by induction that for all \(n\in\mathbb{N}\):
* \(\mathbf{Pr}(Q_{n}=q)=\pi_{\infty}(q)\);
* for all \(k>0\), \(\mathbf{Pr}(N_{n+1}=k+v-1\mid N_{n}=k)=\sum_{q\in Q}\pi_{\infty}(q)\frac{\sum_{t=(q,1,v,q^{\prime})\in\Delta_{1}}W(t,k)}{S_{q,1}(k)}=\frac{\sum_{q\in Q}\pi_{\infty}(q)\prod_{q^{\prime}\neq q}S_{q^{\prime},1}(k)\sum_{t=(q,1,v,q^{\prime})\in\Delta_{1}}W(t,k)}{\prod_{q^{\prime}\in Q}S_{q^{\prime},1}(k)}\);
* \(\mathbf{Pr}(N_{n+1}=0\mid N_{n}=0)=1\).

For \(v\in\{-1,0,1\}\), let us define the polynomial \(P_{v}\) by: \[\sum_{q\in Q}\pi_{\infty}(q)\prod_{q^{\prime}\neq q}S_{q^{\prime},1}\sum_{t=(q,1,v+1,q^{\prime})\in\Delta_{1}}W(t)\] Due to the previous observations, the stochastic process \((N_{n})_{n\in\mathbb{N}}\) is the birth-death Markov chain on \(\mathbb{N}\) in which \(0\) is absorbing and, from every \(k>0\), the moves to \(k-1\), \(k\) and \(k+1\) carry the respective weights \(P_{-1}(k)\), \(P_{0}(k)\) and \(P_{1}(k)\), to be normalized.

Using our hypothesis about reachability, \(P_{-1}\) is a positive polynomial (while \(P_{1}\) could be null), and thus the decisiveness of this Markov chain w.r.t. state \(0\) is equivalent to the decisiveness of the chain of Figure 1 with \(f=P_{1}\) and \(g=P_{-1}\). Due to Theorem 14, this problem is decidable (in linear time), and either (1) for all \(k\in\mathbb{N}\) this Markov chain is decisive w.r.t. \(k\) and \(0\), or (2) for all \(k>0\) this Markov chain is not decisive w.r.t. \(k\) and \(0\). Let us analyze the two cases w.r.t. the Markov chain of the pHM.

**Case (1)** In the stochastic process \((Q_{n},N_{n})_{n\in\mathbb{N}}\), the initial distribution has a positive probability for \((q,k)\) for all \(q\in Q\). This implies that for all \(q\), \(\mathcal{C}\) is decisive w.r.t. \((q,k)\) and \(Q\times\{0\}\). Since \(k\) was arbitrary, this means that for all \((q,k)\), \(\mathcal{C}\) is decisive w.r.t. \((q,k)\) and \(Q\times\{0\}\).

**Case (2)** Choosing \(k=1\) and applying the same reasoning as for the previous case, there is some \((q,1)\) for which \(\mathcal{C}\) is not decisive (and so for all \((q,k^{\prime})\) with \(k^{\prime}>0\)). Let \(q^{\prime}\in Q\); since \(\mathbf{M}_{\mathcal{C}}\) is irreducible, there is a (shortest) sequence of transitions in \(\Delta_{1}\) leading from \(q^{\prime}\) to \(q\) whose length is at most \(|Q|-1\). Thus for all \((q^{\prime},k^{\prime})\) with \(k^{\prime}\geq|Q|\), there is a positive probability to reach some \((q,k)\) with \(k>0\). Thus \((q^{\prime},k^{\prime})\) is not decisive. Now let \((q^{\prime},k^{\prime})\) with \(k^{\prime}<|Q|\). Then we compute by a breadth-first exploration the configurations reachable from \((q^{\prime},k^{\prime})\) until either (1) one reaches some \((q^{\prime\prime},k^{\prime\prime})\) with \(k^{\prime\prime}\geq|Q|\) or (2) the full (finite) reachability set is computed. In the first case, there is a positive probability to reach some \((q^{\prime\prime},k^{\prime\prime})\) with \(k^{\prime\prime}\geq|Q|\), and from \((q^{\prime\prime},k^{\prime\prime})\) some \((q,k)\) with \(k>0\), and so \((q^{\prime},k^{\prime})\) is not decisive. In the second case, it means that the reachable set is finite, and from any configuration of this set there is a positive probability to reach \(Q\times\{0\}\) by a path of length at most the size of this set. Thus almost surely \(Q\times\{0\}\) will be reached and \((q^{\prime},k^{\prime})\) is decisive.

## 5 Probabilistic Petri nets

We now introduce probabilistic Petri nets as a subclass of pCM.
**Definition** (pPN).: _A probabilistic Petri net (pPN) \(\mathcal{N}\) is a pCM \(\mathcal{N}=(Q,P,\Delta,W)\) where \(Q\) is a singleton and \(\Delta_{0}=\emptyset\)._

**Notations.** Since there is a unique control state in a pPN, a configuration in a pPN is reduced to \(\mathbf{m}\in\mathbb{N}^{P}\) and is called a _marking_. A pair \((\mathcal{N},\mathbf{m}_{0})\), where \(\mathcal{N}\) is a pPN and \(\mathbf{m}_{0}\in\mathbb{N}^{P}\) is some (initial) marking, is called a _marked pPN_.

In previous works [1, 7] about pPNs, the weight function \(W\) is a _static_ one, i.e., a function from \(\Delta\) to \(\mathbb{N}^{*}\). As above, we call these models _static_ probabilistic Petri nets. Static-probabilistic VASS (and so pPNs) are _decisive_ with respect to _upward closed sets_ (Corollary 4.4 in [1]), but they may not be decisive w.r.t. an arbitrary finite set. Surprisingly, the decisiveness problem for Petri nets or VASS seems not to have been studied. We establish below that even for polynomial pPNs, decisiveness is undecidable.

Figure 4: the pattern for **if** \(c_{j}>0\) **then** \(c_{j}\gets c_{j}-1;\mathbf{goto}\ i^{\prime}\) **else goto** \(i^{\prime\prime}\)

Figure 3: halt instruction and cleaning stage

**Theorem 20**.: _The decisiveness problem of polynomial pPNs w.r.t. a finite or upward closed set is undecidable._

Proof.: We reduce the halting problem for normalized two-counter programs to the decisiveness problem of pPNs. Let \(\mathcal{C}\) be a normalized two-counter program with an instruction set \(\{0,\dots,n\}\). The corresponding marked pPN \((\mathcal{N}_{\mathcal{C}},\mathbf{m}_{0})\) is built as follows. Its set of places is \(P=\{p_{i}\mid 0\leq i\leq n\}\cup\{q_{i}\mid i\text{ is a test instruction}\}\cup\{c_{j}\mid 1\leq j\leq 2\}\cup\{sim,stop\}\). The initial marking is \(\mathbf{m}_{0}=p_{0}\). The set \(\Delta\) of transitions is defined by a pattern per type of instruction. The pattern for the incrementation instruction is depicted in Figure 2. The pattern for the test instruction is depicted in Figure 4. The pattern for the halt instruction is depicted in Figure 3, with in addition a _cleaning stage_.

Before specifying the weight function \(W\), let us describe the qualitative behaviour of this net. \((\mathcal{N}_{\mathcal{C}},\mathbf{m}_{0})\) repeatedly performs a _weak_ simulation of \(\mathcal{C}\). As usual, since the zero test does not exist in Petri nets, during a test instruction \(i\) the simulation can follow the zero branch while the corresponding counter is non-null (transitions \(begZ_{i}\) and \(endZ_{i}\)). If the net has cheated, then with transition \(rm_{i}\) it can remove tokens from \(sim\) (two at a time). In addition, when the instruction is not **halt**, instead of simulating it the net can _exit_ the simulation by putting a token in \(stop\), after which it removes tokens from the counter places, including the simulation counter, as long as they are not empty. The simulation of the **halt** instruction consists in restarting the simulation and incrementing the simulation counter \(sim\).

Thus the set of reachable markings is included in the following set of markings: \(\{p_{i}+xc_{1}+yc_{2}+zsim\mid 0\leq i\leq n,\ x,y,z\in\mathbb{N}\}\cup\{q_{i}+xc_{1}+yc_{2}+zsim\mid i\text{ is a test instruction},\ x,y,z\in\mathbb{N}\}\cup\{stop+xc_{1}+yc_{2}+zsim\mid x,y,z\in\mathbb{N}\}\). By construction, the marking \(stop\) is always reachable. We will establish that \(\mathcal{N}_{\mathcal{C}}\) is decisive w.r.t.
\(\mathbf{m}_{0}\) and \(\{stop\}\) if and only if \(\mathcal{C}\) does not halt.

Let us specify the weight function. For any incrementation instruction \(i\), \(W(inc_{i},\mathbf{m})=\mathbf{m}(sim)^{2}+1\). For any test instruction \(i\), \(W(begZ_{i},\mathbf{m})=\mathbf{m}(sim)^{2}+1\), \(W(dec_{i},\mathbf{m})=2\mathbf{m}(sim)^{4}+2\) and \(W(rm_{i},\mathbf{m})=2\). All other weights are equal to \(1\).

\(\bullet\) Assume that \(\mathcal{C}\) halts and consider its execution \(\sigma_{\mathcal{C}}\) with initial values \((0,0)\). Let \(\ell=|\sigma_{\mathcal{C}}|\) be the length of this execution. Consider now the infinite sequence \(\sigma\) of \((\mathcal{N}_{\mathcal{C}},\mathbf{m}_{0})\) that repeats the correct simulation of this execution forever. The infinite sequence \(\sigma\) never marks the place \(stop\). We now show that the probability of \(\sigma\) is non-null, implying that \(\mathcal{N}_{\mathcal{C}}\) is not decisive. After every simulation of \(\sigma_{\mathcal{C}}\), the marking of \(sim\) is incremented, and it is never decremented since (due to the correctness of the simulation) every time a transition \(begZ_{i}\) is fired, the corresponding counter place \(c_{j}\) is unmarked, which forbids the firing of \(rm_{i}\). So during the \((n+1)^{th}\) simulation of \(\sigma_{\mathcal{C}}\), the marking of \(sim\) is equal to \(n\). So consider the probability of the correct simulation of an instruction \(i\) during the \((n+1)^{th}\) simulation.
* If \(i\) is an incrementation, then the weight of \(inc_{i}\) is \(n^{2}+1\) and the weight of \(exit_{i}\) is \(1\). So the probability of a correct simulation is \(\frac{n^{2}+1}{n^{2}+2}=1-\frac{1}{n^{2}+2}\geq e^{-\frac{2}{n^{2}+2}}\) (here and below we use \(1-x\geq e^{-2x}\) for \(0\leq x\leq\frac{1}{2}\)).
* If \(i\) is a test of \(c_{j}\) and the marking of \(c_{j}\) is non-null, then the weight of \(dec_{i}\) is \(2n^{4}+2\), the weight of \(begZ_{i}\) is \(n^{2}+1\) and the weight of \(exit_{i}\) is \(1\). So the probability of a correct simulation is \(\frac{2n^{4}+2}{2n^{4}+n^{2}+4}\geq\frac{2n^{4}+2}{2n^{4}+2n^{2}+4}=\frac{n^{2}+1}{n^{2}+2}=1-\frac{1}{n^{2}+2}\geq e^{-\frac{2}{n^{2}+2}}\).
* If \(i\) is a test of \(c_{j}\) and the marking of \(c_{j}\) is null, then the weight of \(begZ_{i}\) is \(n^{2}+1\) and the weight of \(exit_{i}\) is \(1\). So the probability of a correct simulation is \(\frac{n^{2}+1}{n^{2}+2}=1-\frac{1}{n^{2}+2}\geq e^{-\frac{2}{n^{2}+2}}\).
So the probability of the correct simulation during the \((n+1)^{th}\) simulation is at least \((e^{-\frac{2}{n^{2}+2}})^{\ell}=e^{-\frac{2\ell}{n^{2}+2}}\). Hence the probability of \(\sigma\) is at least \(\prod_{n\in\mathbb{N}}e^{-\frac{2\ell}{n^{2}+2}}=e^{-\sum_{n\in\mathbb{N}}\frac{2\ell}{n^{2}+2}}>0\), as the sum in the exponent converges.

\(\bullet\) Assume that \(\mathcal{C}\) does not halt (and so does not halt for any initial values of the counters). We partition the set of infinite paths into a countable family of subsets and prove that for all of them the probability to forever avoid marking \(stop\) is null, which will imply that \(\mathcal{N}_{\mathcal{C}}\) is decisive. The partition is based on \(k\in\mathbb{N}\cup\{\infty\}\), the number of firings of \(again\) in the path.

**Case \(k<\infty\)**. Let \(\sigma\) be such a path and consider the suffix of \(\sigma\) after the last firing of \(again\). The marking of \(sim\) is at most \(k\) and can only decrease along the suffix. Consider a simulation of an increment instruction \(i\).
The weight of \(inc_{i}\) is at most \(k^{2}+1\) and the weight of \(exit_{i}\) is \(1\). So the probability of avoiding \(exit_{i}\) is at most \(\frac{k^{2}+1}{k^{2}+2}=1-\frac{1}{k^{2}+2}\leq e^{-\frac{1}{k^{2}+2}}\). Consider the simulation of a test instruction \(i\). Then the weight of \(dec_{i}\) is at most \(2k^{4}+2\), the weight of \(begZ_{i}\) is at most \(k^{2}+1\) and the weight of \(exit_{i}\) is \(1\). So the probability of avoiding \(exit_{i}\) is at most \(\frac{2k^{4}+k^{2}+2}{2k^{4}+k^{2}+4}\leq\frac{4k^{4}+1}{4k^{4}+2}=1-\frac{1}{4k^{4}+2}\leq e^{-\frac{1}{4k^{4}+2}}\). Thus after \(n\) simulations of instructions in the suffix, the probability of avoiding marking \(stop\) is at most \(e^{-\frac{n}{4k^{4}+2}}\). Letting \(n\) go to infinity yields the result.

**Case \(k=\infty\)**. We first show that almost surely there will be an infinite number of simulations of \(\mathcal{C}\) with the marking of \(sim\) at most \(1\). Observe that all these simulations are incorrect, since they mark \(p_{n}\) while \(\mathcal{C}\) does not halt. So at least once per simulation some place \(q_{i}\) and the corresponding counter \(c_{j}\) must be marked, and if the marking of \(sim\) is at least \(2\), then with probability \(\frac{2}{3}\) two tokens of \(sim\) are removed (recall that the weight of \(rm_{i}\) is \(2\) and the weight of \(endZ_{i}\) is \(1\)). Thus once the marking of \(sim\) is greater than \(1\), considering the successive random markings of \(sim\) after the firing of \(again\) until it possibly reaches \(1\), this behaviour is _stochastically bounded_ by a random walk in which one reaches the state \(1\) with probability \(1\). This establishes that almost surely there will be an infinite number of simulations of \(\mathcal{C}\) with the marking of \(sim\) at most \(1\). Such a simulation must simulate at least one instruction. If this instruction is an incrementation, the exiting probability is at least \(\frac{1}{3}\); if it is a test instruction, the exiting probability is at least \(\frac{1}{7}\). Thus after \(n\) such simulations of \(\mathcal{C}\), the probability of avoiding marking \(stop\) is at most \((\frac{6}{7})^{n}\). Letting \(n\) go to infinity yields the result.

Observe that the result remains true when substituting the singleton \(\{stop\}\) by the set of markings greater than or equal to \(stop\). We thus deduce that decisiveness of extended (probabilistic) Petri nets is undecidable: in particular for Reset Petri nets [12], Post-Self-Modifying Petri nets [20], Recursive Petri nets, etc.

The language of a marked Petri net \((\mathcal{N},\mathbf{m}_{0})\) is defined by \(\mathcal{L}(\mathcal{N},\mathbf{m}_{0})=\{\sigma\in\Delta^{*}\mid\mathbf{m}_{0}\xrightarrow{\sigma}\}\). \((\mathcal{N},\mathbf{m}_{0})\) is _regular_ if \(\mathcal{L}(\mathcal{N},\mathbf{m}_{0})\) is regular. Given a marked Petri net \((\mathcal{N},\mathbf{m}_{0})\), the problem that asks whether it is regular is decidable [16, 21] and belongs to \(\mathsf{EXPSPACE}\) [11]. For establishing the next theorem, we only need the following result: there exists a computable bound \(B(\mathcal{N},\mathbf{m}_{0})\) such that for all markings \(\mathbf{m}_{1}\) reachable from \(\mathbf{m}_{0}\) and all markings \(\mathbf{m}_{2}\) with some \(p\in P\) fulfilling \(\mathbf{m}_{2}(p)+B(\mathcal{N},\mathbf{m}_{0})<\mathbf{m}_{1}(p)\), \(\mathbf{m}_{2}\) is unreachable from \(\mathbf{m}_{1}\) ([16]).
**Theorem 22**.: _Let \((\mathcal{N},\mathbf{m}_{0})\) be a regular marked pPN and \(\mathbf{m}_{1}\) be a marking. Then \((\mathcal{N},\mathbf{m}_{0})\) is decisive with respect to \(\mathbf{m}_{0}\) and \(\{\mathbf{m}_{1}\}\)._

Proof.: Consider the following algorithm that, after computing \(B(\mathcal{N},\mathbf{m}_{0})\), builds the following finite graph:
* Push \(\mathbf{m}_{0}\) on the stack.
* While the stack is not empty, pop from the stack some marking \(\mathbf{m}\). Compute the set of transition firings \(\mathbf{m}\xrightarrow{t}\mathbf{m}^{\prime}\). Push \(\mathbf{m}^{\prime}\) on the stack if:
  1. \(\mathbf{m}^{\prime}\) is not already present in the graph,
  2. and \(\mathbf{m}^{\prime}\neq\mathbf{m}_{1}\),
  3. and for all \(p\in P\), \(\mathbf{m}_{1}(p)+B(\mathcal{N},\mathbf{m}_{0})\geq\mathbf{m}^{\prime}(p)\).

Due to the third condition, this algorithm terminates. From the result above, if \(\mathbf{m}_{1}\) does not occur in the graph, then \(\mathbf{m}_{1}\) is unreachable from \(\mathbf{m}_{0}\) and thus \(\mathcal{N}\) is decisive w.r.t. \(\mathbf{m}_{1}\). Otherwise, considering the weights specified by \(W\) and adding loops for states without successors, this graph can be viewed as a finite Markov chain, in which some bottom strongly connected component (BSCC) is almost surely reached. There are three possible cases: (1) the BSCC consisting of \(\mathbf{m}_{1}\); (2) a BSCC consisting of a single marking \(\mathbf{m}\) for which there exists some \(p\in P\) fulfilling \(\mathbf{m}_{1}(p)+B(\mathcal{N},\mathbf{m}_{0})<\mathbf{m}(p)\), and thus from which \(\mathbf{m}_{1}\) is unreachable; or (3) a BSCC that is also a BSCC of \(\mathcal{M}_{\mathcal{N}}\), and thus from which one cannot reach \(\mathbf{m}_{1}\). This establishes that \(\mathcal{N}\) is decisive w.r.t. \(\mathbf{m}_{1}\).

**Observation.** In this particular case, instead of using Algorithm 1 to bracket the reachability probability, one can use the Markov chain of the proof to compute this probability exactly.

## 6 Conclusion and perspectives

We have studied the decidability of decisiveness with respect to several subclasses of probabilistic counter machines. The results are summarized in the following table; when \(A\) is not mentioned, it means that \(A\) is finite.

| model | constant | polynomial | general |
|---|---|---|---|
| pHM | D | D [Th 14] | U [Th 13], even with a single state |
| pPN | ? | U, also w.r.t. upward closed sets [Th 20] | U, but D when regular [Th 22] |
| pCM | U [Th 12] | U | U |

In the future, apart from solving the problem left open in the above table, we plan to introduce sufficient conditions for decisiveness for models where decisiveness is undecidable, like pPNs with polynomial weights. This could have a practical impact on the modelling of real case studies. In another direction, we have established that the decisiveness and recurrence properties are closely related. It would be interesting to define a property related to transience in Markov chains. In fact, we have identified such a property, called divergence, whose definition and analysis will appear in a forthcoming paper.
2309.04186
The Prime Geodesic Theorem in Arithmetic Progressions
We address the prime geodesic theorem in arithmetic progressions, and resolve conjectures of Golovchanski\u{\i}-Smotrov (1999). In particular, we prove that the traces of closed geodesics on the modular surface do not equidistribute in the reduced residue classes of a given modulus.
Dimitrios Chatzakos, Gergely Harcos, Ikuya Kaneko
2023-09-08T08:00:00Z
http://arxiv.org/abs/2309.04186v1
# The prime geodesic theorem in arithmetic progressions

###### Abstract.

We address the prime geodesic theorem in arithmetic progressions, and resolve conjectures of Golovchanskii-Smotrov (1999). In particular, we prove that the traces of closed geodesics on the modular surface do not equidistribute in the reduced residue classes of a given modulus.

Key words and phrases: prime geodesic theorem, arithmetic progressions, Kuznetsov-Bykovskii formula

2020 Mathematics Subject Classification: 11F72 (primary); 11M36 (secondary)

The first author acknowledges the financial support from the ELKE of the University of Patras (MEDIKOS Program no. 82043). The second author was supported by the Renyi Intezet Lendulet Automorphic Research Group and NKFIH (National Research, Development and Innovation Office) grant K 143876. The third author acknowledges the support of the Masason Foundation.

## 1. Introduction

**Theorem 1.1**.: _Let \(\Gamma=\mathrm{PSL}_{2}(\mathbb{Z})\), and let \(p\geq 3\) be a prime. Then we have that_ \[\Psi_{\Gamma}(x;p,a)=\begin{cases}\dfrac{1}{p-1}\cdot x+O_{\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon})&\text{if $\left(\dfrac{a^{2}-4}{p}\right)=1$,}\\ \dfrac{1}{p+1}\cdot x+O_{\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon})&\text{if $\left(\dfrac{a^{2}-4}{p}\right)=-1$,}\\ \dfrac{p}{p^{2}-1}\cdot x+O_{p,\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon})&\text{if $\left(\dfrac{a^{2}-4}{p}\right)=0$,}\end{cases} \tag{1.2}\] _where \(\vartheta\) is a subconvex exponent for quadratic Dirichlet \(L\)-functions._

_Remark 1_.: The implied constant in the error term is independent of \(p\) when \(a\not\equiv\pm 2\,(\mathrm{mod}\,p)\).

_Remark 2_.: When \(p=3\), the first case of (1.2) is void, while the second case is covered with a stronger error term by [13, Theorem 1].

_Remark 3_.: Apart from the size of the error term, Theorem 1.1 resolves [13, Conjecture 2] for level \(N=1\). In fact, we expect that our method works for \(\Gamma=\Gamma_{0}(N)\) when \((N,p)=1\), but we restrict to \(\Gamma=\mathrm{PSL}_{2}(\mathbb{Z})\) for simplicity. Furthermore, we expect that the error term can be improved significantly by a more careful analysis (e.g. by combining the Kuznetsov-Bykovskii formula with an adelic trace formula), but we solely focused on determining the main term according to the sign of the Legendre symbol. We leave such pursuits for future work.

The method of Golovchanskii-Smotrov [13, Theorem 1] is different from ours, and they delve into properties of the norms and traces, expressing \(\Psi_{\Gamma}(x;p,a)\) as a linear combination of \(\Psi_{\Gamma_{0}(2^{k})}(x)\) for some \(k\geq 0\), for which an asymptotic formula is already known as in (1.1). For example, they derived a general linear combination of the shape \[3\Psi_{\Gamma_{0}(N)}(x)-3\Psi_{\Gamma_{0}(2N)}(x)+\Psi_{\Gamma_{0}(4N)}(x)=3\Psi_{\Gamma_{0}(N)}(x;2,1),\] from which it follows that \[\Psi_{\Gamma_{0}(N)}(x;2,1)=\frac{1}{3}\cdot x+\mathcal{E}_{\Gamma_{0}(N)}(x),\qquad\Psi_{\Gamma_{0}(N)}(x;2,0)=\frac{2}{3}\cdot x+\mathcal{E}_{\Gamma_{0}(N)}(x).\] We emphasise that the case of \(p=2\) is not contained in Theorem 1.1. The level structure that they developed is delicate, and it appears that their idea only works for some specific values of \(p\) and \(a\). Hence, some new ideas are needed to prove Theorem 1.1. An elementary counting argument implies the following result.
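Concretely, the counting is that modulo \(p\) there are exactly two residues \(a\equiv\pm 2\) with \(\left(\frac{a^{2}-4}{p}\right)=0\), while \(\left(\frac{a^{2}-4}{p}\right)=1\) for \((p-3)/2\) values of \(a\) and \(\left(\frac{a^{2}-4}{p}\right)=-1\) for \((p-1)/2\) values; combined with the densities of Theorem 1.1, this yields the main terms below. These class sizes are quick to confirm by machine; a minimal sketch (our code, Legendre symbol via Euler's criterion):

```python
def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p, via Euler's criterion."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

for p in [3, 5, 7, 11, 13, 101]:
    counts = {1: 0, -1: 0, 0: 0}
    for a in range(p):
        counts[legendre(a * a - 4, p)] += 1
    assert counts[0] == 2               # exactly a = +-2 (mod p)
    assert counts[1] == (p - 3) // 2    # density 1/(p-1) each in Theorem 1.1
    assert counts[-1] == (p - 1) // 2   # density 1/(p+1) each in Theorem 1.1
```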
**Corollary 1.2**.: _Let \(\Gamma=\mathrm{PSL}_{2}(\mathbb{Z})\), and let \(p\geq 3\) be a prime. Then we have that_

\[\sum_{\begin{subarray}{c}a\,(\mathrm{mod}\,p)\\ \left(\frac{a^{2}-4}{p}\right)=1\end{subarray}}\Psi_{\Gamma}(x;p,a)=\frac{p-3}{2(p-1)}\cdot x+O_{p,\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon}),\]

\[\sum_{\begin{subarray}{c}a\,(\mathrm{mod}\,p)\\ \left(\frac{a^{2}-4}{p}\right)=-1\end{subarray}}\Psi_{\Gamma}(x;p,a)=\frac{p-1}{2(p+1)}\cdot x+O_{p,\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon}),\]

\[\sum_{\begin{subarray}{c}a\,(\mathrm{mod}\,p)\\ \left(\frac{a^{2}-4}{p}\right)=0\end{subarray}}\Psi_{\Gamma}(x;p,a)=\frac{2p}{p^{2}-1}\cdot x+O_{p,\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon}),\]

_where \(\vartheta\) is a subconvex exponent for quadratic Dirichlet \(L\)-functions._

_Remark 4_.: Apart from the size of the error term, Corollary 1.2 resolves [11, Conjecture 1] in the case of full level \(N=1\), and again the method should work for \(\Gamma=\Gamma_{0}(N)\) when \((N,p)=1\).

### Acknowledgements

The authors thank Mikhail Nikolaevich Smotrov for sending us the preprint [11].

## 2. Key propositions

This section prepares for the proof of Theorem 1.1. Throughout, we follow [10, Section 2] closely. Let \(\Gamma=\operatorname{PSL}_{2}(\mathbb{Z})\) as before. Sarnak [12, Proposition 1.4] showed that the primitive hyperbolic conjugacy classes in \(\Gamma\) correspond bijectively to the \(\Gamma\)-equivalence classes of primitive indefinite binary quadratic forms. We recall this correspondence briefly. For a given primitive quadratic form \(ax^{2}+bxy+cy^{2}\) of discriminant \(d>0\), the automorphs are the elements

\[P(t,u)=\begin{pmatrix}\dfrac{t-bu}{2}&-cu\\ au&\dfrac{t+bu}{2}\end{pmatrix}\in\Gamma,\]

with \(t^{2}-du^{2}=4\) being a solution of the Pell equation. For \(u\) nonzero, \(P(t,u)\) is hyperbolic with norm \((t+u\sqrt{d})^{2}/4\) and trace \(t\). Because \(P(-t,-u)=P(t,u)\) holds in \(\Gamma\), we shall restrict to \(t>0\) without loss of generality. This is in harmony with our convention in Section 1 that \(\operatorname{tr}(P)>2\) for a hyperbolic element \(P\in\Gamma\). If \((t_{d},u_{d})\) denotes the fundamental solution of the Pell equation, then \(P(t_{d},u_{d})\) is a primitive hyperbolic matrix of norm \(\varepsilon_{d}^{2}\) and trace \(t_{d}\), where \(\varepsilon_{d}=(t_{d}+u_{d}\sqrt{d})/2\). Moreover, every automorph \(P(t,u)\) with \(u>0\) (resp. \(u<0\)) is a unique positive (resp. negative) integral power of \(P(t_{d},u_{d})\). Sarnak's bijection sends the quadratic form \(ax^{2}+bxy+cy^{2}\) to the conjugacy class of \(P(t_{d},u_{d})\) in \(\Gamma\). Thus, for a given discriminant \(d>0\), there are \(h(d)\) primitive hyperbolic conjugacy classes in \(\Gamma\), each of norm \(\varepsilon_{d}^{2}\) and trace \(t_{d}\), where \(h(d)\) denotes the class number of primitive forms of discriminant \(d\). Now every hyperbolic conjugacy class \(\{P\}\) can be written uniquely as \(\{P_{0}^{n}\}\) for \(n\geq 1\) and a primitive hyperbolic conjugacy class \(\{P_{0}\}\) (cf. [13]). Combining this with Sarnak's bijection described above, we obtain

\[\Psi_{\Gamma}(x;p,a)=2\sum_{\begin{subarray}{c}3\leq t\leq X\\ t\equiv a\,(\mathrm{mod}\,p)\end{subarray}}\sum_{t^{2}-du^{2}=4}h(d)\log\varepsilon_{d},\]

where \(X\) abbreviates \(\sqrt{x}+\frac{1}{\sqrt{x}}\), and \(d>0\) (resp. \(u>0\)) runs through discriminants (resp. integers).
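As an illustrative numerical sanity check of this correspondence (not part of the original argument), one can compute fundamental Pell solutions and the associated automorphs of the principal form for small discriminants in Python:

```
# Illustrative check: fundamental solution (t_d, u_d) of t^2 - d u^2 = 4 and
# the automorph P(t_d, u_d) of the principal form of discriminant d.
import math

def fundamental_solution(d):
    """Smallest t > 2, u > 0 with t^2 - d u^2 = 4 (brute force over u)."""
    u = 1
    while True:
        t_sq = 4 + d * u * u
        t = math.isqrt(t_sq)
        if t * t == t_sq:
            return t, u
        u += 1

def automorph(a, b, c, t, u):
    """Automorph P(t, u) of the form a x^2 + b x y + c y^2; entries are integers."""
    return [[(t - b * u) // 2, -c * u], [a * u, (t + b * u) // 2]]

for d in (5, 8, 12, 13):
    t, u = fundamental_solution(d)
    # principal form of discriminant d
    a, b, c = (1, 0, -d // 4) if d % 4 == 0 else (1, 1, -(d - 1) // 4)
    P = automorph(a, b, c, t, u)
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    norm = ((t + u * math.sqrt(d)) / 2) ** 2  # equals eps_d^2
    print(f"d={d}: (t_d,u_d)=({t},{u}), det={det}, "
          f"trace={P[0][0] + P[1][1]}, norm={norm:.4f}")
```

For \(d=5\) this produces \((t_{d},u_{d})=(3,1)\) with \(P=\left(\begin{smallmatrix}1&1\\ 1&2\end{smallmatrix}\right)\), determinant \(1\), trace \(3\), and norm \(\varepsilon_{5}^{2}\approx 6.854\), as predicted by the correspondence.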
The class number formula \(h(d)\log\varepsilon_{d}=\sqrt{d}L(1,\chi_{d})\), where \(\chi_{d}\) is the not necessarily primitive quadratic Dirichlet character associated to the discriminant \(d\), allows us to write \[\Psi_{\Gamma}(x;p,a)=2\sum_{\begin{subarray}{c}3\leq t\leq X\\ t\equiv a\,(\text{mod}\,p)\end{subarray}}\sum_{t^{2}-du^{2}=4}\sqrt{d}L(1, \chi_{d}). \tag{2.1}\] For an arbitrary discriminant \(\delta>0\), we define Zagier's \(L\)-series by (cf. [13, (6) & (3)]) \[L(s,\delta)\coloneqq\sum_{du^{2}=\delta}L(s,\chi_{d})u^{1-2s}=\sum_{q=1}^{ \infty}\frac{\lambda_{q}(\delta)}{q^{s}},\] where \(d>0\) (resp. \(u>0\)) runs through discriminants (resp. integers). This series admits a transparent Euler product expansion (to be discussed below), while it simplifies (2.1) as \[\Psi_{\Gamma}(x;p,a)=2\sum_{\begin{subarray}{c}3\leq t\leq X\\ t\equiv a\,(\operatorname{mod}p)\end{subarray}}\sqrt{t^{2}-4}L(1,t^{2}-4). \tag{2.2}\] If \(\delta=Dl^{2}\), where \(D>0\) is a fundamental discriminant and \(l>0\) is an integer, then we obtain the following Euler product expansion of Zagier's \(L\)-series (cf. [13, (2)]): \[L(s,\delta) =\sum_{u|l}L(s,\chi_{Dl^{2}/u^{2}})u^{1-2s}\] \[=L(s,\chi_{D})\sum_{u|l}u^{1-2s}\prod_{\mathbf{p}|\frac{l}{u}}(1-\chi _{D}(\mathbf{p}))\] \[=\prod_{\mathbf{p}}\left(\sum_{0\leq m<v_{\mathbf{p}}(l)}\mathbf{p}^{m(1-2s) }+\frac{\mathbf{p}^{v_{\mathbf{p}}(l)(1-2s)}}{1-\chi_{D}(\mathbf{p})\mathbf{p}^{-s}}\right). \tag{2.3}\] In particular, for fixed \(\delta\), the arithmetic function \(q\mapsto\lambda_{q}(\delta)\) is multiplicative. The idea of the proof of Theorem 1.1 is to group together certain values of \(t\) in (2.2) such that the corresponding Zagier \(L\)-series \(L(s,t^{2}-4)\) has a constant Euler factor at \(\mathbf{p}=p\). Thus we are led to consider \(L(s,\delta)\) with its Euler factor at \(\mathbf{p}=p\) removed: \[L^{p}(s,\delta)\coloneqq\sum_{\begin{subarray}{c}q\geq 1\\ (q,p)=1\end{subarray}}\frac{\lambda_{q}(\delta)}{q^{s}}.\] **Proposition 2.1**.: _Let \(p\geq 3\) be a prime, and let \(n\geq 1\) be an integer. Let \(r\,(\operatorname{mod}p^{n})\) be an arbitrary residue class. If \((q,p)=1\) and \(b\) denotes the squarefree part of \(q\), then_ \[\sum_{\begin{subarray}{c}3\leq t\leq X\\ t\equiv r\,(\operatorname{mod}p^{n})\end{subarray}}\lambda_{q}(t^{2}-4)= \frac{X}{p^{n}}\cdot\frac{\mu(b)}{b}+O_{\varepsilon}(q^{\frac{1}{2}+ \varepsilon}).\] Proof.: It follows from [13, Lemma 2.3] that \[\lambda_{q}(t^{2}-4)=\sum_{q_{1}^{2}q_{2}=q}\frac{1}{q_{2}}\sum_{k\,( \operatorname{mod}q_{2})}e\left(\frac{kt}{q_{2}}\right)S(k^{2},1;q_{2}).\] This leads to \[\sum_{\begin{subarray}{c}3\leq t\leq X\\ t\equiv r\,(\operatorname{mod}p^{n})\end{subarray}}\lambda_{q}(t^{2}-4)=\sum_ {q_{1}^{2}q_{2}=q}\frac{1}{q_{2}}\sum_{k\,(\operatorname{mod}q_{2})}S(k^{2},1 ;q_{2})\sum_{\begin{subarray}{c}3\leq t\leq X\\ t\equiv r\,(\operatorname{mod}p^{n})\end{subarray}}e\left(\frac{kt}{q_{2}} \right).\] When \(k\equiv 0\,(\operatorname{mod}q_{2})\), the inner sum over \(t\) is \(\frac{X}{p^{n}}+O(1)\), and \(S(0,1;q_{2})=\mu(q_{2})\), yielding the expected main term. When \(k\not\equiv 0\,(\operatorname{mod}q_{2})\), we apply the Weil bound for Kloosterman sums and the estimate \[\sum_{\begin{subarray}{c}3\leq t\leq X\\ t\equiv r\,(\mathrm{mod}\,p^{n})\end{subarray}}e\left(\frac{kt}{q_{2}}\right)\ll \left\|\frac{kp^{n}}{q_{2}}\right\|^{-1}\leq q_{2},\] where \(\left\|\cdot\right\|\) is the distance to the nearest integer. The proof is complete. 
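The main-term identity \(S(0,1;q_{2})=\mu(q_{2})\) used above can be checked numerically. The following sketch is purely illustrative and evaluates the standard Kloosterman sum \(S(m,n;c)=\sum_{x\,(\mathrm{mod}\,c),\,(x,c)=1}e((mx+n\bar{x})/c)\) directly:

```
# Illustrative numerical check (not from the paper): S(0, 1; q) = mu(q).
import cmath
from math import gcd

def kloosterman(m, n, c):
    total = 0
    for x in range(1, c + 1):
        if gcd(x, c) == 1:
            xbar = pow(x, -1, c)  # inverse of x modulo c (Python >= 3.8)
            total += cmath.exp(2j * cmath.pi * (m * x + n * xbar) / c)
    return total

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

for q in range(1, 13):
    s = kloosterman(0, 1, q)
    print(q, round(s.real), mobius(q))  # the two values agree
```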
Guided by Proposition 2.1 and (2.2), we consider the sum

\[\Psi_{\Gamma}^{\star}(x;p^{n},r)\coloneqq 2\sum_{\begin{subarray}{c}3\leq t\leq X\\ t\equiv r\,(\mathrm{mod}\,p^{n})\end{subarray}}\sqrt{t^{2}-4}L^{p}(1,t^{2}-4). \tag{2.4}\]

We shall deduce Theorem 1.1 from the following analogue of [13, Theorem 3.2]:

**Proposition 2.2**.: _Let \(p\geq 3\) be a prime, and let \(n\geq 1\) be an integer. Let \(r\,(\mathrm{mod}\,p^{n})\) be an arbitrary residue class. Then_

\[\Psi_{\Gamma}^{\star}(x+u;p^{n},r)-\Psi_{\Gamma}^{\star}(x;p^{n},r)=\frac{u}{p^{n}}+O_{\varepsilon}(u^{\frac{1}{2}}x^{\frac{1}{4}+\frac{\vartheta}{2}+\varepsilon}),\qquad\sqrt{x}\leq u\leq x. \tag{2.5}\]

Proof.: Let \(\sqrt{x}\leq u\leq x\), and set

\[X\coloneqq\sqrt{x}+\frac{1}{\sqrt{x}}\qquad\text{and}\qquad X^{\prime}\coloneqq\sqrt{x+u}+\frac{1}{\sqrt{x+u}}.\]

From the definition (2.4), it is clear that

\[\Psi_{\Gamma}^{\star}(x+u;p^{n},r)-\Psi_{\Gamma}^{\star}(x;p^{n},r)=2\sum_{\begin{subarray}{c}X<t\leq X^{\prime}\\ t\equiv r\,(\mathrm{mod}\,p^{n})\end{subarray}}\sqrt{t^{2}-4}L^{p}(1,t^{2}-4)=\big(2+O(x^{-1})\big)\sum_{\begin{subarray}{c}X<t\leq X^{\prime}\\ t\equiv r\,(\mathrm{mod}\,p^{n})\end{subarray}}tL^{p}(1,t^{2}-4),\]

because \(\sqrt{t^{2}-4}=t(1+O(t^{-2}))\). We shall approximate \(L^{p}(1,t^{2}-4)\) in terms of a suitable Dirichlet series. Let \(V\geq 1\) be a parameter to be chosen later, and let

\[\delta=t^{2}-4=Dl^{2},\]

where \(D>0\) is a fundamental discriminant and \(l>0\) is an integer. Consider

\[\Lambda_{V}^{p}(\delta)\coloneqq\sum_{\begin{subarray}{c}q\geq 1\\ (q,p)=1\end{subarray}}\frac{\lambda_{q}(\delta)}{q}e^{-\frac{q}{V}}.\]

Shifting the contour yields the expression

\[\Lambda_{V}^{p}(\delta)=\int_{(1)}L^{p}(1+s,\delta)V^{s}\Gamma(s)\frac{ds}{2\pi i}=L^{p}(1,\delta)+\int_{(-\frac{1}{2})}L^{p}(1+s,\delta)V^{s}\Gamma(s)\frac{ds}{2\pi i}.\]

On the right-hand side, for some \(A>0\),

\[L^{p}(1+s,\delta)\ll|L(1+s,\delta)|\ll_{\varepsilon}|L(1+s,\chi_{D})|l^{\varepsilon}\ll_{\varepsilon}\delta^{\vartheta+\varepsilon}|s|^{A},\qquad\mathrm{Re}\,s=-\frac{1}{2},\]

while \(\Gamma(s)\) decays exponentially. It follows that

\[L^{p}(1,\delta)=\Lambda_{V}^{p}(\delta)+O_{\varepsilon}(\delta^{\vartheta+\varepsilon}V^{-\frac{1}{2}}),\]

and hence

\[\Psi_{\Gamma}^{\star}(x+u;p^{n},r)-\Psi_{\Gamma}^{\star}(x;p^{n},r)=(2+O(x^{-1}))\sum_{\begin{subarray}{c}X<t\leq X^{\prime}\\ t\equiv r\,(\mathrm{mod}\,p^{n})\end{subarray}}t\Lambda_{V}^{p}(t^{2}-4)+O_{\varepsilon}(ux^{\vartheta+\varepsilon}V^{-\frac{1}{2}}).\]

If \((q,p)=1\) and \(q=bc^{2}\) with \(b\) squarefree, then Proposition 2.1 along with partial summation leads to

\[2\sum_{\begin{subarray}{c}X<t\leq X^{\prime}\\ t\equiv r\,(\mathrm{mod}\,p^{n})\end{subarray}}t\lambda_{q}(t^{2}-4)=\frac{u}{p^{n}}\cdot\frac{\mu(b)}{b}+O_{\varepsilon}(Xq^{\frac{1}{2}+\varepsilon}).\]

It thus follows that

\[2\sum_{\begin{subarray}{c}X<t\leq X^{\prime}\\ t\equiv r\,(\mathrm{mod}\,p^{n})\end{subarray}}t\Lambda_{V}^{p}(t^{2}-4)=\frac{u}{p^{n}}\sum_{\begin{subarray}{c}b,c\geq 1\\ (bc,p)=1\end{subarray}}\frac{\mu(b)}{b^{2}c^{2}}e^{-\frac{bc^{2}}{V}}+O_{\varepsilon}(XV^{\frac{1}{2}+\varepsilon}).\]

A standard contour shift argument gives

\[\sum_{\begin{subarray}{c}b,c\geq 1\\ (bc,p)=1\end{subarray}}\frac{\mu(b)}{b^{2}c^{2}}e^{-\frac{bc^{2}}{V}}=\int_{(1)}V^{s}\Gamma(s)\frac{\zeta^{p}(2+2s)}{\zeta^{p}(2+s)}\frac{ds}{2\pi i}=1+O(V^{-\frac{1}{2}}),\]

where \(\zeta^{p}(s)\) is the Riemann zeta function with the Euler factor at \(p\) removed.
As a result,

\[\Psi_{\Gamma}^{\star}(x+u;p^{n},r)-\Psi_{\Gamma}^{\star}(x;p^{n},r)=\frac{u}{p^{n}}+O_{\varepsilon}(x^{\frac{1}{2}}V^{\frac{1}{2}+\varepsilon}+ux^{\vartheta+\varepsilon}V^{-\frac{1}{2}}).\]

Setting \(V=ux^{-\frac{1}{2}+\vartheta}\) yields (2.5).

**Corollary 2.3**.: _Let \(p\geq 3\) be a prime, and let \(n\geq 1\) be an integer. Let \(r\,(\mathrm{mod}\,p^{n})\) be an arbitrary residue class. Then_

\[\Psi_{\Gamma}^{\star}(x;p^{n},r)=\frac{x}{p^{n}}+O_{\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon}). \tag{2.6}\]

Proof.: Setting \(u=x\) in (2.5) yields

\[\Psi_{\Gamma}^{\star}(2x;p^{n},r)-\Psi_{\Gamma}^{\star}(x;p^{n},r)=\frac{x}{p^{n}}+O_{\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon}).\]

Now (2.6) follows readily by a dyadic subdivision.

## 3. Proof of Theorem 1.1

We shall approximate the \(t\)-sum in (2.2). As before, we write

\[\delta=t^{2}-4=Dl^{2},\]

where \(D>0\) is a fundamental discriminant and \(l>0\) is an integer. If \(a\not\equiv\pm 2\,(\mathrm{mod}\,p)\), then for every \(t\) participating in (2.2), we have that \(p\nmid l\) and

\[\chi_{D}(p)=\left(\frac{D}{p}\right)=\left(\frac{Dl^{2}}{p}\right)=\left(\frac{t^{2}-4}{p}\right)=\left(\frac{a^{2}-4}{p}\right).\]

By (2.3), the corresponding Zagier \(L\)-series \(L(s,t^{2}-4)\) factorises as

\[L(s,t^{2}-4)=\left(1-\left(\frac{a^{2}-4}{p}\right)p^{-s}\right)^{-1}L^{p}(s,t^{2}-4),\]

hence by (2.2) and (2.4), it also follows that

\[\Psi_{\Gamma}(x;p,a)=\left(1-\left(\frac{a^{2}-4}{p}\right)p^{-1}\right)^{-1}\Psi_{\Gamma}^{\star}(x;p,a).\]

Applying (2.6), we obtain the first two cases of (1.2). If \(a\equiv\pm 2\,(\operatorname{mod}p)\), then we shall assume (without loss of generality) that \(a=\pm 2\). We subdivide the \(t\)-sum in (2.2) according to the exponent of \(p\) in the positive integer \(t-a\):

\[\Psi_{\Gamma}(x;p,a)=\sum_{k=1}^{\infty}\Psi_{\Gamma}(x;p,a;k),\]

where

\[\Psi_{\Gamma}(x;p,a;k)\coloneqq 2\,\sum_{\begin{subarray}{c}3\leq t\leq X\\ v_{p}(t-a)=k\end{subarray}}\sqrt{t^{2}-4}L(1,t^{2}-4).\]

We shall approximate these pieces individually. Note that \(\Psi_{\Gamma}(x;p,a;k)=0\) once \(p^{k}>X+2\), since \(p^{k}\mid t-a\) and \(0<t-a\leq X+2\). Moreover, the condition \(v_{p}(t-a)=k\) constrains \(t\) to \(p-1\) residue classes modulo \(p^{k+1}\), and it also yields \(v_{p}(t^{2}-4)=k\). If \(k=2n-1\) is odd, then \(p\mid D\) and \(v_{p}(l)=n-1\), hence by (2.3),

\[L(s,t^{2}-4)=\frac{1-p^{n(1-2s)}}{1-p^{1-2s}}L^{p}(s,t^{2}-4).\]

Using also (2.4) and (2.6), we obtain

\[\Psi_{\Gamma}(x;p,a;2n-1)=\frac{p-1}{p^{2n}}\cdot\frac{1-p^{-n}}{1-p^{-1}}\cdot x+O_{p,\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon})=(p^{1-2n}-p^{1-3n})x+O_{p,\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon}). \tag{3.1}\]

It is important that the implied constant in the error term is independent of \(n\). If \(k=2n\) is even, then \(p\nmid D\) and \(v_{p}(l)=n\), hence by (2.3),

\[L(s,t^{2}-4)=\left(\frac{1-p^{n(1-2s)}}{1-p^{1-2s}}+\frac{p^{n(1-2s)}}{1-\chi_{D}(p)p^{-s}}\right)L^{p}(s,t^{2}-4).\]

We can understand \(\chi_{D}(p)\) by writing \(t=a+p^{2n}r\). Indeed, then \(t^{2}-4=2ap^{2n}r+p^{4n}r^{2}\), hence

\[\chi_{D}(p)=\left(\frac{D}{p}\right)=\left(\frac{Dl^{2}p^{-2n}}{p}\right)=\left(\frac{2ar}{p}\right).\]

This means that among the \(p-1\) choices for \(t\,(\operatorname{mod}p^{2n+1})\), half the time \(\chi_{D}(p)\) equals \(1\) and half the time it equals \(-1\).
Using also (2.4) and (2.6), we obtain

\[\Psi_{\Gamma}(x;p,a;2n)=\frac{p-1}{p^{2n+1}}\left(\frac{1-p^{-n}}{1-p^{-1}}+\frac{(1/2)p^{-n}}{1-p^{-1}}+\frac{(1/2)p^{-n}}{1+p^{-1}}\right)x+O_{p,\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon})=\left(p^{-2n}-\frac{p^{-3n}}{p+1}\right)x+O_{p,\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon}). \tag{3.2}\]

Again, the implied constant in the error term is independent of \(n\). Summing up the pieces \(\Psi_{\Gamma}(x;p,a;2n-1)\) and \(\Psi_{\Gamma}(x;p,a;2n)\) for \(1\leq n\leq\log(X+2)\), and inserting the approximations (3.1)-(3.2), we deduce the asymptotic formula

\[\Psi_{\Gamma}(x;p,\pm 2)=c_{p}x+O_{p,\varepsilon}(x^{\frac{3}{4}+\frac{\vartheta}{2}+\varepsilon}),\]

where

\[c_{p}\coloneqq\sum_{n=1}^{\infty}\left(p^{1-2n}-p^{1-3n}+p^{-2n}-\frac{p^{-3n}}{p+1}\right)=\frac{p}{p^{2}-1}.\]

Indeed, the four geometric series evaluate to \(\frac{p}{p^{2}-1}\), \(\frac{p}{p^{3}-1}\), \(\frac{1}{p^{2}-1}\), and \(\frac{1}{(p+1)(p^{3}-1)}\) respectively, and since \(p^{3}-1=(p-1)(p^{2}+p+1)\), their signed sum collapses to \(\frac{1}{p-1}-\frac{1}{p^{2}-1}=\frac{p}{p^{2}-1}\). This is the third case of (1.2), and the proof of Theorem 1.1 is complete.

## 4. Proof of Corollary 1.2

There are \(\frac{p-3}{2}\) residues \(a\,(\operatorname{mod}p)\) such that \(a^{2}-4\) is a nonzero quadratic residue. Indeed, this occurs if and only if \(a^{2}-4\equiv b^{2}\,(\operatorname{mod}p)\) for some \(b\not\equiv 0\,(\operatorname{mod}p)\), which can be written as \((a+b)(a-b)\equiv 4\,(\operatorname{mod}p)\). Making the change of variables \(a+b\equiv 2x\,(\operatorname{mod}p)\) and \(a-b\equiv 2x^{-1}\,(\operatorname{mod}p)\), we obtain \(a\equiv x+x^{-1}\,(\operatorname{mod}p)\) and \(b\equiv x-x^{-1}\,(\operatorname{mod}p)\) with the condition \(x\not\equiv-1,0,1\,(\operatorname{mod}p)\). This restriction leaves \(\frac{p-3}{2}\) different ways to choose \(a\,(\operatorname{mod}p)\). Since there are two solutions to \((\frac{a^{2}-4}{p})=0\), we conclude that there are \(\frac{p-1}{2}\) residues \(a\,(\operatorname{mod}p)\) such that \(a^{2}-4\) is a nonzero quadratic nonresidue. Now Corollary 1.2 is immediate from Theorem 1.1.
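As an illustrative consistency check (not part of the proof), one can verify numerically both the residue counts used above and the fact that the densities in Theorem 1.1 sum to \(1\) over all residue classes:

```
# Illustrative check: count residues a (mod p) by the value of (a^2 - 4 | p)
# and verify (p-3)/2 * 1/(p-1) + (p-1)/2 * 1/(p+1) + 2 * p/(p^2-1) = 1.
from fractions import Fraction

def legendre(n, p):
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1  # Euler's criterion

for p in (3, 5, 7, 11, 13):
    counts = {1: 0, -1: 0, 0: 0}
    for a in range(p):
        counts[legendre(a * a - 4, p)] += 1
    assert counts[1] == (p - 3) // 2
    assert counts[-1] == (p - 1) // 2
    assert counts[0] == 2
    total = (counts[1] * Fraction(1, p - 1)
             + counts[-1] * Fraction(1, p + 1)
             + counts[0] * Fraction(p, p * p - 1))
    assert total == 1
    print(p, counts, total)
```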
2309.14395
Implicit Sensing in Traffic Optimization: Advanced Deep Reinforcement Learning Techniques
A sudden roadblock on highways due to many reasons such as road maintenance, accidents, and car repair is a common situation we encounter almost daily. Autonomous Vehicles (AVs) equipped with sensors that can acquire vehicle dynamics such as speed, acceleration, and location can make intelligent decisions to change lanes before reaching a roadblock. A number of literature studies have examined car-following models and lane-changing models. However, only a few studies proposed an integrated car-following and lane-changing model, which has the potential to model practical driving maneuvers. Hence, in this paper, we present an integrated car-following and lane-changing decision-control system based on Deep Reinforcement Learning (DRL) to address this issue. Specifically, we consider a scenario where sudden construction work will be carried out along a highway. We model the scenario as a Markov Decision Process (MDP) and employ the well-known DQN algorithm to train the RL agent to make the appropriate decision accordingly (i.e., either stay in the same lane or change lanes). To overcome the delay and computational requirement of DRL algorithms, we adopt an MEC-assisted architecture where the RL agents are trained on MEC servers. We utilize the highly reputable SUMO simulator and OPENAI GYM to evaluate the performance of the proposed model under two policies: the ε-greedy policy and the Boltzmann policy. The results unequivocally demonstrate that the DQN agent trained using the ε-greedy policy significantly outperforms the one trained with the Boltzmann policy.
Emanuel Figetakis, Yahuza Bello, Ahmed Refaey, Lei Lei, Medhat Moussa
2023-09-25T15:33:08Z
http://arxiv.org/abs/2309.14395v1
# Implicit Sensing in Traffic Optimization: Advanced Deep Reinforcement Learning Techniques ###### Abstract A sudden roadblock on highways due to many reasons such as road maintenance, accidents, and car repair is a common situation we encounter almost daily. Autonomous Vehicles (AVs) equipped with sensors that can acquire vehicle dynamics such as speed, acceleration, and location can make intelligent decisions to change lanes before reaching a roadblock. A number of literature studies have examined car-following models and lane-changing models. However, only a few studies proposed an integrated car-following and lane-changing model, which has the potential to model practical driving maneuvers. Hence, in this paper, we present an integrated car-following and lane-changing decision-control system based on Deep Reinforcement Learning (DRL) to address this issue. Specifically, we consider a scenario where sudden construction work will be carried out along a highway. We model the scenario as a Markov Decision Process (MDP) and employ the well-known DQN algorithm to train the RL agent to make the appropriate decision accordingly (i.e., either stay in the same lane or change lanes). To overcome the delay and computational requirement of DRL algorithms, we adopt an MEC-assisted architecture where the RL agents are trained on MEC servers. We utilize the highly reputable SUMO simulator and OPENAI GYM to evaluate the performance of the proposed model under two policies: the \(\epsilon\)-greedy policy and the Boltzmann policy. The results unequivocally demonstrate that the DQN agent trained using the \(\epsilon\)-greedy policy significantly outperforms the one trained with the Boltzmann policy. Lane-changing, car-following, MDPs, DQN, Autonomous vehicles, Intelligent transportation systems ## I Introduction In the past two decades, researchers have been working on innovative ways to make significant improvements in security and comfort in Intelligent Transportation Systems (ITS) [1]. Safety is one of the major design goals for autonomous driving, as it allows autonomous vehicles to navigate roads independently with fewer human interventions. This will eventually lead to fewer accidents compared to human drivers, who may be impaired for many reasons, such as sickness or fatigue after a long drive. A multitude of onboard sensors is used in most current autonomous driving systems to gather relevant data [2][3]. Sharing these relevant data among multiple autonomous cars is beneficial for an efficient ITS. The advent of advanced communication technologies such as Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), and Vehicle-to-everything (V2X) allows for highly efficient wireless communication within the ITS domain [4]. This is very important for autonomous vehicles to communicate with one another, with central servers, and with off-board Road Side Units (RSUs). An essential aspect of autonomous vehicle operation in complex driving scenarios is having a decision-making control module that is accurate and that can be executed almost instantly. This module is responsible for sending instructions to the vehicle's action execution module to perform numerous actions like following cars, avoiding obstacles, changing lanes, and overtaking [5]. Following the car in front and changing lanes are the two most important driving maneuvers, as they occur most frequently.
Consequently, multiple research works have emerged with various car-following models and lane-changing models [6, 7, 8, 9, 10, 11]. However, most of these research works studied and proposed car-following models and lane-changing models individually. The effects of lane-changing behavior on vehicles in the target lane have been pointed out by many scholars. Hence, the importance of joint research on vehicle driving systems that considers both lane-changing behavior and car-following behavior cannot be overstated. Recently, Machine Learning (ML) based car-following models have yielded outstanding performance as reported in the literature. Specifically, Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL) stand out among other ML approaches adopted [6, 7, 8]. Similarly, lane-changing models are seeing similar research trends in the literature (i.e., researchers are adopting RL and DRL such as Deep Deterministic Policy Gradient (DDPG) and Deep Q-Network (DQN) algorithms to solve the lane-changing maneuver problem) [9, 10, 11]. However, only a few works on integrated car-following and lane-changing models have been reported in the literature [12]. For any ITS to be effective, vehicle-to-everything (V2X) applications must be deployed to enable vehicles to exchange information with nearby vehicles and infrastructure to coordinate maneuvers. In the ITS domain, it is very common to execute several computationally intensive operations within a very short period of time in order to achieve safe and effective coordination among vehicles. This necessitates the deployment of servers with high computational resources along the road. As part of the V2X infrastructure deployment, operators install RSUs to reduce communication delays between vehicles and central servers, which improves the coverage range. In response to these concurrent delay and computational requirements, the European Telecommunications Standards Institute (ETSI) introduced the multi-access edge computing (MEC) concept [13]. With MEC, computational resources are moved closer to the vehicles. MEC-assisted ITS applications are being heavily investigated in the literature. The contributions of this paper are summarized as follows: * Develop a cohesive decision control framework for car-following and lane-changing operations, utilizing DRL techniques. This is specifically tailored for scenarios involving abrupt highway construction work. * Formulate the given scenario as an MDP and employ the Deep Q-Network (DQN) algorithm during the experimentation phase to train an RL agent in making optimal decisions. * Integrate a MEC-assisted architecture to address latency and computational demands associated with DRL algorithms. * Evaluate and contrast two distinct decision-making policies, namely Boltzmann and Epsilon Greedy, to ascertain the superior approach in enhancing traffic flow efficiency within the simulation environment. The structure of the rest of the paper is as follows: Section II presents the background knowledge and the relevant related works in the literature. Section III presents the proposed system model and its description. Section IV presents the description of the implementation of the environment and model used for training the agent. The section concludes with results and a discussion of the performance analysis of the proposed model. Section V gives concluding remarks and highlights the intended future work. ## II Background and Related Works The purpose of this section is to discuss the necessary background knowledge and the related works from the literature.
The first part discusses the concepts of RL and DRL in general. The second part dives into car-following models and lane-changing models. ### _RL and DRL_ Generally, RL describes an autonomous agent learning by interacting with its surroundings to improve its performance. A reward function \(R\) determines whether an agent performs well or not in a typical RL environment. The agent makes a decision in every state it experiences and receives a reward \(R\) reflecting the usefulness of that decision. The ultimate goal of the agent is to maximize the cumulative reward it receives over a specified period of time. As the agent learns more about the expected utility of different state-action pairs (i.e., the discounted sum of future rewards), it can gradually increase its long-term reward. Markov decision processes (MDPs) are the widely adopted standard for formalizing sequential decision problems with an RL agent [14]. A typical MDP is characterized as a tuple \(<S,A,P,R,\gamma>\) where \(S\) is the set of state space, \(A\) is the set of action space, \(P\) is the probability transition function, \(R\) is the reward function, and \(\gamma\) is a discount factor used to adjust immediate and future rewards. In short, when an agent in a state \(s\in S\) chooses an action \(a\in A\), the agent will transition to the next state \(s^{\prime}\in S\) according to the transition probability \(P\in[0,1]\) and gain a reward \(R(s,a)\). Note that the end goal of an agent is to find the optimal policy \(\pi^{*}\) (which is a mapping of the environment states to actions) that will result in the maximum expected sum of discounted rewards [14]. The agent uses this policy to determine how to behave at every step. A policy can either be deterministic or stochastic in nature. A deterministic policy typically returns the action to be taken in each state, whereas a stochastic policy returns a probability distribution over the actions (usually denoted as \(\pi(a|s)\)). Based on a given policy, a value-function or a Q-function can be defined to measure the expected rewards starting from any given state \(s\in S\) or any state-action pair \((s,a)\). For more details on the different RL methods for finding the optimal value-function, Q-function, and policy, refer to [15]. Although learning in RL is a convergent process, it can take a long time to arrive at the best policy because extensive exploration and knowledge are required, making it impractical for large-scale networks. This limits the wide adoption of RL in many applications. The advancement of deep learning paves the way for a new RL approach called DRL, which solves some of the limitations of traditional RL. By utilizing Deep Neural Networks (DNNs) as a learning tool, DRL is able to increase the learning speed and performance of RL algorithms. Consequently, DRL is being adopted in many applications such as ITS, healthcare, robotics, trading and finance, news recommendation platforms, and networking. Several algorithms such as DQN, DDPG, and Double Deep Q Network (DDQN) emerged as a result of the advancement in the DRL paradigm [16]. ### _Car-following models and lane-changing models_ Several studies have emerged in the domain of ITS tackling the problem of car-following and lane-changing decisions.
A car-following framework for Autonomous Vehicles (AVs) is developed in [6] using a navigation decision module and an automatic object detection module. The authors evaluate Q-learning and DQN algorithms within the proposed car-following framework. In [7], the authors explored the characteristics of demand for AVs in mixed traffic flow and proposed a car-following model with a warning module. An automated entropy adjustment algorithm based on the Tsallis actor-critic (ATAC) is developed in [8] to build a car-following model. The authors explore Twin Delayed DDPG (TD3), Soft Actor-Critic (SAC), and DDPG algorithms to test the performance of the proposed car-following model. Similarly, numerous research works have emerged in the domain of lane-changing maneuvers. The authors in [9] proposed a lane-changing decision module for AVs using a game-theoretical approach. They demonstrated that an objective quantification of lane-changing intention must be used during the decision process to determine the lane-changing intentions of AVs. The authors in [10] proposed an improved method for planning the trajectory of changing lanes for AVs on closed highways based on segmented data. For predicting target vehicles' driving intentions, a Bayesian Network-based model using Long Short-Term Memory and the maximum entropy Inverse Reinforcement Learning (IRL) algorithm is used. Some research works that explore the integration of car-following and lane-changing models for better decision options are starting to trend in the literature. For example, the authors in [12] proposed a two-layered framework that utilizes DRL algorithms to handle large-scale mixed-state spaces and produces lane-changing and car-following decisions as a composite output. A Duelling Double Deep Q-Network (D3QN) algorithm is used to distinguish the value of selected lane-changing actions and the potential value of the environment in the upper layer model. A DDPG algorithm is used in the second layer of the model to control vehicle speed continuously based on car-following decisions. This paper differs from the work in [12] in the sense that we use a single RL algorithm (i.e., the DQN algorithm) to make decisions on car-following or lane-changing actions. Moreover, the work in [12] did not adopt a MEC-assisted architecture, instead assuming the vehicle is equipped with the required processing resources to run the RL agents. ## III System Model The proposed integrated car-following and lane-changing model is presented in Figure 1. We consider a scenario where a sudden construction or roadblock is up ahead on a highway. To model the scenario properly, we assume the autonomous vehicle is equipped with multiple sensors and appropriate wireless communication systems such as V2V or V2I systems, which are depicted in Figure 1 as the vehicle dynamics module. This means that the target vehicle can gather information such as its own speed, acceleration, position, and distance to the roadblock, along with the same information about neighboring vehicles. We also assume off-board Road Side Units (RSUs) with which the vehicles can exchange data. The vehicles' data/information is passed through a DQN algorithm to train the agent to make appropriate decisions regarding lane-changing or car-following. The DQN algorithm is implemented on a central MEC edge server.
This central edge server handles the computation of the MDP problem and then communicates the decision to the vehicles accordingly through V2I technology using the nearby RSU. Mathematically, we model the scenario as an MDP as follows: * **State space definition:** We model each state \(s\in S\) as a tuple \(<d_{c},d_{f},v_{EH},x_{p},y_{p}>\) where \(d_{c}\) represents the distance of the target car from the roadblock, \(d_{f}\) represents the distance of the target car to the next car in its path, \(v_{EH}\) represents the speed of the target car, and \(x_{p}\) and \(y_{p}\) represent the location of the target car along the x-axis and y-axis respectively. * **Action space definition:** The action space \(a\in A\) is given as \(<M,C>\) where \(M\) represents the decision of merging into the other lane (i.e., a lane-changing decision) to avoid reaching the roadblock, and \(C\) represents the decision of staying in the same lane (i.e., a car-following decision for the cars on the right lane or cars that are far away from the roadblock). Note that this action space is represented by 12 discrete actions, which correlate to the positions in the lane with the roadblock where a lane-changing action can be initiated. This will be explained in more detail in Section IV. * **Reward definition:** The reward definition is given as a tuple \(<R_{M}^{+},R_{M}^{-},-R_{M}^{+}>\) where \(R_{M}^{+}\) represents a positive reward when the target car successfully changes lane and merges at average speed before reaching the roadblock, \(R_{M}^{-}\) represents a negative reward when the target car fails to change lane before reaching the roadblock (i.e., the end of the lane in this case), and \(-R_{M}^{+}\) represents a negative reward (of the same magnitude as \(R_{M}^{+}\)) when the target car changes lane but merges only at minimum speed before reaching the roadblock. The last reward handles the situation where the target car eventually merges but only at the end of the lane (which would cause heavy traffic jams).

Fig. 1: Proposed integrated car-following and lane-changing model running on MEC server

To solve the above modeled MDP problem, we utilize the well-known DQN algorithm because it achieves good performance in related decision control problems [6]. We employ two policies, the \(\epsilon\)-greedy policy and the Boltzmann policy, as explained in the DQN algorithm's pseudocode in Algorithm 1. The vehicles gather information about the state space as defined in the state space definition and send it to the nearest off-board RSUs. The off-board RSUs forward the state space to the central MEC edge server, which uses the DQN algorithm to determine the correct actions to execute. The current action is then sent to the corresponding vehicle via the nearby RSU. It is worth mentioning that we adopt the two policies (i.e., \(\epsilon\)-greedy and Boltzmann) for the sake of performance evaluation, as shown in Algorithm 1.
```
0: Initialize \(\mathbf{w}\) randomly, initialize the replay buffer \(D\) with capacity \(N\), initialize the policy \(\pi(s)\) as either the \(\epsilon\)-greedy policy or the Boltzmann policy, and initialize \(S_{0}\)
1: for \(t=0\) to \(T\) do
2:   Execute action \(A_{t}=\pi(S_{t})\) according to either the \(\epsilon\)-greedy policy or the Boltzmann policy
3:   Observe state \(S_{t+1}\) and reward \(R_{t+1}\)
4:   Store the transition \((S_{t},A_{t},R_{t+1},S_{t+1})\) in \(D\)
5:   Sample a random minibatch of transitions \((S_{j},A_{j},R_{j+1},S_{j+1})\) from \(D\)
6:   for \(j\) in minibatch do
7:     Set \(y_{j}=R_{j+1}\) if the episode terminates at step \(j+1\); otherwise set \(y_{j}=R_{j+1}+\gamma\max_{a^{\prime}}Q(S_{j+1},a^{\prime};\mathbf{w})\)
8:     Update \(\mathbf{w}\) using stochastic gradient descent to minimize \((y_{j}-Q(S_{j},A_{j};\mathbf{w}))^{2}\)
9:   end for
10:  Improve the policy \(\pi(s)\) (\(\epsilon\)-greedy or Boltzmann) with the new \(\mathbf{w}\)
11: end for
```
**Algorithm 1** DQN algorithm on the MEC server

## IV Implementation and Experiment The implementation of the system model is a multi-layer problem, with the end goal of the experimentation being to include two RL agents with different policies taking actions in the system model. The following tasks are defined in order to achieve the goal of our experimentation: * Model the MDP from the system model as a Python OpenAI GYM environment * Create the simulation space in Simulation of Urban Mobility (SUMO) * Create the model and add the agent and policy to the model * Interface the OpenAI GYM environment with the simulation space in SUMO * Interface the agent with the environment ### _GYM Environment_ The GYM environment was modeled after our MDP formulation. It allows us to define the parameters of our problem, such as the state space, action space, observation space, reward function, and simulation length. The library is not limited to these parameters and can include more complex functions and parameters; however, since the GYM environment is interfaced with SUMO, fewer parameters are needed. Our parameters are defined as follows: the state space is determined by the observation space, which is the range of each lane's average speed. Our action space is a discrete 12-action space that correlates to positions in the lane of construction where a lane change will be initiated. ### _SUMO Environment_ The SUMO environment is a 300-unit, three-lane highway with a blocked lane at 280 units, causing vehicles to slow down and make lane-changing decisions. A vehicle can change lanes every 20 units in the lane leading up to the roadblock. The RL agent must determine when to start changing lanes to avoid a negative reward if the average speed of the lanes falls below a certain threshold. The work features seamless integration between the two libraries, and more scenarios will be added in future work.

Fig. 2: SUMO environment and model representation

Figure 2 depicts the environment, with yellow boxes representing positions for lane changes, including one at the construction site, and the topmost detectors used to gather information about average speed. ### _Model_ A shallow network with 948 parameters was used to support a DQN model (depicted in Figure 2) for the GYM environment. It consisted of an input layer based on the possible states, followed by a 24-unit dense layer with ReLU activation, another 24-unit dense layer, and an output layer for the 12 discrete actions.
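To make the model description concrete, the following is a minimal sketch of such a network and agent in Keras with the keras-rl library. The observation dimension and all hyperparameters (replay memory size, warm-up steps, learning rate, \(\epsilon\)) are illustrative assumptions; the paper specifies only the two 24-unit dense layers with ReLU activation, the 12-action output, and the two policies.

```
# Sketch of the described DQN model and agent; hyperparameters are assumed.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam
from rl.agents import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import BoltzmannQPolicy, EpsGreedyQPolicy

def build_model(obs_dim, n_actions=12):
    return Sequential([
        Flatten(input_shape=(1, obs_dim)),      # keras-rl adds a window axis
        Dense(24, activation="relu"),
        Dense(24, activation="relu"),
        Dense(n_actions, activation="linear"),  # one Q-value per merge position
    ])

def build_agent(model, n_actions=12, greedy=True):
    policy = EpsGreedyQPolicy(eps=0.1) if greedy else BoltzmannQPolicy()
    agent = DQNAgent(model=model, nb_actions=n_actions, policy=policy,
                     memory=SequentialMemory(limit=50000, window_length=1),
                     nb_steps_warmup=100, target_model_update=1e-2)
    agent.compile(Adam(learning_rate=1e-3), metrics=["mae"])
    return agent

# agent = build_agent(build_model(obs_dim=1))
# agent.fit(env, nb_steps=20000)  # env: the SUMO-backed GYM environment
```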
Testing showed that the program completed one step per second, although this was due to the SUMO simulation rather than the network. The DQN implemented two policies, the Boltzmann Q-Policy and the Greedy Q-Policy, with all other hyperparameters kept consistent. In future work, an actor-critic network could be used to evaluate actions as they occur, which would benefit multi-agent testing. ### _Interfacing Agent, GYM, and SUMO_ At this point in the implementation, all modules have been created, so they must work together to create the simulation. The agent that utilizes the TensorFlow and Keras interface works with OpenAI GYM without any problems; however, challenges began when trying to integrate SUMO with the agent and the environment. No standard way was found to implement the two together; the only viable option found during research was that SUMO has a Python integration library called TraCI. Through this Python library, the SUMO application can be called from Python, which is what was needed for the implementation. A function was created that took a number between 1 and 12, corresponding to the point at which the lane change was supposed to take place, and then launched the SUMO simulation with those parameters. ## V Simulation Results and Analysis This section presents the performance evaluation of the proposed model. For each model, the system was trained for 400 episodes, which correlate to 20,000 steps. Several different metrics were recorded, and a small noise factor was included in each simulation to simulate randomness. Along with this randomness, the vehicles' starting positions were not always the same. With all of this included, each policy was able to perform and find optimal solutions. ### _DQN Boltzmann Q-Policy and Epsilon Greedy Q-Policy_ The DQN model with the Boltzmann Q-Policy is intended to take a more exploratory approach. It tries different actions and learns random actions based on a scale of the Q function. The Greedy Q-Policy will quickly find which actions correlate to the highest rewards and, based on the state, take different actions. Figure 3 shows that, for this environment, the Greedy policy actually takes more diverse actions, whereas Boltzmann takes more consistent actions. However, the rewards for the Greedy policy are greater than for Boltzmann, because they are working in a dynamic environment where the optimal actions change per step. The Greedy policy is able to learn this and adapt, whereas Boltzmann is stuck evaluating random actions for every scenario on its scale. The problem that the Boltzmann policy faces is analogous to an optimization problem where gradient descent gets stuck in a local minimum. Figure 4(a) helps us understand why the Boltzmann policy performs poorly compared to the Greedy policy. The Q-values of Boltzmann appear to converge toward those of the Greedy policy; however, Figure 3 shows that this cannot actually be the case, as the rewards would otherwise be similar. The Boltzmann policy evaluates its actions' Q-values as good, but since the environment is dynamic and changes after every step, this is not the case. This is due to the limited memory of the algorithm, as shown by Figures 4(a) and 3.

Fig. 4: Q-Value function at each episode and rewards of both models

Fig. 3: Action and reward data

After both models were trained, they were saved and used to play in the environment.
Before running both models, based on the results received during testing, we predicted that the Greedy policy would outperform the Boltzmann policy. Figure 4(b) shows that our prediction was correct. The Boltzmann policy takes more consistent actions that result in the same rewards, while the Greedy Q starts taking different actions to increase rewards as the simulation runs. This is because the Boltzmann Q operates on a range of actions that it determines based on Q-values, while the Greedy Q takes actions with the intention of increasing the reward and therefore the Q-value. Another reason the Greedy Q outperforms Boltzmann later in the simulation is the discount factor in the update equation, as it focuses on gathering rewards over a longer period of time. Figure 5 shows that the program created for this implementation has a reasonable run time. It also shows that the Greedy policy takes a little more time, which can be because the algorithm consults its memory of previous steps before making a new action, whereas Boltzmann makes a fresh choice at every step. ## VI Conclusion and Future Work In this paper, the task of modeling traffic and optimal merge patterns was accomplished through the use of Deep Reinforcement Learning. This task presented many challenges that required creativity when creating the training and testing programs. The problem had to be modeled into a game environment, and then the RL algorithm had to be trained and tested to gather results. It features a unique and novel implementation with Python, OpenAI GYM, and SUMO. With the use of this novel program, two different RL policies were tested. From the experiment, it can be determined that, for the application of merge patterns, the Epsilon Greedy Q-Policy performs better than the exploratory Boltzmann Q-Policy. Also, due to the dynamic nature of the environment that was created, the conclusion is that Greedy Q performs better in a random environment. Since the framework for the problem has been created, this presents opportunities for different algorithms and policies to be tested. The framework can be scaled to almost any RL or ML framework. We hope to further contribute by evaluating different algorithms to test performance against one another, as well as by tweaking the environment to add several different scenarios.
2310.20153
Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision
Large language models (LLMs) have demonstrated remarkable capabilities in various tasks. However, their suitability for domain-specific tasks is limited due to their immense scale at deployment, susceptibility to misinformation, and more importantly, high data annotation costs. We propose a novel Interactive Multi-Fidelity Learning (IMFL) framework for the cost-effective development of small domain-specific LMs under limited annotation budgets. Our approach formulates the domain-specific fine-tuning process as a multi-fidelity learning problem, focusing on identifying the optimal acquisition strategy that balances between low-fidelity automatic LLM annotations and high-fidelity human annotations to maximize model performance. We further propose an exploration-exploitation query strategy that enhances annotation diversity and informativeness, incorporating two innovative designs: 1) prompt retrieval that selects in-context examples from human-annotated samples to improve LLM annotation, and 2) variable batch size that controls the order for choosing each fidelity to facilitate knowledge distillation, ultimately enhancing annotation quality. Extensive experiments on financial and medical tasks demonstrate that IMFL achieves superior performance compared with single fidelity annotations. Given a limited budget of human annotation, IMFL significantly outperforms the human annotation baselines in all four tasks and achieves performance very close to human annotations on two of the tasks. These promising results suggest that the high human annotation costs in domain-specific tasks can be significantly reduced by employing IMFL, which utilizes fewer human annotations, supplemented with cheaper and faster LLM (e.g., GPT-3.5) annotations to achieve comparable performance.
Jiaxin Zhang, Zhuohang Li, Kamalika Das, Sricharan Kumar
2023-10-31T03:39:23Z
http://arxiv.org/abs/2310.20153v1
Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision ###### Abstract Large language models (LLMs) have demonstrated remarkable capabilities in various tasks. However, their suitability for domain-specific tasks is limited due to their immense scale at deployment, susceptibility to misinformation, and more importantly, high data annotation costs. We propose a novel Interactive Multi-Fidelity Learning (IMFL) framework for the cost-effective development of small domain-specific LMs under limited annotation budgets. Our approach formulates the domain-specific fine-tuning process as a multi-fidelity learning problem, focusing on identifying the optimal acquisition strategy that balances low-fidelity automatic LLM annotations and high-fidelity human annotations to maximize model performance. We further propose an exploration-exploitation query strategy that enhances annotation diversity and informativeness, incorporating two innovative designs: 1) prompt retrieval that selects in-context examples from human-annotated samples to improve LLM annotation, and 2) variable batch size that controls the order for choosing each fidelity to facilitate knowledge distillation, ultimately enhancing annotation quality. Extensive experiments on financial and medical tasks demonstrate that IMFL achieves superior performance compared with single fidelity annotations. Given a limited budget of human annotation, IMFL significantly outperforms the \(\mathbf{3}\times\) human annotation baselines in all four tasks and achieves performance very close to that of \(\mathbf{5}\times\) human annotation on two of the tasks. These promising results suggest that the high human annotation costs in domain-specific tasks can be significantly reduced by employing IMFL, which utilizes fewer human annotations, supplemented with cheaper and faster LLM (e.g., GPT-3.5) annotations to achieve comparable performance. ## 1 Introduction Large language models (LLMs) like GPT-3/ChatGPT/GPT-4 [4; 47; 5] have lately attracted great interest from both academia and industry due to their impressive in-context learning (ICL) abilities. However, the current state-of-the-art LLMs have quickly grown from hundreds of billions [7] to even a trillion [20] parameters. Models of this scale require specialized hardware, massive-scale training data, and extensive computational power, which are inaccessible for most product or research teams. In addition, the generalizability of LLMs is predominantly decided by the scope of the underlying pre-training data. In fact, LLMs do not perform well out of the box in many real-world domains where specialized knowledge beyond the standard fields of pre-training is required (i.e., domain shifts), such as healthcare [26] and finance [46]. As an alternative to general-purpose LLMs, practitioners oftentimes find small domain-specific language models (LMs) to be more favorable, as they require less training data and are faster to compute, leading to faster development cycles and lower operating costs [55; 12]. A common practice for developing such models is the classic pre-training and then fine-tuning paradigm. Unfortunately, to achieve performance comparable to LLMs, tuning small LMs requires high-quality manual annotations on target domain data, which in many fields requires extensive human effort and expert knowledge, making supervised fine-tuning very expensive.
One promising approach to alleviate human annotation efforts is to leverage LLMs as knowledge bases for automatically annotating new data [43; 45]. Unfortunately, such an approach is susceptible to the misinformation [8; 39; 32; 2] of LLMs through hallucination [16; 53; 48; 30; 52], which risks generating unreliable or falsified labels and will, in turn, undermine the model's utility for high-stakes applications like healthcare and finance, where the truth is of utmost importance. As a result, the key challenge at hand is how to effectively gather sufficient high-quality data given limited budgets on human annotation, which is a critical component in fine-tuning domain-specific LMs. In this paper, we present a novel framework, named IMFL, for achieving cost-effective development of domain-specific LMs, as illustrated in Fig. 1. Our approach capitalizes on the insight that different data samples inherently exhibit different levels of hardness for learning [6; 1]. Therefore, it is unnecessary to request human annotation for every sample. By discerning each sample's hardness level, we can delegate the majority of the annotation tasks to automatic annotation tools such as LLMs while exclusively assigning a limited number of highly uncertain samples to human annotators, thereby reducing human effort significantly while still maintaining high annotation quality. To improve cost efficiency, we formulate the domain-specific fine-tuning process as an _interactive multi-fidelity learning_ problem. We deem LLMs and humans to be two sources of annotation with distinct fidelities, and aim to determine the optimal acquisition strategy that balances low-fidelity LLM-generated annotations and high-fidelity human annotations to maximize model performance under limited annotation budgets. We thus introduce an exploration-exploitation query strategy, wherein human annotations emphasize _exploitation_ geared toward maximizing informativeness while LLM annotations concentrate on _exploration_ to foster diversity and improve representativeness. To reduce the misinformation in LLM-generated annotations and improve model usability and reliability in the target domain, we incorporate two innovative designs. \begin{table} \begin{tabular}{l|c c c} \hline \hline & Human & LLM & IMFL \\ \hline Cost Saving & Low & **Very High** & **High** \\ Quality & **Very High** & Low & **High** \\ Efficiency & Low & **Very High** & **High** \\ \hline Performance & **Very High** & Low & **High/Very High** \\ \hline \hline \end{tabular} \end{table} Table 1: A qualitative comparison of human annotation, LLM annotation, and IMFL. Figure 1: (a) Proposed Interactive Multi-Fidelity Learning Framework (IMFL). IMFL aims at solving the best acquisition strategy that balances between low-fidelity automatic LLM annotations and high-fidelity human annotations to maximize model performance given limited annotation budgets. (b) IMFL significantly outperforms the \(3\times\) human annotation baselines in all four tasks and is very close to the \(5\times\) upper bound in the Headline dataset (shown). This result indicates that the high human annotation cost in domain-specific tasks can be greatly reduced by employing IMFL, which utilizes fewer human annotations combined with cheaper GPT-3.5 annotations to achieve competitive performance. First, we utilize prompt retrieval to select in-context learning examples for each queried sample, thereby improving the
Second, we implement variable batch sizes throughout the interactive annotation process, which manage the order in which each fidelity is chosen; this facilitates knowledge distillation and ultimately enhances annotation quality while stabilizing the LLM annotations. We evaluate our approach on four language understanding tasks across two specialized application domains, i.e., finance and medicine. Our results highlight that LMs tuned through the proposed IMFL framework with GPT-3.5 as an auto-annotator significantly outperform LMs tuned with \(3\times\) human annotations, and are even on par with LMs tuned with \(5\times\) human annotations in some cases. In contrast to single-fidelity annotations such as only human or only LLMs, IMFL effectively addresses limitations related to cost saving, annotation quality, and efficiency (see Table 1). Furthermore, IMFL not only surpasses the performance of LLM annotators but also achieves highly competitive performance compared to human annotators, albeit at a substantially lower cost and effort. ## 2 Interactive Multi-fidelity Learning We propose IMFL that builds on two key insights: (1) leveraging a substantial amount of low-fidelity annotations generated by LLMs to compensate for the insufficiency of high-fidelity human annotations during fine-tuning, and (2) utilizing high-fidelity human annotations as supervision signals to distill knowledge from LLMs while simultaneously enhancing their output annotation quality through in-context learning. Essentially, our approach, IMFL, can be regarded as a synergy between fine-tuning and knowledge distillation under sparse human supervision. ### Problem Formulation Given a total annotation budget \(\mathcal{B}\) and a computational cost \(\mathcal{C}\) (e.g., costs for fine-tuning, inference, and query), we aim to fine-tune a small LM \(f(\mathbf{x};\theta^{*}):\mathcal{X}\rightarrow\mathcal{Y}\) with pre-trained parameters \(\theta^{*}\) on a downstream task by annotating samples from an unannotated data pool \(\mathcal{U}=\{x_{i}\}_{i=1}^{U}\) to constitute the annotated sample set \(\mathcal{A}\) (\(|\mathcal{A}|\leq\mathcal{B}\) and initially \(\mathcal{A}=\varnothing\)) such that its performance is maximized. Note that in our multi-fidelity setting, the annotated set contains a human-annotated subset \(\mathcal{A}_{H}\) and an LLM-annotated subset \(\mathcal{A}_{G}\), so \(\mathcal{A}=\mathcal{A}_{H}\cup\mathcal{A}_{G}\). Similarly, the total annotation budget is composed of human annotation budget \(\mathcal{B}_{H}\) and LLM annotation budget \(\mathcal{B}_{G}\) (\(\mathcal{B}_{H}\) is typically much smaller than \(\mathcal{B}_{G}\)), i.e., \(\mathcal{B}=\mathcal{B}_{H}+\mathcal{B}_{G}\). To solve for the best annotation strategy to maximize annotation and computation efficiency, we pose the annotation acquisition process as a multi-fidelity learning problem with interactions allowed for \(R\) rounds. In the \(r\)-th round (\(1\leq r\leq R\)), we query a set of instances \(\mathcal{Q}^{r}\) and annotate acquired samples \(\mathcal{A}^{r}\) from the unannotated pool to add annotation, i.e., \(\mathcal{U}=\mathcal{U}\setminus\mathcal{A}^{r}\) and fine-tune the target model \(f\) on \(\mathcal{A}^{r}\) to update \(\theta^{(r)}\). The goal is to minimize the empirical risk \(\mathcal{R}(f)\) of the final LM \(f(\mathbf{x};\theta^{(R)})\) on the downstream task, subject to preset annotation budget and computational cost constraints. 
### 2.2 Multi-fidelity Learning Framework

Initialization. We initialize the multi-fidelity learning loop by randomly selecting a small set of samples \(\mathcal{A}_{H}^{0}\) from the unannotated set \(\mathcal{U}\) to be annotated by human annotators. The pre-trained LM with parameters \(\theta^{*}\) is then tuned on the initial annotated dataset: \[\theta^{(0)}=\operatorname*{arg\,min}_{\theta^{*}}\frac{1}{|\mathcal{A}_{H}^{0}|}\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{A}_{H}^{0}}\mathcal{L}\left(f(\mathbf{x}_{i};\theta^{*}),y_{i}\right),\quad i=1,...,n_{s} \tag{1}\] where \(\mathcal{L}\) is the loss function, e.g., cross-entropy for classification, and \(n_{s}\) is the annotation size. This enables the uncertainty score of the target LM to be initially updated on domain-specific data, which helps to mitigate _cold-start_ issues [31; 49; 50]. Interactive fine-tuning. After model initialization, we begin querying samples from the unannotated pool \(\mathcal{U}^{0}=\mathcal{U}\setminus\mathcal{A}_{H}^{0}\) for either human or LLM annotation. Existing methods [54] often consider the entire unannotated pool during sampling. These approaches scale poorly to large unlabeled datasets, as acquiring informative samples usually involves making inferences or executing clustering, which can be time-consuming if these operations were to be computed over all data samples. Thus, for any interaction round \(r\), we propose to randomly sub-sample from \(\mathcal{U}^{r}\) to obtain a smaller candidate set \(\mathcal{U}^{r}_{s}\) where the acquisition strategy can be efficiently computed. In the \(r\)-th round of interactive fine-tuning, we first perform the _exploration-exploitation query_ (EEQ) strategy \(\mathcal{S}\) (described in detail in Section 2.3) to determine the human annotation set \(\mathcal{A}^{r}_{H}\) and the LLM annotation set \(\mathcal{A}^{r}_{G}\) from the sub-sampled unannotated pool \(\mathcal{U}^{r}_{s}\). Then the interactive multi-fidelity learning can be solved by minimizing the following total loss objective: \[\mathcal{L}_{total}=\frac{1}{|\mathcal{A}^{r}_{H}|}\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{A}^{r}_{H}}\mathcal{L}\left(f(\mathbf{x}_{i};\theta^{(r)}),y_{i}\right)+\frac{1}{|\mathcal{A}^{r}_{G}|}\sum_{(\mathbf{x}_{j},y_{j})\in\mathcal{A}^{r}_{G}}\mathcal{L}\left(f(\mathbf{x}_{j};\theta^{(r)}),y_{j}\right) \tag{2}\] Unlike existing approaches that use simultaneous annotation with equal batch sizes for each round, we emphasize the importance of the annotation order (human first and then LLM) and of variable batch sizes for each query step (verified in Section 4.2), and identify the following two key designs that improve query efficiency and annotation effectiveness: _Design 1 - In-context learning with similarity-based prompt retrieval._ According to the annotation budgets \(\mathcal{B}^{r}_{H}\) and \(\mathcal{B}^{r}_{G}\), we acquire \(\mathcal{Q}^{r}_{H}\) and \(\mathcal{Q}^{r}_{G}\) instances for human and LLM annotators respectively. We first let humans annotate the acquired samples \(\mathcal{Q}^{r}_{H}\), obtain \(\mathcal{A}^{r}_{H}\), and update the human-annotated set \(\mathcal{A}_{H}=\mathcal{A}_{H}\cup\mathcal{A}^{r}_{H}\). When using the LLM to automatically generate annotations for new data, we then retrieve a few examples from the current human-annotated set \(\mathcal{A}_{H}\) as in-context examples for improving the predicted annotation quality, see Fig. 2.
Leveraging recent advances in prompt retrieval [25], we compute embeddings of all annotated samples using Sentence-BERT [34] and find the most similar examples for each queried instance, measured by cosine similarity. This design improves in-context learning by better utilizing human supervision, which empirically helps to further improve the accuracy and robustness of LLM annotations (verified in Section 4.2). More implementation details are provided in Appendix A.

_Design 2 - Variable batch-size query._ We propose a variable batch-size query strategy that allocates more of the human budget to the initial steps of the learning process, annotating the most uncertain instances first, and gradually decreases the batch sizes until the total budget is reached, as illustrated in Fig. 2. Naturally, another benefit of this design is that by acquiring more human-annotated examples in the early stage, we have access to a larger pool of high-fidelity samples for similarity-based prompt retrieval, which further improves the in-context learning performance and stabilizes the LLM annotations. Inspired by the infinite geometric series, we design a budget decay scheme and set the human annotation budget for the \(r\)-th round to be \(\mathcal{B}^{r}_{H}=\mathcal{B}_{H}/2^{r}\), so that the cumulative budget approaches \(\mathcal{B}_{H}\):

\[\frac{\mathcal{B}_{H}}{2^{1}}+\frac{\mathcal{B}_{H}}{2^{2}}+\frac{\mathcal{B}_{H}}{2^{3}}+\cdots+\frac{\mathcal{B}_{H}}{2^{R}}=\sum_{r=1}^{R}\left(\frac{1}{2}\right)^{r}\mathcal{B}_{H}\;\xrightarrow{R\to\infty}\;\mathcal{B}_{H}. \tag{3}\]

Note that the residual budget remaining after \(R\) rounds is added to the last round. Leveraging these two designs, we efficiently acquire a larger set of high-quality data \(\mathcal{A}^{r}_{G}\) annotated by LLMs (e.g., GPT-3.5). The next step is to update the annotated sample set in the \(r\)-th round, \(\mathcal{A}^{r}=\mathcal{A}^{r}_{H}\cup\mathcal{A}^{r}_{G}\), and the unannotated data pool, \(\mathcal{U}=\mathcal{U}\setminus\mathcal{A}^{r}\). Then we fine-tune the target model \(f\) using the annotated sample set \((\mathbf{x}_{i},y_{i})\in\mathcal{A}^{r}\) and update the model parameters \(\theta^{(r)}\).

Termination. The multi-fidelity learning process is stopped if either of two stopping criteria is satisfied: (1) Annotation budget \(\mathcal{B}\): if the annotation budget after \(R\) rounds exceeds the total budget limit, i.e., \(\mathcal{B}_{H}+\mathcal{B}_{G}\geq\mathcal{B}\), we terminate the interactive process. (2) Computational cost \(\mathcal{C}\): compared with inference and query-calculation costs, the computation cost of each fine-tuning round \(\mathcal{C}_{ft}\) is typically much higher, and we thus stop the fine-tuning process if \(R\times\mathcal{C}_{ft}\geq\mathcal{C}\). Finally, we return the fine-tuned target LM \(f(\mathbf{x};\theta^{(R)})\) and the annotated sample set \(\mathcal{A}\). Algorithm 1 illustrates the step-by-step workflow of our IMFL framework.

### Exploration-Exploitation Query Strategy

Building on the multi-fidelity learning framework, we introduce a query strategy that harnesses human annotation for _exploitation_, maximizing informativeness through uncertainty sampling, and LLM annotation for _exploration_, enhancing representativeness through diversity sampling. A short code sketch of Designs 1 and 2 follows before the two-stage selection is detailed.
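The sketch below pairs the geometric budget decay of Eq. (3) with similarity-based prompt retrieval. It assumes the `sentence-transformers` package; the model checkpoint and the number of retrieved examples are illustrative choices, not the authors' configuration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def round_budgets(B_H, R):
    """Design 2: human budget B_H / 2^r in round r; the residual joins round R."""
    budgets = [B_H // 2**r for r in range(1, R + 1)]
    budgets[-1] += B_H - sum(budgets)
    return budgets

# Design 1: retrieve the most similar human-annotated examples as in-context demos.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative S-BERT checkpoint

def retrieve_demos(query_text, human_annotated, k=4):
    """human_annotated: list of (text, label) pairs from the set A_H."""
    texts = [t for t, _ in human_annotated]
    emb = encoder.encode(texts + [query_text], normalize_embeddings=True)
    sims = emb[:-1] @ emb[-1]                  # cosine similarity to the query
    top = np.argsort(-sims)[:k]
    return [human_annotated[i] for i in top]   # demos to prepend to the LLM prompt

print(round_budgets(200, 5))                   # e.g., [100, 50, 25, 12, 13]
```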
The core idea is a two-stage selection: 1) diversity sampling, e.g., selecting cluster centers to reduce intra-iteration redundancy, and 2) uncertainty sampling, e.g., selecting instances with the least confidence, to avoid inter-iteration redundancy. Fig. 3 presents the key components and steps of the EEQ strategy. Specifically, we apply the \(k\)-means clustering algorithm to embeddings of the sub-sampled unannotated data \(\mathcal{U}^{r}_{s}\). Based on the annotation budget, we set the number of clusters to \(k=\mathcal{B}_{H}/2^{r}+\mathcal{B}_{G}/R\) and identify the cluster centers (or the samples closest to each cluster center) as candidates, thus enforcing diversity-driven exploration. We then calculate the uncertainty score for all selected samples and rank them from high to low. The top \(\mathcal{B}_{H}/2^{r}\) uncertain samples are assigned to the human annotator following the least-confidence strategy:

\[\mathbf{x}^{*}_{i}=\operatorname*{arg\,max}_{\mathbf{x}_{i}}\left[1-p(\hat{y}_{i}\mid\mathbf{x}_{i};\theta^{(r)})\right],\quad\hat{y}_{i}=f(\mathbf{x}_{i};\theta^{(r)}), \tag{4}\]

which has been shown to be simple and effective in a variety of settings, thus enforcing uncertainty-driven exploitation [28; 42]. As discussed in Section 2.2, we then update the human-annotated pool \(\mathcal{A}_{H}\), which enables us to retrieve in-context examples for the LLM annotator, which in turn annotates \(\mathcal{B}_{G}/R\) samples with better quality and stability. More detailed discussions about the query strategy are presented in Appendix B.

## 3 Experiments

### Datasets

We empirically validate the effectiveness of the proposed interactive multi-fidelity learning framework on four diverse datasets spanning two important real-world application domains, namely finance and medicine. A summary of the four datasets is provided in Table 2. For the FPB, Headline, and MedQA datasets, we use the publicly available test data for evaluation. As for the PubMedQA dataset, we follow prior work [18] and use the dev./valid data for evaluation. We evaluate the methods by (average) F-1 score for the financial datasets, following the same setting used by BloombergGPT [46], and by accuracy for the medical datasets, as in prior work [38; 32]. Details about the datasets are available in Appendix D.1.

Figure 3: Illustration of the exploration-exploitation query strategy with core components and steps.

Figure 2: Illustration of interactive fine-tuning. Human annotations are first executed, and the resulting annotated data are iteratively merged into the human-annotated pool, which provides rich examples for prompt retrieval when calling the LLM for annotation. The batch size for human annotation varies and gradually decreases as the rounds progress.

### Experiments Setup

Fine-tuning. We adopt Dolly 2.0, the first open-source, instruction-following LLM, as the target LM for fine-tuning. It is based on the EleutherAI Pythia [3] model family, a suite of decoder-only auto-regressive language models ranging from 70M to 12B parameters. Limited by our computational budget, we choose dolly-v2-3b as the pre-trained LM for our main results. We also provide additional results using larger LMs (e.g., dolly-v2-7b and dolly-v2-12b) in Appendix C.2 to show the impact of pre-trained LM size on final performance. For efficiency, we leverage low-rank adaptation (LoRA) [13] to optimize the fine-tuning process and reduce memory and time costs. We execute all experiments on a GPU node with 8 NVIDIA V100 32GB GPUs.
More experiment setup details, e.g., hyperparameters, can be found in Appendix D.2.

Query and annotation. In the query step, we remove all labels in the training data to create a pool of unannotated data. The original ground-truth labels are treated as the high-fidelity annotations provided by the Human Annotator and are only accessed at the cost of budget consumption. For the low-fidelity annotation, we employ GPT-3.5-turbo as the LLM Annotator to automatically generate annotations for unannotated data. We note that, in reality, even collecting a large set of _unannotated_ samples can often be non-trivial. As such, in our experiments, we limit our unannotated data pool to 3000 data samples (randomly sampled from the original training dataset), from which we perform our query strategy. Each experiment is repeated three times, and the mean is reported as the final result to reduce noise.

Annotation and computational budget. Unless mentioned otherwise, we assume a total annotation budget of 1000 for all datasets (see more discussion of the budget setting in Appendix C.4). As human annotation is far more expensive than using an LLM (i.e., GPT-3.5) to generate annotations, we set the human annotation budget of IMFL to 200 samples (20%) and the GPT-3.5 annotation budget to 800 samples (80%). In Appendix C.5, we provide a discussion of the trade-off between annotation accuracy and cost. Regarding the fine-tuning cost, we set the total number of interaction rounds for fine-tuning to \(R=5\) to reflect the computational budget. It is worth noting that the performance can be further improved if more rounds (i.e., a higher budget) are allowed.

### Main results

In this section, we compare IMFL with single-fidelity annotations to validate the effectiveness of the proposed multi-fidelity paradigm. Fig. 4 compares IMFL with using only human annotations, where \(1\times\) Human, \(3\times\) Human, and \(5\times\) Human represent the results obtained by fine-tuning on 200, 600, and 1000 human annotations, respectively. A detailed version of the main results is shown in Appendix C.1. Note that \(5\times\) Human (1000 human annotations) can be seen as the performance upper bound of IMFL (200 Human + 800 GPT-3.5) if the entire budget were spent on human annotation. From the results, we can clearly see that IMFL significantly outperforms the \(3\times\) human annotation baselines on all four tasks. In particular, IMFL achieves performance very close to \(5\times\) human annotation on both the Headline and PubMedQA datasets, with only marginal differences (0.83% and 1.32% absolute loss, respectively). This result indicates that the high human annotation cost in domain-specific tasks can be greatly reduced by employing IMFL, which utilizes fewer human annotations combined with cheaper and faster GPT-3.5 annotations to achieve similar performance.

Fig. 5 compares IMFL with using only GPT-3.5 annotations under the same total annotation budget (varied from 260 to 1000 samples). We have the following observations. First, our IMFL outperforms the GPT-3.5 annotation by a large margin (in terms of absolute gain) on FPB (+7.35%), Headline (+8.3%), PubMedQA (+6.89%), and MedQA (+19.95%) given the same 1000 annotation budget.
\begin{table} \begin{tabular}{l c c c c} \hline \hline Domain & Name & Task & Size (train/test) & Metric \\ \hline Financial & FPB [29] & Sentiment Analysis & 3876/969 & F-1 score \\ Financial & Headline [40] & News Classification & 9130/2282 & Average F-1 score \\ Medical & PubMedQA [18] & Biomedical QA & 500/500 & Accuracy \\ Medical & MedQA [17] & Medical knowledge QA & 11450/1273 & Accuracy \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of the four domain-specific datasets used in our experiments.

Second, on three out of four datasets (FPB, PubMedQA, and MedQA), models tuned using IMFL with a _total_ annotation budget of 260 (100 Human + 160 GPT-3.5) are able to achieve better performance than using 1000 GPT-3.5 annotations. On the Headline dataset, using 1000 GPT-3.5 annotations performs slightly better than IMFL with a total budget of 260, but still worse once the total budget is increased to 470 ((100 + 50) Human + (160 + 160) GPT-3.5). This shows that while GPT-3.5 demonstrates promising abilities to reproduce human-generated labels [14, 56], relying solely on low-fidelity GPT-3.5 labels is not ideal for fine-tuning LMs for domain-specific tasks. In addition, compared with using only GPT-3.5 annotations, IMFL shows more reliable results with smaller variance, which benefits from the combination of human annotation and the similarity-based prompt retrieval strategy for improving the in-context learning capability of LLMs. These results verify that IMFL can efficiently utilize sparse human supervision to enhance GPT-3.5 annotations and consequently achieve better performance.

## 4 Analysis

### Exploration-Exploitation Query vs Random Query Strategy

Table 3 compares the proposed EEQ strategy with the random query strategy in multiple settings under a limited annotation budget. In the multi-fidelity setting, our EEQ strategy outperforms the random query strategy by a large margin (5.91% absolute gain on average). Although fine-tuning with only human annotations might be expected to produce the best results, as it has the highest annotation accuracy, we observe that using only human annotations with random queries is generally worse than using human and GPT-3.5 annotations with EEQ queries (on three out of four datasets), thereby validating the effectiveness of the proposed EEQ query strategy. One exception is the MedQA dataset, where using only human annotations with random queries performs slightly better than the proposed IMFL. This is because GPT-3.5 shows relatively low annotation accuracy on this dataset and consequently has a negative impact on the fine-tuning performance by injecting noise into the annotated data. However, using only GPT-3.5 annotations with random queries under the considered annotation budget would cause the performance to drop significantly.

Figure 4: Comparisons between our multi-fidelity learning (200 human annotations + 800 GPT-3.5 annotations) and various sizes, i.e., 200 (\(1\times\)), 600 (\(3\times\)), and 1000 (\(5\times\)), of human annotations.

Figure 5: Comparisons between our multi-fidelity learning paradigm and single low-fidelity (all GPT-3.5) annotation on four domain-specific tasks given the same total 1000 annotation budget. Note that the samples for all GPT-3.5 are drawn based on the uncertainty score.
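For concreteness, here is a minimal sketch of the two-stage EEQ selection compared above (\(k\)-means for diversity, least confidence per Eq. (4) for uncertainty), using scikit-learn; the array names and shapes are assumptions for exposition, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def eeq_select(embeddings, probs, n_human, n_llm):
    """Two-stage EEQ: k-means centers for diversity, least confidence for uncertainty.

    embeddings: (N, d) array for the sub-sampled unannotated pool U_s^r.
    probs:      (N, C) class probabilities from the current model f(.; theta^(r)).
    Returns index arrays for human annotation and LLM annotation.
    """
    k = n_human + n_llm
    km = KMeans(n_clusters=k, n_init=10).fit(embeddings)
    # Stage 1 (exploration): keep the sample closest to each cluster center.
    candidates = np.unique(km.transform(embeddings).argmin(axis=0))
    # Stage 2 (exploitation): rank candidates by least confidence, 1 - max_y p(y|x).
    uncertainty = 1.0 - probs[candidates].max(axis=1)
    order = candidates[np.argsort(-uncertainty)]
    return order[:n_human], order[n_human:n_human + n_llm]
```

In round \(r\), `n_human` would correspond to \(\mathcal{B}_{H}/2^{r}\) and `n_llm` to \(\mathcal{B}_{G}/R\), matching Section 2.3.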
### Prompt Retrieval with Variable Batch Size

To evaluate the effectiveness of similarity-based prompt retrieval and variable batch sizes, in Table 4 we consider several variants with random prompt retrieval and equal batch sizes as baselines for comparison. We find that similarity-based retrieval shows superior performance compared to the random prompt retrieval baselines (e.g., 80.28 vs 73.77 on Headline and 72.05 vs 68.10 on PubMedQA), and using variable batch sizes further boosts the effectiveness of retrieval by providing a larger and more diverse set of candidate examples, which is crucial in the limited-budget setting.

Our interactive learning is conducted over the course of multiple interaction rounds, with each round using a single mini-batch of data for adaptive annotation and fine-tuning. Here we also compare the interactive multiple-mini-batch update strategy with the full-batch strategy, where all annotations are acquired in a single round and then used to fine-tune the model. The full-batch strategy naturally reduces the computational cost of fine-tuning but, as shown in the results, loses the benefits of interactive improvements. Interestingly, however, we find that similarity-based retrieval still provides a good amount of improvement in the full-batch setting, achieving competitive performance that is slightly better than using mini-batch updates with random retrieval. We therefore recommend that practitioners consider this alternative option, i.e., full-batch acquisition combined with similarity-based retrieval, if the computational budget for fine-tuning is limited.

### Alternative Query Strategies

Besides the random query and the proposed EEQ query, we also explore several additional query strategies for interactive multi-fidelity learning, including (i) confidence-based strategies: predictive entropy (Entropy) [35], least confidence (Least-c) [22], and breaking ties (Breaking-t) [27]; (ii) diversity-based strategies: K-means [50] and Diversity [37]; and (iii) a hybrid strategy [19], a combination of confidence and diversity via a weighted sum. As shown in Table 5, our EEQ strategy outperforms all the other methods on two representative tasks (Headline and MedQA). These alternative strategies are simple to implement and perform better than the random baseline. Unfortunately, directly applying them to our multi-fidelity paradigm does not yield the most desirable performance. The one that achieves the closest performance is the hybrid strategy. However, it ignores the effects of annotation order and fidelity, which are important ingredients for achieving high performance in our multi-fidelity setting.

### Annotation Accuracy by Different GPT-based Annotators

The fine-tuning performance relies on the annotation accuracy of the LLM annotator, since noisy annotations may hurt the final model performance in terms of accuracy and reliability.
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{3}{c}{Method} & \multicolumn{3}{c}{Dataset} \\ \hline Budget & Batch & Batch size & Retrieval & FPB & Headline & PubMedQA & MedQA \\ \hline 1000 & 5 Mini-Batch & Variable & Similar & **47.88** & **81.09** & **73.76** & **67.98** \\ 1000 & 5 Mini-Batch & Equal & Similar & 46.34 & 80.28 & 72.05 & 66.11 \\ 1000 & 5 Mini-Batch & Variable & Random & 42.09 & 73.98 & 67.44 & 63.56 \\ 1000 & 5 Mini-Batch & Equal & Random & 42.34 & 73.77 & 68.10 & 63.42 \\ 1000 & 1 Full-Batch & NA & Similar & 43.72 & 75.48 & 68.90 & 63.79 \\ 1000 & 1 Full-Batch & NA & Random & 39.80 & 72.11 & 65.94 & 57.23 \\ \hline \hline \end{tabular} \end{table} Table 4: Effects of prompt retrieval, variable batch size, and batch orders.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Method & \multicolumn{2}{c}{Budget} & \multicolumn{2}{c}{Query Strategy} & \multicolumn{3}{c}{Dataset} \\ \hline Multi/Single & Human & GPT-3.5 & EEQ/Random & FPB & Headline & PubMedQA & MedQA \\ \hline Human + GPT-3.5 & 200 & 800 & EEQ & **47.88** & **81.09** & **73.76** & 67.98 \\ Human + GPT-3.5 & 200 & 800 & Random & 41.94 & 74.32 & 66.03 & 63.77 \\ Only Human & 1000 & 0 & Random & 43.81 & 75.46 & 68.87 & **70.17** \\ Only GPT-3.5 & 0 & 1000 & Random & 38.56 & 71.04 & 65.89 & 47.13 \\ \hline \hline \end{tabular} \end{table} Table 3: A comparison of our EEQ query strategy and random query strategy.

Here we focus on evaluating the annotation accuracy of different variants of GPT (i.e., GPT-3 vs GPT-3.5), rather than fine-tuning accuracy, through multiple experiments. We notice that zero-shot GPT-3.5 annotation performs much worse than few-shot prompting and our retrieval method on domain-specific tasks. This is because, although GPT-3.5 shows promising zero/few-shot performance in in-distribution scenarios, it lacks the domain knowledge to make accurate predictions on the considered out-of-domain tasks. The proposed prompt retrieval, which pairs human annotations from domain experts with the in-context learning capabilities of LLMs, substantially improves the performance of GPT-3.5 annotation. If the annotation budget is very limited, GPT-3 is a cheaper alternative but underperforms GPT-3.5 even with prompt retrieval applied. In our multi-fidelity paradigm, one could also utilize GPT-4 (more expensive and with limited access) for annotation. Please see additional GPT-4 annotation results in Appendix C.3. Recent work shows promising capabilities of GPT-4 on medical challenge problems; for example, GPT-4 achieves 81.38 (5-shot) and 78.87 (0-shot) on the MedQA task, as reported by [32]. Note that IMFL uses GPT-3.5 as our LLM annotator but is easy to extend to other LLM annotators, e.g., GPT-based models or open-source models such as LLaMA, depending on the annotation budget. An exhaustive study of different LLM annotators is beyond the scope of this work.

### Ablation Study of Human Annotation Ratio

Given a total annotation budget of 1000, a key question is how to assign the budget to human annotators versus GPT-3.5 annotators. In our original setting, we use a 20/80 ratio, since human annotations are much more expensive (in monetary, time, and annotator-training costs, especially in domain-specific areas such as finance and medicine) than GPT-3.5 annotations. As we aim to minimize the ratio of human annotations, we conduct an ablation study with 10/90 and 5/95 ratios and evaluate their effect on performance in our framework.
Table 7 shows the performance comparison for various ratios of human annotations, i.e., \(0.5\times\) and \(0.25\times\) human annotations. The performance drops noticeably as human effort decreases. In the case of \(0.5\times\) human annotations, performance is lower than in our original setting but still comparable to \(3\times\) human annotations. However, the case of \(0.25\times\) human annotations shows a significant decrease, because too few human annotations weaken the effect of in-context prompt retrieval and reduce the accuracy of the initial uncertainty estimation. In short, a certain amount of human annotation is necessary for our framework even though we seek minimal human effort. We thus need to consider a trade-off between accuracy and annotation budgets.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Dataset & \multicolumn{3}{c}{GPT-3 Annotation} & \multicolumn{3}{c}{GPT-3.5 Annotation} \\ \cline{2-7} & retrieval & 5-shot & 0-shot & retrieval & 5-shot & 0-shot \\ \hline Headline & 75.59 & 72.51 & 70.25 & **79.40** & 76.15 & 73.31 \\ MedQA & 51.42 & 44.89 & 42.03 & **59.45** & 53.57 & 50.82 \\ \hline \hline \end{tabular} \end{table} Table 6: A comparison of annotation accuracy by GPT-3 and GPT-3.5 in zero/few-shot learning.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Method & \multicolumn{2}{c}{Number of Annotations} & \multicolumn{3}{c}{Dataset} \\ \cline{2-7} & Human & GPT-3.5 & FPB & Headline & PubMedQA & MedQA \\ \hline IMFL & 200 (1\(\times\)) & 800 & **47.88 \(\pm\) 0.98** & **81.09 \(\pm\) 0.58** & **73.76 \(\pm\) 0.95** & **67.98 \(\pm\) 1.45** \\ IMFL & 100 (0.5\(\times\)) & 900 & 43.66 \(\pm\) 1.42 & 75.41 \(\pm\) 1.01 & 70.88 \(\pm\) 1.08 & 61.44 \(\pm\) 1.83 \\ IMFL & 50 (0.25\(\times\)) & 950 & 40.76 \(\pm\) 1.48 & 73.65 \(\pm\) 1.09 & 68.18 \(\pm\) 1.11 & 52.38 \(\pm\) 1.93 \\ \hline \hline \end{tabular} \end{table} Table 7: Performance comparisons of various ratios of human annotations on four datasets.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Dataset & Random & Entropy & Least-c & Breaking-t & K-means & Diversity & Hybrid & EEQ \\ \hline Headline & 74.32 & 76.42 & 77.55 & 77.34 & 76.59 & 77.61 & 79.23 & **81.09** \\ MedQA & 63.77 & 65.15 & 65.21 & 65.28 & 66.44 & 66.41 & 66.94 & **67.98** \\ \hline \hline \end{tabular} \end{table} Table 5: A comparison of various alternative query strategies on two representative tasks.

## 5 Related Work

Domain-specific LLMs. The significance of domain-specific training for encoder-only masked language models is widely recognized. Two commonly adopted methods are to either train BERT models [9] from scratch using domain-specific data or to continue pre-training an existing model on new domain-specific data. Following these approaches, several BERT-based models have been built by domain experts, e.g., BioBERT [21], ClinicalBERT [15], etc. A recent trend is to train decoder-only models utilizing domain-specific data, such as Med-PaLM [38], BioGPT [26], and BloombergGPT [46]. These findings highlight the advantages of in-domain pre-training, especially when sufficient data is available. However, an underlying challenge is how to train domain-specific LMs when there is insufficient data for large-scale pre-training due to a limited annotation budget. Our work addresses this underexplored problem by developing a cost-effective fine-tuning paradigm with limited budgets for human annotation.
Multi-fidelity learning. Our work is motivated by recent findings suggesting that GPT models are capable of replicating or even outperforming human annotation [14; 43; 56]. However, these approaches suffer from unreliable annotations and low confidence when applied to domain-specific tasks, especially in specialized high-stakes fields like finance and medicine. Our key idea originates from multi-fidelity optimization approaches [33; 23; 24], which optimize an objective function by utilizing varying approximations with different levels of precision and cost. Previous studies in the field of NLP have explored "dual supervision" to train models by combining two types of labels [45]. In contrast to this naive combination approach, we develop a novel multi-fidelity framework that achieves cost-effective adaptation of domain-specific language models through fine-tuning and in-context learning.

Active learning. Active learning (AL) is an extensively studied field concerned with improving the performance of language models with fewer labeled instances [54; 44; 31]. Current work within AL typically focuses on two main scenarios: active fine-tuning [42; 28] and active in-context learning [10; 41]. The former involves iteratively updating model parameters but is not well-suited to directly training/fine-tuning LLMs such as GPT-3.5, which would incur high computational costs. Conversely, the latter is efficient, but its performance relies solely on the few-shot learning ability of LLMs, which is unreliable for domain-specific tasks that require expert knowledge beyond standard pre-training data. In contrast, the proposed IMFL, which fully utilizes a few high-fidelity annotations from human annotators to guide the LLM annotator, can be regarded as synergizing the power of both fine-tuning and knowledge distillation from LLMs under sparse human supervision. Our experiments demonstrate that our approach can significantly reduce human annotation effort while achieving highly competitive performance under a limited budget for annotation and computational resources, enabling flexible and effective deployment in real-world applications.

## 6 Discussion and Limitation

We compare IMFL to single-fidelity annotations to evaluate the effectiveness of our proposed multi-fidelity paradigm. The extensive experimental results reveal that employing IMFL can significantly reduce the high cost of human annotation in domain-specific tasks. Furthermore, we demonstrate that IMFL efficiently uses sparse human supervision to improve GPT-3.5 annotations through prompt retrieval and in-context learning, ultimately leading to enhanced performance. Despite the promising performance, we note certain _limitations_ of our approach. First, the current IMFL framework assumes that the annotation budget is defined by the number of annotations, rather than reflecting the true cost, which typically involves multiple complex factors (e.g., administrative cost, training cost of human annotators, time, etc.) in real-world scenarios. Second, IMFL's performance is limited by the size of the unannotated dataset and the diversity of examples present in the dataset, as IMFL only seeks to improve performance by annotating existing samples rather than creating new samples. Lastly, limited by budgets and the capacity of the LM to be fine-tuned, IMFL does not achieve state-of-the-art performance on some general NLP tasks, where directly adopting the latest LLMs remains a better choice.
Nevertheless, we anticipate that the performance of IMFL will continue to improve as stronger LLM annotators, such as GPT-4, are incorporated to further increase annotation accuracy. We leave this as future work.
2310.00283
Active Learning Based Fine-Tuning Framework for Speech Emotion Recognition
Speech emotion recognition (SER) has drawn increasing attention for its applications in human-machine interaction. However, existing SER methods ignore the information gap between the pre-training speech recognition task and the downstream SER task, leading to sub-optimal performance. Moreover, they require much time to fine-tune on each specific speech dataset, restricting their effectiveness in real-world scenes with large-scale noisy data. To address these issues, we propose an active learning (AL) based Fine-Tuning framework for SER that leverages task adaptation pre-training (TAPT) and AL methods to enhance performance and efficiency. Specifically, we first use TAPT to minimize the information gap between the pre-training and the downstream task. Then, AL methods are used to iteratively select a subset of the most informative and diverse samples for fine-tuning, reducing time consumption. Experiments demonstrate that using only 20\%pt. samples improves 8.45\%pt. accuracy and reduces 79\%pt. time consumption.
Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura
2023-09-30T07:23:29Z
http://arxiv.org/abs/2310.00283v1
# After: Active Learning Based Fine-Tuning Framework for Speech Emotion Recognition

###### Abstract

Speech emotion recognition (SER) has drawn increasing attention for its applications in human-machine interaction. However, existing SER methods ignore the information gap between the pre-training speech recognition task and the downstream SER task, leading to sub-optimal performance. Moreover, they require much time to fine-tune on each specific speech dataset, restricting their effectiveness in real-world scenes with large-scale noisy data. To address these issues, we propose an active learning (AL) based Fine-Tuning framework for SER that leverages task adaptation pre-training (TAPT) and AL methods to enhance performance and efficiency. Specifically, we first use TAPT to minimize the information gap between the pre-training and the downstream task. Then, AL methods are used to iteratively select a subset of the most informative and diverse samples for fine-tuning, reducing time consumption. Experiments demonstrate that using only 20%pt. samples improves 8.45%pt. accuracy and reduces 79%pt. time consumption.

Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura. Tokyo Institute of Technology.

Speech Emotion Recognition, Large-scale Pre-trained Model, Fine-Tuning, Active Learning

## 1 Introduction

_The language of tones is the oldest and most universal of all our means of communication_ [1]. Speech emotion recognition (SER) aims to identify emotional states conveyed in vocal expressions and is an essential topic in tone analysis. It has attracted much attention in both the industrial and academic communities, with applications such as medical surveillance systems [2], psychological treatment [3, 4], and intelligent virtual voice assistants [5]. Existing SER methods are broadly classified into classic machine-learning-based methods and deep-learning-based methods [6]. The former [7, 8, 9] typically consist of three main components: feature extraction, feature selection, and emotion recognition. However, selecting and designing features for specific corpora is time-consuming [10], and such methods often generalize poorly to unseen datasets [11]. Deep-learning-based methods can address these issues by automatically extracting more abstract features to improve generalization [12, 13, 14], benefiting from various neural network architectures such as convolutional neural networks (CNNs) [15] and Transformers [16]. With the development of pre-trained language models [17] and the availability of large-scale datasets, various pre-trained automatic speech recognition (ASR) models, such as wav2vec 2.0 [18], HuBERT [19], and Data2vec [20], have been proposed. These ASR models use the acoustic and linguistic properties of speech to provide more robust and context-aware representations of speech signals. Xia et al. [21] showed that fine-tuning wav2vec 2.0 [22] on SER datasets obtains state-of-the-art (SOTA) performance on IEMOCAP [23]. This finding inspired researchers to explore new fine-tuning strategies for ASR models, which has become a new paradigm for SER. For example, Ren et al. [24] proposed a self-distillation SER model for fine-tuning wav2vec 2.0 and obtained SOTA performance on the DEMoS dataset [25]. Alef et al. [26] fine-tuned wav2vec 2.0 by jointly optimizing the SER and ASR tasks and achieved SOTA performance on Portuguese datasets. Although the above methods have achieved great success, several issues still need to be solved.
1) Current methods seldom consider the information gap between the pre-trained ASR task and the downstream SER task. For example, wav2vec 2.0 [18] adopts a masked learning objective to predict missing frames from the remaining context, while the downstream SER task [15, 27] aims to minimize the cross-entropy loss between predicted and reference emotion labels for speech signals. Gururangan et al. [28] showed that this information gap decreases the performance of downstream tasks. To address it, Pseudo-TAPT [29] first uses K-means to obtain pseudo-labels of speech signals and then uses supervised TAPT [28] for continual pre-training. However, K-means is sensitive to the initial value, making Pseudo-TAPT unstable and computationally expensive. 2) Current methods only fine-tune and validate performance on a single specific speech dataset. For example, Xia et al. [21] used only IEMOCAP, leading to over-fitting and poor generalization to unseen datasets. Real-world scenes contain much heterogeneous and noisy data, which hinders the application of these SER methods. 3) Pre-trained ASR models often contain hundreds of millions of parameters (e.g., wav2vec 2.0 contains 317 million parameters), which makes fine-tuning time-consuming on real-world, large-scale datasets. To alleviate the above issues, we propose an active learning-based fine-tuning framework for SER (After), which can be easily applied to noisy and heterogeneous real-world scenes. Specifically, we first use an unsupervised task adaptation pre-training (TAPT) method [28] to reduce the information gap between the pre-training and the downstream SER task, so that the pre-trained model can capture the semantic information of the SER task. Then, we create a large-scale heterogeneous and noisy dataset to simulate real-world scenes. Furthermore, we propose AL strategies with clustering-based initialization to iteratively select a smaller, more informative, and diverse subset of samples for fine-tuning, which can efficiently eliminate noise and outliers, improve generalization, and reduce time consumption. Experimental results demonstrate the effectiveness and better generalization of After in noisy real-world scenes. Specifically, by fine-tuning on only 20%pt. of the labeled samples, After improves the unweighted accuracy by 8.45%pt. compared to SOTA methods and reduces time consumption by 79%pt. compared to the fastest baseline.

## 2 Methodology

The overall framework is shown in Figure 1. After contains three main components: a _task adaptation pre-training_ module, an _active learning-based fine-tuning_ module, and an _emotion classification_ module. We first formally give the task definition of SER and subsequently introduce each component of After in detail.

### Task Formulation

Given a speech dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\), where \(\mathbf{x}_{i}\) represents the \(i\)-th speech signal and \(y_{i}\) represents its corresponding emotion label, we aim to fine-tune a pre-trained automatic speech recognition model \(\mathbf{M}\), such as wav2vec 2.0 [18], on the labeled speech dataset \(\mathcal{D}\) to obtain accurately predicted emotion labels for all speech signals.

### Task Adaptation Pre-training (TAPT)

Here we introduce our TAPT component in detail. To better leverage pre-trained prior knowledge for the benefit of downstream tasks, Gururangan et al.
[28] continued training the pre-trained RoBERTa model [30] on downstream datasets using the same loss as the pre-training task (reconstructing masked tokens [17]) and significantly improved performance on text classification. Inspired by their work, we add an extra step to After that continues training the wav2vec 2.0 speech recognition model on the downstream SER training dataset while keeping the wav2vec 2.0 loss function unchanged. By conducting this process, we bridge the information gap between the pre-trained ASR task and the target SER task, which our experiments in Section 3.2 preliminarily confirm is helpful. As shown in Figure 1 (a), the wav2vec 2.0 model \(\mathbf{M}(\mathbf{W}_{0})\), with pre-trained weights \(\mathbf{W}_{0}\), consists of three sub-modules: the feature encoder module, the transformer module, and the quantization module. Specifically, we use a CNN-based encoder to encode the \(i\)-th unlabeled input speech signal into a low-dimensional vector \(\mathbf{x}_{i}\). Then, we randomly mask 15%pt. of the features of the speech vectors (following BERT [17]) and decode them with two decoders to obtain quantized and context representations: the quantization decoder decodes the continuous speech vector \(\mathbf{x}_{i}\) into discrete codewords \(\mathbf{z}_{i}^{q}\) from phoneme codebooks1, and the wav2vec 2.0 decoder (transformer layers) uses self-attention to decode \(\mathbf{x}_{i}\) into context-aware representations \(\mathbf{z}_{i}^{c}\). We then use a contrastive loss [18] (cl) to minimize the differences between the quantized and context representations:

Footnote 1: A quantized codebook refers to a set of predetermined values or codewords used to represent a continuous signal in a discrete form [18].

\[\mathcal{L}_{cl}=-\sum_{i=1}^{n}\text{log}\frac{\text{exp}(\text{sim}(\mathbf{z}_{i}^{c},\mathbf{z}_{i}^{q})/\kappa)}{\sum_{j=1}^{n}\text{exp}(\text{sim}(\mathbf{z}_{i}^{c},\mathbf{z}_{j}^{q})/\kappa)}, \tag{1}\]

where the temperature hyperparameter \(\kappa\) is set to 0.1, and \(\text{sim}(\mathbf{a},\mathbf{b})=\mathbf{a}^{T}\mathbf{b}/\|\mathbf{a}\|_{2}\|\mathbf{b}\|_{2}\), with \(T\) representing the transposition of a vector. Eq. (1) helps to obtain better quantized and context representations because the two decoders provide highly heterogeneous contexts for each speech signal [31]. To minimize the information gap between the pre-trained model and the downstream SER task, following BERT [17], we first randomly mask 15%pt. of the tokens of each speech signal and then use a reconstruction loss on the corrupted downstream SER dataset, generating tokens to reconstruct the original data:

\[\mathcal{L}_{rl}=-\frac{1}{|N_{m}|}\sum_{i\in N_{m}}s_{i}^{\text{true}}\,\text{log}(s_{i}^{\text{predicted}}), \tag{2}\]

where \(N_{m}\) is the set of masked tokens, and \(s_{i}^{\text{true}}\) and \(s_{i}^{\text{predicted}}\) are the ground-truth and predicted token probabilities of the \(i\)-th masked token. Finally, we combine the contrastive loss and the reconstruction loss for the TAPT process:

\[\mathcal{L}_{TAPT}=\mathcal{L}_{cl}+\mathcal{L}_{rl}. \tag{3}\]
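For illustration, here is a minimal PyTorch sketch of the contrastive loss in Eq. (1); the tensor shapes and normalization details are assumptions for exposition, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_c, z_q, kappa=0.1):
    """Eq. (1): align context features z_c with their quantized targets z_q.

    z_c, z_q: (n, d) tensors; z_c[i] should match z_q[i] against all other
    quantized vectors, using cosine similarity scaled by temperature kappa.
    """
    z_c = F.normalize(z_c, dim=-1)
    z_q = F.normalize(z_q, dim=-1)
    logits = (z_c @ z_q.T) / kappa                      # (n, n) similarity logits
    targets = torch.arange(z_c.size(0), device=z_c.device)  # positives: diagonal
    return F.cross_entropy(logits, targets)             # -log softmax of positives

# Example with random features:
loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```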
Although Pseudo-TAPT [29] also adopts TAPT, it spends much time using K-means to extract frame-level emotional pseudo labels and continually pre-trains the model in a supervised manner. However, K-means is sensitive to the initial value and to outliers [32], making Pseudo-TAPT unstable and computationally expensive.

### Active Learning (AL) based Fine-tuning

After the TAPT process, we obtain the model \(\mathbf{M}_{\text{TAPT}}(\mathbf{W}_{0}^{\prime})\) with \(\mathbf{W}_{0}^{\prime}\) as the weight initialization for the AL process (cf. Line 1 of Algorithm 1). A typical AL setup starts by treating \(\mathcal{D}\) as a pool of unlabeled data \(\mathcal{D}_{\text{pool}}\) and performs \(\tau\) iterations of sample selection. Specifically, in the \(i\)-th iteration, \(k\) samples \(Q_{i}=\{\mathbf{x}_{1},\cdots,\mathbf{x}_{k}\}\) are selected using a given acquisition function \(\text{ac}()\). For example, we adopt Entropy [33] as \(\text{ac}()\) to measure the uncertainty of the samples and select the \(k\) most uncertain ones. These selected samples are then labeled and added to the \(i\)-th training dataset \(\mathcal{D}_{\text{train}}^{i}\), with which a model is fine-tuned for SER. One primary goal of After is to explore whether AL strategies can reduce the number of annotated samples, as labeling large-scale datasets is the most laborious part of SER. Thus, for simplicity, we adopt five of the most well-known and influential AL strategies for evaluation: Entropy [33], Least Confidence [34], Margin Confidence [34], ALPs [35], and BatchBald [36]. These methods use different criteria to select the most uncertain and informative samples from \(\mathcal{D}_{\text{pool}}\). For example, we can adopt entropy to measure the uncertainty of \(\mathbf{x}_{i}\) as

\[\text{Entropy}(\mathbf{x}_{i})=-\sum_{j=1}^{c}P(y_{j}|\mathbf{x}_{i})\text{log}P(y_{j}|\mathbf{x}_{i}), \tag{4}\]

where \(c\) is the number of emotion classes and \(P(y_{j}|\mathbf{x}_{i})\) represents the predicted probability that \(\mathbf{x}_{i}\) expresses the \(j\)-th emotion. We then select the \(k\) most uncertain samples for annotation and add them to the training dataset \(\mathcal{D}_{\text{train}}\). Traditional AL methods use random initialization, whereas we found that AL methods are sensitive to the initial samples and, with a bad initialization, easily select redundant samples or outliers in each AL iteration. Thus, instead of directly applying existing AL methods, we propose a clustering-based initialization for all AL methods (we use K-means in this study) and obtain better performance (details about K-means are given in Sec. 3.2). Please note that, as shown in Algorithm 1, the clustering-based initialization is only applied once in the initialization process; the subsequent AL loop does not need a K-means step.

``` Input : Unlabeled data \(\mathcal{D}_{\text{pool}}\), Model \(\mathbf{M}(\mathbf{W}_{0})\), Acquisition size \(k\), Iterations \(\tau\), total number of selected samples \(N_{s}\), and Acquisition function \(\text{ac}()\).
1\(\mathbf{M}_{\text{TAPT}}\)(\(\mathcal{D}_{\text{pool}}\); \(\mathbf{W}_{0}^{\prime}\)) \(\leftarrow\) Train \(\mathbf{M}(\mathbf{W}_{0})\) on \(\mathcal{D}_{\text{pool}}\); 2\(Q_{0}\) \(\leftarrow\) Clustering-based initialization from \(\mathcal{D}_{\text{pool}}\); 3\(\mathcal{D}_{\text{train}}^{0}\) \(\leftarrow\) \(Q_{0}\); \(\mathcal{D}_{\text{pool}}^{0}\) \(\leftarrow\) \(\mathcal{D}_{\text{pool}}\setminus Q_{0}\), where \(|Q_{0}|=1\%N_{s}\); 4\(\mathbf{M}_{0}([\mathbf{W}_{0}^{\prime},\mathbf{W}_{c}])\) \(\leftarrow\) Initialized from \(\mathbf{M}_{\text{TAPT}}\)(\(\mathcal{D}_{\text{pool}}\); \(\mathbf{W}_{0}^{\prime}\)); 5\(\mathbf{M}_{0}(\mathcal{D}_{\text{train}}^{0};[\mathbf{W}_{0}^{\prime},\mathbf{W}_{c}])\) \(\leftarrow\) Train \(\mathbf{M}_{0}([\mathbf{W}_{0}^{\prime},\mathbf{W}_{c}])\) on \(\mathcal{D}_{\text{train}}^{0}\); 6for\(i\gets 1\) to \(\tau\)do 7\(Q_{i}\) \(\leftarrow\) \(\text{ac}(\mathbf{M}_{i-1},\mathcal{D}_{\text{pool}}^{i-1},k)\) \(\triangleright\) Annotate \(k\) samples; 8\(\mathcal{D}_{\text{train}}^{i}\) \(\leftarrow\) \(\mathcal{D}_{\text{train}}^{i-1}\cup Q_{i}\) \(\triangleright\) Add labeled samples to \(\mathcal{D}_{\text{train}}^{i}\); 9\(\mathcal{D}_{\text{pool}}^{i}\) \(\leftarrow\) \(\mathcal{D}_{\text{pool}}^{i-1}\setminus Q_{i}\) \(\triangleright\) Delete samples from \(\mathcal{D}_{\text{pool}}^{i}\); 10\(\mathbf{M}_{i}(\mathcal{D}_{\text{train}}^{i};[\mathbf{W}_{0}^{\prime},\mathbf{W}_{c}])\) \(\leftarrow\) Train \(\mathbf{M}_{i-1}\) on \(\mathcal{D}_{\text{train}}^{i}\); 11 end for ```

**Algorithm 1** Active Learning based Fine-tuning

### Emotion Recognition Classifier

As shown in Figure 1 (b), we add a task-specific classification layer with additional parameters \(\mathbf{W}_{c}\) for emotion recognition on top of wav2vec 2.0. We fine-tune the classification model \(\mathbf{M}_{i}([\mathbf{W}_{0}^{\prime},\mathbf{W}_{c}])\) in each AL iteration with all labeled samples in \(\mathcal{D}_{\text{train}}\) (cf. Lines 6-10 of Algorithm 1). We use the cross-entropy loss for the emotion recognition classifier:

\[\mathcal{L}_{ce}=-\frac{1}{k}\sum_{i=1}^{k}\sum_{j=1}^{c}y_{i}^{j}\log{(\hat{y}_{i}^{j})}, \tag{5}\]

where \(c\) is the number of emotion classes, \(k\) is the number of samples selected at the \(i\)-th iteration, \(\hat{y}_{i}^{j}\) is the predicted probability of the \(j\)-th class for the \(i\)-th sample, and \(y_{i}^{j}\) is the corresponding ground truth.

Figure 1: Model overview. We first pre-train an off-the-shelf wav2vec 2.0 model in the task adaptation pre-training manner. Then, we adopt an active learning method to iteratively select unlabeled samples for annotation. These labeled samples are used to fine-tune the wav2vec 2.0 model for speech emotion recognition.

## 3 Experiments and Discussions

### Experimental Settings

#### 3.1.1 Datasets

We first evaluate the performance of all baselines using the widely used benchmark dataset IEMOCAP [23]. IEMOCAP is a multimodal database commonly employed for evaluating SER performance. There are five conversation sessions in IEMOCAP, each with a female and a male actor performing in improvised and scripted scenarios. It consists of 10,039 speech utterances, with all audio signals sampled at 16 kHz with 16-bit resolution. To ensure a fair comparison with previous works, we merge the "excited" class into the "happy" class, resulting in four considered emotions: neutral, happy, angry, and sad. Following Chen et al. [29], we adopted a 5-fold cross-validation approach, where each IEMOCAP session was held out as the test set.
We randomly selected 10%pt. of the data from the remaining four sessions as our validation dataset and the rest as our training dataset. Most existing methods are inadequate for real-world applications and susceptible to noise due to their heavy reliance on fine-tuning models using specific small-scale datasets. To address this issue, we conducted additional experiments by creating a larger training dataset, merging various datasets from different sources to simulate the noisy environments encountered in real-world scenarios. As shown in Table 1, we manually controlled the number of instances for each of the four labels in the Merged dataset to maintain label balance. Please note that EMODB is a German dataset, which increases the heterogeneity and noise of the Merged dataset. To explore whether the Merged dataset can improve performance on a single dataset such as IEMOCAP, we also employed a 5-fold cross-validation approach: we held each IEMOCAP session out as the test set, randomly selected 10%pt. of the remaining Merged dataset as our validation dataset, and used the rest for training. Please note that we only employed the training data for both the TAPT and AL-based fine-tuning processes to prevent data leakage during evaluation. Furthermore, the training procedures were conducted from scratch separately for the IEMOCAP and Merged datasets.

#### 3.1.2 Baselines

We compared various algorithms, including SOTA SER baselines and widely used AL methods. For SER methods, we selected the best-performing recent approaches: GLAM [42], LSSED [43], RH-emo [44], Light [15], Pseudo-TAPT [29], and w2v2-L-r-12 [45]. In terms of AL methods, we opted for the most efficient ones for our framework: Entropy [33], Least Confidence [34], Margin Confidence [34], wav2vec 2.0 & clustering, ALPs [35], and BatchBald [36].

#### 3.1.3 Implementation details

All experiments used the same learning rate of \(10^{-4}\) with the Adam optimizer. Our implementation of wav2vec 2.0 was based on the Hugging Face framework. The window length of the audio is set to 20 ms. We fine-tuned the model in a few-shot manner, which involves longer fine-tuning, more evaluation steps during training, and early stopping within 20 epochs based on validation loss. To have a fair comparison with previous studies, we employed either off-the-shelf software packages or the code provided by the respective authors. Each model was executed ten times, and the average performance across these runs was taken as the final result. The choices of (hyper)parameters follow the defaults if provided and are tuned otherwise. Following He et al. [46], we evaluated the models using weighted accuracy (WA) and unweighted accuracy (UA) [47] in speaker-independent settings. Please note that we did not require the data to be labeled by actual annotators. Instead, we used the ground-truth labels available in the training dataset. Specifically, we masked the labels and only revealed them when the AL methods determined that the samples should be labeled, which is a common trick used by AL researchers to test their ideas [33]. However, it is worth mentioning that human annotators would be responsible for labelling the data in a real-world scenario.

### Active Learning Strategies Selection for After

As shown in Figure 1 (c), After incorporates an AL strategy for sample selection. To identify the most suitable AL method for After, we combined it with multiple well-known AL methods and evaluated their performance.
Furthermore, we found that AL methods are sensitive to initialization, with most AL methods randomly selecting 1%pt. of the samples for initialization [48]. Unlike them, we propose a novel clustering-based (K-means) initialization method to improve the performance of SER. Specifically, we first extract sample representations of the training data from the wav2vec 2.0 CNN-based encoder. Then, we employed K-means on the training data and selected the 1%pt. of samples closest to the cluster centres as our initialized samples. Please note that we used the elbow method [49] to automatically determine the number of clusters for K-means, and we used the Euclidean distance to measure the distance between sample representations.

\begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{_Datasets_} & \multicolumn{2}{c}{_Characteristics_} \\ \cline{2-3} & Number of Samples & Ratio of Four Labels \\ \hline IEMOCAP [23] (English) & 10,038 & 2.5 : 1.2 : 2.4 : 1.0 \\ EMODB [37] (German) & 408 & 3.1 : 1.3 : 1.0 : 1.1 \\ SHEMO [38] (English) & 2,737 & 5.3 : 5.1 : 2.2 : 1.0 \\ RAVDESS [39] (English) & 672 & 2.0 : 1.0 : 2.0 : 2.0 \\ EMov-DB [40] (English) & 3,038 & 1.4 : 1.0 : 0.0 : 0.0 \\ CREMA-D [41] (English) & 4,900 & 1.0 : 1.7 : 1.0 : 1.0 \\ \hline **Merged Dataset** & 21,793 & 1.5 : 1.4 : 1.0 : 1.5 \\ \hline \hline \end{tabular} \end{table} Table 1: Descriptive statistics of the Merged dataset. Ratio of four labels is in the order of Anger : Neutral : Sad : Happy.

Figure 2 demonstrates that the clustering-based initialization outperforms random initialization for all AL methods. The initial set of samples strongly influences the selection order of samples in each AL iteration, and an effective initialization can significantly enhance the performance and stability of AL methods. Figure 2 also illustrates that _Entropy+Clustering_ is the most effective AL strategy for After on the Merged dataset. Although we only show the diagram for UA due to space constraints, the diagram for WA is similar. **Therefore, _Entropy+Clustering_ is selected as the primary AL method for After**, and we recommend it as the simplest yet most efficient strategy for real-world applications.

We analyzed the relationship between the ratio of labeled samples, performance, and time consumption of After. The results in Table 2 show that both the performance and the time consumption of After increase as the ratio of labeled samples increases. **Our findings indicate that using 20%pt. labeled samples yields a significant improvement in performance while reducing time consumption by 79%pt. compared to fine-tuning on 100%pt. of the samples.** Thus, we selected 20%pt. labeled samples as a trade-off between performance and time consumption for subsequent experiments.

### Comparison with Best-performing Baselines

Table 3 displays the primary results of After and the baselines on the two datasets in terms of unweighted and weighted accuracy. After outperforms all baselines with only 20%pt. labeled samples for fine-tuning. Specifically, After improves UA and WA by **2.38%pt. and 0.36%pt.**, respectively, compared to the SOTA baselines (UA of Pseudo-TAPT and WA of GLAM) on the IEMOCAP dataset, and improves UA and WA by **8.45%pt. and 4.12%pt.**, respectively, compared to the SOTA baselines (UA of GLAM and WA of Light) on the Merged dataset. LSSED [43] and RH-emo [44] achieved good results on IEMOCAP but performed poorly on the Merged dataset.
This discrepancy may be attributed to their limited denoising and domain-transfer capabilities, preventing them from effectively handling the noise in the Merged dataset. On the other hand, GLAM [42] and Light [15] employed multi-scale feature representations and deep convolution blocks to capture high-level global data features, which is beneficial for filtering out noisy low-level features and enhances performance on both datasets. Pseudo-TAPT [29] improved model robustness by using K-means to capture higher-level frame emotion labels as pseudo labels for supervised TAPT. Although these baselines can eliminate dataset noise to a certain extent, they exhibit high time complexity during fine-tuning on large-scale datasets, and they do not effectively bridge the gap between pre-training and the downstream SER task. In contrast, After uses unsupervised TAPT to mitigate the information gap between the source (ASR) domain and the target (SER) domain. Additionally, After selects a subset of the most informative and diverse samples for iterative fine-tuning, which has three advantages. Firstly, it reduces the labour required to manually label large-scale SER samples. Secondly, by utilizing a smaller labeled dataset, After significantly reduces the overall time consumption (Figure 3), making it practical and feasible for real-world applications. Finally, the iterative fine-tuning process employed by After improves performance and stability by eliminating noise and outliers present in the selected samples, leading to enhanced overall model performance in SER tasks.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{_Metric_} & \multicolumn{6}{c}{**AFTER (TAPT + AL-based FT)**} \\ \cline{2-7} & 10\% & 20\% & 40\% & 60\% & 80\% & 100\% \\ \hline \hline **UA** & 71.45 & **77.41** & 78.64 & **79.32** & 79.26 & 79.15 \\ **WA** & 69.01 & **74.32** & 75.48 & **76.03** & 75.92 & 75.94 \\ **Time** (mins) & 262.8 & **316.4** & 785.4 & **942.2** & 1182.6 & 1508.2 \\ \hline \hline \end{tabular} \end{table} Table 2: After with Entropy [33] to select 10%pt.\(\sim\)100%pt. labeled samples of the Merged dataset for fine-tuning.

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{_Methods_} & \multicolumn{2}{c}{**IEMOCAP**} & \multicolumn{2}{c}{**Merged Dataset**} \\ \cline{2-5} & UA \(\uparrow\) & WA \(\uparrow\) & UA \(\uparrow\) & WA \(\uparrow\) \\ \hline \hline GLAM [2022] & 74.01 & 72.98 & 71.38 & 69.28 \\ LSSED [2021] & 73.09 & 68.35 & 25.00 & 36.20 \\ RH-emo [2022] & 68.26 & 67.35 & 43.20 & 42.80 \\ Light [2022] & 70.76 & 70.23 & 69.28 & 71.38 \\ Pseudo-TAPT [2022] & 74.30 & 70.26 & 71.25 & 68.83 \\ w2v2-L-r-12 [2023] & 74.28 & 70.23 & 71.22 & 68.77 \\ \hline **After** & **76.07\({}^{\dagger}\)** & **73.24\({}^{\dagger}\)** & **77.41\({}^{\dagger}\)** & **74.32\({}^{\dagger}\)** \\ \hline \hline \end{tabular} \end{table} Table 3: Overall performance comparison. After adopts Entropy+Clustering and selects 20%pt. samples for fine-tuning. The symbol \(\dagger\) indicates that After significantly surpasses all baselines with \(p<0.05\) according to a t-test.

Figure 2: Ratio of labeled samples vs. Unweighted Accuracy.

### Ablation Study for After

We performed an additional ablation study to evaluate the efficacy of After, as shown in Table 4. Specifically, we conducted fine-tuning (FT) and TAPT+FT with random sample selection and AL-based (Entropy) sample selection at varying ratios of labeled samples, ranging from 10%pt. to 100%pt.; a code sketch of the Entropy acquisition used here follows.
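The sketch below illustrates the Entropy acquisition of Eq. (4) together with a simplified clustering-based initialization; the scikit-learn usage and array names are illustrative assumptions (the paper selects the cluster count via the elbow method), not the released implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_init(embeddings, n_init_samples):
    """Clustering-based initialization: samples closest to k-means centres."""
    km = KMeans(n_clusters=n_init_samples, n_init=10).fit(embeddings)
    return np.unique(km.transform(embeddings).argmin(axis=0))[:n_init_samples]

def entropy_acquire(probs, k):
    """Eq. (4): select the k samples with the highest predictive entropy."""
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-ent)[:k]

# Example: 1%pt. initialization of a 2,000-sample pool, then one entropy query.
rng = np.random.default_rng(0)
emb = rng.normal(size=(2000, 64))
init = cluster_init(emb, n_init_samples=20)
probs = rng.dirichlet(np.ones(4), size=2000)   # mock 4-class probabilities
queried = entropy_acquire(probs, k=100)
```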
From Table 4, we have four interesting observations. (1) Fine-tuning with active learning significantly improves performance compared with random sampling (FT+Entropy vs FT+Random), regardless of the number of labeled samples. This result demonstrates that the AL-based fine-tuning strategy efficiently eliminates noise and outliers and selects the most informative and diverse samples for fine-tuning. (2) TAPT+FT outperforms FT with both random sampling and Entropy sampling, indicating that TAPT can effectively minimize the domain difference and significantly enhance performance on the downstream SER task. (3) With the same number of labeled samples, After obtains better results than TAPT+FT+Random on the Merged dataset. However, After with 20%pt. labeled samples performs worse than TAPT+FT+Random with 80%pt.\(\sim\)100%pt. labeled samples. The reason is that TAPT+FT uses more labeled data for fine-tuning, which prevents the model from overfitting and improves its robustness. In a fair comparison with the same amount of training data for fine-tuning, TAPT+FT+Random with 20%pt. labeled samples performs worse than After (20%pt.), demonstrating the effectiveness of After. (4) When 100%pt. of the samples are used, the AL-based method still significantly outperforms the random sampling method (FT+Random vs FT+Entropy). The main reason is that random sampling is affected by noisy data, so the model constantly corrects its classification boundary, making it difficult to improve the results. Entropy sampling avoids the effect of noisy data by selecting the most informative and diverse samples for FT in advance, fixing the classification boundary properly.

### Time Consumption Comparison

Figure 3 (A) demonstrates that FT+AL with 20%pt. labeled samples significantly reduces the time consumption of FT (fine-tuning on all labeled samples). Compared to TAPT+FT, TAPT+FT+AL significantly decreases the time consumption, with the main cost incurred by TAPT. Additionally, the relationship between time consumption and the ratio of labeled samples is shown in Figure 3 (B). AL-based fine-tuning exhibits a linear increase in time consumption with sample size from 1%pt.\(\sim\)20%pt. (exponential growth from 30%pt.\(\sim\)100%pt. in Table 2), indicating the efficiency of After and its potential to be applied in large-scale unlabeled real-world scenes.

## 4 Conclusion and Future Work

In this work, we investigated unsupervised TAPT and an AL-based fine-tuning strategy for improving the performance of SER. To extend SER to real-world applications, we constructed a large-scale noisy and heterogeneous dataset, and we used TAPT to minimize the information gap between the pre-trained model and the target SER task. Experimental results indicate that After can dramatically improve performance and reduce time consumption. In the future, we plan to design more domain-adaptation AL methods for the SER task.

## Acknowledgement

We would like to thank all the reviewers for their valuable suggestions, which helped us improve the quality of our manuscript. Dongyuan Li acknowledges the support from the China Scholarship Council (CSC).
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{_Methods_} & \multicolumn{3}{c}{**Random Sampling**} & \multicolumn{3}{c}{**Entropy Sampling**} \\ \cline{2-9} & \multicolumn{2}{c}{**FT**} & \multicolumn{2}{c}{**TAPT+FT**} & \multicolumn{2}{c}{**FT**} & \multicolumn{2}{c}{**After**} \\ \cline{2-9} & UA \(\uparrow\) & WA \(\uparrow\) & UA \(\uparrow\) & WA \(\uparrow\) & UA \(\uparrow\) & WA \(\uparrow\) & UA \(\uparrow\) & WA \(\uparrow\) \\ \hline 10\%pt. & 50.82 & 48.96 & 70.21 & 68.85 & 68.21 & 66.32 & 71.45 & 69.01 \\ 20\%pt. & 51.37 & 49.92 & 73.82 & 71.33 & 71.07 & 68.21 & **77.41** & **74.32** \\ 30\%pt. & 52.37 & 50.18 & 74.49 & 71.89 & 72.35 & 69.21 & 78.20 & 75.16 \\ 40\%pt. & 55.68 & 52.21 & 76.01 & 72.28 & 73.55 & 70.18 & 78.64 & 75.48 \\ 60\%pt. & 60.39 & 59.32 & 77.21 & 74.58 & 74.28 & 71.35 & **79.32** & **76.03** \\ 80\%pt. & 58.34 & 56.72 & 78.88 & 75.82 & 73.52 & 70.39 & 79.26 & 75.92 \\ 100\%pt. & 57.21 & 54.12 & 78.21 & 75.36 & 73.89 & 70.89 & 79.15 & 75.94 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study on the Merged dataset. FT means fine-tuning, and TAPT+FT first adopted TAPT and then fine-tuned with corresponding selected labeled samples. After adopts Entropy to select samples for fine-tuning. Figure 3: (A) Time Consumption Comparison and (B) The relationship between ratio of labeled samples and time consumption.
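To make the sample-selection step concrete, below is a minimal sketch of entropy-based selection combined with clustering for diversity, in the spirit of the Entropy+Clustering strategy reported in Tables 3 and 4. All names here (`select_samples`, `probs`, `embeddings`, the cluster count) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: pick the most informative (high-entropy) samples per cluster so the
# selection stays both informative and diverse across the unlabeled pool.
import numpy as np
from sklearn.cluster import KMeans

def select_samples(probs, embeddings, budget, n_clusters=10, eps=1e-12):
    # Predictive entropy of the classifier's softmax outputs (N x C).
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # Cluster the sample embeddings (N x D) to enforce diversity.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    per_cluster = max(budget // n_clusters, 1)
    selected = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        # Within each cluster, keep the highest-entropy samples.
        top = idx[np.argsort(entropy[idx])[::-1][:per_cluster]]
        selected.extend(top.tolist())
    return selected[:budget]

# Example: select 20% of 1000 samples for annotation and fine-tuning.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=1000)      # stand-in softmax outputs
embeddings = rng.normal(size=(1000, 768))          # stand-in encoder features
chosen = select_samples(probs, embeddings, budget=200)
```

In an iterative fine-tuning loop, the selected indices would be annotated, added to the training pool, and the model re-tuned before the next selection round.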
2309.03831
Uncovering Drift in Textual Data: An Unsupervised Method for Detecting and Mitigating Drift in Machine Learning Models
Drift in machine learning refers to the phenomenon where the statistical properties of data or context, in which the model operates, change over time, leading to a decrease in its performance. Therefore, maintaining a constant monitoring process for machine learning model performance is crucial in order to proactively prevent any potential performance regression. However, supervised drift detection methods require human annotation and consequently lead to a longer time to detect and mitigate the drift. In our proposed unsupervised drift detection method, we follow a two-step process. Our first step involves encoding a sample of production data as the target distribution, and the model training data as the reference distribution. In the second step, we employ a kernel-based statistical test that utilizes the maximum mean discrepancy (MMD) distance metric to compare the reference and target distributions and estimate any potential drift. Our method also identifies the subset of production data that is the root cause of the drift. The models retrained using these identified high drift samples show improved performance on online customer experience quality metrics.
Saeed Khaki, Akhouri Abhinav Aditya, Zohar Karnin, Lan Ma, Olivia Pan, Samarth Marudheri Chandrashekar
2023-09-07T16:45:42Z
http://arxiv.org/abs/2309.03831v1
# Uncovering Drift in Textual Data: An Unsupervised Method for Detecting and Mitigating Drift in Machine Learning Models ###### Abstract Drift in machine learning refers to the phenomenon where the statistical properties of data or context, in which the model operates, change over time, leading to a decrease in its performance. Therefore, maintaining a constant monitoring process for machine learning model performance is crucial in order to proactively prevent any potential performance regression. However, supervised drift detection methods require human annotation and consequently lead to a longer time to detect and mitigate the drift. In our proposed unsupervised drift detection method, we follow a two-step process. Our first step involves encoding a sample of production data as the target distribution, and the model training data as the reference distribution. In the second step, we employ a kernel-based statistical test that utilizes the maximum mean discrepancy (MMD) distance metric to compare the reference and target distributions and estimate any potential drift. Our method also identifies the subset of production data that is the root cause of the drift. The models retrained using these identified high drift samples show improved performance on online customer experience quality metrics. Drift detection, unsupervised method, machine learning, MMD distance metric, performance improvement. ## 1 Introduction In the fast-paced world of big data, where the amount of information being generated is constantly growing, efficient data analytics and machine learning techniques are essential to help us make informed decisions. However, with the rapid emergence of new products, markets, and customer behaviors, a new challenge arises - the problem of drift in data. This occurs when the statistical properties of the data being used change over time in unforeseen ways. If left unchecked, data drift can render past data irrelevant, leading to poor decision outcomes. As a result, drift has become a significant obstacle for many data-driven and machine learning systems. In this dynamic and ever-evolving environment, finding ways to provide reliable and accurate data-driven predictions and decision-making capabilities is crucial [Lu et al., 2018, Gemaque et al., 2020]. Drift detection is the process of monitoring the performance of a machine learning model over time and detecting when the model's behavior deviates from its original distribution. There are two general methods for drift detection: supervised and unsupervised [Gemaque et al., 2020]. Supervised drift detection involves using labeled data to detect changes in the model's performance. This method assumes that there is a labeled dataset available that represents the original distribution of the data. The labeled data is used to train the model, and the performance of the model is measured on new labeled data as it becomes available. If the performance of the model deviates significantly from its original performance, the model is flagged for drift (Lu et al., 2018; Maciel et al., 2015; Goncalves Jr et al., 2014; Iwashita and Papa, 2018). On the other hand, unsupervised drift detection involves monitoring the model's behavior without the use of labeled data. This method assumes that there is no labeled dataset available that represents the original distribution of the data. Instead, the model's behavior is monitored using statistical techniques such as change point detection, anomaly detection, or distance measures between two distributions.
These techniques compare the model's behavior over time and look for significant deviations that could indicate drift (Friedrich et al., 2023; de Mello et al., 2019; Gemaque et al., 2020). Despite supervised drift detection being accurate, it is not often used in practice compared to unsupervised drift detection methods because if labeled data is not available, the method cannot be used. Additionally, labeled data can be expensive and time-consuming to obtain, which can make supervised drift detection impractical in some situations (Gemaque et al., 2020). In the literature, many studies have used these approaches to detect drift. Harel et al. (2014) proposed a supervised method for identifying drift in data streams through the examination of empirical loss of learning algorithms. This approach involves extracting statistics from the distribution of loss by using resampling to repeatedly reuse the same data. Maciel et al. (2015) proposed a new supervised ensemble classifier called drift detection ensemble (DDE), which is designed to improve the effectiveness of three different concept drift detectors. DDE combines the warnings and drift detections from these three detectors using various strategies and configurations to achieve better performance than the individual methods alone. Li et al. (2019) proposed an unsupervised approach for detecting anomalies in multidimensional sequences in data streams that are susceptible to concept drift, by utilizing a feature selection algorithm based on mutual information and symmetric uncertainty information. The proposed approach, called FAAD, consists of three different algorithms and aims to analyze the ordering relationship in sequences to detect anomalies. Costa et al. (2018) used an unsupervised drift detection approach, DDAL, that utilizes active learning to select the most significant instances for monitoring concept drift based on density variation. The proposed method involves two phases, where the first phase generates a classifier using reference data and the second phase involves drift detection, reaction, and classification for new batches of data based on the occurrence of concept drift. Lughofer et al. (2016) introduces two techniques to handle concept drift in scenarios with few labeled instances or no labeled instances, one based on active learning filters and the other on analyzing classifier output certainty distribution. Haque et al. (2016) proposed a semi-supervised framework for detecting drift and novel classes in data streams using an ensemble of kNN classifiers, with outlier detection and concept change detection modules, and incorporating data self-annotation. In this paper, we present a novel unsupervised approach for detecting drift in unstructured text data used in machine learning models, without requiring human annotation. Our proposed method offers several key novelties which are as follows: 1. Our proposed algorithm is a highly versatile, unsupervised drift detection method that can be applied to any machine learning model for both performance regression detection and mitigation in unstructured text data without the need for human input. 2. Our method includes a mitigation strategy for addressing model performance regression. It offers a fast and reliable solution for improving model performance when high levels of drift are present in production data. 3. We demonstrate the effectiveness of our approach in detecting and mitigating performance regression in a real-world application. 
By leveraging our novel method, we were able to achieve improved model performance, highlighting the practical value of our approach. ## 2 Methodology In this paper, we introduce an unsupervised drift detection method for unstructured data, specifically text data utilized as input for machine learning models. Our proposed method involves the encoding of unstructured text data into vector representation, followed by a comparison of the encoded data to identify potential drift. To achieve this, we use a maximum mean discrepancy (MMD) distance (Gretton et al., 2012) in a kernel-based statistical test, leveraging bootstrap to provide statistics that characterize the drift, such as the median and mean. Additionally, our approach identifies the subset of data likely to be the root cause of the drift. Subsequently, this subset of data can be used in the retraining of models to minimize performance regression. Specifically, we show how our proposed drift detection method can be used for detecting drift in the inference data for a machine learning model that is in production. Leveraging our proposed method, we can systematically compare the model development data, including training and validation data, with the production data to identify potential drift. Although we demonstrate the proposed method for text data in this paper, it can be used for other data modalities, such as image and voice data. Our method compares the input data of a machine learning model trained on \((X_{tr},Y_{tr})\) with that of \(X_{prod}\), a subset of production data. However, since both the training and production data can be large, we divide them into mini-batches and compare those mini-batches to identify potential drift. To mitigate sampling bias, we first shuffle both the training and production data before dividing them into mini-batches. Given that our proposed method is designed to detect drift in text data, we leverage an encoder such as BERT (Devlin et al., 2018) to obtain fixed-sized embeddings of the text data. Specifically, we compute the average embedding for a mini-batch as its overall vector representation. Subsequently, we use these embeddings as input to our proposed drift detection algorithm, which is presented in Algorithm 1. We used the maximum mean discrepancy (MMD) test (Gretton et al., 2012) in our proposed method to compare the training (reference) and production (target) distributions. MMD is a distribution-free approach that is particularly well-suited for high-dimensional spaces. It measures the difference between two probability distributions by evaluating the distance between their means in a reproducing kernel Hilbert space (RKHS). MMD's strength lies in its ability to capture complex, nonlinear relationships between variables and its ability to handle high-dimensional data. Therefore, we selected MMD as it can provide an accurate representation of the dissimilarity between the reference and target distributions. The proposed method utilizes bootstrap to estimate the drift distribution by comparing the training and production distributions. Initially, both distributions are combined under the null hypothesis of no drift. Subsequently, a bootstrapped distribution is obtained by randomly sampling with replacement. Next, the method calculates the drift between the first and second halves of the bootstrapped samples. If the null hypothesis holds, and there is no difference between the distributions, then the amount of drift (measured by MMD) in the bootstrapped distributions should be minimal.
Otherwise, a significant drift is observed. Furthermore, the likelihood of the drift can be estimated using bootstrap. To mitigate the performance regression caused by drift, we propose a process in which we examine the estimated drift between mini-batches from our proposed algorithm. This enables us to identify the samples that exhibit the greatest deviation from the production data, thereby isolating the most significant sources of drift. These samples can then be reintroduced into the training data, allowing the model to be refreshed and better equipped to handle future changes in the data distribution. This approach not only improves the model's overall performance, but also ensures its continued relevance and accuracy in dynamic, real-world settings. ## 3 Experiment and Results This section showcases the effectiveness of our proposed drift detection methodology in three areas: (1) detecting model performance regression, (2) implementing mitigation strategies for model performance regression, and (3) text encoder effect ablation study. The overall approach that we present here can be applied to any domain or dataset. ### Model Performance Regression Detection To demonstrate a significant negative correlation between model performance metrics and estimated drift from our proposed drift detection method, we conducted an experiment based on a binary classification model, where the model is designed to detect if a text sentence is related to shopping in the categories such as searching, buying, and checking price for items. For this binary classification model, we use BERT (Devlin et al., 2018) encoder to obtain the embedding of a sentence and then the embedding goes through three feed forward dense layers for binary classification task. The model was trained on a dataset comprising approximately 800K annotated samples and evaluated on a separate test set of around 150K data points. After training and tuning the model on validation data, we evaluated its performance on test data. Then, we conducted an experiment to examine the relationship between its performance metrics and drift estimated by our proposed drift detection method. For this experiment, we collected data for about 3 years and divided it into monthly buckets. We then calculated the amount of drift between the model's development data and the data from each monthly bucket, and also computed the model's area under the ROC curve (AUC) and binary cross entropy (BCE) for each bucket as performance metrics. In our drift detection method, we used the BERT encoder for getting the embedding of text data (Devlin et al., 2018). The results of this experiment are presented in Figure 1, which clearly shows that as the amount of drift increases, the model's performance significantly decreases. \begin{table} \begin{tabular}{|c|c|} \hline Metric & correlation (\%) \\ \hline MMD vs BCE & 76.9 \\ \hline MMD vs AUC & -65.2 \\ \hline \end{tabular} \end{table} Table 1: Correlation coefficient between estimated drift (MMD) and model performance metrics. ``` 1:Result: Estimated Drift, Samples Causing Drift 2:Inputs: 3:\(D_{1}\) : Reference distribution (e.g. set of embeddings) 4:\(D_{2}\): Target distribution (e.g. 
set of embeddings) 5:Variable: 6:\(\beta\) : Number of samples to be compared 7:\(M\) : Total number of samples 8:\(K\) : Number of bootstraps 9:\(D_{1,j}\) : \(j\)th sample of reference distribution 10:\(D_{2,j}\) : \(j\)th sample of target distribution 11:\(R\) : List of MMD statistics 12:\(R_{b}\) : List of bootstrapped MMD statistics 13:\(X\) : List of median MMD statistics 14:\(t\) : Index 15:for\(t=\beta:M\)do 16:\(Q_{1}=\{D_{1,1},D_{1,2},...,D_{1,t}\}\)\(\triangleright\) batch of reference distribution 17:\(Q_{2}=\{D_{2,1},D_{2,2},...,D_{2,t}\}\)\(\triangleright\) batch of target distribution 18:\(r=MMD(Q_{1},Q_{2})\)\(\triangleright\) compute Maximum Mean Discrepancy 19:\(T=\{Q_{1},Q_{2}\}\)\(\triangleright\) combine \(Q_{1}\) and \(Q_{2}\) (under \(H_{0}\) of no drift) 20:\(R=\{R;r\}\)\(\triangleright\) append MMD statistic 21: 22:for\(i=1:K\)do 23:\(T^{\prime}=Bootstrap(T)\)\(\triangleright\) get a bootstrap sample from \(T\) 24:\(Q^{\prime}_{1}=T^{\prime}_{1:\beta/2}\)\(\triangleright\) get 1st half of \(T^{\prime}\) 25:\(Q^{\prime}_{2}=T^{\prime}_{\beta/2+1:\beta}\)\(\triangleright\) get 2nd half of \(T^{\prime}\) 26:\(r^{\prime}=MMD(Q^{\prime}_{1},Q^{\prime}_{2})\) 27:\(R_{b}=\{R_{b};r^{\prime}\}\)\(\triangleright\) append MMD statistic 28:endfor 29:\(d_{med}=Median(R_{b})\)\(\triangleright\) compute median of estimated MMDs 30:\(X=\{X;d_{med}\}\)\(\triangleright\) append \(d_{med}\) to \(X\) 31:\(t=t+1\)\(\triangleright\) increase index 32:\(Q_{1}=\{Q_{1};D_{1,t}\}\)\(\triangleright\) add next sample to \(Q_{1}\) 33:\(Q_{2}=\{Q_{2};D_{2,t}\}\)\(\triangleright\) add next sample to \(Q_{2}\) 34:\(Q_{1}=Q_{1}\setminus\{D_{1,t-\beta}\}\)\(\triangleright\) remove first sample from \(Q_{1}\) 35:\(Q_{2}=Q_{2}\setminus\{D_{2,t-\beta}\}\)\(\triangleright\) remove first sample from \(Q_{2}\) 36:endfor 37:\(z=Argmax(R)\)\(\triangleright\) highest drift index 38:\(U=\{D_{l,z-\beta},D_{l,z-\beta+1},...,D_{l,z}\}\), \(l\in\{1,2\}\)\(\triangleright\) samples causing highest drift 39:Return \(d=mean(X)\), \(U\)\(\triangleright\) mean of median MMDs and drift samples ``` **Algorithm 1** Proposed Drift Detection Method The results in Table 1 demonstrate a significant correlation between estimated drift (MMD) and model performance metrics. Consequently, our unsupervised drift detection approach can effectively monitor model performance and accurately predict model regression. This capability is particularly valuable in production environments where annotated data may not be available, making our drift detection method the only practical option for model performance monitoring. Figure 2 depicts the estimated drift and model performance over time, revealing a notable increase in drift as time progresses. This increase is attributed to the emergence of new data patterns in the production data, causing an increase in the estimated drift and a corresponding drop in the model's performance. These findings underscore the need for ongoing monitoring and recalibration of the model to ensure its continued effectiveness. ### Model Performance Regression Mitigation To assess the effectiveness of our proposed drift detection method in reducing the decline in model performance from drift when used in production, we outline a comprehensive mitigation process. Firstly, we run our drift detection method by setting the reference and target distributions to be the model's training and inference data during production, respectively.
Then, our proposed method identifies the samples that exhibit the highest degree of drift or deviation from the production data. Finally, we incorporate the dataset generated from the previous step into the model's training data and retrain the model, enabling it to improve its performance during production. We implemented the above-mentioned approach on a multi-task model used for domain classification, intent classification, and named entity recognition. The model architecture comprises a blend of Bi-LSTM and transformer models (Devlin et al., 2018). To improve this model's performance, we employed our proposed drift detection method to compare the model's training data with one month of production data. This enabled us to identify samples with the highest degree of drift from production data. However, in order to use these data during model training, they need to be annotated. Therefore, we Figure 1: Plot of estimated drift (MMD) vs the model performance metrics for monthly buckets, which indicates that as the amount of drift increases, model performance drops. Figure 2: Left and right plots show the model performance metric (AUC) and the estimated amount of drift (MMD), respectively. The increase in MMD over time is attributed to the emergence of new data patterns in the production data, causing an increase in the estimated drift and a corresponding drop in the model's performance utilized a semi-supervised approach, where we used a large transformer model that leverages a variety of offline signals such as previous conversations that are not available during online inference, to generate pseudo-labels for these data. With the help of these pseudo-labels, we added new samples with the highest amount of drift to the model's training data and retrained the model. We conducted a thorough evaluation of the new model trained on samples generated from our drift detection method. Specifically, we tested the model's performance on a held-out false accept (FA) dataset, consisting of only false accept data, to evaluate its ability to reject false accept samples. Additionally, we compared the performance of our proposed approach with several common mitigation methods. These methods include the following: **Baseline:** A baseline model trained using the model's original training data without any mitigation. **Bias reduction:** This method involves clustering the training and production data to identify patterns that are absent within the training data. We then added new samples from the absent patterns to the training data using this approach. **Upsampling:** This method first identifies the underrepresented samples within the training data by clustering and then employs upsampling to increase their frequency in the training data. Then, we added underrepresented samples to the training data using this approach. We re-trained the model using all these methods, and Table 2 provides a detailed comparison of the various methods used in this experiment. The results indicate that the samples of data added by our proposed drift detection method significantly improve model performance and outperform other methods to varying degrees, without an increase in false reject rate. ### Encoder Effect Ablation Study In this section, we outline an experiment designed to assess the effect of different encoders on the performance of our proposed method. We utilize different encoders to extract text embeddings and focus on simulating data drift within a binary classification scenario.
The reference dataset initially maintains a balanced distribution of positive and negative class instances, each accounting for 50% of the dataset. Subsequently, we change the positive class percentage within the target dataset to induce data drift. The following datasets are used in our experiment: **AG news:** AG news comprises over one million news articles, compiled from a diverse range of 2000+ news sources over a span of more than a year (Zhang et al., 2015). We use the world and sports classes in our experiment. **Yelp review:** The Yelp review dataset consists of reviews extracted from Yelp (Zhang et al., 2016). We use reviews rated 5 as the positive class and reviews rated 1 as the negative class in our experiment. We sample 5000 observations from each dataset, where we increase/decrease the percentage of the positive class in the target dataset while comparing it to the reference dataset with an equal ratio of positive and negative classes. We set the batch size, \(\beta\), and bootstrap steps of our method to be 64, 32, and 50, respectively. We use the following encoders in our experiment: **bert-base-uncased:** BERT base (Devlin et al., 2018) is a transformer encoder model with 12 layers and a hidden size of 768 that is pretrained on a large corpus of English data. **bert-large-uncased:** this is a BERT model with 24 layers and a hidden size of 1024. **all-MiniLM-L12-v2:** This model, known as a sentence-transformers model, effectively encodes sentences and paragraphs into a 384-dimensional dense vector space, offering utility in applications such as clustering or semantic search (SBERT, 2023). This model has 12 layers and a hidden size of 384. **all-distilroberta-v1:** This model is also a sentence-transformer with 12 layers and a hidden size of 768. We ran our drift detection method using all four encoders on both datasets, and Figure 3 shows the results. The results indicate that there is a small amount of drift when the percentage of the positive class is close to 50%. \begin{table} \begin{tabular}{|c|c|} \hline Method & FAR (\%) \\ \hline Baseline & 73.15 \\ \hline Bias reduction & 61.85 \\ \hline Upsampling & 71.48 \\ \hline Proposed drift detection & **59.68** \\ \hline \end{tabular} \end{table} Table 2: Comparison of different mitigation methods on false accept rate (FAR) test set. Our proposed drift detection method shows superior performance compared to the other methods. Lower FAR values indicate better performance.
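As a concrete illustration of the core test in Algorithm 1, below is a minimal sketch of the squared-MMD statistic with a bootstrap null distribution. The RBF kernel, its bandwidth, and the sample sizes are assumptions made for this example, not choices prescribed by the paper.

```python
# Sketch: estimate MMD^2 between reference and target embeddings, then build a
# bootstrap null distribution by resampling the pooled data under H0 (no drift).
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased squared-MMD estimate between samples x and y.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

def drift_test(ref, tgt, n_boot=50, seed=0):
    rng = np.random.default_rng(seed)
    observed = mmd2(ref, tgt)
    pooled = np.vstack([ref, tgt])          # combine under H0 of no drift
    null = []
    for _ in range(n_boot):
        idx = rng.choice(len(pooled), size=len(pooled), replace=True)
        half = len(pooled) // 2
        null.append(mmd2(pooled[idx[:half]], pooled[idx[half:]]))
    # p-value: how often the null statistic exceeds the observed drift.
    p = (np.sum(np.array(null) >= observed) + 1) / (n_boot + 1)
    return observed, float(np.median(null)), p

ref = np.random.default_rng(1).normal(0.0, 1.0, size=(64, 16))
tgt = np.random.default_rng(2).normal(0.5, 1.0, size=(64, 16))
print(drift_test(ref, tgt))
```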
2309.08794
Privacy-preserving Early Detection of Epileptic Seizures in Videos
In this work, we contribute towards the development of video-based epileptic seizure classification by introducing a novel framework (SETR-PKD), which could achieve privacy-preserved early detection of seizures in videos. Specifically, our framework has two significant components - (1) It is built upon optical flow features extracted from the video of a seizure, which encodes the seizure motion semiotics while preserving the privacy of the patient; (2) It utilizes a transformer based progressive knowledge distillation, where the knowledge is gradually distilled from networks trained on a longer portion of video samples to the ones which will operate on shorter portions. Thus, our proposed framework addresses the limitations of the current approaches which compromise the privacy of the patients by directly operating on the RGB video of a seizure as well as impede real-time detection of a seizure by utilizing the full video sample to make a prediction. Our SETR-PKD framework could detect tonic-clonic seizures (TCSs) in a privacy-preserving manner with an accuracy of 83.9% while they are only half-way into their progression. Our data and code is available at https://github.com/DevD1092/seizure-detection
Deval Mehta, Shobi Sivathamboo, Hugh Simpson, Patrick Kwan, Terence O`Brien, Zongyuan Ge
2023-09-15T22:29:07Z
http://arxiv.org/abs/2309.08794v1
# Privacy-preserving Early Detection of Epileptic Seizures in Videos ###### Abstract In this work, we contribute towards the development of video-based epileptic seizure classification by introducing a novel framework (SETR-PKD), which could achieve privacy-preserved early detection of seizures in videos. Specifically, our framework has two significant components - (1) It is built upon optical flow features extracted from the video of a seizure, which encodes the seizure motion semiotics while preserving the privacy of the patient; (2) It utilizes a transformer based progressive knowledge distillation, where the knowledge is gradually distilled from networks trained on a longer portion of video samples to the ones which will operate on shorter portions. Thus, our proposed framework addresses the limitations of the current approaches which compromise the privacy of the patients by directly operating on the RGB video of a seizure as well as impede real-time detection of a seizure by utilizing the full video sample to make a prediction. Our SETR-PKD framework could detect tonic-clonic seizures (TCSs) in a privacy-preserving manner with an accuracy of **83.9%** while they are only **half-way** into their progression. Our data and code is available at [https://github.com/DevD1092/seizure-detection](https://github.com/DevD1092/seizure-detection). Keywords: epilepsy, early detection, knowledge distillation ## 1 Introduction Epilepsy is a chronic neurological condition that affects more than 60 million people worldwide, in which patients experience epileptic seizures due to abnormal brain activity [17]. Different types of seizures are associated with the specific part of the brain involved in the abnormal activity [8]. Thus, accurate detection of the type of epileptic seizure is essential to epilepsy diagnosis, prognosis, drug selection and treatment. Concurrently, real-time seizure alerts are also essential for caregivers to prevent potential complications, such as related injuries and accidents, that may result from seizures. Particularly, patients suffering from tonic-clonic seizures (TCSs) are at a high risk of sudden unexpected death in epilepsy (SUDEP) [18]. Studies have shown that SUDEP is caused by severe alteration of cardiac activity actuated by TCS, leading to immediate death or cardiac arrest within minutes after the seizure [5]. Therefore, it is critical to accurately and promptly detect and classify epileptic seizures to provide better patient care and prevent any potentially catastrophic events. The current gold standard practice for detection and classification of epileptic seizures is the hospital-based Video EEG Monitoring (VEM) unit [23]. However, this approach is expensive and time-consuming, and is only available at specialized centers [3]. To address this issue, the research community has developed automated methods to detect and classify seizures based on several modalities - EEG [7, 30], accelerometer [16], and even functional neuroimaging modalities such as fMRI [22] and electrocorticography (ECoG) [24]. Although there have been developments of approaches for the above modalities, seizure detection using videos remains highly desirable as it involves no contact with the patient and is easier to set up and acquire data compared to other modalities. Thus, researchers have also developed automated approaches for the video modality.
Initial works primarily employed hand-crafted features based on patient motion trajectory by attaching infrared reflective markers to specific body key points [4, 15]. However, these approaches were limited in performance due to their inability to generalize to changing luminance (night time seizures) or when the patient is occluded (covered by a bed sheet) [14]. Thus, very recently deep learning (DL) models have been explored for this task [1, 2, 21, 29, 12]. [29] demonstrated that DL models could detect generalized tonic-clonic seizures (GTCSs) from the RGB video of seizures. Authors in [21] radically used transfer learning (from action recognition task) to train DL networks for distinguishing focal onset seizures (FOSs) from bilateral TCSs using features extracted from the RGB video of seizures. Whereas, the authors in [12] developed a DL model to discriminate dystonia and emotion in videos of Hyperkinetic seizures. However, these developed approaches have two crucial limitations - (1) As these approaches directly operate on RGB videos, there is a possibility of privacy leakage of the sensitive patient data from videos. Moreover, obtaining consent from patients to share their raw RGB video data for building inter-cohort validation studies and generalizing these approaches on a large scale becomes challenging; (2) The current approaches consider the full video of a seizure to make predictions, which makes early detection of seizures impossible. The duration of a seizure varies significantly among patients, with some lasting as short as 30 seconds while others can take minutes to self-terminate. Thus, it is unrealistic to wait until the completion of a long seizure to make a prediction and alert caregivers. In this work, we address the above two challenges by building an in-house dataset of privacy-preserved extracted features from a video and propose a frame work for early detection of seizures. Specifically, we investigate two aspects - (1) The feasibility of detecting and classifying seizures based only on _optical flow_, a modality that captures temporal differences in a scene while being intrinsically privacy-preserving. (2) The potential of predicting the type of seizure during its progression by analyzing only a fraction of the video sample. Our early detection approach is inspired by recent developments in early action recognition in videos [9, 10, 19, 21, 28, 31]. We develop a custom feature extractor-transformer framework, named **SE**izure **TR**ansformer (SETR) block for processing a single video sample. To achieve early detection from a fraction of the sample, we propose **P**rogressive **K**nowledge **D**istillation (PKD), where we gradually distill knowledge from SETR blocks trained on longer portions of a video sample to SETR blocks which will operate on shorter portions. We evaluate our proposed SETR-PKD framework on two datasets - an in-house dataset collected from a VEM unit in a hospital and a publicly available dataset of video-extracted features (GESTURES) [21]. Our experiments demonstrate that our proposed SETR-PKD framework can detect TCS seizures with an accuracy of **83.9%** in a privacy-preserving manner when they are only **half-way** into their progression. Furthermore, we comprehensively compare the performance of direct knowledge distillation with our PKD approach on both optical flow features (in-house dataset) and raw video features (public dataset). 
We firmly believe that our proposed method makes the first step towards developing a privacy-preserving real-time system for seizure detection in clinical practice. ## 2 Proposed Method In this section, we first outline the process of extracting privacy-preserving information from RGB video samples to build our in-house dataset. Later, we explain our proposed approach for early detection of seizures in a sample. ### 2.1 Privacy Preserving Optical Flow Acquisition Our in-house dataset of RGB videos of patients experiencing seizures resides on hospital premises and is not exportable due to the hospital's ethics agreement. To work around this limitation, we develop a pipeline to extract optical flow information [11] from the videos. This pipeline runs locally within the hospital and preserves the privacy of the patients while providing us with motion semiotics of the seizures. An example of the extracted optical flow video sample can be seen in Fig 1. We use the TV-L1 algorithm [20] to extract the optical flow features for each video, which we then export out of the hospital for building our proposed approach. We provide more information about our dataset, including the number of patients and seizures, annotation protocol, etc., in Section 3. ### 2.2 Early Detection of Seizures in a Sample Consider an input optical flow video sample \(V_{i}\) as shown in Fig 1(a) with a time period of \(T_{i}\), consisting of \(N\) frames \(\{f_{0},f_{1},...,f_{N-1}\}\), and having a ground truth label of \(y_{i}\in\{0,1,...,C\}\), where \(C\) is the total number of categories. Then, the task of early detection is to build a framework that could classify the category of the sample correctly by analyzing the least possible partial segment of the sample. Thus, to define the problem of early detection, we split the sample \(V_{i}\) into \(k\) segments \(\{0,1,...,k-1\}\) starting from the beginning to the end as shown in Fig 1(b). Here \(V_{i}^{k-1}\) corresponds to the full video sample and the descending segments correspond to the reduced partial video samples. We build these partial segments by equally adding the temporal information throughout the sample, i.e. the time period for a partial subset \(V_{i}^{j}\) of a sample \(V_{i}\) is computed as \((j+1)\times T_{i}/k\). Thus, the early detection task is to correctly predict the category \(y_{i}\) of the sample \(V_{i}\) from the lowest possible (\(j\)) partial segment \(V_{i}^{j}\) of \(V_{i}\). In Fig 1, we illustrate our proposed framework where (a) first, we build a Seizure Transformer (SETR) block for processing a single optical flow video sample, and (b) later, we employ SETR based Progressive Knowledge Distillation (SETR-PKD) to achieve early detection in a sample. #### 2.2.1 Processing a Single Sample Since seizure patterns comprise body movements, we implement transfer learning from a feature extractor pre-trained on an action recognition task to extract the spatial features from the optical flow frames. Prior work [21] has shown that Temporal Segment Networks (TSNs) [27] pre-trained on RGB videos of various actions are effective at extracting features from videos of seizures. We also utilize TSNs but pretrained on the optical flow modality, since we have privacy-preserved optical flow frames. The TSNs extract a 1D feature sequence for each frame \(f_{j}\), referred to as spatial features in Fig 1(a).
The spatial features are then processed by a linear transformation (1-layer MLP) that maps them into \(motion_{tokens}\in\mathbb{R}^{N\times D}\), where each token has \(D\)-dimensions. We leverage transformers to effectively learn temporal relations between the extracted spatial features of the seizure patterns. Following the strategy of ViT [6], after extracting the spatial features, we append a trainable class embedding \(class_{embed}\in\mathbb{R}^{D}\) to the motion tokens. This class embedding serves to represent the temporal relationships between the motion tokens and is later used for classification (\(class_{token}\) in Fig 1(a)). As the order of the \(motion_{tokens}\) is not known, we also add a learnable positional encoding \(L_{POS}\in\mathbb{R}^{(N+1)\times D}\) to the combined \(motion_{tokens}\) and \(class_{embed}\). This is achieved using an element-wise addition and we term it as the input \(X_{i}\) for the input sample \(V_{i}\). To enable the interaction between tokens and learn temporal relationships for input sample classification, we employ the Vanilla Multi-Head Self Attention (MHSA) mechanism [26]. First, we normalize the input sequence \(X_{i}\in\mathbb{R}^{(N+1)\times D}\) by passing it through a layer normalization, yielding \(X_{i}^{\prime}\). We then use projection matrices (\(Q_{i},K_{i},V_{i}\)) = (\(X_{i}^{\prime}W_{i}^{Q},X_{i}^{\prime}W_{i}^{K},X_{i}^{\prime}W_{i}^{V}\)) to project \(X_{i}^{\prime}\) into queries (Q), keys (K), and values (V), where \(W_{i}^{Q/K/V}\in\mathbb{R}^{D\times D}\) are the projection matrices for query, key, and value respectively. Next, we compute a dot product of \(Q\) with \(K\) and apply a softmax layer to obtain weights on the values. We repeat this self-attention computation \(N_{h}\) times, where \(N_{h}\) is the number of heads, and concatenate their outputs. Eq 1, 2 depict the MHSA process in general. \[A_{i}=Softmax(Q_{i}K_{i}^{\mathsf{T}}) \tag{1}\] \[MHSA(X_{i}^{\prime})=A_{i}V_{i},\qquad X_{i}^{\prime}=Norm(X_{i}) \tag{2}\] Subsequently, the output of MHSA is passed to a two-layered MLP with GELU non-linearity while applying layer normalization and residual connections concurrently. Eq 3, 4 represent this overall process. \[m_{l}^{\prime}=MHSA(X_{l-1}^{\prime})+X_{l-1},\qquad l=1...L \tag{3}\] \[m_{l}=MLP(Norm(m_{l}^{\prime}))+m_{l}^{\prime},\qquad l=1...L \tag{4}\] where \(m_{L}\in\mathbb{R}^{(N+1)\times D}\) are the final output feature representations and \(L\) is the total number of encoding layers in the Transformer Encoder. Note that the first \(\mathbb{R}^{N\times D}\) features correspond to the \(patch_{tokens}\), while the final \(\mathbb{R}^{D}\) correspond to the \(class_{token}\) of the \(m_{L}\) as shown in Fig 1(a). As mentioned earlier, we then use a one-layer MLP to predict the class label from the \(class_{token}\). We refer to this whole process as a SEizure TRansformer (SETR) block shown in Fig 1(a). Figure 1: Our proposed framework - (a) SEizure TRansformer (SETR) block for a single optical flow video sample (b) SETR based Progressive Knowledge Distillation (SETR-PKD) for early detection of seizures in a sample. (Best viewed in zoom and color).
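A minimal PyTorch sketch of the SETR block just described is given below (spatial features, to motion tokens, to class token plus learnable positions, to transformer encoder, to classification head). The hyperparameter values follow Sect. 3.2; the class and variable names are ours, and `nn.TransformerEncoder` is used as a stand-in for the custom MHSA/MLP stack of Eqs. 1-4.

```python
# Sketch of a SETR-like block: TSN spatial features -> tokens -> encoder -> logits.
import torch
import torch.nn as nn

class SETRBlock(nn.Module):
    def __init__(self, n_frames=64, feat_dim=512, d_model=256,
                 n_heads=8, n_layers=3, n_classes=2):
        super().__init__()
        self.to_tokens = nn.Linear(feat_dim, d_model)          # motion tokens
        self.class_embed = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, n_frames + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dropout=0.1,
                                           activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)              # 1-layer MLP

    def forward(self, spatial_feats):                          # (B, N, 512)
        x = self.to_tokens(spatial_feats)                      # (B, N, D)
        cls = self.class_embed.expand(x.size(0), -1, -1)
        x = torch.cat([x, cls], dim=1) + self.pos              # append class token
        x = self.encoder(x)
        patch_tokens, class_token = x[:, :-1], x[:, -1]        # split outputs
        return self.head(class_token), patch_tokens, class_token

logits, patches, cls_tok = SETRBlock()(torch.randn(2, 64, 512))
```

Returning the patch tokens and class token alongside the logits matters here, since both token sets feed the distillation losses of the next subsection.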
#### 2.2.2 Progressive Knowledge Distillation To achieve early detection, we use **K**nowledge **D**istillation in a **P**rogressive manner (PKD), starting from a SETR block trained on a full video sample and gradually moving to a SETR block trained on a partial video sample, as shown in Fig 1(b). Directly distilling from a SETR block which has seen a significantly longer portion of the video (say \(V_{i}^{k-1}\)) to a SETR block which has only seen a smaller portion of the video sample (say \(V_{i}^{0}\)) will lead to considerable mismatches between the features extracted from the two SETRs, as there is a large portion of the input sample that the \(student_{0}\) SETR has not seen. In contrast, our proposed PKD operates in steps. First we pass the knowledge from the teacher (\(Teacher_{k-1}\) in Fig 1(b)) SETR trained on \(V_{i}^{k-1}\) to a student (\(Sub-teacher_{k-2}\)) SETR that operates on \(V_{i}^{k-2}\); later, the \(Sub-teacher_{k-2}\) SETR passes its distilled knowledge to its subsequent student (\(Sub-teacher_{k-3}\)) SETR, and this continues until the final \(Sub-teacher_{1}\) SETR passes its knowledge to the bottom-most \(Student_{0}\) SETR. Since the consecutive segments of the videos do not differ significantly, PKD is more effective than direct distillation, as demonstrated by the results in Section 3.4. For distilling knowledge we consider both the class token and patch tokens of the teacher and student networks. A standard Kullback-Leibler divergence (\(\mathcal{L}_{KL}\)) loss is applied between the probabilities generated from the class token of the teacher and student SETR, whereas a mean squared error (\(\mathcal{L}_{MSE}\)) loss is computed between the patch tokens of the teacher and student SETR. Overall, a student SETR is trained with three losses - \(\mathcal{L}_{KL}\) and \(\mathcal{L}_{MSE}\) loss for knowledge distillation, and a cross-entropy (\(\mathcal{L}_{CE}\)) loss for classification, given by the equations below. \[\mathcal{L}_{KL}=\tau^{2}\sum_{j}q_{j}^{T}\log(q_{j}^{T}/q_{j}^{S}) \tag{5}\] where \(q_{j}^{S}\) and \(q_{j}^{T}\) are the soft probabilities (moderated by temperature \(\tau\)) of the student and teacher SETRs for the \(j^{th}\) class, respectively. \[\mathcal{L}_{mse}=\frac{1}{N}\sum_{i=1}^{N}\|p_{i}^{T}-p_{i}^{S}\|^{2} \tag{6}\] where \(N\) is the number of patches and \(p_{i}^{T}\) and \(p_{i}^{S}\) are the patches of the teacher and student SETRs respectively. \[\mathcal{L}_{total}=\mathcal{L}_{CE}+\alpha\mathcal{L}_{KL}+\beta\mathcal{L}_{mse} \tag{7}\] where \(\alpha\) and \(\beta\) are the weights for the \(\mathcal{L}_{KL}\) and \(\mathcal{L}_{MSE}\) loss respectively (a minimal code sketch of this combined objective is given after Sect. 4). ## 3 Datasets & Experimental Results ### 3.1 In-house and Public Dataset Our in-house dataset contains optical flow information extracted from high-definition (1920x1080 pixels at 30 frames per second) video recordings of TCS seizures (infrared cameras are used for nighttime seizures) in a VEM unit in a hospital. To annotate the dataset, two neurologists examined both the video and corresponding EEG to identify the clinical seizure onset (\(t_{ON}\)) and clinical seizure offset (\(t_{OFF}\)) times for each seizure sample. We curated a dataset comprising 40 TCSs from 40 epileptic patients, with one sample per patient. The duration (in seconds) of the 40 TCSs in our dataset ranges from 52 to 367 s, with a median duration of 114 s.
We also prepared normal samples (no seizure) for each patient by considering the pre-ictal duration from (\(t_{ON}\) - 300) to (\(t_{ON}\) - 60) seconds, resulting in a dataset of 80 samples (40 normal and 40 TCSs). We refrain from using the 60 seconds prior to clinical onset as it corresponds to the transition period to the seizure containing preictal activity [13, 25]. We use a 5-fold cross validation (split based on patients) for training and testing on our dataset. We also evaluate the effectiveness of our early detection approach on the GESTURES dataset [21], which contains features extracted from RGB video samples of seizures. The dataset includes two seizure types - 106 focal onset seizures (FOS) and 77 Tonic-Clonic Seizures (TCS). In contrast to our in-house dataset, the features are provided by the authors, and we directly input them into our SETR block without using a feature extractor. To evaluate our method, we adopt the stratified 10-fold cross-validation protocol as used in GESTURES. ### 3.2 Training Implementation & Evaluation Metrics We implement all experiments in PyTorch 1.8.1 on a single A100 GPU. The SETR block takes in a total of 64 frames (\(N\)) with a 512-dimensional spatial feature per frame, has 8 MHSA heads (\(N_{h}\)) with a dropout rate of 0.1, 3 encoder layers (\(L\)), and 256 hidden dimensions (\(D\)). For early detection, we experiment by progressively segmenting a sample into \(\{4,8,16\}\) parts (\(k\)). We employ a grid search to select weights of 0.2 and 0.5 for the KL divergence (\(\tau\) = 10) and MSE loss respectively. We train all methods with a batch size of 16, a learning rate of 1e-3 and use the AdamW optimizer with a weight decay of 1e-4 for a total of 50 epochs. For the GESTURES dataset, we implement a weighted BCE loss to deal with the dataset imbalance, whereas for our in-house dataset we implement the standard BCE loss. We use precision, recall and F1-score for benchmarking. ### 3.3 Performance for Early Detection Table 1 shows the benchmarking performance of all techniques with varying fractions of input video samples on both datasets. We observed three key findings from the results in Table 1. First, transformer-based methods such as our proposed **SETR-PKD** and OaDTR exhibit better performance retention compared to LSTM-based techniques (RULSTM, Slowfast RULSTM, EgoAKD, GESTURES) with a reduction in the fraction of the input sample. Second, **SETR-PKD** performance increases from \(k\)=4 to \(k\)=8, but saturates at \(k\)=16 for the in-house dataset, whereas it achieves the best performance for \(k\)=4 for the GESTURES dataset. The median seizure length for the in-house dataset and GESTURES dataset is 114 seconds and 71 seconds, respectively. As a result, PKD using relatively longer partial segments (\(k\)=4) is sufficient for GESTURES, while shorter partial segments (\(k\)=8) are required for our dataset. Thus, the optimal value of \(k\) for PKD may vary depending on the dataset. Finally, we observed better performance on the GESTURES dataset, which is expected given the more detailed and refined features extracted from RGB video compared to optical flow information. ### 3.4 Progressive v/s Direct Knowledge Distillation To validate our approach of progressive knowledge distillation in a fair manner, we conducted an ablation study to compare it with direct knowledge distillation. Fig 2 shows the comparison of the accuracy of the two approaches for different fractions of the input video sample on both datasets.
The results indicate that although direct knowledge distillation can increase performance, it is less effective when the knowledge gap is wide, i.e., from a SETR block trained on a full input sample to a SETR block trained on a minimal fraction of the input sample (1/8, 1/4, ..., 1/2), compared to when the knowledge gap is small (5/8, ..., 7/8). On the other hand, our SETR-PKD approach significantly improves performance for minimal fractions of input samples on both datasets. Table 1: Benchmarking of different techniques for different fractions {1/4, 1/2, 3/4, Full} of the input video sample. The performance is presented as the mean of {Precision/Recall/F1-score} across the 5 folds and 10 folds for the in-house and GESTURES datasets respectively. Figure 2: Performance comparison of direct knowledge distillation and progressive knowledge distillation between SETR blocks for different fractions of the input video sample. ## 4 Conclusion In this work, we show that it is possible to detect epileptic seizures from the optical flow modality in a privacy-preserving manner. Moreover, to achieve real-time seizure detection, we specifically develop a novel approach using progressive knowledge distillation, which proves to detect seizures more accurately during their progression itself. We believe that our proposed privacy-preserving early detection of seizures will inspire the research community to pursue real-time seizure detection in videos as well as facilitate inter-cohort studies.
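Returning to the training objective of Sect. 2.2.2, below is a minimal PyTorch sketch of the combined loss of Eqs. 5-7 for one teacher-student pair in the progressive chain. The values of \(\tau\), \(\alpha\) and \(\beta\) follow Sect. 3.2; the function signature is an assumption, and the outer training loop is omitted.

```python
# Sketch of one PKD step: distill from a teacher SETR (longer video portion)
# into a student SETR (shorter portion) using class and patch tokens.
import torch
import torch.nn.functional as F

def pkd_loss(student_out, teacher_out, labels, tau=10.0, alpha=0.2, beta=0.5):
    s_logits, s_patches, _ = student_out
    t_logits, t_patches, _ = teacher_out
    # Supervised classification loss on the student's class-token logits.
    ce = F.cross_entropy(s_logits, labels)
    # KL between temperature-softened class-token predictions (Eq. 5).
    kl = tau ** 2 * F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                             F.softmax(t_logits.detach() / tau, dim=-1),
                             reduction="batchmean")
    # MSE between teacher and student patch tokens (Eq. 6).
    mse = F.mse_loss(s_patches, t_patches.detach())
    return ce + alpha * kl + beta * mse                       # Eq. 7
```

In the progressive scheme, this loss would first be applied from \(Teacher_{k-1}\) to \(Sub-teacher_{k-2}\), then repeated down the chain until \(Student_{0}\) is trained.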
2303.17837
Supersymmetry and integrability for a class of XY central spin models
Several studies have exploited the integrable structure of central spin models to deepen understanding of these fundamental systems. In recent years, an underlying supersymmetry for systems with XX interactions has been uncovered. Here we report that a class of central spin models with XY interactions is also supersymmetric and integrable. The associated Bethe Ansatz solution is presented for the case where all particles are spin-1/2.
W J P van Tonder, J Links
2023-03-31T06:59:00Z
http://arxiv.org/abs/2303.17837v1
# Supersymmetry and integrability for a class of \(XY\) central spin models ###### Abstract Several studies have exploited the integrable structure of central spin models to deepen understanding of these fundamental systems. In recent years, an underlying supersymmetry for systems with \(XX\) interactions has been uncovered. Here we report that a class of central spin models with \(XY\) interactions is also supersymmetric and integrable. The associated Bethe Ansatz solution is presented for the case where all particles are spin-1/2. ## 1 Introduction Central spin models increasingly draw attention for their potential applications in developing quantum technologies. This interest is further driven by studies of their integrability, offering high-fidelity control of mesoscopic quantum systems where the exponentially increasing size of the Hilbert space makes such control challenging [5]. The central spin allows the dynamics of the spin bath to be monitored and for feedback to be used to steer the dynamics of the bath in a desired manner. A physical realisation is accommodated by nitrogen vacancy centres in diamond, where one carbon atom is replaced by a nitrogen atom and an adjacent carbon atom is absent. This acts like a spin-1/2 particle interacting with a bath of nuclear spins from neighbouring carbon atoms [28]. An application exploiting this high degree of control and robustness of the system is memory in quantum computers. The special eigenstates of some of the central spin models provide a means of storing the qubit state of the central spin among the bath spins and recovering it later [22, 23, 24]. Other potential applications include quantum sensing and metrology [20], and utilising a central spin system with several central spins as a model of a quantum battery [15]. In this latter setting the central spins serve as battery cells that are charged by the bath spins. As mentioned, much of the theoretical interest in central spin models stems from the existence of exact solutions, see e.g. [1, 6, 7, 9, 10, 16, 26, 25], and there are ongoing efforts to extend the body of known results. In recent times it was shown that a central spin-1/2 particle interacting with arbitrary bath spins is integrable for \(XX\) interactions [23]. The result holds when the central spin is subjected to a magnetic field perpendicular to the plane of interaction. A surprising feature of this analysis was the appearance of _supersymmetry_, reminiscent of that observed in a class of \(XYZ\) spin chains [8]. A complementary study showed that integrability persists for \(XX\) interactions when an arbitrarily oriented magnetic field is applied to the central spin [6]. However, the methods of the latter study were valid only for a bath of spin-1/2 particles. Here, we unify these approaches and extend them to establish that integrability holds for a class of \(XY\) interactions, with arbitrary magnetic field applied to a spin-1/2 central spin and with arbitrary bath spins. Similar to [23], we also identify a supersymmetric structure underlying the Hamiltonian. In Sect. 2 we give the Hamiltonian of the model and express it in terms of conjugate supercharges. This formulation shows that the square of the Hamiltonian is, up to a constant, a supersymmetric operator. In Sect. 3 we then provide an extensive set of operators that commute with the square of the Hamiltonian.
Restricting to the case where the bath consists entirely of spin-1/2 particles, we derive a Bethe Ansatz solution for the energies of the Hamiltonian in Sect. 4. Concluding remarks are provided in Sect. 5. ## 2 The Hamiltonian and supersymmetry Consider a set of \(L+1\) spin operators \(\{S^{x}_{j},\,S^{y}_{j},\,S^{z}_{j}:j=0,\ldots L\}\) satisfying the standard canonical commutation relations \[[S^{\alpha}_{j},\,S^{\beta}_{k}]=i\delta_{jk}\sum_{\gamma\in\{x,y,z\}}\varepsilon^{\alpha\beta\gamma}S^{\gamma}_{j}, \tag{1}\] where \(\varepsilon^{\alpha\beta\gamma}\) is the Levi-Civita symbol. We identify the spin labelled by 0 as the _central spin_ and define a central spin Hamiltonian \(H\) with \(XY\) interactions to take the form \[H=B^{x}S^{x}_{0}+B^{y}S^{y}_{0}+B^{z}S^{z}_{0}+\sum_{j=1}^{L}(X_{j}S^{x}_{0}S^{x}_{j}+Y_{j}S^{y}_{0}S^{y}_{j}). \tag{2}\] We fix the central spin to be spin-1/2, while the \(L\) remaining _bath spins_ have arbitrary spin. Without loss of generality, we may assume that all coupling parameters \(B^{x},\,B^{y},\,B^{z},\,X_{j},\,Y_{j}\) are non-negative since the commutation relations (1) are invariant under parity-time transformations of the form \(i\mapsto-i,\,\alpha\mapsto-\alpha\) for \(\alpha\in\{x,\,y,\,z\}\). We introduce a set of distinct parameters \(\{\beta\}\cup\{\epsilon_{j}:j=1,\ldots,L\}\) such that \(\epsilon_{j}-\beta>0\) for all \(j=1,\ldots,L\) and we set \(f^{\pm}_{j}=\sqrt{\epsilon_{j}\pm\beta}\). As is usual, define \[S^{\pm}_{0}=S^{x}_{0}\pm iS^{y}_{0}.\] Next we introduce the _supercharges_ \[{\cal A}^{\pm}=S^{\mp}_{0}\left((\gamma\pm i\lambda)I+\sum_{j=1}^{L}(f^{-}_{j}S^{x}_{j}\pm if^{+}_{j}S^{y}_{j})\right), \tag{3}\] where \(I\) denotes the identity operator. These operators satisfy \(({\cal A}^{\pm})^{2}=0\) and \(S_{0}^{z}{\cal A}^{\pm}=-{\cal A}^{\pm}S_{0}^{z}\). It is straightforward to check that \[{\cal A}^{+}+{\cal A}^{-}=2\gamma S_{0}^{x}+2\lambda S_{0}^{y}+2\sum_{j=1}^{L}(f_{j}^{-}S_{0}^{x}S_{j}^{x}+f_{j}^{+}S_{0}^{y}S_{j}^{y})=H-2\mu S_{0}^{z}\] with the identification \[B^{x}=2\gamma,\qquad B^{y}=2\lambda,\qquad B^{z}=2\mu,\qquad Y_{j}^{2}+X_{j}^{2}=2\epsilon_{j},\qquad Y_{j}^{2}-X_{j}^{2}=2\beta.\] Moreover \[H^{2}=\mu^{2}I+({\cal A}^{+}+{\cal A}^{-})^{2}=\mu^{2}I+{\cal A}^{+}{\cal A}^{-}+{\cal A}^{-}{\cal A}^{+}=(\gamma^{2}+\lambda^{2}+\mu^{2})I+Q \tag{4}\] where \[Q=-2S_{0}^{z}\sum_{j=1}^{L}f_{j}^{+}f_{j}^{-}S_{j}^{z}+2\sum_{j=1}^{L}(\gamma f_{j}^{-}S_{j}^{x}+\lambda f_{j}^{+}S_{j}^{y})+\sum_{j,k=1}^{L}(f_{j}^{-}f_{k}^{-}S_{j}^{x}S_{k}^{x}+f_{j}^{+}f_{k}^{+}S_{j}^{y}S_{k}^{y}).\] The set of operators \(\{H^{2}-\mu^{2}I,\,{\cal A}^{\pm}\}\) provides a realisation of the \(sl(1|1)\) superalgebra and gives an example of a supersymmetric quantum mechanical system. Standard arguments show that the spectrum of \(H^{2}-\mu^{2}I\) is non-negative, and that the non-zero energy eigenstates appear as "boson/fermion" pairs related through the action of \({\cal A}^{\pm}\) [3]. The next step is to establish that there exists a set of \(L\) mutually commuting operators that commute with \(Q\), leading to the claim that the system is also integrable. ## 3 Integrability To expose the integrability of the system we take the approach of identifying a set of mutually commuting operators that generalise Gaudin operators [7], for arbitrary spins.
Define \[Q_{j}^{\pm} = \pm S_{j}^{z}+\frac{2\gamma}{f_{j}^{+}}S_{j}^{x}+\frac{2\lambda} {f_{j}^{-}}S_{j}^{y}+\frac{f_{j}^{-}}{f_{j}^{+}}(S_{j}^{x})^{2}+\frac{f_{j}^{ +}}{f_{j}^{-}}(S_{j}^{y})^{2} \tag{5}\] \[+ 2\sum_{k\neq j}^{L}\frac{1}{\epsilon_{j}-\epsilon_{k}}\left(f_{j }^{+}f_{k}^{-}S_{j}^{x}S_{k}^{x}+f_{j}^{-}f_{k}^{+}S_{j}^{y}S_{k}^{y}\right)\] \[+ 2\sum_{k\neq j}^{L}\frac{f_{k}^{+}f_{k}^{-}}{\epsilon_{j}-\epsilon _{k}}\left(S_{j}^{z}S_{k}^{z}-\frac{1}{4}I\right).\] Using the commutation relations (1) and the identities \[0 =\frac{f_{i}^{\mp}f_{j}^{\pm}}{(\epsilon_{i}-\epsilon_{j})}\frac{f_{j }^{\mp}f_{k}^{\pm}}{(\epsilon_{j}-\epsilon_{k})}-\frac{f_{i}^{\mp}f_{k}^{\pm}}{( \epsilon_{i}-\epsilon_{k})}\frac{f_{j}^{+}f_{j}^{-}}{(\epsilon_{j}-\epsilon_{k })}-\frac{f_{i}^{\mp}f_{k}^{\pm}}{(\epsilon_{i}-\epsilon_{k})}\frac{f_{j}^{+}f _{j}^{-}}{(\epsilon_{i}-\epsilon_{j})}\] \[=\frac{f_{j}^{+}f_{j}^{-}}{(\epsilon_{i}-\epsilon_{j})}\frac{f_{j }^{\mp}f_{k}^{\pm}}{(\epsilon_{j}-\epsilon_{k})}-\frac{f_{k}^{+}f_{k}^{-}}{( \epsilon_{i}-\epsilon_{k})}\frac{f_{j}^{\pm}f_{k}^{\mp}}{(\epsilon_{j}- \epsilon_{k})}-\frac{f_{i}^{\mp}f_{k}^{\pm}}{(\epsilon_{i}-\epsilon_{k})}\frac {f_{i}^{\mp}f_{j}^{\pm}}{(\epsilon_{i}-\epsilon_{j})}\] it can be shown by direct calculation that \[[Q_{j}^{\pm},\,Q_{k}^{\pm}]=0,\qquad j,k\in\{1,\ldots,L\},\] generalising the results of [11, 23] and being a specific case of those in [19]. Following [23] further, next set \[Q_{j} =\frac{1}{2}(I-2S_{0}^{z})Q_{j}^{+}+\frac{1}{2}(I+2S_{0}^{z})Q_{j }^{-} \tag{6}\] \[=-2S_{0}^{z}S_{j}^{z}+\frac{2\gamma}{f_{j}^{+}}S_{j}^{x}+\frac{2 \lambda}{f_{j}^{-}}S_{j}^{y}+\frac{f_{j}^{-}}{f_{j}^{+}}(S_{j}^{x})^{2}+\frac {f_{j}^{+}}{f_{j}^{-}}(S_{j}^{y})^{2}\] \[\qquad+2\sum_{k\neq j}^{L}\frac{1}{\epsilon_{j}-\epsilon_{k}} \left(f_{j}^{+}f_{k}^{-}S_{j}^{x}S_{k}^{x}+f_{j}^{-}f_{k}^{+}S_{j}^{y}S_{k}^{y }\right)\] \[\qquad+2\sum_{k\neq j}^{L}\frac{f_{k}^{+}f_{k}^{-}}{\epsilon_{j} -\epsilon_{k}}\left(S_{j}^{z}S_{k}^{z}-\frac{1}{4}I\right). \tag{7}\] We find that \[Q=\sum_{j=1}^{L}f_{j}^{+}f_{j}^{-}Q_{j}\] from which \([H^{2},\,Q_{j}]=0\) follows. This establishes that \(H\) is an abstract integrable quantum system in the sense that the number of conserved operators for \(H^{2}\) grows linearly with the number of spins. In the next section we will restrict to a bath of only spin-1/2 particles to illustrate how a Bethe Ansatz solution is obtained for the spectrum of \(H\). ## 4 Bethe Ansatz solution for a bath of spin-1/2 particles For the case when all bath particles are spin-1/2 we have \[\left(S_{j}^{x}\right)^{2}=\left(S_{j}^{y}\right)^{2}=\frac{1}{4}\] in which case it is convenient to define the modified conserved operators \[\tilde{Q}_{j}=Q_{j}-\frac{1}{4}\left(\frac{f_{j}^{-}}{f_{j}^{+}}+\frac{f_{j}^ {+}}{f_{j}^{-}}\right)I,\qquad j=1,...,L. \tag{8}\] It is known that the operators (8) satisfy a set of quadratic identities. For each simultaneous eigenstate of the operators given by (8), let \(\{\tilde{q}_{j}\}\) denote the set of corresponding eigenvalues. 
These eigenvalues necessarily satisfy analogous quadratic relations that read [2]

\[\tilde{q}_{j}^{2} = \frac{1}{4}+\frac{\gamma^{2}}{(f_{j}^{+})^{2}}+\frac{\lambda^{2}}{(f_{j}^{-})^{2}}-\sum_{k\neq j}^{L}f_{k}^{+}f_{k}^{-}\left(\frac{\tilde{q}_{j}-\tilde{q}_{k}}{\epsilon_{j}-\epsilon_{k}}\right)+\frac{1}{4}\sum_{k\neq j}^{L}\left(\frac{f_{j}^{+}f_{k}^{-}-f_{j}^{-}f_{k}^{+}}{\epsilon_{j}-\epsilon_{k}}\right)^{2}. \tag{9}\]

To extract Bethe Ansatz solutions from the above quadratic relations we will adapt methods developed in [12, 13]. Set

\[\alpha_{\pm}=\frac{1}{2}(L+1\pm 1).\]

Also assume the eigenvalues of the conserved operators to have the form

\[\tilde{q}_{j}=\frac{\alpha_{\pm}\epsilon_{j}}{f_{j}^{+}f_{j}^{-}}-\sum_{m=1}^{L}\frac{f_{j}^{+}f_{j}^{-}}{\epsilon_{j}-v_{m}}+\frac{1}{2}\sum_{k\neq j}^{L}\frac{f_{j}^{+}f_{j}^{-}-f_{k}^{+}f_{k}^{-}}{\epsilon_{j}-\epsilon_{k}} \tag{10}\]

so that the \(\{\tilde{q}_{j}:j=1,\ldots,L\}\) are parametrised in terms of variables \(\{v_{j}:j=1,\ldots,L\}\). Note that this is always possible to achieve. Setting

\[Q(u)=\prod_{j=1}^{L}(u-v_{j})=\sum_{j=0}^{L}a_{j}u^{j},\]

such that

\[\frac{Q^{\prime}(u)}{Q(u)}=\sum_{j=1}^{L}\frac{1}{u-v_{j}},\]

then (10) provides a system of \(L\) homogeneous linear equations for the \(L+1\) coefficients \(\{a_{j}\}\), and this system admits a non-trivial solution1.

Footnote 1: If the solution gives \(Q(u)\) as a polynomial of degree \(M<L\) such that \(a_{L}=0\), this is to be interpreted as \(L-M\) of the roots being _infinite-valued_. See [14] for an example of this feature.

The form (10) has been chosen in such a way that the relations (9), which are expressed in terms of irrational algebraic functions of the \(\{\epsilon_{j}\}\), can be transformed into rational functions. Inserting (10) into (9) leads to

\[S(\epsilon_{j})=0,\qquad j=1,\ldots,L\]

where

\[S(u) = \alpha_{\pm}^{2}\beta^{2}-\gamma^{2}(u-\beta)-\lambda^{2}(u+\beta)+(u^{2}-\beta^{2})\left((2L-2\alpha_{\pm})\sum_{m=1}^{L}\frac{v_{m}}{u-v_{m}}+\sum_{j,m=1}^{L}\frac{v_{m}^{2}-\beta^{2}}{(u-v_{m})(\epsilon_{j}-v_{m})}\right)+2(u^{2}-\beta^{2})\sum_{m=1}^{L}\sum_{n\neq m}^{L}\frac{v_{m}v_{n}-\beta^{2}}{(u-v_{m})(v_{m}-v_{n})}.\]

By observing that \(Q(u)S(u)\) is a polynomial of degree \(L+1\), and also using

\[S(\beta) = \alpha_{\pm}^{2}\beta^{2}-2\lambda^{2}\beta,\qquad S(-\beta) = \alpha_{\pm}^{2}\beta^{2}+2\gamma^{2}\beta,\]

it follows that

\[S(u)=f(u)\frac{P(u)}{Q(u)}\]

where

\[P(u)=\prod_{j=1}^{L}(u-\epsilon_{j})\]

and

\[f(u)=\frac{1}{2}\left((\alpha_{\pm}^{2}\beta-2\lambda^{2})\frac{Q(\beta)}{P(\beta)}(u+\beta)-(\alpha_{\pm}^{2}\beta+2\gamma^{2})\frac{Q(-\beta)}{P(-\beta)}(u-\beta)\right).\]

Evaluating

\[\lim_{u\to v_{l}}(u-v_{l})f(u)\frac{P(u)}{Q(u)}=\lim_{u\to v_{l}}(u-v_{l})S(u)\]

leads to the Bethe Ansatz equations

\[\frac{1}{2}\prod_{n\neq l}^{L}\frac{1}{v_{l}-v_{n}}\prod_{m=1}^{L}(v_{l}-\epsilon_{m})\left(\frac{\alpha_{\pm}^{2}\beta-2\lambda^{2}}{v_{l}-\beta}\frac{Q(\beta)}{P(\beta)}-\frac{\alpha_{\pm}^{2}\beta+2\gamma^{2}}{v_{l}+\beta}\frac{Q(-\beta)}{P(-\beta)}\right)=(L-2\alpha_{\pm})v_{l}-\sum_{j=1}^{L}\epsilon_{j}+\sum_{j=1}^{L}\frac{\epsilon_{j}^{2}-\beta^{2}}{\epsilon_{j}-v_{l}}+2\sum_{m\neq l}^{L}\frac{v_{l}v_{m}-\beta^{2}}{v_{l}-v_{m}}.
\tag{11}\]

Since \(P(u)-Q(u)\) has degree less than \(L\), Lagrange basis polynomials may be used to give

\[P(u)-Q(u) =\sum_{j=1}^{L}(P(v_{j})-Q(v_{j}))\prod_{k\neq j}^{L}\frac{u-v_{k}}{v_{j}-v_{k}} =\sum_{j=1}^{L}P(v_{j})\prod_{k\neq j}^{L}\frac{u-v_{k}}{v_{j}-v_{k}}.\]

Then taking the sum over \(l\) in (11) leads to

\[\frac{1}{2}\left((2\lambda^{2}-\alpha_{\pm}^{2}\beta)\left(1-\frac{Q(\beta)}{P(\beta)}\right)+(2\gamma^{2}+\alpha_{\pm}^{2}\beta)\left(1-\frac{Q(-\beta)}{P(-\beta)}\right)\right)=\sum_{l=1}^{L}(L-2\alpha_{\pm})v_{l}-L\sum_{j=1}^{L}\epsilon_{j}+\sum_{l=1}^{L}\sum_{j=1}^{L}\frac{\epsilon_{j}^{2}-\beta^{2}}{\epsilon_{j}-v_{l}}. \tag{12}\]

The eigenvalue \(\mathcal{Q}\) of \(Q\) reads

\[\mathcal{Q} =\sum_{j=1}^{L}f_{j}^{+}f_{j}^{-}\left(\tilde{q}_{j}+\frac{f_{j}^{-}}{4f_{j}^{+}}+\frac{f_{j}^{+}}{4f_{j}^{-}}\right) =\frac{1}{2}(2\alpha_{\pm}+L)\sum_{j=1}^{L}\epsilon_{j}-\sum_{j=1}^{L}\sum_{m=1}^{L}\frac{\epsilon_{j}^{2}-\beta^{2}}{\epsilon_{j}-v_{m}}.\]

Using (12) we obtain the squares of the energies, given by

\[E^{2} =\gamma^{2}+\lambda^{2}+\mu^{2}+\mathcal{Q} =\mu^{2}+\sum_{l=1}^{L}(L-2\alpha_{\pm})v_{l}+\frac{1}{2}(2\alpha_{\pm}-L)\sum_{j=1}^{L}\epsilon_{j}+\frac{1}{2}\left((2\lambda^{2}-\alpha_{\pm}^{2}\beta)\frac{Q(\beta)}{P(\beta)}+(2\gamma^{2}+\alpha_{\pm}^{2}\beta)\frac{Q(-\beta)}{P(-\beta)}\right). \tag{13}\]

It is important to highlight that the Bethe Ansatz solution given by (11), (13), for generic values of the coupling parameters, is complete and accounts for all energies of the Hamiltonian. The line of reasoning follows the same arguments as presented in [12, 13] and is based on the fact that the quadratic relations (9) obtained from operator identities are complete. In addition, however, it needs to be asserted that for each value of \(E^{2}\) given by (13), both the positive and negative values for \(E\) appear in the spectrum upon taking the square root. This follows from our earlier observation that the signs of the coupling parameters \(\{B^{x},\,B^{y},\,B^{z},\,X_{j},\,Y_{j}\}\), appearing linearly in (2), can be changed by unitary transformations.

We remark finally that in the limit \(\beta,\gamma,\lambda\to 0\) expressions (11) and (13) coincide with those found in [23] for the \(XX\) model, up to a change of variables. The solutions for the choice \(\alpha_{+}\) correspond to the entangled _bright states_, while those for the choice \(\alpha_{-}\) correspond to the separable _dark states_ as defined therein.

## 5 Discussion

We have shown that the supersymmetry and integrability of the central spin-1/2 model with \(XX\) interactions and arbitrary bath spins extend to a class of \(XY\) interactions. We have explicitly identified the supercharges (3) and a set of \(L\) mutually commuting conserved operators as given by (5). In the case where all bath particles are spin-1/2, we have used a set of known quadratic identities to derive a Bethe Ansatz solution.

For future work there are several avenues available. A generalisation of the Bethe Ansatz results for arbitrary spins is in principle attainable following the tensor product methods constructed in [11]. This is feasible because every higher spin with a finite-dimensional state space can be obtained through a tensor product of spin-1/2 spaces and an appropriate projection. In the limit as \(L\to\infty\) integral techniques can be employed to obtain an expression for the ground-state energy from the quadratic identities (9).
This calculation was undertaken in [17] for an analogous BCS pairing Hamiltonian. It is known that a correspondence exists between BCS models and central spin models [29], and this may be used to adapt and translate the results of [17] to the \(XY\) central spin model (2) in the spin-1/2 bath case.

While the Bethe Ansatz approach described here does not yield expressions for the eigenstates, these are in principle accessible by adapting the algebraic Bethe Ansatz approach developed in [18, 19]. This appears to be a highly technical challenge, but it would be very useful to gain a better physical understanding of the analogues of bright and dark states associated with the choice for \(\alpha_{\pm}\), characterised in [23] for \(XX\) interactions, within the \(XY\) model. Another path to follow is to generalise the studies of higher central spins [21, 27] with \(XX\) interactions to the \(XY\) setting.

## Acknowledgments

The authors acknowledge the traditional owners of the land on which The University of Queensland at St. Lucia operates, the Turrbal and Jagera people. This work was supported by the Australian Research Council through Discovery Project DP200101339.
2309.12914
VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks
Keyword spotting (KWS) refers to the task of identifying a set of predefined words in audio streams. With the advances seen recently with deep neural networks, it has become a popular technology to activate and control small devices, such as voice assistants. Relying on such models for edge devices, however, can be challenging due to hardware constraints. Moreover, as adversarial attacks have increased against voice-based technologies, developing solutions robust to such attacks has become crucial. In this work, we propose VIC-KD, a robust distillation recipe for model compression and adversarial robustness. Using self-supervised speech representations, we show that imposing geometric priors to the latent representations of both Teacher and Student models leads to more robust target models. Experiments on the Google Speech Commands datasets show that the proposed methodology improves upon current state-of-the-art robust distillation methods, such as ARD and RSLAD, by 12% and 8% in robust accuracy, respectively.
Heitor R. Guimarães, Arthur Pimentel, Anderson Avila, Tiago H. Falk
2023-09-22T15:03:41Z
http://arxiv.org/abs/2309.12914v1
VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks

###### Abstract

Keyword spotting (KWS) refers to the task of identifying a set of predefined words in audio streams. With the advances seen recently with deep neural networks, it has become a popular technology to activate and control small devices, such as voice assistants. Relying on such models for edge devices, however, can be challenging due to hardware constraints. Moreover, as adversarial attacks have increased against voice-based technologies, developing solutions robust to such attacks has become crucial. In this work, we propose VIC-KD, a robust distillation recipe for model compression and adversarial robustness. Using self-supervised speech representations, we show that imposing geometric priors to the latent representations of both Teacher and Student models leads to more robust target models. Experiments on the Google Speech Commands datasets show that the proposed methodology improves upon current state-of-the-art robust distillation methods, such as ARD and RSLAD, by 12% and 8% in robust accuracy, respectively.

Heitor R. Guimarães, Arthur Pimentel, Anderson Avila, Tiago H. Falk — Institut national de la recherche scientifique (INRS-EMT), Université du Québec, Montréal, Canada, and INRS-UQO Mixed Research Unit on Cybersecurity, Gatineau, Québec, Canada

Keyword Spotting, Adversarial Robustness, Knowledge Distillation, Robust Distillation, VICReg.

## 1 Introduction

Recent advances in the deep learning field have profoundly impacted the speech-processing community. As such, they have resulted in cutting-edge systems for tasks such as keyword spotting (KWS) [1] and speech/emotion recognition [2, 3], to name a few. In particular, KWS systems are usually the first layer of virtual assistants, where such models attempt to identify preset words in an utterance to activate the device (e.g., "Hey Siri"). On-device speech processing, such as Google Now and Alexa, is becoming ubiquitous in our daily lives; however, it can impose challenges on energy efficiency, real-time processing, and user privacy.

For KWS, self-supervised speech representation learning (S3RL) has become a popular tool. S3RL models, such as _Wav2Vec 2.0_ [4] and WavLM [5], are designed to learn combined acoustic and language models for continuous audio inputs. Notwithstanding, when working with edge devices, it is crucial to take model size into account due to hardware constraints. To this end, knowledge distillation (KD) [6] has emerged as a powerful tool to transfer knowledge from a larger (Teacher) model to a smaller (Student) one, resulting in comparable generalization capabilities.

Moreover, it is known that edge processing can provide an extra layer of security protection for the user, ensuring that private data is processed locally on the device and not sent over the cloud to a third-party server. However, recent research has shown that KWS systems based on self-supervised representations can be vulnerable to so-called adversarial attacks at inference time [7]. Adversarial attacks aim to design a small perturbation \(\delta\) that, when added to the test signal, will force the system to fail. Over-the-air adversarial attacks, for example, have shown that imperceptible noise can be added to the user's voice, resulting in misclassifications [8]. Adversarial training (AT) [9] has emerged as a potential defense technique against adversarial attacks.
AT is a robust optimization formulation that, in practice, can be seen as a data augmentation technique that generates adversarial versions of a natural sample to be used during training of Teacher models. The TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) method, for example, has generated inspiring results [10]. Efficiently designing small (Student) models, however, is a challenging task. The Adversarial Robust Distillation (ARD) [11] method combines adversarial training with KD, emphasizing the importance of minimizing the Kullback-Leibler divergence between Student and Teacher logits to enhance Student robustness. In contrast, the Robust Soft Label Adversarial Distillation (RSLAD) [12] method uses soft labels from the Teacher model (in lieu of hard labels) for guidance during distillation, empirically yielding better results than ARD. Notwithstanding, neither of these methods can consistently surpass the accuracy of a smaller model trained with AT, thus suggesting that further innovations are needed if large self-supervised speech representations are to be used on edge devices.

In this work, we propose VIC-KD, a novel distillation recipe that compresses the S3RL model size while increasing its robustness against adversarial attacks. In particular, we show that multi-view inputs and geometric constraints on the latent space of the Student model are essential to achieve these advantages. We consider Student models with fewer than 96K parameters and 3 MMACs, and knowledge distillation is performed from fine-tuned versions of _Wav2Vec 2.0_ and WavLM. Results show that our distillation recipe can achieve better robustness than traditional defense techniques, such as TRADES, and improve upon robust distillation methods, such as ARD and RSLAD.

## 2 Variance-Invariance-Covariance Knowledge Distillation

Herein, instead of using the Teacher model to guide how the Student logits should behave, we induce geometric properties on the latent space of the Student via variance-invariance-covariance regularization and the use of multi-view inputs to each model. Although the use of multi-view inputs for KD has already been explored in the literature to increase the environmental robustness of Student models [13, 14], its effect on robust distillation has yet to be determined. Figure 1 depicts a block diagram of the proposed VIC-KD distillation recipe.

In the self-supervised learning literature, joint embedding architectures (JEA) are becoming popular due to their effectiveness in learning latent factors from data [15, 16]. VICReg [17] is one such method, based on preserving the information content of the embeddings while avoiding representational collapse. Here, we extend those ideas to a robust distillation method. First, we sample an utterance \(x\sim\mathcal{D}_{\text{train}}\) and two random transformations \(\{t,t^{\prime}\sim\mathcal{T}\,|\,t\neq t^{\prime}\}\). As depicted in the upper branch of Figure 1, the Teacher model \(T_{\theta}\) is responsible for extracting representations from the speech input. More precisely, a representation \(Z\) is extracted as the weighted sum of the intermediate representations from all Transformer layers, aggregated over the time dimension. The Student branch, on the other hand, receives the other utterance view, generated by \(t^{\prime}\), with an added adversarial perturbation. First, for both inputs, we extract the latent representations of the Student model, denoted by \(H^{\prime}\).
This representation is then fed to a classification head, generating the model logits \(Y^{\prime}\) that attempt to predict the spoken commands. In parallel, \(H^{\prime}\) is fed to a projection head responsible for generating \(Z^{\prime}\), which has the same dimensionality as the Teacher latent representation. Note that, at test time, only the Student encoder \(S_{\theta^{\prime}}\) and the classification head are used.

In addition to the mechanism described above, we now describe the contribution of each of the three terms that compose the regularization loss presented in Equation 1: (1) variance, (2) invariance, and (3) covariance. Following our previous notation, we define \(Z=[z_{1},z_{2},...,z_{n}]\) and \(Z^{\prime}=[z^{\prime}_{1},z^{\prime}_{2},...,z^{\prime}_{n}]\) as the \(d\)-dimensional latent representations generated by the Teacher and Student models for natural and adversarially perturbed input, respectively, for a batch of \(n\) utterances. First, the invariance term consists of the mean squared error between the two latent representations, \(Z\) and \(Z^{\prime}\). Next, the variance term induces the learned latent embeddings to vary across the batch elements, thus avoiding collapse onto the same vector. Lastly, the covariance term prevents the Student model, which already has limited capacity, from encoding redundant features. In our experiments, we consider an equal contribution of each term in the final VICReg loss. The interested reader is referred to [17] for more details about the loss implementation.

\[\mathcal{L}_{\text{VICReg}}=\text{Var}(Z^{\prime})+\text{Inv}(Z,Z^{\prime})+\text{Cov}(Z^{\prime}) \tag{1}\]

Finally, the VIC-KD loss is computed as a convex combination of the TRADES [10] and VICReg losses, controlled through the hyperparameter \(\alpha\), as described in Figure 1.

## 3 Experimental Setup

To train the KWS system, we use the _Google Speech Commands v0.02_ (GSC) dataset [18]. This dataset has about 100k one-second utterances spread across 35 commands, sampled at 16 kHz. We consider two versions of the dataset, one with all commands and another with only 12 classes that include the labels {yes, no, up, down, left, right, on, off, stop, go, unknown, and silence}. The _silence_ class is made up of background noises with no speech, while the _unknown_ label includes utterances uniformly sampled from unused classes. We follow the SpeechBrain [19] recipe, where the dataset is split into train, validation, and test sets in the ratio of 80%, 10%, and 10%, respectively.

Figure 1: Diagram of our proposed method VIC-KD for robust distillation.

In this study, we examine two Teacher models, _Wav2Vec 2.0_ and WavLM, along with two Student models, TC-ResNet8 [20] and a custom XVector [21], with a reduced number of TDNN layers and smaller kernel sizes to fit our resource constraints. Furthermore, as summarized in Table 1, we show the results with conventional training, as well as AT via TRADES. Moreover, we report accuracy for clean speech files, as well as speech files corrupted by the AutoAttack method [22]. Experiments herein rely on these reference values for comparison. Baseline models are trained for 100 epochs, with a batch size of 32 samples, using an Adam optimizer and a learning rate of \(10^{-3}\) that linearly decays to \(10^{-4}\). Conversely, we fine-tune the Teacher model for the KWS task for 10 epochs, and the learning rate is scheduled from \(5\times 10^{-4}\) to \(5\times 10^{-5}\).
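For concreteness, a minimal sketch of the regularizer in Equation 1 is given below. This is our own PyTorch illustration, not the authors' released implementation; the hinge threshold `gamma` and the stabilizing constant `eps` follow the standard VICReg formulation [17], and the function assumes time-pooled \((n,d)\) embedding batches.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z: torch.Tensor, z_prime: torch.Tensor,
                gamma: float = 1.0, eps: float = 1e-4) -> torch.Tensor:
    """Equal-weight Var(Z') + Inv(Z, Z') + Cov(Z') of Equation 1.

    z:       (n, d) Teacher embeddings for one view.
    z_prime: (n, d) Student projections for the perturbed view.
    """
    n, d = z_prime.shape

    # Invariance: mean squared error between Teacher and Student embeddings.
    inv = F.mse_loss(z_prime, z)

    # Variance: hinge on the per-dimension standard deviation of the batch,
    # keeping each latent dimension "alive" to avoid representational collapse.
    std = torch.sqrt(z_prime.var(dim=0) + eps)
    var = F.relu(gamma - std).mean()

    # Covariance: penalize off-diagonal entries of the covariance matrix,
    # discouraging the low-capacity Student from encoding redundant features.
    zc = z_prime - z_prime.mean(dim=0)
    cov = (zc.T @ zc) / (n - 1)
    off_diag = cov - torch.diag(torch.diagonal(cov))
    cov_pen = off_diag.pow(2).sum() / d

    return var + inv + cov_pen
```

In the full recipe, this regularizer would then be combined convexly with the TRADES objective through the hyperparameter \(\alpha\).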
The Teacher model consists of the S3RL encoder, with the weighted-sum and aggregation mechanism described previously, and a simple linear layer that classifies the input speech signal into the desired number of classes. Note that, for VIC-KD, we can discard the Teacher's linear layer since we are interested in the latent representation. For adversarial attacks, we rely on AutoAttack [22], an ensemble method comprised of the APGD [22], APGD-T [22], and FAB [23] attacks. We consider \(\ell_{\infty}\)-bounded attacks with \(\epsilon=1.5\times 10^{-3}\). Three distillation methods, namely KD, ARD, and RSLAD, are used to benchmark the proposed technique. Training is performed for 250 epochs with parameters matching the small baseline models, as specified above. VIC-KD relies on a 10-step PGD-like attack, also with \(\epsilon=1.5\times 10^{-3}\) and a step size of \(3\times 10^{-4}\), to generate the adversarial perturbation for training. Additionally, multi-view inputs are used for the Student and Teacher models, incorporating clean data, noise, reverberation, noise-plus-reverberation, wave chunk dropping, and speed perturbation into the transformations \(\mathcal{T}\).

## 4 Experimental Results and Discussion

### 4.1 Classification accuracy

Table 2 presents the main experimental results, exploring two scenarios: standard vs. robust Teachers as guides. We report clean and robust accuracies for each robust distillation method, with differences relative to the respective baseline in parentheses. Given its low robust accuracy, and since KD is not a robust method, we do not compute its delta against the robust baseline. KD with a standard Teacher outperforms the baselines and the other methods in clean accuracy; however, despite some improvement, its robust accuracy is still below the random-guess level. ARD, on the other hand, significantly enhances overall robust accuracy, but still falls short of the baseline performance under the robust condition. Regardless, we observe an improvement in the overall scenario for RSLAD and VIC-KD, which are built upon TRADES. For the Student model with larger capacity, i.e., the XVector, we observe the first results where the model improves upon the baseline in robust accuracy. With RSLAD distillation, the TC-ResNet achieves results in line with the baseline and improves upon the ARD method. VIC-KD, on the other hand, for both TC-ResNet and XVector, substantially improves upon both robust distillation recipes and the baselines in the clean and under-attack scenarios, thus showing the benefits of the proposed methodology. In fact, VIC-KD with the _Wav2Vec 2.0_ / XVector pair outperforms the baseline by a relative percentage difference of 7.2% in robust accuracy.
Similarly, the proposed model outperforms ARD and RSLAD in attacked scenarios by 8.0% and 6.6%, respectively. On the downside, the training time of the VIC-KD recipe is four times that of RSLAD. Note that, at the inference stage, the time is the same for all distillation recipes, since it depends solely on the architecture of the Student model.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
\multirow{2}{*}{Teacher / Student} & \multicolumn{2}{c}{Standard Teacher} & \multicolumn{2}{c}{Robust Teacher} \\
\cline{2-5}
 & Clean & AutoAttack & Clean & AutoAttack \\
\hline
\multicolumn{5}{c}{KD [6]} \\
\hline
_Wav2Vec 2.0_ / TC-ResNet & 95.80 & 5.54 & 95.61 & 4.49 \\
_Wav2Vec 2.0_ / XVector & 96.77 & 7.59 & 96.68 & 6.84 \\
WavLM / TC-ResNet & 96.20 & 9.29 & 95.70 & 5.11 \\
WavLM / XVector & **96.91** & 8.00 & **96.62** & 7.19 \\
\hline
\multicolumn{5}{c}{ARD [11]} \\
\hline
_Wav2Vec 2.0_ / TC-ResNet & 94.54 & 76.41 (-2.8\%) & 94.92 & 74.16 (-5.1\%) \\
_Wav2Vec 2.0_ / XVector & 95.37 & 79.74 (-0.6\%) & 95.71 & 79.94 (-0.4\%) \\
WavLM / TC-ResNet & 95.17 & 74.13 (-5.1\%) & 95.30 & 76.87 (-2.4\%) \\
WavLM / XVector & 95.27 & 78.33 (-2.0\%) & 95.29 & 74.03 (-6.3\%) \\
\hline
\multicolumn{5}{c}{RSLAD [12]} \\
\hline
_Wav2Vec 2.0_ / TC-ResNet & 94.65 & 78.06 (-1.2\%) & 94.59 & 76.27 (-3.0\%) \\
_Wav2Vec 2.0_ / XVector & 95.37 & 80.79 (+0.5\%) & 95.45 & 81.98 (+1.7\%) \\
WavLM / TC-ResNet & 95.11 & 77.17 (-2.1\%) & 94.11 & 77.99 (-1.3\%) \\
WavLM / XVector & 95.34 & 81.30 (+1.0\%) & 95.69 & 81.90 (+1.6\%) \\
\hline
\multicolumn{5}{c}{VIC-KD (**Ours**)} \\
\hline
_Wav2Vec 2.0_ / TC-ResNet & 95.31 & 83.75 (+4.5\%) & 95.48 & 83.38 (+4.1\%) \\
_Wav2Vec 2.0_ / XVector & **96.50** & **86.12 (+5.8\%)** & 96.29 & 85.33 (+5.0\%) \\
WavLM / TC-ResNet & 95.39 & 83.30 (+4.1\%) & 95.37 & 87.37 (+4.5\%) \\
WavLM / XVector & 95.92 & 86.08 (+5.8\%) & 96.39 & **86.31 (+6.0\%)** \\
\hline \hline
\end{tabular}
\end{table}

Table 2: Results on the GSC dataset with 12 classes for distillation methods, distilling from standard and robust Teachers.

\begin{table}
\begin{tabular}{l c c c c c c c c c c}
\hline \hline
\multirow{3}{*}{Model} & \multirow{3}{*}{\#Params} & \multirow{3}{*}{MMACs} & \multicolumn{4}{c}{Natural Training} & \multicolumn{4}{c}{Robust Training} \\
\cline{4-11}
 & & & \multicolumn{2}{c}{GSC v12} & \multicolumn{2}{c}{GSC v35} & \multicolumn{2}{c}{GSC v12} & \multicolumn{2}{c}{GSC v35} \\
\cline{4-11}
 & & & Clean & AutoAttack & Clean & AutoAttack & Clean & AutoAttack & Clean & AutoAttack \\
\hline
_Wav2Vec 2.0_ & 94.4M & 6160.9 & 99.07 & 9.40 & 97.53 & 2.68 & 97.58 & 93.80 & 95.77 & 90.63 \\
WavLM & 94.4M & 6132.7 & 98.94 & 6.48 & 97.42 & 0.64 & 97.95 & 94.79 & 97.20 & 93.26 \\
XVector & 80.4K & 2.9 & 96.43 & 3.09 & 94.54 & 1.36 & 94.70 & 80.33 & 93.56 & 78.27 \\
TC-ResNet & 67.7K & 1.6 & 94.95 & 2.67 & 93.53 & 0.09 & 93.69 & 79.25 & 92.95 & 77.30 \\
\hline \hline
\end{tabular}
\end{table}

Table 1: Baseline results obtained directly in a supervised setting without knowledge distillation.

Finally, some conclusions can be drawn from the distillation of robust Teachers. First, from the KD experiment, it is crucial to notice that the Student model does not necessarily inherit robustness from the Teacher; hence, specific techniques are needed if the goal is to design adversarially robust models. However, a tradeoff between clean and robust accuracy needs to be decided.
For the robust distillation methods, we observe that using robust Teachers, in general, does not improve the final performance of the Student model; thus, for our specific use case on the GSC dataset, unless one already has off-the-shelf robust Teacher models, it is not worth the extra computation of transforming a standard Teacher into a robust one first and then performing the robust distillation.

### 4.2 The effects of multi-view inputs

Next, we investigate the effect of multi-view (MV) inputs on robust distillation, as shown in Fig. 2. Here, we consider the WavLM / TC-ResNet pair as our Teacher and Student models, respectively. For comparison, a red dashed line shows the baseline robust accuracy of TC-ResNet trained with TRADES without any guidance from Teacher models. As observed in the figure, all distillation methods benefit from multi-view inputs. For instance, ARD and RSLAD, which had a robust accuracy below the baseline, can surpass this mark with MV. Our proposed VIC-KD method can outperform the baseline with or without MV; however, MV helps it achieve the best overall robust accuracy. We hypothesize that the reason behind the performance gain from MV is two-fold. First, MV induces the Student model to learn perturbation-invariant features from speech signals and to better disentangle noisy factors from speech features, thus improving generalization. Second, other works have already discussed that self-supervised learning methods that employ MV force the model to maximize the mutual information between the representations of the two views [17]. We suppose a similar conclusion can be drawn for the KD case, but more studies are still needed.

### 4.3 Expanding the distillation to more classes

Lastly, we further stress the distillation recipes with a larger number of output classes by using all 35 classes of the GSC dataset. In Fig. 3, we show the accuracy results under both clean and AutoAttack conditions, using the WavLM / XVector models as Teacher and Student, respectively. The blue and green dashed lines denote the clean and robust accuracy of the supervised baseline, XVector, trained via TRADES. Similarly to previous findings, KD can surpass the clean accuracy of the baseline, but its robust accuracy is not satisfactory. Among the robust distillation methods, VIC-KD exhibits the overall best performance, achieving the best accuracy and showing the smallest gap between clean and robust accuracies.

## 5 Conclusions

Here, we propose VIC-KD, a methodology to improve the adversarial robustness of distilled models. Our recipe is two-fold: (1) use multi-view inputs to induce the Student to learn perturbation-invariant features, and (2) apply variance-invariance-covariance regularization (VICReg) to the latent representations of the Teacher/Student pair. Experiments on the Google Speech Commands dataset, with 12 and 35 classes, show the proposed methodology outperforming state-of-the-art robust distillation recipes under an \(\ell_{\infty}\)-bounded ensemble of attacks (AutoAttack). Overall, VIC-KD can better balance the tradeoff between clean and robust accuracy, making this technique a strong candidate for developing and deploying trustworthy speech applications on the edge.

Figure 3: Scaling robust distillation to a 35-class GSC dataset and comparing it to clean and robust accuracy baselines.

Figure 2: Effects of multi-view inputs on several distillation methods for the WavLM / TC-ResNet pair. The dashed line represents the robust accuracy of TC-ResNet via TRADES.
2309.03819
On isomorphisms to a free group and beyond
The isomorphism problem for infinite finitely presented groups is probably the hardest among standard algorithmic problems in group theory. Classes of groups where it has been completely solved are nilpotent groups, hyperbolic groups, and limit groups. In this short paper, we address the problem of isomorphism to particular groups, including free groups. We also address the algorithmic problem of embedding a finitely presented group in a given limit group.
Vladimir Shpilrain
2023-09-07T16:20:47Z
http://arxiv.org/abs/2309.03819v5
# On isomorphism to a free group and beyond ###### Abstract. The isomorphism problem for infinite finitely presented groups is probably the hardest among standard algorithmic problems in group theory. It has been completely solved only in the classes of nilpotent groups, hyperbolic groups, and limit groups. In this short paper, we address the problem of isomorphism to particular groups, including free groups and subgroups of limit groups. _In memory of Ben Fine_ ## 1. Introduction The isomorphism problem has been completely solved in the class of finitely generated nilpotent groups in [8]. Later, it was solved in the class of hyperbolic groups [15] (torsion-free case), [5] (general case), although it is difficult (if at all possible) to "computerize" these algorithms, i.e., to code them in one of the known programming languages. Then, the isomorphism problem was also solved in the class of limit groups (a.k.a. fully residually free groups) [4]. In the class of finitely generated one-relator groups, although the isomorphism problem is still open in general, it has been settled for "most" one-relator groups (in a precise formal sense) in [9]. More specifically, for any \(r\geq 2\), there is a subset \(\mathcal{G}\) of elements of the free group \(F_{r}\) such that: (1) \(\mathcal{G}\) has asymptotic density \(1\) in \(F_{r}\); (2) it is algorithmically possible to find out whether or not a given element \(u\in F_{r}\) is in \(\mathcal{G}\); (3) for any two elements \(u,v\in\mathcal{G}\), it is algorithmically possible to find out whether or not two one-relator groups (with the relators \(u\) and \(v\), respectively) are isomorphic. The "next in line" class of groups where the isomorphism problem may be solvable is the class of finitely presented metabelian groups (see [3], Problem (M1)), where "most" algorithmic problems have solutions by now [2]. We note that the isomorphism problem has a reasonable chance to be solvable only in classes of groups where all groups have solvable word problem. This rules out, for example, the class of finitely presented solvable groups of derived length \(\geq 3\) since this class has groups with unsolvable word problem [10]. In this paper, we address an apparently easier problem of isomorphism to a particular group. Using a simple trick, we establish here the following result that appears to be useful in some situations. **Proposition 1**.: _Let \(G\) be a group with \(n\) given generators. Suppose that \(G\) has solvable word problem. Let \(H\) be a finitely presented group, and suppose either \(G\) or \(H\) is Hopfian. If one can decide whether or not there is an epimorphism from \(G\) onto \(H\) and find it as an explicit map on the generators in case it exists, then one can decide whether or not \(G\) is isomorphic to \(H\)._ Recall that a group is _Hopfian_ if any _onto_ endomorphism of this group is also one-to-one, i.e., is an automorphism. Note that in Proposition 1 we do not require that \(H\) has solvable word problem or that \(G\) is finitely presented. Our main goal actually was to address the problem of isomorphism to the (absolutely) free group \(F_{n}\) of rank \(n\). There is a classical result of Adyan [1] saying that given an arbitrary (finitely presented) group \(B\), there is no algorithm that would decide, given any (finitely presented) group \(G\), whether or not \(G\) is isomorphic to \(B\). 
However, if we require solvability of the word problem in \(G\), then the problem of isomorphism of \(G\) to the free group \(F_{n}\) becomes algorithmically solvable: **Theorem 1**.: _Let \(G\) be a finitely presented group with \(m\) generators and an algorithm for solving the word problem in \(G\). Then it is algorithmically possible to find out whether or not \(G\) is isomorphic to a free group of rank \(n\leq m\)._ There is a "detour" that leads to this result, see [7, Corollary 4.3] for an explicit mention of this result. Specifically, there is an algorithm that, given a finitely presented group \(G\) with solvable word problem, decides whether or not \(G\) is a limit group [6]. If not, then \(G\) cannot be isomorphic to a free group because any finitely generated free group is a limit group. If \(G\) is a limit group, then one can use an algorithm, due to [4], that decides if there is an isomorphism between two limit groups. Our proof is more straightforward, but it still uses a "big gun", namely Razborov's work on solving (systems of) equations in a free group. It appears that solvability of equations in groups should inevitably be an important ingredient in any solution of the isomorphism problem for infinite groups. However, this is typically not enough. In our proof of Theorem 1, we actually establish an isomorphism (or non-isomorphism) of the group \(G\) to a subgroup of a given fixed finitely generated free group, and then we use the fact that every nontrivial subgroup of a free group is itself free. This is not the case with hyperbolic groups, say; moreover, a finitely generated subgroup of a hyperbolic group may not even be finitely presented, and this makes our method inapplicable in that situation. One class of groups where our method does work is the class of limit groups since every finitely generated subgroup of a limit group is a finitely presented limit group. Also, finitely generated limit groups are Hopfian because they are residually free and therefore residually finite. The following result may be of interest: **Theorem 2**.: _Let \(G\) be a finitely presented group with a given algorithm for solving the word problem in \(G\). Let \(H\) be a limit group with a given algorithm for solving the word problem in \(H\). Then it is algorithmically possible to find out whether or not \(G\) can be embedded in \(H\)._ ## 2. Proof of Proposition 1 Let \(g_{1},\ldots,g_{n}\) be the given generators of the group \(G\), and \(h_{1},\ldots,h_{n}\) generators of the group \(H\). Needless to say, if there is no epimorphism from \(G\) onto \(H\), then \(G\) and \(H\) are not isomorphic. Now suppose the map \(\varphi:g_{i}\to h_{i}\) can be extended to an epimorphism from \(G\) onto \(H\). Then run two algorithms in parallel: **1.** Algorithm \(\mathcal{A}\) will detect non-isomorphism by looking for an element in the kernel of \(\varphi\). To that effect, it goes over nontrivial elements of \(G\) one at a time (this is possible since the word problem in \(G\) is solvable) and checks if \(\varphi\) takes them to the trivial element of \(H\). Here the reader may say: wait, you do not require that the word problem in \(H\) is solvable. Indeed, but here we only need the "yes" part of the word problem (i.e., detecting that the element is trivial), and this part works in any recursively presented group. Specifically, to detect that \(w=1\) one can go over all finite products of conjugates of defining relators and (graphically) compare them to \(w\). 
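As a toy illustration of this enumeration (ours, not from the paper), the "yes" part of the word problem for a finite presentation can be implemented literally, representing words as strings over the generators with capital letters denoting inverses. The procedure halts exactly on trivial elements and runs forever otherwise, i.e., it is a semi-decision procedure.

```python
from itertools import count, product

def reduce_word(w: str) -> str:
    """Freely reduce a word over {a, A, b, B, ...} (X = inverse of x)."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase() and out[-1] != c:
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def inverse(w: str) -> str:
    return "".join(c.swapcase() for c in reversed(w))

def words_up_to(letters, n):
    """All freely reduced words of length <= n (used as conjugators)."""
    for k in range(n + 1):
        for tup in product(letters, repeat=k):
            w = reduce_word("".join(tup))
            if len(w) == k:
                yield w

def is_trivial(w, relators, letters="aAbB"):
    """Semi-decision: halts (True) iff w = 1 in <letters | relators>.

    Enumerates products of conjugates g r^{+-1} g^{-1} of increasing size
    and graphically compares their free reductions with w.  If w is not
    trivial, the loop never terminates.
    """
    w = reduce_word(w)
    for n in count(1):                    # bound on conjugator length / factors
        conjs = [reduce_word(g + r + inverse(g))
                 for g in words_up_to(letters, n)
                 for r in relators + [inverse(r) for r in relators]]
        for m in range(1, n + 1):         # products of m conjugates
            for tup in product(conjs, repeat=m):
                if reduce_word("".join(tup)) == w:
                    return True

# Example: in Z x Z = <a, b | aba^{-1}b^{-1}>, the commutator is trivial.
print(is_trivial("abAB", ["abAB"]))       # halts with True
```

Dovetailing this procedure with the search for a nontrivial kernel element gives the parallel-algorithms scheme used throughout the proofs; the enumeration cost grows very quickly, so this is an illustration of the principle rather than a practical tool.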
We note that if the kernel of \(\varphi\) is nontrivial, then \(H\) is isomorphic to a proper factor group of \(G\) and therefore cannot be isomorphic to \(G\) since we assumed that either \(G\) or \(H\) was Hopfian. **2.** Algorithm \(\mathcal{B}\) will detect isomorphism by looking for a map \(\psi\), given on the generators \(h_{i}\) of \(H\), such that \(\psi(\varphi(g_{i}))=g_{i}\) for all generators \(g_{i}\) of the group \(G\). To that effect, \(\mathcal{B}\) will go over \(n\)-tuples \((y_{1},\ldots,y_{n})\) of elements of \(G\), one at a time, and define \(\psi\) by \(\psi(h_{i})=y_{i}\). First check if \(\psi\) is a homomorphism by computing \(\psi(r_{j})\) for every defining relator \(r_{j}\) of the group \(H\) and checking if \(\psi(r_{j})=1\). This is possible since \(G\) has solvable word problem, although we do not really need this because again, here we only need the "yes" part of the word problem. If \(\psi\) is a homomorphism, then just check if \(\psi(\varphi(g_{i}))=g_{i}\) for all \(g_{i}\), again using the "yes" part of the word problem in \(G\). If \(H\) is isomorphic to \(G\), then eventually a map \(\psi\) like that will be found. Eventually one of the algorithms, \(\mathcal{A}\) or \(\mathcal{B}\), will stop and give an answer. \(\Box\) We note that the only place in the proof where we used solvability of the word problem in \(G\) was where we were trying to detect non-isomorphism by looking for a nontrivial element in the kernel of \(\varphi\). ## 3. Proof of Theorem 1 Let \(g_{1},\ldots,g_{m}\) be the given generators of the group \(G\), and let \(r_{1},\ldots,r_{s}\) be all defining relators of \(G\). Let \(F_{n}\) be a free group of rank \(n\), and let \(\alpha:g_{i}\to x_{i}\) for some \(x_{i}\in F_{n}\), \(i=1,\ldots,m\). This map extends to a homomorphism \(\alpha:G\to F_{n}\) if and only if \(\alpha(r_{j})=1\) for all \(j=1,\ldots,s\). This translates into a system of \(s\) equations in the group \(F_{n}\). First, we will run Razborov's algorithm \(\mathcal{R}\)[14] to see if this system of equations has a solution tuple \((a_{1},\ldots,a_{m})\) that generates a free subgroup of rank \(r\geq n\) in \(F_{n}\); in other words, if there is an epimorphism of \(G\) onto a free group of rank \(r\geq n\). Denote this free group by \(H_{r}\) (recall that every nontrivial subgroup of a free group is free). If the system has no such solutions, then \(G\) is not isomorphic to a free group of rank \(n\). If there is an epimorphism of \(G\) onto \(H_{r}\), then there is also an epimorphism of \(G\) onto a free group of rank \(n\), denote this group by \(H_{n}\). To find an explicit epimorphism of \(G\) onto \(H_{n}\) (as a map on the generators), one can first find generators of \(H_{n}\) and an epimorphism of \(H_{r}\) onto \(H_{n}\) by using Nielsen reduction, see e.g. [13]. After one finds an epimorphism of \(G\) onto \(H_{n}\), Proposition 1 applies (since any finitely generated free group is Hopfian), and this completes the proof. \(\Box\) We note that Razborov's results [14] were crucial for this proof. We also note that we used not only an algorithm for solving systems of equations in a free group, but also the fact (due to [14] as well) that it is algorithmically possible to find a subgroup of \(F_{n}\) of the maximum rank generated by a solution tuple of the given system of equations. ## 4. Proof of Theorem 2 For the most part, the proof is similar to that of Theorem 1. 
Again, let \(g_{1},\ldots,g_{m}\) be the given generators of the group \(G\), and let \(r_{1},\ldots,r_{s}\) be all defining relators of \(G\). Let \(\alpha:g_{i}\to x_{i}\) for some \(x_{i}\in H\). This map extends to a homomorphism \(\alpha:G\to H\) if and only if \(\alpha(r_{i})=1\) for all \(i=1,\ldots,s\). This translates into a system of \(s\) equations in the group \(H\). There are known algorithms for solving systems of equations in limit groups (see e.g. [11]). Moreover, the results of [11] imply that in a limit group \(H\), different \(m\)-tuples of solutions of a system of equations generate only finitely many subgroups \(H_{i}\) of the group \(H\) up to isomorphism, and a (finite) presentation of each subgroup \(H_{i}\) can be algorithmically computed according to [12, Theorem 30].

We will therefore first run an algorithm from [11] to see if the system of equations mentioned in the first paragraph of this section has solutions. If not, then \(G\) cannot be embedded in \(H\). If it does have solutions, then we find generating \(m\)-tuples \((h_{i1},\ldots,h_{im})\) of subgroups \(H_{i}\). Then, using an algorithm from [11], we find (finitely many) defining relations for each subgroup \(H_{i}\) representing an isomorphism class mentioned in the previous paragraph.

Thus, if \(G\) can be embedded in \(H\), it should be isomorphic to one of the subgroups \(H_{i}\). Suppose there are \(k\) of them. We will then run \(k\) algorithms \(\mathcal{C}_{i}\) in parallel, where each \(\mathcal{C}_{i}\), in turn, is a pair of algorithms \((\mathcal{A}_{i},\mathcal{B}_{i})\) running in parallel. As in the proof of Theorem 1, algorithm \(\mathcal{A}_{i}\) will detect non-isomorphism by looking for a nontrivial element in the kernel of \(\varphi:g_{j}\to h_{ij}\). If the kernel is nontrivial, then the subgroup \(H_{i}\) is isomorphic to a proper factor group of the group \(G\) and therefore cannot be isomorphic to \(G\) itself, because all finitely generated subgroups of a limit group are Hopfian.

At the same time, algorithm \(\mathcal{B}_{i}\) will detect isomorphism of the subgroup \(H_{i}\) to the group \(G\) by looking for a map \(\psi\), given on the generators \(h_{ij}\) of \(H_{i}\), such that \(\psi(\varphi(g_{i}))=g_{i}\) for all generators \(g_{i}\) of the group \(G\). This is done the same way as in the proof of Theorem 1, but there is one more ingredient needed here. To check if \(\psi\) is a homomorphism, we see if \(\psi\) takes each defining relation of \(H_{i}\) to the identity element of \(G\). Eventually one of the algorithms, \(\mathcal{A}_{i}\) or \(\mathcal{B}_{i}\), will stop and give an answer about isomorphism (or non-isomorphism) of \(H_{i}\) to \(G\).

### Acknowledgement

I am grateful to Olga Kharlampovich and Alexei Myasnikov for useful discussions on equations in groups and on various properties of limit groups.
2301.13596
Confinement of fractional excitations in a triangular lattice antiferromagnet
High-resolution neutron and THz spectroscopies are used to study the magnetic excitation spectrum of Cs$_2$CoBr$_4$, a distorted-triangular-lattice antiferromagnet with nearly XY-type anisotropy. What was previously thought of as a broad excitation continuum [Phys. Rev. Lett. 129, 087201 (2022)] is shown to be a series of dispersive bound states reminiscent of "Zeeman ladders" in quasi-one-dimensional Ising systems. At wave vectors where inter-chain interactions cancel at the Mean Field level, they can indeed be interpreted as bound finite-width kinks in individual chains. Elsewhere in the Brillouin zone their true two-dimensional structure and propagation are revealed.
L. Facheris, S. D. Nabi, A. Glezer Moshe, U. Nagel, T. Rõõm, K. Yu. Povarov, J. R. Stewart, Z. Yan, A. Zheludev
2023-01-31T12:52:36Z
http://arxiv.org/abs/2301.13596v2
# Confinement of fractional excitations in a triangular lattice antiferromagnet

###### Abstract

High-resolution neutron and THz spectroscopies are used to study the magnetic excitation spectrum of Cs\({}_{2}\)CoBr\({}_{4}\), a distorted-triangular-lattice antiferromagnet with nearly XY-type anisotropy. What was previously thought of as a broad excitation continuum [Phys. Rev. Lett. 129, 087201 (2022)] is shown to be a series of dispersive bound states reminiscent of "Zeeman ladders" in quasi-one-dimensional Ising systems. At wave vectors where inter-chain interactions cancel at the Mean Field level, they can indeed be interpreted as bound finite-width kinks in individual chains. Elsewhere in the Brillouin zone their true two-dimensional structure and propagation are revealed.

In conventional magnetic insulators the dynamic response is typically dominated by coherent single-particle \(S=1\) excitations, aka magnons or spin waves. In many low-dimensional and highly frustrated quantum spin systems, elementary excitations carry fractional quantum numbers, be they spinons in Heisenberg spin chains [1; 2; 3; 4] or Majorana fermions in the now-famous Kitaev model [5; 6; 7]. The physical excitation spectrum, such as that measured by neutron spectroscopy, is then dominated by broad multi-particle continua [8; 9; 10; 11]. In addition to the continuum, fractional excitations may also form bound states due to attractive interactions between them. A spectacular new phenomenon emerges when interactions are _confining_, i.e. do not fall off with distance, much like the strong force that binds quarks in hadrons [12]. This produces an entire series of bound states inside the resulting potential well. An example is the sequence of domain wall (kink) bound states in quasi-one-dimensional Ising spin chains [13; 14; 15]. The confining potential for this model is linear and results from 3-dimensional couplings, which generate an effective field acting on individual chains [14]. The binding energies are, in supreme mathematical elegance, spaced according to the negative zeros of the Airy function [13; 15]. The best-known experimental examples of such "Zeeman ladder" spectra are the quasi-one-dimensional Ising ferromagnet CoNb\({}_{2}\)O\({}_{6}\) [16] and antiferromagnet (AF) BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [17], as well as the isostructural compound SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [18], where as many as 8 consecutive bound states are observed. Shorter sequences have been found in another prototypical Ising spin chain material, RbCoCl\({}_{3}\) [19].

In the present work we report the observation of a somewhat similar phenomenon in an entirely different type of system, namely in a _quasi-two-dimensional_ distorted-triangular-lattice AF where the effective magnetic _anisotropy is predominantly of XY, rather than Ising character_. That the quintessentially one-dimensional physics of bound kinks survives in two dimensions is remarkable. We argue that it is "rescued" at certain special wave vectors by the intrinsic frustration in triangular lattice geometry. Elsewhere in the Brillouin zone the bound states are no longer restricted to single chains and are to be viewed as 2-dimensional objects propagating on the entire triangular plane.

The material in question, Cs\({}_{2}\)CoBr\({}_{4}\) (space group P\(nma\), \(a=10.19\), \(b=7.73\), \(c=13.51\) Å), is a very interesting \(J-J^{\prime}\) model distorted-triangular-lattice AF [20; 21].
Despite a prominent triangular motif in its structure, it demonstrates certain one-dimensional features, such as a field-induced incommensurate spin density wave with Tomonaga-Luttinger spin liquid type dynamics and a propagation vector controlled by a one-dimensional nesting in the spinon Fermi sea. Its true 2-dimensional nature is manifest in the presence of a robust \(m=1/3\) magnetization plateau, typical of a triangular AF. The model magnetic Hamiltonian is described in detail in Refs. [20; 21].

The key structural features are chains of Co\({}^{2+}\) ions that run along the crystallographic \(b\) axis of the orthorhombic lattice (see Fig. 1 in Ref. [20]). The chains are coupled in the \((bc)\) plane in a zigzag fashion to form a distorted triangular network (inset of Fig. 1(d)). Easy-plane single-ion anisotropy ensures that the low-energy physics of the spin-3/2 Co\({}^{2+}\) ions can be described in terms of effective \(S=1/2\) pseudo-spins. The components of the effective exchange coupling constants are subject to restrictions imposed by the pseudo-spin projection. A simplistic spin-wave analysis of previous inelastic neutron data provided a rough estimate for the nearest-neighbor in-chain AF exchange tensor components: \(J^{XX}\sim J\), \(J^{YY}\sim 1.1J\), \(J^{ZZ}\sim 0.25J\), \(J=0.8\) meV [21]. Here \(Y\) is chosen along the \(b\) crystallographic direction, and \(X\) and \(Z\) alternate between adjacent chains, where anisotropy planes are almost orthogonal. Note that this is practically a _planar exchange anisotropy_, with only a tiny in-plane Ising component to account for the \(\Delta\sim 0.4\) meV spectral gap found in this system. The frustrated _inter-chain coupling_ \(J^{\prime}\) _is significant_, of the order of \(0.45J\), and is of predominantly Ising (\(YY\)) character. Inter-plane interactions \(J^{\prime\prime}\) are not frustrated.

The material orders magnetically in a colinear stripe-type structure, with an ordering wavevector \((0,1/2,1/2)\) (see inset in Fig. 1(d)). The Néel temperature \(T_{\rm N}=1.3\) K allows us to estimate \(J^{\prime\prime}\). If this were the only coupling between chains, with no additional frustration due to \(J^{\prime}\), we could expect \(k_{\rm B}T_{\rm N}\sim 2\Delta/\ln(\Delta/J^{\prime\prime})\) [22]. The actual value of \(J^{\prime\prime}\) must be larger than thus obtained, as the in-plane frustration interferes with the emerging magnetic structure. An upper estimate is given by the mean field picture, where \(k_{\rm B}T_{\rm N}\sim 2J^{\prime\prime}S(S+1)\). This leads us to conclude that \(3\cdot 10^{-4}\) meV \(\lesssim J^{\prime\prime}\lesssim 0.075\) meV \(\ll J\), confirming the quasi-2-dimensional character of the material.

Our previous inelastic neutron scattering experiments indicated that the excitation spectrum in zero applied field is a gapped continuum of states, with intensity concentrated on its lower bound, and a strong dispersion along the chain axis [21]. The central finding of the present work is that this "continuum" is actually a sequence of at least 9 sharp bound states that previously could not be observed due to poor experimental energy resolution.

New neutron data were collected at the LET time-of-flight spectrometer at ISIS (UK), using 2.35 meV incident energy neutrons in repetition-rate-multiplication mode [23]. We used the same 1.16 g single crystal as in [21], mounted on a \({}^{3}\)He-\({}^{4}\)He dilution refrigerator. All measurements were performed at a base temperature of 40 mK.
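As an aside, the quoted window for \(J^{\prime\prime}\) is easy to reproduce numerically; the following few lines are our own illustration of the two estimates, using the values of \(T_{\rm N}\), \(\Delta\), and \(S\) given above.

```python
import numpy as np

k_B = 8.617e-2                  # Boltzmann constant in meV/K
T_N, Delta, S = 1.3, 0.4, 0.5   # K, meV, effective pseudo-spin
kTN = k_B * T_N                 # ~0.112 meV

# Lower estimate: invert k_B*T_N ~ 2*Delta / ln(Delta/J'') for J''
J_low = Delta * np.exp(-2.0 * Delta / kTN)

# Upper estimate (mean field): k_B*T_N ~ 2*J''*S*(S+1)
J_high = kTN / (2.0 * S * (S + 1.0))

print(f"{J_low:.1e} meV  <  J''  <  {J_high:.3f} meV")   # ~3e-4 and ~0.075
```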
In the experiment the sample was rotated \(180^{\circ}\) around the \(\mathbf{a}\) axis in steps of \(1^{\circ}\). The spectra were measured for \(\sim 10\) minutes of counting time at each sample position. We first focus on the one-dimensional AF zone-centers (\(\mathbf{q}\mathbf{b}=0,\pi\)), where inter-chain interactions within the triangular planes _cancel out at the Mean Field-RPA level_, and where spin wave theory predicts no transverse dispersion or intensity modulation of excitations. Fig. 1(a),(b) show constant-\(\mathbf{q}\) cuts through the data at wave vectors \(\mathbf{q}=(0,0.5,0.5)\) and \(\mathbf{q}=(0,1,0.5)\), respectively. A sequence of sharp peaks is clearly apparent in both cases. A fit to the data using empirical Gaussian profiles yields an accurate measure of the peak positions and shows that their widths are essentially resolution-limited. In Fig. 1(a),(b) this is emphasized by the shaded Gaussians representing the computed experimental resolution [24]. Corroborative evidence is also obtained by THz spectroscopy. The experiment was performed with a Martin-Puplett-type interferometer and a \({}^{3}\)He-\({}^{4}\)He dilution refrigerator with base temperature of 150 mK, using a \({}^{3}\)He-cooled Si bolometer at 0.3 K. The sample was a circular plate approximately 1 mm thick in the \(\mathbf{c}\) direction and 4 mm in diameter. THz radiation propagating along the crystal \(\mathbf{c}\) axis was unpolarized and the apodized instrumental resolution was 0.025 meV.

Figure 1: (a)-(b) Neutron scattering intensity (solid symbols) measured at \(T=40\) mK versus energy transfer at the one-dimensional AF zone-centers \(\mathbf{q}=(0,0.5,0.5)\) and \(\mathbf{q}=(0,1,0.5)\), respectively. The data are integrated fully along the \(h\) direction and in \(\pm 0.025\) r.l.u. and \(\pm 0.25\) r.l.u. along \(k\) and \(l\), respectively. Solid lines are fits to a series of Gaussian peaks. Dashed Gaussians represent the calculated experimental energy resolution. Black dotted lines indicate the fitted flat background. (c) Measured terahertz absorption (solid line) versus absorbed photon energy for light propagating along the \(\mathbf{c}\) axis at 0.2 K. Dashed areas highlight the individual components that find counterparts in the neutron spectra. (d) Measured excitation energy plotted versus the value of negative roots of the Airy function. The solid line is a linear fit as described in the text. The blue area highlights the points used for the fit. Inset: cartoons of the magnetic ground state and a representative \(m=3\) 2-kink bound state.

The THz absorption spectrum is shown in Fig. 1(c). It is calculated as a difference of spectra measured at 0.2 K and 2 K, _i.e._ in the magnetically ordered phase and above \(T_{\rm N}\). The THz spectrum appears to have some features absent in the neutron spectrum, but all peaks found in the latter are also present here. The positions of these peaks were determined in Gaussian fits (shaded peaks) in a narrow range \(\pm 0.025\) meV near each peak value. The spacing between the excitation peaks present in both measurements corresponds to confinement in an approximately linear one-dimensional potential. To demonstrate this, we plot the excitation energies deduced from neutron spectra at several wave vectors, as well as the positions of corresponding THz peaks, versus the negative roots \(z_{i}\) of the Airy function in Fig. 1(d).
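Anticipating the quantitative analysis below (Eq. (1)), this Airy-ladder fit takes only a few lines to reproduce. The sketch is our own; since the measured peak positions are not tabulated here, the input energies are generated from the quoted fit values rather than taken from the actual spectra.

```python
import numpy as np
from scipy.special import ai_zeros

# First 9 zeros of the Airy function Ai; ai_zeros returns them as negatives.
z = -ai_zeros(9)[0]

# Bound-state energies m_i (meV), here synthesized from the quoted fit
# values (slope 0.072 meV, m0 = 0.18 meV) purely to illustrate the inversion.
slope_true, m0_true = 0.072, 0.18
m = 2 * m0_true + slope_true * z

# Linear fit m_i = 2*m0 + slope*z_i on the higher modes (i >= 4), where the
# confining potential is effectively linear.
A = np.vstack([z[3:], np.ones_like(z[3:])]).T
slope, intercept = np.linalg.lstsq(A, m[3:], rcond=None)[0]
m0 = intercept / 2

# Confining force from slope = (hbar*lambda)^(2/3) * mu^(-1/3), i.e.
# lambda = slope^(3/2) / sqrt(hbar^2/mu), with hbar^2/mu = 0.39 meV*b^2.
lam = slope**1.5 / np.sqrt(0.39)

print(f"m0 = {m0:.2f} meV, lambda = {lam:.3f} meV/b")   # 0.18 and ~0.031
```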
For a strictly linear attractive potential \(\lambda|x|\) between dispersive particles with kinetic energy \(\epsilon(k)=m_{0}+\hbar^{2}k^{2}/2\mu\) near the band minimum, we expect the excitation energies to be [15; 16] \[m_{i}=2m_{0}+(\hbar\lambda)^{2/3}\mu^{-1/3}z_{i}\ \ \text{with}\ \ i=1,2,\dots. \tag{1}\] In the actual data, the linear dependence is apparent for all but the first few points. As will be addressed in more detail below, this slight deviation indicates that the confining force increases somewhat at short distances. From a linear fit to the higher-energy peaks we can immediately extract the slope 0.072(3) meV and the energy of a single particle \(m_{0}=0.18(1)\) meV (half the intercept). Using the single-particle kinetic mass \(\hbar^{2}/\mu=0.39\) meV\(\times b^{2}\) [24], we estimate the confining force constant \(\lambda=0.031(2)\) meV/\(b\) [25].

The next point that we make is that the observed bound states at the one-dimensional AF zone-center are essentially one-dimensional objects. This is concluded by analyzing the neutron spectra shown in Figs. 2(a),(b). The bound states do not propagate in either transverse direction and thus have an essentially flat dispersion. Moreover, their intensity shows no modulation transverse to the chains, as shown for the first two modes in Figs. 2(e),(f). The measured transverse wave vector dependencies are entirely accounted for (solid lines) by the combined effects of i) the magnetic form factor of Co\({}^{2+}\) and ii) a neutron polarization factor for spin components perpendicular to the chain axis (to the direction of ordered moments in the ground state). This implies that these excitations do not involve cross-chain correlations and are confined to a single chain.

This consideration prompts a simple interpretation of the observed behavior. Similarly to the situation in CoNb\({}_{2}\)O\({}_{6}\) and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), the observed modes are bound states of two kinks (domain walls) in individual chains. Such an excitation is illustrated by the cartoon in the inset of Fig. 1(d). Since the ordered moments are along the \(\mathbf{b}\) crystallographic axis, the excitations are polarized transverse to that direction [21], in agreement with the measurement. The energy \(m_{0}\) is to be associated with that of a single domain wall. As a consistency check, we can compare that to the computed energy of a domain wall in a classical spin chain. Using \(J^{YY}/J^{XX}\sim 1.1\) as estimated for Cs\({}_{2}\)CoBr\({}_{4}\), a simple numerical classical-energy minimization yields \(m_{0}\sim 0.9JS^{2}=0.18\) meV, in excellent agreement with the measured value.

Geometric frustration ensures that at the magnetic zone-center these strings of flipped spins within a single chain incur no energy cost due to interactions with adjacent chains within the triangular lattice. Moreover, any transverse dispersion is suppressed. At the same time, the interaction energy due to unfrustrated inter-layer coupling is proportional to the string length, resulting in confinement. In this simplistic picture, the confining force is \(\lambda=2J^{\prime\prime}S^{2}/b\). This yields an inter-layer coupling constant \(J^{\prime\prime}=0.062(4)\) meV, inside the possible range deduced from \(T_{\rm N}\). The lowest-energy bound state, with energy \(m_{1}\), corresponds to a single spin flip in the chain, in other words, to a single-magnon excitation. The \(i\)-th bound state corresponds to two domain walls separated by a length-\(i\) string of spins that are aligned opposite to the ground state AF spin configuration.
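For illustration, the ladder of Eq. (1) can be generated directly from the Airy-function roots and the fitted parameters quoted above (a minimal sketch added here; note that \((\hbar\lambda)^{2/3}\mu^{-1/3}=\lambda^{2/3}(\hbar^{2}/\mu)^{1/3}\), which reproduces the fitted slope):

```python
import numpy as np
from scipy.special import ai_zeros

m0 = 0.18             # meV, single-kink energy
lam = 0.031           # meV/b, confining force constant
hbar2_over_mu = 0.39  # meV * b^2, inverse kinetic mass

slope = lam ** (2 / 3) * hbar2_over_mu ** (1 / 3)  # ~0.072 meV, the fitted slope

a = ai_zeros(9)[0]       # first nine zeros of Ai(x); they are negative numbers
z = -a                   # the (positive) values z_i entering Eq. (1)
m = 2 * m0 + slope * z   # predicted bound-state energies, Eq. (1)
print(np.round(m, 3))    # the first few levels deviate in the data (see below)
```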
Figure 2: (a)-(d) False color plot of neutron scattering intensity measured at \(T=40\) mK, plotted versus energy transfer and momentum transfer transverse to the crystallographic \(\mathbf{b}\)-axis. Gray areas mask regions of elastic-incoherent scattering. Background subtraction has been performed as described in [24]. The orange regions represent energy-integration windows used to extract the cuts in the panels below. (e)-(h) Intensity-momentum cuts (solid symbols) for the first two modes in the Zeeman ladder. The blue line shows the product of the calculated neutron polarization factor for excitations polarized perpendicular to the direction of the ordered moment and the squared magnetic form factor for Co\({}^{2+}\).

The deviation from linear-potential behavior at low energies is also readily explained by this picture. Since the material is almost planar, the domain walls are not confined to a single bond as in the ideal Ising case, but have a characteristic size \(l\) [26]. We can estimate that quantity in a classical spin chain using the above-mentioned anisotropy parameters: \(l\sim 2b\). The energy of the first few bound states is thus modified due to a physical overlap of the two bounding domain walls. Experimentally, the bound state energy is reduced, which corresponds to an additional attractive interaction between kinks. Once the kinks are separated by a distance of more than \(\sim l\), this interaction becomes negligible and the confinement potential becomes linear, originating only from interlayer interactions.

Away from the one-dimensional AF zone-centers, the excitations are considerably more complex. This is very clear in the longitudinal dispersion of the bound states shown in Fig. 3(a),(b). Other than at \(\mathbf{q}\mathbf{b}=0,\pi\) (\(k=0,1/2\)) the \(m_{1}\) mode splits into two branches, each with an asymmetric dispersion relation. In fact, the \(m_{1}\) state at \(\mathbf{q}\mathbf{b}=\pi\) seems to be continuously connected to the \(m_{2}\) excitations at \(\mathbf{q}\mathbf{b}=2\pi\) (\(k=1\)) and vice versa. Fitting the dispersion of the strongest low-energy mode in the vicinity of \(\mathbf{q}\mathbf{b}=\pi\) to a Lorentz-invariant relativistic form \[\left(\hbar\omega_{\mathbf{q}}\right)^{2}=\hbar^{2}\Delta\left(\mathbf{q}\mathbf{b}-\pi\right)^{2}/\mu+\Delta^{2}, \tag{2}\] yields the value of the kinetic mass quoted above.

A look at the intensities reveals that other than at the special wave vectors, the bound states can no longer be seen as strings in a single chain, but are "dressed" with correlations extending to several neighboring chains in the triangular plane. This conclusion is reached from Fig. 2(c),(d), which show a transverse cut of the spectrum at \(\mathbf{q}\mathbf{b}=5\pi/4\) and \(\mathbf{q}\mathbf{b}=3\pi/2\), respectively. As plotted in Fig. 2(g),(h), the measured intensity of the first two modes now shows a much steeper transverse wave vector dependence than computed from just the polarization and form factors (solid line). The second mode even seems to show signs of intensity oscillations. Our data reveal that away from the special wave vectors the bound states also _propagate_ in two dimensions, albeit with a small bandwidth.
Indeed, in Fig. 2(d) one can see that at \(\mathbf{q}\mathbf{b}=3\pi/2\) the bound states develop a non-zero dispersion along the \(\mathbf{c}^{*}\) direction, in contrast to what is seen at \(\mathbf{q}\mathbf{b}=0,\pi\). Although the bandwidth of the transverse dispersion, \(0.08\) meV, is at the limit of our experimental resolution, qualitatively one can say that \(\mathbf{q}\mathbf{c}=0,4\pi\) are dispersion minima for the \(m_{1}\) mode, while the maximum is at \(\mathbf{q}\mathbf{c}=2\pi\). That periodicity is consistent with having two chains per unit cell along the \(\mathbf{c}\)-axis direction in the crystal structure.

Overall, the differences between our results and spectra of Ising spin chains [16; 17] are striking. In the latter, all bound states, including the first one, are much less dispersive than the lower edge of the entire spectrum, which approximately corresponds to the lower edge of the two-kink continuum in the absence of long-range order. As a result, each bound state persists only in a restricted area in the Brillouin zone. In contrast, in Cs\({}_{2}\)CoBr\({}_{4}\) a few of the lower-energy bound states are highly dispersive and span across the entire zone.

In summary, we demonstrate that "Zeeman ladders" of confined fractional excitations can exist in a _bona fide_ quasi-two-dimensional system. These states are inherently related to those in the one-dimensional model, as revealed at special wave vectors where two-dimensional interactions are canceled by geometric frustration. However, elsewhere in reciprocal space their true two-dimensional character is manifest. Once again, the distorted triangular lattice model provides a link between one- and two-dimensional quantum magnetism.

This work was supported by a MINT grant of the Swiss National Science Foundation. We acknowledge support by the Estonian Research Council grants PRG736 and MOBJD1103, and by European Regional Development Fund Project No. TK134. Experiments at the ISIS Neutron and Muon Source were supported by beamtime allocation RB2210048 from the Science and Technology Facilities Council [27].

Figure 3: (a)-(b) False color plot of neutron scattering intensity measured at \(T=40\) mK, plotted versus energy transfer and momentum transfer along \(\mathbf{q}=(0,k,0.5)\) and \(\mathbf{q}=(0,k,1)\), respectively. The data were fully integrated along \(h\), and in the range \(\pm 0.25\) r.l.u. along \(l\) around the central value. The gray areas mask regions where the incoherent scattering dominates the signal. Background subtraction has been performed as described in [24].

## References
* [1] L. D. Faddeev and L. A. Takhtajan (1981) What is the spin of a spin wave? Phys. Lett. A **85**, 375.
* [2] M. B. Stone, D. H. Reich, C. Broholm, K. Lefmann, C. Rischel, C. P. Landee, and M. M. Turnbull (2003) Extended Quantum Critical Phase in a Magnetized Spin-\(\frac{1}{2}\) Antiferromagnetic Chain. Phys. Rev. Lett. **91**, 037205.

Supplemental Material for "Confinement of fractional excitations in a triangular lattice antiferromagnet"

L. Facheris [email protected] Laboratory for Solid State Physics, ETH Zurich, 8093 Zurich, Switzerland; S. D. Nabi Laboratory for Solid State Physics, ETH Zurich, 8093 Zurich, Switzerland; A. Glezer Moshe National Institute of Chemical Physics and Biophysics, Akadeemia tee 23, 12618 Tallinn, Estonia; U. Nagel National Institute of Chemical Physics and Biophysics, Akadeemia tee 23, 12618 Tallinn, Estonia; T. Room National Institute of Chemical Physics and Biophysics, Akadeemia tee 23, 12618 Tallinn, Estonia; K. Yu. Povarov Department of Physics, ETH Zurich, 8093 Zurich, Switzerland; J. R. Stewart ISIS Neutron and Muon Source, Rutherford Appleton Laboratory, Didcot, OX11 0QX, United Kingdom; Z. Yan Laboratory for Solid State Physics, ETH Zurich, 8093 Zurich, Switzerland; A. Zheludev Laboratory for Solid State Physics, ETH Zurich, 8093 Zurich, Switzerland

November 3, 2021

###### Abstract
This Supplemental Material provides further details supporting the main text that may be of interest to the specialized reader. In particular, the resolution calculations, additional inelastic data, the background subtraction for the neutron spectroscopic measurements, and an estimate of the kink's kinetic mass are presented.

###### Contents
* I Determination of energy resolution for the LET experiment
* II Additional cuts used for Fig. 1(d)
* III Background subtraction procedure for LET data
* IV Estimating a kink's kinetic mass \(\mu\).

## I Determination of energy resolution for the LET experiment

The neutron scattering data presented in the main text were obtained on the direct-geometry time-of-flight LET spectrometer at ISIS (UK) [1]. The instrument was operated in the high-flux mode, with a chopper resolution frequency of 210 Hz and a pulse remover frequency of 140 Hz. A phase delay time for chopper 2 of 87000 \(\mu\)s was introduced to avoid contamination of the main incoming channel \(E_{i}=2.35\) meV by slower neutrons.
The resolution calculations were performed with the PyChop interface of Mantid Workbench [2]. The obtained resolution profile is shown in SUPP. FIG. 1. The widths of the shaded Gaussian profiles in Fig. 1(a),(b) of the main text were calculated based on the fitted peak positions and the data in SUPP. FIG. 1.

Figure 1: Calculated energy resolution (solid line) versus neutron energy transfer for the spectrometer settings listed in the text. Dotted lines mark the positions \(m_{i}\) at \(\mathbf{q}=(0,0.5,0.5)\) as obtained from Fig. 1(a) of the main text.

## II Additional cuts used for Fig. 1(d)

The additional cuts at \(\mathbf{q}=(0,0.5,1)\) and \(\mathbf{q}=(0,1,1)\) (not shown in the main text) are displayed in SUPP. FIG. 2. The fit is performed in full analogy to Fig. 1(a),(b), as described in the main text. The peak positions extracted from SUPP. FIG. 2(a),(b) are plotted in Fig. 1(d) of the main text.

## III Background subtraction procedure for LET data

The inelastic neutron scattering data presented in Fig. 2 and Fig. 3 of the main text are background subtracted. Although the dataset was rather clean, a background subtraction similar to that in [3] was nonetheless performed. In this section the model adopted to describe the background is outlined. The analysis was performed using the Horace software package [4]. SUPP. FIG. 3 shows raw data corresponding to Fig. 3 of the main text. Strong sharp lines at the edges of the dataset below 0.4 meV are known spurious features originating from scattering from the sample environment employed. The total background was modeled assuming no magnetic scattering below the gap and above the top of the spectrum. Thus, the background dataset is identical to the original data for \(\hbar\omega\leq 0.34\) meV and \(\hbar\omega\geq 1.28\) meV (see dashed horizontal lines in SUPP. FIG. 3 for the background regions projected on these particular cuts). In the intermediate energy region, momentum-dependent boxes were constructed as shown in SUPP. FIG. 3 and numerically interpolated over the total explored \((\mathbf{q},\hbar\omega)\)-space. The background obtained in this way was then subtracted from the original data point by point.

## IV Estimating a kink's kinetic mass \(\mu\).

Near its minimum at the one-dimensional wave vector \(k_{0}=\frac{\pi}{b}\), the dispersion relation for a single kink can be approximated as \[\epsilon_{k}=m_{0}+\frac{\hbar^{2}}{2\mu}(k-k_{0})^{2}.\] (S.1) The parameter \(\mu\) is the kinetic "mass" of this quasiparticle. We can extract it from the experimentally measured spectrum of two-kink excitations. For a two-kink state, energy-momentum conservation dictates \[\hbar\omega_{q}^{(2\text{-kink})}=\epsilon_{k}+\epsilon_{q-k}=2m_{0}+\frac{\hbar^{2}}{2\mu}\left[(k-k_{0})^{2}+(q-k+k_{0})^{2}\right].\] (S.2) Minimizing (S.2) with respect to the "hidden" quasi-momentum \(k\), we find that the lower boundary of the two-particle continuum lies at \(k=q\). Thus, the lowest magnon-like dispersion is given by \[\hbar\omega_{q}=2m_{0}+\frac{\hbar^{2}}{2\mu}\left[(q-k_{0})^{2}+k_{0}^{2}\right].\] (S.3) Near the minimum wavevector \(q_{0}=k_{0}\rightarrow\pi/b\), we find that the curvature of the parabola-like dispersion is actually the same for a single kink and the lowest bound state.
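The algebra connecting (S.1)-(S.3) is easy to verify symbolically; the following minimal sketch (an added illustration) checks that substituting \(k=q\) into (S.2) reproduces (S.3), and that (S.3) has the same curvature in \(q\) as (S.1) has in \(k\), namely \(\hbar^{2}/\mu\):

```python
import sympy as sp

q, k, k0, m0, hbar, mu = sp.symbols('q k k0 m0 hbar mu', positive=True)

eps = lambda p: m0 + hbar**2 / (2 * mu) * (p - k0)**2                      # (S.1)
two_kink = 2*m0 + hbar**2 / (2*mu) * ((k - k0)**2 + (q - k + k0)**2)       # (S.2)
lower = 2*m0 + hbar**2 / (2*mu) * ((q - k0)**2 + k0**2)                    # (S.3)

assert sp.simplify(two_kink.subs(k, q) - lower) == 0                       # k = q gives (S.3)
assert sp.simplify(sp.diff(lower, q, 2) - sp.diff(eps(k), k, 2)) == 0      # same curvature
print("(S.3) recovered; curvature hbar^2/mu matches the single kink")
```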
2306.17743
Quantum paradoxical knowledge
We generalize the quantum "pigeonhole paradox" to quantum paradoxes involving arbitrary types of particle relations, including orderings, functions and graphs.
Benjamin Schumacher, Michael D. Westmoreland
2023-06-30T15:51:38Z
http://arxiv.org/abs/2306.17743v1
# Quantum paradoxical knowledge

###### Abstract
We generalize the quantum "pigeonhole paradox" to quantum paradoxes involving arbitrary types of particle relations, including orderings, functions and graphs.

## I Introduction

Aharonov et al. [1] showed that quantum particles can apparently violate the "pigeonhole principle". Ordinarily, if three particles ("pigeons") are placed in two boxes ("pigeonholes"), at least two of the particles must occupy the same box. However, in a quantum scenario involving both state preparation and post-selection, we may be able to infer with certainty that no particular pair is found occupying the same box. An example illustrates the idea. Each particle may occupy either the left or the right box (states \(\left|L\right>\) and \(\left|R\right>\)), or of course any superposition of these. For example, a particle may be in states \[\begin{split}\left|\pm\right>&=\frac{1}{\sqrt{2}} \left(\left|L\right>\pm\left|R\right>\right),\\ \left|\pm i\right>&=\frac{1}{\sqrt{2}}\left(\left|L \right>\pm i\left|R\right>\right),\end{split} \tag{1}\] and so on. We prepare the three particles in the product state \[\left|\Psi\right>=\left|+_{1},+_{2},+_{3}\right>. \tag{2}\] At an intermediate time, one of three possible projective measurements is performed, testing whether a given pair of particles occupy the same box. The projections for these measurements may be written \(\mathbf{\Pi}_{12}\), \(\mathbf{\Pi}_{23}\) and \(\mathbf{\Pi}_{13}\). These projections commute, and in this sense the observables are compatible. However, the physical procedure for testing \(\mathbf{\Pi}_{12}\) only is distinct from the procedures for \(\mathbf{\Pi}_{23}\) only or \(\mathbf{\Pi}_{13}\) only, and so they are complementary measurements. If we measure the pair \((i,j)\) and find an affirmative answer, the (non-normalized) resulting state is \(\mathbf{\Pi}_{ij}\left|\Psi\right>\). Finally, a measurement is made on all three particles, and we post-select on the eigenstate \[\left|\Phi\right>=\left|+i_{1},+i_{2},+i_{3}\right>. \tag{3}\] This final outcome is possible no matter what intermediate measurement was made. However, for any pair \(i\) and \(j\) we find that \[\left<\Phi\right|\mathbf{\Pi}_{ij}\left|\Psi\right>=0. \tag{4}\] Therefore, in those cases where the final state is found to be \(\left|\Phi\right>\), we are in a position to know with certainty that particles \(i\) and \(j\) were not found in the same box, for any distinct values of \(i\) and \(j\). The preparation of \(\left|\Psi\right>\) and post-selection of \(\left|\Phi\right>\) give us knowledge about pairs of particles that cannot be reconciled with any particular distribution of the particles among the boxes. We may say that we have _paradoxical knowledge_ of the relations among the particles in the boxes.

Here we generalize the idea of paradoxical knowledge to other types of relations including orderings, functions and graphs, and we show how such knowledge may arise in quantum systems. Our results provide a new framework for examining the non-classical information provided by quantum measurements and suggest new experiments for small quantum computers.
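Equations (2)-(4) are easy to verify numerically; the following minimal sketch (an added illustration, not part of the original argument) encodes box states as bit strings and checks that \(\left<\Phi\right|\mathbf{\Pi}_{ij}\left|\Psi\right>\) vanishes for every pair while \(\left<\Phi\middle|\Psi\right>\) does not:

```python
import numpy as np
from itertools import product

# Box states of the three particles are encoded as bit strings, 0 = L, 1 = R.
basis = list(product((0, 1), repeat=3))           # the 8 computational states

plus  = np.array([1, 1]) / np.sqrt(2)             # |+>  = (|L> + |R>)/sqrt(2)
plusi = np.array([1, 1j]) / np.sqrt(2)            # |+i> = (|L> + i|R>)/sqrt(2)

Psi = np.array([np.prod([plus[b]  for b in bits]) for bits in basis])  # Eq. (2)
Phi = np.array([np.prod([plusi[b] for b in bits]) for bits in basis])  # Eq. (3)

def Pi(i, j):       # projector: particles i and j (0-based) in the same box
    return np.diag([1.0 if bits[i] == bits[j] else 0.0 for bits in basis])

print(abs(Phi.conj() @ Psi))       # nonzero: the post-selected result can occur
for i, j in [(0, 1), (1, 2), (0, 2)]:
    # all zero (up to rounding): Eq. (4) for every pair
    print((i + 1, j + 1), abs(Phi.conj() @ Pi(i, j) @ Psi))
```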
## II Framework

We begin by outlining a simplified framework for our analysis. There is an underlying finite set \(P\) of objects called particles, which we denote \(1,2,\ldots,N\). A binary relation \(r\) on \(P\) is a subset of \(P\times P\), and we say that \(i\) and \(j\) have the \(r\)-relation provided \((i,j)\in r\). We also specify a particular collection \(R\) of relevant binary relations on the particles. For example, we may wish to restrict our attention to total orderings on \(P\), or to equivalence relations on \(P\), or to some other class.

We have _knowledge_ of the relation if, for some pairs \((i,j)\), we can either affirm or deny with certainty that \(i\) and \(j\) are found to be related. We can describe this by disjoint sets of pairs \(A\) and \(D\). If \((i,j)\in A\), the affirmable set, then we are certain that \(i\) is related to \(j\); if \((i,j)\in D\), the deniable set, we are certain that \(i\) is not related to \(j\). Our knowledge is _paradoxical_ if it is not consistent with any particular relation in \(R\). That is, \([A,D]\) represents paradoxical knowledge if there does not exist \(r\in R\) such that

* \((i,j)\in A\) implies that \((i,j)\in r\), and
* \((i,j)\in D\) implies that \((i,j)\notin r\).

The sets \(A\) and \(D\) represent a partial description of some binary relation on \(P\), and this is paradoxical if the relation cannot be in \(R\).

How can we arrive at quantum paradoxical knowledge? Each relation \(r\in R\) is associated with a quantum state \(\left|r\right\rangle\) of our system. We prepare a fixed initial state \(\left|\Psi\right\rangle\) that is a uniform superposition of all of the relevant relations: \[\left|\Psi\right\rangle=\sum_{r\in R}\left|r\right\rangle. \tag{5}\] (For convenience, we ignore normalization of our states, since we do not need to calculate probabilities other than 0 or 1.) At an intermediate time, we make a projective measurement of whether a particular pair \((i,j)\) is related. The projection has the property that \[\boldsymbol{\Pi}_{ij}\left|r\right\rangle=\left\{\begin{array}{cc}\left|r \right\rangle&(i,j)\in r\\ 0&(i,j)\notin r\end{array}\right. \tag{6}\] As before, all such projections commute, but a measurement of \(\boldsymbol{\Pi}_{ij}\) only is complementary to measurements for other possible pairs. Finally, we make a measurement on the system, one of whose eigenstates is \(\left|\Phi\right\rangle\). This can be written (in "bra" form) as \[\left\langle\Phi\right|=\sum_{r\in R}\phi_{r}\left\langle r\right|, \tag{7}\] for some coefficients \(\phi_{r}\). We post-select on the final result \(\left|\Phi\right\rangle\) for this measurement.

Suppose our intermediate measurement is \(\boldsymbol{\Pi}_{ij}\). What sort of knowledge does our preparation and post-selection procedure provide? We note that \(\left\langle\Phi\right|\boldsymbol{\Pi}_{ij}\left|\Psi\right\rangle+\left\langle \Phi\right|\left(\boldsymbol{1}-\boldsymbol{\Pi}_{ij}\right)\left|\Psi \right\rangle=\left\langle\Phi\middle|\Psi\right\rangle\). We therefore require that \(\left\langle\Phi\middle|\Psi\right\rangle\neq 0\), so that the post-selected result \(\left|\Phi\right\rangle\) is always possible given the choice of \(\boldsymbol{\Pi}_{ij}\). There are three possibilities:

* If \(\left\langle\Phi\right|\boldsymbol{\Pi}_{ij}\left|\Psi\right\rangle=\left\langle \Phi\middle|\Psi\right\rangle\neq 0\), but \(\left\langle\Phi\right|\left(\boldsymbol{1}-\boldsymbol{\Pi}_{ij}\right)\left| \Psi\right\rangle=0\), then we can affirm with certainty that \(i\) and \(j\) were found to be related. Then \((i,j)\in A\), the affirmable set.
* If \(\left\langle\Phi\right|\boldsymbol{\Pi}_{ij}\left|\Psi\right\rangle=0\), but \(\left\langle\Phi\right|\left(\boldsymbol{1}-\boldsymbol{\Pi}_{ij}\right)\left| \Psi\right\rangle=\left\langle\Phi\middle|\Psi\right\rangle\neq 0\), then we can deny with certainty that \(i\) and \(j\) were found to be related. Then \((i,j)\in D\), the deniable set.
* If both \(\left\langle\Phi\right|\boldsymbol{\Pi}_{ij}\left|\Psi\right\rangle\neq 0\) and \(\left\langle\Phi\right|\left(\boldsymbol{1}-\boldsymbol{\Pi}_{ij}\right)\left| \Psi\right\rangle\neq 0\), then we cannot infer the intermediate measurement result with certainty, and so \((i,j)\) is in neither \(A\) nor \(D\).

We illustrate our framework via the pigeonhole example. In our simplified scheme, only one system state is associated with each relation \(r\). This is different from the original quantum pigeonhole construction, in which different states (e.g., \(\left|L,L,R\right\rangle\) and \(\left|R,R,L\right\rangle\)) have identical togetherness relations among the particles. Nevertheless, we can capture the essential idea. The four relevant togetherness relations for three particles are shown in Figure 1. In relation \(t\), all particles are in the same box. In relation \(a_{i}\), particle \(i\) is alone in one box and the other two particles are together in the other box. Thus, for example, we find that \(\boldsymbol{\Pi}_{12}\left|t\right\rangle=\left|t\right\rangle\) and \(\boldsymbol{\Pi}_{12}\left|a_{3}\right\rangle=\left|a_{3}\right\rangle\), but \(\boldsymbol{\Pi}_{12}\left|a_{1}\right\rangle=\boldsymbol{\Pi}_{12}\left|a_{2} \right\rangle=0\).

Figure 1: Togetherness relations for three particles in two boxes.

The initial prepared state is \(\left|\Psi\right\rangle=\left|t\right\rangle+\left|a_{1}\right\rangle+\left|a_{2 }\right\rangle+\left|a_{3}\right\rangle\), and the final post-selected state is \(\left\langle\Phi\right|=\left\langle t\right|-\left\langle a_{1}\right|-\left\langle a_{2}\right|-\left\langle a_{3}\right|\). Computing \(\left\langle\Phi\right|\boldsymbol{\Pi}_{ij}\left|\Psi\right\rangle\) for the various pairs, we find that _any_ pair of particles would certainly be found to be unrelated, i.e., in different boxes. That is, \(D=\{(1,2),(2,3),(1,3)\}\). This knowledge is inconsistent with any relation in \(R\) and thus constitutes quantum paradoxical knowledge.
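A quick numerical check (an added sketch) confirms this bookkeeping; the sign pattern of \(\left\langle\Phi\right|\) used here is the one forced by requiring \(\left\langle\Phi\right|\boldsymbol{\Pi}_{ij}\left|\Psi\right\rangle=0\) for every pair while \(\left\langle\Phi\middle|\Psi\right\rangle\neq 0\):

```python
import numpy as np

# Basis order: (t, a1, a2, a3).
Psi = np.array([1, 1, 1, 1])       # |Psi> = |t> + |a1> + |a2> + |a3>
Phi = np.array([1, -1, -1, -1])    # <Phi| = <t| - <a1| - <a2| - <a3|

def Pi(i, j):
    # particles i and j are together in t and in a_k, k being the excluded one
    k = ({1, 2, 3} - {i, j}).pop()
    d = np.zeros(4); d[0] = 1; d[k] = 1
    return np.diag(d)

print(Phi @ Psi)                          # -2: post-selection is possible
for pair in [(1, 2), (2, 3), (1, 3)]:
    print(pair, Phi @ Pi(*pair) @ Psi)    # 0 for every pair, so D = all pairs
```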
## III Penrose Stairs

Again we have three particles, each of which can occupy one of three non-degenerate energy levels. We are guaranteed that exactly one particle is in each level. There are thus six possible states, each of which corresponds to a relation among the energies of the three particles. For example, in state \(\left|123\right\rangle\) we have \(E_{1}<E_{2}<E_{3}\) while in state \(\left|312\right\rangle\) we have \(E_{3}<E_{1}<E_{2}\). The relations in \(R\) are the total orderings (in energy) on the particle set \(\{1,2,3\}\). Each intermediate measurement \(\boldsymbol{\Pi}_{ij}\) tests whether \(E_{i}<E_{j}\). Thus \(\boldsymbol{\Pi}_{12}\left|312\right\rangle=\left|312\right\rangle\), but \(\boldsymbol{\Pi}_{12}\left|213\right\rangle=0\), etc. As usual, the initial prepared state is \[\left|\Psi\right\rangle=\left|123\right\rangle+\left|231\right\rangle+\left|312 \right\rangle+\left|132\right\rangle+\left|213\right\rangle+\left|321\right\rangle. \tag{8}\] The final post-selected state is \[\left\langle\Phi\right|=2\left\langle 123\right|+2\left\langle 231\right|+2\left\langle 3 12\right|-\left\langle 132\right|-\left\langle 213 \right|-\left\langle 321\right|. \tag{9}\] That is, \(\phi_{123}=\phi_{231}=\phi_{312}=2\) and \(\phi_{132}=\phi_{213}=\phi_{321}=-1\). The inner product \(\left\langle\Phi\middle|\Psi\right\rangle=3\).

Suppose the intermediate measurement is \(\boldsymbol{\Pi}_{12}\). Then \[\left\langle\Phi\right|\boldsymbol{\Pi}_{12}\left|\Psi\right\rangle=\phi_{123} +\phi_{312}+\phi_{132}=3, \tag{10}\] which implies that this intermediate measurement would find with certainty that \(E_{1}<E_{2}\). But since both \(\left|\Psi\right\rangle\) and \(\langle\Phi|\) are invariant under cyclic permutation of the particles, it immediately follows that a \(\mathbf{\Pi}_{23}\) measurement would yield that \(E_{2}<E_{3}\) and that a \(\mathbf{\Pi}_{31}\) measurement would yield \(E_{3}<E_{1}\). Including all possible particle comparisons, our knowledge is represented by \(A=\{(1,2),(2,3),(3,1)\}\) and \(D=\{(2,1),(3,2),(1,3)\}\). This is plainly paradoxical.

The situation is analogous to the famous Penrose stairs [2], a visual paradox in which a staircase continually ascends in one direction, but nevertheless appears to form a closed path. In Figure 2, we apparently ascend the stairs (increasing potential energy) to go from 1 to 2, or from 2 to 3, or from 3 to 1. This is precisely the type of quantum paradoxical knowledge that our preparation and post-selection provide.

Figure 2: Three particles on the Penrose stairs.

## IV Paradoxical Functions

Our example of three particles distributed among three energy levels can also represent a function relation. Recall that relation \(r\) is a function if, for every \(i\), there exists a unique \(j\) such that \((i,j)\in r\). (We write this \(r(i)=j\).) The state we denoted by \(\left|ijk\right\rangle\) represents a function on three integers in which \(r(1)=i\), \(r(2)=j\) and \(r(3)=k\). We restrict ourselves to one-to-one functions and prepare our system in the state \(\left|\Psi\right\rangle\) of Equation 8. At the intermediate time, we make one of two possible measurements, either determining whether \(r(1)=2\) or whether \(r(1)=3\). Thus, \[\begin{split}\mathbf{\Pi}_{r(1)=2}\left|\Psi\right\rangle& =\left|231\right\rangle+\left|213\right\rangle,\\ \mathbf{\Pi}_{r(1)=3}\left|\Psi\right\rangle&=\left| 312\right\rangle+\left|321\right\rangle.\end{split} \tag{11}\] Finally, we post-select on the state \[\left\langle\Phi\right|=-\left\langle 123\right|+\left\langle 231\right|+\left\langle 3 12\right|-\left\langle 132\right|+\left\langle 213 \right|+\left\langle 321\right|. \tag{12}\] We observe that \(\left\langle\Phi\right|\mathbf{\Pi}_{r(1)=2}\left|\Psi\right\rangle=\left\langle \Phi\right|\mathbf{\Pi}_{r(1)=3}\left|\Psi\right\rangle=\left\langle\Phi| \Psi\right\rangle=2\). Preparation and post-selection have thus yielded the paradoxical knowledge that \(r(1)=2\) and \(r(1)=3\), contradicting the uniqueness of the function value \(r(1)\).

## V Unconditionally paradoxical knowledge

So far, our examples of quantum paradoxical knowledge have required post-selection on a particular outcome of the final measurement. Can we construct an example in which _every_ outcome of the final measurement yields paradoxical knowledge? We can. In fact, the original quantum pigeonhole example works in this way, as pointed out in [1]. If the final measurement resolves the \(\{\left|+i\right\rangle,\left|-i\right\rangle\}\) basis for each particle, there are 8 possible outcomes. Two of these yield violations of the pigeonhole principle, in which no two particles can be found together. The remaining six outcomes lead to a different type of paradoxical knowledge. For example, the result \(\left|+i_{1},+i_{2},-i_{3}\right\rangle\) yields \(A=\{(1,3),(2,3)\}\) and \(D=\{(1,2)\}\). Our knowledge of the togetherness relation is not transitive, and thus is paradoxical.
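All eight cases can be tabulated with a few lines of linear algebra; the sketch below (an added illustration) classifies every pair for every final outcome and reproduces the assignments just described:

```python
import numpy as np
from itertools import product

basis = list(product((0, 1), repeat=3))           # 0 = L, 1 = R
plus = np.array([1, 1]) / np.sqrt(2)
final = {'+i': np.array([1, 1j]) / np.sqrt(2),    # |+i>
         '-i': np.array([1, -1j]) / np.sqrt(2)}   # |-i>

Psi = np.array([np.prod([plus[b] for b in bits]) for bits in basis])

def Pi(i, j):
    return np.diag([1.0 if bits[i] == bits[j] else 0.0 for bits in basis])

pairs = [(0, 1), (0, 2), (1, 2)]
for out in product(('+i', '-i'), repeat=3):
    Phi = np.array([np.prod([final[s][b] for s, b in zip(out, bits)])
                    for bits in basis])
    tot = Phi.conj() @ Psi
    A = [(i + 1, j + 1) for i, j in pairs
         if abs(Phi.conj() @ Pi(i, j) @ Psi - tot) < 1e-12]
    D = [(i + 1, j + 1) for i, j in pairs
         if abs(Phi.conj() @ Pi(i, j) @ Psi) < 1e-12]
    print(out, "A =", A, "D =", D)
# ('+i','+i','-i') gives A = [(1,3), (2,3)] and D = [(1,2)]; the two
# all-equal outcomes give D = all three pairs, the pigeonhole violations.
```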
Let us construct our own example of this sort of unconditionally paradoxical knowledge. The relations in \(R\) are represented by star graphs on the \(N\) particles. A single particle lies in the center, and it is related to each of the others, but none of the others are related to one another. We designate a particular star graph by identifying the center particle. There are \(N\) such relations in \(R\). The ones for \(N=4\) are shown in Figure 3.

Figure 3: Star graphs on 4 particles.

Each relation \(r_{i}\in R\) corresponds to a state \(\left|r_{i}\right\rangle\) of the system. The initial state is again a uniform superposition of these, and the final measurement employs the following orthogonal basis states in the 4-dimensional Hilbert space (given in bra form): \[\begin{split}\langle\Phi_{1}|&=-\left\langle r_{1} \right|+\left\langle r_{2}\right|+\left\langle r_{3}\right|+\left\langle r_{4 }\right|\\ \langle\Phi_{2}|&=+\left\langle r_{1}\right|-\left\langle r _{2}\right|+\left\langle r_{3}\right|+\left\langle r_{4}\right|\\ \langle\Phi_{3}|&=+\left\langle r_{1}\right|+\left\langle r _{2}\right|-\left\langle r_{3}\right|+\left\langle r_{4}\right|\\ \langle\Phi_{4}|&=+\left\langle r_{1}\right|+\left\langle r _{2}\right|+\left\langle r_{3}\right|-\left\langle r_{4}\right|\end{split} \tag{13}\] Every possible outcome provides paradoxical knowledge. For example, if we obtain result \(\left|\Phi_{1}\right\rangle\), we find that particle 1 is isolated and the remaining three particles are connected, yielding \(A=\{(2,3),(2,4),(3,4)\}\) and \(D=\{(1,2),(1,3),(1,4)\}\). For each final result, we have one isolated particle and three mutually connected particles. No such graph is in \(R\), and so every one constitutes quantum paradoxical knowledge.

## VI Star graphs and map coloring

The star graph example in the previous section has a conditional extension to more particles. Suppose we have \(N>3\) particles that are related in one of the \(N\) possible star graphs, each labeled as before by its central particle. The initial state \(\left|\Psi\right\rangle\) is a uniform superposition of the \(N\) terms for these relations in \(R\). We post-select by a final state \(\left\langle\Phi\right|\) such that \[\begin{split}&\phi_{1}=\cdots=\phi_{N-1}=1,\\ &\phi_{N}=-(N-3),\end{split} \tag{14}\] so that \(\left\langle\Phi|\Psi\right\rangle=2\). If both \(i\) and \(j\) are in the range \(1,\ldots,N-1\), we find that \(\left\langle\Phi\right|\mathbf{\Pi}_{ij}\left|\Psi\right\rangle=2\) also, so that \(i\) and \(j\) are definitely related. That is, our post-selected knowledge of the relation entails that each pair from the first \(N-1\) particles is certainly related. Our knowledge includes a _clique_ of size \(N-1\), which is paradoxical given \(R\).

This type of paradoxical knowledge has a notable implication. Every star graph is planar, so that it can represent adjacency relations among regions in a 2-D planar map. We are, in effect, distributing our particles among these regions. A celebrated theorem [3] guarantees that, given no more than four distinct colors, we can always assign colors to the regions of a planar map so that adjacent regions are colored differently. However, a clique of \(N-1\) particles cannot be so colored with fewer than \(N-1\) colors. If \(N\geq 6\), therefore, our quantum paradoxical knowledge violates the four-color theorem. In quantum map coloring, four colors do _not_ suffice.
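For concreteness, the clique structure of Eq. (14) can be checked numerically for \(N=6\) (an added sketch, with basis states ordered \(\left|r_{1}\right\rangle,\ldots,\left|r_{N}\right\rangle\) by the center particle):

```python
import numpy as np
from itertools import combinations

N = 6
Psi = np.ones(N)                       # uniform superposition of star graphs
Phi = np.ones(N); Phi[-1] = -(N - 3)   # Eq. (14)

def Pi(i, j):                          # edge (i,j) lies in star r_c iff c = i or c = j
    d = np.zeros(N); d[i - 1] = d[j - 1] = 1.0
    return np.diag(d)

tot = Phi @ Psi                        # <Phi|Psi> = 2
for i, j in combinations(range(1, N), 2):      # all pairs among 1, ..., N-1
    assert Phi @ Pi(i, j) @ Psi == tot         # each pair certainly related
print("all pairs among the first", N - 1, "particles affirmed: a 5-clique")
```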
It is worth noting that Tang [4] has used a very different framework to connect graph coloring to quantum pigeonhole-like paradoxes involving \(n\) entangled qubits. These paradoxes amount to Hardy-like constructions for excluding local hidden variable descriptions of entangled states.

## VII Beyond binary relations

A natural generalization of our formalism would be to extend it to ternary (or \(n\)-ary) relations among the \(N\) particles. For example, suppose we have three particles designated \(a\), \(b\) and \(c\), each of which may have energies \(0\), \(1\) or \(2\) in appropriate units. We require that \(E_{a}+E_{b}=E_{c}\). This is a ternary relation among the particle states. The relevant states may be written \[\left|000\right\rangle,\left|101\right\rangle,\left|011\right\rangle,\left|11 2\right\rangle,\left|202\right\rangle,\text{ and }\left|022\right\rangle. \tag{15}\] Our initial state \(\left|\Psi\right\rangle\) is a uniform superposition of these. At the intermediate time, we measure whether \(E_{a}=1\), whether \(E_{b}=1\) or whether \(E_{c}=2\). We post-select on the state \[\left\langle\Phi\right|=2\left\langle 101\right|+2\left\langle 011\right|+2 \left\langle 112\right|-\left\langle 202\right|-\left\langle 022\right|. \tag{16}\] It is easy to see that \(\left\langle\Phi\right|\mathbf{\Pi}_{E_{a}=1}\left|\Psi\right\rangle=\left\langle \Phi\right|\mathbf{\Pi}_{E_{b}=1}\left|\Psi\right\rangle=\left\langle\Phi| \Psi\right\rangle=4\), but \(\left\langle\Phi\right|\mathbf{\Pi}_{E_{c}=2}\left|\Psi\right\rangle=0\). We thus have the quantum paradoxical knowledge that \(E_{a}=1\) and \(E_{b}=1\), but \(E_{c}=E_{a}+E_{b}\neq 2\).

## VIII Remarks

We have generalized the quantum pigeonhole paradox to a larger class of paradoxes involving many different types of relations between particles. These paradoxes can be quite simple. For example, suppose \(R\) includes star graphs on just three particles. In every such graph, no particle is isolated from the others. With a post-selected state \(\left\langle\Phi\right|=-\left\langle r_{1}\right|+\left\langle r_{2}\right| +\left\langle r_{3}\right|\), however, we find that \(\left\langle\Phi\right|\mathbf{\Pi}_{12}\left|\Psi\right\rangle=\left\langle\Phi \right|\mathbf{\Pi}_{13}\left|\Psi\right\rangle=0\). Particle \(1\) can never be found linked to either of the other two; it is paradoxically isolated.

The quantum pigeonhole paradox is particularly appealing, because it involves straightforward position observables. Both the preparation and post-selection are actually product states of the particles. This has motivated experimental implementations of the pigeonhole scenario [5; 6; 7]. Our general framework, being more abstract, is somewhat further removed from a simple experimental set-up. The Penrose stairs paradox, for instance, employs entangled preparation and post-selection states for three particles in three energy levels. However, it should be possible to realize all of our examples as quantum computations involving a modest number of qubits.

Not every set \(R\) of relevant binary relations on \(N\) particles can lead to quantum paradoxical knowledge. For instance, it is not hard to show that we must have \(\#(R)\geq 3\) for our framework to yield a paradox. The three-particle example above is minimal in this sense.
At the other end of the spectrum, if the set \(R\) includes every relation, then any pair \([A,D]\) of disjoint affirmable and deniable sets will be consistent with some element \(r\in R\), and thus cannot be paradoxical. Therefore, \(R\) must be neither too small nor too large. For what sets of relations \(R\) on \(N\) particles can generalized quantum paradoxical knowledge arise?
2308.16697
Game semantics for the constructive $μ$-calculus
We define game semantics for the constructive $\mu$-calculus and prove its equivalence to bi-relational semantics. As an application, we use the game semantics to prove that the $\mu$-calculus collapses to modal logic over the modal logic $\mathsf{IS5}$. We then show the completeness of $\mathsf{IS5}$ extended with fixed-point operators.
Leonardo Pacheco
2023-08-31T13:02:30Z
http://arxiv.org/abs/2308.16697v3
# Game semantics for the constructive \(\mu\)-calculus

###### Abstract
We define game semantics for the constructive \(\mu\)-calculus and prove its correctness. We use these game semantics to prove that the \(\mu\)-calculus collapses to modal logic over CS5 frames. Finally, we prove the completeness of \(\mu\)CS5 over CS5 frames.

## 1 Introduction

This paper is a first step towards relating two strands of research in modal logic: the \(\mu\)-calculus and constructive modal logics. We define a constructive version of the \(\mu\)-calculus by adding fixed-point operators to constructive modal logic. We base our semantics on the bi-relational Kripke semantics of Wijesekera [20] (but we allow fallible worlds in our models). We define game semantics for the constructive \(\mu\)-calculus and prove its equivalence to the bi-relational Kripke semantics. The main advantage of game semantics for the \(\mu\)-calculus is that they allow for easier comprehension of the \(\mu\)-formulas, which are famously hard to understand. In an evaluation game for the classical \(\mu\)-calculus, the players Verifier and Refuter discuss whether a formula holds at a given world of a Kripke model. In an evaluation game for the constructive \(\mu\)-calculus, we still have two players, but they alternate between the roles of Verifier and Refuter depending on their choices. This difference arises because, over classical semantics, every formula can be put in negative normal form, which allows for simpler evaluation games with fixed roles. We therefore need a more delicate argument to prove the equivalence of the semantics in the constructive case than in the classical case. Our proof is based on the proof of the correctness of game semantics for the classical \(\mu\)-calculus by Ong [14].

For applications, we study the logic \(\mu\)CS5, a constructive variation of S5 with the addition of fixed-points. We first use the game semantics to show that the (constructive) \(\mu\)-calculus collapses to (constructive) modal logic over CS5 frames. That is, every formula with fixed-point operators is equivalent to a formula without fixed-point operators over CS5 frames. The CS5 frames are a subclass of the bi-relational CS4 frames defined in Alechina _et al._ [1]; we require that the modal accessibility relation of the CS5 frames is an equivalence relation. Our proof is a generalization of Alberucci and Facchini's proof of the collapse of the (classical) \(\mu\)-calculus to (classical) modal logic over S5 frames. Note that this collapse does not happen in general. Over arbitrary frames, we have a strict alternation hierarchy, with formulas which are not equivalent to any formula with less fixed-point alternation.

We use the \(\mu\)-calculus' collapse to modal logic over CS5 frames to prove the completeness of \(\mu\)CS5 over CS5 frames. The modal logic \(\mu\)CS5 is obtained by adding fixed-point axioms and rules to the modal logic CS5. As far as the author is aware, this is the first completeness proof for a system over the constructive \(\mu\)-calculus. We also describe how to use our methods to prove the completeness of \(\mu\)IS5 and \(\mu\)GS5 over IS5 and GS5 frames, respectively. The logic \(\mu\)IS5 is obtained by adding fixed-point operators to IS5, an intuitionistic variant of S5. IS5 is also known as MIPQ. An IS5 frame is a CS5 frame with no fallible worlds. The completeness of IS5 over IS5 frames is already known [10, 11]. The logic \(\mu\)GS5 is obtained by adding fixed-point operators to GS5, a variation of S5 over real-valued semantics.
We now briefly review the related literature on the \(\mu\)-calculus and constructive modal logics. The \(\mu\)-calculus was defined by Kozen [12], who also defined a related proof system \(\mu\)K. The completeness of \(\mu\)K was first proved by Walukiewicz [13]. See [14, 15] for surveys on the \(\mu\)-calculus. The \(\mu\)-calculus' alternation hierarchy classifies the \(\mu\)-formulas by how many alternating least and greatest fixed-point operators they contain. The strictness of the hierarchy was open for many years until it was proved by Bradfield [1]. Bradfield later gave a simplified proof of the alternation hierarchy's strictness using evaluation games [1]. The strictness may not hold over restricted classes of frames. For example, Alberucci and Facchini [1] proved that the alternation hierarchy collapses to its alternation-free fragment over transitive frames, and to modal logic over equivalence relations. See Chapter 2 of [1] for a survey on the alternation hierarchy.

In a constructive modal logic, the duality of the modalities \(\Box\) and \(\Diamond\) is lost. These logics have been studied for a long time; some of the first texts on the topic are Fitch [10] and Prawitz [12]. We base our Kripke models on Wijesekera [13], who defined a constructive modal logic CK and proved its completeness over CK frames. The difference between Wijesekera's CK frames and our bi-relational frames is that we allow fallible worlds. On fallible worlds, the false proposition \(\bot\) holds. The logic CK was also studied by Acclavio _et al._ [1]; they define and prove the correctness of a validity game for CK. See [11, 12] for surveys on constructive modal logic. The modal logic CS5 studied in this paper is closely related to the modal logic MIPQ, an intuitionistic variant of S5. MIPQ is also known as IS5, and was first studied by Prawitz [12]. The completeness of IS5 over IS5 frames was proved by Ono [10] and Fischer Servi [11]. The logic GS5 was studied by Caicedo _et al._ [1], who proved its completeness over real-valued S5 models.

Outline. In Section 2, we define the syntax and bi-relational Kripke semantics for the constructive \(\mu\)-calculus. We also define the modal logics CS5 and \(\mu\)CS5, and the class of CS5 frames. In Section 3, we define the game semantics for the constructive \(\mu\)-calculus and prove its equivalence to Kripke semantics. In Section 4, we prove the constructive \(\mu\)-calculus' collapse to modal logic over CS5 frames. In Section 5, we prove the completeness of \(\mu\)CS5 over CS5 frames. We also describe how to prove completeness results for \(\mu\)IS5 and \(\mu\)GS5.

Acknowledgements. I would like to thank David Fernandez-Duque, Iris van der Giessen, and Konstantinos Papafilippou for the discussions we had about constructive modal logics and the \(\mu\)-calculus. This research was partially funded by the FWF grant TAI-797.

## 2 Preliminaries

### 2.1 Constructive \(\mu\)-calculus

Syntax. The language of the \(\mu\)-calculus is obtained by adding least and greatest fixed-point operators \(\mu\) and \(\nu\) to the language of modal logic. When defining the fixed-point formulas \(\mu X.\varphi\) and \(\nu X.\varphi\), we will require that the variable symbol \(X\) is positive in \(\varphi\); we need to do so in order to have well-behaved semantics. We describe when a variable is positive after we describe a grammar generating the \(\mu\)-formulas. Fix a set \(\mathrm{Prop}\) of proposition symbols and a set \(\mathrm{Var}\) of variable symbols.
The constructive \(\mu\)-formulas are defined by the following grammar: \[\varphi:=P\mid X\mid\bot\mid\top\mid\neg\varphi\mid\varphi\land\varphi\mid \varphi\lor\varphi\mid\varphi\to\varphi\mid\Box\varphi\mid\lozenge\varphi \mid\mu X.\varphi\mid\nu X.\varphi,\] where \(P\) is a proposition symbol and \(X\) is a variable symbol. The fixed-point formulas \(\mu X.\varphi\) and \(\nu X.\varphi\) are defined iff \(X\) is positive in \(\varphi\). Denote the set of subformulas of \(\varphi\) by \(\mathrm{Sub}(\varphi)\). We use \(\eta\) to denote either \(\mu\) or \(\nu\), and \(\triangle\) to denote \(\Box\) or \(\lozenge\).

We classify \(X\) as _positive_ or _negative_ in a given formula by structural induction:

* \(X\) is positive and negative in \(P\);
* \(X\) is positive in \(X\);
* if \(Y\neq X\), \(X\) is positive and negative in \(Y\);
* if \(X\) is positive (negative) in \(\varphi\), then \(X\) is negative (positive) in \(\neg\varphi\);
* if \(X\) is positive (negative) in \(\varphi\) and \(\psi\), then \(X\) is positive (negative) in \(\varphi\land\psi\), \(\varphi\lor\psi\), and \(\triangle\varphi\);
* if \(X\) is negative (positive) in \(\varphi\) and positive (negative) in \(\psi\), then \(X\) is positive (negative) in \(\varphi\to\psi\);
* \(X\) is positive and negative in \(\eta X.\varphi\).

While burdensome, we need to consider the positiveness and negativeness of variables to guarantee that the semantics for fixed-point formulas are well-defined. This contrasts with the classical \(\mu\)-calculus, where it is common to suppose every formula is in negative normal form, and so we can set up the grammar in a way that only positive variables occur. We cannot do the same on constructive semantics: for example, \(\neg\lozenge\neg\varphi\) is not equivalent to \(\Box\varphi\) over constructive semantics.

An occurrence of a variable \(X\) in a formula \(\varphi\) is _bound_ iff it is in the scope of a fixed-point operator \(\eta X\). An occurrence of \(X\) is _free_ iff it is not bound. A formula \(\varphi\) is _closed_ iff it has no free variables. An occurrence of \(X\) in \(\varphi\) is _guarded_ iff it is in the scope of some modality \(\triangle\). A formula \(\varphi\) is _guarded_ iff, for all \(\eta X.\psi\in\mathrm{Sub}(\varphi)\), \(X\) is guarded in \(\psi\). A formula \(\varphi\) is _well-bounded_ iff, for all variables \(X\) occurring bound in \(\varphi\), \(X\) occurs only once and there is only one fixed-point operator \(\eta X\) in \(\varphi\). A formula is _well-named_ iff it is guarded and well-bounded. Every formula is equivalent to a well-named formula. If \(\varphi\) is a well-named formula and \(\eta X.\psi\in\mathrm{Sub}(\varphi)\), denote by \(\psi_{X}\) the formula \(\psi\) which is bound by the fixed-point operator \(\eta X\).

Semantics. We consider bi-relational Kripke frames \(F=\langle W,W^{\perp},\preceq,\sqsubseteq\rangle\) where: \(W\) is the set of possible worlds; \(W^{\perp}\subseteq W\) is the set of fallible worlds; \(\preceq\) is a reflexive and transitive relation over \(W\); and \(\sqsubseteq\) is a relation over \(W\). We call \(\preceq\) the intuitionistic relation and \(\sqsubseteq\) the modal relation. A bi-relational Kripke model is a tuple \(\langle W,W^{\perp},\preceq,\sqsubseteq,V\rangle\) where \(\langle W,W^{\perp},\preceq,\sqsubseteq\rangle\) is a bi-relational frame and \(V:\mathrm{Prop}\to\mathcal{P}(W)\) is a valuation function. We require that, if \(w\preceq v\) and \(w\in V(P)\), then \(v\in V(P)\).
Fix a Kripke model \(M=\langle W,W^{\perp},\preceq,\sqsubseteq,V\rangle\). Given a \(\mu\)-formula \(\varphi\), define the operator \(\Gamma_{\varphi(X)}(A):=\|\varphi(A)\|^{M}\), where the augmented valuation \(\|\varphi(A)\|^{M}\) is defined below. Define the valuation of the \(\mu\)-formulas over \(M\) by induction on the structure of the formulas:

* \(\|P\|^{M}=V(P)\);
* \(\|\bot\|^{M}=W^{\perp}\);
* \(\|\top\|^{M}=W\);
* \(\|\varphi\land\psi\|^{M}=\|\varphi\|^{M}\cap\|\psi\|^{M}\);
* \(\|\varphi\lor\psi\|^{M}=\|\varphi\|^{M}\cup\|\psi\|^{M}\);
* \(\|\varphi\to\psi\|^{M}=\{w\mid\forall v.\text{if }w\preceq v\text{ then }v\in\|\varphi\|^{M}\text{ implies }v\in\|\psi\|^{M}\}\);
* \(\|\neg\varphi\|^{M}=\{w\mid\forall v\succeq w.v\not\models\varphi\}\);
* \(\|\Box\varphi\|^{M}=\{w\mid\forall v\succeq w\,\forall u\sqsupseteq v.\,u\in\|\varphi\|^{M}\}\);
* \(\|\lozenge\varphi\|^{M}=\{w\mid\forall v\succeq w\,\exists u\sqsupseteq v.\,u\in\|\varphi\|^{M}\}\);
* \(\|\mu X.\varphi(X)\|^{M}\) is the least fixed-point of the operator \(\Gamma_{\varphi(X)}\); and
* \(\|\nu X.\varphi(X)\|^{M}\) is the greatest fixed-point of the operator \(\Gamma_{\varphi(X)}\).

We also write \(M,w\models\varphi\) when \(w\in\|\varphi\|^{M}\). We omit the reference to \(M\) when it is clear from the context and write \(w\in\|\varphi\|\) and \(w\models\varphi\).

We will need to consider models with augmented valuations when proving the correctness of game semantics. When augmenting \(M\), we treat some variable symbol \(X\) as a proposition symbol and assign a value to it. Formally, let \(M=\langle W,W^{\perp},\preceq,\sqsubseteq,V\rangle\) be a Kripke model, \(A\subseteq W\) and \(X\) be a variable symbol; the augmented Kripke model \(M[X\mapsto A]\) is obtained by setting \(V(X):=A\). Given any \(\mu\)-formula \(\varphi\), we also define \(\|\varphi(A)\|^{M}:=\|\varphi(X)\|^{M[X\mapsto A]}\).

We will also need to consider the approximants \(\eta X^{\alpha}.\varphi\) of fixed-point formulas \(\eta X.\varphi\), for every ordinal \(\alpha\). Fix a Kripke model \(M\) and a formula \(\varphi(X)\) where \(X\) is positive. We define

* \(\|\mu X^{0}.\varphi\|^{M}=\emptyset\), \(\|\nu X^{0}.\varphi\|^{M}=W\);
* \(\|\mu X^{\alpha+1}.\varphi\|^{M}=\|\varphi(\|\mu X^{\alpha}.\varphi\|^{M})\|\), \(\|\nu X^{\alpha+1}.\varphi\|^{M}=\|\varphi(\|\nu X^{\alpha}.\varphi\|^{M})\|\); and
* \(\|\mu X^{\lambda}.\varphi\|^{M}=\bigcup_{\alpha<\lambda}\|\mu X^{\alpha}.\varphi\|^{M}\), \(\|\nu X^{\lambda}.\varphi\|^{M}=\bigcap_{\alpha<\lambda}\|\nu X^{\alpha}.\varphi\|^{M}\).

The Knaster-Tarski Theorem [1] states that every monotone operator has least and greatest fixed-points. In the proposition below, we prove that if \(X\) is positive in \(\varphi\), then \(\Gamma_{\varphi(X)}\) is monotone; therefore the valuations of the fixed-point formulas \(\mu X.\varphi\) and \(\nu X.\varphi\) are well-defined. Indeed, for every model \(M\) and \(\mu\)-formula \(\varphi(X)\) with \(X\) positive, there are ordinals \(\alpha\) and \(\beta\) such that \(\|\mu X.\varphi\|^{M}=\|\mu X^{\alpha}.\varphi\|^{M}\) and \(\|\nu X.\varphi\|^{M}=\|\nu X^{\beta}.\varphi\|^{M}\).
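On a finite model these clauses can be evaluated directly, with the fixed points computed by Knaster-Tarski iteration. The following is a minimal sketch (not part of the formal development), assuming formulas encoded as nested tuples of our own choosing and no fallible worlds (\(W^{\perp}=\emptyset\)):

```python
def sem(f, M, env=None):
    """Evaluate a formula on a finite bi-relational model M = (W, pre, mod, V)."""
    W, pre, mod, V = M
    env = env or {}
    up = lambda w: {v for v in W if (w, v) in pre}    # intuitionistic successors
    succ = lambda v: {u for u in W if (v, u) in mod}  # modal successors
    op = f[0]
    if op == 'prop':  return set(V[f[1]])
    if op == 'var':   return set(env[f[1]])
    if op == 'and':   return sem(f[1], M, env) & sem(f[2], M, env)
    if op == 'or':    return sem(f[1], M, env) | sem(f[2], M, env)
    if op == 'neg':
        a = sem(f[1], M, env)
        return {w for w in W if not (up(w) & a)}
    if op == 'imp':
        a, b = sem(f[1], M, env), sem(f[2], M, env)
        return {w for w in W if all(v in b for v in up(w) if v in a)}
    if op == 'box':
        a = sem(f[1], M, env)
        return {w for w in W if all(u in a for v in up(w) for u in succ(v))}
    if op == 'dia':
        a = sem(f[1], M, env)
        return {w for w in W if all(succ(v) & a for v in up(w))}
    if op in ('mu', 'nu'):                            # Knaster-Tarski iteration
        A = set() if op == 'mu' else set(W)
        while True:
            B = sem(f[2], M, {**env, f[1]: A})
            if B == A:
                return A
            A = B

# Tiny demo: W = {0, 1} with 0 <= 1, modal relation the identity, P true at 1.
M = ({0, 1}, {(0, 0), (0, 1), (1, 1)}, {(0, 0), (1, 1)}, {'P': {1}})
print(sem(('box', ('prop', 'P')), M))                                     # {1}
print(sem(('mu', 'X', ('or', ('prop', 'P'), ('dia', ('var', 'X')))), M))  # {1}
```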
**Proposition 1**.: _Fix a bi-relational model \(M=\langle W,W^{\perp},\preceq,\sqsubseteq,V\rangle\) and sets of worlds \(A\subseteq B\subseteq W\). If \(X\) is positive in \(\varphi\), then \(\|\varphi(A)\|^{M}\subseteq\|\varphi(B)\|^{M}\). Symmetrically, if \(X\) is negative in \(\varphi\), then \(\|\varphi(B)\|^{M}\subseteq\|\varphi(A)\|^{M}\)._

Proof.: Fix a model \(M\) and sets \(A\subseteq B\subseteq W\). We prove that if \(X\) is positive in \(\varphi\) then \(\|\varphi(A)\|^{M}\subseteq\|\varphi(B)\|^{M}\); the case for negative \(X\) is similar. The proof is by structural induction on the \(\mu\)-formulas. The cases of formulas of the form \(P\), \(X\), \(Y\), \(\varphi\wedge\psi\), and \(\varphi\lor\psi\) follow by direct calculations. The case for formulas of the form \(\eta X.\varphi\) is trivial, as \(X\) is not free in \(\eta X.\varphi\).

We now prove the proposition for formulas of the form \(\varphi\to\psi\). Suppose \(X\) is positive in \(\varphi\to\psi\); then \(X\) is positive in \(\psi\) and negative in \(\varphi\). Therefore: \[w\in\|(\varphi\to\psi)(A)\|^{M} \Longleftrightarrow\forall v\succeq w.v\in\|\varphi(A)\|^{M} \text{ implies }v\in\|\psi(A)\|^{M}\] \[\Longrightarrow\forall v\succeq w.v\in\|\varphi(B)\|^{M}\text{ implies }v\in\|\psi(B)\|^{M}\] \[\Longleftrightarrow w\in\|(\varphi\to\psi)(B)\|^{M}.\] The case for formulas of the form \(\neg\varphi\) is similar.

Finally, we prove the proposition for formulas of the form \(\Box\varphi\). Suppose \(X\) is positive in \(\Box\varphi\); then \(X\) is positive in \(\varphi\). Therefore: \[w\in\|\Box\varphi(A)\|^{M} \Longleftrightarrow\forall v\succeq w\forall u\sqsupseteq v.u\in \|\varphi(A)\|^{M}\] \[\Longrightarrow\forall v\succeq w\forall u\sqsupseteq v.u\in \|\varphi(B)\|^{M}\] \[\Longleftrightarrow w\in\|\Box\varphi(B)\|^{M}.\] The proof for formulas of the form \(\Diamond\varphi\) is similar.

### 2.2 CS5 and \(\mu\)CS5

Modal axioms. Our basic modal logic is CS5. The axioms of CS5 are:

* all intuitionistic tautologies;
* \(K:=(\Box(\varphi\to\psi)\to(\Box\varphi\to\Box\psi))\wedge(\Box(\varphi\to\psi)\to(\Diamond\varphi\to\Diamond\psi))\);
* \(T:=(\Box\varphi\to\varphi)\wedge(\varphi\to\Diamond\varphi)\);
* \(4:=(\Box\varphi\to\Box\Box\varphi)\wedge(\Diamond\Diamond\varphi\to\Diamond\varphi)\); and
* \(5:=(\Diamond\varphi\to\Box\Diamond\varphi)\wedge(\Diamond\Box\varphi\to\Box\varphi)\).

CS5 is closed under necessitation and _modus ponens_: \[(\mathbf{Nec})\ \frac{\varphi}{\square\varphi}\quad\quad\text{and}\quad\quad( \mathbf{MP})\ \frac{\varphi\ \ \varphi\to\psi}{\psi}.\]

A CS5 _frame_ is a bi-relational Kripke frame \(F=\langle W,W^{\perp},\preceq,\equiv\rangle\) where \(\equiv\) is an equivalence relation over \(W\). We denote the modal relation by \(\equiv\) instead of \(\sqsubseteq\) to emphasize that it is an equivalence relation. We also require that CS5 frames are backward confluent: \(w\equiv v\preceq v^{\prime}\) implies there is \(w^{\prime}\) such that \(w\preceq w^{\prime}\equiv v^{\prime}\). A CS5 _model_ is a bi-relational model over a CS5 frame.

Fixed-point axioms. \(\mu\)CS5 is the logic obtained by adding to CS5 the fixed-point axioms:

* \(\nu FP:=\nu X.\varphi\to\varphi(\nu X.\varphi)\); and
* \(\mu FP:=\varphi(\mu X.\varphi)\to\mu X.\varphi\);

and taking the closure under \(\mathbf{Nec}\), \(\mathbf{MP}\) and the induction rules: \[(\nu\mathbf{Ind})\ \frac{\psi\to\varphi(\psi)}{\psi\to\nu X.\varphi}\quad\quad \text{and}\quad\quad(\mu\mathbf{Ind})\ \frac{\varphi(\psi)\to\psi}{\mu X.\varphi\to\psi}.\]

Note that the two fixed-point axioms and the two induction rules are necessary, as \(\nu\) and \(\mu\) cannot be defined in terms of each other over constructive semantics. While over classical semantics one has \(\nu X.\varphi\equiv\neg\mu X.\neg\varphi(\neg X)\), this equivalence fails here: if \(\varphi:=P\), then \(\nu X.\varphi\equiv P\) and \(\neg\mu X.\neg\varphi(\neg X)\equiv\neg\neg P\) are not equivalent formulas.
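A two-world model witnessing this non-equivalence can be written down directly; the added sketch below evaluates both formulas with the clauses of Section 2.1 (no modalities or fallible worlds are involved):

```python
W = {0, 1}
up = {0: {0, 1}, 1: {1}}          # reflexive, transitive intuitionistic order
P = {1}                           # P holds only at the upper world

neg = lambda A: {w for w in W if not (up[w] & A)}

print("nu X.P    =", P)            # {1}, since X does not occur in P
print("not not P =", neg(neg(P)))  # {0, 1}: strictly larger, not equivalent
```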
## 3 Game semantics for constructive \(\mu\)-calculus

### 3.1 Definition

Fix a bi-relational model \(M=\langle W,W^{\perp},\preceq,\sqsubseteq,V\rangle\), a world \(w\in W\), and a well-named \(\mu\)-formula \(\varphi\). The evaluation game \(\mathcal{G}(M,w\models\varphi)\) has two players: I and II. The two players will alternate the roles of Verifier and Refuter (abbreviated to V and R, respectively). The main positions are of the form \(\langle v,\psi\rangle\) where \(v\in W\) and \(\psi\in\mathrm{Sub}(\varphi)\). We also have auxiliary positions of the form \(\langle\langle v\rangle,\psi\rangle\), \(\langle[v],\psi\rangle\), and \(\langle v,\theta?\theta^{\prime}\rangle\), where \(v\in W\) and \(\triangle\psi,\theta\to\theta^{\prime}\in\mathrm{Sub}(\varphi)\). Each position is owned by one of I and II. At a position \(\langle v,\psi\rangle\), the player in the role of V tries to prove that \(v\models\psi\) and the player in the role of R tries to prove that \(v\not\models\psi\). The game begins at the position \(\langle w,\varphi\rangle\), with I in the role of V and II in the role of R. The possible plays are described in Table 1; we explain them below.

At the position \(\langle v,P\rangle\) there is no available move and the game ends; V wins iff \(v\models P\), and R wins otherwise. If \(v\in W^{\perp}\) and \(\psi\in\mathrm{Sub}(\varphi)\), then V wins automatically at \(\langle v,\psi\rangle\), overruling the other possibilities. At the position \(\langle v,\psi\lor\theta\rangle\), V chooses one of \(\langle v,\psi\rangle\) and \(\langle v,\theta\rangle\). They do so as only one of \(\psi\) and \(\theta\) needs to hold at \(v\) for \(\psi\lor\theta\) to hold at \(v\). Similarly, at \(\langle v,\psi\wedge\theta\rangle\), R chooses one of \(\langle v,\psi\rangle\) and \(\langle v,\theta\rangle\).

Let \(\eta X.\psi_{X}\in\mathrm{Sub}(\varphi)\); at \(\langle v,\eta X.\psi_{X}\rangle\) and \(\langle v,X\rangle\), the players move to \(\langle v,\psi_{X}\rangle\). When moving from \(\langle v,X\rangle\) to \(\langle v,\psi_{X}\rangle\), we say that the fixed-point \(\eta X\) was regenerated. The ownership of these positions does not matter for the game, but we assign it to the player who does not want \(\eta X\) to be regenerated infinitely often.

On a position of the form \(\langle v,\Box\psi\rangle\), R chooses \(v^{\prime}\) such that \(v\preceq v^{\prime}\), and then moves to the position \(\langle[v^{\prime}],\psi\rangle\); then, at \(\langle[v^{\prime}],\psi\rangle\), R chooses \(v^{\prime\prime}\) such that \(v^{\prime}\sqsubseteq v^{\prime\prime}\) and moves to \(\langle v^{\prime\prime},\psi\rangle\). Similarly, on a position of the form \(\langle v,\Diamond\psi\rangle\), R chooses \(v^{\prime}\) such that \(v\preceq v^{\prime}\), and then moves to the position \(\langle\langle v^{\prime}\rangle,\psi\rangle\); then, at \(\langle\langle v^{\prime}\rangle,\psi\rangle\), V chooses \(v^{\prime\prime}\) such that \(v^{\prime}\sqsubseteq v^{\prime\prime}\) and moves to \(\langle v^{\prime\prime},\psi\rangle\).
At a position of the form \(\langle v,\neg\psi\rangle\), R chooses \(v^{\prime}\succeq v\) and challenges V to show that \(M,v^{\prime}\not\models\psi\); that is, R moves to \(\langle v^{\prime},\psi\rangle\) and the players exchange roles. Positions of the form \(\langle v,\psi\rightarrow\theta\rangle\) are similar. In this case, R chooses \(v^{\prime}\succeq v\) and V chooses whether to show that \(M,v^{\prime}\not\models\psi\) or \(M,v^{\prime}\models\theta\); in case V chooses \(\langle v^{\prime},\psi\rangle\), the players exchange roles. That is, R chooses \(v^{\prime}\succeq v\) and moves to \(\langle v^{\prime},\psi?\theta\rangle\), and then V chooses one of \(\langle v^{\prime},\psi\rangle\) and \(\langle v^{\prime},\theta\rangle\); in case V chooses \(\langle v^{\prime},\psi\rangle\), the players exchange roles. Note that I and II always play the same role at positions with the same formula. That is, if I plays the role of V at \(\langle v,\psi\rangle\) then they also play the role of V at \(\langle v^{\prime},\psi\rangle\) for all \(v^{\prime}\in W\). The same holds for the other combinations of players and roles. We can guarantee this by the positivity requirement on the fixed-point formulas. In particular, between positions \(\langle v,X\rangle\) and \(\langle v^{\prime},X\rangle\), the players must switch roles an even number of times. Let \(\eta X.\psi_{X}\) be the outermost infinitely often regenerated formula; the player in the role of V wins iff \(\eta\) is \(\nu\). Formally, let \(\rho\) be an infinite play \(\langle w_{0},\varphi_{0}\rangle,\langle w_{1},\varphi_{1}\rangle,\ldots\) with \(\langle w_{0},\varphi_{0}\rangle=\langle w,\varphi\rangle\). Let \(\eta_{0}X_{0},\ldots,\eta_{n}X_{n}\) be the fixed-point operators infinitely often regenerated in \(\rho\). Let \(i\) be such that \(\eta_{i}X_{i}.\psi_{i}\) is not a subformula of any other \(\eta_{j}X_{j}.\psi_{j}\). The player with the role of V on \(\langle w,\eta_{i}X_{i}.\psi_{i}\rangle\) wins \(\rho\) iff \(\eta_{i}\) is \(\nu\); similarly, R wins \(\rho\) iff \(\eta_{i}\) is \(\mu\). A player wins the game \(\mathcal{G}(M,w\models\varphi)\) iff they have a winning strategy. Since the evaluation games are Borel Gale-Stewart games, one of the players must have a winning strategy. ### Correctness of game semantics **Theorem 2**.: _Let \(M=\langle W,W^{\perp},\preceq,\sqsubseteq,V\rangle\) be a bi-relational model, \(w\in W\) and \(\varphi\) be a well-named \(\mu\)-formula. Then_ \[\mathsf{I}\text{ wins }\mathcal{G}(M,w\models\varphi)\text{ iff }M,w\models\varphi\text{, and}\] \[\mathsf{II}\text{ wins }\mathcal{G}(M,w\models\varphi)\text{ iff }M,w\not\models\varphi\text{.}\] Proof.: Suppose \(w\models\varphi\). We assign to each main position \(\langle v,\psi\rangle\) of the game an ordinal signature \(\mathrm{sig}^{\mathsf{I}}\langle v,\psi\rangle\). We show I is always able to control the truth of the positions in the evaluation game \(\mathcal{G}(M,w\models\varphi)\) and to move in a way that the signature is eventually non-increasing. After that, we show how to define a winning strategy for II if \(M,w\not\models\varphi\). This will conclude the proof. Fixed-point subformulas. Enumerate the fixed-point subformulas of \(\varphi\) in non-increasing size: \[\eta_{1}Z_{1}.\psi_{1},\eta_{2}Z_{2}.\psi_{2},\ldots,\eta_{n}Z_{n}.\psi_{n}.\] That is, if \(i<j\) then \(\eta_{i}Z_{i}.\psi_{i}\not\in\mathrm{Sub}(\eta_{j}Z_{j}.\psi_{j})\); and if \(\eta_{i}Z_{i}.\psi_{i}\in\mathrm{Sub}(\eta_{j}Z_{j}.\psi_{j})\) then \(j\leq i\).
We also enumerate the fixed-point subformulas of \(\varphi\) which are owned by \(\mathsf{I}\) in non-increasing size: \[\eta_{1}^{\prime}Y_{1}.\chi_{1},\eta_{2}^{\prime}Y_{2}.\chi_{2},\ldots,\eta_{m}^{\prime}Y_{m}.\chi_{m}.\] \(\eta_{i}^{\prime}\) is \(\mu\) if \(\mathsf{I}\) has the role of \(\mathsf{V}\) on \(\langle w,\eta_{i}^{\prime}Y_{i}.\chi_{i}\rangle\); and \(\eta_{i}^{\prime}\) is \(\nu\) if \(\mathsf{I}\) has the role of \(\mathsf{R}\) on \(\langle w,\eta_{i}^{\prime}Y_{i}.\chi_{i}\rangle\). Signatures. An \(\mathsf{I}\)-signature \(r=\langle r(1),\ldots,r(m)\rangle\) is a sequence of \(m\) ordinals. Denote by \(r(k)\) the \(k\)th component of \(r\). Write \(r=_{k}r^{\prime}\) iff the first \(k\) components of \(r\) and \(r^{\prime}\) are identical. Order the signatures by the lexicographical order: \(r<r^{\prime}\) iff there is \(k\in\{1,\ldots,m\}\) such that \(r=_{k-1}r^{\prime}\) and \(r(k)<r^{\prime}(k)\). The lexicographical order is a well-ordering of the signatures. Augmented models. We want to evaluate subformulas of \(\varphi\) where some of \(Z_{1},\ldots,Z_{n}\) occur free, so we augment \(M\) with the correct valuations of these variables: \[M_{0}:=V;\qquad M_{i+1}:=M_{i}[Z_{i+1}\mapsto\|\eta_{i+1}Z_{i+1}.\psi_{i+1}\|^{M_{i}}].\] \begin{table} \begin{tabular}{c|c} \multicolumn{2}{c}{Verifier} \\ \hline Position & Admissible moves \\ \(\langle v,\psi_{1}\vee\psi_{2}\rangle\) & \(\{\langle v,\psi_{1}\rangle,\langle v,\psi_{2}\rangle\}\) \\ \(\langle v,\psi_{1}?\psi_{2}\rangle\) & \(\{\langle v,\psi_{1}\rangle\) and exchange roles, \(\langle v,\psi_{2}\rangle\}\) \\ \(\langle\langle v\rangle,\psi\rangle\) & \(\{\langle u,\psi\rangle\mid v\sqsubseteq u\}\) \\ \(\langle v,P\rangle\) and \(v\not\in V(P)\) & \(\emptyset\) \\ \(\langle v,\mu X.\psi_{X}\rangle\) & \(\{\langle v,\psi_{X}\rangle\}\) \\ \(\langle v,X\rangle\) & \(\{\langle v,\psi_{X}\rangle\}\) \\ \multicolumn{2}{c}{Refuter} \\ \hline Position & Admissible moves \\ \(\langle v,\psi_{1}\wedge\psi_{2}\rangle\) & \(\{\langle v,\psi_{1}\rangle,\langle v,\psi_{2}\rangle\}\) \\ \(\langle v,\neg\psi\rangle\) & \(\{\langle u,\psi\rangle\mid v\preceq u\}\) and exchange roles \\ \(\langle v,\psi_{1}\rightarrow\psi_{2}\rangle\) & \(\{\langle u,\psi_{1}?\psi_{2}\rangle\mid v\preceq u\}\) \\ \(\langle v,\Diamond\psi\rangle\) & \(\{\langle\langle u\rangle,\psi\rangle\mid v\preceq u\}\) \\ \(\langle v,\Box\psi\rangle\) & \(\{\langle[u],\psi\rangle\mid v\preceq u\}\) \\ \(\langle[v],\psi\rangle\) & \(\{\langle u,\psi\rangle\mid v\sqsubseteq u\}\) \\ \(\langle v,P\rangle\) and \(v\in V(P)\) & \(\emptyset\) \\ \(\langle v,\nu X.\psi_{X}\rangle\) & \(\{\langle v,\psi_{X}\rangle\}\) \\ \(\langle v,X\rangle\) & \(\{\langle v,\psi_{X}\rangle\}\) \\ \(\langle v,\psi\rangle\), \(v\in W^{\perp}\) and \(\psi\in\mathrm{Sub}(\varphi)\) & \(\emptyset\) \\ \end{tabular} \end{table} Table 1: Rules of evaluation games for the constructive modal \(\mu\)-calculus. By the choice of our enumeration, \(\eta_{i+1}Z_{i+1}.\psi_{i+1}\) does not contain free occurrences of \(Z_{i+2},\ldots,Z_{n}\), and so \(M_{i+1}\) is well-defined.
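To make the move rules of Table 1 concrete, here is a minimal sketch in Python; the tuple encoding of formulas and positions and the successor-map representation of the frame are our own illustrative choices, not part of the paper's formalism.

```python
# A minimal sketch of the admissible-move relation of Table 1 (not code from
# the paper). Formulas are nested tuples, e.g. ("imp", ("prop", "P"), ("box",
# ("prop", "Q"))); `pre` and `mod` map each world to its successors under the
# intuitionistic and modal relations; `binder` maps a variable X to psi_X.

def moves(pos, pre, mod, binder):
    """Return the admissible (next_position, roles_exchanged) pairs."""
    tag, v, phi = pos
    if tag == "boxaux":                       # <[v], psi>: Refuter picks u with v mod u
        return [(("main", u, phi), False) for u in mod[v]]
    if tag == "diaaux":                       # <<v>, psi>: Verifier picks u with v mod u
        return [(("main", u, phi), False) for u in mod[v]]
    if tag == "query":                        # <v, psi1 ? psi2>: Verifier chooses
        psi1, psi2 = phi
        return [(("main", v, psi1), True),    # attack psi1 and exchange roles
                (("main", v, psi2), False)]   # or defend psi2
    op = phi[0]
    if op in ("and", "or"):                   # the owner picks a conjunct/disjunct
        return [(("main", v, phi[1]), False), (("main", v, phi[2]), False)]
    if op == "imp":                           # Refuter picks u >= v, then a "?" position
        return [(("query", u, (phi[1], phi[2])), False) for u in pre[v]]
    if op == "neg":                           # Refuter picks u >= v; roles swap
        return [(("main", u, phi[1]), True) for u in pre[v]]
    if op == "box":
        return [(("boxaux", u, phi[1]), False) for u in pre[v]]
    if op == "dia":
        return [(("diaaux", u, phi[1]), False) for u in pre[v]]
    if op in ("mu", "nu"):                    # unfold eta X . psi_X
        return [(("main", v, phi[2]), False)]
    if op == "var":                           # regenerate the fixed point
        return [(("main", v, binder[phi[1]]), False)]
    return []                                 # propositions: the play ends
```

Terminal positions, wins at fallible worlds, and the condition on the outermost infinitely often regenerated fixed point are then imposed on plays exactly as described above.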
Given a signature \(r\), we define augmented models \(M_{0}^{r},\ldots,M_{n}^{r}\) by \[M_{0}^{r}:=V;\qquad M_{i+1}^{r}:=\left\{\begin{array}{ll}M_{i}^{r}[Z_{i+1}\mapsto\|\eta_{j}^{\prime\,r(j)}Y_{j}.\chi_{j}\|^{M_{i}^{r}}],&\mbox{if $Z_{i+1}=Y_{j}$};\\ M_{i}^{r}[Z_{i+1}\mapsto\|\eta_{i+1}Z_{i+1}.\psi_{i+1}\|^{M_{i}^{r}}],&\mbox{if there is no $j$ such that $Z_{i+1}=Y_{j}$}.\end{array}\right.\] On \(M_{n}^{r}\), the variables \(Y_{j}\) owned by \(\mathsf{I}\) are assigned their \(r(j)\)th approximant \(\|\eta_{j}^{\prime\,r(j)}Y_{j}.\chi_{j}\|\), and the variables owned by \(\mathsf{II}\) receive their correct value. If \(M_{n},v\models\psi\), we call \(\langle v,\psi\rangle\) a true position; if \(M_{n},v\not\models\psi\), we call \(\langle v,\psi\rangle\) a false position. Now, if \(\langle v,\psi\rangle\) is a true position, then there is a least signature \(r\) such that \(M_{n}^{r},v\models\psi\). Similarly, if \(\langle v,\psi\rangle\) is a false position, then there is a least signature \(r\) such that \(M_{n}^{r},v\not\models\psi\). Denote these signatures by \(\mathrm{sig}^{\mathsf{I}}\langle v,\psi\rangle\). A strategy for \(\mathsf{I}\). Remember that we are under the assumption that \(M,w\models\varphi\). Note that \(\mathsf{I}\) starts \(\mathcal{G}(M,w\models\varphi)\) in the role of \(\mathsf{V}\) by the definition of the evaluation game. We will define a strategy for \(\mathsf{I}\) guaranteeing that when the players are at \(\langle v,\psi\rangle\), \(v\models\psi\) if \(\mathsf{I}\) is in the role of \(\mathsf{V}\), and \(v\not\models\psi\) if \(\mathsf{I}\) is in the role of \(\mathsf{R}\). Furthermore, most of the moves never increase the signature, and \(\mathsf{II}\) cannot move in a way that increases the signature. The only time the signature may increase is when regenerating some \(Y_{j}\), but in this case the first \(j-1\) components of the signature are not modified. We define \(\mathsf{I}\)'s strategy as follows: * Suppose the game is at the position \(\langle v,\psi_{1}\vee\psi_{2}\rangle\). If \(\mathsf{I}\) is \(\mathsf{V}\) and \(\langle v,\psi_{1}\vee\psi_{2}\rangle\) is a true position, then \(\mathsf{I}\) moves to \(\langle v,\psi_{i}\rangle\) such that \(M_{n}^{\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{1}\vee\psi_{2}\rangle},v\models\psi_{i}\), with \(i\in\{1,2\}\). By the definition of the signatures, \(\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{1}\vee\psi_{2}\rangle=\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{i}\rangle\). If \(\mathsf{I}\) is \(\mathsf{R}\) and \(\langle v,\psi_{1}\vee\psi_{2}\rangle\) is a false position, then \(M_{n}^{\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{1}\vee\psi_{2}\rangle},v\not\models\psi_{i}\) and \(\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{1}\vee\psi_{2}\rangle\geq\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{i}\rangle\) for all \(i\in\{1,2\}\). So whichever way \(\mathsf{II}\) moves, the next position is false and the signature is non-increasing. * Suppose the game is at the position \(\langle v,\psi_{1}\wedge\psi_{2}\rangle\). If \(\mathsf{I}\) is \(\mathsf{V}\) and \(\langle v,\psi_{1}\wedge\psi_{2}\rangle\) is a true position, then \(M_{n}^{\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{1}\wedge\psi_{2}\rangle},v\models\psi_{i}\) and \(\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{1}\wedge\psi_{2}\rangle\geq\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{i}\rangle\) for all \(i\in\{1,2\}\). So whichever way \(\mathsf{II}\) moves, the next position is true and the signature is non-increasing.
If \(\mathsf{I}\) is \(\mathsf{R}\) and \(\langle v,\psi_{1}\wedge\psi_{2}\rangle\) is a false position, then \(\mathsf{I}\) moves to \(\langle v,\psi_{i}\rangle\) such that \(M_{n}^{\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{1}\wedge\psi_{2}\rangle},v\not\models\psi_{i}\) and \(\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{1}\wedge\psi_{2}\rangle=\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{i}\rangle\), with \(i\in\{1,2\}\). * Suppose the game is at the position \(\langle v,\Diamond\psi\rangle\). If \(\mathsf{I}\) is \(\mathsf{V}\) and \(\langle v,\Diamond\psi\rangle\) is a true position, then for every move \(\langle\langle v^{\prime}\rangle,\psi\rangle\) of \(\mathsf{II}\), \(\mathsf{I}\) can move to some \(\langle v^{\prime\prime},\psi\rangle\) such that \(M_{n}^{\mathrm{sig}^{\mathsf{I}}\langle v,\Diamond\psi\rangle},v^{\prime\prime}\models\psi\). By the definition of the signatures, \(\mathrm{sig}^{\mathsf{I}}\langle v,\Diamond\psi\rangle\geq\mathrm{sig}^{\mathsf{I}}\langle v^{\prime\prime},\psi\rangle\). If \(\mathsf{I}\) is \(\mathsf{R}\) and \(\langle v,\Diamond\psi\rangle\) is a false position, \(\mathsf{I}\) moves to a position \(\langle\langle v^{\prime}\rangle,\psi\rangle\) such that all answers \(\langle v^{\prime\prime},\psi\rangle\) by \(\mathsf{II}\) are false positions. Furthermore, \(M_{n}^{\mathrm{sig}^{\mathsf{I}}\langle v,\Diamond\psi\rangle},v^{\prime\prime}\not\models\psi\) and \(\mathrm{sig}^{\mathsf{I}}\langle v,\Diamond\psi\rangle\geq\mathrm{sig}^{\mathsf{I}}\langle v^{\prime\prime},\psi\rangle\) for all such \(v^{\prime\prime}\). * Suppose the game is at the position \(\langle v,\Box\psi\rangle\). If \(\mathsf{I}\) is \(\mathsf{V}\) and \(\langle v,\Box\psi\rangle\) is a true position, then for all moves \(\langle[v^{\prime}],\psi\rangle\) and \(\langle v^{\prime\prime},\psi\rangle\) of \(\mathsf{II}\), we have \(M_{n}^{\mathrm{sig}^{\mathsf{I}}\langle v,\Box\psi\rangle},v^{\prime\prime}\models\psi\). By the definition of the signatures, \(\mathrm{sig}^{\mathsf{I}}\langle v,\Box\psi\rangle\geq\mathrm{sig}^{\mathsf{I}}\langle v^{\prime\prime},\psi\rangle\). If \(\mathsf{I}\) is \(\mathsf{R}\) and \(\langle v,\Box\psi\rangle\) is a false position, \(\mathsf{I}\) moves to a position \(\langle[v^{\prime}],\psi\rangle\) and then to a position \(\langle v^{\prime\prime},\psi\rangle\) which is a false position. Furthermore, \(M_{n}^{\mathrm{sig}^{\mathsf{I}}\langle v,\Box\psi\rangle},v^{\prime\prime}\not\models\psi\) and \(\mathrm{sig}^{\mathsf{I}}\langle v,\Box\psi\rangle\geq\mathrm{sig}^{\mathsf{I}}\langle v^{\prime\prime},\psi\rangle\). * Suppose the game is at the position \(\langle v,\neg\psi\rangle\). If \(\mathsf{I}\) is \(\mathsf{V}\) and \(\langle v,\neg\psi\rangle\) is a true position, then after any move \(\langle v^{\prime},\psi\rangle\) of \(\mathsf{II}\), the players switch roles and we have \(M_{n}^{\mathrm{sig}^{\mathsf{I}}\langle v,\neg\psi\rangle},v^{\prime}\not\models\psi\). By the definition of the signatures, \(\mathrm{sig}^{\mathsf{I}}\langle v,\neg\psi\rangle\geq\mathrm{sig}^{\mathsf{I}}\langle v^{\prime},\psi\rangle\). If \(\mathsf{I}\) is \(\mathsf{R}\) and \(\langle v,\neg\psi\rangle\) is a false position, \(\mathsf{I}\) moves to a position \(\langle v^{\prime},\psi\rangle\) which is a true position and switches roles with \(\mathsf{II}\). Furthermore, \(M_{n}^{\mathrm{sig}^{\mathsf{I}}\langle v,\neg\psi\rangle},v^{\prime}\models\psi\) and \(\mathrm{sig}^{\mathsf{I}}\langle v,\neg\psi\rangle\geq\mathrm{sig}^{\mathsf{I}}\langle v^{\prime},\psi\rangle\). * Suppose the game is at the position \(\langle v,\psi_{1}\to\psi_{2}\rangle\). First, suppose \(\mathsf{I}\) is \(\mathsf{V}\) and \(\langle v,\psi_{1}\to\psi_{2}\rangle\) is a true position.
After II moves to \(\langle v^{\prime},\psi_{1}?\psi_{2}\rangle\), I moves to \(\langle v^{\prime},\psi_{2}\rangle\) if it is a true position. Otherwise, I moves to \(\langle v^{\prime},\psi_{1}\rangle\) and switches roles; in this case, \(\langle v^{\prime},\psi_{1}\rangle\) is a false position. Either way, \(\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{1}\rightarrow\psi_{2}\rangle\geq\mathrm{sig}^{\mathsf{I}}\langle v^{\prime},\psi_{i}\rangle\). If I is R and \(\langle v,\psi_{1}\rightarrow\psi_{2}\rangle\) is a false position, I moves to a position \(\langle v^{\prime},\psi_{1}?\psi_{2}\rangle\) such that \(\langle v^{\prime},\psi_{1}\rangle\) is a true position and \(\langle v^{\prime},\psi_{2}\rangle\) is a false position. Any answer of II satisfies our requirements. * Suppose there is \(j\) such that \(Z_{i}=Y_{j}\), and suppose the game is at \(\langle v,\eta_{j}^{\prime}Y_{j}.\chi_{j}\rangle\) or at \(\langle v,Y_{j}\rangle\). Then the players must move to \(\langle v,\chi_{j}\rangle\). We have that \(\mathrm{sig}^{\mathsf{I}}\langle v,\eta_{j}^{\prime}Y_{j}.\chi_{j}\rangle=_{j-1}\mathrm{sig}^{\mathsf{I}}\langle v,Y_{j}\rangle=_{j-1}\mathrm{sig}^{\mathsf{I}}\langle v,\chi_{j}\rangle\) and \(\mathrm{sig}^{\mathsf{I}}\langle v,Y_{j}\rangle(j)>\mathrm{sig}^{\mathsf{I}}\langle v,\chi_{j}\rangle(j)\). * Suppose there is no \(j\) such that \(Z_{i}=Y_{j}\), and suppose the game is at \(\langle v,\eta_{i}Z_{i}.\psi_{i}\rangle\) or at \(\langle v,Z_{i}\rangle\). Then the players must move to \(\langle v,\psi_{i}\rangle\). We have that \(\mathrm{sig}^{\mathsf{I}}\langle v,\eta_{i}Z_{i}.\psi_{i}\rangle=\mathrm{sig}^{\mathsf{I}}\langle v,Z_{i}\rangle=\mathrm{sig}^{\mathsf{I}}\langle v,\psi_{i}\rangle\). I's strategy is winning. On finite plays, I wins by the construction of the strategy: I has the role of V at true positions of the form \(\langle v,P\rangle\), I has the role of V at true positions \(\langle v,\psi\rangle\) where \(v\in W^{\perp}\), and I has the role of R at false positions of the form \(\langle v,P\rangle\). Now, consider an infinite play \(\langle w_{0},\varphi_{0}\rangle,\langle w_{1},\varphi_{1}\rangle,\ldots\), and let \(i\) be the smallest number in \(\{1,\ldots,n\}\) such that \(\eta_{i}Z_{i}\) is an infinitely often regenerated fixed-point operator. Suppose, for a contradiction, that there is \(j\in\{1,\ldots,m\}\) such that \(Z_{i}=Y_{j}\). Let \(k_{1},k_{2},\ldots\) be the positions where \(Y_{j}\) is regenerated; that is, all the positions \(\langle w_{k_{l}},\varphi_{k_{l}}\rangle\) with \(\varphi_{k_{l}}=Y_{j}\). Without loss of generality, we suppose that for all \(i^{\prime}<i\) no \(Z_{i^{\prime}}\) is regenerated after the \(k_{1}\)th position of the play. The move from \(\langle w_{k_{l}},Y_{j}\rangle\) to \(\langle w_{k_{l}+1},\chi_{j}\rangle\) causes a strict decrease in the signature. The other moves between \(k_{l}+1\) and \(k_{l+1}\) cannot cancel this decrease, since either the signature does not change or one of the first \(j\) components of the signature is reduced. Therefore the sequence of signatures \[\mathrm{sig}^{\mathsf{I}}\langle w_{k_{1}},Y_{j}\rangle,\mathrm{sig}^{\mathsf{I}}\langle w_{k_{2}},Y_{j}\rangle,\mathrm{sig}^{\mathsf{I}}\langle w_{k_{3}},Y_{j}\rangle,\ldots\] is strictly decreasing. This is a contradiction, as the signatures are well-ordered. Therefore there is no \(j\) such that \(Z_{i}=Y_{j}\), and so I wins the play. A strategy for II. If we suppose \(M,w\not\models\varphi\), we can define a winning strategy for II similar to the strategy for I defined above.
The main difference is that we need to consider II-signatures, denoting approximants for II's variables. We leave the details to the reader. ## 4 The collapse to modal logic over \(\mathsf{CS5}\) frames ### A short lemma To prove the collapse over \(\mathsf{S5}\) frames, Alberucci and Facchini use the following result: if \(M=\langle W,R,V\rangle\) is a Kripke model over an \(\mathsf{S5}\) frame, then \(wRv\) implies \(w\models\triangle\varphi\) iff \(v\models\triangle\varphi\). We cannot prove the same over \(\mathsf{CS5}\) frames, but the following lemma will suffice: **Lemma 3**.: _Let \(M=\langle W,W^{\perp},\preceq,\equiv,V\rangle\) be a \(\mathsf{CS5}\) model and \(w\preceq;\equiv w^{\prime}\). Then_ \[M,w\models\triangle\varphi\text{ implies }M,w^{\prime}\models\triangle\varphi,\] _where \(\triangle\in\{\square,\Diamond\}\)._ Proof.: Fix a \(\mathsf{CS5}\) model \(M=\langle W,W^{\perp},\preceq,\equiv,V\rangle\). First note that \(\preceq;\equiv\) is a transitive relation. To see that, suppose \(w\preceq w^{\prime}\equiv v\preceq v^{\prime}\equiv u\). By backward confluence, there is \(u^{\prime}\) such that \(w^{\prime}\preceq u^{\prime}\equiv v^{\prime}\). By the transitivity of \(\preceq\) and \(\equiv\), \(w\preceq u^{\prime}\equiv u\). Also note that the worlds seen in an evaluation game are \(\preceq;\equiv\)-accessible from the previously seen worlds. That is, if the players have gone through a position \(\langle v,\psi\rangle\) and later \(\langle v^{\prime},\psi^{\prime}\rangle\), then \(v\preceq;\equiv v^{\prime}\). This happens because \(\preceq\) and \(\equiv\) are reflexive relations and \(\preceq;\equiv\) is transitive. Now, suppose \(w\preceq;\equiv w^{\prime}\) and \(M,w\models\Diamond\varphi\). For all \(v\succeq w\), there is \(u\equiv v\) such that \(M,u\models\varphi\). Let \(v,v^{\prime}\) be such that \(w\preceq v\equiv w^{\prime}\preceq v^{\prime}\). By backward confluence, there is \(u\) such that \(v\preceq u\equiv v^{\prime}\). By the transitivity of \(\preceq\), \(w\preceq u\). Since \(u\succeq w\), there is \(u^{\prime}\equiv u\) such that \(M,u^{\prime}\models\varphi\). As \(v^{\prime}\equiv u\equiv u^{\prime}\), \(v^{\prime}\equiv u^{\prime}\). So for all \(v^{\prime}\succeq w^{\prime}\) there is \(u^{\prime}\equiv v^{\prime}\) such that \(M,u^{\prime}\models\varphi\). That is, \(M,w^{\prime}\models\Diamond\varphi\). Similarly, suppose \(w\preceq;\equiv w^{\prime}\) and \(M,w\models\Box\varphi\). Then \(w\preceq;\equiv u\) implies \(M,u\models\varphi\). Let \(w^{\prime}\preceq;\equiv u^{\prime}\); then \(w\preceq;\equiv u^{\prime}\) by the transitivity of \(\preceq;\equiv\). So \(M,u^{\prime}\models\varphi\). Thus \(M,w^{\prime}\models\Box\varphi\). ### The collapse We first show that the fixed-points of modal formulas can be reached in two steps. Our proof is by contradiction. This contradiction is not essential, but makes the proof easier to understand. **Lemma 4**.: _Let \(M=\langle W,W^{\perp},\preceq,\equiv,V\rangle\) be a \(\mathsf{CS5}\) model and \(\varphi\) be a modal formula where \(X\) is positive and appears only once in \(\varphi\). Then_ \[\|\mu X.\varphi\|^{M}=\|\varphi^{2}(\bot)\|^{M}\text{ and }\|\nu X.\varphi\|^{M}=\|\varphi^{2}(\top)\|^{M}.\] Proof.: We first show that \(\|\nu X.\varphi\|=\|\varphi^{2}(\top)\|\). Fix a \(\mathsf{CS5}\) model \(M=\langle W,W^{\perp},\preceq,\equiv,V\rangle\), and suppose \(\nu X.\varphi\) is a well-named \(\mu\)-formula.
We can also suppose that \(\varphi\) is of the form \(\alpha(\triangle\beta(X))\) with \(\triangle\in\{\square,\Diamond\}\). We show that \(\nu X.\varphi\) is equivalent to \(\varphi^{2}(\top)\). As \(X\) is positive in \(\varphi(X)\), we have that \(\|\varphi^{3}(\top)\|\subseteq\|\varphi^{2}(\top)\|\). So we only need to show that \(\|\varphi^{2}(\top)\|\subseteq\|\varphi^{3}(\top)\|\). For a contradiction, suppose that \(w\in\|\varphi^{2}(\top)\|\) and \(w\not\in\|\varphi^{3}(\top)\|\). Then I has a winning strategy \(\sigma\) for the evaluation game \(\mathcal{G}_{2}=\mathcal{G}(M,w\models\varphi^{2}(\top))\); and II has a winning strategy \(\tau\) for the evaluation game \(\mathcal{G}_{3}=\mathcal{G}(M,w\models\varphi^{3}(\top))\). We use \(\sigma\) and \(\tau\) to define strategies \(\sigma^{\prime}\) for \(\mathsf{I}\) in \(\mathcal{G}_{3}\) and \(\tau^{\prime}\) for \(\mathsf{II}\) in \(\mathcal{G}_{2}\). Remember that \(\mathsf{I}\) starts in the role of \(\mathsf{V}\) and \(\mathsf{II}\) starts in the role of \(\mathsf{R}\). We have the players use analogous strategies in both games. Suppose the players are in positions \(\langle v,\psi(\top)\rangle\) in \(\mathcal{G}_{2}\) and \(\langle v,\psi(\varphi(\top))\rangle\) in \(\mathcal{G}_{3}\). Both positions have the same owner, in the same role. That is, if it is \(\mathsf{II}\)'s turn in one game, it is \(\mathsf{II}\)'s turn in both games; and if the owner's role is \(\mathsf{V}\) in one game, their role is \(\mathsf{V}\) in both games. For example, suppose \(\mathsf{I}\) is playing the role of \(\mathsf{R}\) and the players are in positions \(\langle v,\neg\psi(\top)\rangle\) and \(\langle v,\neg\psi(\varphi(\top))\rangle\) in \(\mathcal{G}_{2}\) and \(\mathcal{G}_{3}\). If \(\mathsf{I}\) plays \(\sigma(\langle v,\neg\psi(\top)\rangle)=\langle v^{\prime},\psi(\top)\rangle\) in \(\mathcal{G}_{2}\), they play \(\langle v^{\prime},\psi(\varphi(\top))\rangle\) in \(\mathcal{G}_{3}\). After these moves, \(\mathsf{I}\) is playing the role of \(\mathsf{V}\) in both games. The players continue both games following the strategies described above until they get to a position of the form \(\langle v,P\rangle\) in both games; or they get to positions of the form \(\langle w^{\prime\prime},\triangle\beta(\top)\rangle\) in \(\mathcal{G}_{2}\) and \(\langle w^{\prime\prime},\triangle\beta(\varphi(\top))\rangle\) in \(\mathcal{G}_{3}\). _Case 1._ Suppose the players are in a position \(\langle v,P\rangle\) in both games. Without loss of generality, suppose \(\mathsf{I}\) is \(\mathsf{V}\) and \(\mathsf{II}\) is \(\mathsf{R}\). As \(\sigma\) is winning for \(\mathsf{I}\) in \(\mathcal{G}_{2}\), \(v\in\|P\|\). As \(\tau\) is winning for \(\mathsf{II}\) in \(\mathcal{G}_{3}\), \(v\not\in\|P\|\). And so we have a contradiction.
A similar contradiction is reached if \(\mathsf{I}\) is \(\mathsf{R}\) and \(\mathsf{II}\) is \(\mathsf{V}\); and likewise if the players reach positions \(\langle v,\psi\rangle\) with \(v\in W^{\perp}\), since then the player in the role of \(\mathsf{V}\) wins both games. _Case 2._ Suppose the players are at positions \(\langle w^{\prime\prime},\triangle\beta(\top)\rangle\) in \(\mathcal{G}_{2}\) and \(\langle w^{\prime\prime},\triangle\beta(\varphi(\top))\rangle\) in \(\mathcal{G}_{3}\). The worlds visited in the two games are \(\preceq;\equiv\)-accessible from the previously visited worlds, so Lemma 3 applies to the \(\triangle\)-formulas at these worlds; comparing the winning strategies \(\sigma\) and \(\tau\) at these positions again yields a contradiction. In both cases we obtain a contradiction, so \(\|\varphi^{2}(\top)\|\subseteq\|\varphi^{3}(\top)\|\), and thus \(\|\nu X.\varphi\|=\|\varphi^{2}(\top)\|\). The proof that \(\|\mu X.\varphi\|=\|\varphi^{2}(\bot)\|\) is similar. By induction on the number of fixed-point operators, Lemma 4 yields the collapse: **Theorem 5**.: _Over \(\mathsf{CS5}\) models, every \(\mu\)-formula is equivalent to a modal formula._ ## 5 Completeness for \(\mu\)CS5 In this section, we prove: **Theorem 6**.: _The logic \(\mu\)_CS5 _is complete over_ CS5 _frames. That is, for every closed \(\mu\)-formula \(\varphi\), \(\mu\)_CS5 _proves \(\varphi\) iff \(\varphi\) is true at all_ CS5 _frames._ Proof.: Let \(\varphi\) be a closed \(\mu\)-formula. If \(\mu\)CS5 proves \(\varphi\), then \(\varphi\) is true at all CS5 frames by Lemma 7. Now suppose \(\varphi\) is true at all CS5 frames. In particular, \(\varphi\) holds at all worlds of the canonical model. By the Truth Lemma, \(\varphi\in\Lambda\) for every \(\mu\)CS5-theory \(\Lambda\). As \(\mu\)CS5 itself is a \(\mu\)CS5-theory, \(\mu\)CS5 proves \(\varphi\). We also prove completeness results for \(\mu\mathsf{IS5}\) and \(\mu\mathsf{GS5}\). ### Soundness **Lemma 7**.: _If \(\varphi\in\mu\mathsf{CS5}\), then \(\varphi\) holds over all \(\mathsf{CS5}\) frames._ Proof.: The soundness arguments for the fixed-point axioms and induction rules are standard. See [1] for the axioms in CS4. We only prove here the soundness of \[5:=\Diamond\varphi\to\square\Diamond\varphi\wedge\Diamond\square\varphi\to\square\varphi.\] Fix a model \(M=\langle W,W^{\perp},\preceq,\equiv,V\rangle\) over a CS5 frame. Suppose \(w\preceq w^{\prime}\models\Diamond\varphi\). Let \(v,v^{\prime},u\) be such that \(w^{\prime}\preceq v\equiv v^{\prime}\preceq u\). We want to show that there is \(u^{\prime}\equiv u\) such that \(u^{\prime}\models\varphi\). By backward confluence, there is \(s\) such that \(v\preceq s\equiv u\). By the transitivity of \(\preceq\), \(w^{\prime}\preceq s\). So there is \(s^{\prime}\equiv s\) such that \(s^{\prime}\models\varphi\). But \(\equiv\) is symmetric and transitive, so \(u\equiv s^{\prime}\). Therefore \(w^{\prime}\models\square\Diamond\varphi\). We conclude \(w\models\Diamond\varphi\to\square\Diamond\varphi\). Now, suppose \(w\preceq w^{\prime}\models\Diamond\square\varphi\). We want to show that for all \(v\succeq w^{\prime}\) and all \(v^{\prime}\equiv v\), \(v^{\prime}\models\varphi\). By our hypothesis, there is \(u\) such that \(w^{\prime}\preceq v\equiv u\models\square\varphi\). As \(\equiv\) is symmetric and transitive, \(u\equiv v^{\prime}\); since \(u\preceq u\equiv v^{\prime}\) and \(u\models\square\varphi\), we get \(v^{\prime}\models\varphi\). Therefore \(w^{\prime}\models\square\varphi\). We conclude \(w\models\Diamond\square\varphi\to\square\varphi\). ### Truth Lemma for \(\mu\)CS5 The canonical model. We say \(\Gamma\) is a \(\mu\)CS5-theory iff it is closed under all rules of \(\mu\)CS5 and, if \(\varphi\vee\psi\in\Gamma\), then \(\varphi\in\Gamma\) or \(\psi\in\Gamma\).
Denote by \(\Gamma^{\Diamond}\) the set \(\{\varphi\mid\Diamond\varphi\in\Gamma\}\) and by \(\Gamma^{\square}\) the set \(\{\varphi\mid\square\varphi\in\Gamma\}\). Denote by \(\mathcal{L}_{\mu}\) the set of all closed \(\mu\)-formulas. Define the canonical model \(M_{c}\mathrel{\mathop{:}}=\langle W_{c},W_{c}^{\perp},\preceq_{c},\equiv_{c},V_{c}\rangle\) by: * \(W_{c}\mathrel{\mathop{:}}=\{\Gamma\mid\Gamma\text{ is a }\mu\text{CS5-theory}\}\); * \(W_{c}^{\perp}=\{\mathcal{L}_{\mu}\}\); * \(\Gamma\preceq_{c}\Delta\) iff \(\Gamma\subseteq\Delta\); * \(\Gamma\equiv_{c}\Delta\) iff \(\Delta\subseteq\Gamma^{\Diamond}\) and \(\Gamma^{\square}\subseteq\Delta\); and * \(\Gamma\in V_{c}(P)\) iff \(P\in\Gamma\). **Lemma 8**.: \(M_{c}\) _is a_ CS5 _model._ Proof.: Since the subset relation \(\subseteq\) is a preorder, \(\preceq_{c}\) is a reflexive and transitive relation over \(W_{c}\). We need a little more work to show that \(\equiv_{c}\) is an equivalence relation over \(W_{c}\): * If \(\varphi\in\Gamma\), then \(\Diamond\varphi\in\Gamma\) by \(T\) and \(\mathbf{MP}\), so \(\Gamma\subseteq\Gamma^{\Diamond}\). Also by \(T\) and \(\mathbf{MP}\), \(\Gamma^{\square}\subseteq\Gamma\). So \(\Gamma\equiv_{c}\Gamma\). * Let \(\Gamma\equiv_{c}\Delta\equiv_{c}\Sigma\). Then \(\Delta\subseteq\Gamma^{\Diamond}\), \(\Gamma^{\square}\subseteq\Delta\), \(\Sigma\subseteq\Delta^{\Diamond}\), and \(\Delta^{\square}\subseteq\Sigma\). Suppose \(\varphi\in\Gamma^{\square}\), then \(\square\varphi\in\Gamma\) and, by \(4\) and \(\mathbf{MP}\), \(\square\square\varphi\in\Gamma\), so \(\square\varphi\in\Gamma^{\square}\subseteq\Delta\). Thus \(\varphi\in\Delta^{\square}\subseteq\Sigma\). Suppose \(\varphi\in\Sigma\), then \(\varphi\in\Delta^{\Diamond}\) and \(\Diamond\varphi\in\Delta\subseteq\Gamma^{\Diamond}\). So \(\Diamond\Diamond\varphi\in\Gamma\). By \(4\) and \(\mathbf{MP}\), \(\Diamond\varphi\in\Gamma\) and so \(\varphi\in\Gamma^{\Diamond}\). Therefore \(\Gamma\equiv_{c}\Sigma\). * Let \(\Gamma\equiv_{c}\Delta\), then \(\Delta\subseteq\Gamma^{\Diamond}\) and \(\Gamma^{\square}\subseteq\Delta\). We want to show \(\Gamma\subseteq\Delta^{\Diamond}\) and \(\Delta^{\square}\subseteq\Gamma\). Let \(\varphi\in\Gamma\). By \(T\) and \(\mathbf{MP}\), \(\Diamond\varphi\in\Gamma\). By \(5\) and \(\mathbf{MP}\), \(\square\Diamond\varphi\in\Gamma\), so \(\Diamond\varphi\in\Gamma^{\square}\subseteq\Delta\). Thus \(\varphi\in\Delta^{\Diamond}\). Now, suppose \(\varphi\in\Delta^{\square}\). So \(\square\varphi\in\Delta\subseteq\Gamma^{\Diamond}\). Thus \(\Diamond\square\varphi\in\Gamma\). By \(5\) and \(\mathbf{MP}\), \(\square\varphi\in\Gamma\) and so \(\varphi\in\Gamma\). Therefore \(\Delta\equiv_{c}\Gamma\). \(M_{c}\) also satisfies the backward confluence requirement. Suppose \(\Gamma\equiv_{c}\Delta\preceq_{c}\Sigma\). Then \(\Delta\subseteq\Gamma^{\Diamond}\), \(\Gamma^{\square}\subseteq\Delta\), and \(\Delta\subseteq\Sigma\). Let \(\Phi\) be the closure under the deduction rules of \(\Gamma\cup\{\Diamond\varphi\mid\varphi\in\Sigma\}\). Trivially, \(\Gamma\subseteq\Phi\) and so \(\Gamma\preceq_{c}\Phi\). Now we want to show that \(\Phi\equiv_{c}\Sigma\). If \(\varphi\in\Sigma\) then \(\Diamond\varphi\in\Phi\) and so \(\varphi\in\Phi^{\Diamond}\). Suppose \(\varphi\in\Phi^{\square}\). Then \(\square\varphi\in\Phi\) and so \(\varphi\in\Phi\) by \(T\) and \(\mathbf{MP}\). If \(\square\varphi\in\Gamma\), then \(\varphi\in\Gamma^{\square}\subseteq\Delta\subseteq\Sigma\). If \(\square\varphi\not\in\Gamma\), then there are \(\varphi_{i}\in\Gamma\) and \(\psi_{j}\in\Sigma\) such that \(\varphi_{1},\dots,\varphi_{m},\Diamond\psi_{1},\dots,\Diamond\psi_{n}\vdash\square\varphi\).
By \(\mathbf{Nec}\), each \(\square\varphi_{i}\in\Gamma\) too. So \(\varphi_{i}\in\Gamma^{\square}\subseteq\Delta\subseteq\Sigma\). As \(\psi_{j}\in\Sigma\), \(T\) implies that \(\Diamond\psi_{j}\in\Sigma\). As \(\Sigma\) is closed under the derivation rules, \(\square\varphi\in\Sigma\). By \(T\), \(\varphi\in\Sigma\). Therefore \(\Phi^{\square}\subseteq\Sigma\). We conclude \(\Phi\equiv_{c}\Sigma\). At last, we show that \(\preceq_{c}\) preserves the truth of propositions. Suppose \(\Gamma\preceq_{c}\Delta\) and \(\Gamma\in V_{c}(P)\). So \(P\in\Gamma\subseteq\Delta\), and thus \(\Delta\in V_{c}(P)\) too. The Truth Lemma for modal formulas. We first show the Truth Lemma for formulas without fixed-point operators. In particular, this implies that \(\mathsf{CS5}\) is complete over \(\mathsf{CS5}\) frames. **Lemma 9**.: _For every formula \(\varphi\) without fixed-point operators,_ \[M_{c},\Gamma\models\varphi\text{ iff }\varphi\in\Gamma.\] Proof.: The proof is by structural induction on modal formulas. * If \(\varphi=P\), then the lemma holds by the definition of \(M_{c}\). * If \(\varphi=\bot\), then the lemma holds by the definition of the semantics and of \(W_{c}^{\bot}\). * If \(\varphi=\psi_{1}\land\psi_{2}\), then \[\Gamma\models\psi_{1}\land\psi_{2}\] iff \(\Gamma\models\psi_{1}\) and \(\Gamma\models\psi_{2}\) iff \(\psi_{1}\in\Gamma\) and \(\psi_{2}\in\Gamma\) iff \(\psi_{1}\land\psi_{2}\in\Gamma\). * If \(\varphi=\psi_{1}\lor\psi_{2}\), then \[\Gamma\models\psi_{1}\lor\psi_{2}\] iff \(\Gamma\models\psi_{1}\) or \(\Gamma\models\psi_{2}\) iff \(\psi_{1}\in\Gamma\) or \(\psi_{2}\in\Gamma\) iff \(\psi_{1}\lor\psi_{2}\in\Gamma\). Here we use that if \(\psi_{1}\vee\psi_{2}\in\Gamma\) then \(\psi_{1}\in\Gamma\) or \(\psi_{2}\in\Gamma\), as \(\Gamma\) is a \(\mu\)CS5-theory. * Let \(\varphi:=\psi_{1}\rightarrow\psi_{2}\). First suppose that \(\psi_{1}\rightarrow\psi_{2}\in\Gamma\). Let \(\Delta\) be a theory such that \(\Gamma\preceq_{c}\Delta\models\psi_{1}\). By the induction hypothesis, \(\psi_{1}\in\Delta\). As \(\Gamma\preceq_{c}\Delta\), \(\psi_{1}\rightarrow\psi_{2}\in\Delta\). By \(\mathbf{MP}\), \(\psi_{2}\in\Delta\). So \(\Gamma\models\psi_{1}\rightarrow\psi_{2}\). Now suppose that \(\psi_{1}\rightarrow\psi_{2}\not\in\Gamma\). Take \(\Sigma\) to be the closure of \(\Gamma\cup\{\psi_{1}\}\) under the derivation rules. If \(\psi_{2}\in\Sigma\), then there is \(\chi\in\Gamma\) such that \((\chi\wedge\psi_{1})\rightarrow\psi_{2}\in\mu\)CS5. And so \(\chi\rightarrow(\psi_{1}\rightarrow\psi_{2})\in\mu\)CS5. As \(\chi\in\Gamma\), this means \(\psi_{1}\rightarrow\psi_{2}\in\Gamma\), a contradiction. Therefore \(\psi_{2}\not\in\Sigma\). By the induction hypothesis, \(\Sigma\models\psi_{1}\) and \(\Sigma\not\models\psi_{2}\). As \(\Gamma\preceq_{c}\Sigma\), \(\Gamma\not\models\psi_{1}\rightarrow\psi_{2}\). * Let \(\varphi=\neg\psi\). This case follows by the equivalence between \(\neg\psi\) and \(\psi\rightarrow\bot\) over intuitionistic logic. * Let \(\varphi=\Box\psi\). First suppose that \(\Box\psi\in\Gamma\). Let \(\Gamma\preceq_{c}\Delta\equiv_{c}\Sigma\). Then \(\Box\psi\in\Delta\) and \(\psi\in\Sigma\). By the induction hypothesis, \(\Sigma\models\psi\). So \(\Gamma\models\Box\psi\). Now suppose that \(\Box\psi\not\in\Gamma\). Define \(\Sigma:=\Gamma^{\Box}\). By definition, \(\psi\not\in\Sigma\). By the induction hypothesis, \(\Sigma\not\models\psi\). Now we show that \(\Gamma\equiv_{c}\Sigma\). \(\Gamma^{\Box}\subseteq\Sigma\) follows by definition. Let \(\theta\in\Sigma\). Then \(\Box\theta\in\Gamma\).
By two applications of \(T\), \(\Diamond\theta\in\Gamma\). So \(\theta\in\Gamma^{\Diamond}\). So \(\Gamma\equiv_{c}\Sigma\). Therefore \(\Gamma\preceq_{c}\Gamma\equiv_{c}\Sigma\not\models\psi\), and thus \(\Gamma\not\models\Box\psi\). * Let \(\varphi=\Diamond\psi\). First suppose that \(\Diamond\psi\in\Gamma\). Let \(\Delta\) be a theory such that \(\Gamma\preceq_{c}\Delta\). Furthermore, suppose \(\Delta\) is consistent (if \(\Delta\) is inconsistent, then \(\Delta\in W_{c}^{\perp}\) and we may take \(\Sigma:=\Delta\)). Let \(\Sigma\) be the closure under the derivation rules of \(\Delta^{\Box}\cup\{\psi\}\). We want to show that \(\Delta\equiv_{c}\Sigma\). \(\Delta^{\Box}\subseteq\Sigma\) holds by definition. Let \(\theta\in\Sigma\), then \(\chi\wedge\psi\rightarrow\theta\in\mu\)CS5 for some \(\chi\in\Delta^{\Box}\). Thus \(\chi\rightarrow(\psi\rightarrow\theta)\in\mu\)CS5 and \(\Box\chi\rightarrow\Box(\psi\rightarrow\theta)\in\mu\)CS5. So \(\Box(\psi\rightarrow\theta)\in\Delta\). By \(K\), \(\Diamond\psi\rightarrow\Diamond\theta\in\Delta\). As \(\Diamond\psi\in\Gamma\subseteq\Delta\), \(\Diamond\theta\in\Delta\). So \(\theta\in\Delta^{\Diamond}\); hence \(\Sigma\subseteq\Delta^{\Diamond}\) and \(\Delta\equiv_{c}\Sigma\). As \(\psi\in\Sigma\), the induction hypothesis gives \(\Sigma\models\psi\). Therefore \(\Gamma\models\Diamond\psi\). Now suppose that \(\Diamond\psi\not\in\Gamma\). \(\Gamma\) is consistent (otherwise every formula is in \(\Gamma\), and in particular \(\Diamond\psi\)). Suppose there is \(\Delta\) such that \(\Gamma\equiv_{c}\Delta\) and \(\psi\in\Delta\). By the definition of \(\equiv_{c}\), \(\Delta\subseteq\Gamma^{\Diamond}\), so \(\psi\in\Gamma^{\Diamond}\). Therefore \(\Diamond\psi\in\Gamma\), a contradiction. We conclude that for all \(\Delta\), if \(\Gamma\preceq_{c}\Gamma\equiv_{c}\Delta\), then \(\psi\not\in\Delta\). By the induction hypothesis, \(\Delta\not\models\psi\). Therefore \(\Gamma\not\models\Diamond\psi\). The provable collapse. We now show that any \(\mu\)-formula is provably equivalent to a modal formula in \(\mu\)CS5. We first prove a technical lemma showing that monotonicity for formulas without fixed-point operators is provable. We will be able to lift this restriction after we prove completeness for \(\mu\)CS5. **Lemma 10**.: _Let \(\Lambda\) be a \(\mu\)CS5-theory. Suppose \(A\to B\in\Lambda\) and \(\varphi(X)\) is a formula without fixed-point operators. If \(X\) is positive in \(\varphi(X)\), then \(\varphi(A)\rightarrow\varphi(B)\in\Lambda\). If \(X\) is negative in \(\varphi(X)\), then \(\varphi(B)\rightarrow\varphi(A)\in\Lambda\)._ Proof.: We prove this lemma using structural induction. We prove only the cases where \(X\) is positive, as the cases where \(X\) is negative are similar. * For \(\varphi=P\) and \(\varphi=X\), the result is immediate. * Let \(\varphi=\psi\vee\theta\). Then \(\psi(A)\to\psi(B)\in\Lambda\) and \(\theta(A)\to\theta(B)\in\Lambda\). As \([(\psi(A)\to\psi(B))\wedge(\theta(A)\to\theta(B))]\to(\varphi(A)\to\varphi(B))\) is a tautology, \(\varphi(A)\to\varphi(B)\in\Lambda\) follows by \(\mathbf{MP}\). * Let \(\varphi=\psi\wedge\theta\). Then \(\psi(A)\to\psi(B)\in\Lambda\) and \(\theta(A)\to\theta(B)\in\Lambda\). As \([(\psi(A)\to\psi(B))\wedge(\theta(A)\to\theta(B))]\to(\varphi(A)\to\varphi(B))\) is a tautology, \(\varphi(A)\to\varphi(B)\in\Lambda\) follows by \(\mathbf{MP}\). * Let \(\varphi=\Box\psi\). Suppose \(\psi(A)\to\psi(B)\in\Lambda\). Then \(\Box(\psi(A)\to\psi(B))\in\Lambda\) by \(\mathbf{Nec}\). So \(\Box\psi(A)\to\Box\psi(B)\in\Lambda\) by \(K\) and \(\mathbf{MP}\). * Let \(\varphi=\Diamond\psi\). Suppose \(\psi(A)\to\psi(B)\in\Lambda\). Then \(\Box(\psi(A)\to\psi(B))\in\Lambda\) by \(\mathbf{Nec}\). So \(\Diamond\psi(A)\to\Diamond\psi(B)\in\Lambda\) by \(K\) and \(\mathbf{MP}\). * Let \(\varphi=\neg\psi\). Then \(X\) is negative in \(\psi\) and \(\psi(B)\to\psi(A)\in\Lambda\).
Since \((\psi(B)\to\psi(A))\to(\neg\psi(A)\to\neg\psi(B))\) is a tautology, \(\neg\psi(A)\to\neg\psi(B)\in\Lambda\) too. * Let \(\varphi=\psi\to\theta\). Then \(X\) is negative in \(\psi\) and positive in \(\theta\). So \(\psi(B)\to\psi(A)\in\Lambda\) and \(\theta(A)\to\theta(B)\in\Lambda\). Since \([(\psi(B)\to\psi(A))\wedge(\theta(A)\to\theta(B))]\to[(\psi(A)\to\theta(A))\to(\psi(B)\to\theta(B))]\) is a tautology, \(\varphi(A)\to\varphi(B)\in\Lambda\) too. Now, we show that fixed-points of modal formulas are equivalent to modal formulas over \(\mu\mathsf{CS5}\). This is a formal version of Lemma 4. **Lemma 11**.: _If \(\varphi\) has no fixed-point operators, then \(\nu X.\varphi\leftrightarrow\varphi(\varphi(\top))\in\mu\mathsf{CS5}\) and \(\mu X.\varphi\leftrightarrow\varphi(\varphi(\bot))\in\mu\mathsf{CS5}\)._ Proof.: \(\nu X.\varphi\to\varphi(\nu X.\varphi)\) holds by \(\nu FP\). Since \(X\) is positive in \(\varphi\) and \(\nu X.\varphi\to\top\in\mu\mathsf{CS5}\), Lemma 10 gives \(\varphi(\nu X.\varphi)\to\varphi(\top)\in\mu\mathsf{CS5}\). By \(\mathbf{MP}\), \(\nu X.\varphi\to\varphi(\top)\in\mu\mathsf{CS5}\). By the same argument, \(\nu X.\varphi\to\varphi(\varphi(\top))\in\mu\mathsf{CS5}\). Now, as \(\varphi(\varphi(\top))\to\varphi(\varphi(\varphi(\top)))\) is valid on any \(\mathsf{CS5}\) model, it is provable in \(\mathsf{CS5}\); by \(\nu\mathbf{Ind}\), \(\varphi(\varphi(\top))\to\nu X.\varphi\in\mu\mathsf{CS5}\). So \(\nu X.\varphi\leftrightarrow\varphi(\varphi(\top))\in\mu\mathsf{CS5}\). The proof for \(\mu X.\varphi\) is similar. Similar to how we proved Theorem 5, we use Lemma 11 to prove: **Theorem 12**.: _Any \(\mu\)-formula is provably equivalent to a modal formula._ The Truth Lemma for fixed-point formulas. With the provable collapse over \(\mathsf{CS5}\), we are now able to extend the Truth Lemma to all \(\mu\)-formulas. **Lemma 13**.: _For every closed \(\mu\)-formula \(\varphi\),_ \[M_{c},\Gamma\models\varphi\text{ iff }\varphi\in\Gamma.\] Proof.: We prove this lemma by structural induction on formulas, as we proved Lemma 9. We omit cases already treated above. * Let \(\varphi\) be \(\nu X.\psi(X)\). We want to show that \(M_{c},\Gamma\models\nu X.\psi\) iff \(\nu X.\psi\in\Gamma\). By Theorem 12, \(\nu X.\psi\) is provably equivalent to some modal formula \(\varphi^{\prime}\). So \(\varphi\leftrightarrow\varphi^{\prime}\in\Gamma\). Thus: \[\nu X.\psi\in\Gamma\iff\varphi^{\prime}\in\Gamma\iff M_{c},\Gamma\models\varphi^{\prime}\iff M_{c},\Gamma\models\nu X.\psi.\] The first equivalence holds by \(\mathbf{MP}\), the second by the Truth Lemma for modal formulas (Lemma 9), and the last from the soundness of \(\mu\mathsf{CS5}\). * Let \(\varphi\) be \(\mu X.\psi(X)\). By a proof similar to the paragraph above, \(M_{c},\Gamma\models\mu X.\psi\) iff \(\mu X.\psi\in\Gamma\). ### The fixed-point logics \(\mu\mathsf{IS5}\) and \(\mu\mathsf{GS5}\) The logic \(\mathsf{IS5}\) is obtained by adding to \(\mathsf{CS5}\) the axioms \(N:=\neg\Diamond\bot\), \(FS:=(\Diamond\varphi\rightarrow\Box\psi)\rightarrow\Box(\varphi\rightarrow\psi)\), and \(DP:=\Diamond(\varphi\vee\psi)\rightarrow\Diamond\varphi\vee\Diamond\psi\). An \(\mathsf{IS5}\) frame is a \(\mathsf{CS5}\) frame with no fallible worlds. That is, \(\langle W,W^{\bot},\preceq,\equiv\rangle\) is a \(\mathsf{CS5}\) frame where \(W^{\bot}=\emptyset\). The logic \(\mathsf{GS5}\) is obtained by adding to \(\mathsf{IS5}\) the axiom \(GD:=(\varphi\rightarrow\psi)\vee(\psi\rightarrow\varphi)\). A \(\mathsf{GS5}\) frame is an \(\mathsf{IS5}\) frame which is locally linear: \(w\preceq u\) and \(w\preceq v\) imply \(u\preceq v\) or \(v\preceq u\).
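Since each frame class above is carved out by simple first-order conditions on \(\preceq\) and \(\equiv\), the conditions are easy to check on finite frames. The following is a minimal sketch (Python; the set-of-pairs representation and the example frame are our own illustrative choices, not part of the paper):

```python
from itertools import product

# A minimal sketch: checking the frame conditions of this section on a finite
# bi-relational frame. `pre` encodes the preorder and `eq` the equivalence
# relation, both as sets of pairs over the finite set `worlds`.

def backward_confluent(worlds, pre, eq):
    # w = v <= v' implies there is w' with w <= w' = v' (the CS5 condition)
    for w, v, v2 in product(worlds, repeat=3):
        if (w, v) in eq and (v, v2) in pre:
            if not any((w, w2) in pre and (w2, v2) in eq for w2 in worlds):
                return False
    return True

def locally_linear(worlds, pre):
    # w <= u and w <= v imply u <= v or v <= u (the GS5 condition)
    for w, u, v in product(worlds, repeat=3):
        if (w, u) in pre and (w, v) in pre:
            if (u, v) not in pre and (v, u) not in pre:
                return False
    return True

# Example: two worlds, trivial preorder, one equivalence class.
worlds = {0, 1}
pre = {(0, 0), (1, 1)}
eq = {(0, 0), (0, 1), (1, 0), (1, 1)}
assert backward_confluent(worlds, pre, eq)
assert locally_linear(worlds, pre)
```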
The logics \(\mu\mathsf{IS5}\) and \(\mu\mathsf{GS5}\) are obtained by adding fixed-point axioms and induction rules to \(\mathsf{IS5}\) and \(\mathsf{GS5}\), respectively. Note that all \(\mathsf{IS5}\) frames and \(\mathsf{GS5}\) frames are also \(\mathsf{CS5}\) frames. Therefore the \(\mu\)-calculus also collapses to modal logic over \(\mathsf{IS5}\) frames and \(\mathsf{GS5}\) frames. Using the methods above, we can show: **Theorem 14**.: \(\mu\mathsf{IS5}\) _is complete over \(\mathsf{IS5}\) frames._ **Theorem 15**.: \(\mu\mathsf{GS5}\) _is complete over \(\mathsf{GS5}\) frames._ The canonical models \(M_{c}^{\mu\mathsf{IS5}}\) and \(M_{c}^{\mu\mathsf{GS5}}\) are respectively obtained by restricting the canonical model \(M_{c}^{\mu\mathsf{CS5}}\) to consistent \(\mu\mathsf{IS5}\)-theories and consistent \(\mu\mathsf{GS5}\)-theories.
2305.19558
Look-Ahead Task Offloading for Multi-User Mobile Augmented Reality in Edge-Cloud Computing
Mobile augmented reality (MAR) blends a real scenario with overlaid virtual content, which has been envisioned as one of the ubiquitous interfaces to the Metaverse. Due to the limited computing power and battery life of MAR devices, it is common to offload the computation tasks to edge or cloud servers in close proximity. However, existing offloading solutions developed for MAR tasks suffer from high migration overhead, poor scalability, and short-sightedness when applied in provisioning multi-user MAR services. To address these issues, a MAR service-oriented task offloading scheme is designed and evaluated in edge-cloud computing networks. Specifically, the task interdependency of MAR applications is firstly analyzed and modeled by using directed acyclic graphs. Then, we propose a look-ahead offloading scheme based on a modified Monte Carlo tree (MMCT) search, which can run several multi-step executions in advance to get an estimate of the long-term effect of immediate action. Experiment results show that the proposed offloading scheme can effectively improve the quality of service (QoS) in provisioning multi-user MAR services, compared to four benchmark schemes. Furthermore, it is also shown that the proposed solution is stable and suitable for applications in a highly volatile environment.
Ruxiao Chen, Shuaishuai Guo
2023-05-31T05:03:40Z
http://arxiv.org/abs/2305.19558v1
# Look-Ahead Task Offloading for Multi-User Mobile Augmented Reality in Edge-Cloud Computing ###### Abstract Mobile augmented reality (MAR) blends a real scenario with overlaid virtual content, which has been envisioned as one of the ubiquitous interfaces to the Metaverse. Due to the limited computing power and battery life of MAR devices, it is common to offload the computation tasks to edge or cloud servers in close proximity. However, existing offloading solutions developed for MAR tasks suffer from high migration overhead, poor scalability, and short-sightedness when applied in provisioning multi-user MAR services. To address these issues, a MAR service-oriented task offloading scheme is designed and evaluated in edge-cloud computing networks. Specifically, the task interdependency of MAR applications is firstly analyzed and modeled by using directed acyclic graphs. Then, we propose a look-ahead offloading scheme based on a modified Monte Carlo tree (MMCT) search, which can run several multi-step executions in advance to get an estimate of the long-term effect of immediate action. Experiment results show that the proposed offloading scheme can effectively improve the quality of service (QoS) in provisioning multi-user MAR services, compared to four benchmark schemes. Furthermore, it is also shown that the proposed solution is stable and suitable for applications in a highly volatile environment. ## I Introduction Mobile augmented reality (MAR) has been widely applied in various fields such as gaming and education, as it can combine digital content with the physical world in real-time. With the development of the fifth generation (5G) communication networks and edge-cloud computing, MAR is gaining increasing attention from both industry and academia. It has been envisioned as one of the ubiquitous interfaces to the Metaverse. In pursuit of a high-quality user experience, MAR generally runs on portable devices such as mobile phones, AR headsets, and AR glasses. However, MAR tasks' requirements on latency, mobility, and endurance are stringent, and the limited computation capability and battery life of mobile devices greatly hinder its wide application. To solve this problem, an edge-cloud computing paradigm has been applied in this field. It is acknowledged that MAR can be divided into six interdependent components: a video capturer, a feature extractor, a mapper, a tracker, an object recognizer, and a render. Among these components, the video capturer and render parts directly interact with users, and thus can only be run locally, while the other four components can be offloaded. The basic idea of this paradigm is to selectively offload the computation tasks toward nearby edge or cloud servers with considerable computation power, making it suitable for MAR devices to overcome their low battery capacity and limited computation ability. The implementation of this paradigm in the MAR scenario has gained the interest of many researchers, but also raises many questions. Several artificial intelligence-based algorithms like deep reinforcement learning (DRL) and heuristic algorithms like genetic algorithms (GA) and particle swarm optimization (PSO) were proposed for offloading MAR tasks in edge-cloud computing networks. For example, Chen _et al._ modeled offloading decision-making as a joint optimization problem and proposed a DRL-based algorithm using a multiagent deep deterministic policy gradient (MADDPG) framework, which minimized the energy consumption under the constraints of latency requirements [1].
In [2], Shuwaili _et al._ leveraged the inherent collaborative nature of MAR, namely that mobile devices connected to the same base station have partly shared inputs and outputs, to avoid extra computing, and then used a successive convex approximation (SCA) method to solve the non-convex optimization problem. Later, Brand _et al._ extended the traditional single-path edge offloading model to a multi-path model with extra choices including device-to-device, edge, and cloud offloading in [3], which decreases the instability of mobile devices compared to single-path offloading. In [4], Wang _et al._ proposed a scheme named Closure, which has been shown to efficiently manage heterogeneous devices by calculating the Nash equilibrium of the attack-defense game model. However, the aforementioned works schedule a single task at a time and only take into account the immediate effect of a decision without considering its long-term effect and the interdependency between different MAR components. Consequently, a large number of non-local jumps in the search space will be generated, leading to additional migration overheads. When an unfinished task is transferred from one host to another, it costs extra transmission overhead and disrupts the original computing process of that host. Besides, it is noteworthy that the DRL-based and heuristic methods for offloading decision-making are poorly scalable. As the number of users grows, the complexity increases dramatically, requiring considerable iterations to converge to a new effective model or a relatively optimal solution, which consumes a large amount of time and energy. The open research direction of this field is to find a task offloading method that can reduce the migration overhead and overcome short-sightedness. The corresponding challenging issues are how to develop an objective function that efficiently models the migration overheads, what features the offloading decisions should be based on, and how to simulate the migration overheads. Most recently, a workflow scheduling scheme named Monte Carlo tree search with a deep surrogate model (MCDS) was proposed by Shreshth Tuli _et al._ in [5]. It is a look-ahead scheme that runs several multi-step executions in advance, which seems to be a solution to the existing problems in MAR offloading. However, its training process is not designed specifically for AR applications, and it uses the traditional Monte Carlo tree search to simulate the overhead of the process with no mechanism to narrow the search scope or to terminate in advance, resulting in a large amount of unnecessary computing resource and time loss. Drawing on all the insights above, this article proposes a look-ahead offloading method using a modified Monte Carlo tree (MMCT) search in edge-cloud computing systems. The main contributions of this article can be summarized as follows: * A more specific and refined MAR application model is established using three types of directed acyclic graphs (DAG) according to the MAR task interdependency. * A long-sighted scheduling scheme based on MMCT is formulated, which is able to make offloading decisions with consideration of migration overhead and long-term QoS in real-time. * The offloading method is applied in an edge-cloud collaborative computing system with multiple offloading paths. Experimental results show that it significantly reduces the influence of the changing latency of geographically distributed mobile devices.
## II System Architecture Considering the computational heterogeneity and mobility of MAR tasks, we study an edge-cloud collaborative system with multiple edge and cloud servers. The system provides multiple offloading paths, which makes task upload and execution latency more stable [3]. The edge-cloud system can be divided into four layers: a user layer, a communication layer, a management layer, and a computation layer. The overall structure is shown in Fig. 1. As shown in Fig. 1, we consider there are \(k\) users distributed in different geographical locations in the user layer, each of which is connected to a base station in the communication layer according to its location. The management layer is an abstract layer that exists in the form of software, consisting of resource monitors, databases, and brokers. The resource monitor is responsible for monitoring indicators of the volatile environment and MAR tasks. The broker is responsible for making scheduling decisions based on the remaining resources and tasks. Specifically, the layer is implemented through virtual machines that can be distributed across multiple physical servers to utilize their computing resources in making scheduling decisions.1 In the computation layer, there are two types of servers that can provide computation: 1) Edge servers that are deployed at the end of the networks, which are close to the base stations. 2) Cloud servers that are deployed far from the base stations, which are several hops from the users. Typically, edge servers have lower latency with less computing power, while cloud servers have higher latency but are generally more computationally powerful. Tasks that are not completed in the current time interval will be rescheduled in the next time interval, and migration overheads may occur during this process. The migration within the edge or cloud layer can be neglected, while the migration between the cloud layer and the edge layer costs a lot of time and energy, and thus cannot be neglected. Fig. 1: System architecture with four layers: user layer, communication layer, management layer, and computation layer. Due to the ever-changing number of access users and the heterogeneity of task computation, environmental parameters such as channel bandwidth and host computation resource utilization change constantly. Besides, users' movement in geographical locations will cause regular disconnections and long handovers, which will lead to changes in devices' latency. These explain why the resource monitor is introduced. With these constantly changing state parameters at hand, the broker is responsible for finding the optimal strategy for offloading tasks to different hosts, known as scheduling. Specific scheduling metrics and strategies will be detailed in Section IV. ## III MAR Application Model In this section, we conduct a modeling analysis for MAR based on the application case of a web browser [6]. The web browser is a typical MAR application that combines geolocation and time information with computationally intensive image processing algorithms to display location-specific website content over the original video stream. Instead of modeling one MAR application as a whole task [7] or splitting an application into several subtasks using standalone workloads without interdependency [8], we develop a MAR workflow model that takes full account of the precedence constraints of MAR applications, as shown in Fig. 2. Compared with standalone workloads, the workflows in Fig.
Compared with standalone workloads, the workflows in Fig. 2 reveal the internal relationships between different MAR tasks, making the structure of the whole system clearer and easier to monitor, which facilitates the execution of the scheduling program. The workflow starts with the mobile camera capturing the raw video frame. The raw video frame then goes through the feature extractor, which extracts its feature points. These feature points are then sent to three interdependent components, i.e., the mapper, the tracker, and the object recognizer. Using the feature points, the mapper can build a digital model of the 3-dimensional (3D) environment, which is sent to the object recognizer together with the feature points to locate a specific object. Using the results of the object recognizer, the tracker can track the object in the next few frames. The results of these three components are then transmitted back to the render. Finally, the render combines them to obtain the positional and image information and overlays the virtual content on top of the original video stream for the user based on that information. For example, suppose the object recognizer detects a restaurant in the digital model created by the mapper; the tracker can then effectively track the presence of that restaurant in the subsequent video stream. The location of the restaurant in the image is then sent to the render component, which can use this information to overlay relevant virtual content about the restaurant from the internet on top of the video stream. This virtual content may include details about average spending at the restaurant or other people's comments, providing users with location-specific information in real time and making the overall browsing experience more interactive and informative. The video capturer and the render are the components that directly interact with users, and thus can only be run locally, while the other components are more computationally intensive and are typically offloaded to the edge or cloud servers for execution. For the mapper, we use ORB-SLAM2, which is a complete simultaneous localization and mapping (SLAM) system for monocular, stereo, and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities [9]. For the object recognizer, we use the YOLO algorithm2, which is responsible for identifying and locating objects in videos. The tracker refers to an object tracker; we use OpenCV3 for tracking in this work, which can track the objects recognized by YOLO over several subsequent frames.

Footnote 2: [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5)

Footnote 3: [https://opencv.org/](https://opencv.org/)

In this article, we consider the frame rate of the captured videos to be \(60\) frames per second, so the deadline for the feature extractor is \(t_{d}=1/60\) s. The render has a strict latency requirement, as it needs to be executed every frame so as not to affect the user's experience [3]. The tracker does not need to be called every frame, so its deadline can be \(2t_{d}\). The mapper runs as a background task constantly refining and expanding the 3D map and thus has no strict latency requirement; its deadline can be \(3t_{d}\). The object recognizer also has more relaxed requirements, as delays on the order of a second before relocalizing or annotating are still sufficient to achieve an acceptable user experience; its deadline can be \(4t_{d}\)[3][10].
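As an illustration of how this workflow model can be encoded for the scheduler, the sketch below represents the component graph of Fig. 2 together with the deadlines derived above as a plain Python dictionary. The component names and the 60 fps frame budget come from the text; the data structure and the helper function are our own illustrative choices, not the paper's implementation.

```python
# Illustrative encoding of the MAR workflow of Fig. 2 (hypothetical structure,
# not the paper's code). Each component maps to its predecessors and its
# deadline in units of the frame budget t_d = 1/60 s.
T_D = 1.0 / 60.0

MAR_WORKFLOW = {
    "video_capturer":    {"preds": [], "deadline": T_D},
    "feature_extractor": {"preds": ["video_capturer"], "deadline": T_D},
    "tracker":           {"preds": ["feature_extractor", "object_recognizer"], "deadline": 2 * T_D},
    "mapper":            {"preds": ["feature_extractor"], "deadline": 3 * T_D},
    "object_recognizer": {"preds": ["feature_extractor", "mapper"], "deadline": 4 * T_D},
    "render":            {"preds": ["mapper", "object_recognizer", "tracker"], "deadline": T_D},
}

def ready_tasks(done):
    """Return components whose predecessors have all finished (DAG precedence)."""
    return [c for c, v in MAR_WORKFLOW.items()
            if c not in done and all(p in done for p in v["preds"])]

print(ready_tasks({"video_capturer"}))  # -> ['feature_extractor']
```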
Considering their deadline requirements, this model can leverage the results from the previous frame, thus avoiding excess computation and latency without affecting the user experience. Based on this analysis, the directed acyclic graphs of the application are shown in Fig. 3.

Fig. 2: Main components of a typical MAR application and their interdependency.

Fig. 3: MAR subtasks simplified into three types of DAG according to different time intervals. \(T_{V},T_{F},T_{M},T_{O},T_{T},T_{R}\) represent the video capturer, the feature extractor, the mapper, the object recognizer, the tracker, and the render, respectively.

## IV Problem Formulation

After a decision is executed, its QoS can be determined quantitatively by an objective function, denoted \(Y\). To find the highest QoS score, we need to find the optimal schedule decision such that the objective function is minimized. This objective function consists of four first-level QoS indicators: the response time, the energy consumption, the host characteristics, and the service-level agreement (SLA) violations. With these indicators denoted \(ARS\), \(AEC\), \(HC\), and \(SLA\), respectively, the problem can be formulated as: \[Y=\alpha ARS+\beta AEC+\gamma HC+\delta SLA.\] In this formula, \(\alpha\), \(\beta\), \(\gamma\), and \(\delta\) are the corresponding trade-off coefficients. The coefficients can be adjusted considering the features of MAR devices and different users' requirements. The structure of the objective function is shown in Fig. 4; each first-level indicator consists of several second-level indicators, which are discussed in detail as follows.

### _Response Time_

The response time of a MAR task consists of the communication latency and the computation time of each task. The computation time depends only on the task complexity and the instructions per second that the current host can provide. The communication latency includes the transmission time, the connection time, and the queuing time. The transmission time covers the transmission of the original data and of the results; it depends on the data sizes and the uplink and downlink bandwidths. The connection time refers to the response time of the edge and cloud servers, which we take to be \(0.5\) \(\mathrm{ms}\) and \(5\) \(\mathrm{ms}\), respectively. The queuing time refers to the time spent before a task is scheduled to a host.

### _Energy Consumption_

In this work, the transmission energy consumption and the computation energy consumption are considered. Transmission to the cloud server requires more power than transmission to the edge server because of the difference in geographic distance, giving rise to differences in energy consumption. The computation energy in hosts and mobile devices is proportional to the task complexity.

### _Host Characteristics_

To achieve better long-term QoS scores, it is necessary to balance the computation resource usage of each host to ensure the efficient execution of tasks. Meanwhile, when task migration transfers the tasks of one host to another, it causes additional transmission time and also disrupts the original computing process of the host, so it has hidden negative effects on efficiency. We therefore use the variance of host central processing unit (CPU) utilization and the number of host migrations to represent the host characteristics.

### _SLA Violations_

SLA violations refer to the number of times tasks have not been completed within their deadlines. This metric can have a significant impact on the user experience by causing perceptible delays, and thus needs to be met in the first place. When calculating the value of the objective function, all these indicators are normalized and given different weights according to user requirements. The value of the objective function is then the weighted summation of all indicators.
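To make the objective concrete, the following minimal sketch evaluates \(Y\) for one candidate decision. Normalizing each indicator by an assumed per-indicator maximum and using equal default weights are illustrative choices; the paper only states that the indicators are normalized and weighted according to user requirements.

```python
# Minimal sketch of the QoS objective Y = a*ARS + b*AEC + g*HC + d*SLA.
# The per-indicator maxima and the equal default weights are assumptions
# for illustration, not values from the paper.
def qos_objective(ars, aec, hc, sla, max_vals, weights=(0.25, 0.25, 0.25, 0.25)):
    raw = (ars, aec, hc, sla)
    normed = [x / m if m > 0 else 0.0 for x, m in zip(raw, max_vals)]
    return sum(w * x for w, x in zip(weights, normed))

# Lower Y is better: the broker searches for the decision minimizing Y.
y = qos_objective(ars=0.12, aec=3.5, hc=0.4, sla=2, max_vals=(0.5, 10.0, 1.0, 10))
print(f"Y = {y:.3f}")
```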
## V MMCT Framework

In this article, we propose the MMCT for look-ahead offloading decision-making. Traditional Monte Carlo tree search (MCTS) is a look-ahead scheme that runs several multi-step executions to get an unbiased estimate of the long-term effect of an immediate action [11]. Generally speaking, running these look-ahead executions in a real environment is time-consuming and hence infeasible for real-time scheduling. Fortunately, a recently proposed coupled-simulation framework leverages event-driven simulators to quickly get a QoS estimate and avoid executing complex decisions on a physical platform [12]. Using these simulated results, we create an objective function as discussed above, so as to obtain a QoS score for each decision and estimate its effect in near real time. Moreover, to further facilitate the efficient use of computing resources, we establish a mechanism that narrows the search scope during the iterations so as to accelerate convergence.

Fig. 4: The objective function, a system application scenario, and the MMCT search tree.

In each interval, several executable tasks are sent to the management layer. Offloading a task \(t_{i}\) to a host \(h_{i}\) is represented as the decision \((t_{i},h_{i})\), and the set of all possible decisions is defined as the decision space. We randomly choose a point from the decision space as the root node (initial state), from which the whole algorithm starts. After the root node is selected, the other points in the decision space with unscheduled tasks serve as leaf nodes. Unlike other problems where there is seldom or no migration overhead between decisions, the MAR scheduling problem has unpredictable migration overheads between decisions. For instance, a root node with a lower response time may incur a higher migration overhead in the next interval, and vice versa. Therefore, relying on heuristics or prior knowledge to select the root node may result in a bias towards a specific subset of the decision space. Random selection of the root node promotes a thorough exploration of the decision space and mitigates potential biases. Each node is defined by \([(t_{i},h_{i}),q_{i},v_{i},n_{i}]\), where \(q_{i}\) and \(v_{i}\) are initialized as \(0\) and \(n_{i}\) is initialized as \(1\). Here, \(q_{i}\), \(v_{i}\), and \(n_{i}\) represent the immediate QoS score, the long-term QoS score, and the number of visits to this node, respectively. The MMCT search consists of five steps: selection, expansion, simulation, backpropagation, and discard. Through these five steps, a search tree is created, as shown in Fig. 4. The search tree is refreshed at most \(M\) times, gradually converging to the optimal leaf node, which is finally taken as the offloading decision. Next, we detail the five steps of this algorithm.

### _Selection_

In the algorithm, the leaf node with the highest upper confidence bound (UCB) is chosen [5]. Specifically, the selection uses the rule \[\text{Leaf node}=\operatorname*{arg\,max}_{i}\ v_{i}+\sqrt{\frac{c\ln n}{n_{i}}},\] where \(n\) represents the number of visits to the root node and \(c\in[0,1]\) is the exploration parameter. When \(c\) is set larger, the algorithm converges to a leaf node faster. We set \(c=0.5\) in this work.
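A minimal sketch of this selection rule (with \(c=0.5\) as above) is given below; the node representation, a dictionary holding the long-term score \(v\) and visit count \(n\), is our own simplification.

```python
import math

# UCB-based leaf selection: pick the leaf maximizing v_i + sqrt(c * ln(n) / n_i).
def select_leaf(leaves, n_root, c=0.5):
    return max(leaves, key=lambda leaf: leaf["v"] + math.sqrt(c * math.log(n_root) / leaf["n"]))

leaves = [{"id": 0, "v": 0.4, "n": 3}, {"id": 1, "v": 0.3, "n": 1}]
print(select_leaf(leaves, n_root=4)["id"])  # prints 1: the rarely visited leaf wins
```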
### _Expansion_

After selection, up to four child nodes are generated from the selected leaf node using decisions from the decision space, expanding the search tree. The immediate QoS scores \(q_{i}\) of these child nodes are calculated. Then, this leaf node becomes the new root node, while the child nodes become new leaf nodes.

### _Simulation_

For the newly generated nodes, each node is simulated for \(N\) steps, and each step is made by randomly selecting a single decision point of an unassigned task from the decision space, represented as \([(t_{j},h_{j}),q_{j},v_{j},n_{j}]\), with \(q_{j}\) calculated and \(v_{j}\), \(n_{j}\) initialized as \(0\) and \(1\). \(N\) is the number of look-ahead simulation steps, called the roll-out parameter. A larger \(N\) can lead to a better decision but is more computationally expensive.

### _Backpropagation_

After calculating the immediate effects \(q_{j}\) of the future \(N\) decisions of a leaf node, we use these \(q_{j}\) values to backpropagate from the last node step by step to obtain the long-term QoS score of the current leaf node. This is a weighted summation: the farther a simulated node is from the leaf node, the lower its weight.

### _Discard_

In the traditional MCTS algorithm, the above four processes are repeated \(M\) times, and the leaf node with the highest number of visits is chosen as the final decision. In practice, however, when the number of leaf nodes is large and the objective function values of the nodes differ greatly, the nodes that show poor potential can be discarded so as to narrow the search area. Specifically, once the number of visits \(n_{i}\) of a leaf node is greater than \([M/2]\), no other leaf node in the search tree can reach a higher visit count, so the rest of the tree can be discarded to save computing resources.
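Putting the five steps together, a skeletal version of the search loop might look as follows. The QoS evaluation is a random placeholder standing in for the coupled-simulation estimate, expansion and simulation are collapsed into a direct \(N\)-step random roll-out, and the geometric weighting in the backpropagation step is an assumed concrete choice, since the paper only specifies a weighted summation with decreasing weights.

```python
import math
import random

def immediate_qos(decision):
    # Placeholder for the coupled-simulation QoS estimate of a decision
    # (the objective Y of Section IV); a random stand-in so the skeleton runs.
    return random.random()

def mmct_search(decision_space, M=10, N=7, c=0.5, discount=0.8):
    """Skeletal MMCT loop: selection, roll-out, backpropagation, discard."""
    root = random.choice(decision_space)          # random root to avoid bias
    leaves = [{"d": d, "q": immediate_qos(d), "v": 0.0, "n": 1}
              for d in decision_space if d != root]
    n_root = 1
    for _ in range(M):
        # selection: UCB rule from Section V-A
        leaf = max(leaves, key=lambda l: l["v"]
                   + math.sqrt(c * math.log(n_root) / l["n"]))
        leaf["n"] += 1
        n_root += 1
        # simulation: N look-ahead decisions on unassigned tasks
        rollout = [immediate_qos(random.choice(decision_space)) for _ in range(N)]
        # backpropagation: weighted sum, later steps weighted less
        leaf["v"] = leaf["q"] + sum(q * discount ** (k + 1) for k, q in enumerate(rollout))
        # discard: a leaf visited more than M/2 times cannot be overtaken
        if leaf["n"] > M // 2:
            return leaf["d"]
    return max(leaves, key=lambda l: l["n"])["d"]

print(mmct_search([(f"t{i}", f"h{i % 3}") for i in range(6)]))
```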
## VI Performance Evaluation

In the experiments, we simulate a large-scale multi-user edge-cloud computing system with \(30\) edge hosts and \(20\) cloud hosts in total using Microsoft Azure virtual machines (VMs)4. Specifically, we use the Azure B2s and B8ms machines as edge servers and cloud servers, respectively. The Azure B2s machines consist of two cores (4029 MIPS), and the B8ms machines have eight cores (1601 MIPS). The experiments are conducted in a simulated MAR scenario with MAR tasks generated in every time interval. In the experiments, the number of look-ahead simulations \(N\) is set to \(7\) and the maximum number of iterations \(M\) is set to \(10\). These values are determined by a grid search that evaluated the convergence and stability of the algorithm under different settings. For comparison, we also simulate offloading based on the GA, Closure [4], DRL [1], and MCDS [5] algorithms. We compare these schemes in terms of the average number of migrations, the average migration time, the energy consumption, the average response time, the average schedule time, and the SLA violations; the results with a fixed number of \(2000\) tasks are shown in Fig. 5.

Footnote 4: Azure general purpose B-type VMs: [https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-b-series-burstable](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-b-series-burstable)

Fig. 5: Numerical results of the experiments conducted in multi-user edge-cloud computing systems.

Overall, the results indicate that the MMCT model can effectively improve QoS compared to the other four benchmark schemes. Specifically, MMCT has the lowest response time and migration time, while other indicators such as SLA violations and scheduling time remain within an acceptable range. It can be seen that although both MCDS and MMCT are based on Monte Carlo tree search, the performance of MMCT is much better than that of MCDS, especially in scheduling time. This is mainly because our proposed model takes into account the volatile and dynamic nature of the multi-user MAR environment, allowing for more adaptive and intelligent decision-making. Energy consumption in Fig. 5 represents the energy consumed by the whole edge-cloud system, where the energy consumed by a mobile device depends only on the data size of the task and the location of the user. Besides, it is noticeable that MMCT can reduce migration overheads significantly. As shown in Fig. 5, MMCT outperforms the other four schemes in terms of the average migration time and the average number of migrations, especially the single-step scheduling schemes such as the GA- and DRL-based algorithms. Two factors contribute to this positive outcome. Firstly, the look-ahead ability of MMCT allows it to estimate the long-term effects of decisions and prevent potentially significant migration overheads. Secondly, the workflow model takes into account the interdependency of MAR applications, enabling more comprehensive and accurate predictions of task execution time and resource requirements. By contrast, single-step offloading schemes such as the GA- and DRL-based ones are rather myopic, creating many non-local jumps in the search space while offloading.

In addition, we test and compare the scalability of the above-mentioned schemes: we change the average number of workflows (connected users) in each time interval and observe the changes in each metric; the results are shown in Fig. 6.

Fig. 6: Numerical results of the scalability experiments conducted in the edge-cloud computing system.

MMCT shows better scalability than the other four schemes. The results indicate that all metrics of MMCT are maintained at a relatively good level as the number of users grows. The intuitive reason is that MMCT considers several tasks as a whole while offloading, and searches and compares possible decisions before offloading. In this way, MMCT can make better use of limited computation resources and is thus more stable as the number of connected users grows.

## VII Open Research Directions and Challenges

To better provision multi-user MAR services in edge-cloud computing networks, several issues remain to be addressed.

### _Motion-Aware Task Offloading_

A motion-aware scheduler can select keyframes from the video source [13]. By offloading only these keyframes to the back-end server for feature extraction and object recognition, the computational efficiency can be further improved. However, due to the loss of feature points caused by the tracking algorithm and the inherent difficulty of recognizing the user's movement behavior from isolated video frames, this technology is still immature.
### _Preloading for Virtual Content_

By modeling user devices' motion trajectories as a Markov decision process (MDP) and adaptively learning the optimal preloading policy, an intelligent AR preloading algorithm can proactively transmit holographic content to the devices [14]. However, due to the diversity of user devices' motion preferences, the edge server needs to provide an individual preloading solution for each user device, which leads to high computation complexity as the number of user devices increases.

### _MAR Content Sharing_

One of the key observations in MAR is that nearby AR users may share some common interests and may even have overlapping views to augment. By leveraging this feature, MAR devices can share content among themselves and reuse it, which helps relieve the edge workload and address the scalability issue. The tricky part of such a system is determining with whom to share the recognition results stored in the local cache, and how to address the privacy issues that arise in certain scenes [15].

## VIII Conclusion

In this article, we first established three types of workflow models for MAR applications according to different time intervals. We then proposed the MMCT search method, which can efficiently schedule workflows for MAR applications. Compared with existing MAR scheduling schemes, the MAR workflow-oriented look-ahead scheduling method can find the optimal scheduling scheme for long-term QoS, rather than being myopic by scheduling a single task at a time. Moreover, the migration overhead is considered in the workflow scheduling. The method makes weaker assumptions about the environment and thus can adapt to more volatile settings.
2309.07458
Analysis of Speech Separation Performance Degradation on Emotional Speech Mixtures
Despite recent strides made in Speech Separation, most models are trained on datasets with neutral emotions. Emotional speech has been known to degrade performance of models in a variety of speech tasks, which reduces the effectiveness of these models when deployed in real-world scenarios. In this paper we perform analysis to differentiate the performance degradation arising from the emotions in speech from the impact of out-of-domain inference. This is measured using a carefully designed test dataset, Emo2Mix, consisting of balanced data across all emotional combinations. We show that even models with strong out-of-domain performance such as Sepformer can still suffer significant degradation of up to 5.1 dB SI-SDRi on mixtures with strong emotions. This demonstrates the importance of accounting for emotions in real-world speech separation applications.
Jia Qi Yip, Dianwen Ng, Bin Ma, Chng Eng Siong
2023-09-14T06:35:37Z
http://arxiv.org/abs/2309.07458v1
# Analysis of Speech Separation Performance Degradation on Emotional Speech Mixtures

###### Abstract

Despite recent strides made in Speech Separation, most models are trained on datasets with neutral emotions. Emotional speech has been known to degrade performance of models in a variety of speech tasks, which reduces the effectiveness of these models when deployed in real-world scenarios. In this paper we perform analysis to differentiate the performance degradation arising from the emotions in speech from the impact of out-of-domain inference. This is measured using a carefully designed test dataset, Emo2Mix, consisting of balanced data across all emotional combinations. We show that even models with strong out-of-domain performance such as Sepformer can still suffer significant degradation of up to 5.1 dB SI-SDRi on mixtures with strong emotions. This demonstrates the importance of accounting for emotions in real-world speech separation applications.

speech separation, transformer, deep learning, emotional speech, emotion classification

## I Introduction

Speech separation is the task of obtaining single-speaker speech from a mixture of speakers, also known as the cocktail party problem [1]. It has been the focus of much recent research, as many downstream speech models for tasks such as Automatic Speech Recognition (ASR) [2] are trained on single-talker speech. For deployment, mixed speech received from the wild should ideally be separated before performing ASR. The effect of a speaker's emotions on their speech [3] and speaker emotion recognition [4] are well-studied research areas. The emotions of a speaker, or the intention of a speaker to convey emotion, can result in differences in speech. While some of the differences are semantic [5], clear differences also manifest in the prosody [6] and articulation [7] of words. Thus, emotional differences in speech can show up at the frame level, where speech separation models operate. The difference in sound between neutral or read speech and emotional speech has been shown to lead to performance degradation on a number of speech tasks such as speaker verification [8] and ASR [9]. Conversely, a mixture of speakers with different emotions also results in a degradation of emotion recognition performance [10]. The separation of emotional speech is important because overlapping speech often occurs when speakers are in heightened emotional states, such as in a heated argument or an excited conversation, where the typical decorum of turn-taking is breached [11]. Speech separation models are often trained on datasets built from neutral speech. The most common datasets used for speech separation are LibriMix [12], created from the LibriSpeech [13] corpus, and WSJ0Mix [14], created from the WSJ0 [15] corpus. Both datasets consist of read speech recorded under controlled conditions, and neither is an explicitly emotional dataset. This results in a potential mismatch between speech separation models trained primarily on neutral speech and real-world emotional speech mixtures.

### _Our Approach_

In this paper, we analyse the performance degradation of speech separation models using a custom speech separation test dataset, Emo2Mix, built from utterances in the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) [16]. To the authors' knowledge, this is the first study to properly investigate the impact of emotions on speech separation with a test dataset that is balanced across multiple emotions at two different emotional intensities.
Using this dataset, we conduct a detailed analysis of the impact of emotional speech on state-of-the-art speech separation models that were trained on neutral speech. To control for the out-of-domain issue due to the unseen speakers in the Emo2Mix test dataset, we make use of the neutral emotion utterances present within the RAVDESS dataset as a baseline for an out-of-domain speaker with an "in-domain" emotion.

Fig. 1: Illustration of the mixing strategy. Only three emotions are shown here for brevity, but the full set of available emotions is used in the dataset (8 at normal intensity and 7 at strong intensity). Each mixture consists of different speakers, each vocalizing a different statement. The same two statements are shared across emotions and speakers to control for semantic differences.

We show that Emo2Mix presents a significant challenge to speech separation models trained on neutral speech, with the recent state-of-the-art Sepformer model [17][18] experiencing a performance degradation of 7.0 dB compared to its Libri2Mix baseline. Of this, a performance difference of 1.9 dB can be attributed to emotions alone, when comparing against the in-domain neutral emotion baseline.

### _Related Work_

Recently, [19] also proposed an emotional mixture test dataset based on the RAVDESS [16] dataset, RAVDESS2Mix. The dataset consists of mixtures from the RAVDESS and LibriSpeech [13] datasets, as well as enrollment speech from the RAVDESS dataset for the target speaker extraction task. Compared to RAVDESS2Mix [19], our proposed Emo2Mix dataset does not contain any utterances from the LibriSpeech corpus. This is for two reasons. Firstly, the LibriSpeech corpus consists exclusively of neutral speech, which limits the number of emotion combinations possible for the dataset. Secondly, many speech separation models are trained on the Libri2Mix dataset, which uses utterances from the LibriSpeech corpus. This makes the RAVDESS2Mix test dataset biased, as models pretrained on Libri2Mix will have an advantage over models pretrained on other datasets such as the commonly used WSJ0-2Mix dataset, which is created from utterances of the WSJ0 corpus. Based on RAVDESS2Mix, [19] proposed that target speaker extraction is negatively impacted by emotional speech while blind source separation is not as significantly impacted. However, using Emo2Mix, we show that the same performance degradation is observed once the in-domain advantage of LibriSpeech-trained models is removed.

## II Methodology

### _Evaluation_

The performance of the models is measured using the scale-invariant signal-to-distortion ratio improvement (SI-SDRi), computed by comparing the output waveforms of the models with the ground-truth waveforms. Since there are multiple outputs, the final performance is calculated using the permutation-invariant approach.
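A minimal sketch of this protocol is given below: SI-SDR is computed between each estimate-reference pair and maximized over speaker permutations; SI-SDRi is then obtained by subtracting the SI-SDR of the unprocessed mixture evaluated against the same references. The implementation details (mean removal, epsilon guard) are standard choices rather than specifics from the paper.

```python
import itertools
import numpy as np

def si_sdr(est, ref, eps=1e-8):
    """Scale-invariant SDR (in dB) between an estimated and a reference waveform."""
    est, ref = est - est.mean(), ref - ref.mean()
    target = (est @ ref) / (ref @ ref + eps) * ref   # projection onto the reference
    noise = est - target
    return 10 * np.log10((target @ target) / (noise @ noise + eps))

def pit_si_sdr(ests, refs):
    """Permutation-invariant SI-SDR for multi-speaker separation outputs."""
    return max(np.mean([si_sdr(e, r) for e, r in zip(perm, refs)])
               for perm in itertools.permutations(ests))
```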
### _The RAVDESS dataset_

The RAVDESS dataset [16] is an emotional dataset consisting of 7356 recordings by 24 actors, gender-balanced with 12 male and 12 female speakers. The actors vocalize two lexically-matched statements in a neutral North American accent at two emotional intensities, normal and strong. At the normal emotional intensity, 8 emotions are expressed (neutral, calm, happy, sad, angry, fearful, disgust, surprised), while the strong intensity shares the same emotions except for neutral. RAVDESS has a number of advantages over other emotional speech databases because of its inclusion of two neutral emotions, "neutral" and "calm", and two emotional intensities, "normal" and "strong". These features are useful for comparison with existing corpora like LibriMix and WSJ0Mix, which also fall into this neutral emotional category. These baselines can help differentiate the performance impact of out-of-domain inference on unseen speakers from the actual performance impact of emotional speech.

### _Emo2Mix_

The Emo2Mix dataset draws from the RAVDESS dataset to produce two-speaker mixtures of emotional speech. The subset of RAVDESS selected for Emo2Mix excludes singing speech. It also excludes the audiovisual data. A breakdown of the features in the chosen subset of the RAVDESS dataset is shown in Table I.

\begin{table} \begin{tabular}{l c c} \hline \hline **Feature** & **Normal** & **Strong** \\ \hline Number of Speakers & 24 & 24 \\ Number of Emotions & 8 & 7 \\ Number of Statements & 2 & 2 \\ Number of Repetitions & 2 & 2 \\ \hline Total Recordings & 768 & 672 \\ \hline \hline \end{tabular} \end{table} TABLE I: Breakdown of the features of the selected subset of the RAVDESS dataset used to create Emo2Mix.

The mixtures are created with an even distribution across 64 combinations of 8 emotions at the normal emotional intensity and 49 combinations of 7 emotions at the strong emotional intensity. Each statement in a mixture is different, so the mixtures are produced as permutations, not combinations, of emotions. While each actor vocalizes the same statement at the same emotion twice, to reduce superfluous mixtures we simply randomly select one of the two repetitions. For each of the 113 emotion-intensity combinations, each actor vocalizes 2 different statements, from which \(N\times(N-1)\) mixtures can be created, since we cannot mix an actor with themselves. For the full dataset, \(N=24\), which results in 552 mixtures per emotion-intensity permutation and would give \(552\times 113=62,376\) mixtures overall. To reduce the number of testing mixtures, we reduce the number of speakers in the test set by setting \(N=8\), which results in 56 mixtures per emotion-intensity permutation. This gives \(56\times 64=3,584\) mixtures for the normal-intensity test set and \(56\times 49=2,744\) mixtures for the strong-intensity test set. This is done by selecting only every third speaker (1, 4, 7, 10, 13, 17, 21, 24) from the dataset. This approach, as opposed to randomly selecting 8 of the 24 speakers, is chosen to enable future work on fine-tuning models on the other 16 held-out speakers. The utterances selected from the RAVDESS dataset are dynamically mixed [20] so that the mixture test set does not have to be stored separately. During mixing, a number of pre-processing steps are performed in line with the parameters used for dynamic mixing [20] as implemented in the SpeechBrain [21] framework. The two audio segments are first cropped to the length of the shorter one, then downsampled to 8 kHz and normalized. The processed segments are then combined at equal weights if no clipping occurs, or reweighted if clipping is present. To ensure the reproducibility of the results we use fixed random seeds for all dataset generation and provide a standard generation script, which we release on GitHub1.

Footnote 1: [https://github.com/Yip-Jia-Qi/EmoMix](https://github.com/Yip-Jia-Qi/EmoMix)
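A standalone sketch of these pre-processing steps is given below. It mirrors the described pipeline (crop to the shorter utterance, downsample to 8 kHz, normalize, mix with a clipping guard) but uses generic soundfile/librosa calls rather than the SpeechBrain dynamic-mixing implementation, whose exact parameters should be taken from the released script.

```python
import numpy as np
import librosa
import soundfile as sf

def mix_pair(path_a, path_b, target_sr=8000):
    """Crop to the shorter utterance, downsample to 8 kHz, normalize, and mix,
    rescaling only when the sum would clip (mono audio assumed)."""
    a, sr_a = sf.read(path_a)
    b, sr_b = sf.read(path_b)
    a = librosa.resample(a, orig_sr=sr_a, target_sr=target_sr)
    b = librosa.resample(b, orig_sr=sr_b, target_sr=target_sr)
    n = min(len(a), len(b))                     # crop to the minimum length
    a, b = a[:n], b[:n]
    a = a / (np.max(np.abs(a)) + 1e-8)          # peak normalization
    b = b / (np.max(np.abs(b)) + 1e-8)
    mix = a + b                                 # equal weights ...
    peak = np.max(np.abs(mix))
    if peak > 1.0:                              # ... rescaled if clipping occurs
        mix, a, b = mix / peak, a / peak, b / peak
    return mix, (a, b)                          # mixture plus ground-truth sources
```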
### _Training Datasets_

In this work we focus on models that have been trained on the Libri2Mix [12] and WSJ0-2Mix [14] datasets. The Libri2Mix training set consists of 50,800 mixtures while the WSJ0-2Mix training set consists of 20,000 mixtures. Both have test sets of 3,000 mixtures. We note that 3,000 mixtures is comparable to the 3,584 and 2,744 mixtures in the Emo2Mix test sets.

### _Baseline Models_

We make use of two baseline models: Sepformer, which adopts a dual-path speech separation architecture with transformer blocks, and ConvTasNet [22], which utilizes temporal convolution blocks. Both models are seminal in the field of speech separation, with Sepformer [17][18] being the more recent state-of-the-art model. We implement the two models on the SpeechBrain framework as per their respective papers. For Sepformer we make use of pre-trained checkpoints available from the SpeechBrain Hugging Face hub2. Two versions of the model are available, one trained on Libri2Mix (Sepformer-L2M) and one trained on WSJ0-2Mix (Sepformer-WJ2). For ConvTasNet we train the model from scratch and validate it by replicating the Libri2Mix-Test baseline. To align with the models available on Hugging Face, we also train this model on both the Libri2Mix (ConvTasNet-L2M) and WSJ0-2Mix (ConvTasNet-WJ2) datasets. Although newer models surpassing Sepformer have recently been released [23][24], Sepformer and ConvTasNet are currently still more widely used and replicated, and thus serve as better baselines.

Footnote 2: [https://huggingface.co/speechbrain](https://huggingface.co/speechbrain)
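For reference, the sketch below loads the Sepformer-L2M checkpoint from the SpeechBrain Hugging Face hub and separates a single file. The example file name is hypothetical, and the import path reflects the `speechbrain.pretrained` module of the SpeechBrain releases contemporary with this work; newer releases may expose the class elsewhere.

```python
# Loading a pre-trained checkpoint from the SpeechBrain Hugging Face hub.
from speechbrain.pretrained import SepformerSeparation

model = SepformerSeparation.from_hparams(
    source="speechbrain/sepformer-libri2mix",     # Sepformer-L2M checkpoint
    savedir="pretrained_models/sepformer-libri2mix",
)
# "emo2mix_example.wav" is a hypothetical input file for illustration.
est_sources = model.separate_file(path="emo2mix_example.wav")  # (batch, time, n_spk)
```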
## III Results

In Table II we compare the overall separation performance of the baseline models on three different test datasets: the original Libri2Mix test set, RAVDESS2Mix, and Emo2Mix. For each of the baseline models, we report results based on pre-training on two different training sets, Libri2Mix and WSJ0-2Mix. The results on the RAVDESS2Mix dataset are provided as a point of comparison with related emotional speech separation test sets.

\begin{table} \begin{tabular}{|c|c||c|c|c||c|c|c|c|} \hline & \multicolumn{2}{c||}{Model:} & \multicolumn{2}{c||}{Sepformer} & \multicolumn{4}{c|}{ConvTasNet} \\ \hline & \multicolumn{2}{c||}{Training Dataset:} & \multicolumn{2}{c||}{Libri2Mix} & \multicolumn{2}{c||}{WSJ0-2Mix} & \multicolumn{2}{c|}{Libri2Mix} & \multicolumn{2}{c|}{WSJ0-2Mix} \\ \hline **ID** & **Testing Dataset** & **SI-SDRi** & **SDRi** & **SI-SDRi** & **SDRi** & **SI-SDRi** & **SDRi** & **SI-SDRi** & **SDRi** \\ \hline 1 & Libri2Mix-Test & 20.6 & 20.9 & 17.0 & 17.5 & 15.1 & 15.5 & 10.0 & 10.6 \\ \hline 2 & RAVDESS2Mix-Normal & 20.2 & 20.7 & 15.1 & 16.0 & 15.0 & 15.5 & 8.6 & 9.4 \\ \hline 3 & RAVDESS2Mix-Strong & 20.2 & 20.7 & 14.4 & 15.3 & 14.9 & 15.5 & 8.2 & 9.1 \\ \hline 4 & Emo2Mix (Neutral) & 15.5 & 16.3 & 12.0 & 12.9 & 10.5 & 11.4 & 4.9 & 5.8 \\ \hline 5 & Emo2Mix-Normal & 15.4 & 16.2 & 11.4 & 12.2 & 10.7 & 11.4 & 3.3 & 4.3 \\ \hline 6 & Emo2Mix-Strong & 13.6 & 14.7 & 9.4 & 10.3 & 8.4 & 9.2 & 0.9 & 2.7 \\ \hline \hline 7 & (1) - (4) & **5.1** & **4.6** & **5.0** & **4.6** & 4.6 & 4.1 & 5.1 & 4.8 \\ \hline 8 & (4) - (6) & **1.9** & **1.6** & **2.6** & **2.6** & 2.1 & 2.2 & 4.0 & 3.1 \\ \hline \end{tabular} \end{table} TABLE II: Overall results across different datasets, with aggregated results reported for the emotional datasets.

### _Performance degradation due to emotional speech_

The results presented in Table II for Emo2Mix-Normal and Emo2Mix-Strong are aggregated across all emotional combinations through simple averaging, since all emotional combinations are equally weighted. The Emo2Mix (Neutral) test is a subset of the Emo2Mix-Normal test data and consists of 56 mixtures of different speakers where both speakers portray neutral emotions. The purpose of Emo2Mix (Neutral) is to serve as a baseline where the speaker is out-of-domain but has a neutral emotion matching the emotions in the training data. This allows us to measure the out-of-domain performance degradation, as per (7) in Table II, independently of the impact of emotions. Any further performance degradation beyond the Emo2Mix (Neutral) baseline, i.e., (8) in Table II, is unlikely to be due to out-of-domain inference. If emotions did not impact speech separation performance, the values in row (8) should approach zero. However, the results of our experiments in Table II show that this is not the case, as the values for (8) range between 1.6 dB and 4.0 dB. Thus, based on the comparison of the out-of-domain and emotional degradation in (7) and (8) of Table II, we can conclude that emotions do result in performance degradation in speech separation models. However, the performance impact of performing inference on an unseen speaker is greater than the performance impact of the within-speaker variance resulting from emotions. The performance degradation due only to emotional mixtures is calculated by (4)-(6), which is the difference in performance between the strong emotion mixtures and the neutral emotion baseline. The out-of-domain performance degradation, independent of emotions, can be measured by the difference between the original baseline and the Emo2Mix (Neutral) mixtures, calculated by (1)-(4). Taking (8) divided by (7), we see across multiple models that strong emotions can cause an additional 35-55% performance drop on top of the out-of-domain performance drop (7) when compared to the in-domain baseline (1). This shows that although the performance degradation due to emotions (8) is smaller than that from out-of-domain inference (7), the impact is still significant. We can further support the above conclusion by examining the internal consistency of the results on the Emo2Mix test datasets at the two emotional intensities. From the results of (4), (5), and (6) in Table II we see a significant performance gap between Emo2Mix-Normal and Emo2Mix-Strong, while the gap between Emo2Mix (Neutral) and Emo2Mix-Normal is much smaller. This suggests that emotional speech causes a deterioration in speech separation performance. If emotions did not impact speech separation performance, Emo2Mix-Normal and Emo2Mix-Strong should show similar performance. Since this is also a within-dataset comparison, the performance difference is not due to differences in speakers, as the speakers in Emo2Mix-Normal and Emo2Mix-Strong are identical.

### _Comparison with RAVDESS2Mix_

The RAVDESS2Mix dataset consists of mixtures of two speakers where one speaker is drawn from the RAVDESS dataset while the other is drawn from the Libri2Mix test dataset. This means that for a model trained on the Libri2Mix dataset, one of the speakers is in-domain while the other is out-of-domain. In a two-speaker separation problem, knowing the mask for one speaker allows a model to infer the mask for the other speaker. This gives a model trained on the Libri2Mix dataset a big advantage.
This is observed across all models, where we see little to no performance degradation for Sepformer-L2M and ConvTasNet-L2M but a significant performance drop for Sepformer-WJ2 and ConvTasNet-WJ2. Furthermore, when comparing the performance on RAVDESS2Mix-Normal and RAVDESS2Mix-Strong, the latter test set results in lower performance for the models trained on the WSJ0-2Mix dataset, but this is not observed for the models trained on the Libri2Mix dataset. This lack of internal consistency across models trained on different training datasets points to the fact that the use of the Libri2Mix test set in the RAVDESS2Mix mixtures allows a model to bypass having to recognise emotional speech by inferring from the neutral speech of the in-domain Libri2Mix utterance. In comparison to RAVDESS2Mix, Emo2Mix does not suffer from this issue and is able to accurately benchmark the performance impact caused by emotional speech. Across all the models tested, for both the Libri2Mix- and WSJ0-2Mix-trained versions, there is a significant performance difference. This shows that the test dataset we have developed for our analysis, Emo2Mix, is a better benchmark for emotional speech separation than the previously reported RAVDESS2Mix.

### _The benefits of a larger training dataset_

It is common to find in various speech domains, such as ASR [2] and keyword spotting [25][26], that a larger training dataset results in better performance. Since the Libri2Mix training dataset is significantly larger than WSJ0-2Mix, we would expect to see a similar trend. Furthermore, Libri2Mix consists of LibriSpeech utterances, which come from audiobooks and may therefore be closer in domain to the RAVDESS dataset. That said, the larger size of the Libri2Mix dataset likely improves the performance of the model on Emo2Mix.

\begin{table} \begin{tabular}{|l||c||c|c|c|c|c|c|c|c|} \hline & RD2M & calm & happy & sad & angry & fearful & disgust & surprised & neutral \\ \hline calm & **20.6** & 16.0 & 15.6 & 16.0 & **17.1** & 15.0 & 14.9 & 14.6 & 15.9 \\ happy & 20.3 & 15.7 & 14.9 & 16.3 & 16.3 & 15.9 & 16.0 & 13.5 & **16.4** \\ sad & 20.4 & 15.6 & 15.8 & 16.1 & **17.1** & 16.1 & 15.7 & 15.7 & 16.3 \\ angry & 20.3 & 16.5 & 14.9 & **17.2** & 15.0 & 16.0 & 15.6 & 14.4 & 16.9 \\ fearful & 20.2 & 15.4 & 14.3 & **16.9** & 15.1 & 15.9 & 15.0 & 13.6 & 16.4 \\ disgust & 20.0 & 15.4 & 15.0 & **16.3** & 15.3 & 15.5 & 15.1 & 13.9 & 14.7 \\ surprised & 19.7 & 14.4 & 12.7 & **15.3** & 15.2 & 14.5 & 13.8 & 12.5 & 14.5 \\ neutral & 19.9 & 15.8 & 15.8 & 16.3 & 16.2 & **16.8** & 15.9 & 14.8 & 15.5 \\ \hline \end{tabular} \end{table} TABLE III: Performance of Sepformer-L2M on the Emo2Mix-Normal dataset broken down by emotion combination, reported using SI-SDRi.

\begin{table} \begin{tabular}{|l||c||c|c|c|c|c|c|c|} \hline & RD2M & calm & happy & sad & angry & fearful & disgust & surprised \\ \hline calm & 20.4 & 15.9 & 17.1 & 15.8 & **18.6** & 17.7 & 16.9 & 15.9 \\ happy & **20.6** & **17.2** & 12.4 & 13.8 & 10.9 & 11.2 & 14.8 & 12.2 \\ sad & 20.1 & 15.7 & 13.6 & 14.7 & **15.8** & 12.5 & 13.1 & 13.0 \\ angry & 20.4 & **17.6** & 11.6 & 15.9 & 9.3 & 10.6 & 15.7 & 13.6 \\ fearful & 20.3 & **18.4** & 11.9 & 14.1 & 11.6 & 9.7 & 14.9 & 11.7 \\ disgust & 19.8 & **17.4** & 11.7 & 14.4 & 13.9 & 11.8 & 13.4 & 10.7 \\ surprised & 19.7 & **16.7** & 12.4 & 14.3 & 13.0 & 11.9 & 12.4 & 10.4 \\ \hline \end{tabular} \end{table} TABLE IV: Performance of Sepformer-L2M on the Emo2Mix-Strong dataset broken down by emotion combination, reported using SI-SDRi.
Looking only at the Sepformer experiments, we see that on Emo2Mix (Neutral) the Sepformer-WJ2 model has a lower SI-SDRi of 12.0 dB compared to 15.5 dB for Sepformer-L2M. The SI-SDRi degradation on Emo2Mix-Strong is 1.9 dB for Sepformer-L2M compared to 2.6 dB for Sepformer-WJ2. A similar trend can be found for the ConvTasNet model, with the SI-SDRi dropping to barely above noise at 0.9 dB on Emo2Mix-Strong. This mirrors the finding of [27], where emotion recognition performance benefited from non-emotional large-scale pretraining.

### _Performance Comparisons by Emotions_

To streamline the performance comparison, we report here only the SI-SDRi metric of the best-performing model, Sepformer-L2M. Table III shows the results of the model tested on the Emo2Mix-Normal dataset, while Table IV shows the results of the model tested on the Emo2Mix-Strong dataset. In each of the tables, the highest value in each row is listed in bold, while the lowest value is underlined. In the first column of each table, we also report the comparison with the RAVDESS2Mix dataset (RD2M), which consists of a combination of one emotional utterance from the RAVDESS dataset and one neutral utterance from the Libri2Mix test dataset. In this comparison, we see minimal variation of 0.9 dB in the RD2M results across the various emotions, for both the strong and normal emotional intensities. Meanwhile, the average difference between the min and max emotions across combinations is 2.5 dB for the normal intensity and 6.1 dB for the strong intensity, with the lowest being 1.5 dB for the "sad" emotion row in the normal dataset. Across all combinations in the entire dataset, the difference between the min and max is 4.7 dB for Emo2Mix-Normal and 9.3 dB for Emo2Mix-Strong. Considering the results on the Emo2Mix-Normal dataset in Table III, we see a trend that having an utterance with the surprised emotion consistently results in the worst performance compared to the other emotions, with the worst results arising from a combination of two surprised utterances. One possible explanation is that the actors in the dataset simply found it difficult to express surprise at a low emotional intensity and exhibited the most intense emotions in their surprised utterances compared to the other emotions expressed at normal intensity. Meanwhile, sad and angry utterances expressed at normal intensity could simply be close to neutral, resulting in better SI-SDRi results compared to the other emotions. Finally, considering the results on the Emo2Mix-Strong dataset in Table IV, we see a slightly different dynamic. The calm utterances consistently give the best results in the strong dataset compared to the normal dataset. This could be because calm utterances remain close to neutral even at the strong emotional intensity, and thus their contrast with the other strong emotions makes it easier for the model to differentiate the speakers.

## IV Conclusion

In this work we have shown that, contrary to previous work [19], emotions play an important role in speech separation performance. Our results show that models trained on neutral speech suffer a performance degradation when the mixture contains strong emotional expressions at inference time. Thus, this work makes the case for including emotional speech in speech separation training datasets.

## V Acknowledgements

This research is supported by ST Engineering Mission Software & Services Pte. Ltd under a collaboration programme (Research Collaboration No: REQ0149132).
The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore ([https://www.nscc.sg](https://www.nscc.sg)).
2309.09475
Terahertz magnon frequency comb
Magnon frequency comb (MFC), the spin-wave spectra composing of equidistant coherent peaks, is attracting much attention in magnonics. A terahertz (THz) MFC, combining the advantages of the THz and MFC technologies, is highly desired because it would significantly advance the MFC applications in ultrafast magnonic metrology, sensing, and communications. Here, we show that the THz MFC can be generated by nonlinear interactions between spin waves and skyrmions in antiferromagnets [Z. Jin \emph{et al}., \href{https://doi.org/10.48550/arXiv.2301.03211}{arXiv:2301.03211}]. It is found that the strength of the three-wave mixing between propagating magnons and breathing skyrmions follows a linear dependence on the driving frequency and the MFC signal can be observed over a broad driving frequency range. Our results extend the working frequency of MFC to the THz regime, which would have potential applications in ultrafast spintronic devices and promote the development of nonlinear magnonics in antiferromagnets.
Xianglong Yao, Zhejunyu Jin, Zhenyu Wang, Zhaozhuo Zeng, Peng Yan
2023-09-18T04:14:43Z
http://arxiv.org/abs/2309.09475v1
# Terahertz magnon frequency comb

###### Abstract

Magnon frequency comb (MFC), the spin-wave spectra composing of equidistant coherent peaks, is attracting much attention in magnonics. A terahertz (THz) MFC, combining the advantages of the THz and MFC technologies, is highly desired because it would significantly advance the MFC applications in ultrafast magnonic metrology, sensing, and communications. Here, we show that the THz MFC can be generated by nonlinear interactions between spin waves and skyrmions in antiferromagnets [Z. Jin _et al._, arXiv:2301.03211]. It is found that the strength of the three-wave mixing between propagating magnons and breathing skyrmions follows a linear dependence on the driving frequency and the MFC signal can be observed over a broad driving frequency range. Our results extend the working frequency of MFC to the THz regime, which would have potential applications in ultrafast spintronic devices and promote the development of nonlinear magnonics in antiferromagnets.

## I Introduction

The far-infrared electromagnetic spectrum in the frequency region of \(0.1-30\) terahertz (THz), known as the THz band, has long been considered the last remaining scientific gap in the electromagnetic spectrum [1; 2]. THz technology has been widely applied to many fields, such as wireless communication [3], medical imaging [4], security inspection [5], etc. Photonic and/or electronic THz devices, like the quantum cascade laser [6], uni-travelling-carrier photodiode [7], Schottky-diode-based multipliers [8], and transistor-based integrated circuits [9], have already been realized. Very recently, THz technology has started making its way into the field of spintronics. For example, a novel THz emitter utilizing the spin degree of freedom in magnetic materials has emerged [10; 11], which presents several unprecedented advantages, such as an ultra-broad bandwidth (up to 30 THz) [12] and flexible tunability by external magnetic fields [13] or internal magnetic textures [14]. Merging THz technology with other powerful techniques can yield unique multidimensional insight into fundamental processes at ultrafast time scales. An optical frequency comb is a spectrum consisting of a sequence of discrete and equally-spaced spectral lines that represent precise marks in frequency, like an optical ruler [15]. The optical frequency comb technique revolutionized optical frequency metrology and spectroscopy [16; 17; 18] and enabled optical atomic clocks [19; 20; 21]. A THz optical frequency comb capable of high-resolution measurement was recently demonstrated [22; 23], which would significantly advance THz technology applications in spectroscopy, metrology, sensing, and high-speed wireless communications. Most recently, it has been reported that a magnon frequency comb (MFC) can be generated by the nonlinear scattering between magnons and topological solitons, like skyrmions [24], vortices [25], and domain walls [26; 27]. However, the working frequency of the MFC in ferromagnets often lies in the GHz regime, which would not be able to keep up with the demand for higher-frequency operations. Extending the frequency range of the MFC to THz is therefore of fundamental interest and also necessary for ultrafast magnonics. Antiferromagnets with two opposite magnetic sublattices have unique advantages, such as the full freedom in magnon polarization, vanishingly small stray field, and ultrafast magnetization dynamics typically in the THz region [28; 29; 30].
These properties make antiferromagnets a promising platform for magnonics [31]. Inspired by the three-wave mixing mechanism in ferromagnets, we propose to generate an MFC with a THz central frequency by the nonlinear interaction between breathing skyrmions and propagating magnons in antiferromagnets. By performing systematic micromagnetic simulations, we find the following differences compared with the ferromagnetic counterpart: (i) It is difficult to induce the skyrmion breathing in antiferromagnets merely by propagating magnons, and an additional driving field is required. (ii) There is no frequency window to observe the MFC. Modeling calculations reveal an unexpectedly large critical microwave field and show a linear dependence of the three-wave coupling strength on the microwave frequency, which explain the two observed features. The paper is organized as follows. In Sec. II, we present the analytical model describing the nonlinear interaction between magnons and breathing skyrmions. The linear dispersion relation of magnons propagating in antiferromagnets is given. The dynamical equations of the magnon modes involved in three-magnon processes are derived and numerically solved. Section III gives the full micromagnetic simulations to verify our theoretical analysis. Conclusions are drawn in Sec. IV.

## II Theoretical model

We consider an antiferromagnetic film hosting a Néel-type skyrmion stabilized by the interfacial Dzyaloshinskii-Moriya interaction (DMI), as shown in Fig. 1(a). The Hamiltonian of the antiferromagnetic system can be written as \[\mathcal{H}=J\sum_{<i,j>}\mathbf{S}_{i}\cdot\mathbf{S}_{j}-K\sum_{i}(\mathbf{ S}_{i}\cdot\mathbf{\hat{z}})^{2}+\sum_{<i,j>}\mathbf{D}_{ij}\cdot(\mathbf{S}_{i} \times\mathbf{S}_{j}), \tag{1}\] where \(\mathbf{S}_{i}\) is the unit vector of the spin at site \(i\). On the right-hand side of Eq. (1), the first term is the antiferromagnetic exchange interaction with coefficient \(J>0\). The second term is the uniaxial magnetocrystalline anisotropy with the easy axis along the \(\hat{z}\) direction and \(K\) being the anisotropy constant. The third term describes the DMI, where the DM vector \(\mathbf{D}_{ij}=D(\hat{z}\times\mathbf{r}_{ij})\) with \(D\) the DMI strength and \(\mathbf{r}_{ij}\) the position vector connecting two neighboring spins. Under the continuum description, the antiferromagnetic system is characterized by the net magnetization \(\mathbf{m}=(\mathbf{S}_{A}+\mathbf{S}_{B})/2\) and the staggered magnetization \(\mathbf{n}=(\mathbf{S}_{A}-\mathbf{S}_{B})/2\), where \(\mathbf{S}_{A}\) and \(\mathbf{S}_{B}\) are the unit magnetizations of the two sublattices. Then, the system Hamiltonian in Eq. (1) can be recast as \[\begin{split}\mathcal{H}&=\int\Big{\{}\frac{\lambda}{2}\mathbf{m}^{2}+\frac{A}{2}\Big{[}(\nabla\mathbf{n})^{2}+\partial_{x}\mathbf{n}\cdot\partial_{y}\mathbf{n}\Big{]}+L\mathbf{m}\cdot(\partial_{x}\mathbf{n}\\ &\quad+\partial_{y}\mathbf{n})-\frac{K_{c}}{2}n_{z}^{2}+\frac{D_{c}}{2}\Big{[}n_{z}\nabla\cdot\mathbf{n}-(\mathbf{n}\cdot\nabla)n_{z}\Big{]}\Big{\}}d\mathbf{r},\end{split} \tag{2}\] where \(\lambda=8J\), \(A=2Jd^{2}\), \(L=2\sqrt{2}Jd\), \(K_{c}=2K\), and \(D_{c}=2Dd\) are the homogeneous exchange, inhomogeneous exchange, parity-breaking, magnetic anisotropy, and DMI constants, respectively, and \(d\) is the lattice constant. It has been demonstrated that the MFC can be generated by nonlinear coupling between magnons and a breathing skyrmion in ferromagnets [24].
It is naturally expected that this mechanism could also work in antiferromagnets. To investigate the nonlinear interaction between antiferromagnetic magnons and the skyrmion, we express the dynamical staggered magnetization in terms of the magnon creation and annihilation operators \(a^{\dagger}(\mathbf{r},t)\) and \(a(\mathbf{r},t)\) by using the Holstein-Primakoff transformation. Then the Hamiltonian can be rewritten as \(\mathcal{H}=\mathcal{H}^{(0)}+\mathcal{H}^{(2)}+\mathcal{H}^{(3)}+\mathcal{H}^ {(4)}+\cdots\), where \(\mathcal{H}^{(0)},\mathcal{H}^{(2)},\mathcal{H}^{(3)}\), and \(\mathcal{H}^{(4)}\) are the ground state energy and the two-, three-, and four-magnon processes, respectively. The dispersion relation of antiferromagnetic magnons is described by the second-order Hamiltonian \(\mathcal{H}^{(2)}\), which can be expressed as \[\omega=2\gamma\sqrt{(2J+K)^{2}-J^{2}[\cos(k_{x}d)+\cos(k_{y}d)]^{2}}, \tag{3}\] with the gyromagnetic ratio \(\gamma=1.76\times 10^{11}\) rad s\(^{-1}\) T\(^{-1}\), and \(\mathbf{k}=(k_{x},k_{y})\) being the wave vector of the spin waves, in the lattice model. In the continuum limit, the dispersion relation reads \[\omega=\gamma\sqrt{\lambda\Big{(}\frac{A}{2}k^{2}+K_{c}\Big{)}}, \tag{4}\] with \(k=|\mathbf{k}|\). Micromagnetic simulations agree well with the analytical formula Eq. (3) [see the dashed black line in Fig. 1(b)], which justifies the validity of the antiferromagnetic spin-wave dispersion relation. Simulation details can be found in Sec. III. For Eq. (4), the numerical dispersion shows good agreement in the low-\(k\) region, but deviates in the high-\(k\) region [see the dashed red line in Fig. 1(b)]. This is because the spin-wave dispersion relation Eq. (4) is valid only when the wavelength of the spin waves is much larger than the lattice constant (\(2\pi/k\gg d\)). According to Eq. (3) [or Eq. (4)], the antiferromagnetic resonance frequency is given by \(\omega_{\rm AFMR}/2\pi=(\gamma/\pi)\sqrt{4JK+K^{2}}=0.369\) THz, below which spin waves cannot propagate [see the gray region in Fig. 1(b)]. The nonlinear interaction between magnons and the breathing skyrmion is described by the third-order Hamiltonian \(\mathcal{H}^{(3)}\), which involves four modes: the incident magnon mode \(a_{k}\), skyrmion breathing mode \(a_{r}\), confluence mode \(a_{p}\), and splitting mode \(a_{q}\). The dynamical equations of the four magnon modes can be derived as [32] \[\begin{split} i\frac{da_{k}}{dt}&=(\omega_{k}-i \Gamma_{k})a_{k}+g_{q}a_{r}a_{q}+g_{p}a_{r}^{\dagger}a_{p}+h_{1}e^{-i\omega_{ 1}t},\\ i\frac{da_{r}}{dt}&=(\omega_{r}-i\Gamma_{r})a_{r}+g _{q}a_{k}a_{q}^{\dagger}+g_{p}a_{k}^{\dagger}a_{p}+h_{2}e^{-i\omega_{2}t},\\ i\frac{da_{p}}{dt}&=(\omega_{p}-i\Gamma_{p})a_{p}+g _{p}a_{k}a_{r},\\ i\frac{da_{q}}{dt}&=(\omega_{q}-i\Gamma_{q})a_{q}+g _{q}a_{k}a_{r}^{\dagger},\end{split} \tag{5}\] where \(\Gamma_{v}=\alpha_{v}\omega_{v}\) (\(v=k,r,p,q\)) are the damping rates of the magnon modes, with effective Gilbert damping constants \(\alpha_{v}\) and mode frequencies \(\omega_{v}\). \(g_{p}\) and \(g_{q}\) are the coupling strengths of the three-magnon confluence and splitting, respectively. The incident spin wave is excited by the microwave field \(h_{1}e^{-i\omega_{1}t}\). Since the skyrmion breathing mode in antiferromagnets is difficult to excite merely with the incident spin waves in a non-resonant manner, as will be discussed later, we apply another microwave field \(h_{2}e^{-i\omega_{2}t}\) to resonantly excite the skyrmion breathing mode. Here, \(h_{i}\) and \(\omega_{i}\) (\(i=1,2\)) are the microwave field amplitude and frequency, respectively.

Figure 1: (a) Schematic diagram of an antiferromagnetic film hosting a skyrmion. (b) The dispersion relation of antiferromagnetic spin waves obtained by the FFT of the dynamical magnetization from micromagnetic simulations. The dashed black and red lines are the analytical formulas Eq. (3) and Eq. (4), respectively. (c) The FFT spectra of the antiferromagnetic film with (violet curve) and without (orange curve) a skyrmion (SK). The gray regions in (b) and (c) correspond to the band gap of spin waves. (d) The time evolution of the skyrmion center position (\(x_{z}\), \(y_{z}\)) and radius (\(R_{z}\)) under the driving field with frequency 0.095 THz.
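As a numerical illustration of the coupled-mode dynamics in Eq. (5), the sketch below integrates the four modes with \(g_{p}=g_{q}=g\) and a common damping constant, as assumed later in the fitting procedure. All parameter values are arbitrary stand-ins (frequencies in radians per time unit) chosen only to expose the build-up of the confluence and splitting modes; they are not the fitted values of this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy integration of Eq. (5); parameter values are illustrative placeholders.
wk, wr = 1.3, 0.095
wp, wq = wk + wr, wk - wr
g, alpha, h1, h2 = 5e-3, 5e-3, 1e-2, 1e-2

def rhs(t, y):
    # y packs Re/Im parts of (a_k, a_r, a_p, a_q)
    ak, ar, ap, aq = y[0::2] + 1j * y[1::2]
    dak = -1j * ((wk - 1j * alpha * wk) * ak + g * ar * aq
                 + g * np.conj(ar) * ap + h1 * np.exp(-1j * wk * t))
    dar = -1j * ((wr - 1j * alpha * wr) * ar + g * ak * np.conj(aq)
                 + g * np.conj(ak) * ap + h2 * np.exp(-1j * wr * t))
    dap = -1j * ((wp - 1j * alpha * wp) * ap + g * ak * ar)
    daq = -1j * ((wq - 1j * alpha * wq) * aq + g * ak * np.conj(ar))
    d = np.array([dak, dar, dap, daq])
    return np.column_stack([d.real, d.imag]).ravel()

sol = solve_ivp(rhs, (0.0, 600.0), np.zeros(8), max_step=0.05)
ap_ss = abs(sol.y[4, -1] + 1j * sol.y[5, -1])   # sum-frequency amplitude
aq_ss = abs(sol.y[6, -1] + 1j * sol.y[7, -1])   # difference-frequency amplitude
print(f"|a_p| = {ap_ss:.2e}, |a_q| = {aq_ss:.2e}")
```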
Then, the incident spin-wave mode (\(\omega_{k}\)) and breathing mode (\(\omega_{r}\)) of the skyrmion would mix with each other and generate the sum-frequency (\(\omega_{p}=\omega_{k}+\omega_{r}\)) and difference-frequency (\(\omega_{q}=\omega_{k}-\omega_{r}\)) modes. These two secondary signals further hybridize with the skyrmion breathing mode to generate higher-order frequency modes, eventually leading to the MFC.

## III Micromagnetic simulations

To verify the above picture, we perform full micromagnetic simulations using MUMAX3 [33]. We consider a G-type antiferromagnetic film with dimensions \(1000\times 1000\times 1\) nm\({}^{3}\). A cell size of \(1\times 1\times 1\) nm\({}^{3}\) is used to discretize the film in the simulations. Magnetic parameters of KMnF\({}_{3}\) are adopted [34; 35]: \(M_{s}=3.76\times 10^{5}\) A/m, \(A^{\text{sim}}=AM_{s}/4=6.59\) pJ/m, \(K^{\text{sim}}=K_{c}M_{s}/2=1.16\times 10^{5}\) J/m\({}^{3}\), \(D^{\text{sim}}=D_{c}M_{s}/2=1\) mJ/m\({}^{2}\), and \(\alpha=1\times 10^{-3}\). Absorbing boundary conditions are used to avoid spin-wave reflection at the film edges. To characterize the intrinsic modes of the antiferromagnetic skyrmion, we apply a sinc-function field \(\mathbf{h}(t)=h_{0}\text{sinc}(\omega_{H}t)\hat{z}\) with amplitude \(h_{0}=10\) mT and cutoff frequency \(\omega_{H}/2\pi=2.5\) THz over the antiferromagnetic film. By carrying out a standard fast Fourier transformation (FFT) for each cell and then averaging over the whole film, we find one main peak at 0.095 THz in the band gap [see the violet curve in Fig. 1(c)]. For the film without a skyrmion, this peak is absent [see the orange curve in Fig. 1(c)]. This indicates that the peak at 0.095 THz comes from the intrinsic mode of the antiferromagnetic skyrmion. By analyzing the skyrmion motion under a sinusoidal microwave field with frequency 0.095 THz, we identify this mode as the breathing mode (\(\omega_{r}\)) of the antiferromagnetic skyrmion, as shown in Fig. 1(d). Here, the position and radius of the skyrmion are obtained by a circular curve fitting of the \(m_{z}=0\) contour. In ferromagnets, when the amplitude of the local exciting field increases above a threshold, the skyrmion breathing would be excited via a three-wave splitting process, and subsequent wave-mixing would result in the MFC [24]. Inspired by this idea, we first apply a linearly-polarized microwave field \(\mathbf{h}(t)=h_{1}\sin(\omega_{1}t)\hat{x}\) with \(\omega_{1}/2\pi=1.3\) THz in a narrow rectangular area [black bar in Fig. 2(a)] to excite the incident spin waves, which then interact with the skyrmion to generate the MFC. However, there is no sign of the MFC except the incident spin-wave mode and its frequency-doubling signal, even when the field amplitude increases up to 500 mT, as shown in Fig. 2(b), which is already beyond the scope of conventional experiments.
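The spectra reported here [Figs. 1(c) and 2] are all obtained by the same post-processing, i.e., a cell-wise FFT of the dynamical magnetization followed by a spatial average. A minimal sketch, assuming the simulation output is available as a NumPy array:

```python
import numpy as np

def average_spectrum(mz, dt):
    """Cell-wise FFT of the dynamical magnetization, averaged over the film.

    mz: array of shape (nt, nx, ny) sampled every dt seconds;
    returns the frequency axis and the spatially averaged |FFT| spectrum.
    """
    mz = mz - mz.mean(axis=0)                  # remove the static background
    spec = np.abs(np.fft.rfft(mz, axis=0))     # FFT along time for each cell
    freqs = np.fft.rfftfreq(mz.shape[0], d=dt)
    return freqs, spec.mean(axis=(1, 2))       # average over the whole film
```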
However, there is no sign of the MFC except the incident spin-wave mode and its frequency-doubled signal, even when the field amplitude is increased up to 500 mT, as shown in Fig. 2(b), which is already beyond the reach of conventional experiments. It is noted that the skyrmion breathing in chiral antiferromagnets can be conveniently excited by a modification of the DMI or magnetocrystalline anisotropy [36; 37]. To overcome this obstacle, we apply another microwave field \(\mathbf{h}_{2}=h_{2}\sin(\omega_{2}t)\hat{z}\) with \(\omega_{2}/2\pi=0.095\) THz (\(\omega_{2}=\omega_{r}\)) in a square region [blue box in Fig. 2(a)] to excite the skyrmion breathing mode. Then, the incident magnons interact with the breathing skyrmion. Black arrows represent the trajectories of scattered magnons, which may include both the linear and nonlinear topological magnon spin Hall effects [38]. The experimental scheme for detection shall be discussed below. To determine whether the MFC is generated, we detect the magnon spectrum in a rectangular region behind the skyrmion [red rectangle in Fig. 2(a)] by performing the FFT of the dynamical magnetization. Figure 2(c) shows the FFT spectra obtained by continuously varying the microwave field amplitude \(h_{1}\). Around 0.5 and 1.3 THz, two sets of magnon signals can be observed. The modes around 0.5 THz correspond to the frequency multiplication of the skyrmion breathing mode, at frequencies \(n\omega_{r}\) with \(n\) being an integer. Because of the band gap limited by the antiferromagnetic resonance \(\omega_{\text{AFMR}}\), only the high-order modes above 0.369 THz (\(n\geq 5\)) can escape from the skyrmion and are subsequently detected in the red rectangle region. The modes around 1.3 THz are the MFC modes generated by the nonlinear interaction between the incident magnons and the skyrmion breathing mode. Since the skyrmion breathing mode is directly excited by the microwave field \(\mathbf{h}_{2}\) rather than by the incident magnons, the driving field amplitude is not required to be larger than the aforementioned threshold value (\(\gg 500\) mT). Nevertheless, when the incident spin waves are excited by a weak driving field (\(h_{1}=0.02\) mT), the MFC signals can hardly be observed [see the upper panel in Fig. 2(d)]. This is mainly because the amplitudes of the newly generated MFC modes are very small and decay rapidly during their propagation in the presence of Gilbert damping. By increasing the driving field amplitude, the MFC modes with frequency spacing equal to \(\omega_{r}\) clearly emerge, as shown in the middle and lower panels of Fig. 2(d).

Figure 2: (a) Snapshot of the interaction between propagating magnons and the skyrmion in the AFM. The incident spin waves and the breathing skyrmion are excited by the microwave fields \(h_{1}\sin(\omega_{1}t)\hat{x}\) (black bar) and \(h_{2}\sin(\omega_{2}t)\hat{z}\) (blue box), respectively. The red rectangle is the detection region. Black arrows label the Hall trajectory of magnons. (b) The spin-wave spectrum of the detection region in (a) when only the exciting field \(h_{1}=500\) mT is applied. (c) The spin-wave spectra as a function of the driving field amplitude (\(h_{1}\)). The driving frequency is fixed at 1.3 THz. (d) The FFT spectrum at three representative fields \(h_{1}=0.02\) mT (upper panel), 0.08 mT (middle panel), and 10 mT (lower panel), respectively. In (c) and (d), the skyrmion breathing mode is excited by the second microwave field with amplitude \(h_{2}=5\) mT and frequency \(\omega_{2}/2\pi=0.095\) THz.
The coupling strength (\(g_{p,q}\)) between spin waves and the skyrmion is crucial for the formation of the MFC; it depends on the mode overlap and is difficult to calculate in our case due to the lack of an analytical skyrmion profile. Hence, we treat it as a fitting parameter. For simplicity, we assume \(g_{p,q}=g\) and that all modes have the same damping rates \(\alpha_{k,r,p,q}=\alpha\). By numerically solving Eq. (5) and fitting the amplitudes of the incident, confluence, and splitting magnon modes at steady state (see Fig. 6 in the Appendix), we obtain the coupling strength \(g=55.7\) MHz [see Fig. 3(a)]. Substituting this coupling strength back into Eq. (5), we can calculate the threshold field for the excitation of the skyrmion breathing mode by propagating magnons. We identify the critical value to be \(h_{1}^{c}=813\) mT, as shown in Fig. 3(b), which is very high compared to the ferromagnetic case. This result explains why propagating magnons alone are not able to excite the skyrmion breathing mode, as observed in our micromagnetic simulations. Next, we investigate the dependence of the MFC on the driving frequency, as shown in Fig. 4(a). We find that, over a rather broad driving-frequency range, the MFC is still visible and the mode spacing of the MFC is always equal to the skyrmion breathing frequency. This result significantly differs from the MFC generated in ferromagnets, where the MFC can only be observed in a narrow frequency window, which we attributed to the Gaussian profile of the frequency-dependent three-wave coupling [24]. By the same fitting method used in Fig. 3(a), we extract the coupling strengths for different driving frequencies, as shown in Fig. 4(b). One can see that the coupling strength \(g\) monotonically decreases with increasing driving frequency \(\omega_{1}\), approximately following a linear function \(g=c\omega_{1}+g_{0}\) with the dimensionless slope \(c=-1.04\times 10^{-4}\) and intercept \(g_{0}=0.191\) THz. This result suggests that the nonlinear three-wave coupling between magnons and the breathing skyrmion has very different frequency dependencies in ferromagnets and antiferromagnets, which might originate from the distinct internal modes of the skyrmion in the two systems [39]. It might also be because the dispersion of magnons is parabolic in ferromagnets but linear in antiferromagnets. Further investigations are necessary to elucidate the physical origin of these interesting findings, which goes beyond the scope of this paper. In Ref. [38], we identified a so-called nonlinear topological magnon spin Hall effect by analyzing the real-space scattering patterns of the MFC. Below, we propose a multichannel device to measure it by minimizing the spatial overlap between different magnon modes, as shown in Fig. 5. Experimentally, one can excite THz magnons in antiferromagnets by femtosecond laser pulses [40]. When the generated magnons interact with the breathing skyrmion, the MFC emerges and the incident and nonlinear magnons are scattered into different channels due to their very different Hall angles. One can distinguish the accumulated magnons with different frequencies by magneto-optical effects (e.g., Faraday or Kerr rotations), as plotted in Fig. 5.
Figure 3: (a) The amplitudes of the three main peaks at \(\omega_{k}=\omega_{1}\) and \(\omega_{k}\pm\omega_{r}\) as a function of the driving field amplitude (\(h_{1}\)). The solid lines are the fitting curves with the parameter \(g=55.7\) MHz based on Eq. (5). (b) Analytical curves of the mode amplitudes, obtained by numerically solving Eq. (5) with parameters \(\omega_{k}/2\pi=1.3\) THz, \(\omega_{r}/2\pi=\omega_{2}/2\pi=0.095\) THz, \(h_{2}=0\) mT, \(\alpha=0.001\), and \(g=55.7\) MHz.

Figure 4: (a) The spin-wave spectrum as a function of the driving field frequency \(\omega_{1}\). The driving field amplitude is fixed at \(h_{1}=30\) mT. (b) The FFT amplitudes of the three modes at frequencies \(\omega_{1}\) (upper panel) and \(\omega_{1}\pm\omega_{r}\) (middle panel) as a function of the driving field frequency. The lower panel in (b) shows the dependence of the coupling strength on the driving field frequency. Magenta dots are extracted from simulation results and the green line is the linear fit.

## IV Conclusion

In summary, we theoretically demonstrated that a THz MFC can be generated in an antiferromagnetic film by nonlinear interactions between magnons and skyrmions. Although the mechanism of MFC generation in both ferromagnets and antiferromagnets derives from three-wave mixing, there are significant differences between them. First, the skyrmion breathing mode in antiferromagnets can hardly be excited merely by propagating magnons because of an unexpectedly large threshold microwave field amplitude. An additional low-frequency driving source is therefore needed to assist the MFC generation. Second, the dependence of the coupling strength on the driving frequency differs between the two magnetic systems: it follows a Gaussian profile in ferromagnets [24], while it exhibits a linear dependence in antiferromagnets. As a consequence, the MFC in antiferromagnets is visible over a broad driving-frequency range, in contrast to a narrow window in ferromagnets. From the application point of view, the predicted THz MFC can be utilized to detect magnetic textures or defects in antiferromagnets, which is difficult to realize by conventional means because of the vanishingly small net magnetization. Our findings bring the MFC to the THz regime, which would advance MFC applications in ultrafast magnonic metrology, sensing, and communications.

## V Acknowledgments

We thank H. Yang, L. Song, and X. Liu for helpful discussions. This work was funded by the National Key R&D Program under Contract No. 2022YFA1402802 and the National Natural Science Foundation of China (NSFC) (Grants No. 12374103 and No. 12074057). Z.W. acknowledges financial support from the NSFC under Grant No. 12204089. X.Y., Z.J., and Z.W. contributed equally.

## Appendix

In general, the dynamical equations of the magnon modes, Eq. (5), are hard to solve analytically. Thus, we solve them numerically using the ode45 solver in MATLAB. The initial amplitudes of the four modes are set as \(a_{k}=0\), \(a_{r}=0.001\), \(a_{p}=0\), and \(a_{q}=0\). After a long-time evolution (20 ns), a stationary solution is obtained, as shown in Fig. 6.
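For readers without MATLAB, a minimal SciPy sketch of the same computation is given below. It is our own reimplementation of Eq. (5), not the code used for the figures: we work in the frame co-rotating with the resonant drives (\(\omega_{1}=\omega_{k}\), \(\omega_{2}=\omega_{r}\)), so all detunings vanish and only damping, coupling, and drive terms remain; the drive amplitudes \(h_{1}\), \(h_{2}\) and the coupling \(g\) are given directly in angular-frequency units, since the conversion from field amplitude to drive strength is a convention we do not fix here.

```python
import numpy as np
from scipy.integrate import solve_ivp

twopi = 2.0 * np.pi
# Mode frequencies (rad/s): w_p = w_k + w_r and w_q = w_k - w_r, as in the text.
w_k, w_r = twopi * 1.3e12, twopi * 0.095e12
w_p, w_q = w_k + w_r, w_k - w_r
alpha = 1e-3                                  # Gamma_v = alpha * w_v
G = alpha * np.array([w_k, w_r, w_p, w_q])
g = twopi * 55.7e6                            # fitted coupling (assumed rad/s)
h1, h2 = 1.0e9, 5.0e8                         # drives (assumed rad/s units)

def rhs(t, a):
    """Eq. (5) in the frame co-rotating with the resonant drives."""
    ak, ar, ap, aq = a
    dak = -1j * (-1j*G[0]*ak + g*ar*aq + g*np.conj(ar)*ap + h1)
    dar = -1j * (-1j*G[1]*ar + g*ak*np.conj(aq) + g*np.conj(ak)*ap + h2)
    dap = -1j * (-1j*G[2]*ap + g*ak*ar)
    daq = -1j * (-1j*G[3]*aq + g*ak*np.conj(ar))
    return [dak, dar, dap, daq]

a0 = np.array([0.0, 1e-3, 0.0, 0.0], dtype=complex)  # appendix initial values
sol = solve_ivp(rhs, (0.0, 20e-9), a0, rtol=1e-8, atol=1e-12)
for name, amp in zip("krpq", np.abs(sol.y[:, -1])):
    print(f"|a_{name}| at t = 20 ns : {amp:.3e}")
```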
2302.00045
Neural Control of Parametric Solutions for High-dimensional Evolution PDEs
We develop a novel computational framework to approximate solution operators of evolution partial differential equations (PDEs). By employing a general nonlinear reduced-order model, such as a deep neural network, to approximate the solution of a given PDE, we realize that the evolution of the model parameter is a control problem in the parameter space. Based on this observation, we propose to approximate the solution operator of the PDE by learning the control vector field in the parameter space. From any initial value, this control field can steer the parameter to generate a trajectory such that the corresponding reduced-order model solves the PDE. This allows for substantially reduced computational cost to solve the evolution PDE with arbitrary initial conditions. We also develop comprehensive error analysis for the proposed method when solving a large class of semilinear parabolic PDEs. Numerical experiments on different high-dimensional evolution PDEs with various initial conditions demonstrate the promising results of the proposed method.
Nathan Gaby, Xiaojing Ye, Haomin Zhou
2023-01-31T19:26:25Z
http://arxiv.org/abs/2302.00045v2
# Neural control of parametric solutions for high-dimensional evolution PDEs

###### Abstract

We develop a novel computational framework to approximate solution operators of evolution partial differential equations (PDEs). By employing a general nonlinear reduced-order model, such as a deep neural network, to approximate the solution of a given PDE, we realize that the evolution of the model parameter is a control problem in the parameter space. Based on this observation, we propose to approximate the solution operator of the PDE by learning the control vector field in the parameter space. From any initial value, this control field can steer the parameter to generate a trajectory such that the corresponding reduced-order model solves the PDE. This allows for substantially reduced computational cost to solve the evolution PDE with arbitrary initial conditions. We also develop comprehensive error analysis for the proposed method when solving a large class of semilinear parabolic PDEs. Numerical experiments on different high-dimensional evolution PDEs with various initial conditions demonstrate the promising results of the proposed method.

## 1 Introduction

Partial differential equations (PDEs) are ubiquitous in modeling and are vital in numerous applications from finance, engineering, and science [23]. As the solutions of many PDEs lack analytical form, it is necessary to use numerical methods to approximate the solutions [4, 23]. Traditional numerical methods such as finite difference and finite element methods rely upon the discretization of problem domains, which does not scale to high-dimensional problems due to the so-called "curse of dimensionality". In recent years, a class of nonlinear reduced-order models, known as deep neural networks (DNNs), has emerged as a powerful tool for solving high-dimensional PDEs [5, 16, 21, 30, 31, 33, 43, 69]. For example, in [5, 16, 21, 69, 85], the solution of a given PDE is parameterized as a DNN, and the network parameters are trained to minimize potential violations (in various senses) of the PDE. These methods have shown numerous successes in solving a large variety of PDEs empirically. Their successes are partly due to the provable universal approximation power of DNNs [34, 49, 84]. On the other hand, these methods aim at solving specific instances of PDEs, and as a consequence, they need to start from scratch for the same PDE whenever the initial and/or boundary value changes. There have also been recent studies to find solution operators of PDEs [47, 53]. These methods aim at finding the map from the problem's parameters to the corresponding solution. Finding solution operators has substantial applications, as the same PDE may need to be solved many times with different initial or boundary value configurations. However, existing methods fall short in tackling high-dimensional problems, as many require spatial discretization to represent the solution operators using DNNs. In this paper, we propose a new approach to find solution operators of high-dimensional evolution PDEs. For a given PDE, we first parameterize its solution as a general reduced-order model, such as a DNN, whose parameters, denoted by \(\theta\), are to be determined. Then we seek to find a vector field on the parameter space which describes how \(\theta\) evolves in time. This vector field essentially acts as a controller on the parameter space, steering the parameters so that the induced DNN evolves and approximates the PDE solution for all time.
Once such a vector field is found, we can easily change the initial condition of the PDE by simply starting at a new point in the parameter space. Then we follow the control vector field to find the parameter trajectory, which gives an approximation of the time-evolving solution. Thus, different initial conditions can be considered for the same PDE without solving it repeatedly. Our contributions can be summarized as follows.

1. We develop a new computational framework to find the solution operator of any given initial value problem (IVP) defined by high-dimensional nonlinear evolution PDEs. This framework is purely based on the evolution PDE itself and does not require any solutions of the PDE for training. Once we find the solution operator, we can quickly compute solutions of the PDE with any initial value at a low computational cost.
2. We provide comprehensive theoretical analysis to establish error bounds for the proposed method when solving linear PDEs and some special nonlinear PDEs.
3. We conduct a series of numerical experiments to demonstrate the effectiveness of the proposed method in solving a variety of linear and nonlinear PDEs.

The remainder of this paper is organized as follows. In Section 2, we provide an overview of recent neural network based numerical methods for solving PDEs. We outline the fundamentals of our proposed approach in Section 3.1 and provide details of our method and its key characteristics in Section 3.2. We conduct comprehensive error analysis in Section 3.3. We demonstrate the performance of the proposed method on several linear and nonlinear evolution PDEs in Section 4. Some variations and generalizations of the proposed approach are given in Section 5. Finally, Section 6 concludes this paper.

## 2 Related Work

### Classical methods for solving PDEs

Classical numerical methods for solving PDEs, such as finite difference [76] and finite element methods [40], discretize the spatial domain using meshes or triangulations. These methods convert a PDE to its discrete counterpart, which is a system of algebraic equations with a finite number of unknowns, and solve the system to obtain an approximate solution on the grid points [1, 22, 66, 75]. These methods have been significantly advanced in the past decades, and they are able to handle complicated situations such as irregular domains. However, they severely suffer from the "curse of dimensionality" when applied to high-dimensional problems--the number of unknowns increases exponentially fast with respect to the spatial dimension, which renders them computationally intractable for many problems.

### Neural network based methods for solving PDEs

Early attempts using neural networks to solve PDEs can be seen in [17, 43, 44, 45]. DNNs emerged in recent years and demonstrated striking power in solving PDEs through various approaches [5, 7, 21, 50, 61, 69, 70, 72, 78, 79, 82, 85, 86]. DNNs, which are the key machinery of deep learning, have demonstrated extraordinary potential in solving many high-dimensional nonlinear PDEs which were considered computationally intractable using classical methods. For example, a variety of DNN based methods have been proposed based on the strong form [7, 17, 41, 57, 59, 62, 63, 69, 70], variational form [21], and weak form [5, 85] of PDEs. They have been combined with adaptive collocation strategies [3], adversarial inference procedures [83], oscillatory solutions [12], and multiscale methods [13, 52, 77].
Improvements of these methods with adaptive activation functions [39], network structures [26, 27, 36], boundary conditions [18, 56], structure probing [36], as well as their convergence [55, 71], have also been studied. For a class of high-dimensional PDEs which have equivalent backward stochastic differential equation (SDE) formulations due to Feynman-Kac theory, deep learning methods have been applied by leveraging such correspondences [6, 20, 30, 31, 32, 37, 38, 65]. These methods are shown to perform well even in high dimensions [31, 37, 65]; however, they can only compute the solution \(u(x)\) at a single point \(x\) at a time. For evolution PDEs, parameter evolution algorithms [2, 10, 19] have also been considered. These methods parameterize the PDE solution as a neural network [10, 19] or an adaptively chosen ansatz as discussed in [2]. In these methods, the parameters are evolved forward in time through a time marching scheme, where at each step a linear system [10, 19] or a constrained optimization problem [2] needs to be solved.

### Learning solution operator of PDEs

The aforementioned methods aim at solving a specific instance of a given PDE, and they need to be rerun from scratch when any part of the problem configuration (e.g., initial value, boundary value, problem domain) changes. In contrast, the solution operator of a PDE directly maps a problem configuration to its corresponding solution. To this end, several methods have been proposed to approximate Green's functions for some linear PDEs [8, 9, 51, 74], as solutions to such PDEs have explicit expressions based on their Green's functions. However, this approach only applies to a small class of linear PDEs whose solutions can be represented using Green's functions. Moreover, Green's functions have singularities, and it requires special care to approximate them using neural networks. For example, rational functions are used as activation functions of DNNs to address singularities in [8]. In [9], the singularities are represented with the help of fundamental solutions. For general nonlinear PDEs, DNNs have been used for operator approximation and meta-learning for PDEs [11, 14, 15, 24, 42, 47, 48, 53, 54, 58, 62, 80, 81]. For example, DeepONets [53] seek to approximate solution mappings by use of a "branch" and a "trunk" network, while FNOs [47] use Fourier transforms to map a neural network to a low-dimensional space and then back to the solution. Both methods require discretization of the infinite-dimensional function space, and they potentially require a large number of labeled data-solution pairs obtained through other methods or examples with explicitly known solutions.

### Differences between our proposed approach and existing ones

Different from all existing approaches, we propose to approximate solution operators of evolution PDEs in a control framework in parameter spaces induced by general reduced-order models such as DNNs. Unlike the existing solution operator approximation methods (e.g., DeepONet [53] and FNO [47]), which seek to directly approximate the infinite-dimensional operator, our approach is based on the relation between evolving solutions and their projected trajectories in the parameter space. This leads us to transform the problem of finding a solution operator into a control vector field optimization problem. As a result, the problem of solving an evolution PDE in continuous space is reduced to numerically solving a system of ODEs, which can be done accurately with very low computational complexity.
Moreover, our approach does not require any spatial discretization or basis function representation and thus has great potential to solve evolution PDEs in high-dimensional cases. This is a significant advantage over existing operator learning methods such as DeepONet or FNOs, as their spatial discretization schemes, which are used to generate the training data, hinder their application to high-dimensional cases.

## 3 Proposed Method

The main goal of this paper is to develop a new computational framework to approximate the _solution operator_ for IVPs of high-dimensional evolution PDEs. The solution operator is a procedure that, once known, can efficiently map an arbitrarily given initial value \(g\) to the solution of the IVP without solving the PDE again. We first propose to parameterize \(u\) as a _nonlinear reduced-order model_, such as a DNN, which is denoted by \(u_{\theta}\) with parameter \(\theta\), i.e., \(u_{\theta}\) is a parametric function determined by the value of its finite-dimensional parameter \(\theta\), and \(u_{\theta}\) is used to approximate \(u\). To find the solution operator, we propose to build a control vector field \(V\) in the parameter space \(\Theta\) where \(\theta\) resides. Then the solution operator can be implemented as a fast numerical solver of the ODE defined by \(V\). More precisely, we first find the parameter \(\theta_{0}\) such that \(u_{\theta_{0}}\) approximates \(g\), then we follow the control vector field \(V\) to obtain a trajectory \(\{\theta_{t}\,|\,0\leq t\leq T\}\) in \(\Theta\) with very low computational cost, which automatically induces a trajectory \(u_{\theta_{t}}\) approximating the true solution \(u\) of the IVP with the initial value \(g\). We provide details of these constructions in the following subsections.

### Nonlinear reduced-order models and parameter submanifold

Reduced-order models, such as DNNs, have emerged as a powerful tool to solve high-dimensional PDEs in recent years [5, 21, 30, 67, 68, 69, 85]. Mathematically, a DNN can be expressed as the composition of a series of simple linear and nonlinear functions. In the deep learning context, a typical building block of DNNs is called a layer, which is a mapping \(h:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}\) for some compatible input dimension \(d\) and output dimension \(d^{\prime}\): \[h(z;W,b):=\sigma(Wz+b), \tag{3.1}\] where \(z\in\mathbb{R}^{d}\) is the input variable of \(h\), the matrix \(W\in\mathbb{R}^{d^{\prime}\times d}\) and vector \(b\in\mathbb{R}^{d^{\prime}}\) are called the weight and bias respectively, and \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) is a nonlinear function that operates componentwise on its \(d^{\prime}\)-dimensional argument vector \(Wz+b\) (hence \(\sigma\) is effectively a mapping from \(\mathbb{R}^{d^{\prime}}\) to \(\mathbb{R}^{d^{\prime}}\)). Common choices of activation functions include the sigmoid \(\sigma(z)=(1+e^{-z})^{-1}\), hyperbolic tangent (tanh) \(\sigma(z)=(e^{z}-e^{-z})/(e^{z}+e^{-z})\), rectified linear unit (ReLU) \(\sigma(z)=\max(0,z)\), rectified power unit (RePU) \(\sigma(z)=\max(0,z)^{p}\) for \(p>0\), the swish function \(\sigma(z)=z(1+e^{-z})^{-1}\), rational functions, and many others. We only consider smooth activation functions \(\sigma\) hereafter.
A commonly used DNN structure \(u_{\theta}\), often called a feed-forward network (FFN), is defined as the composition of multiple layer functions of form (3.1) as follows: \[u_{\theta}(x):=u(x;\theta)=w^{\top}z_{L}+b,\] \[\text{where}\quad z_{0}=x,\quad z_{l}=h_{l}(z_{l-1}):=h(z_{l-1};W_{l},b_{l}),\quad l=1,\dots,L, \tag{3.2}\] and the \(l\)th hidden layer \(h(\cdot;W_{l},b_{l}):\mathbb{R}^{d_{l-1}}\rightarrow\mathbb{R}^{d_{l}}\) is determined by its weight and bias parameters \(W_{l}\in\mathbb{R}^{d_{l}\times d_{l-1}}\) and \(b_{l}\in\mathbb{R}^{d_{l}}\) for \(l=1,\dots,L\) and \(d_{0}=d\). Here the output of \(u_{\theta}\) is set to the affine transform of the last hidden layer \(z_{L}=h_{L}(z_{L-1})\) using weight \(w\in\mathbb{R}^{d_{L}}\) and bias \(b\in\mathbb{R}\). The _network parameter_ \(\theta\) refers to the collection of all learnable parameters (stacked as a vector in \(\mathbb{R}^{m}\)) of \(u_{\theta}\), i.e., \[\theta:=(w,b,W_{L},b_{L},\dots,W_{1},b_{1})\in\mathbb{R}^{m}, \tag{3.3}\] and training the network \(u_{\theta}\) refers to finding the minimizer \(\theta\) of some properly designed loss function. DNNs have been shown to be very powerful in approximating high-dimensional functions in a vast number of studies in recent years, see, e.g., [28, 29, 34, 35, 46, 49, 64, 84]. For example, it is shown in [28] that for any \(M,\varepsilon>0\), \(k\in\mathbb{N}\), \(p\in[1,\infty]\), and \(\Omega=(0,1)^{d}\subset\mathbb{R}^{d}\), denoting \(F:=\{f\in W^{k,p}(\Omega;\mathbb{R})\,|\,\|f\|_{W^{k,p}(\Omega)}\leq M\}\), there exists a DNN structure \(u_{\theta}\) of form (3.2) with sufficiently large \(m\) and \(L\) (which depend on \(M\), \(\varepsilon\), \(d\) and \(p\) only), such that for any \(f\in F\), there is \(\|u_{\theta}-f\|_{W^{k,p}(\Omega)}\leq\varepsilon\) for some \(\theta\in\mathbb{R}^{m}\). This result implies that \(\{u_{\theta}\,|\,\theta\in\mathbb{R}^{m}\}\) is an \(\varepsilon\)-net of the \(M\)-ball of the Sobolev space \(W^{k,p}(\Omega)\), suggesting that DNNs are suitable to approximate solutions of PDEs. This is one of the many error bounds established in recent years, and such bounds are still being continuously improved. Our approach relies on the key relation between the parameter \(\theta\) and the reduced-order model \(u_{\theta}\). More specifically, we identify the finite-dimensional parameter space \(\Theta\subset\mathbb{R}^{m}\) where \(\theta\) belongs to and the submanifold \(\mathcal{M}\) of functions defined by \[\mathcal{M}:=\left\{u_{\theta}:\Omega\to\mathbb{R}\ |\ \theta\in\Theta\right\}. \tag{3.4}\] As we can see, \(u_{\theta}\) defines a mapping from the parameter space \(\Theta\) to the submanifold \(\mathcal{M}\) of the infinite-dimensional function space. We call \(\mathcal{M}\) the _parameter submanifold_ determined by \(u_{\theta}\). To approximate a time-evolving function \(u^{*}(\cdot,t)\), e.g., the solution of an evolution PDE, over the time horizon \([0,T]\) using the reduced-order model \(u_{\theta}\), we need to find a trajectory \(\{\theta_{t}\in\Theta\,|\,0\leq t\leq T\}\) in the parameter space \(\Theta\) so that \(u_{\theta_{t}}(\cdot)\) is close to \(u^{*}(\cdot,t)\) in the function space for every \(t\in[0,T]\). For example, if we consider \(L^{2}(\Omega)\) as the function space, by closeness we mean that \(\|u_{\theta_{t}}-u^{*}(\cdot,t)\|_{L^{2}(\Omega)}\) is small for all \(t\) (hereafter we denote \(\|\cdot\|_{p}=\|\cdot\|_{L^{p}(\Omega)}\) for notational simplicity).
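To make the parametrization above concrete, here is a minimal sketch (our own illustration, not the authors' code) of a single-hidden-layer tanh instance of (3.1)-(3.3) with a flattened parameter vector \(\theta\). The entries of \(\nabla_{\theta}u_{\theta}(x)\), computed in closed form, are exactly the functions that span the tangent spaces of \(\mathcal{M}\) used below; the input dimension and hidden width are arbitrary illustrative choices.

```python
import numpy as np

d, m_hidden = 2, 16   # input dimension and hidden width (illustrative)

def unpack(theta):
    """Split the flat parameter vector theta, cf. (3.3), into (W, b, w, c)."""
    W = theta[: m_hidden*d].reshape(m_hidden, d)
    b = theta[m_hidden*d : m_hidden*(d+1)]
    w = theta[m_hidden*(d+1) : m_hidden*(d+2)]
    return W, b, w, theta[-1]

def u(theta, x):
    """u_theta(x) = w . tanh(W x + b) + c, a one-hidden-layer case of (3.2)."""
    W, b, w, c = unpack(theta)
    return w @ np.tanh(W @ x + b) + c

def grad_theta_u(theta, x):
    """Closed-form entries of nabla_theta u_theta(x), ordered as in theta;
    as functions of x, they span the tangent space of M at u_theta."""
    W, b, w, _ = unpack(theta)
    t = np.tanh(W @ x + b)        # hidden activations
    s = 1.0 - t**2                # tanh'(W x + b)
    dW = np.outer(w * s, x)       # du/dW_ij = w_i tanh'(z_i) x_j
    return np.concatenate([dW.ravel(), w * s, t, [1.0]])

rng = np.random.default_rng(0)
theta = 0.1 * rng.standard_normal(m_hidden*(d+2) + 1)  # theta in R^m, m = 65
x = rng.uniform(size=d)
print(u(theta, x), grad_theta_u(theta, x).shape)
```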
Notice that \(\{u_{\theta_{t}}\,|\,0\leq t\leq T\}\) is a trajectory on \(\mathcal{M}\), whereas \(u^{*}(\cdot,t)\) is a trajectory in the full space \(L^{2}(\Omega)\).

### Proposed methodology

Let \(\Omega=(0,1)^{d}\) be the unit open cube in \(\mathbb{R}^{d}\) and \(F\) a _nonlinear differential operator_ of functions \(u:\Omega\to\mathbb{R}\) with necessary regularity conditions; we consider the IVP of the evolution PDE defined by \(F\) with arbitrary initial value as follows: \[\begin{cases}\partial_{t}u(x,t)=F[u](x,t),&x\in\Omega,\ t\in(0,T],\\ u(x,0)=g(x),&x\in\Omega,\end{cases} \tag{3.5}\] where \(T>0\) is some prescribed terminal time, and \(g:\mathbb{R}^{d}\to\mathbb{R}\) stands for an initial value. For ease of presentation, we assume the zero Dirichlet boundary condition \(u(x,t)=0\) for all \(x\in\partial\Omega\) and \(t\in[0,T]\) (for compatibility we henceforth assume \(g(x)\) has zero trace on \(\partial\Omega\)) throughout this paper. We denote by \(u^{g}\) the solution to the IVP (3.5) with this initial value \(g\). The solution operator \(\mathcal{S}_{F}\) of the IVP (3.5) is thus the mapping from the initial value \(g\) to the solution \(u^{g}\): \[\mathcal{S}_{F}:C^{2}(\bar{\Omega})\to C^{2,1}(\bar{\Omega}\times[0,T]),\quad \text{such that}\quad g\mapsto\mathcal{S}_{F}(g):=u^{g}, \tag{3.6}\] where \(C^{2}(\bar{\Omega}):=C(\bar{\Omega})\cap C^{2}(\Omega)\) for short. Our goal is to find a numerical approximation to \(\mathcal{S}_{F}\). Namely, _we want to find a fast computational scheme \(\mathcal{S}_{F}\) that takes any initial value \(g\) as input and accurately estimates \(u^{g}\) with low computational complexity._ It is important to note the substantial difference between solving (3.5) for any given but fixed initial value \(g\) and finding the solution operator (3.6) that maps any \(g\) to the corresponding solution \(u^{g}\). In the literature, most methods are developed for solving the IVP (3.5) with a fixed \(g\), such as traditional finite difference and finite element methods, as well as many state-of-the-art machine learning based methods. However, these methods are computationally expensive if (3.5) must be solved with many different initial values, and they need to start from scratch for every new \(g\). In sharp contrast, our goal is to find an approximation to the solution operator \(\mathcal{S}_{F}\) which, once found, can help us to compute \(u^{g}\) for any given \(g\) at a relatively much lower computational cost. For ease of presentation, we use autonomous, second-order nonlinear differential operators \(F[u]=F(x,u,\nabla_{x}u,\nabla_{x}^{2}u)\) as an example and take \(\Omega=(0,1)^{d}\) in (3.5) to describe our main idea below. Extensions to general non-autonomous nonlinear differential operators and PDEs defined on an open bounded set \(\Omega\subset\mathbb{R}^{d}\) with given boundary values will be discussed in Section 5. To approximate the solution operator \(\mathcal{S}_{F}\) in (3.6), we propose _a control mechanism in the parameter space \(\Theta\) of a prescribed reduced-order model \(u_{\theta}\)_. Specifically, we first determine a reduced-order model \(u_{\theta}\) to represent solutions of the IVP. We allow any parametric form of \(u_{\theta}\) but only assume that \(u_{\theta}(x)=u(x;\theta)\) is \(C^{1}\) smooth with respect to \(\theta\).
This is a mild condition satisfied by the commonly used reduced-order models: if \(u_{\theta}\) is a linear combination of basis functions and \(\theta\) represents the combination coefficients, then \(u_{\theta}\) is linear and hence smooth in \(\theta\); and if \(u_{\theta}\) is a DNN as in (3.2), then \(u_{\theta}\) is smooth in \(\theta\) as long as all activation functions \(\sigma\) are smooth. Suppose there exists a trajectory \(\{\theta_{t}\,|\,0\leq t\leq T\}\) in the parameter space \(\Theta\) such that its corresponding \(u_{\theta_{t}}\) approximates the solution of the IVP; then we must have \[\begin{cases}\partial_{t}u_{\theta_{t}}(x)=\nabla_{\theta}u(x;\theta_{t})\cdot \dot{\theta}_{t}=F[u_{\theta_{t}}](x),&\quad\forall\,x\in\Omega,\ t\in(0,T],\\ u_{\theta_{0}}(x)=g(x),&\quad\forall\,x\in\Omega.\end{cases} \tag{3.7}\] To compute \(u_{\theta_{t}}\), it is sufficient to find a control vector (velocity) field \(V_{F}:\Theta\to\mathbb{R}^{m}\), in the sense of \(\dot{\theta}_{t}=V_{F}(\theta_{t})\), that steers the trajectory \(\theta_{t}\) along the correct direction starting from the initial \(\theta_{0}\) satisfying \(u_{\theta_{0}}(x)=g(x)\). This observation suggests a new approach to solve the IVP with a fixed evolution PDE but varying initial values \(g\): for the evolution equation in (3.7) to hold, it suffices to find a vector field \(V_{F}\) such that \[\nabla_{\theta}u_{\theta}\cdot V_{F}(\theta)=F[u_{\theta}] \tag{3.8}\] for all \(\theta\in\Theta\). It is important to note that \(V_{F}\) only depends on the nonlinear differential operator \(F\) of the original evolution PDE, but not on any actual initial value \(g\) of the IVP. Once this is achieved, we can effectively approximate the solution of the IVP with any initial value \(g\): we first set \(\theta_{0}=\theta^{g}\), where \(\theta^{g}\) denotes the parameter such that \(u_{\theta^{g}}\) fits \(g\), then we numerically solve the following ODE in the parameter space \(\Theta\) (which can be done fast) using the control vector field \(V_{F}\): \[\begin{cases}\dot{\theta}_{t}=V_{F}(\theta_{t}),\quad\forall\,t\in(0,T],\\ \theta_{0}=\theta^{g}.\end{cases} \tag{3.9}\] The solution trajectory \(\{\theta_{t}\,|\,0\leq t\leq T\}\) of the ODE (3.9) induces a path \(\{u_{\theta_{t}}\,|\,0\leq t\leq T\}\) in \(\mathcal{M}\) as an approximation to the solution of the IVP. The computational cost is thus composed of two parts: finding the parameter \(\theta^{g}\) of \(u_{\theta}\) to fit \(g\) and numerically solving the ODE (3.9), both of which are substantially cheaper than solving the IVP (3.5). The main question is how to obtain the control vector field \(V_{F}\) in (3.9). As an explicit form of \(V_{F}\) is unknown, we choose to express \(V_{F}\) in a general parametric form \(V_{\xi}\) with parameter \(\xi\) to be determined. Specifically, we propose to set \(V_{\xi}\) as another DNN, where \(\xi\) represents the set of learnable network parameters in \(V_{\xi}\). We call \(V_{\xi}\) the _neural control_ field. We learn the parameter \(\xi\) by minimizing the following loss function: \[\ell(\xi):=\int_{\Theta}\|\nabla_{\theta}u_{\theta}\cdot V_{\xi}(\theta)-F[u_{\theta}]\|_{2}^{2}\,d\theta. \tag{3.10}\] In practice, we approximate the integral in \(\ell\) by Monte Carlo integration.
We sample \(K\) points \(\{\theta_{k}\,|\,k=1,\dots,K\}\) uniformly from \(\Theta\) (here the subscript \(k\) in \(\theta_{k}\) stands for the \(k\)th point among the \(K\) points sampled in \(\Theta\)) and form the empirical loss function \[\hat{\ell}(\xi)=K^{-1}\cdot\sum_{k=1}^{K}\|\nabla_{\theta}u_{\theta_{k}}\cdot V_{\xi}(\theta_{k})-F[u_{\theta_{k}}]\|_{2}^{2}. \tag{3.11}\] Then we minimize \(\hat{\ell}(\xi)\) with respect to \(\xi\), where the \(L^{2}\) norm is also approximated by Monte Carlo integration on \(\Omega\). The training of \(V_{\xi}\) is summarized in Algorithm 1. Once we have trained the vector field \(V_{\xi}\), we can implement the solution operator \(\mathcal{S}_{F}\) in the following two steps: we first find a \(\theta_{0}\) such that \(u_{\theta_{0}}\) fits \(g\), i.e., we find \(\theta_{0}\) that minimizes \(\|u_{\theta}-g\|_{2}\). This can be done by sampling \(\{x_{n}\}_{n=1}^{N}\) from \(\Omega\) and minimizing the empirical squared \(L^{2}\) norm \((1/N)\cdot\sum_{n=1}^{N}|u_{\theta}(x_{n})-g(x_{n})|^{2}\) with respect to \(\theta\). Then we solve the ODE (3.9) using any numerical ODE solver (e.g., Euler, 4th-order Runge-Kutta, predictor-corrector) with \(\theta_{0}\) as the initial value. Both steps can be done efficiently, and the total computational cost is substantially lower than that of solving the original IVP (3.5) again. We summarize how neural control solves IVPs in Algorithm 2.

### Error analysis

In this subsection, we develop an error estimate of the proposed method. We first focus on the error due to projection onto the tangent space \(T_{u_{\theta}}\mathcal{M}\) in the \(L^{2}\) space in Section 3.3.1. Then we establish the solution approximation error for linear and semilinear parabolic PDEs in Section 3.3.2. For ease of discussion, we again assume the zero Dirichlet boundary condition \(u(x,t)=0\) for all \(x\in\partial\Omega\) and \(t\in[0,T]\), and we let \(\Omega=(0,1)^{d}\subset\mathbb{R}^{d}\) be the unit open cube in \(\mathbb{R}^{d}\) and \(\Theta\) some open bounded set in \(\mathbb{R}^{m}\). We let \(F[u]:=F(u,\nabla u,\nabla^{2}u)\) be a nonlinear differential operator with necessary regularity conditions to be specified later. Additional requirements on the regularity of \(u_{\theta}\) will be given when needed.

#### 3.3.1 Approximation error of control vector field

We first investigate the main source of error when using a reduced-order model to approximate the time-evolving solution of the given PDE. We show that this error is due to the imperfect representation of \(F[u_{\theta}]\) using \(\nabla_{\theta}u_{\theta}\) in (3.8). Specifically, due to the approximation properties of reduced-order models, \(T_{u_{\theta}}\mathcal{M}\) is only a finite-dimensional subspace of \(L^{2}\), and thus we can only approximate the projection of \(F[u_{\theta}]\) onto this tangent space. We will need the following assumptions on the regularity of \(u_{\theta}\) and \(F\).

**Assumption 1**: _The reduced-order model \(u_{\theta}(\cdot)\in C^{3}(\Omega)\cap C(\bar{\Omega})\) for every \(\theta\in\bar{\Theta}\) and \(u(x;\cdot)\in C^{1}(\Theta)\cap C(\bar{\Theta})\). Moreover, there exists \(L>0\) such that for all \(\theta\in\bar{\Theta}\)_ \[F[u_{\theta}]\in\mathcal{F}^{L}:=\{f\in C^{1}(\Omega)\cap C(\bar{\Omega}):\|f\|_{\infty}\leq L,\ \|\nabla f\|_{\infty}\leq L\}. \tag{3.12}\]

Assumption 1 provides some sufficient regularity conditions on the reduced-order model \(u_{\theta}\) and boundedness of \(F[u_{\theta}]\) and its gradient to be used in our error estimates.
Notice that we consider \(F\) as a second-order differential operator here, and therefore the assumption \(u_{\theta}\in C^{3}(\Omega)\) ensures that \(u_{\theta}(x),\nabla u_{\theta}(x),\nabla^{2}u_{\theta}(x)\) are all sufficiently smooth. The regularity condition on \(F\) in Assumption 1 requires that the mapping \(F[u_{\theta}](x)\) is a \(C^{1}\) function whose magnitude and gradient are bounded by \(L\) over \(\bar{\Omega}\). These assumptions are generally mild, as we will use reduced-order models smooth in \((x,\theta)\), e.g., a DNN with smooth activation functions, and the operator \(F\) is sufficiently regular.

**Assumption 2**: _For any \(\bar{\varepsilon}>0\), there exist a reduced-order model \(u_{\theta}\) and a bounded open set \(\Theta\subset\mathbb{R}^{m}\), such that for every \(\theta\in\bar{\Theta}\) there exists a vector \(v_{\theta}\in\mathbb{R}^{m}\) satisfying_ \[\|v_{\theta}\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}\leq\bar{\varepsilon}.\]

Assumption 2 provides an upper bound on the error when projecting \(F[u_{\theta}]\) onto the tangent space \(T_{u_{\theta}}\mathcal{M}\), which is spanned by the functions in \(\nabla_{\theta}u_{\theta}\). This error bound is determined by the choice of the reduced-order model \(u_{\theta}\) and the parameter set \(\Theta\). As will be demonstrated in our numerical experiments, a small projection error can be achieved by using a standard DNN as the reduced-order model \(u_{\theta}\). As such an error bound is difficult to establish analytically due to the complex structure of general DNNs, we provide an example reduced-order model with special structure to justify the reasonableness of Assumption 2.

**Example 3.1**: _Let \(\bar{\varepsilon}>0\) and \(\{\varphi_{j}\}_{j=1}^{\infty}\) be a complete smooth orthonormal basis (e.g., a generalized Fourier basis) for \(L^{2}(\Omega)\). Suppose there exist \(C>0\), \(\gamma>1\), and \(C_{0}>0\) such that for all \(u\in C^{3}(\Omega)\cap C(\bar{\Omega})\) with \(\|u\|_{2}^{2}\leq C_{0}\) we have_ \[F[u]\in\mathcal{G}^{C,\gamma}:=\left\{f\in C^{1}(\Omega)\cap C(\bar{\Omega}):|\langle f,\varphi_{j}\rangle|^{2}\leq Cj^{-\gamma},\ \forall\,j\geq 1\right\}. \tag{3.13}\] _Consider \(u_{\theta}=\theta\cdot\varphi=\sum_{j=1}^{m}\theta_{j}\varphi_{j}\). We denote \(f_{\theta}:=F[u_{\theta}]\) for short. Then there exists \(m=m(\bar{\varepsilon},C,\gamma)\in\mathbb{N}\) such that \(\sum_{j=m+1}^{\infty}Cj^{-\gamma}<\bar{\varepsilon}^{2}\). With this \(m\), \(\nabla_{\theta}u_{\theta}=\varphi=(\varphi_{1},\ldots,\varphi_{m})\) and for \(\alpha^{f_{\theta}}=(\alpha_{1}^{f_{\theta}},\ldots,\alpha_{m}^{f_{\theta}})\) with \(\alpha_{j}^{f_{\theta}}:=\langle f_{\theta},\varphi_{j}\rangle\), there is_ \[\|\alpha^{f_{\theta}}\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}^{2}=\Big{\|}\sum_{j=1}^{m}\alpha_{j}^{f_{\theta}}\varphi_{j}-f_{\theta}\Big{\|}_{2}^{2}=\sum_{j=m+1}^{\infty}|\langle f_{\theta},\varphi_{j}\rangle|^{2}\leq\bar{\varepsilon}^{2}.\] _Therefore, the reduced-order model \(u_{\theta}=\theta\cdot\varphi\) with \(\Theta=\{\alpha\in\mathbb{R}^{m}:|\alpha|^{2}<C_{0}\}\) and \(v_{\theta}=\alpha^{f_{\theta}}\) satisfies Assumption 2._

_This example can be modified to use a more general form of reduced-order model \(u_{\theta}\), such as a DNN. To see this, we first repeat the procedure above but with \(\bar{\varepsilon}\) replaced by \(\bar{\varepsilon}/2\)._
_Then the universal approximation theorem [34, 84] and the continuity of DNNs in their parameters imply that there exist DNNs \(\{\hat{\varphi}_{j}:1\leq j\leq m\}\), whose network parameters are collectively denoted by \(\eta\in\mathbb{R}^{m^{\prime}}\), satisfying \(\|\hat{\varphi}_{j}-\varphi_{j}\|_{\infty}\leq\bar{\varepsilon}/(2\sqrt{mC_{0}|\Omega|})\) and hence \(\|\hat{\varphi}_{j}-\varphi_{j}\|_{2}\leq\bar{\varepsilon}/(2\sqrt{mC_{0}})\) for all \(\eta\) in an open set \(H\subset\mathbb{R}^{m^{\prime}}\). Consider the DNN \(u_{\theta}=c\cdot\hat{\varphi}\) with parameter \(\theta=(c,\eta)\in\mathbb{R}^{n}\) where \(n=m+m^{\prime}\). Then \(\nabla_{c}u_{\theta}(x)=(\hat{\varphi}_{1},\ldots,\hat{\varphi}_{m})\). Using the example above, we know that for any \(f_{\theta}:=F[u_{\theta}]\in\mathcal{G}^{C,\gamma}\), there exists \(\alpha^{f_{\theta}}\in\mathbb{R}^{m}\) such that \(\|\alpha^{f_{\theta}}\cdot\varphi-F[u_{\theta}]\|_{2}\leq\bar{\varepsilon}/2\). Therefore, we use \((\alpha^{f_{\theta}},0)\), which concatenates \(\alpha^{f_{\theta}}\) and \(0\in\mathbb{R}^{m^{\prime}}\), as the combination coefficients of \(\nabla_{\theta}u_{\theta}\) to obtain_ \[\|(\alpha^{f_{\theta}},0)\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2} =\|\alpha^{f_{\theta}}\cdot\nabla_{c}u_{\theta}-F[u_{\theta}]\|_{2}\] \[\leq\|\alpha^{f_{\theta}}\cdot\hat{\varphi}-\alpha^{f_{\theta}}\cdot\varphi\|_{2}+\|\alpha^{f_{\theta}}\cdot\varphi-F[u_{\theta}]\|_{2}\] \[\leq\sum_{j=1}^{m}|\alpha_{j}^{f_{\theta}}|\|\hat{\varphi}_{j}-\varphi_{j}\|_{2}+\frac{\bar{\varepsilon}}{2}\] \[\leq\sqrt{mC_{0}}\cdot\frac{\bar{\varepsilon}}{2\sqrt{mC_{0}}}+\frac{\bar{\varepsilon}}{2}\] \[=\bar{\varepsilon}.\] _Therefore, the DNN \(u_{\theta}=c\cdot\hat{\varphi}\) with \(\Theta=\{(c,\eta):|c_{j}|^{2}<C_{0},\ \eta\in H\}\) and \(v_{\theta}=(\alpha^{f_{\theta}},0)\) satisfies Assumption 2._

Before proving the main proposition of this section, we need the following lemma.

**Lemma 3.2**: _Suppose Assumptions 1 and 2 are satisfied. For all \(\varepsilon>\bar{\varepsilon}\) there exists \(v:\bar{\Theta}\to\mathbb{R}^{m}\) such that \(v\) is bounded over \(\bar{\Theta}\) and_ \[\|v_{\theta}\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}\leq\varepsilon.\]

Proof.: Let \(\varepsilon>\bar{\varepsilon}\) and \(\delta\in(0,\varepsilon-\bar{\varepsilon})\). By Assumption 2, for all \(\theta\in\Theta\) there exists a coefficient vector \(\alpha_{\theta}\in\mathbb{R}^{m}\) such that \[\|\alpha_{\theta}\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}\leq\bar{\varepsilon}.\] As \(F[u_{\theta}]\) and \(\nabla_{\theta}u_{\theta}\) are continuous and \(\Omega\) is bounded, we associate to each \(\theta\) and coefficient \(\alpha_{\theta}\) an open set \(U_{\theta}\) containing \(\theta\), small enough, such that for all \(\theta^{\prime}\in U_{\theta}\) we have \[\|\alpha_{\theta}\nabla_{\theta}u_{\theta^{\prime}}-\alpha_{\theta}\nabla_{\theta}u_{\theta}\|_{2}+\|F[u_{\theta}]-F[u_{\theta^{\prime}}]\|_{2}\leq\delta \tag{3.14}\] and hence \[\|\alpha_{\theta}\cdot\nabla_{\theta}u_{\theta^{\prime}}-F[u_{\theta^{\prime}}]\|_{2}\leq\|\alpha_{\theta}\nabla_{\theta}u_{\theta^{\prime}}-\alpha_{\theta}\nabla_{\theta}u_{\theta}\|_{2}+\|\alpha_{\theta}\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}+\|F[u_{\theta}]-F[u_{\theta^{\prime}}]\|_{2}\leq\delta+\bar{\varepsilon}. \tag{3.15}\] Therefore \(\cup_{\theta\in\bar{\Theta}}U_{\theta}\) is an open cover of \(\bar{\Theta}\).
As \(\bar{\Theta}\) is compact, this open cover has a finite subcover \(\cup_{i=1}^{N}U_{\theta_{i}}\) for particular \(\theta_{i}\)'s. Define \(v:\bar{\Theta}\to\mathbb{R}^{m}\) such that \(v_{\theta}=\alpha_{\theta_{i}}\) if \(\theta\in U_{\theta_{i}}\) (if \(\theta\) is in the intersection of multiple \(U_{\theta_{i}}\)'s, we choose a single \(\alpha_{\theta_{i}}\) arbitrarily). We see from this construction that \(v\) is bounded over \(\bar{\Theta}\) as its range is finite. From (3.15) we have \[\|v_{\theta}\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}\leq\delta+\bar{\varepsilon}\leq\varepsilon.\] With Assumptions 1 and 2 and Lemma 3.2, we can prove the existence of an accurate neural control field \(V_{\xi}\) parameterized as a neural network, as shown in the next proposition.

**Proposition 3.3**: _Suppose Assumptions 1 and 2 hold. Then for any \(\varepsilon>0\), there exists a differentiable vector field parameterized as a neural network \(V_{\xi}:\bar{\Theta}\to\mathbb{R}^{m}\) with parameter \(\xi\), such that_ \[\|V_{\xi}(\theta)\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}\leq\varepsilon\] _for all \(\theta\in\bar{\Theta}\)._

Proof.: We first show that there exists a differentiable vector-valued function \(V:\bar{\Theta}\to\mathbb{R}^{m}\) such that \[\|V(\theta)\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}\leq\frac{\varepsilon}{2} \tag{3.16}\] for all \(\theta\in\bar{\Theta}\). To this end, we choose \(\bar{\varepsilon}_{0}\in(0,\varepsilon/2)\) and \(\bar{\varepsilon}\in(\bar{\varepsilon}_{0},\varepsilon/2)\); then by Assumption 2 and Lemma 3.2 we know that there exist a reduced-order model \(u_{\theta}\), a bounded open set \(\Theta\subset\mathbb{R}^{m}\), and \(M_{v}>0\) such that there is a vector-valued function \(\theta\mapsto v_{\theta}\), where for any \(\theta\in\bar{\Theta}\), we have \(|v_{\theta}|<M_{v}\) and \[\|v_{\theta}\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}\leq\bar{\varepsilon}.\] Note that \(v_{\theta}\) is not necessarily differentiable with respect to \(\theta\). To obtain a differentiable vector field \(V(\theta)\), for each \(\theta\in\bar{\Theta}\), we define the function \(\psi_{\theta}\) by \[\psi_{\theta}(w):=\|w\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}^{2}=w^{\top}G(\theta)w-2w^{\top}p(\theta)+q(\theta),\] where \[G(\theta):=\int_{\Omega}\nabla_{\theta}u_{\theta}(x)\nabla_{\theta}u_{\theta}(x)^{\top}\,dx,\quad p(\theta):=\int_{\Omega}\nabla_{\theta}u_{\theta}(x)F[u_{\theta}](x)\,dx,\quad q(\theta):=\int_{\Omega}F[u_{\theta}](x)^{2}\,dx. \tag{3.17}\] Then we know \[\psi_{\theta}^{*}:=\psi_{\theta}(v_{\theta})=\|v_{\theta}\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}^{2}\leq\bar{\varepsilon}^{2}. \tag{3.18}\] It is also clear that \(G(\theta)\) is symmetric and positive semi-definite. Moreover, due to the compactness of \(\bar{\Omega}\) and \(\bar{\Theta}\), as well as the fact that \(\nabla_{\theta}u\in C(\bar{\Omega}\times\bar{\Theta})\), we know there exists \(\lambda_{G}>0\) such that \[\|G(\theta)\|_{2}\leq\lambda_{G}\] for all \(\theta\in\bar{\Theta}\). Therefore, \(\psi_{\theta}\) is a convex function and the Lipschitz constant of \(\nabla\psi_{\theta}\) is uniformly upper bounded by \(\lambda_{G}\) over \(\bar{\Theta}\).
Now for any \(w\in\mathbb{R}^{m}\), \(h>0\), and \(K\in\mathbb{N}\) (we reuse the letter \(K\) as the iteration counter instead of the number of sampling points in this proof), we define \[\mathcal{O}_{\theta}^{K,h}(w):=w_{K},\quad\text{where}\quad w_{k}=w_{k-1}-h\nabla\psi_{\theta}(w_{k-1}),\quad w_{0}=w,\quad k=1,\ldots,K.\] Namely, \(\mathcal{O}_{\theta}^{K,h}\) is the oracle of executing the gradient descent optimization scheme on \(\psi_{\theta}\) with step size \(h>0\) for \(K\) iterations. Hence, adopting the standard convergence result of gradient descent in convex optimization [60, Theorem 2.1.14] using \(\psi_{\theta}^{*}\) and combined with the bound \(|v_{\theta}|<M_{v}\), we obtain for any fixed \(h\in(0,1/\lambda_{G})\) and \[K\geq\frac{2M_{v}^{2}}{h(2-\lambda_{G}h)((\varepsilon/2)^{2}-\bar{\varepsilon}^{2})},\] there is \[\psi_{\theta}(\mathcal{O}_{\theta}^{K,h}(0))-\psi_{\theta}^{*}\leq\frac{2M_{v}^{2}}{Kh(2-\lambda_{G}h)}\leq\left(\frac{\varepsilon}{2}\right)^{2}-\bar{\varepsilon}^{2}. \tag{3.19}\] Notice that \(\mathcal{O}_{\theta}^{K,h}\) is a differentiable vector-valued function of \(\theta\) because \(K\) and \(h\) are fixed. Therefore, combining (3.18) and (3.19) yields \[0\leq\psi_{\theta}(\mathcal{O}_{\theta}^{K,h}(0))=(\psi_{\theta}(\mathcal{O}_{\theta}^{K,h}(0))-\psi_{\theta}^{*})+\psi_{\theta}^{*}\leq(\varepsilon/2)^{2}-\bar{\varepsilon}^{2}+\bar{\varepsilon}^{2}=(\varepsilon/2)^{2}.\] As this inequality holds for all \(\theta\in\bar{\Theta}\), we set \(V(\theta)=\mathcal{O}_{\theta}^{K,h}(0)\), which is a differentiable function of \(\theta\) satisfying (3.16). By the universal approximation theorem of neural networks [34], we know there exists a differentiable vector-valued function parameterized as a neural network \(V_{\xi}\) with parameter \(\xi\) such that \[|V_{\xi}(\theta)-V(\theta)|_{\infty}\leq\varepsilon/(2B)\] for all \(\theta\in\bar{\Theta}\), where \(B:=\max_{\theta\in\bar{\Theta}}\|\nabla_{\theta}u_{\theta}\|_{2}<\infty\) and \(|\cdot|_{\infty}\) stands for the \(\infty\)-norm of vectors. Hence we know \[\|V_{\xi}(\theta)\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}\leq\|V_{\xi}(\theta)\cdot\nabla_{\theta}u_{\theta}-V(\theta)\cdot\nabla_{\theta}u_{\theta}\|_{2}+\|V(\theta)\cdot\nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}\leq B\cdot\frac{\varepsilon}{2B}+\frac{\varepsilon}{2}=\varepsilon.\] This completes the proof.

**Remark 3.4**: It is important to note that the geometry of \(\mathcal{M}\), especially its dimensionality, is complex and highly dependent on the structure of \(u_{\theta}\) and the parameter space \(\Theta\). In particular, we can show that the tangent space \(T_{u_{\theta}}\mathcal{M}=\mathrm{span}(\nabla_{\theta}u_{\theta})\) at any \(u_{\theta}\in\mathcal{M}\) is in the \(L^{2}\) space, where \(\nabla_{\theta}u_{\theta}=(\partial_{\theta_{1}}u_{\theta},\ldots,\partial_{\theta_{m}}u_{\theta})\) for \(\theta=(\theta_{1},\ldots,\theta_{m})\). (Here we use discrete indices \(1,\ldots,m\) as subscripts of \(\theta\) to indicate its components for notational simplicity. This is to be distinguished from the subscript \(t\) in \(\theta_{t}\), which stands for the time of the trajectory \(\theta_{t}\).) However, \(\dim(T_{u_{\theta}}\mathcal{M})\) may vary across different \(u_{\theta}\) on \(\mathcal{M}\). For example, consider the reduced-order model \(u_{\theta}\) parameterized as a DNN as in (3.2): when \(w=0\), we have \(\theta=(0,b,\cdots)\) and hence \(\partial_{W_{l}}u_{\theta}=0\) and \(\partial_{b_{l}}u_{\theta}=0\) for all \(l=1,\ldots,L\).
In this case, the \(m\) components of \(\nabla_{\theta}u_{\theta}\) are _not_ linearly independent, and \(\dim(T_{u_{\theta}}\mathcal{M})<m\) for such \(\theta\)'s. This distinguishes our parameter submanifold from existing ones, such as [2], which assumes that the tangent space is always of full dimension \(m\) at any point of the submanifold. In our case, the challenges and complications in dealing with the parameter submanifold \(\mathcal{M}\) could be avoided by making such an assumption, but this would lead to incorrect analysis and error estimation; handling the degenerate case poses a major technical challenge for the proposed framework. Specifically, we note that the rank of \(G(\theta)\) varies across \(\Theta\), and therefore the pseudoinverse \(G(\theta)^{+}\) may be discontinuous. A major theoretical merit of Proposition 3.3 is that we can still ensure the existence of a differentiable control vector field in this case.

#### 3.3.2 Error analysis in solving (semi-)linear parabolic PDEs

Now we are ready to provide error bounds of our method in solving a large class of linear and semilinear parabolic PDEs. This class of PDEs covers many types of reaction-diffusion equations, such as the heat equation, Fisher's equation, and the Allen-Cahn equation. The differential operator \(F\) in linear and semilinear parabolic PDEs has the form \[F[u]=\nabla\cdot(A\nabla u)+b\cdot\nabla u+f(u),\] where \(A:\Omega\to\mathbb{R}^{d\times d}\) and \(b:\Omega\to\mathbb{R}^{d}\) are continuous, and \(f:\mathbb{R}\to\mathbb{R}\) is \(L_{f}\)-Lipschitz and acts on \(u(x)\) for each \(x\). Moreover, we assume that there exist \(\lambda\geq 0\) and \(B\geq 0\) such that \[z^{\top}A(x)z\geq\lambda|z|^{2},\quad\forall\,z\in\mathbb{R}^{d},\ x\in\Omega, \tag{3.20}\] and \[\|\nabla\cdot b\|_{\infty}\leq B. \tag{3.21}\] Furthermore, due to the smoothness of \(V_{\xi}\) and the compactness of \(\bar{\Theta}\), we know there exist \(M_{V}>0\) and \(L_{V}>0\) such that \[\max_{\theta\in\Theta}|V_{\xi}(\theta)|\leq M_{V}\qquad\text{and}\qquad\max_{\theta\in\Theta}|\nabla_{\theta}V_{\xi}(\theta)|\leq L_{V}. \tag{3.22}\]

**Theorem 3.5**.: _Suppose Assumptions 1 and 2 hold. Then there exists a control field \(V_{\xi}\) such that for any \(u^{*}\) satisfying the evolution PDE in (3.5) there is_ \[\|u_{\theta_{t}}(\cdot)-u^{*}(\cdot,t)\|_{2}\leq e^{(L_{f}+B/2-\lambda/C_{p})t}(\varepsilon_{0}+\varepsilon t) \tag{3.23}\] _for all \(t\) as long as \(\theta_{t}\in\bar{\Theta}\), where \(\theta_{t}\) is solved from the ODE (3.9) with \(V_{\xi}\) and initial \(\theta_{0}\) satisfying \(\|u_{\theta_{0}}(\cdot)-u^{*}(\cdot,0)\|_{2}\leq\varepsilon_{0}\). Here \(C_{p}\) is a constant depending only on \(\Omega\)._

Proof.: We denote the residual \[r(x,t):=\nabla_{\theta}u_{\theta_{t}}(x)\cdot V_{\xi}(\theta_{t})-F[u_{\theta_{t}}](x).\] Then by Proposition 3.3 we have \(\|r(\cdot,t)\|_{2}\leq\varepsilon\) for all \(t\). Furthermore, we denote \[\delta(x,t):=u_{\theta_{t}}(x)-u^{*}(x,t)\] for all \((x,t)\in\bar{\Omega}\times[0,T]\) and \(D(t):=\|\delta(\cdot,t)\|_{2}\); then there is \[D^{\prime}(t)=\Big{\langle}\frac{\delta(\cdot,t)}{\|\delta(\cdot,t)\|_{2}},\partial_{t}\delta(\cdot,t)\Big{\rangle}. \tag{3.24}\] Here we use the convention that \(\delta(\cdot,t)/\|\delta(\cdot,t)\|_{2}=0\) if \(\delta(\cdot,t)=0\) a.e.
By the definition of \(\delta\), we have \[\partial_{t}\delta(x,t) =\partial_{t}u_{\theta_{t}}(x)-\partial_{t}u^{*}(x,t)\] \[=\nabla_{\theta}u_{\theta_{t}}(x)\cdot\dot{\theta}_{t}-F[u^{*}](x,t)\] \[=\nabla_{\theta}u_{\theta_{t}}(x)\cdot V_{\xi}(\theta_{t})-F[u^{*}](x,t)\] \[=F[u_{\theta_{t}}](x)-F[u^{*}](x,t)+r(x,t)\] \[=\nabla\cdot(A(x)\nabla\delta(x,t))+b(x)\cdot\nabla\delta(x,t)+f(u_{\theta_{t}}(x))-f(u^{*}(x,t))+r(x,t).\] Therefore, we have \[\begin{split}\langle\delta(\cdot,t),\partial_{t}\delta(\cdot,t)\rangle&=\int_{\Omega}\delta(x,t)\left(\nabla\cdot(A(x)\nabla\delta(x,t))+b(x)\cdot\nabla\delta(x,t)\right)\,dx\\ &\qquad+\int_{\Omega}\delta(x,t)(f(u_{\theta_{t}}(x))-f(u^{*}(x,t))+r(x,t))\,dx\\ &=:I(t)+J(t).\end{split} \tag{3.25}\] Because \(u_{\theta_{t}}(\cdot)|_{\partial\Omega}=u^{*}(\cdot,t)|_{\partial\Omega}=0\), we know \(\delta(\cdot,t)|_{\partial\Omega}=0\). Thus, we have \[\begin{split}I(t)&=\int_{\Omega}\delta(x,t)\left(\nabla\cdot(A(x)\nabla\delta(x,t))+b(x)\cdot\nabla\delta(x,t)\right)\,dx\\ &=-\int_{\Omega}\nabla\delta(x,t)^{\top}A(x)\nabla\delta(x,t)\,dx-\frac{1}{2}\int_{\Omega}(\nabla\cdot b(x))\delta(x,t)^{2}\,dx\\ &\leq-\lambda\int_{\Omega}|\nabla\delta(x,t)|^{2}\,dx-\frac{1}{2}\int_{\Omega}(\nabla\cdot b(x))\delta(x,t)^{2}\,dx\\ &\leq-\frac{\lambda}{C_{p}}\int_{\Omega}|\delta(x,t)|^{2}\,dx+\frac{B}{2}\int_{\Omega}|\delta(x,t)|^{2}\,dx,\end{split} \tag{3.26}\] where the first equality is just the definition of \(I(t)\), the second equality is obtained by integrating by parts in both terms and using \(\delta(\cdot,t)|_{\partial\Omega}=0\), the first inequality is due to (3.20), and the last inequality is due to Poincaré's inequality \[\|\delta(\cdot,t)\|_{2}^{2}\leq C_{p}\|\nabla\delta(\cdot,t)\|_{2}^{2}\] (which holds as \(\delta(\cdot,t)|_{\partial\Omega}=0\) for all \(t\), with \(C_{p}\) the Poincaré constant depending only on \(\Omega\)) and the bound (3.21). We can also obtain \[J(t) =\int_{\Omega}\delta(x,t)(f(u_{\theta_{t}}(x))-f(u^{*}(x,t))+r(x,t))\,dx \tag{3.27}\] \[\leq\int_{\Omega}|\delta(x,t)|\cdot|f(u_{\theta_{t}}(x))-f(u^{*}(x,t))+r(x,t)|\,dx\] \[\leq\int_{\Omega}|\delta(x,t)|\cdot(L_{f}|\delta(x,t)|+|r(x,t)|)\,dx\] \[\leq L_{f}\|\delta(\cdot,t)\|_{2}^{2}+\|r(\cdot,t)\|_{2}\|\delta(\cdot,t)\|_{2}\] \[\leq L_{f}\|\delta(\cdot,t)\|_{2}^{2}+\varepsilon\|\delta(\cdot,t)\|_{2},\] where the first identity is the definition of \(J(t)\), the second inequality is due to the Lipschitz condition on \(f\), and the third is the Cauchy-Schwarz inequality. Combining (3.24), (3.25), (3.26) and (3.27), we obtain \[D^{\prime}(t)\leq\Big{(}L_{f}+\frac{B}{2}-\frac{\lambda}{C_{p}}\Big{)}D(t)+\varepsilon.\] By Gronwall's inequality we deduce that \[D(t)\leq e^{(L_{f}+B/2-\lambda/C_{p})t}(D(0)+\varepsilon t).\] Recalling that \[D(0)=\|\delta(\cdot,0)\|_{2}=\|u_{\theta_{0}}(\cdot)-u^{*}(\cdot,0)\|_{2}=\|u_{\theta_{0}}(\cdot)-g(\cdot)\|_{2}\leq\varepsilon_{0},\] we thus have \[\|u_{\theta_{t}}(\cdot)-u^{*}(\cdot,t)\|_{2}=D(t)\leq e^{(L_{f}+B/2-\lambda/C_{p})t}(\varepsilon_{0}+\varepsilon t)\] for all time \(t\), which completes the proof.

_Remark 3.6_.: While we assumed \(f\) to be globally Lipschitz, the result in Theorem 3.5 still holds locally under a local Lipschitz condition on \(f\). For example, in the case of the Allen-Cahn equation, we know that if the initial function is bounded by \(1\), the true trajectory remains bounded, allowing the results of Theorem 3.5 to apply.

**Corollary 3.7**.: _Suppose the conditions in Theorem 3.5 hold._
_Let \(\hat{\theta}_{t}\) be the numerical solution to the ODE (3.9) obtained by using the Euler scheme with step size \(h>0\). Then_
\[\|u_{\hat{\theta}_{t}}(\cdot)-u^{*}(\cdot,t)\|_{2}\leq\frac{L_{V}M_{V}|\Omega|^{1/2}h}{2}(e^{L_{V}t}-1)+e^{(L_{f}+B/2-\lambda/C_{p})t}(\varepsilon_{0}+\varepsilon t) \tag{3.28}\]
_for all \(t\) as long as \(\theta_{t}\in\bar{\Theta}\)._

Proof.: Given the estimate provided in Theorem 3.5, we only need to show
\[\|u_{\hat{\theta}_{t}}(\cdot)-u_{\theta_{t}}(\cdot)\|_{2}\leq\frac{L_{V}M_{V}|\Omega|^{1/2}h}{2}(e^{L_{V}t}-1), \tag{3.29}\]
since combined with (3.23) it yields the claimed estimate (3.28). To show (3.29), we notice that
\[\ddot{\theta}_{t}=\frac{d}{dt}V_{\xi}(\theta_{t})=\nabla_{\theta}V_{\xi}(\theta_{t})\cdot\dot{\theta}_{t}=\nabla_{\theta}V_{\xi}(\theta_{t})\cdot V_{\xi}(\theta_{t}).\]
Therefore we have
\[|\ddot{\theta}_{t}|=|\nabla_{\theta}V_{\xi}(\theta_{t})\cdot V_{\xi}(\theta_{t})|\leq L_{V}M_{V},\]
where \(L_{V}\) and \(M_{V}\) are defined in (3.22). Hence, by the standard error estimate for Euler's method [4, pp. 346], the numerical solution \(\hat{\theta}_{t}\) satisfies
\[|\hat{\theta}_{t}-\theta_{t}|\leq\frac{hM_{V}}{2}\left(e^{L_{V}t}-1\right) \tag{3.30}\]
for all \(t\). Therefore, we obtain
\[\|u_{\hat{\theta}_{t}}-u_{\theta_{t}}\|_{2} =\Big{(}\int_{\Omega}|u_{\hat{\theta}_{t}}(x)-u_{\theta_{t}}(x)|^{2}\,dx\Big{)}^{1/2}=\Big{(}\int_{\Omega}|\nabla_{\theta}u_{\tilde{\theta}_{t}}(x)\cdot(\hat{\theta}_{t}-\theta_{t})|^{2}\,dx\Big{)}^{1/2}\leq L_{V}|\Omega|^{1/2}|\hat{\theta}_{t}-\theta_{t}|\leq\frac{L_{V}M_{V}|\Omega|^{1/2}h}{2}(e^{L_{V}t}-1),\]
where the second equality holds because \(u_{\theta}\) is \(C^{1}\) in \(\theta\), so the mean value theorem applies to \(u_{\theta}\) (here \(\tilde{\theta}_{t}\) is some point on the line segment between \(\hat{\theta}_{t}\) and \(\theta_{t}\)).

The proof above can be modified if a different numerical ODE solver is employed; in that case one can obtain an improved upper bound and a higher order in the step size \(h\) in (3.30).

## 4 Numerical Results

### Implementation of the training process of the control field \(V_{\xi}\)

In Section 3.2, we have shown that the neural control field \(V_{\xi}\) is parameterized as a deep network, and its parameter \(\xi\) can be learned by solving
\[\min_{\xi}\Big{\{}\ell(\xi):=\int_{\Theta}\|V_{\xi}(\theta)\cdot \nabla_{\theta}u_{\theta}-F[u_{\theta}]\|_{2}^{2}\,d\theta\Big{\}}\,.\]
The first-order optimality condition of this minimization problem is given by \(G(\theta)V_{\xi}(\theta)=p(\theta)\), where \(G(\theta)\) and \(p(\theta)\) are defined in (3.17). The objective function \(\ell(\xi)\) above shares the same minimizers as the following one:
\[\bar{\ell}(\xi):=\int_{\Theta}|G(\theta)V_{\xi}(\theta)-p(\theta)|^{2}\,d\theta. \tag{4.1}\]
In our numerical experiments, we use \(\bar{\ell}\) defined in (4.1) as the loss function, since \(p(\theta)\) already represents the projection of \(F[u_{\theta}]\) onto the tangent space \(T_{u_{\theta}}\mathcal{M}\), and this choice yields better performance empirically. Moreover, the minimum loss value of (4.1) is \(0\), in contrast to (3.10), where the minimum loss value is often unknown. In practice, as the dimensions of \(\theta\) and \(\Omega\) can be large, we have to approximate (4.1) using techniques such as Monte-Carlo integration.
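Before describing this Monte-Carlo approximation, we record for concreteness how a trained control field is deployed: solving a new IVP only requires integrating the parameter ODE (3.9). The following is a minimal PyTorch sketch of the forward-Euler variant analyzed in Corollary 3.7; all names are illustrative, and in the experiments below we use a fourth-order Runge-Kutta scheme rather than Euler.

```python
import torch

def integrate_parameters(V_xi, theta0, T, n_steps):
    """Forward-Euler integration of theta' = V_xi(theta) on [0, T].

    V_xi:   trained control field mapping a parameter vector to a vector
    theta0: initial parameter vector fitted to the initial condition g
    Returns the discrete parameter trajectory, shape (n_steps + 1, m).
    """
    h = T / n_steps
    theta = theta0.clone()
    path = [theta.clone()]
    with torch.no_grad():
        for _ in range(n_steps):
            theta = theta + h * V_xi(theta)  # Euler step of the ODE (3.9)
            path.append(theta.clone())
    return torch.stack(path)
```

Replacing the Euler step with an RK4 step changes only the loop body and improves the order in \(h\) appearing in (3.30).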
Monte-Carlo integration leads to the approximate forms
\[\tilde{G}(\theta)=\frac{1}{N_{x}}\sum_{i=1}^{N_{x}}\nabla_{\theta}u_{\theta}(x_{i})\nabla_{\theta}u_{\theta}(x_{i})^{\top},\quad\tilde{p}(\theta)=\frac{1}{N_{x}}\sum_{i=1}^{N_{x}}\nabla_{\theta}u_{\theta}(x_{i})F[u_{\theta}](x_{i}),\]
where \(x_{i}\), \(i=1,\ldots,N_{x}\), are sampled from \(\Omega\). By also drawing samples from \(\Theta\), we arrive at our empirical loss function defined by
\[\ell_{1}(\xi):=\frac{1}{N_{\theta}}\sum_{j=1}^{N_{\theta}}|\tilde{G}(\theta_{j})\cdot V_{\xi}(\theta_{j})-\tilde{p}(\theta_{j})|^{2}. \tag{4.2}\]
To improve the training of \(V_{\xi}\), we also augment the loss function \(\ell_{1}\) in (4.2) with an additional term following a data-driven approach. Specifically, we follow the methods in [10, 19] to generate multiple sample trajectories starting from randomly sampled initial values \(\{\theta_{0}^{(i)}:i\in[M]\}\) in \(\Theta\). For the \(i\)-th trajectory, a sequence of directions \(\{v_{j}^{(i)}:j=0,1,\ldots,N_{t}\}\) is solved from the linear systems \(\tilde{G}(\theta_{j}^{(i)})v_{j}^{(i)}=\tilde{p}(\theta_{j}^{(i)})\), and the discrete-time points on the trajectory are obtained by \(\theta_{j+1}^{(i)}=\theta_{j}^{(i)}+hv_{j}^{(i)}\) for \(j=0,1,\ldots,N_{t}-1\). We add the augmented loss term
\[\ell_{2}(\xi):=\frac{1}{N_{t}M}\sum_{i=1}^{M}\sum_{j=1}^{N_{t}}|V_{\xi}(\theta_{j}^{(i)})-v_{j}^{(i)}|^{2}. \tag{4.3}\]
Combining it with (4.2), we obtain our final loss function
\[\ell_{\text{total}}(\xi)=\ell_{1}(\xi)+\zeta\ell_{2}(\xi), \tag{4.4}\]
where \(\zeta\) is a weight parameter. In our experience, for linear parabolic PDEs using only \(\ell_{1}\) is sufficient to generate good results; for the nonlinear case, adding \(\ell_{2}\) substantially improves training results empirically, as the network parameters may move far away from those we sampled near the initial parameters.

### Experimental setting

To demonstrate the performance of the proposed method, we test it on three different PDEs: a 10-dimensional (10D) transport equation, a 10D heat equation, and a 2D Allen-Cahn equation. Both the transport equation and the heat equation are linear PDEs, while the Allen-Cahn equation is highly nonlinear. In fact, we also tested the 10D Allen-Cahn equation but only present the result of the 2D one here. This is because the true solution of the Allen-Cahn equation does not have a closed form, and we have to employ a classical finite difference method, which does not scale to the 10D case, to produce a reference solution for comparison. In contrast, we have closed-form solutions of the IVPs with the transport and heat equations, and hence we can use them as the true solutions for direct comparison. In our tests, we employ the following structure for our reduced-order model:
\[u_{\theta}(x)=\alpha(x)z_{L}(x,\theta) \tag{4.5}\]
for the heat equation and the Allen-Cahn equation, and the network structure
\[u_{\theta}(x)=z_{L}(\beta(x),\theta) \tag{4.6}\]
for the transport equation. In (4.5), \(\alpha(x)\) is a distance function of \(\partial\Omega\) chosen so that \(u_{\theta}\) satisfies the zero boundary condition, and in (4.6) \(\beta(x)\) is a function chosen to satisfy a periodic boundary condition as in [19]. This aligns with our choice of \(u_{\theta}\) in (4.5) and (4.6), as the IVPs with the heat and Allen-Cahn equations have zero boundary values whereas the IVP with the transport equation has periodic boundary values in our experiments.
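As an illustration, the two wrappers (4.5) and (4.6) can be sketched as follows, with the specific \(\alpha\) and \(\beta\) used in the experiments below; this is a minimal PyTorch sketch, `z_net` stands for the network \(z_{L}\) (defined next), and all names are illustrative.

```python
import torch

def alpha(x):
    """Distance-like factor enforcing zero boundary values on (0,1)^d."""
    return torch.prod(4.0 * (x - x ** 2), dim=-1)

def beta(x, b):
    """Periodic embedding of x with a trainable shift b; doubles the width."""
    return torch.cat([torch.cos(2 * torch.pi * (x - b)),
                      torch.sin(2 * torch.pi * (x - b))], dim=-1)

def u_zero_bc(x, z_net):
    """Model (4.5): zero boundary values (heat, Allen-Cahn)."""
    return alpha(x) * z_net(x)

def u_periodic(x, z_net, b):
    """Model (4.6): periodic boundary values (transport)."""
    return z_net(beta(x, b))
```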
In both (4.5) and (4.6), \(z_{L}\) is the neural network and is defined by
\[z_{L}=w_{L}z_{L-1},\quad z_{l}=z_{l-1}+\sigma(W_{l}z_{l-1}+b_{l}),\ l=1,\ldots,L-1 \tag{4.7}\]
and \(z_{0}=\sigma(W_{0}x+b_{0})\). Here \(\sigma\) is a user-chosen activation function (we use \(\tanh\) in our experiments), \(W_{l}\in\mathbb{R}^{d^{\prime}\times d^{\prime}}\) are the weight matrices and \(b_{l}\in\mathbb{R}^{d^{\prime}}\) are the bias vectors, and \(W_{0}\in\mathbb{R}^{d^{\prime}\times d}\) and \(w_{L}\in\mathbb{R}^{1\times d^{\prime}}\); all of these matrices and vectors make up the parameter vector \(\theta\). Networks such as (4.7) are often called _residual neural networks_ (ResNets), and they have been shown to perform better than basic feed-forward networks in function approximation [73]. The values of \(L\) and \(d^{\prime}\) in our experiments are shown in Table 1. They are selected manually to balance the depth \(L\) and width \(d^{\prime}\) so that \(u_{\theta}\) does not have too many neurons but still remains expressive. We do the same for the vector field \(V_{\xi}\), using the structure (4.7) with \(\tanh\) activation. The width and depth of this network are also collected in Table 1, together with the number of trajectories used for the loss term (4.3). For all of the experiments, we set the weight \(\zeta=0.1\) in (4.4) to reflect the scale difference of the two loss terms, and we use the standard ADAM optimizer with learning rate \(0.001\), \(\beta_{1}=0.9\), \(\beta_{2}=0.999\). We terminate the training process when the empirical loss \(\ell_{\text{total}}(\xi)<0.1\). Once \(V_{\xi}\) is learned, we use the 4th-order Runge-Kutta method with a step size of \(T/200\) (\(T\) is determined by the problem) to solve \(\theta_{t}\) from the ODE (3.9) in Algorithm 2 and compare the corresponding \(u_{\theta}\) with the reference solutions. All the implementations and experiments are performed using PyTorch in Python 3.9 on Windows 10, on a desktop computer with an AMD Ryzen 7 3800X 8-core processor at 3.90 GHz, 16 GB of system memory, and an Nvidia GeForce RTX 2080 Super GPU with 8 GB of graphics memory.

Finally, we note the process for designing \(\Theta\). In our experiments, the following process was used: we first considered a few different initial conditions that we were interested in testing for a particular IVP. These initial conditions were chosen either because they give us analytical solutions to compare against when we later test our model, or because they represent common test examples. Then we trained \(u_{\theta}\) to approximate those initial conditions, obtaining the corresponding \(\theta_{0}\in\mathbb{R}^{m}\) by training as in step 1 of Algorithm 2 with error tolerance \(10^{-4}\). In practice, a user may have some class of functions they wish to be able to test around, and locating the region of \(\Theta\) that corresponds to approximations of the desired initial conditions then becomes an important design step. As neural networks can be expressive and \(\Theta\) large, using too much of the parameter space becomes computationally difficult, while also including many examples we may not care about. Once found, we use the \(\theta_{0}\)'s to design \(\Theta\) as the union of radius-3 balls centered at each of them; we then randomly sample from these balls to form our training sets. It is worth noting that our method of creating \(\Theta\) is a potential challenge in using this method.
Designing \(\Theta\) to be large enough to remain relevant to the initial conditions one wants to be able to rapidly test, while not so large as to make approximating the control vector field too challenging, is a difficult balance to strike. Indeed, it is not hard to see that this problem is highly dependent upon the type of reduced-order model one wishes to use, as well as the PDE structure. As of now, we do not have a clear answer on the best way to approach designing \(\Theta\) for arbitrary neural networks. This poses an interesting avenue for future research, but we do not consider it here.

### Numerical results on transport equation

We first consider the initial value problem defined by a 10D transport equation with periodic boundary conditions as follows:
\[\begin{cases}\partial_{t}u(x,t)=-\mathbf{1}\cdot\nabla_{x}u(x,t),&\quad\forall\,x\in\Omega,\ t\in[0,T],\\ u(x,0)=g(x),&\quad\forall\,x\in\bar{\Omega},\end{cases} \tag{4.8}\]
where \(\Omega=(0,1)^{10}\), \(T=1\), and \(\mathbf{1}\) is the vector whose components are all ones. This IVP has the true solution \(u^{*}(x,t)=g(x-t\mathbf{1})\), with \(g\) extended periodically. To approximate the solution operator of the IVP (4.8), we use (4.6) as the reduced-order model \(u_{\theta}\) and define \(\beta(x)=(\cos(2\pi(x-b)),\sin(2\pi(x-b)))^{\top}\), where \(b\in\mathbb{R}^{10}\) is a trainable parameter and \(\sin\) and \(\cos\) act component-wise on \(x\). This means that the first hidden layer uses \(W_{0}\in\mathbb{R}^{12\times 20}\). Then we train the neural control vector field \(V_{\xi}\) by minimizing (4.4) with the number of sampled \(\theta\) in \(\Theta\) shown in Table 1. After the control \(V_{\xi}\) is obtained, we test the performance of the neural control of the parameter \(\theta\) of \(u_{\theta}\) using \(V_{\xi}\) on a variety of initial values \(g\) for the IVP (4.8). Due to space limitations, we only plot the results of the solutions for perturbations of the following three initial values:
\[\begin{split} g_{1}(x)&=\sin(2\pi x_{1})\sin(2\pi x_{2})\prod_{i=3}^{10}\sin(\pi x_{i}),\\ g_{2}(x)&=\sin(2\pi x_{1})\cos(2\pi x_{2})\prod_{i=3}^{10}\sin(\pi x_{i}),\\ g_{3}(x)&=\sin(4\pi x_{1})\sin(2\pi x_{2})\prod_{i=3}^{10}\sin(\pi x_{i}).\end{split} \tag{4.9}\]
We emphasize that the \(\theta_{0}\)'s corresponding to these initial values are not specifically used in the training process. In order to evaluate the generalization ability of the proposed approach, we approximate solutions to the IVP using initial values \(\theta_{0}+\delta\), where \(\delta\in\mathbb{R}^{m}\) with \(|\delta|=1\) is determined randomly. These approximate solutions are shown in Figure 1. For \(g_{1}\), we plot the corresponding true solution \(u^{*}(\cdot,t)\) using the perturbation as the initial value, the approximate solution \(u_{\theta_{t}}\) obtained by Algorithm 2, and their pointwise absolute difference \(|u_{\theta_{t}}(x)-u^{*}(x,t)|\) in rows 1 to 3 of Figure 1, respectively, for \(t=0,0.15,0.5,0.85,1\). The plots for \(g_{2}\) and \(g_{3}\) are shown in rows 4-6 and 7-9 of Figure 1, respectively. We only show the cross sections of the \(x_{1}\)-\(x_{2}\) plane for these plots as they have the largest variations in the solution.
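The pointwise comparisons above use the exact solution \(u^{*}(x,t)=g(x-t\mathbf{1})\), with \(g\) extended periodically to the unit cube. A minimal sketch of this reference evaluation (assuming a batched \(g\); all names illustrative):

```python
import torch

def transport_truth(g, x, t):
    """Exact solution u*(x,t) = g(x - t*1) of (4.8), with periodic wrap-around."""
    return g(torch.remainder(x - t, 1.0))

def pointwise_error(u_theta, g, x, t):
    """|u_theta(x) - u*(x,t)| on a batch of points x, rows of an (N, 10) tensor."""
    return (u_theta(x) - transport_truth(g, x, t)).abs()
```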
From Figure 1, we can see that the reduced-order model \(u_{\theta_{t}}\), with \(\theta_{t}\) controlled by the trained vector field \(V_{\xi}\), closely approximates the true solution \(u^{*}(\cdot,t)\) with low absolute errors (note that the color scale of the error differs from that of \(u^{*}(\cdot,t)\) and \(u_{\theta_{t}}(\cdot)\)). The plot of \(\|u_{\theta}-u^{*}(\cdot,t)\|_{2}/\|u^{*}(\cdot,t)\|_{2}\) versus \(t\) in Figure 2 further confirms the small approximation error of the proposed method.

### Heat equation

Next we consider an initial value problem with the heat equation in 10D:
\[\begin{cases}\partial_{t}u(x,t)=\Delta u(x,t),&\quad\forall\,x\in\Omega,t\in[0,T],\\ u(x,0)=g(x),&\quad\forall\,x\in\bar{\Omega},\end{cases} \tag{4.10}\]
where \(\Omega=(0,1)^{10}\) and the boundary value is \(u(x,t)=0\) for all \(x\in\partial\Omega\) and \(t\in[0,T]\). As most of the initial conditions we consider have rapid evolution in a short time, we use \(T=0.015\) in this test. For the neural network we use (4.5), with \(\alpha(x)=\Pi_{i=1}^{10}4(x_{i}-x_{i}^{2})\).

\begin{table} \begin{tabular}{c c c c c c} \hline \hline **Problem** & **Dim.** & \(u_{\theta}\) **Width/Depth** & \(V_{\xi}\) **Width/Depth** & \(M\) & \(N_{\theta}\) \\ \hline Heat Equation & 10 & 12/4 & 1,500/4 & 2,000 & 600,000 \\ Transport Equation & 10 & 12/4 & 500/5 & 2,000 & 450,000 \\ Allen-Cahn Equation & 2 & 6/3 & 800/3 & 1,600 & 200,000 \\ \hline \hline \end{tabular} \end{table} Table 1: The considered dimensions of the three toy problems, as well as the network dimensions and the numbers of training trajectories/samples. Here \(M\) is the number of trajectories used from \(\Theta\) and \(N_{\theta}\) is the total number of samples from \(\Theta\).

Figure 1: (Transport equation). Comparison between the true solution \(u^{*}(\cdot,t)\), the approximation \(u_{\theta_{t}}(\cdot)\) and their pointwise absolute difference \(|u_{\theta_{t}}(x)-u^{*}(x,t)|\) for times \(t=0,0.15,0.5,0.85,1\) for IVPs with initial values \(g_{1}\) (rows 1–3), \(g_{2}\) (rows 4–6) and \(g_{3}\) (rows 7–9), which are given in (4.9).

Again due to space limitations, we only demonstrate the performance on three initial conditions with analytical solutions whose corresponding \(\theta_{0}\) parameters were not contained in the training data. These initial conditions are given by
\[\begin{split}& g_{1}(x)=\sin(2\pi x_{1})\sin(2\pi x_{2})\Pi_{i=3}^{10}\sin(\pi x_{i})+0.5\Pi_{i=1}^{10}\sin(\pi x_{i}),\\ & g_{2}(x)=\sin(2\pi x_{1})\sin(2\pi x_{2})\Pi_{i=3}^{10}\sin(\pi x_{i}),\\ & g_{3}(x)=\sin(2\pi x_{1})\Pi_{i=2}^{10}\sin(\pi x_{i}).\end{split} \tag{4.11}\]
We chose these three initial conditions because they yield solutions with closed-form expressions as follows:
\[\begin{split}& u_{1}^{*}(x,t)=e^{-16\pi^{2}t}\sin(2\pi x_{1})\sin(2\pi x_{2})\Pi_{i=3}^{10}\sin(\pi x_{i})+0.5e^{-10\pi^{2}t}\Pi_{i=1}^{10}\sin(\pi x_{i}),\\ & u_{2}^{*}(x,t)=e^{-16\pi^{2}t}\sin(2\pi x_{1})\sin(2\pi x_{2})\Pi_{i=3}^{10}\sin(\pi x_{i}),\\ & u_{3}^{*}(x,t)=e^{-13\pi^{2}t}\sin(2\pi x_{1})\Pi_{i=2}^{10}\sin(\pi x_{i}).\end{split} \tag{4.12}\]
The short time scale is used because for larger \(T\) the solutions decay towards \(0\). From Figures 3 and 4, we observe that the proposed method can quickly approximate the true solutions with a consistently low relative error of up to \(3\%\).
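Since \(\Omega\) is ten-dimensional, the relative \(L^{2}\) errors reported here can be estimated by Monte-Carlo quadrature. The following minimal sketch uses the closed-form solution \(u_{2}^{*}\) from (4.12) as an example; all names are illustrative.

```python
import torch

def relative_l2(u_approx, u_true, d=10, n_samples=200_000):
    """Monte-Carlo estimate of ||u_approx - u_true||_2 / ||u_true||_2 on (0,1)^d."""
    x = torch.rand(n_samples, d)            # uniform samples from Omega
    num = torch.mean((u_approx(x) - u_true(x)) ** 2)
    den = torch.mean(u_true(x) ** 2)
    return torch.sqrt(num / den)

def u2_star(t):
    """Closed-form heat solution u_2^* of (4.12) at time t, as a function of x."""
    def u(x):
        pi = torch.pi
        decay = torch.exp(torch.tensor(-16.0 * pi ** 2 * t))
        return (decay * torch.sin(2 * pi * x[:, 0]) * torch.sin(2 * pi * x[:, 1])
                * torch.prod(torch.sin(pi * x[:, 2:]), dim=1))
    return u
```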
### Allen-Cahn equation

In this test, we consider the IVP with the nonlinear Allen-Cahn equation given by
\[\begin{cases}\partial_{t}u(x,t)=\epsilon\Delta u(x,t)+\frac{3}{2}\left(u(x,t)-u(x,t)^{3}\right),&\forall\,x\in\Omega,t\in(0,T],\\ u(x,0)=g(x),&\forall\,x\in\bar{\Omega},\end{cases} \tag{4.13}\]
where \(\Omega=(0,1)^{2}\), \(\epsilon=0.0001\), and the boundary value is \(u(x,t)=0\) for all \(x\in\partial\Omega\) and \(t\in[0,T]\). As the Allen-Cahn PDE does not have an analytical solution to compare against, we resort to a classical finite difference method to generate a reference solution for comparison, which is feasible in the 2D case only, even though our method can be applied in higher dimensions. In this test, we again use (4.5), with \(\alpha(x)=4^{2}(x_{1}-x_{1}^{2})(x_{2}-x_{2}^{2})\), as our neural network. For comparison we use the following initial conditions:
\[\begin{split}& g_{1}(x)=0.75\sin(3\pi x_{1})\sin(\pi x_{2}),\\ & g_{2}(x)=(x_{1}-x_{1}^{2})\cos(2\pi x_{1})\sin(\pi x_{2}),\\ & g_{3}(x)=(x_{1}-x_{1}^{2})\cos(2\pi x_{1})\sin(2\pi x_{2}).\end{split} \tag{4.14}\]

Figure 2: (Transport equation). Comparison of the relative error \(\|u^{*}(\cdot,t)-u_{\theta_{t}}(\cdot)\|_{2}/\|u^{*}(\cdot,t)\|_{2}\) over time \(t\) for IVPs with initial values perturbed near \(g_{1}\), \(g_{2}\) and \(g_{3}\), which are given in (4.9). Note that we excluded these initial values during the training of \(V_{\xi}\) to test the generalization property of the proposed method.

The results of the proposed method at times \(t=0,0.15,0.3,0.45,0.6\) are shown in Figure 6. Again, the plots for \(g_{1}\) are in rows 1-3, while the plots for \(g_{2}\) and \(g_{3}\) are shown in rows 4-6 and 7-9 of Figure 6, respectively. In Figure 5 we compare the relative error \(\|u_{\theta_{t}}-u^{*}(\cdot,t)\|_{2}/\|u^{*}(\cdot,t)\|_{2}\) against the time \(t\). We again observe promising approximation accuracy of the proposed method in these figures. In Figure 5 we also see that the error for the initial value \(g_{1}\) increases faster than for the other solutions, because the solution of the Allen-Cahn equation for this initial value has rapidly increasing derivatives as time progresses, which poses a challenge to all numerical methods, including ours, in solving Allen-Cahn equations in general. Specifically, such large derivatives force the parameter \(\theta\) of the neural network to blow up quickly, and hence the trajectory \(\theta_{t}\) may rapidly escape from the prescribed \(\Theta\) over which we trained the vector field \(V_{\xi}\). This is a challenge that remains to be overcome, e.g., using more adaptive training methods and sampling strategies.

## 5 Variations and Generalizations

In this section, we discuss potential variations and generalizations of the proposed approach. Specifically, one can modify the proposed method so that it applies to general time-dependent evolution PDEs. IVPs with boundary conditions are also addressed.

Applications to general time-dependent PDEs. Our approach can be readily applied to a large variety of time-dependent PDEs. The key reason is that these PDEs can be converted to the exact form of (3.5), for which our method is designed. To avoid overloading the bracket notation, we temporarily use \(F(u)\) and \(F(t,u)\) to represent \(F[u]\) and \(F_{t}[u]\) (a differential operator that explicitly depends on time \(t\)) in this paragraph.
Then we may convert any non-autonomous evolution PDE into an autonomous one:
\[\partial_{t}u=F(t,u)\quad\Longleftrightarrow\quad\partial_{t}\tilde{u}=\tilde{F}(\tilde{u}),\ \ \text{where}\ \ \tilde{u}:=[t;u],\ \ \tilde{F}(\tilde{u}):=\big{[}1;F(t,u)\big{]}, \tag{5.1}\]
where \([\,\cdot\,;\,\cdot\,]\) means stacking the two arguments vertically to form a single one. Therefore, our approach can be easily modified to tackle non-autonomous PDEs due to (5.1). Similarly, we can consider equations with higher-order time derivatives, such as wave equations, by converting them to the standard evolution PDE with a first-order time derivative:
\[\partial_{tt}u=F(u)\quad\Longleftrightarrow\quad\partial_{t}\tilde{u}=\tilde{F}(\tilde{u}),\ \ \text{where}\ \ \tilde{u}:=[u;v],\ v:=\partial_{t}u,\ \ \tilde{F}(\tilde{u}):=\big{[}v;F(u)\big{]}. \tag{5.2}\]
History-dependent PDEs can also be considered: denote by \(H_{u}(t):=\{u(\cdot,s)\,|\,0\leq s\leq t\}\) the trajectory recording the path of \(u\) up to time \(t\), and let \(F\) be a nonlinear operator on the path \(H_{u}\); then we can treat \(H_{u}(t)\) as an auxiliary variable and convert the problem \(\partial_{t}u=F[H_{u}]\) to an autonomous evolution PDE for \([u;H_{u}]\).

Evolution PDEs with boundary conditions. In many real-world applications, an IVP may also be associated with boundary conditions. We can extend our method to handle this situation. Let \(\phi\) denote any boundary value function for the IVP. Then, in addition to the evolution PDE and the initial value, the solution \(u\) must also satisfy the boundary condition \(u(x,t)=\phi(x,t)\) for all \(x\in\partial\Omega\) and \(t\in[0,T]\). To cope with this requirement, we propose to parameterize the solution of the IVP as \(u_{\theta}(x)=\varphi_{\eta}(x)+\alpha(x)\psi_{\zeta}(x)\) with parameter \(\theta=(\eta,\zeta)\), where \(\varphi_{\eta}\) and \(\psi_{\zeta}\) are two reduced-order models with parameters \(\eta\) and \(\zeta\), respectively, and \(\alpha(x)\) is a prescribed smooth function such that \(\alpha(x)>0\) if \(x\in\Omega\) and \(\alpha(x)=0\) if \(x\in\partial\Omega\) (such an \(\alpha\) is easy to construct in practice). The purpose of \(\varphi_{\eta}\) is to fit the given boundary value \(\phi\), which is not interfered with by \(\alpha\psi_{\zeta}\) because the latter vanishes on the boundary \(\partial\Omega\). Following the approach discussed in Section 2, we can again form a control vector field \(V_{\xi}(\theta)=V_{\xi}(\eta,\zeta)\). Here the parameter \(\eta_{t}\) plays two roles: it enforces \(\varphi_{\eta_{t}}\) to fit the boundary condition \(\phi(\cdot,t)\) on \(\partial\Omega\), and it serves together with \(\zeta_{t}\) as the control input to the vector field \(V_{\xi}\), so that \(u_{\theta_{t}}\) minimizes the projection error given in (3.10).

## 6 Conclusion and Future Work

We have presented a novel strategy for solving linear and nonlinear evolution PDEs numerically. Specifically, we propose to use deep neural networks as nonlinear reduced-order models to represent PDE solutions, and to learn a control vector field that steers the network parameters so that the induced time-evolving neural network approximates the solution accurately. The proposed method allows a user to quickly solve an evolution PDE with different initial values without the need to retrain the neural network. Error estimates of the proposed approach are also provided. We implemented the nonlinear reduced-order models as generic deep networks, which yield promising results.
We expect that the accuracy and effectiveness can be further improved by incorporating structural information and prior knowledge about the PDE and its solutions into the design of these networks. Training of the control vector fields can also be made more efficient by integrating informative sample trajectories of \(\theta_{t}\). These improvements can potentially make the proposed method very effective in solving evolution PDEs in specific application domains.
2310.20184
Graph Theoretic Approach Identifies Critical Thresholds at which Galaxy Filamentary Structures Form
Numerical simulations and observations show that galaxies are not uniformly distributed. In cosmology, the largest known structures in the universe are galaxy filaments formed from the hierarchical clustering of galaxies due to gravitational forces. These structures consist of walls and bridges that connect clusters. Here, we use graph theory to model the structures as Euclidean networks in three-dimensional space. Using percolation theory, cosmological graphs are reduced based on the valency of nodes to reveal the inner, most robust structural formation. By constraining the network, we then find thresholds for physical features, such as length-scale and density, at which galaxy filament clusters are identified.
Sophia-Gisela Strey, Alexander Castronovo, Kailash Elumalai
2023-10-31T05:09:18Z
http://arxiv.org/abs/2310.20184v3
# Graph Theoretic Approach Identifies Critical Thresholds at which Galaxy Filamentary Structures Form

###### Abstract

Numerical simulations and observations show that galaxies are not uniformly distributed. In cosmology, the largest known structures in the universe are galaxy filaments formed from the hierarchical clustering of galaxies due to gravitational forces. These structures consist of "walls" and "bridges" that connect clusters. Here, we use graph theory to model the structures as Euclidean networks in three-dimensional space. Using percolation theory, cosmological graphs are reduced based on the valency of nodes to reveal the inner, most robust structural formation. By constraining the network, we then find thresholds for physical features, such as length-scale and density, at which galaxy filament clusters are identified.

## I Introduction

Large-scale filamentary structures are thought to have evolved through gravitational instabilities and density fluctuations. There are two competing theories explaining the formation of galaxy clusters [1]: the Zeldovich theory describes primordial density fluctuations in the early universe causing the condensation of gas. Alternatively, the rival theory of hierarchical clustering suggests that merging halos explain the resulting Gaussian density fields and voids. In 1984, researchers analyzed the Center for Astrophysics (CfA) I redshift catalog [2] using percolation analysis. Their findings indicated that these large-scale distributions were consistent with network structures [3]. The next year, further work introduced the idea of using _minimum spanning trees_ (MSTs) [4], a concept borrowed from graph theory, providing a mathematical framework for representing relationships between objects. These relationships are simplified into points (_nodes_) and connections (_edges_), weighted according to a numerical measure used to characterize each relationship. A set of nodes and edges makes up a graph. When a graph contains no circuits or closed loops, it is a tree. If a tree contains all of the nodes in a graph (rather than a subset), it is called a spanning tree, since it spans the entire dataset [5]. A minimum spanning tree takes this one step further by minimizing the total edge weight of the tree. Applied to cosmology, graph theoretic analyses [4] identify the dominant pattern of connectedness in a set of points where each point represents a galaxy. The resulting skeletal pattern can be analyzed in a quantitative manner, which differs from other criteria previously used to characterize the geometry of galaxy clustering. This quantitative measure, in turn, serves as an objective way to identify filaments. Given the statistical likelihood that all edges have different weights, there is a unique MST in any point dataset, since there is only one shortest-path solution [4]. While previous work has been valuable in advancing our understanding of the filamentary structure between clusters, the relationship between the characteristic thread-like geometry of filaments and the conditions of the gravitational collapse of matter into those filaments is not yet known. To investigate this question, we look for the critical distance at which galaxy clusters begin to form a filamentary structure over time.

## II Approach

If we define the filamentary structure of a galaxy cluster using its minimum spanning tree, we can say that it remains _functional_ even after undergoing some operation, as long as the tree is preserved and retains its tree-like properties.
The failure of key points and bonds in a network will cause irreparable losses and changes in the structure of the system. By identifying these key points, we can not only evaluate the stability of the system but also determine which substructures are most critical, i.e., the galaxies most vital to the preserved functionality of the filament. In other words, the most critical points are galaxies without which the filamentary structure breaks down. To identify filament _dynamics_, we have to look at time. We can do this by looking at different radial distances from Earth, showing us different points in cosmological time. By looking at how the robustness of galaxy filaments evolves and changes under different constraints, we can try to find under what circumstances galaxy filaments begin to form. Recently, with Data Release 4 of the Galaxy and Mass Assembly (GAMA) spectroscopic survey in 2022, many cosmological analyses have been carried out due to the inclusion of redshift estimation measurements on galaxy candidates [6]. Cosmological redshift is a useful measure because it describes the Doppler shift of light as it travels through space. Using Hubble's Law and the Cosmology.jl Julia package, we can find the radial distance of galaxies. Therefore, a critical next step is to characterize this structure (Figure 1). Relating to cosmology and galaxy filaments, graphs can tell us the underlying structure of the filaments and their associated features (walls, bridges, clusters, and voids) [4]. The robustness of a graph can be assessed by determining when a point dataset can no longer be represented by the original minimum spanning tree [7]. The current study proposes to assess the robustness of the galaxy cluster by finding the _percolation threshold_, a measure adapted from statistical physics. The aim of percolation analysis is to study the emergence of clusters, or any ordered structure, from a disordered one. This is done by varying the occupation probabilities and observing when a transition occurs. As the probabilities increase, the size of the largest cluster grows until it spans the entire system: a fully percolated system [8]. Using distance as a proxy for occupation probability, we define the probability of connecting two sites by a power law as a function of their distance (\(y=ax^{k}\), where the exponent \(k\) controls the strength of the distance dependence). We implemented all algorithms and analysis in the Julia programming language. Julia is a just-in-time compiled language optimized for high-performance computing. As such, it has emerged as one of the best choices for dynamic and graph theoretical modeling of data because of its speed (as fast as C) as well as its support from the community in developing state-of-the-art computational tools [9]. Specifically, we used the following Julia packages for our analysis: SimpleWeightedGraphs.jl and Cosmology.jl. In this study, we assume scale invariance in galaxy clustering [10]. Analysis of the CfA, Perseus-Pisces, SSRS, IRAS, etc., shows that galaxy structures are "highly irregular and self-similar". However, due to a lack of available data, the evidence of scale invariance becomes weaker at scales greater than 150 Mpc.

## III Data

The GAMA Data Release 4, published in March 2022, provides over 300,000 galaxy spectroscopic redshift samples over 250 deg\({}^{2}\) of sky. These samples were taken across five sky regions with bounds as shown in Table 1. The G02 sky region is not used in this study because it is incomplete [6].
This survey includes the galaxy stellar mass function and its sub-division by morphological type. This function is defined as the number density of galaxies in a selected mass interval. Redshift measurements have exceptionally high completeness (\(\geq 95\%\)) and include many low-redshift populations (\(z\leq 0.25\)), as seen in Figure 1A [6]. Completeness refers to whether the dataset contains all galaxies within the relevant regions of space, considering sensitivity, resolution, and error. Figure 1A shows a scatter plot of the distribution of galaxies in each region. The \(x\), \(y\), and \(z\) coordinates were calculated as illustrated in Figure 1A and are in units of Mpc.

\begin{table} \begin{tabular}{c|c|c} Region Label & RA Range & Dec Range \\ G02 & 30.2 to 38.8 & -10.25 to 3.72 \\ G09 & 129.0 to 141.0 & -2.0 to 3.0 \\ G12 & 174.0 to 186.0 & -3.0 to 2.0 \\ G15 & 211.5 to 223.5 & -2.0 to 3.0 \\ G23 & 338.1 to 351.9 & -35.0 to -30.0 \\ \end{tabular} \end{table} Table 1: GAMA regions and their right ascension and declination bounds.

## IV Methods

Our study looks at various subsections of the data described in the previous section. The methods described were applied in the same way to each subsection of the data. The Methods section follows the order of the Julia scripts that can be found in our publicly available repository.

### Initial Data Processing

We treat the GAMA dataset as a three-dimensional point dataset using right ascension and declination measures. This initial processing converts spherical coordinates into Cartesian coordinates in units of Mpc. Using the Cosmology.jl package, the comoving radial distance is calculated for each galaxy based on the dimensionless Hubble constant (\(H_{0}=0.7\)), matter density (\(\Omega_{M}=0.3\)), and radiation density (\(\Omega_{R}=0\)) parameters of the cosmology model, along with their cosmological age. A custom struct, 'GalaxyDistance', is defined to store the distance information between pairs of galaxies and their indices. This step also filters out null values.

### Filamentary Structure Identification

Using 'GalaxyDistance' as its input, we create a weighted graph. From this graph, we construct an MST using Kruskal's algorithm. For a weighted graph \(G=(V,E,w)\), where \(V\) is the set of nodes, \(E\) is the set of edges, and \(w\) is the weight of each edge connecting node \(u\) to node \(v\), the MST is defined as the subset of edges, \(T\), whose weight
\[w(T)=\sum_{(u,v)\in T}w(u,v)\]
is minimized. Kruskal's algorithm sorts the edges of the graph in increasing order of their weight and iterates through the sorted edges. For each edge, it is checked whether adding it to the current MST would create a cycle. The process continues until \(|V|-1\) edges are included in the MST. This algorithm has a time complexity of \(O(E\log E)\). Finally, all nodes of degree one connected to nodes with degrees exceeding two are removed along with their offshooting branches. This is how we define the _functional_ filamentary structure. The same struct is redefined using only edges from the MST, and edges are sorted by distance. Note that for the purposes of this study, we will assume all edges have different lengths because edge weights are calculated with 16 digits of precision (limited by a 64-bit floating point value), so it is highly unlikely that two edges will have the same weight.
However, if two edges do have the same weight, this is most likely not going to significantly affect the results of the analysis, since the resulting trees will be similar in structure.

Figure 1: Approach to the identification and analysis of galaxy filaments. (A) GAMA dataset sections G15, G02, G12, G23, G09, described in Table 1, represented in 3-dimensional space. (B) The identification of filamentary structure by constructing and reducing a minimum spanning tree of the point dataset described in (A). (C) The percolation analysis performed on the identified filamentary structure by iterating the critical length scale to form bonds.

### Percolation

We implement the Newman-Ziff Monte Carlo algorithm for bond percolation. Unlike traditional percolation algorithms, where cluster growth is typically simulated, this algorithm utilizes the fact that the probability of a bond being part of the largest cluster in a percolating system is a continuous function of the site occupation probability. Simulations can be computationally intensive, especially for a large system. For this reason, the Newman-Ziff method is preferable. The method works by starting with an empty graph and sequentially adding bonds with a given occupation probability until all points are added to the cluster. In a cosmological system, distance can be used as a proxy for occupation probability. In this case, the Newman-Ziff method can be applied by constructing a curve of the probability that a galaxy belongs to the largest connected component as a function of distance. The threshold or critical point at which percolation occurs can be determined by finding the point on this curve where the probability of belonging to the largest cluster/component changes from zero to some nonzero value [11]. The percolation structure that we define represents a model of percolation in a grid. The grid contains 'true' (open) and 'false' (closed) cells. The 'QuickUnion' structures ('wuf1' and 'wuf2') are used to keep track of connectivity in the grid. The results have a 95% confidence interval.

## V Results

### Younger galaxies have a larger critical length-scale.

The critical length-scale of older galaxies (\(\geq 11\) Gyr) is \(\approx 0.7\) Mpc (Figure 2), while that of younger galaxies (\(<11\) Gyr) is \(\approx 1.2\) Mpc (Figure 3). The threshold of older galaxies lies between 0.2 and 1.3 Mpc, and the threshold of younger galaxies lies between 0.5 and 1.7 Mpc. Thus, older galaxies have a sharper transition than younger galaxies. Note that the figures are logarithmically scaled.

### Two critical points in Total Sections.

As shown in Figure 4, the percolation threshold of the total section of G15 occurs at two points: 0.8 Mpc and 2.5 Mpc. Having two percolation thresholds means that there are two distinct scales at which clustering properties emerge in the galaxy dataset. The first has a sharper transition than the second. This can be seen across all remaining sections of the GAMA dataset (excluding G02 for being incomplete). Additionally, there is little to no difference between the percolation thresholds or curves of younger and older galaxies when percolating over only the constructed galaxy filament. Once again, this can be seen across all remaining sections of the GAMA dataset (excluding G02 for being incomplete). Figure 2, showing results for G15, is representative; percolation curves for the other sections (G12, G23, G09) can be found in our code repository.
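Our implementation is in Julia (see the repository linked below); for illustration, the core of the percolation sweep, which adds MST bonds in order of increasing length and tracks the largest connected component with a union-find structure in the spirit of Newman-Ziff, can be sketched as follows (Python here for brevity; all names are illustrative).

```python
import numpy as np

class UnionFind:
    """Weighted quick-union with path halving, tracking the largest component."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.largest = 1

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return
        if self.size[ri] < self.size[rj]:
            ri, rj = rj, ri
        self.parent[rj] = ri
        self.size[ri] += self.size[rj]
        self.largest = max(self.largest, self.size[ri])

def percolation_curve(n_galaxies, edges):
    """edges: iterable of (length_Mpc, i, j) bonds, e.g. the MST edges.
    Returns the bond lengths and the fraction of galaxies in the largest
    connected component as bonds are added in increasing length."""
    uf = UnionFind(n_galaxies)
    lengths, frac = [], []
    for d, i, j in sorted(edges):
        uf.union(i, j)
        lengths.append(d)
        frac.append(uf.largest / n_galaxies)
    return np.array(lengths), np.array(frac)
```

The critical length-scale is then read off as the distance at which the largest-component fraction jumps from near zero to an order-one value.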
## VI Discussion

### Dimensions of a Filament

Our results, showing two distinct percolation thresholds, can be interpreted as two length-scale transitions of a shape defined by two-dimensional measurements. Generalizing filament shapes to a cylinder, these transitions could represent the diameter and length of its geometry. However, the observation of two distinct percolation thresholds prompts an alternative interpretation related to the age diversity of these thresholds. Rather than attributing these thresholds to a continuous filament with specific dimensions, we propose that they correspond to the evolutionary stages of galaxy filaments based on age. The sharper transition associated with the first threshold suggests the presence of dominant clustering structures at a smaller scale, likely indicative of characteristics exhibited by younger galaxies. Conversely, the second threshold, marked by a less abrupt transition, might be indicative of a larger-scale clustering phenomenon, potentially corresponding to older galaxies. This is supported by the interval of each transition and its correspondence to the distance thresholds found in the entire dataset. These transitions, signaling shifts between disconnected and connected states, unfold in distinct stages, mirroring the evolution of galaxies over time.

### Difference Between Entire Dataset versus Younger and Older Subsections

Our results seem to imply that different age groups of galaxies undergo different evolutionary processes. The GAMA dataset contains a mix of galaxies with varying properties. The existence of two thresholds solely within the entire dataset suggests that both older and younger galaxies exhibit a higher degree of homogeneity within their respective groups. This challenges the notion of a continuous filament with specific dimensions and instead supports the idea that these thresholds delineate transitions between age-dependent characteristics. The distinct thresholds may reflect the intricate interplay of various factors, such as matter distribution, density, and structural complexities, which evolve differently in galaxies of varying ages.

Figure 2: Percolation analysis on older galaxies with logarithmic scaling (G15). Length-scale percolation threshold = 0.7 Mpc.

Figure 3: Percolation analysis on younger galaxies with logarithmic scaling (G15). Length-scale percolation threshold = 1.2 Mpc.

## Code

All code utilized in this manuscript may be found at: [https://github.com/tstrvey/oastr2-leftovers](https://github.com/tstrvey/oastr2-leftovers).

## Acknowledgments

GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalog is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programs, including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT, and ASKAP, providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is [http://www.gama-survey.org](http://www.gama-survey.org). We thank Hillel Sanhedrai, Rostam Razban, and Kalee Tock for providing feedback on the manuscript and Annabel Driussi for her assistance with producing figures.

Figure 4: Percolation analysis on all galaxies (\(\frac{1}{4}\) of the total dataset, given computation constraints).
Figure 5: Histogram of cosmological times of the whole dataset in Gyr.
2309.12055
Genetic Composition of Supercritical Branching Populations under Power Law Mutation Rates
We aim at understanding the evolution of the genetic composition of cancer cell populations. To this aim, we consider a branching individual based model representing a cell population where cells divide, die and mutate along the edges of a finite directed graph $(V,E)$. The process starts with only one cell of trait $0$. Following typical parameter values in cancer cell populations we study the model under \textit{large population and power law mutation rates limit}, in the sense that the mutation probabilities are parameterized by negative powers of $n$ and the typical sizes of the population of interest are positive powers of $n$. Under \textit{non-increasing growth rate condition}, we describe the time evolution of the first-order asymptotics of the size of each subpopulation on the $log(n)$ time scale, as well as in the random time scale at which the initial population, resp. the total population, reaches the size $n^{t}$. In particular, such results allow for the perfect characterization of evolutionary pathways. Without imposing any conditions on the growth rates, we describe the time evolution of the order of magnitude of each subpopulation, whose asymptotic limits are positive non-decreasing piecewise linear continuous functions.
Vianney Brouard
2023-09-21T13:22:55Z
http://arxiv.org/abs/2309.12055v2
# Genetic Composition of Supercritical Branching Populations under Power Law Mutation Rates ###### Abstract We aim at understanding the evolution of the genetic composition of cancer cell populations. To this aim, we consider a branching individual based model representing a cell population where cells divide, die and mutate along the edges of a finite directed graph \((V,E)\). The process starts with only one cell of trait 0. Following typical parameter values in cancer cell populations, we study the model under the _large population and power law mutation rates limit_, in the sense that the mutation probabilities are parameterized by negative powers of \(n\) and the typical sizes of the populations of interest are positive powers of \(n\). Under a _non-increasing growth rate condition_ (namely that the growth rate of any sub-population is smaller than the growth rate of trait 0), we describe the time evolution of the first-order asymptotics of the size of each sub-population on the \(\log(n)\) time scale, as well as on the random time scale at which the initial population, resp. the total population, reaches the size \(n^{t}\). In particular, such results allow us to characterize which mutational paths along the edges of the graph actually contribute to the size order of the sub-populations. Without any condition on the growth rates, we describe the time evolution of the orders of magnitude of each sub-population. Adapting techniques from [13], we show that these converge to positive deterministic non-decreasing piecewise linear continuous functions, whose slopes are given by an algorithm. Keywords: cancer evolution, multitype branching processes, finite graph, long time behavior, power law mutation rates, population genetics. ## 1 Introduction and presentation of the model Consider a population of cells characterized by a phenotypic trait, where the trait space \(V\) is finite and discrete. For all \(v\in V\) denote by \((Z_{v}(t))_{t\in\mathbb{R}^{+}}\) the number of cells of trait \(v\) at time \(t\) in the population, and by \(\big{(}\mathcal{Z}(t):=(Z_{v}(t))_{v\in V}\big{)}_{t\in\mathbb{R}^{+}}\) the global process. Assume that \(0\in V\) and \[\forall v\in V,Z_{v}(0)=\mathbb{1}_{\{v=0\}},\text{ almost surely}.\] Cells with trait 0 are called _wild-type cells_, and all cells with trait \(v\in V\backslash\{0\}\) are called _mutant cells_. The population dynamics follow a continuous time branching process on \(\mathbb{N}_{0}^{V}\). More precisely, cells divide to give birth to two daughter cells and die with rates depending only on their phenotypic trait. The birth, death and growth rate functions are respectively \[\alpha:V\longrightarrow\mathbb{R}^{+},\] \[\beta:V\longrightarrow\mathbb{R}^{+},\] \[\lambda:=\alpha-\beta.\] During a division event of a cell of trait \(v\in V\), independent mutations of the two daughter cells are considered. The mutation landscape across traits is encoded via a graph structure \((V,E)\) on the trait space. Here \(E\subset V\times V\) is a set of ordered pairs over \(V\) satisfying \((v,v)\notin E\) for all \(v\in V\). This means that \((V,E)\) is a finite oriented graph without self-loops. Mutation from a trait \(v\) to a trait \(u\) is possible if and only if \((v,u)\in E\).
Let \(\mu:E\longrightarrow[0,1]\) be a mutation kernel satisfying \[\forall v\in V,\overline{\mu}(v):=\sum_{u\in V:(v,u)\in E}\mu(v,u)\leq 1.\] A daughter cell mutates from type \(v\) to type \(u\) with probability \(\mu(v,u)\), meaning that \(\overline{\mu}(v)\) is its total mutation probability. Notice that backward mutations are contained in this model. Finally, the exact transition rates from a state \(z=(z_{v})_{v\in V}\in\mathbb{N}_{0}^{V}\) of the process \(\mathcal{Z}\) are \[z\mapsto\begin{cases}&z-\delta_{v},\qquad\qquad\text{at rate }z_{v}\beta(v),\\ &z-\delta_{v}+\delta_{u}+\delta_{w},\qquad\text{at rate }2z_{v}\alpha(v)\mu(v,u) \mu(v,w)\mathbbm{1}_{\{(v,u)\in E\}}\mathbbm{1}_{\{(v,w)\in E\}}\mathbbm{1}_{ \{u\neq w\}},\\ &z-\delta_{v}+2\delta_{u},\qquad\qquad\text{at rate }z_{v}\alpha(v)\mu(v,u)^{2} \mathbbm{1}_{\{(v,u)\in E\}},\\ &z+\delta_{v},\qquad\qquad\text{at rate }z_{v}\alpha(v)\left(1-\overline{\mu}(v) \right)^{2}+2\sum\limits_{u\in V:(u,v)\in E}\!\!\!z_{u}\alpha(u)\mu(u,v)\left(1 -\overline{\mu}(u)\right),\end{cases}\] where \(\forall v\in V,\delta_{v}=\left(\mathbbm{1}_{\{u=v\}}\right)_{u\in V}\). Throughout the paper, the growth rate of the wild-type sub-population is assumed to be strictly positive, \(\lambda(0)>0\), since otherwise the wild-type sub-population does not survive almost surely. The biological motivation of this model is to capture the dynamics over time of the genetic composition of a population of cells during carcinogenesis. Tumors are detected when they reach a large number of cells, typically \(10^{9}\) cells. The mutation rate per base pair per cell division is generally estimated to be of order \(10^{-9}\), see [20, 4]. This naturally invites the framework of the _large population and power law mutation rates regime_. It means that a parameter \(n\in\mathbb{N}\) is used to quantify both the decrease of the mutation probabilities, as negative powers of \(n\), and the typical size of the population at which we are interested in understanding the genetic composition, as positive powers of \(n\). The aim is to obtain asymptotic results on the sizes of all the mutant sub-populations when \(n\) goes to infinity. It is a classical stochastic regime studied in particular in [7, 6, 13, 3, 5, 8, 14, 2, 30, 16]. Such a regime is referred to in [7, 6] as the _large population rare mutations limit_, but we decided to add the qualifier _power law mutation rates_ in order to distinguish this regime from the classical _rare mutation regime_, where the mutation probabilities \(\mu^{(n)}\) scale typically as \(e^{-Cn}\ll\mu^{(n)}\ll\frac{1}{n\log(n)}\). Indeed, in the large population power law mutation rates regime, the mutation probability can be of a higher order than in the rare mutation regime, for instance if \(\mu^{(n)}\propto n^{-\alpha}\) with \(\alpha\in(0,1]\). To be more precise, let \(L:=\{\ell(v,u)\in\mathbb{R}_{+}^{*},\forall(v,u)\in E\}\) be a set of strictly positive labels on the edges of the graph. Introduce a sequence of models \(\left(\mathcal{Z}^{(n)}\right)_{n\in\mathbb{N}}\), where for each \(n\in\mathbb{N}\), \(\mathcal{Z}^{(n)}\) corresponds to the process described above with the mutation kernel \(\mu^{(n)}:E\longrightarrow[0,1]\) satisfying \[\forall(v,u)\in E,n^{\ell(v,u)}\mu^{(n)}(v,u)\underset{n\to\infty}{ \longrightarrow}\mu(v,u)\in\mathbb{R}^{+}. \tag{1}\]
For each \(t\in\mathbb{R}_{+}^{*}\), the stopping times corresponding to the first time that the wild-type population \(Z_{0}^{(n)}\), respectively the total population \(Z_{tot}^{(n)}:=\sum_{v\in V}Z_{v}^{(n)}\), reaches the level \(n^{t}\) are defined as \[\eta_{t}^{(n)} :=\inf\left\{u\in\mathbb{R}^{+}:Z_{0}^{(n)}(u)\geq n^{t}\right\},\] \[\sigma_{t}^{(n)} :=\inf\left\{u\in\mathbb{R}^{+}:Z_{tot}^{(n)}(u)\geq n^{t}\right\}.\] Two different biological interpretations, in different settings, can be given to motivate both of them. For instance, when considering metastasis, the wild-type population \(Z_{0}^{(n)}\) may represent the primary tumor, and the mutant sub-populations \(Z_{v}^{(n)}\), for \(v\in V\backslash\{0\}\), may correspond to secondary tumors. As it is the size, and not the age, of a tumor that clinicians have access to, it is biologically relevant to estimate the genetic composition of the secondary tumors when the primary one has a given size. This is mathematically encoded in looking at the first-order asymptotics of \(Z_{v}^{(n)}\left(\eta_{t}^{(n)}\right)\) for any \(v\in V\backslash\{0\}\). Another biological setting is when the total population \(Z_{tot}^{(n)}\) represents one tumor. It is then appropriate to obtain theoretical results about the sizes of the mutant sub-populations \(Z_{v}^{(n)}\), for any \(v\in V\backslash\{0\}\), when the tumor has reached a given size. This corresponds exactly to looking at the first-order asymptotics of \(Z_{v}^{(n)}\left(\sigma_{t}^{(n)}\right)\). Every time that results can be stated either with \(\eta_{t}^{(n)}\) or \(\sigma_{t}^{(n)}\), the following notation will be used: \[\rho_{t}^{(n)}:=\eta_{t}^{(n)}\text{ or }\sigma_{t}^{(n)}. \tag{2}\] In the present work the population of cells will be studied on different time-scales: the random time-scale \[\left(\rho_{t}^{(n)}+s\right)_{(t,s)\in\mathbb{R}^{+}\times\mathbb{R}}; \tag{3}\] and the following deterministic approximation \[\left(\mathfrak{t}_{t}^{(n)}+s\right)_{(t,s)\in\mathbb{R}^{+}\times\mathbb{R} }, \tag{4}\] where \[\mathfrak{t}_{t}^{(n)}:=t\frac{\log(n)}{\lambda(0)}.\] Intuitively, the lineage of wild-type cells generated from the initial cell is the first sub-population from which mutations arise. Understanding its growth thus gives the natural time scale on which to see mutations appear. Its birth and death rates are \(\alpha(0)\left(1-\overline{\mu}^{(n)}(0)\right)^{2}\) and \(\beta(0)+\alpha(0)\left(\overline{\mu}^{(n)}(0)\right)^{2}\) respectively. Because of the power law mutation rates regime of Equation (1), they converge to \(\alpha(0)\) and \(\beta(0)\) when \(n\) grows to \(\infty\), meaning that this lineage should behave asymptotically as a birth and death process with rates \(\alpha(0)\) and \(\beta(0)\). Indeed, such a result emerges from the natural martingale associated to a birth and death process, see Lemma 3.1. In particular, the growth rate of this lineage is close to \(\lambda(0)\), so this population reaches a size of order \(n^{t}\) approximately at the deterministic time \(\mathfrak{t}_{t}^{(n)}\), see Lemma 3.2. For any finite oriented labeled graph \((V,E,L)\), under the following _non-increasing growth rate condition_ \[\forall v\in V,\lambda(v)\leq\lambda(0), \tag{5}\] the first-order asymptotics of the sizes of all mutant populations \(Z_{v}^{(n)}\) are obtained both on the random and the deterministic time-scales (3) and (4), see Theorem 2.1. Assumption (5) can be biologically motivated.
Historically, tumor dynamics has been seen through the prism of clonal expansion of selective mutations, i.e. \(\lambda(v)>\lambda(0)\). Nevertheless, the paradigm of neutral evolution of cancer has recently been considered, see [31, 26, 33, 32, 9], meaning that the selective mutations are already present in the initial cell and that the occurring mutations are neutral ones (i.e. \(\lambda(v)=\lambda(0)\)). With Assumption (5), deleterious mutations (i.e. \(\lambda(v)<\lambda(0)\)) are also considered. This paradigm has been introduced because the genetic heterogeneity inside a tumor can be explained by considering only neutral mutations. Various statistical methods have been developed to infer the evolutionary history of tumors, including tests of neutral evolution, see [34, 1, 17] for details. Without any assumption on the growth rate function \(\lambda\), the study is made on the deterministic time-scale of Equation (4). As in [13, 3, 5, 8, 14, 2, 30], the asymptotic behaviors are obtained for the following _stochastic exponent_ processes \[\forall v\in V,X_{v}^{(n)}(t):=\frac{\log^{+}\left(Z_{v}^{(n)}\left(\mathfrak{ t}_{t}^{(n)}\right)\right)}{\log(n)/\lambda(0)}. \tag{6}\] The results are presented in Theorem 2.2. Here it is the exponent, as a power of \(n\), that is tracked for each sub-population, whereas Theorem 2.1 directly gives the order of the size in \(n\), which is a more refined result. Up to our knowledge, it is the first model considering the power law mutation rates regime (1) that captures this level of refinement in the asymptotic behaviors. Two new significant results emerge. First, it shows the remarkable fact that under Assumption (5) the randomness in the first-order asymptotics of all mutant sub-populations is fully given by the stochasticity of only one random variable \(W\), which encodes the long-time randomness of the lineage of wild-type cells issued from the initial cell. It means that the stochasticity of any mutant sub-population is fully driven, at least at the level of first-order asymptotics, by the randomness in the growth of the wild-type population, and not by the dynamics of the lineages of mutant cells or by the stochastic processes generating mutations. Second, it characterizes exactly whether a mutational path on the graph structure of the trait space asymptotically contributes to the growth of the mutant sub-populations, whereas asymptotic results on the stochastic exponents only allow one to discriminate some paths, not to determine exactly which paths actually contribute. More precisely, if the weight of a path is defined as the sum of the labels of its edges, asymptotic results on the stochastic exponents give that, for every trait \(v\), among the paths from \(0\) to \(v\) only those with the least weight might contribute to the asymptotic growth of trait \(v\). On the contrary, results directly on the first-order asymptotics of the mutant sub-populations allow one to discriminate, among the paths with the least weight, those which actually contribute to the dynamics of trait \(v\). In particular, among the paths with the least weight, only those with the maximal number of neutral mutations on their edges have an asymptotic impact on the growth of trait \(v\). Indeed, an additional multiplicative factor of order \(\log(n)\) for each neutral mutation of a path is captured when looking at the first-order asymptotics, and is obviously not captured with asymptotic results on the stochastic exponents alone.
Moreover, to our knowledge, it is the first time that this power-law mutation rates regime is studied in the random time-scale of Equation (3). From the biological point of view it is more interesting to get results on such a random time-scale rather than on the deterministic one. We obtain that the randomness in the first-order asymptotics of any mutant sub-population is fully given by the stochasticity of the survival of the lineage of wild-type cells issued from the initial cell. In [7, 6] Cheek and Antal study a model that can be seen as an application of the model of the present work to a specific finite oriented labeled graph \((V,E,L)\). Among their results, they fully characterize in distribution the asymptotic sizes of all the mutant sub-populations around the random time at which the wild-type sub-population reaches the typical size allowing mutations to occur. In their setting this corresponds to \(\left(\eta_{1}^{(n)}+s\right)_{s\in\mathbb{R}}\). In particular they obtain that the asymptotic sizes of all the mutant sub-populations around this random time \(\eta_{1}^{(n)}\) are finite almost surely, following a generalised Luria-Delbruck distribution, see Theorem 5.1 in [6]. The initial Luria and Delbruck model has generated many subsequent works, see in particular [27, 25, 22, 19, 21, 23, 7, 6]. Two major features explain the latter result. The first one is that asymptotically only a finite number of mutant cells are generated from the wild-type population until time \(\eta_{1}^{(n)}\), following a Poisson distribution. The second one is that all the lineages of the mutant cells generated from the wild-type population have, up to time \(\eta_{1}^{(n)}\), only an asymptotically finite random time to grow, which is exponentially distributed. We extend their results to larger times, typically those at which the total mutation rate from the sub-population of a trait \(v\) to the sub-population of a trait \(u\) grows as a positive power of \(n\), instead of remaining finite. In [13] Durrett and Mayberry study the exponentially growing Moran model. They consider the same mutation regime, the size of their total population grows exponentially fast at a fixed rate, and new individuals in the population choose their trait via a selective frequency-dependent process. In Theorem 2.2 a similar result is obtained for the case of a multitype branching population. In particular, in this setting the exponential speed of the total population (and of the dominant sub-populations) evolves through time. More specifically, we show that the speed is a non-decreasing piecewise constant function going from \(\lambda(0)\) to \(\max_{v\in V}\lambda(v)\), and taking values only in the set \(\{\lambda(v):v\in V\}\). In [7, 13, 3, 5, 8, 14, 2, 30], the authors consider the power law mutation rates regime of Equation (1) in the special case where all traits mutate with the same scaling, of a fixed order of a negative power of \(n\). In the present work the power law mutation rates regime is more general, allowing traits to mutate with different scalings, as in [6, 16]. As in [7, 6], and in contrast to the models in [13, 3, 5, 8, 14, 2, 16], the initial population \(\mathcal{Z}^{(n)}(0)\) is not assumed to have a macroscopic size. This introduces additional randomness in how the wild-type population stochastically grows to reach a macroscopic size.
But contrary to [7, 6], we condition neither on the survival of the wild-type population nor on the stopping times of Equation (3) being finite. In [28] Nicholson and Antal study a similar model under a slightly less general non-increasing growth rate condition. More precisely, in their case all the growth rates of the mutant populations are strictly smaller than the growth rate of the wild-type population: \(\forall v\in V\backslash\{0\},\lambda(v)<\lambda(0)\). But the main difference remains the mutation regime. In their case, only the last mutation is in the power law mutation rates regime; all other mutations have a fixed probability independent of \(n\). In Theorem 2.1 the case where all mutations are in the power law mutation rates regime is treated. Also, Nicholson and Antal are interested in obtaining the distribution of the first time at which a mutant sub-population acquires a mutant cell, whereas in the present work the first-order asymptotics of the sizes of the mutant sub-populations over time are studied. In [29] Nicholson, Cheek and Antal study the case of a mono-directional graph where time first tends to infinity with fixed mutation probabilities. In particular they obtain the almost sure first-order asymptotics of the sizes of the different mutant sub-populations. Under the non-increasing growth rate condition, they are able to characterize the distribution of the limit random variables they obtain. Without any condition on the growth rates, they study the distribution of the random limit under the small mutation probabilities limit, using the hypothesis of an approximating model with less stochasticity. Notice that the mutation regime they study is not the large population power law mutation rates regime of Eq. (1) considered in the present work: under the latter regime, both the size of the population tends to infinity and the mutation probabilities tend to \(0\), through the parameter \(n\), see Equation (1). In [18] Gunnarson, Leder and Zhang study a model similar to the one of the present work and are also interested in capturing the evolution over time of the genetic diversity of a population of cells, using in their case the well-known summary statistic called the site frequency spectrum (SFS). The main difference is the mutation regime, because they do not consider the power law mutation rates limit: in their case the mutation probabilities are fixed. Also, they restrict the study to the neutral cancer evolution case. As in the present work, they capture the first-order asymptotics of the SFS at a fixed time and at the random time at which the population first reaches a certain size. Two noticeable similarities in the results are that the first-order asymptotics of the SFS converges to a random limit when evaluated at a fixed time and to a deterministic limit when evaluated at the latter stochastic time. One could argue that in the present work the correct limit in the latter case is actually a stochastic one; but the stochasticity is fully given by the survival of the initial lineage of cells of trait \(0\), so that, conditioned on this event, the limit is deterministic. In particular, the results of Gunnarson, Leder and Zhang are all conditioned on nonextinction of the population. In [16] Gamblin and Lambert study a model of an exponentially growing asexual population that undergoes cyclic bottlenecks under the large population power law mutation rates regime.
Their trait space is composed of \(4\) sub-populations \(00,10,01\) and \(11\), where two paths of mutations are possible: \(00\mapsto 10\mapsto 11\) and \(00\mapsto 01\mapsto 11\). They study the special case where one mutation (10) has a high rate but is weakly beneficial, whereas the other mutation (01) has a low rate but is strongly beneficial. In particular they show the noticeable result that, due to cyclic bottlenecks, only a unique evolutionary path unfolds, but that modifying the intensity and period of the bottlenecks implies that all paths can be explored. Their work relies on a deterministic approximation of the wild-type sub-population \(00\), and some parts of the analysis of the behavior of the model are obtained only through heuristics. The present work, and more specifically Theorem 2.2 since they consider selective mutations, can be used and adapted to treat the case of cyclic bottlenecks in order to prove their results rigorously, both for the specific trait space that they consider and for a general finite trait space. The rest of the paper is organised as follows. In Section 2 the results and their biological interpretations are given. Sections 3 and 4 are dedicated to proving Theorem 2.1, which assumes Equation (5). In Section 3 the mathematical definition of the model is given for an infinite mono-directional graph, as well as the proof in this particular case. The generalisation of the proof from an infinite mono-directional graph to a general finite graph is given in Section 4. In Section 5, Theorem 2.2 is proved by adapting results from [13].

## 2 Main results and biological interpretation

In Subsection 2.1 the first-order asymptotics of the sizes of all the mutant sub-populations in the time-scales (3) and (4) are given under the non-increasing growth rate condition (5). In Subsection 2.2 the asymptotic results on the stochastic exponents of all the mutant sub-populations are given without any assumption on the growth rate function \(\lambda\). In each subsection, biological interpretations of the results are given.

### First-order asymptotics of the mutant sub-population sizes under the non-increasing growth rate condition

In this subsection assume that \((V,E,L)\) satisfies the non-increasing growth rate graph condition of Equation (5).

**Heuristics:** The next definitions, notations and results are first motivated by some heuristics for the simplest conceivable graph, i.e. a wild-type and a mutant population where only mutations from wild-type to mutant cells are considered. More precisely, \((V,E,L)=(\{0,1\},\{(0,1)\},\{\ell(0,1)\})\) as in Figure 1.

Figure 1: Two traits model without backward mutation

Under the power law mutation rates regime, the inner birth and death rates of the wild-type population are so close to \(\alpha(0)\) and \(\beta(0)\), respectively, that its natural martingale asymptotically behaves as that of a birth and death process with rates \(\alpha(0)\) and \(\beta(0)\) (Lemma 3.1). This fact allows one to approximate the growth of the wild-type population by an exponential growth with parameter \(\lambda(0)\). Then, if it survives, at time \(\mathfrak{t}_{t}^{(n)}\) (see (4)) its size is of order \(\mathcal{O}\left(n^{t}\right)\) (Lemma 3.2). From this fact, one understands why it is necessary to wait until time \(\mathfrak{t}_{\ell(0,1)}^{(n)}\) before seeing any mutation.
Indeed, with a mutation probability scaling as \(n^{-\ell(0,1)}\), the total mutation probability up to time \(\mathfrak{t}_{t}^{(n)}\) scales as \(\int_{0}^{t}n^{u}n^{-\ell(0,1)}d\left(u\frac{\log(n)}{\lambda(0)}\right)=\frac{n^{-\ell(0,1)}}{\lambda(0)}\left(n^{t}-1\right)\), which starts to be of order \(1\) for \(t\geq\ell(0,1)\). This is made formal by D. Cheek and T. Antal in [7, 6], and an illustration can be found in Figure 2. It is also possible to get some heuristics for the size of the mutant population at time \(\mathfrak{t}_{t}^{(n)}\), for \(t\geq\ell(0,1)\). Let \(\ell(0,1)\leq u\leq t\). The number of new mutations generated at time \(\mathfrak{t}_{u}^{(n)}\) scales as \(\exp\left(\lambda(0)(u-\ell(0,1))\frac{\log(n)}{\lambda(0)}\right)\). The remaining time for these new mutant cells to grow exponentially fast at rate \(\lambda(1)\) until time \(\mathfrak{t}_{t}^{(n)}\) is \(\mathfrak{t}_{t-u}^{(n)}\). This implies that their lineages reach at time \(\mathfrak{t}_{t}^{(n)}\) a size of order \[\mathcal{O}\left(\exp\left(\left[\lambda(1)t+(\lambda(0)-\lambda(1))u-\lambda(0)\ell(0,1)\right]\frac{\log(n)}{\lambda(0)}\right)\right). \tag{7}\] Then two scenarios are possible:
1. If \(\lambda(1)<\lambda(0)\): Equation (7) is maximal for \(u=t\) and equal to \(n^{t-\ell(0,1)}\). This means that the dynamics of the mutant population is driven by the mutations from the wild-type population and not by its inner growth. Mathematically, its size order at time \(\mathfrak{t}_{t}^{(n)}\) is fully given by the mutations generated at this time, and so is of order \(n^{t-\ell(0,1)}\), and not by the lineages issued from mutations at strictly earlier times. Biologically these mutations are called _deleterious_.
2. If \(\lambda(1)=\lambda(0)\): Equation (7) is independent of \(u\). This means that these lineages have the same size order at time \(\mathfrak{t}_{t}^{(n)}\) as any other lineage of mutant cells generated from mutational events at any other time between \(\mathfrak{t}_{\ell(0,1)}^{(n)}\) and \(\mathfrak{t}_{t}^{(n)}\). To put it differently, in the dynamics of the mutant population there is a balance between the contribution of mutations and its inner growth. This is a consequence of assuming \(\lambda(1)=\lambda(0)\). These mutations are referred to as _neutral mutations_, even if biologically speaking this would exactly mean the more restrictive condition \(\alpha(1)=\alpha(0)\) and \(\beta(1)=\beta(0)\).
Hence, to capture the total size of the mutant population at time \(\mathfrak{t}_{t}^{(n)}\), it remains to integrate all the lineages issued from mutational events at times \(\mathfrak{t}_{u}^{(n)}\), for \(\ell(0,1)\leq u\leq t\). This gives exactly the order \(\mathcal{O}\left((t-\ell(0,1))\log(n)n^{t-\ell(0,1)}\right)\). To sum up, for this simple graph, the mutant population scales after time \(\mathfrak{t}_{\ell(0,1)}^{(n)}\) as \[\mathcal{O}\left(n^{t-\ell(0,1)}\left[\mathbbm{1}_{\{\lambda(0)>\lambda(1)\}}+\mathbbm{1}_{\{\lambda(0)=\lambda(1)\}}(t-\ell(0,1))\log(n)\right]\right). \tag{8}\] Notice in particular that in any case, the mutant population grows exponentially at rate \(\lambda(0)\) after time \(\mathfrak{t}_{\ell(0,1)}^{(n)}\). An illustration of this heuristic can be found in Figure 3. These heuristics on this simple graph can be used as an elementary brick for deriving heuristics on a general finite graph.
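These heuristics can also be probed numerically. The following Python sketch is a minimal Gillespie-type simulation of the two-trait model of Figure 1, purely for illustration: all parameter values are hypothetical, double-mutation events (of probability \(\mu_n^{2}\)) are neglected, and no claim of efficiency is made.

```python
import math
import random

def simulate_two_traits(n, alpha0=1.0, beta0=0.5, alpha1=1.0, beta1=0.5,
                        ell=0.5, t=1.0):
    """Gillespie simulation of wild-type (z0) and mutant (z1) counts up to
    the deterministic time t_t^(n) = t*log(n)/lambda(0) of Equation (4).

    A wild-type division produces one mutant daughter with probability
    mu_n = n**(-ell); double mutations (probability mu_n**2) are neglected.
    """
    mu_n = n ** (-ell)
    lam0 = alpha0 - beta0
    horizon = t * math.log(n) / lam0
    z0, z1, clock = 1, 0, 0.0
    while z0 + z1 > 0:
        total_rate = (alpha0 + beta0) * z0 + (alpha1 + beta1) * z1
        clock += random.expovariate(total_rate)
        if clock >= horizon:
            break
        u = random.uniform(0.0, total_rate)
        if u < alpha0 * z0:                      # wild-type division
            if random.random() < mu_n:
                z1 += 1                          # one daughter mutates
            else:
                z0 += 1
        elif u < (alpha0 + beta0) * z0:          # wild-type death
            z0 -= 1
        elif u < (alpha0 + beta0) * z0 + alpha1 * z1:
            z1 += 1                              # mutant division
        else:
            z1 -= 1                              # mutant death
    return z0, z1

# With lambda(1) = lambda(0), heuristic (8) predicts a mutant size of order
# (t - ell) * log(n) * n**(t - ell) on runs where the wild-type survives.
random.seed(1)
n, t, ell = 10_000, 1.0, 0.5
samples = [simulate_two_traits(n, t=t, ell=ell) for _ in range(10)]
print("(z0, z1) samples:", samples)
print("heuristic mutant order:", (t - ell) * math.log(n) * n ** (t - ell))
```

On surviving runs, the ratio of the observed mutant size to the heuristic order should stay of order one as \(n\) grows, the residual random factor reflecting the variable \(W\) discussed below.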
Considering a vertex \(v\in V\backslash\{0\}\), there are potentially many mutational paths from the initial vertex \(0\) to \(v\), and it needs to be understood which ones are involved in the size order of the mutant population of trait \(v\). Using both the previous heuristics on the first time at which mutations are generated, and the fact that after this time the mutant population grows exponentially fast at rate \(\lambda(0)\), it seems natural to apply the reasoning iteratively. Thus, given one path from \(0\) to \(v\), the time \(u\) in the time-scale \(\mathfrak{t}_{u}^{(n)}\) to wait before seeing a cell of trait \(v\) generated via this specific path is the sum of the labels of the edges of this path, called the weight of this path. Then, after this time, this sub-population of cells of trait \(v\) grows exponentially fast at rate \(\lambda(0)\). In particular, two interesting facts about the total mutant population of trait \(v\) emerge. First, it starts having cells after a time which is the minimum of the weights over the paths from \(0\) to \(v\). Second, after this time only the paths whose weights are equal to this minimum might contribute to the size order of the mutant cells of trait \(v\). This is due to the fact that a time delay creates an exponential delay in the size order. But as seen in (8), for any neutral mutation a supplementary multiplicative factor of order \(\log(n)\) is captured in the size order. Consequently, over the paths from \(0\) to \(v\) whose weights are equal to this minimum, only those with the maximal number of neutral mutations actually contribute to the size order of the mutant population of trait \(v\), more specifically with a factor of \(\log(n)\) raised to the power of this maximal number of neutral mutations. Moreover, for any of these admissible paths, at each neutral mutation a supplementary time integral is obtained. An illustration with an example is given in Figure 4.

Figure 2: Heuristics for the first occurrence time of mutant cells

Figure 3: Heuristics for the size of the mutant population after time \(\mathfrak{t}_{\ell(0,1)}^{(n)}\)

Figure 4: Heuristics for the contribution of paths in the size order of a mutant sub-population: in this example the dashed red path has a weight equal to 5 whereas the dotted blue and the plain green ones have a weight equal to 4. Thus, only the two latter ones may contribute to the size order of the mutant. But the dotted blue path has only one neutral mutation compared to the plain green one which has two neutral mutations. Finally, only the plain green path will contribute to the size order of the purple mutant sub-population. For \(t\geq 4\), at time \(\mathfrak{t}_{t}^{(n)}\) it will grow as \(\log^{2}(n)e^{\lambda(0)\mathfrak{t}_{t-4}^{(n)}}\).

**Notations:** The natural definitions arising from these heuristics are now formally stated before giving the results.

**Definition 2.1**.: _(Deleterious and neutral vertices) A vertex \(v\in V\) satisfying \(\lambda(v)=\lambda(0)\), respectively \(\lambda(v)<\lambda(0)\), is called a neutral, respectively deleterious, vertex._

**Remark 2.1**.: _In the previous definition, the neutral or deleterious denomination of a mutation originates from the comparison of its inner growth rate with the growth rate of the wild-type population. But one could imagine a mutation from a vertex \(v\) to a vertex \(u\) satisfying \(\lambda(v)<\lambda(u)\leq\lambda(0)\).
This mutation should theoretically be called selective, but in the previous definition it is actually called neutral or deleterious (depending on the value of \(\lambda(u)\) compared to \(\lambda(0)\)). This nomenclature emerges from the fact that under Assumption (5) any mutant population grows exponentially fast at rate \(\lambda(0)\), as seen in the previous heuristics, which legitimates the previous definition._

**Definition 2.2**.: _(Path on the graph) \(\gamma=(v(0),\cdots,v(k))\) is said to be the path on the graph \((V,E)\) linking \(v(0)\) to \(v(k)\) by using the edges \((v(i),v(i+1))\) if for all \(0\leq i\leq k,v(i)\in V\) and \(\forall 0\leq i\leq k-1,(v(i),v(i+1))\in E\). For a path \(\gamma=(v(0),v(1),\cdots v(k))\) on \((V,E,L)\) define_ \[t(\gamma) :=\sum_{i=0}^{k-1}\ell(v(i),v(i+1)),\] \[\gamma_{neut} :=\{v(i),1\leq i\leq k:\lambda(v(i))=\lambda(0)\},\] \[\theta(\gamma) :=|\gamma_{neut}|,\] _as respectively the sum of the labels of the edges of the path \(\gamma\), the subset of vertices at the end of the edges of \(\gamma\) that are neutral, and the cardinality of the previous subset. Introduce also \(w_{del}\), \(w_{neut}\) as_ \[w_{del}(\gamma) :=\prod_{1\leq i\leq k,\lambda(v(i))<\lambda(0)}\frac{2\alpha(v(i-1))\mu(v(i-1),v(i))}{\lambda(0)-\lambda(v(i))},\] \[w_{neut}(\gamma) :=\prod_{1\leq i\leq k,\lambda(v(i))=\lambda(0)}\frac{2\alpha(v(i-1))\mu(v(i-1),v(i))}{\lambda(0)}.\] _Introduce \(\forall i\leq k,t_{\gamma}(i):=\sum_{j=0}^{i-1}\ell(v(j),v(j+1))\) as the sum of the labels over the first \(i\) edges of \(\gamma\). Finally introduce \(\sigma\), an increasing function from \(\llbracket 1,\theta(\gamma)\rrbracket\) to \(\llbracket 1,k\rrbracket\), such that \(v(\sigma_{i})\) is the \(i\)-th neutral vertex of the path \(\gamma\). Then let_ \[I_{\gamma}(t):=\int_{t_{\gamma}(\sigma_{\theta(\gamma)})}^{t\vee t_{\gamma}(\sigma_{\theta(\gamma)})}\int_{t_{\gamma}(\sigma_{\theta(\gamma)-1})}^{u_{1}}\cdots\int_{t_{\gamma}(\sigma_{\theta(\gamma)-k})}^{u_{k}}\cdots\int_{t_{\gamma}(\sigma_{1})}^{u_{\theta(\gamma)-1}}du_{\theta(\gamma)}\cdots du_{1}.\] _Finally, the weight of the path \(\gamma\) at time \(t\) is defined as_ \[w_{\gamma}(t):=w_{del}(\gamma)w_{neut}(\gamma)I_{\gamma}(t). \tag{9}\]

**Remark 2.2**.: _Notice that if \(\overline{\gamma}:=(v(0),\cdots,v(k-1))\) we have_ \[w_{\gamma}(t)= 2\alpha(v(k-1))\mu(v(k-1),v(k))\] \[\cdot\left(\mathbbm{1}_{\{\lambda(v(k))<\lambda(0)\}}\frac{1}{\lambda(0)-\lambda(v(k))}w_{\overline{\gamma}}(t)+\mathbbm{1}_{\{\lambda(v(k))=\lambda(0)\}}\frac{1}{\lambda(0)}\int_{t(\overline{\gamma})}^{t\vee t(\overline{\gamma})}w_{\overline{\gamma}}(s)ds\right).\]

**Definition 2.3**.: _(Admissible paths) For all \(v\in V\) denote by \(P(v)\) the set of all paths \(\gamma\) on \((V,E)\) linking the vertex 0 to the vertex \(v\). Define also_ \[t(v) :=\min\{t(\gamma),\forall\gamma\in P(v)\},\] \[\theta(v) :=\max\left\{\theta(\gamma),\forall\gamma\in P(v),t(\gamma)=t(v)\right\},\] \[A(v) :=\{\gamma\in P(v):t(\gamma)=t(v)\text{ and }\theta(\gamma)=\theta(v)\}.\]

**Remark 2.3**.: _In the previous definition, \(A(v)\) is called the set of admissible paths because, as seen in the heuristics, only paths belonging to \(A(v)\) contribute to the growth dynamics of the mutant cells of trait \(v\).
This is made formal in Theorem 2.1._

**Definition 2.4**.: _(Weight of a vertex) The weight of a vertex \(v\in V\backslash\{0\}\) at time \(t\) is defined as_ \[w_{v}(t):=\sum_{\gamma\in A(v)}w_{\gamma}(t).\]

**Remark 2.4**.: \(w_{v}(t)\) _is called the weight of the vertex \(v\) at time \(t\) because, as shown in Theorem 2.1, it is exactly the deterministic part of the first-order asymptotics of \(Z_{v}^{(n)}\left(\mathfrak{t}_{t}^{(n)}\right)\) or \(Z_{v}^{(n)}\left(\rho_{t}^{(n)}\right)\). As mentioned in the heuristics, only the paths \(\gamma\) from \(0\) to \(v\) in the set \(A(v)\) actually contribute to the size order of the mutant population of trait \(v\). In each of these paths \(\gamma\), each edge has a constant contribution, depending on the parameters of the edge, according to whether it points to a deleterious or a neutral vertex. This is summed up in the respective weights \(w_{del}(\gamma)\) and \(w_{neut}(\gamma)\). The time dependency of \(w_{v}\) is defined via the time intensity \(I_{\gamma}(t)\) for every \(\gamma\in A(v)\). In particular, it is a succession of integral terms depending only on the neutral mutations of \(\gamma\)._

**Results:** The more refined result under Assumption (5) can now be formally stated.

**Theorem 2.1**.: _Assume that \((V,E,L)\) satisfies both the mutation regime described in (1) and the non-increasing growth rate graph condition of (5). Let \(\gamma_{n}=\frac{\log(n)}{\log(\log(n))\theta_{\max}+\varphi_{n}}\), where \(\varphi_{n}\underset{n\rightarrow\infty}{\rightarrow}\infty\) is such that \(\gamma_{n}\underset{n\rightarrow\infty}{\rightarrow}\infty\), and where \(\theta_{\max}:=\max_{v\in V\backslash\{0\}}\theta(v)\). Let also \(\psi_{n}\) be such that \(e^{\varphi_{n}}\log(n)=o(\psi_{n}^{2})\). Define for all \((t,s)\in\mathbb{R}^{+}\times\mathbb{R}\)_ \[d_{v}^{(n)}(t,s):= \mathbbm{1}_{\left\{t\in[0,t(v)-\gamma_{n}^{-1})\right\}}+\mathbbm{1}_{\left\{t\in[t(v)-\gamma_{n}^{-1},t(v))\right\}}\psi_{n}\log^{\theta(v)-1}(n) \tag{10}\] \[+\mathbbm{1}_{\left\{t\in[t(v),\infty)\right\}}n^{t-t(v)}\log^{\theta(v)}(n)e^{\lambda(0)s}.\] _Let \((T,M)\in\left(\mathbb{R}_{+}^{*}\right)^{2}\) and \(0<T_{1}<T_{2}\). There exists a random variable \(W\) satisfying_ \[W\stackrel{{ law}}{{=}}Ber\left(\frac{\lambda(0)}{\alpha(0)}\right)\otimes Exp\left(\frac{\lambda(0)}{\alpha(0)}\right),\] _such that for all \(v\in V\backslash\{0\}\) we obtain the following results for the different time-scales:_
1. _Deterministic time scale_ (4)_:_ _If_ \(\lambda(v)=\lambda(0)\) _then_ \[\left(\frac{Z_{v}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{v}^{(n)}(t,s)}\right)_{(t,s)\in[0,T]\times[-M,M]}\underset{n\rightarrow\infty}{\longrightarrow}\left(Ww_{v}(t)\right)_{(t,s)\in[0,T]\times[-M,M]}.\] (11) _If_ \(\lambda(v)<\lambda(0)\) _then_ \[\left(\frac{Z_{v}^{(n)}\left(\mathfrak{t}_{t(v)+t}^{(n)}+s\right)}{n^{t}\log^{\theta(v)}(n)e^{\lambda(0)s}}\right)_{(t,s)\in[T_{1},T_{2}]\times[-M,M]}\underset{n\rightarrow\infty}{\longrightarrow}\left(Ww_{v}(t(v)+t)\right)_{(t,s)\in[T_{1},T_{2}]\times[-M,M]}.\] (12) 2.
_Random time scale_ (3)_: Take_ \(\left(\rho_{t}^{(n)}\right)_{t\in\mathbb{R}^{+}}\) _as defined in (_2_)._ _If_ \(\lambda(v)=\lambda(0)\) _then_ \[\left(\frac{Z_{v}^{(n)}\left(\rho_{t}^{(n)}+s\right)}{d_{v}^{(n)}(t,s)}\right)_{(t,s)\in[T_{1},T_{2}]\times[-M,M]}\underset{n\rightarrow\infty}{\longrightarrow}\left(\mathbbm{1}_{\left\{W>0\right\}}w_{v}(t)\right)_{(t,s)\in[T_{1},T_{2}]\times[-M,M]}.\] (13) _If_ \(\lambda(v)<\lambda(0)\) _then_ \[\left(\frac{Z_{v}^{(n)}\left(\rho_{t(v)+t}^{(n)}+s\right)}{n^{t}\log^{\theta(v)}(n)e^{\lambda(0)s}}\right)_{(t,s)\in[T_{1},T_{2}]\times[-M,M]}\underset{n\rightarrow\infty}{\longrightarrow}\left(\mathbbm{1}_{\left\{W>0\right\}}w_{v}(t(v)+t)\right)_{(t,s)\in[T_{1},T_{2}]\times[-M,M]}.\] (14) _Using the mathematical definition of the model given in Section 4, see (78) and (79), the above convergences hold in probability in the adequate \(L^{\infty}\)-spaces. For any other mathematical description, the convergences hold at least in distribution in \(\mathbb{D}\left(\left[0,T\right]\times\left[-M,M\right]\right)\) for Equation (11) and in \(\mathbb{D}\left(\left[T_{1},T_{2}\right]\times\left[-M,M\right]\right)\) for Equations (12), (13) and (14)._

The proof of Theorem 2.1 is based on a martingale approach using Doob's and maximal inequalities. The first step involves the control of the growth of the lineage of wild-type cells issued from the initial cell, both for the deterministic and the random time-scales (4) and (3) (Lemmas 3.1 and 3.2). Then, for any vertex \(v\in V\backslash\{0\}\), potentially many mutational paths on the graph \((V,E)\) can start from \(0\) and lead to \(v\). The contribution of each of these paths to the first-order asymptotics of the mutant sub-population of trait \(v\) needs to be understood. The proof is then done in 2 steps. The first one consists in considering an infinite mono-directional graph under Assumption (5) and in obtaining the result for this particular graph, see Section 3. Doing the first step for an infinite graph allows in particular to deal with the cycles (generated, for instance, by backward mutations) of a general finite graph. The second step consists in discriminating, among all the paths from the initial vertex \(0\) to \(v\), the ones that do not contribute to the first-order asymptotics of the mutant sub-population of trait \(v\), see Section 4.

**Remark 2.5**.: _(i) Notice that a multiplicative factor of \(\log^{\theta(v)}(n)\) is captured, see Equations (10), (11), (12), (13) and (14). Getting results on the stochastic exponents (see (6)) does not capture such a factor. For instance, with the model of Figure 1, if \(\lambda(1)=\lambda(0)\) Theorem 2.1 gives that after time \(\ell(0,1)\), \(Z_{1}^{(n)}\left(\mathfrak{t}_{t}^{(n)}\right)\) behaves approximately as \(\log(n)e^{\lambda(0)\mathfrak{t}_{t-\ell(0,1)}^{(n)}}\), whereas what is captured by \(X_{1}^{(n)}(t)\) is asymptotically \(\lambda(0)(t-\ell(0,1))\) after time \(\ell(0,1)\). (ii) The random variable \(W\) is explicitly defined as the almost sure limit of the natural positive martingale associated to a specific birth and death branching process with rates \(\alpha(0)\) and \(\beta(0)\), see (81). The martingale associated to the lineage of wild-type cells issued from the initial cell is shown to behave as the one associated to the latter birth and death branching process (Lemma 4.1). Thus \(W\) quantifies the randomness of this lineage over the long time.
Due to the power law mutation rates regime, mutations arise after a long time, such that the stochasticity of this lineage is already given by \(W\). Notice that under Assumption (5) the randomness in the first-order asymptotics of any mutant sub-population is summed up in \(W\): the stochasticity of these sub-populations is driven by the stochasticity in the growth of the wild-type sub-population rather than by the randomness in the mutational process or the randomness of the lineages of mutant cells. (iii) It seems more than natural not to obtain such a result when considering selective mutations (\(\lambda(v)>\lambda(0)\)). Indeed, a selective mutation would mean that any time advantage translates into a growth advantage. Thus neither the stochasticity of the mutational process nor that of the lineages of mutant cells can be ignored. Hence, hoping to control the stochasticity of the mutant population by controlling only the randomness of the wild-type population, and not the randomness of the mutational process or of the lineages of the mutant cells, is vain; a martingale approach to get the first-order asymptotics cannot be successful for a selective mutation. Nevertheless, looking at the stochastic exponent (6), the martingale approach allows one to get the convergence results given in Theorem 2.2. (iv) In view of Theorem 2.1, the mathematical definition of a neutral mutation, \(\lambda(v)=\lambda(0)\), is well understood, instead of the more restrictive but biologically more meaningful condition \(\alpha(v)=\alpha(0)\) and \(\beta(v)=\beta(0)\). Indeed, keeping the growth rate \(\lambda(v)\) equal to \(\lambda(0)\) while changing the birth and death rates \(\alpha(v)\) and \(\beta(v)\) modifies the distribution of any lineage of mutant cells. Consequently, one could naturally believe that it should impact the stochasticity of the size order of the mutant population. This is not the case: the randomness in the first asymptotic order is fully summed up by \(W\). Hence it is fully consistent to require, for the neutral assumption, only a condition on the growth rate function instead of on the birth and death rates. (v) Considering the time-scale \(\mathfrak{t}_{t}^{(n)}\), notice that the result slightly differs depending on whether the vertex is neutral or deleterious. Indeed, when looking at the asymptotic behavior for a deleterious vertex \(v\) our result is true strictly after time \(t(v)\), whereas in the case of a neutral vertex the whole trajectory from the initial time can be dealt with. Mathematically, this difference originates from the supplementary multiplicative \(\log(n)\) factor in the first asymptotic order when considering a neutral mutation. It allows one to control the quadratic variation at time \(t(v)\) of the martingale associated to the mutant population. Then exactly three different regimes are obtained, see (10) and (11):_
* _Up to_ \(t(v)-\gamma_{n}^{-1}\)_: with high probability no mutational path from 0 to_ \(v\) _has generated a mutant cell of trait \(v\). Since \(\gamma_{n}\to\infty\) and satisfies \(\gamma_{n}=o(\log(n))\), \(t(v)\) can be interpreted as the first time (when considering the time-scale accelerated in \(\log(n)\)) at which it becomes asymptotically possible to see the first appearance of a mutant cell of trait \(v\).
This result is actually also true for deleterious mutation, see Lemma 3.4._ * _For_ \(t\in\left[t(v)-\gamma_{n}^{-1},t(v)\right)\)_: in this time interval, some mutants cells of trait_ \(v\) _are produced, but the growth of the mutant sub-population of trait_ \(v\) _does not start to be an exponential growth. We succeed to dominate this growth by_ \(\psi_{n}\log^{\theta(v)-1}(n)\)_, with a well chosen function_ \(\psi_{n}\)_. Heuristically what happens is that the total number of mutant cells of trait_ \(v\) _issued from a mutational event up to time_ \(t\) _is of order_ \(\mathcal{O}\left(\log^{\theta(v)-1}(n)\right)\)_. Moreover with the remaining time for the lineages of these mutant cells to grow, we succeed to control the size of the mutant sub-population of trait_ \(v\) _by at most_ \(e^{\frac{\mu_{n}}{4}}\sqrt{\log(n)}\log^{\theta(v)-1}(n)\)_. Consequently dividing by any function_ \(\psi_{n}\) _satisfying_ \(e^{\varphi_{n}}\log(n)=o(\psi_{n}^{2})\) _the asymptotic limits is_ \(0\)_._ * _For_ \(t\in[t(v),\infty)\)_: with high probability the number of mutant cells of trait_ \(v\) _grows exponentially fast at rate_ \(\lambda(0)\)_. A supplementary multiplicative_ \(\log^{\theta(v)}(n)\) _factor is present due to the neutral mutations on the paths of_ \(A(v)\)_. Then it globally scales as_ \(n^{(t-t(v))}\log^{\theta(v)}(n)w_{v}(t)\)_._ _(vi) When comparing point (i) and (ii) of Theorem 2.1 notice that the result is transferred from the deterministic time-scale_ \(\mathfrak{t}_{t}^{(n)}\) _into the random time-scale_ \(\rho_{t}^{(n)}\) _by switching only_ \(W\) _to_ \(\mathbbm{1}_{\{W>0\}}\)_. This a priory surprising fact can be explained by the essential role of_ \(W\)_. As mentioned in Remark_ 2.5 _(ii)_,_ \(W\) _encodes the stochasticity on the long time for the lineage of wild-type cells issued from the initial cell. By showing that the time-scale_ \(\mathfrak{t}_{t}^{(n)}\) _is the right deterministic approximation of_ \(\rho_{t}^{(n)}\) _(Lemma_ 4.2_), one shows that having an asymptotic result on time-scale_ \(\mathfrak{t}_{t}^{(n)}\) _allows to get it for the time scale_ \(\rho_{t}^{(n)}\)_. This idea is made formal using a similar technique as in_ _[_15_]_ _Lemma_ 3_. Then the switch from_ \(W\) _to_ \(\mathbbm{1}_{\{W>0\}}\) _in the result is due to the fact that the time-scale_ \(\rho_{t}^{(n)}\) _already bears by definition the stochasticity of the random variable_ \(W\)_. Consequently the only randomness that needs to be kept is the survival of the lineage issued from the initial cell, which is asymptotically given by_ \(\mathbbm{1}_{\{W>0\}}\)_._ ### Result for a general finite oriented labeled graph This subsection is free from the non-increasing growth rate condition of Equation (5). Without this condition, the martingale approach fails in order to get the first-order asymptotics off all the mutant sub-populations. But, the stochastic exponent, as defined in (6), off all the mutant sub-populations can be uniformly tracked over time. In particular, we show that the limits are positive deterministic non-decreasing piecewise linear continuous functions. Such limits are defined via a recursive algorithm tracking their slopes over time. More precisely, we show that the slopes can only increase and take values on the set of the growth rates. Two different kinds of updates can be made. 
The first one is when a not-yet-born trait becomes alive and takes as slope the maximum of its inner growth rate and the slope of the sub-population giving birth to it. The second one is when an already born trait increases its slope because another born trait among its incoming neighbors, with a higher slope, has reached the typical size allowing it to drive the trait in question, consequently giving it its slope. This heuristic is made formal in the following theorem. The complexity of the algorithm comes from the trait structure, which is a general finite trait space; on a mono-directional one, this algorithm would be much simpler. In particular, both kinds of event can happen at the same time.

**Theorem 2.2**.: _For all \(v\in V\) define_ \[X_{v}^{(n)}(t):=\frac{\log_{+}\left(Z_{v}^{(n)}\left(\mathfrak{t}_{t}^{(n)}\right)\right)}{\log(n)/\lambda(0)}.\] _Then we have for all \(0<T_{1}<T_{2}\),_ \[\left(\left(X_{v}^{(n)}(t)\right)_{v\in V}\right)_{t\in[T_{1},T_{2}]}\underset{n\to\infty}{\longrightarrow}\mathbbm{1}_{\{W>0\}}\left(\left(x_{v}(t)\right)_{v\in V}\right)_{t\in[T_{1},T_{2}]},\] _in probability in \(L^{\infty}[T_{1},T_{2}]\), where for all \(v\in V\), \(x_{v}\) is a positive deterministic non-decreasing piecewise linear continuous function that is obtained via a recursive approach tracking its slope over time. In particular, there exist \(k^{*}\in\mathbb{N}\) and \(0=\Delta_{0}<\Delta_{1}<\cdots<\Delta_{k^{*}}<\infty\) such that the slopes of \((x_{v})_{v\in V}\) only change at these times. For \(j\in\{0,\cdots,k^{*}\}\), at time \(\Delta_{j}\) two kinds of changes in the slope can happen: either a new trait starts to grow, or an already growing trait increases its slope due to a growth driven by another, more selective, trait. Along the algorithm the following quantities are tracked for all \(j\in\{0,\cdots,k^{*}\}\) at time \(\Delta_{j}\):_
* _the set of alive traits,_ \(A_{j}\)_,_
* _the set of not yet born traits,_ \(U_{j}\)_,_
* _the slope of_ \(x_{v}\)_,_ \(\lambda_{j}(v)\)_,_
* _and the set of traits whose growth is driven by trait_ \(v\)_,_ \(C_{j}(v)\)_._

_Initialisation: Set \(A_{0}=\{0\}\), \(U_{0}=V\backslash\{0\}\) and for all \(v\in V\)_ \[x_{v}(0)=0,\lambda_{0}(v)=\lambda(0)\mathbbm{1}_{\{v=0\}},\text{ and }C_{0}(v)=\emptyset.\] _Induction: Let \(j\in\{0,\cdots,k^{*}-1\}\). Assume that there exist times \(0=\Delta_{0}<\Delta_{1}<\cdots<\Delta_{j}<\infty\) such that \((x_{v})_{v\in V}\) are positive deterministic non-decreasing piecewise linear continuous functions defined on \([0,\Delta_{j}]\), whose changes of slope happen only on the discrete set \(\{\Delta_{1},\cdots,\Delta_{j}\}\). Also assume that there exist \(\lambda_{j}(v)\), \(A_{j}\), \(U_{j}\), and \(C_{j}(v)\), respectively the slope of \(x_{v}\), the sets of alive and not yet born vertices, and the set of vertices whose growth is driven by \(v\), all at time \(\Delta_{j}\)._

_Then there exists \(\Delta_{j+1}\in(\Delta_{j},\infty)\) such that \((x_{v})_{v\in V}\) are constructed during the time period \([\Delta_{j},\Delta_{j+1}]\) according to the following schedule.
For all \(v\in V\) and for all \(t\geq\Delta_{j}\) define the function_ \[y_{v}(t)=(t-\Delta_{j})\lambda_{j}(v)+x_{v}(\Delta_{j}).\] _For all \(v\in U_{j}\) define_ \[\forall u\in A_{j},\text{ such that }(u,v)\in E,\delta_{u,v}:=\inf\{t\geq\Delta_{j}:y_{u}(t)\geq\lambda(0)\ell(u,v)\},\] \[\delta_{v}:=\inf_{u\in A_{j}:(u,v)\in E}\delta_{u,v},\] \[\nu(v):=\{u\in A_{j}:(u,v)\in E\text{ and }\delta_{u,v}=\delta_{v}\}.\] _For all \(v\in A_{j}\) define_ \[B_{j}(v):=\{u\in A_{j}:(v,u)\in E\text{ and }\lambda_{j}(v)>\lambda_{j}(u)\},\] \[\forall u\in B_{j}(v),\delta_{v,u}:=\inf\{t\geq\Delta_{j}:y_{v}(t)\geq y_{u}(t)+\lambda(0)\ell(v,u)\},\] \[\delta_{v}:=\inf_{u\in B_{j}(v)}\delta_{v,u},\] \[\nu(v):=\{u\in B_{j}(v):\delta_{v,u}=\delta_{v}\}.\] _Then define \(\Delta_{j+1}:=\inf_{v\in V}\delta_{v}\) and \(\nu_{j+1}:=\{v\in V:\delta_{v}=\Delta_{j+1}\}\). Then proceed to the following updates:_
* _Let_ \(A_{j+1}:=A_{j}\cup(\nu_{j+1}\cap U_{j})\) _and_ \(U_{j+1}=U_{j}\backslash\left(\nu_{j+1}\cap U_{j}\right).\) _Also let_ \(\forall v\in U_{j+1},\)__\(\lambda_{j+1}(v)=\lambda_{j}(v)=0\)_,_ \(C_{j+1}(v)=C_{j}(v)=\emptyset\)_._
* _For all_ \(v\in\nu_{j+1}\cap A_{j}\)_, introduce the set_ \(\nu^{(-)}(v):=\{u\in\nu(v):\exists w\in\nu_{j+1}\cap A_{j},\lambda_{j}(w)>\lambda_{j}(v),\text{ and }u\in\nu(w)\}\)_. Then let_ \(C_{j+1}(v):=C_{j}(v)\bigcup_{u\in\nu(v)\backslash\nu^{(-)}(v)}\left(\{u\}\cup C_{j}(u)\right).\) _For all_ \(u\in\nu(v)\backslash\nu^{(-)}(v)\) _and_ \(w\in C_{j}(u)\)_,_ \(\lambda_{j+1}(u)=\lambda_{j+1}(w)=\lambda_{j}(v)\)_._
* _For all_ \(v\in A_{j}\) _whose slope has not been updated yet, let_ \(\lambda_{j+1}(v)=\lambda_{j}(v)\)_. And for all_ \(v\in A_{j}\) _whose set_ \(C_{j}(v)\) _has not been updated yet, let_ \(C_{j+1}(v):=C_{j}(v)\)_._
* _For all_ \(v\in\nu_{j+1}\cap U_{j}\)_, let_ \(\lambda_{j+1}(v):=\max\left(\lambda(v),\max_{u\in\nu(v)}\lambda_{j+1}(u)\right)\)_, and_ \(C_{j+1}(v)=C_{j}(v)=\emptyset\)_. If_ \(\lambda_{j+1}(v)\geq\lambda(v)\)_, introduce the set_ \(\nu^{+}(v):=\{u\in\nu(v):\lambda_{j+1}(u)=\max_{w\in\nu(v)}\lambda_{j+1}(w)\}\)_, and for all_ \(u\in\nu^{+}(v)\)_,_ \(C_{j+1}(u):=C_{j+1}(u)\cup\{v\}\)_._

_For any other mathematical description than the one given in Section 4, see (78) and (79), the convergences hold at least in distribution in \(\mathbb{D}\left([T_{1},T_{2}]\right)\)._

The proof of this theorem is given in Section 5. It is heavily based on the proofs of [13]: we exploit the stochastic construction of the model, given at the beginning of Section 4, to adapt the proofs of that article to the situation of the present work. For that reason, we introduce lemmas and explain in the proofs how the adaptations from the proofs of [13] are made, without reproving them. This theorem is the counterpart of the study made in [8] in the case of branching sub-populations, instead of sub-populations in competition. One difference is that the power law mutation rates regime is a bit more general in the present work, allowing each mutation probability to scale differently; but, as mentioned by the authors, the result in [8] can be adapted to this more general regime. Nevertheless, Theorem 2.2 is a less refined result than Theorem 2.1. A short computational sketch of the recursive construction above is given below; we then use the example of Figure 4 to make explicit, under Assumption (5), the contribution of Theorem 2.1 compared to Theorem 2.2.
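The following Python sketch is a simplified, illustrative implementation of the recursion above. It does not reproduce the verbatim bookkeeping of the theorem: the driven sets \(C_{j}\) and the tie-breaking sets \(\nu^{(-)},\nu^{+}\) are replaced by re-scanning for the next event (zero-delay events are processed one at a time), which suffices for generic, tie-free inputs; the example graph and rates at the bottom are hypothetical.

```python
from math import inf

def slope_limits(edges, lam, lam0, horizon):
    """Simplified sketch of the limits x_v of Theorem 2.2 (tie-free inputs).

    edges: dict {(u, v): label ell(u, v)}; lam: dict {v: growth rate};
    lam0 = lam[0]. Returns, for each trait, its breakpoints (t, x_v(t)).
    """
    x = {v: 0.0 for v in lam}        # current values x_v(Delta_j)
    slope = {v: 0.0 for v in lam}    # current slopes lambda_j(v)
    slope[0] = lam0
    alive = {0}
    t = 0.0
    history = {v: [(0.0, 0.0)] for v in lam}
    while t < horizon:
        t_next, event = inf, None
        for (u, v), l in edges.items():
            if u not in alive:
                continue
            if v not in alive and slope[u] > 0:       # birth of trait v
                s = t + max(0.0, lam0 * l - x[u]) / slope[u]
                if s < t_next:
                    t_next, event = s, ("birth", u, v)
            elif v in alive and slope[u] > slope[v]:  # u starts driving v
                s = t + max(0.0, x[v] + lam0 * l - x[u]) / (slope[u] - slope[v])
                if s < t_next:
                    t_next, event = s, ("drive", u, v)
        t_next = min(t_next, horizon)
        for v in lam:                                 # advance all traits linearly
            x[v] += slope[v] * (t_next - t)
            history[v].append((t_next, x[v]))
        t = t_next
        if event is None or t >= horizon:
            break
        kind, u, v = event
        if kind == "birth":
            alive.add(v)
            slope[v] = max(lam[v], slope[u])          # new trait: max of the two
        else:
            slope[v] = slope[u]                       # driven trait takes the slope
    return history

# Hypothetical example: 0 -> 1 -> 2 with labels 1; trait 2 is selective.
edges = {(0, 1): 1.0, (1, 2): 1.0}
lam = {0: 1.0, 1: 0.5, 2: 1.5}
print(slope_limits(edges, lam, lam0=1.0, horizon=5.0))
```

In this example, trait 1 is born at time 1 with slope \(\lambda(0)\) (it is driven from birth), and trait 2 is born at time 2 with its own, larger, rate: the slopes are non-decreasing and take values among the growth rates, as stated in the theorem.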
For the example of Figure 4, the asymptotic function \(x\) obtained from Theorem 2.2 for the purple trait is \(x(t)=\mathbbm{1}_{\{t\geq 4\}}\lambda(0)(t-4)\). In the caption of Figure 4 it is already made explicit that only the plain green path contributes to the size order of the purple mutant sub-population. If one denotes by \(1\), \(2\) and \(3\) respectively the vertices on the plain green path, so that it is exactly \((0,1,2,3)\), where \(3\) is the purple vertex, Theorem 2.1 gives that the asymptotic limit for the purple vertex is, for all \(t\geq 4\), \[\frac{2\alpha(0)\mu(0,1)}{\lambda(0)}\cdot\frac{2\alpha(1)\mu(1,2)}{\lambda(0)}\cdot\frac{2\alpha(2)\mu(2,3)}{\lambda(0)-\lambda(3)}W\int_{3}^{t}\left(\int_{1}^{u}ds\right)du\cdot n^{t-4}\log^{2}(n)=\left(\frac{t^{2}}{2}-t-\frac{3}{2}\right)\frac{16\alpha(0)\alpha(1)\alpha(2)}{\lambda^{2}(0)(\lambda(0)-\lambda(3))}Wn^{t-4}\log^{2}(n),\] because \(\mu(0,1)=\mu(1,2)=2\) and \(\mu(2,3)=1\). In particular, Theorem 2.2 captures only the power of \(n\), which is \(t-4\), whereas with Theorem 2.1 we capture the stochasticity \(W\), a supplementary scaling factor \(\log^{2}(n)\), a time polynomial \(\frac{t^{2}}{2}-t-\frac{3}{2}\), and also a constant depending only on the parameters of the visited vertices, \(\frac{16\alpha(0)\alpha(1)\alpha(2)}{\lambda^{2}(0)(\lambda(0)-\lambda(3))}\). To our knowledge, capturing this level of complexity under the large population power law mutation rates limit has not been done before. It opens the way to identifying the graph structure from data, as well as to designing statistical tools.

## 3 First-order asymptotics of the mutant sub-populations for an infinite mono-directional graph

In this section consider the model described in Section 1 on the following particular infinite mono-directional graph: \[\left(V,E\right)=\left(\mathbb{N}_{0},\{(i,i+1),i\in\mathbb{N}_{0}\}\right).\] Considering this special case will allow us to deal with cycles (in particular cycles generated by backward mutations) in the general finite graph case. Assume the non-increasing growth rate condition (5). For simplicity of notation, introduce for all \(i\in\mathbb{N}_{0}\) the notations \(\mu_{i}^{(n)}:=\mu^{(n)}(i,i+1)\) and \(\ell(i):=\ell(i,i+1)\). This means that the following mutation regime is considered: \[\forall i\in\mathbb{N}_{0},n^{\ell(i)}\mu_{i}^{(n)}\underset{n\rightarrow\infty}{\longrightarrow}\mu_{i}. \tag{15}\] Assume in the case of the infinite mono-directional graph that \[\ell^{*}:=\inf\{\ell(i):i\in\mathbb{N}_{0}\}>0.\] Again, for \(i\in\mathbb{N}_{0}\) denote by \(\alpha_{i}\), \(\beta_{i}\) and \(\lambda_{i}\) the division, death and growth rates associated to trait \(i\), instead of \(\alpha(i),\beta(i)\) and \(\lambda(i)\). In this particular case, three different scenarios can happen during a division event of a cell of trait \(i\in\mathbb{N}_{0}\):
* with probability \(\left(1-\mu_{i}^{(n)}\right)^{2}\) each daughter cell keeps the trait \(i\) of its mother cell,
* with probability \(2\mu_{i}^{(n)}\left(1-\mu_{i}^{(n)}\right)\) exactly one of the daughter cells mutates to the next trait \(i+1\), while the other daughter cell keeps the trait \(i\) of its mother cell,
* with probability \(\left(\mu_{i}^{(n)}\right)^{2}\) both daughter cells mutate to the next trait \(i+1\).
A graphical representation of the model can be found in Figure 5.
In particular, it means that any lineage of a cell of trait \(i\) follows a birth-death branching process with rates \(\alpha_{i}\left(1-\mu_{i}^{(n)}\right)^{2}\) and \(\beta_{i}+\alpha_{i}\left(\mu_{i}^{(n)}\right)^{2}\) respectively. Thus introduce the birth, death and growth rates of any lineage of a cell of trait \(i\) as \[\alpha_{i}^{(n)} :=\alpha_{i}\left(1-\mu_{i}^{(n)}\right)^{2},\] \[\beta_{i}^{(n)} :=\beta_{i}+\alpha_{i}\left(\mu_{i}^{(n)}\right)^{2},\] \[\lambda_{i}^{(n)} :=\alpha_{i}^{(n)}-\beta_{i}^{(n)}=\lambda_{i}-2\alpha_{i}\mu_{i}^{(n)}.\] Compared to the general finite graph, for any trait \(i\in\mathbb{N}\) there is only one path from trait \(0\) to \(i\) in this mono-directional graph. In particular, it implies that \[t(i) =\sum_{j=0}^{i-1}\ell(j),\] \[\theta(i) =|\{j\in\llbracket 1,i\rrbracket:\lambda_{j}=\lambda_{0}\}|.\] The sequence \(\left(\left(Z_{i}^{(n)}\right)_{i\in\mathbb{N}_{0}}\right)_{n\in\mathbb{N}}\) is mathematically constructed using independent Poisson Point Measures (PPMs). Let \(Q_{0}^{b}(ds,d\theta)\), \(Q_{0}^{d}(ds,d\theta)\), \(\left(Q_{i}(ds,d\theta)\right)_{i\in\mathbb{N}}\), \(\left(N_{i}(ds,d\theta)\right)_{i\in\mathbb{N}_{0}}\), and \(\left(Q_{i}^{m}(ds,d\theta)\right)_{i\in\mathbb{N}_{0}}\) be independent PPMs with intensity \(dsd\theta\). The sub-population of wild-type cells is \[Z_{0}^{(n)}(t):= 1+\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbbm{1}_{\left\{\theta\leq\alpha_{0}^{(n)}Z_{0}^{(n)}(s-)\right\}}Q_{0}^{b}(ds,d\theta)-\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbbm{1}_{\left\{\theta\leq\beta_{0}Z_{0}^{(n)}(s-)\right\}}Q_{0}^{d}(ds,d\theta)-H_{0}^{(n)}(t), \tag{16}\] and for all \(i\in\mathbb{N}\) \[Z_{i}^{(n)}(t) :=\int_{0}^{t}\int_{\mathbb{R}^{+}}\Bigg{(}\mathbbm{1}_{\left\{\theta\leq\alpha_{i}^{(n)}Z_{i}^{(n)}(s-)\right\}}-\mathbbm{1}_{\left\{\alpha_{i}^{(n)}Z_{i}^{(n)}(s-)\leq\theta\leq\left(\alpha_{i}^{(n)}+\beta_{i}\right)Z_{i}^{(n)}(s-)\right\}}\Bigg{)}Q_{i}(ds,d\theta)\] \[+K_{i-1}^{(n)}(t)+2H_{i-1}^{(n)}(t)-H_{i}^{(n)}(t),\] where for all \(i\in\mathbb{N}_{0}\) \[K_{i}^{(n)}(t) :=\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbbm{1}_{\big{\{}\theta\leq 2\alpha_{i}\mu_{i}^{(n)}\big{(}1-\mu_{i}^{(n)}\big{)}Z_{i}^{(n)}(s^{-})\big{\}}}N_{i}(ds,d\theta),\] \[H_{i}^{(n)}(t) :=\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbbm{1}_{\big{\{}\theta\leq\alpha_{i}\big{(}\mu_{i}^{(n)}\big{)}^{2}Z_{i}^{(n)}(s^{-})\big{\}}}Q_{i}^{m}(ds,d\theta).\]

Figure 5: Dynamical representation of the infinite mono-directional graph

The processes \(\left(K_{i}^{(n)}(t)\right)_{t\in\mathbb{R}^{+}}\) and \(\left(H_{i}^{(n)}(t)\right)_{t\in\mathbb{R}^{+}}\) count the number of mutations up to time \(t\) from the sub-population of trait \(i\) leading to exactly one, respectively two, mutated daughter cells of trait \(i+1\). Let \((Z_{0}(t))_{t\in\mathbb{R}^{+}}\) be the birth-death branching process with rates \(\alpha_{0}\) and \(\beta_{0}\) respectively, constructed in the following way: \[Z_{0}(t)=1+\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbbm{1}_{\{\theta\leq\alpha_{0}Z_{0}(s^{-})\}}Q_{0}^{b}(ds,d\theta)-\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbbm{1}_{\{\theta\leq\beta_{0}Z_{0}(s^{-})\}}Q_{0}^{d}(ds,d\theta). \tag{17}\] Notice in particular that with such a construction the monotone coupling \[\forall t\geq 0,Z_{0}^{(n)}(t)\leq Z_{0}(t)\ a.s. \tag{18}\] immediately follows.
Denote by \[W:=\lim_{t\to\infty}e^{-\lambda_{0}t}Z_{0}(t), \tag{19}\] the almost sure limit of the positive martingale \(\left(e^{-\lambda_{0}t}Z_{0}(t)\right)_{t\in\mathbb{R}^{+}}\), whose law is \[W\stackrel{{ law}}{{=}}Ber\left(\frac{\lambda_{0}}{\alpha_{0}}\right)\otimes Exp\left(\frac{\lambda_{0}}{\alpha_{0}}\right), \tag{20}\] see [12] Section 1.1, or [11] Theorem 1.

### The wild-type dynamics

Using the same PPMs \(Q_{0}^{b}\) and \(Q_{0}^{d}\) in the construction of \(\left(Z_{0}^{(n)}\right)_{n\in\mathbb{N}}\) and \(Z_{0}\), see Equations (16) and (17), allows one to control the size dynamics of the sequence over time by comparing it with the size of \(Z_{0}\). More precisely, we show that the natural martingale associated to \(Z_{0}^{(n)}\) can be compared to the natural one of \(Z_{0}\). This comes from the fact that \(\left(\alpha_{0}^{(n)},\beta_{0}^{(n)}\right)\underset{n\to\infty}{\to}(\alpha_{0},\beta_{0}).\) The control is obtained along the whole trajectory and in probability. The rate of convergence is quantified to be at most of order \(\mathcal{O}\left(\mu_{0}^{(n)}\right)\).

**Lemma 3.1**.: _There exist \(C(\alpha_{0},\lambda_{0})>0\) and \(N\in\mathbb{N}\) such that for all \(\varepsilon>0\) and \(n\geq N\)_ \[\mathbb{P}\left(\sup_{t\in\mathbb{R}^{+}}\left|e^{-\lambda_{0}t}Z_{0}(t)-e^{-\lambda_{0}^{(n)}t}Z_{0}^{(n)}(t)\right|\geq\varepsilon\right)\leq\frac{C(\alpha_{0},\lambda_{0})}{\varepsilon^{2}}\mu_{0}^{(n)}\underset{n\to\infty}{\longrightarrow}0.\]

Proof of Lemma 3.1.: Notice that \(\left(e^{-\lambda_{0}t}Z_{0}(t)-e^{-\lambda_{0}^{(n)}t}Z_{0}^{(n)}(t)\right)_{t\in\mathbb{R}^{+}}\) is a martingale, as the difference of the martingales \(\left(e^{-\lambda_{0}t}Z_{0}(t)\right)_{t\in\mathbb{R}^{+}}\) and \(\left(e^{-\lambda_{0}^{(n)}t}Z_{0}^{(n)}(t)\right)_{t\in\mathbb{R}^{+}}\). Let \((f(m))_{m\in\mathbb{N}}\) be a non-decreasing sequence satisfying \(f(m)\underset{m\to\infty}{\rightarrow}\infty\). Using Doob's Inequality in \(L^{2}\) (see [24] Proposition 3.15) we get \[\mathbb{P}\left(\sup_{t\in[0,f(m)]}\left|e^{-\lambda_{0}t}Z_{0}(t)-e^{-\lambda_{0}^{(n)}t}Z_{0}^{(n)}(t)\right|\geq\varepsilon\right) \tag{21}\] \[\leq\frac{4}{\varepsilon^{2}}\mathbb{E}\left[\left(e^{-\lambda_{0}f(m)}Z_{0}(f(m))-e^{-\lambda_{0}^{(n)}f(m)}Z_{0}^{(n)}(f(m))\right)^{2}\right]\] \[=\frac{4}{\varepsilon^{2}}\mathbb{E}\left[e^{-2\lambda_{0}f(m)}Z_{0}(f(m))^{2}+e^{-2\lambda_{0}^{(n)}f(m)}Z_{0}^{(n)}(f(m))^{2}-2e^{-(\lambda_{0}+\lambda_{0}^{(n)})f(m)}Z_{0}(f(m))Z_{0}^{(n)}(f(m))\right].\] Using Ito's formula and (18) it follows that \[\mathbb{E}\left[Z_{0}(t)Z_{0}^{(n)}(t)\right]=1+\int_{0}^{t}\left(\lambda_{0}+\lambda_{0}^{(n)}\right)\mathbb{E}\left[Z_{0}(s)Z_{0}^{(n)}(s)\right]ds+\int_{0}^{t}\left(\alpha_{0}^{(n)}+\beta_{0}\right)\mathbb{E}\left[Z_{0}^{(n)}(s)\right]ds.\] Solving this equation, we obtain for all \(t\geq 0\) \[\mathbb{E}\left[Z_{0}(t)Z_{0}^{(n)}(t)\right]=\frac{\alpha_{0}+\alpha_{0}^{(n)}}{\lambda_{0}}e^{\left(\lambda_{0}+\lambda_{0}^{(n)}\right)t}-\frac{\alpha_{0}^{(n)}+\beta_{0}}{\lambda_{0}}e^{\lambda_{0}^{(n)}t}. \tag{22}\]
Similarly, we have \[\mathbb{E}\left[\left(Z_{0}(t)\right)^{2}\right]=\frac{2\alpha_{0}}{\lambda_{0}}e^{2\lambda_{0}t}-\frac{\alpha_{0}+\beta_{0}}{\lambda_{0}}e^{\lambda_{0}t}\leq\frac{2\alpha_{0}}{\lambda_{0}}e^{2\lambda_{0}t}, \tag{23}\] \[\mathbb{E}\left[\left(Z_{0}^{(n)}(t)\right)^{2}\right]=\frac{2\alpha_{0}^{(n)}}{\lambda_{0}^{(n)}}e^{2\lambda_{0}^{(n)}t}-\frac{\alpha_{0}^{(n)}+\beta_{0}^{(n)}}{\lambda_{0}^{(n)}}e^{\lambda_{0}^{(n)}t}\leq\frac{2\alpha_{0}^{(n)}}{\lambda_{0}^{(n)}}e^{2\lambda_{0}^{(n)}t}.\] Consequently, combining (21), (22) and (23) gives \[\mathbb{P}\left(\sup_{t\in[0,f(m)]}\left|e^{-\lambda_{0}t}Z_{0}(t)-e^{-\lambda_{0}^{(n)}t}Z_{0}^{(n)}(t)\right|\geq\varepsilon\right)\] \[\leq\frac{4}{\varepsilon^{2}}\left(\frac{2\alpha_{0}}{\lambda_{0}}+\frac{2\alpha_{0}^{(n)}}{\lambda_{0}^{(n)}}-2\frac{\alpha_{0}+\alpha_{0}^{(n)}}{\lambda_{0}}+2\frac{\alpha_{0}^{(n)}+\beta_{0}}{\lambda_{0}}e^{-\lambda_{0}f(m)}\right).\] The event \(\left\{\sup_{t\in[0,f(m)]}\left|e^{-\lambda_{0}t}Z_{0}(t)-e^{-\lambda_{0}^{(n)}t}Z_{0}^{(n)}(t)\right|\geq\varepsilon\right\}\) is increasing in the parameter \(m\). Taking the limit \(m\to\infty\), it follows by monotonicity of the measure that \[\mathbb{P}\left(\sup_{t\in\mathbb{R}^{+}}\left|e^{-\lambda_{0}t}Z_{0}(t)-e^{-\lambda_{0}^{(n)}t}Z_{0}^{(n)}(t)\right|\geq\varepsilon\right)\leq\frac{4}{\varepsilon^{2}}\left(\frac{2\alpha_{0}}{\lambda_{0}}+\frac{2\alpha_{0}^{(n)}}{\lambda_{0}^{(n)}}-2\frac{\alpha_{0}+\alpha_{0}^{(n)}}{\lambda_{0}}\right).\] Recalling that \(\lambda_{0}^{(n)}=\lambda_{0}-2\alpha_{0}\mu_{0}^{(n)}\), it easily follows that \[\frac{2\alpha_{0}^{(n)}}{\lambda_{0}^{(n)}}=\frac{2\alpha_{0}}{\lambda_{0}}+\frac{4\beta_{0}\alpha_{0}}{\lambda_{0}^{2}}\mu_{0}^{(n)}+\mathcal{O}\left(\left(\mu_{0}^{(n)}\right)^{2}\right),\] \[2\frac{\alpha_{0}+\alpha_{0}^{(n)}}{\lambda_{0}}=\frac{4\alpha_{0}}{\lambda_{0}}-\frac{4\alpha_{0}}{\lambda_{0}}\mu_{0}^{(n)}+\mathcal{O}\left(\left(\mu_{0}^{(n)}\right)^{2}\right).\] Finally, we have \[\mathbb{P}\left(\sup_{t\in\mathbb{R}^{+}}\left|e^{-\lambda_{0}t}Z_{0}(t)-e^{-\lambda_{0}^{(n)}t}Z_{0}^{(n)}(t)\right|\geq\varepsilon\right) \leq\frac{4}{\varepsilon^{2}}\left(\frac{4\beta_{0}\alpha_{0}}{\lambda_{0}^{2}}+\frac{4\alpha_{0}}{\lambda_{0}}\right)\mu_{0}^{(n)}+\mathcal{O}\left(\left(\mu_{0}^{(n)}\right)^{2}\right)=\frac{16\alpha_{0}^{2}}{\varepsilon^{2}\lambda_{0}^{2}}\mu_{0}^{(n)}+\mathcal{O}\left(\left(\mu_{0}^{(n)}\right)^{2}\right),\] which concludes the proof.

The next lemma gives an asymptotic comparison between the random stopping times \(\eta_{t}^{(n)}\), at which the wild-type population reaches the size \(n^{t}\), and the deterministic times \(\mathfrak{t}_{t}^{(n)}\). This comparison is given in probability, conditioned on \(\{W>0\}\). It explains why these deterministic times are the natural deterministic candidates for studying the asymptotic behavior of the mutant sub-populations at the random stopping times. In particular, it shows that the stochastic correction between the random time-scale and the deterministic one is asymptotically \(\frac{\log(W)}{\lambda_{0}}\). The result is obtained uniformly in time, on intervals whose lengths tend to infinity not too quickly.
**Lemma 3.2**.: _For all \(\varepsilon>0\), \((T_{1},T_{2})\in\left(\mathbb{R}^{+}\right)^{2}\) and \(\varphi_{n}\) such that \(\log(n)=o(\varphi_{n})\) and \(\varphi_{n}=o\left(n^{\ell(0)}\right)\), we have_ \[\mathbb{P}\left(\sup_{t\in\left[T_{1},T_{2}\frac{\varphi_{n}}{\log(n)}\right]}\left|\eta_{t}^{(n)}-\left(\mathfrak{t}_{t}^{(n)}-\frac{\log(W)}{\lambda_{0}}\right)\right|\geq\varepsilon\middle|W>0\right)\underset{n\to\infty}{\longrightarrow}0.\]

Proof of Lemma 3.2.: Let \(\varepsilon>0\) and for all \(n\in\mathbb{N}\) introduce the event \[A^{(n)}:=\Bigg{\{}\sup_{t\in\left[T_{1},T_{2}\frac{\varphi_{n}}{\log(n)}\right]}\left|\eta_{t}^{(n)}-\left(\mathfrak{t}_{t}^{(n)}-\frac{\log(W)}{\lambda_{0}}\right)\right|\geq\varepsilon\Bigg{\}}.\] **Step 1:** We start by showing that for all \(0<\delta_{1}<\delta_{2}\) \[\mathbb{P}\left(A^{(n)}\cap\{\delta_{1}<W<\delta_{2}\}\right)\underset{n\to\infty}{\longrightarrow}0. \tag{24}\] Let \(\nu>0\) and \(\tilde{\varepsilon}<\frac{\delta_{1}}{2}\). Firstly, since \(e^{-\lambda_{0}t}Z_{0}(t)\underset{t\to\infty}{\rightarrow}W\) almost surely, \(Y(t):=\sup_{s\in[t,\infty]}\left|e^{-\lambda_{0}s}Z_{0}(s)-W\right|\underset{t\to\infty}{\longrightarrow}0\) almost surely, and as a consequence in probability. Thus, introducing for all \(t>0\) the event \(B_{t}:=\{Y(t)\leq\tilde{\varepsilon}\}\), there exists \(t_{1}>0\) such that for all \(t\geq t_{1}\), \(\mathbb{P}\left(B_{t}\right)\geq 1-\frac{\nu}{3}\). Secondly, using Lemma 3.1, there exists \(n_{1}\in\mathbb{N}\) such that for all \(n\geq n_{1}\) \[\mathbb{P}\left(C^{(n)}\right)\geq 1-\frac{\nu}{3}\text{ with }C^{(n)}:=\left\{\sup_{t\in\mathbb{R}^{+}}\left|e^{-\lambda_{0}t}Z_{0}(t)-e^{-\lambda_{0}^{(n)}t}Z_{0}^{(n)}(t)\right|\leq\tilde{\varepsilon}\right\}.\] Combining these two facts, we obtain the following inequality for all \(n\geq n_{1}\): \[\mathbb{P}\left(A^{(n)}\cap\{\delta_{1}<W<\delta_{2}\}\right)\leq\mathbb{P}\left(A^{(n)}\cap\{\delta_{1}<W<\delta_{2}\}\cap B_{t_{1}}\cap C^{(n)}\right)+\frac{2\nu}{3}. \tag{25}\] It remains to show that \(\mathbb{P}\left(A^{(n)}\cap\{\delta_{1}<W<\delta_{2}\}\cap B_{t_{1}}\cap C^{(n)}\right)\leq\frac{\nu}{3}\) for \(n\) large enough. Under the event \(B_{t_{1}}\) we have \[\forall s\geq t_{1},(W-\widetilde{\varepsilon})\,e^{\lambda_{0}s}\leq Z_{0}(s)\leq(W+\widetilde{\varepsilon})\,e^{\lambda_{0}s}.\] Using that \(\lambda_{0}^{(n)}\leq\lambda_{0}\), we get that under the event \(C^{(n)}\) \[\forall s\in\mathbb{R}^{+},\left(e^{-\lambda_{0}s}Z_{0}(s)-\widetilde{\varepsilon}\right)e^{\lambda_{0}^{(n)}s}\leq Z_{0}^{(n)}(s)\leq\left(e^{-\lambda_{0}s}Z_{0}(s)+\widetilde{\varepsilon}\right)e^{\lambda_{0}^{(n)}s}\leq Z_{0}(s)+\widetilde{\varepsilon}e^{\lambda_{0}s}.\] Combining the two previous inequalities, it follows that under \(\{\delta_{1}<W<\delta_{2}\}\cap B_{t_{1}}\cap C^{(n)}\) we have \[\forall s\geq t_{1},(W-2\widetilde{\varepsilon})\,e^{\lambda_{0}^{(n)}s}\leq Z_{0}^{(n)}(s)\leq(W+2\widetilde{\varepsilon})\,e^{\lambda_{0}s}\leq(\delta_{2}+2\widetilde{\varepsilon})\,e^{\lambda_{0}s}.\] Notice that by definition of \(\widetilde{\varepsilon}\), we have \(W-2\widetilde{\varepsilon}>0\) under the event \(\{\delta_{1}<W\}\).
Now introduce the following quantities, which are almost surely increasing in time: \[\underline{T}_{\delta_{2},t}^{(n)} :=\inf\{s>0:(\delta_{2}+2\widetilde{\varepsilon})e^{\lambda_{0}s}\geq n^{t}\},\] \[\underline{T}_{t}^{(n)} :=\inf\{s>0:(W+2\widetilde{\varepsilon})e^{\lambda_{0}s}\geq n^{t}\},\] \[\overline{T}_{t}^{(n)} :=\inf\{s>0:(W-2\widetilde{\varepsilon})e^{\lambda_{0}^{(n)}s}\geq n^{t}\}.\] There exists \(n_{2}\in\mathbb{N}\) such that for all \(n\geq n_{2}\) \[t_{1}\leq\underline{T}_{\delta_{2},T_{1}}^{(n)}.\] Moreover, under the event \(\{\delta_{1}<W<\delta_{2}\}\cap B_{t_{1}}\cap C^{(n)}\) we have for all \(n\geq\max(n_{1},n_{2})\) and for all \(t\in\left[T_{1},T_{2}\frac{\varphi_{n}}{\log(n)}\right]\) \[\underline{T}_{\delta_{2},T_{1}}^{(n)}\leq\underline{T}_{\delta_{2},t}^{(n)}\leq\underline{T}_{t}^{(n)}\leq\eta_{t}^{(n)}\leq\overline{T}_{t}^{(n)}.\] Using that \(\frac{\lambda_{0}}{\lambda_{0}^{(n)}}=\left(1-\frac{2\alpha_{0}\mu_{0}^{(n)}}{\lambda_{0}}\right)^{-1}\) and the previous equation, we derive that \(\forall t\in\left[T_{1},T_{2}\frac{\varphi_{n}}{\log(n)}\right]\) \[\frac{t\log(n)}{\lambda_{0}}-\frac{\log(W)}{\lambda_{0}}-\frac{\log\left(1+2\tilde{\varepsilon}/W\right)}{\lambda_{0}}\leq\eta_{t}^{(n)}\leq\left(\frac{t\log(n)}{\lambda_{0}}-\frac{\log(W)}{\lambda_{0}}-\frac{\log(1-2\tilde{\varepsilon}/W)}{\lambda_{0}}\right)\cdot\frac{1}{1-2\alpha_{0}\mu_{0}^{(n)}/\lambda_{0}},\] from which we obtain \[-\frac{\log(1+2\tilde{\varepsilon}/W)}{\lambda_{0}}\leq\eta_{t}^{(n)}-\left(\frac{t\log(n)}{\lambda_{0}}-\frac{\log(W)}{\lambda_{0}}\right)\] \[\leq\frac{1}{1-2\alpha_{0}\mu_{0}^{(n)}/\lambda_{0}}\cdot\left(\left(\frac{t\log(n)}{\lambda_{0}}-\frac{\log(W)}{\lambda_{0}}\right)\frac{2\alpha_{0}\mu_{0}^{(n)}}{\lambda_{0}}-\frac{\log(1-2\tilde{\varepsilon}/W)}{\lambda_{0}}\right).\] In particular, it implies that for all \(n\geq\max(n_{1},n_{2})\) \[\sup_{t\in\left[T_{1},T_{2}\frac{\varphi_{n}}{\log(n)}\right]}\left|\eta_{t}^{(n)}-\left(\frac{t\log(n)}{\lambda_{0}}-\frac{\log(W)}{\lambda_{0}}\right)\right|\] \[\leq\max\left(\frac{\log(1+2\tilde{\varepsilon}/W)}{\lambda_{0}};\frac{1}{1-2\alpha_{0}\mu_{0}^{(n)}/\lambda_{0}}\cdot\left(\left(\frac{T_{2}\varphi_{n}}{\lambda_{0}}-\frac{\log(W)}{\lambda_{0}}\right)\frac{2\alpha_{0}\mu_{0}^{(n)}}{\lambda_{0}}-\frac{\log(1-2\tilde{\varepsilon}/W)}{\lambda_{0}}\right)\right).\] Denote by \(D^{(n)}\) the right hand side of the last inequality. Then it directly follows that \[\mathbb{P}\left(A^{(n)}\cap\left\{\delta_{1}<W<\delta_{2}\right\}\cap B_{t_{1}}\cap C^{(n)}\right)\leq\mathbb{P}\left(\left\{D^{(n)}\geq\varepsilon\right\}\cap\left\{\delta_{1}<W<\delta_{2}\right\}\right). \tag{26}\] Because \(\varphi_{n}\) was chosen such that \(\varphi_{n}\mu_{0}^{(n)}\underset{n\to\infty}{\to}0\), it is possible to find an adequate \(\widetilde{\varepsilon}>0\) and \(n_{3}\in\mathbb{N}\) such that for all \(n\geq n_{3}\), \(\mathbb{P}\left(\left\{D^{(n)}\geq\varepsilon\right\}\cap\left\{\delta_{1}<W<\delta_{2}\right\}\right)\leq\frac{\nu}{3}\). Together with (25) and (26), this yields (24).
**Step 2:** To complete the proof we are going to prove that \(\mathbb{P}\left(A^{(n)}\cap\left\{W>0\right\}\right)\underset{n\to\infty}{ \longrightarrow}0.\) We have \[\mathbb{P}\left(A^{(n)}\cap\left\{W>0\right\}\right)\leq\mathbb{P}\left(A^{(n )}\cap\left\{\delta_{1}<W<\delta_{2}\right\}\right)+\mathbb{P}\left(0<W< \delta_{1}\right)+\mathbb{P}\left(W>\delta_{2}\right).\] Using Equation (24) we obtain that \[\limsup_{n\to\infty}\mathbb{P}\left(A^{(n)}\cap\left\{W>0\right\}\right)\leq \mathbb{P}\left(0<W<\delta_{1}\right)+\mathbb{P}\left(\delta_{2}<W\right).\] Taking the limit when \(\left(\delta_{1},\delta_{2}\right)\underset{n\to\infty}{\to}\left(0,\infty\right)\) and because \(W\) is finite almost surely (see (20)) we conclude. **Remark 3.1**.: _From Lemma 3.2, it follows the useful results_ \[\mathbb{P}\left(\sup_{t\in\left[T_{1},T_{2}\frac{\omega_{n}}{\log(n)}\right]} \left|\frac{\eta_{t}^{(n)}}{\log(n)}\lambda_{0}-t\right|\geq\varepsilon\middle| W>0\right)\underset{n\to\infty}{\longrightarrow}0,\] _and_ \[\mathbb{P}\left(\sup_{t\in\left[T_{1},T_{2}\frac{\omega_{n}}{\log(n)}\right]} \left|e^{-\lambda_{0}\left(\eta_{t}^{(n)}-t_{t}^{(n)}\right)}-W\right|\geq \varepsilon\middle|W>0\right)\underset{n\to\infty}{\longrightarrow}0.\] ### The mutant sub-populations dynamics in the deterministic time-scale (Theorem 2.1 (i)) In this subsection, Equations (11) and (12) are proven for the mono-directional graph. It will be done in two steps. The first one will consist in showing the result for a fixed \(s\in\mathbb{R}\) and uniformly in the parameter \(t\). Then in the second step, the result will be proved uniformly in the parameter \(s\). #### 3.2.1 Uniform control on the time parameter \(\mathbf{t}\) In this subsection we are going to prove the following proposition, which is a less refine result of (11) and (12), because the result is not uniform on the parameter \(s\). **Proposition 3.1**.: _Let \(i\in\mathbb{N}\), \((\psi_{n}(i),\gamma_{n}(i))\underset{n\to\infty}{\rightarrow}\infty\) such that it exists \(\varphi_{n}(i)\underset{n\to\infty}{\rightarrow}\infty\) such that \(\gamma_{n}(i)=\frac{\log(n)}{\log(\log(n))\theta(i-1)+\varphi_{n}(i)}\) and \(e^{\varphi_{n}(i)}\log(n)=o(\psi_{n}^{2}(i))\). For all \((t,s)\in\mathbb{R}^{+}\times\mathbb{R}\) define_ \[d_{i}^{(n)}(t,s):= 1_{\left\{t\in[0,t(i)-\gamma_{i}^{\infty}(i))\right\}}+ 1_{\left\{t\in[t(i)-\gamma_{i}^{\infty}(i),t(i))\right\}}\psi_{n}\log^{ \theta(i)-1}(n)\] \[+1_{\left\{t\in[t(i),\infty)\right\}}n^{t-t(i)}\log^{\theta(i)}(n )e^{\lambda(0)s}.\] _Let \(T>0\), \(0<T_{1}<T_{2}\), and \(s\in\mathbb{R}\). Then_ * _If_ \(\lambda_{i}=\lambda_{0}\)__ \[\left(\frac{Z_{i}^{(n)}\left(\mathfrak{k}_{t}^{(n)}+s\right)}{d_{i}^{(n)}(t,s )}\right)_{t\in[0,T]}\underset{n\to\infty}{\longrightarrow}\left(Ww_{i}(t) \right)_{t\in[0,T]},\] (27) _in probability in_ \(L^{\infty}\left([0,T]\right)\)_._ * _If_ \(\lambda_{i}<\lambda_{0}\)__ \[\left(\frac{Z_{i}^{(n)}\left(\mathfrak{k}_{t(i)+t}^{(n)}+s\right)}{n^{t}\log^{ \theta(i)}(n)e^{\lambda_{0}s}}\right)_{t\in[T_{1},T_{2}]}\underset{n\to \infty}{\longrightarrow}\left(Ww_{i}(t(i)+t)\right)_{t\in[T_{1},T_{2}]},\] (28) _in probability in_ \(L^{\infty}([T_{1},T_{2}])\)_._ The proof is done by induction on \(i\in\mathbb{N}\). As long as the proof is similar for the initialization and the inductive part the step considered will not be specified. To make the proof the clearer possible it is cut using several lemmas. All the results are obtained using a martingale approach. 
In the next Lemma the martingales that are considered for all the mutant sub-populations are introduced, and their quadratic variations are computed. **Lemma 3.3**.: _For all \(i\in\mathbb{N}\) define_ \[M_{i}^{(n)}(t):=Z_{i}^{(n)}(t)e^{-\lambda_{i}^{(n)}t}-\int_{0}^{t}2\alpha_{i-1 }\mu_{i-1}^{(n)}e^{-\lambda_{i}^{(n)}s}Z_{i-1}^{(n)}(s)ds. \tag{29}\] \(\left(M_{i}^{(n)}(t)\right)_{t\geq 0}\) _is a martingale, with quadratic variation_ \[\left\langle M_{i}^{(n)}\right\rangle_{t}=\int_{0}^{t}2\alpha_{i-1}\mu_{i-1}^{ (n)}e^{-2\lambda_{i}^{(n)}s}Z_{i-1}^{(n)}(s)ds+\left(\alpha_{i}^{(n)}+\beta_{i }^{(n)}\right)\int_{0}^{t}e^{-2\lambda_{i}^{(n)}s}Z_{i}^{(n)}(s)ds. \tag{30}\] Proof of Lemma 3.3.: For all \(t\geq 0\) let \(\mathcal{F}_{i,t}^{(n)}\) the \(\sigma\)-field generated by \(Z_{j}^{(n)}(s)\) for \(0\leq j\leq i\) and \(0\leq s\leq t\). For all \(h\geq 0\) we have \[\mathbb{E}\left[M_{i}^{(n)}(t+h)-M_{i}^{(n)}(t)|\mathcal{F}_{i,t} ^{(n)}\right] =\mathbb{E}\left[Z_{i}^{(n)}(t+h)\middle|\mathcal{F}_{i,t}^{(n)} \right]e^{-\lambda_{i}^{(n)}(t+h)}-Z_{i}^{(n)}(t)e^{-\lambda_{i}^{(n)}t} \tag{31}\] \[-\int_{t}^{t+h}2\alpha_{i-1}\mu_{i-1}^{(n)}e^{-\lambda_{i}^{(n)} s}\mathbb{E}\left[Z_{i-1}^{(n)}(s)\middle|\mathcal{F}_{i,t}^{(n)}\right]ds.\] The forward Chapman-Kolmogorov equation gives the time-differential equation for \(\mathbb{E}\left[Z_{i}^{(n)}(t)\right]\) \[\frac{d\mathbb{E}\left[Z_{i}^{(n)}(t)\right]}{dt} =\mathbb{E}\left[\alpha_{i}^{(n)}Z_{i}^{(n)}(t)-\beta_{i}^{(n)}Z_ {i}^{(n)}(t)\right]+\mathbb{E}\left[2\alpha_{i-1}\mu_{i-1}^{(n)}(1-\mu_{i-1}^ {(n)})Z_{i-1}^{(n)}(t)\right]\] \[+2\mathbb{E}\left[\alpha_{i-1}\left(\mu_{i-1}^{(n)}\right)^{2}Z_ {i-1}^{(n)}(t)\right]\] \[=\lambda_{i}^{(n)}\mathbb{E}\left[Z_{i}^{(n)}(t)\right]+2\alpha_{ i-1}\mu_{i-1}^{(n)}\mathbb{E}\left[Z_{i-1}^{(n)}(t)\right],\] which leads to \[\mathbb{E}_{Z_{i}^{(n)}(0)}\left[Z_{i}^{(n)}(t)\right]=Z_{i}^{(n)}(0)e^{\lambda_{ i}^{(n)}t}+\int_{0}^{t}2\alpha_{i-1}\mu_{i-1}^{(n)}\mathbb{E}_{Z_{i}^{(n)}(0)} \left[Z_{i-1}^{(n)}(s)\right]e^{\lambda_{i}^{(n)}(t-s)}ds.\] In particular using the Markov property we obtain that \[\mathbb{E}\left[Z_{i}^{(n)}(t+h)\Big{|}\mathcal{F}_{i,t}^{(n)}\right]=Z_{i}^{( n)}(t)e^{\lambda_{i}^{(n)}h}+\int_{t}^{t+h}2\alpha_{i-1}\mu_{i-1}^{(n)} \mathbb{E}\left[Z_{i-1}^{(n)}(s)|\mathcal{F}_{i,t}^{(n)}\right]e^{\lambda_{i}^ {(n)}(t+h-s)}ds. \tag{32}\] Combining (31) and (32) it follows that \(\left(M_{i}^{(n)}(t)\right)_{t\in\mathbb{R}^{+}}\) is a martingale. 
Let \(F^{(n)}(t,x,y):=(e^{-\lambda_{i}^{(n)}t}x-y)^{2}\), it follows that \[\frac{\partial F^{(n)}}{\partial t}(t,x,y) =-2\lambda_{i}^{(n)}xe^{-\lambda_{i}^{(n)}t}F^{(n)}(t,x,y),\] \[\frac{\partial F^{(n)}}{\partial y}(t,x,y) =-2F^{(n)}.\] Applying Ito's formula with \(x=Z_{i}^{(n)}(t)\) and \(y=\int_{0}^{t}2\alpha_{i-1}\mu_{i-1}^{(n)}e^{-\lambda_{i}^{(n)}s}Z_{i-1}^{(n) }(s)ds\) we obtain \[\left(M_{i}^{(n)}(t)\right)^{2}=F\left(t,Z_{i}^{(n)}(t),\int_{0} ^{t}2\alpha_{i-1}\mu_{i-1}^{(n)}e^{-\lambda_{i}^{(n)}s}Z_{i-1}^{(n)}(s)ds\right)\] \[=F^{(n)}(0,0,0)-2\int_{0}^{t}2\alpha_{i-1}\mu_{i-1}^{(n)}e^{- \lambda_{i}^{(n)}s}Z_{i-1}^{(n)}(s)M_{i}^{(n)}(s)ds-2\lambda_{i}^{(n)}\int_{0 }^{t}e^{-\lambda_{i}^{(n)}s}Z_{i}^{(n)}(s)M_{i}^{(n)}(s)ds\] \[+\int_{0}^{t}\int_{\mathbb{R}^{+}}\left[\left(M_{i}^{(n)}+e^{- \lambda_{i}^{(n)}s}\Bigg{\{}\mathbbm{1}_{\left\{\theta\leq\alpha_{i}^{(n)}Z_{ i}^{(n)}(s-)\right\}}-\mathbbm{1}_{\left\{\alpha_{i}^{(n)}Z_{i}^{(n)}(s-)\leq \theta\leq\left(\alpha_{i}^{(n)}+\beta_{i}\right)Z_{i}^{(n)}(s-)\right\}} \Bigg{\}}\right)^{2}\] \[-\left(M_{i}^{(n)}\right)^{2}\Bigg{]}Q_{i}(ds,d\theta)\] \[+\int_{0}^{t}\int_{\mathbb{R}^{+}}\left[\left(M_{i}^{(n)}+e^{- \lambda_{i}^{(n)}s}\mathbbm{1}_{\left\{\theta\leq 2\alpha_{i-1}\mu_{i-1}^{(n)} \left(1-\mu_{i-1}^{(n)}\right)Z_{i-1}^{(n)}(s-)\right\}}\right)^{2}-\left(M_{i }^{(n)}\right)^{2}\Bigg{]}N_{i-1}(ds,d\theta)\] \[+\int_{0}^{t}\int_{\mathbb{R}^{+}}\left[\left(M_{i}^{(n)}+e^{- \lambda_{i}^{(n)}s}2\mathbbm{1}_{\left\{\theta\leq\alpha_{i-1}\left(\mu_{i-1 }^{(n)}\right)^{2}Z_{i-1}^{(n)}(s-)\right\}}\right)^{2}-\left(M_{i}^{(n)} \right)^{2}\Bigg{]}Q_{i-1}^{m}(ds,d\theta)\] \[+\int_{0}^{t}\int_{\mathbb{R}^{+}}\left[\left(M_{i}^{(n)}-e^{- \lambda_{i}^{(n)}s}\mathbbm{1}_{\left\{\theta\leq\alpha_{i}\left(\mu_{i}^{(n)} \right)^{2}Z_{i}^{(n)}(s-)\right\}}\right)^{2}-\left(M_{i}^{(n)}\right)^{2} \Bigg{]}Q_{i}^{m}(ds,d\theta)\] \[=-2\int_{0}^{t}\left(2\alpha_{i-1}\mu_{i-1}^{(n)}Z_{i-1}^{(n)}(s)+ \lambda_{i}^{(n)}Z_{i}^{(n)}(s)\right)e^{-\lambda_{i}^{(n)}s}M_{i}^{(n)}(s)ds\] \[+2\int_{0}^{t}\left(2\alpha_{i-1}\mu_{i-1}^{(n)}Z_{i-1}^{(n)}(s)+ \lambda_{i}^{(n)}Z_{i}^{(n)}(s)\right)e^{-\lambda_{i}^{(n)}s}M_{i}^{(n)}ds\] \[+\int_{0}^{t}\left[2\alpha_{i-1}\mu_{i-1}^{(n)}Z_{i-1}^{(n)}(s)+ \left(\alpha_{i}^{(n)}+\beta_{i}^{(n)}\right)Z_{i}^{(n)}(s)\right]e^{-2\lambda _{i}^{(n)}s}ds+\widetilde{M}_{i}^{(n)}(t)\] \[=\widetilde{M}_{i}^{(n)}(t)+\int_{0}^{t}2\alpha_{i-1}\mu_{i-1}^{( n)}e^{-2\lambda_{i}^{(n)}s}Z_{i-1}^{(n)}(s)ds+\left(\alpha_{i}^{(n)}+\beta_{i}^{(n)} \right)\int_{0}^{t}e^{-2\lambda_{i}^{(n)}s}Z_{i}^{(n)}(s)ds,\] where \(\widetilde{M}_{i}^{(n)}\) is a martingale. Finally, we get \[\left\langle M_{i}^{(n)}\right\rangle_{t}=\int_{0}^{t}2\alpha_{i-1}\mu_{i-1}^{( n)}e^{-2\lambda_{i}^{(n)}s}Z_{i}^{(n)}(s)ds+\left(\alpha_{i}^{(n)}+\beta_{i}^{(n)} \right)\int_{0}^{t}e^{-2\lambda_{i}^{(n)}s}Z_{i}^{(n)}(s)ds.\] Now we can deal with the proof of Proposition 3.1. Proof of Proposition 3.1.: Let \(i\in\mathbb{N}^{*}\). For \(i\geq 2\) assume that Proposition 3.1 is true for \(i-1\). We start by showing the result when \(i\) is a neutral trait, that is to say we are going to prove (27). All the lemmas that we are mentioning in the proof are free from such neutral assumption, and work also for deleterious mutant traits. **(i) Neutral case:** Assume that \(\lambda_{i}=\lambda_{0}\). Let \((\psi_{n}(i),\gamma_{n}(i))\) as in Proposition 3.1. 
Notice that \[\mathbb{P}\Bigg{(}\sup_{t\in[0,T]}\Bigg{|}\frac{Z_{i}^{(n)}\left( \mathfrak{t}_{t}^{(n)}+s\right)}{d_{i}^{(n)}(t,s)}-Ww_{i}(t)\Bigg{|}\geq 3 \varepsilon\Bigg{)}\] \[\qquad\qquad\qquad\qquad\leq\mathbb{P}\Bigg{(}\sup_{t\in\left[0, t(i)-\gamma_{n}^{-1}(i)\right)}Z_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s \right)\geq\varepsilon\Bigg{)} \tag{33}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\quad+\mathbb{P}\Bigg{(}\sup _{t\in\left[t(i)-\gamma_{n}^{-1}(i),t(i)\right)}\frac{Z_{i}^{(n)}\left( \mathfrak{t}_{t}^{(n)}+s\right)}{\psi_{n}(i)\log^{\theta(i-1)}(n)}\geq \varepsilon\Bigg{)}\] (34) \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad+\mathbb{P}\Bigg{(} \sup_{t\in\left[t(i),T\right]}\Bigg{|}\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t}^ {(n)}+s\right)}{n^{t-t(i)}\log^{\theta(i)}(n)e^{\lambda_{0}s}}-Ww_{i}(t) \Bigg{|}\geq\varepsilon\Bigg{)}. \tag{35}\] We are going to show that (33), (34) and (35) converges to \(0\) when \(n\) goes to infinity. **Step 1: Convergence to 0 of (33):** The characterisation of \(t(i)\) as the first time to see mutant cell of trait \(i\) in the time-scale \(t\mapsto\mathfrak{t}_{t}^{(n)}\) is made explicit in the next Lemma. More precisely, we exactly show that up until time \(t(i)-\gamma_{n}^{-1}(i)\), asymptotically no mutant cells of trait \(i\) are generated. In particular the convergence to \(0\) of (33) is deduced from the next lemma. **Lemma 3.4**.: _Let \(i\in\mathbb{N}\), and \(\gamma_{n}(i)=\frac{\log(n)}{\log(\log(n))\theta(i-1)+\varphi_{n}(i)}\) where \(\varphi_{n}(i)\underset{n\to\infty}{\to}\infty\) such that \(\gamma_{n}(i)\underset{n\to\infty}{\to}\infty\), and \(s\in\mathbb{R}\). For \(i\geq 2\) we prove that if Proposition 3.1 is true for \(i-1\) then_ \[\mathbb{P}\Bigg{(}\sup_{t\in[0,t(i)-\gamma_{n}^{-1}(i)]}Z_{i}^{(n)}\left( \mathfrak{t}_{t}^{(n)}+s\right)=0\Bigg{)}\underset{n\to\infty}{\longrightarrow }1. \tag{36}\] _For \(i=1\), we prove (36) without condition._ Proof of Lemma 3.4.: Notice first that \[\Bigg{\{}\sup_{t\in[0,t(i)-\gamma_{n}^{-1}(i)]}Z_{i}^{(n)}\left( \mathfrak{t}_{t}^{(n)}+s\right)=0\Bigg{\}}=A_{n}\cap B_{n}, \tag{37}\] where \[A_{n} :=\Bigg{\{}K_{i-1}^{(n)}\left(\mathfrak{t}_{t(i)-\gamma_{n}^{-1}( i)}^{(n)}+s\right)=0\Bigg{\}},\] \[B_{n} :=\Bigg{\{}H_{i-1}^{(n)}\left(\mathfrak{t}_{t(i)-\gamma_{n}^{-1}( i)}^{(n)}+s\right)=0\Bigg{\}},\] because the event in the left hand side of Eq. (37) is satisfied if and only if there is no mutant cell of the sub-population \(Z_{i}^{(n)}\) generated from the sub-population \(Z_{i-1}^{(n)}\) up until time \(\mathfrak{t}_{t(i)-\gamma_{n}^{-1}(i)}^{(n)}+s\). It corresponds almost surely to \(A_{n}\cap B_{n}\). In what follows, we will detail the proof of \(\mathbb{P}(A_{n})\underset{n\to\infty}{\to}1\). Proving \(\mathbb{P}(B_{n})\underset{n\to\infty}{\to}1\) can be done using a similar method, so the proof will not be detailed. This will conclude the proof of Lemma 3.4. So we deal with the proof of \(\mathbb{P}(A_{n})\underset{n\to\infty}{\to}1\) which will be slightly different depending on whether \(i=1\) or \(i\geq 2\). We begin with \(i=1\). **(i) Case \(i=1\):** Introduce the following event for all \(\widetilde{t}\in\mathbb{R}^{+}\) and \(\varepsilon\in\mathbb{R}^{+}\) \[C_{\varepsilon,\widetilde{t}}:=\Bigg{\{}\sup_{s\in[\widetilde{t},\infty]} \left|e^{-\lambda_{0}s}Z_{0}(s)-W\right|\leq\varepsilon\Bigg{\}}.\] Using the a.s. 
inequality of (18), under the event \(C_{\varepsilon,\widetilde{t}}\) we have \[K_{0}^{(n)}\left(\mathfrak{t}_{t(1)-\gamma_{n}^{-1}(1)}^{(n)}+s\right) \leq\int_{0}^{\widetilde{t}}\int_{\mathbb{R}^{+}}\mathbbm{1}_{ \big{\{}\theta\leq 2\alpha_{0}\mu_{0}^{(n)}\sup_{v\in[0,t_{1}]}Z_{0}(v)\big{\}}}N_ {0}(du,d\theta)\] (38) \[+\int_{\widetilde{t}}^{t(n)-\gamma_{n}^{-1}(1)}\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Finally we have proven that \(\mathbb{P}(A_{n})\underset{n\to\infty}{\rightarrow}1\), which concludes the proof in the case \(i=1\). **(ii) Case \(i\geq 2\):** Let \(\widetilde{t}(i):=\frac{t(i)+t(i-1)}{2}\) and \(\Psi_{n}\underset{n\to\infty}{\rightarrow}\infty\). Introduce the event \[C_{\varepsilon}^{(n)}:=\left\{\sup_{t\in[0,t(i)]}\left|\frac{Z_{i-1}^{(n)} \Big{(}\mathfrak{t}_{i}^{(n)}\Big{)}}{d^{(n)}(t)}-W\mathbb{1}_{\{t\geq\widetilde {t}(i)\}}w_{i-1}(t)\right|\leq\varepsilon\right\},\] where \(d^{(n)}(t)=\mathbb{1}_{\big{\{}t\in[0,\widetilde{t}(i))\}}n^{\widetilde{t}(i) -t(i-1)}\log^{\theta(i-1)}(n)\Psi_{n}+\mathbb{1}_{\big{\{}t\in[\widetilde{t}( i),t(i)]\big{\}}}n^{t-t(i-1)}\log^{\theta(i-1)}(n)\). Under \(C_{\varepsilon}^{(n)}\) we have \[K_{i-1}^{(n)}\left(\mathfrak{t}_{t(i)-\gamma_{n}^{-1}(i)}^{(n) }+s\right)\leq\int_{0}^{\mathfrak{t}_{(i)}^{(n)}}\int_{\mathbb{R}^{+}} \mathbb{1}_{\big{\{}\theta\leq 2\alpha_{i-1}\mu_{i-1}^{(n)}\varepsilon n^{t(i)-t(i-1)} \log^{\theta(i-1)}(n)\Psi_{n}\big{\}}}N_{i-1}(du,d\theta) \tag{39}\] \[+\int_{\mathfrak{t}_{(i)}^{(n)}}^{\mathfrak{t}_{(i)}^{(n)}- \gamma_{n}^{-1}(i)}\int_{\mathbb{R}^{+}}\mathbb{1}_{\big{\{}\theta\leq 2\alpha_{i-1} \mu_{i-1}^{(n)}(\varepsilon+Ww_{i-1}(t(i)))\varepsilon^{\lambda_{0}}u^{n-t(i-1 )}\log^{\theta(i-1)}(n)\big{\}}}N_{i-1}(du,d\theta).\] Let introduce the events \[D_{\varepsilon}^{(n)}:=\left\{\int_{0}^{\mathfrak{t}_{(i)}^{(n) }}\int_{\mathbb{R}^{+}}\mathbb{1}_{\big{\{}\theta\leq 2\alpha_{i-1}\mu_{i-1}^{(n)} \varepsilon n^{t(i)-t(i-1)}\log^{\theta(i-1)}(n)\Psi_{n}\big{\}}}N_{i-1}(du,d \theta)=0\right\},\] \[E_{\varepsilon}^{(n)}:=\left\{\int_{\mathfrak{t}_{(i)}^{(n)}}^{ \mathfrak{t}_{(i)}^{(n)}-\gamma_{n}^{-1}(i)}+s\right)\int_{\mathbb{R}^{+}} \mathbb{1}_{\big{\{}\theta\leq 2\alpha_{i-1}\mu_{i-1}^{(n)}(\varepsilon+Ww_{i-1}(t(i))) \varepsilon^{\lambda_{0}}u^{n-t(i-1)}\log^{\theta(i-1)}(n)\big{\}}}N_{i-1}(du, d\theta)=0\right\}.\] From (39) we obtain \[\mathbb{P}\left(A_{n}\right)\geq\mathbb{P}\left(A_{n}\cap C_{ \varepsilon}^{(n)}\right)\geq\mathbb{P}\left(C_{\varepsilon}^{(n)}\cap D_{ \varepsilon}^{(n)}\cap E_{\varepsilon}^{(n)}\right).\] It remains to show that the r.h.s. converge to \(1\). 
From assuming Proposition 3.1 for trait \(i-1\) it follows that \(\mathbb{P}\left(C_{\varepsilon}^{(n)}\right)\underset{n\to\infty}{ \longrightarrow}1\). Secondly we have \[\mathbb{P}\left(D_{\varepsilon}^{(n)}\right)=\exp\left(-\widetilde{t}(i)\frac{ \log^{\theta(i-1)+1}(n)}{\lambda_{0}}2\alpha_{i-1}\mu_{i-1}^{(n)}\varepsilon \sqrt{n^{\ell(i-1)}}\Psi_{n}\right)\underset{n\to\infty}{\longrightarrow}1,\] because \(\widetilde{t}(i)-t(i-1)=\frac{\ell(i-1)}{2}\) and also because \(\Psi_{n}\) can be chosen such that it satisfies both \(\Psi_{n}\underset{n\to\infty}{\rightarrow}\infty\) and \(\log^{\theta(i-1)+1}(n)\Psi_{n}\sqrt{n^{\ell(i-1)}}\mu_{i-1}^{(n)}\underset{n \to\infty}{\rightarrow}0\). Recall the distribution of \(W\) given in Equation (20). Because \(W\) and \(N_{i-1}\) are independent, we have \[\mathbb{P}\left(E_{\varepsilon}^{(n)}\right)=\frac{\beta_{0}}{ \alpha_{0}}\mathbb{P}\left(\int_{\mathfrak{t}_{(i)}^{(\alpha)}}^{t_{(i)}^{(n)}- \gamma_{n}^{-1}{}_{(i)}^{(i)}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
_For \(i=1\) we prove (40) without condition._ Proof.: For all \(t\in\left[t(i)-\gamma_{n}^{-1}(i),t(i)\right]\) we have \[\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{\psi_{n}(i) \log^{\theta(i-1)}(n)}=\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)e ^{-\lambda_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}-Z_{i}^{(n)}\left( \mathfrak{t}_{t(i)-\gamma_{n}^{-1}(i)}^{(n)}+s\right)e^{-\lambda_{i}^{(n)} \left(\mathfrak{t}_{t(i)-\gamma_{n}^{-1}(i)}^{(n)}+s\right)}}{\psi_{n}(i)\log^ {\theta(i-1)}(n)e^{-\lambda_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}}\] \[\qquad\qquad\qquad+\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)- \gamma_{n}^{-1}(i)}^{(n)}+s\right)e^{-\lambda_{i}^{(n)}\left(\mathfrak{t}_{t(i )-\gamma_{n}^{-1}(i)}^{(n)}+s\right)}}{\psi_{n}(i)\log^{\theta(i-1)}(n)e^{- \lambda_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}}\] \[=\frac{M_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)-M_{i}^{( n)}\left(\mathfrak{t}_{t(i)-\gamma_{n}^{-1}(i)}^{(n)}+s\right)}{\psi_{n}(i) \log^{\theta(i-1)}(n)e^{-\lambda_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s \right)}}+\frac{\int_{\mathfrak{t}_{t(i)-\gamma_{n}^{-1}(i)}^{(n)}+s}^{\mathfrak {t}_{t(i)}^{(n)}+s}2\alpha_{i-1}\mu_{i-1}^{(n)}e^{-\lambda_{i}^{(n)}u}Z_{i-1}^ {(n)}(u)du}{\psi_{n}(i)\log^{\theta(i-1)}(n)e^{-\lambda_{i}^{(n)}\left( \mathfrak{t}_{t}^{(n)}+s\right)}}\] \[\qquad\qquad\qquad+\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)- \gamma_{n}^{-1}(i)}^{(n)}+s\right)}{\psi_{n}(i)\log^{\theta(i-1)}(n)e^{- \lambda_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}}.\] It allows to write \[\mathbb{P}\left(\sup_{t\in\left[t(i)-\gamma_{n}^{-1}(i),t(i) \right]}\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{\psi_{n}(i) \log^{\theta(i-1)}(n)}\geq 3\varepsilon\right)\] \[\qquad\leq\mathbb{P}\left(\sup_{t\in\left[t(i)-\gamma_{n}^{-1}(i),t(i)\right]}\left|\frac{M_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)-M_{ i}^{(n)}\left(\mathfrak{t}_{t(i)-\gamma_{n}^{-1}(i)}^{(n)}+s\right)}{\psi_{n}(i) \log^{\theta(i-1)}(n)e^{-\lambda_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s \right)}}\right|\geq\varepsilon\right) \tag{41}\] \[\qquad+\mathbb{P}\left(\sup_{t\in\left[t(i)-\gamma_{n}^{-1}(i),t( i)\right]}\frac{\int_{\mathfrak{t}_{t(i)-\gamma_{n}^{-1}(i)}^{(n)}+s}^{\mathfrak {t}_{t(i)-\gamma_{n}^{-1}(i)}+s}2\alpha_{i-1}\mu_{i-1}^{(n)}e^{-\lambda_{i}^{ (n)}u}Z_{i-1}^{(n)}(u)du}{\psi_{n}(i)\log^{\theta(i-1)}(n)e^{-\lambda_{i}^{(n )}\left(\mathfrak{t}_{t}^{(n)}+s\right)}}\geq\varepsilon\right)\] (42) \[\qquad+\mathbb{P}\left(\sup_{t\in\left[t(i)-\gamma_{n}^{-1}(i),t( i)\right]}\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)-\gamma_{n}^{-1}(i)}^{(n)}+s \right)}{\psi_{n}(i)\log^{\theta(i-1)}(n)e^{-\lambda_{i}^{(n)}\left(\mathfrak{ t}_{t-t(i)+\gamma_{n}^{-1}(i)}^{(n)}}\geq\varepsilon\right).} \tag{43}\] We have \[(\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq Proof of Lemma 3.6.: Let \[a_{t}^{(n)}:=\frac{\int_{t^{(n)}_{(i)-\gamma_{n}^{-1}(i)}}^{t^{(n)}+s}2\alpha_{i-1 }\mu_{i-1}^{(n)}e^{-\lambda_{i}^{(n)}u}Z_{i-1}^{(n)}(u)du}{\psi_{n}(i)\log^{ \theta(i-1)}(n)e^{-\lambda_{i}^{(n)}\left(\binom{t^{(n)}_{t}+s}{t}\right)}}.\] Our aim is to prove that for all \(\varepsilon>0\) \[\mathbb{P}\left(\sup_{t\in\left[t(i)-\gamma_{n}^{-1}(i),t(i)\right]}a_{t}^{(n )}\leq\varepsilon\right)\underset{n\to\infty}{\rightarrow}1. 
\tag{45}\] **(i) Case \(i=1\):** We have \[a_{t}^{(n)}=\frac{e^{\lambda_{i}^{(n)}\left(\binom{t^{(n)}_{t}+s }{t}\right)}\int_{t^{(n)-\gamma_{n}^{-1}(i)}_{t^{(1)}}+s}^{t^{(n)}_{t}+s}2 \alpha_{0}\mu_{0}^{(n)}\Bigg{[}W+\left(e^{-\lambda_{0}u}Z_{0}(u)-W\right)\\ +\left(e^{-\lambda_{0}^{(n)}u}Z_{0}^{(n)}(u)-e^{-\lambda_{0}u}Z_{ 0}(u)\right)\Bigg{]}e^{\left(\lambda_{0}^{(n)}-\lambda_{1}^{(n)}\right)u}du.\] Let us set \[B_{\widetilde{\varepsilon}}^{(n)}:=\left\{\sup_{u\in\mathbb{R}^{+ }}\left|e^{-\lambda_{0}u}Z_{0}(u)-e^{-\lambda_{0}^{(n)}u}Z_{0}^{(n)}(u)\right| \leq\widetilde{\varepsilon}\right\},\] \[C_{x,\widetilde{\varepsilon}}:=\left\{\sup_{u\in[x,\infty]}|e^{- \lambda_{0}u}Z_{0}(u)-W|\leq\widetilde{\varepsilon}\right\}.\] According to Lemma 3.1 and by definition of \(W\) (see (19)) we both have that \(\mathbb{P}\left(B_{\widetilde{\varepsilon}}^{(n)}\right)\underset{n\to \infty}{\rightarrow}1\) and \(\mathbb{P}\left(C_{\sqrt{\log(n)},\widetilde{\varepsilon}}\right)\underset{n \to\infty}{\rightarrow}1\). Then for \(n\) large enough under the event \(B_{\widetilde{\varepsilon}}^{(n)}\cap C_{\sqrt{\log(n)},\widetilde{\varepsilon}}\) we have \[a_{t}^{(n)}\leq 2\alpha_{0}\left(n^{\ell(1)}\mu_{0}^{(n)}\right)(W+2 \widetilde{\varepsilon})I_{n},\] where \(I_{n}:=\frac{e^{\lambda_{1}^{(n)}\left(\binom{t^{(n)}_{t}+s}{t}\right)}}{\psi _{n}(1)n^{\ell(1)}}\frac{t_{t}^{(n)}+s}{t_{t_{(i)}-\gamma_{n}^{-1}(1)}+s}e^{ \left(\lambda_{0}-\lambda_{1}^{(n)}\right)u}du\). In the case \(\lambda_{1}<\lambda_{0}\) we have that \[I_{n}\leq\frac{e^{\lambda_{1}^{(n)}\left(\binom{t^{(n)}_{t}+s}{t}\right)}}{ \psi_{n}(1)n^{\ell(1)}}\frac{e^{\left(\lambda_{0}-\lambda_{1}^{(n)}\right) \left(\binom{t^{(n)}_{t}+s}{t}\right)}}{\lambda_{0}-\lambda_{1}}=\frac{e^{- \lambda_{0}t_{t_{(1)-\varepsilon}^{(n)}}^{(n)}e^{\lambda_{0}s}}}{\psi_{n}(1)( \lambda_{0}-\lambda_{1})}\leq\frac{e^{\lambda_{0}s}}{\psi_{n}(1)(\lambda_{0}- \lambda_{1})}. \tag{46}\] In the case \(\lambda_{1}=\lambda_{0}\) remembering that \(\lambda_{1}^{(n)}=\lambda_{0}-2\alpha_{1}\mu_{1}^{(n)}\) we obtain \[I_{n} \leq\frac{e^{\lambda_{0}s}e^{-2\alpha_{1}\mu_{1}^{(n)}\left( \binom{t^{(n)}_{t}+s}{t}\right)}}{\psi_{n}(1)}\frac{e^{2\alpha_{1}\mu_{1}^{(n )}\left(\binom{t^{(n)}_{t}+s}{t}\right)}-e^{2\alpha_{1}\mu_{1}^{(n)}\left( \binom{t^{(n)}_{t}}{t(1)-\gamma_{n}^{-1}(1)}+s\right)}}{2\alpha_{1}\mu_{1}^{( n)}} \tag{47}\] \[=\frac{e^{\lambda_{0}s}}{\psi_{n}(1)}\frac{1-e^{-2\alpha_{1}\mu_{1 }^{(n)}t_{t-t(1)+\gamma_{n}^{-1}(1)}}}{2\alpha_{1}\mu_{1}^{(n)}}\] \[\leq\frac{e^{\lambda_{0}s}}{\psi_{n}(1)}t_{t-t(1)+\gamma_{n}^{-1} (1)}^{(n)}\] \[\leq\frac{e^{\lambda_{0}s}\log(n)}{\psi_{n}(1)\gamma_{n}(1)\lambda_ {0}},\] where for the second inequality we use the following equation applied with \(a=2\alpha_{1}\mu_{1}^{(n)}>0\) and \(x=\mathfrak{t}_{t-t(1)+\gamma_{n}^{-1}(1)}^{(n)}\) \[\forall x\geq 0,\forall a>0,\frac{1-e^{-ax}}{a}\leq x. \tag{48}\] In any case, since \(W\) is a finite random variable (see (20)) we find (45) and conclude the case \(i=1\). **(ii) Case \(i\geq 2\):** Assume Proposition 3.1 is true for \(i-1\). 
In particular we have \(\mathbb{P}\left(B_{\widetilde{\varepsilon}}^{(n)}\right)\underset{n\to\infty }{\rightarrow}1\) with \[B_{\widetilde{\varepsilon}}^{(n)}:=\left\{\sup_{v\in\left[t(i)-\gamma_{n}^{-1 }(i),t(i)\right]}\left|\frac{Z_{i-1}^{(n)}\left(\mathfrak{t}_{v}^{(n)}+s \right)}{n^{v-t(i-1)}e^{\lambda_{0}s}\log^{\theta(i-1)}(n)}-Ww_{i-1}(v)\right| \leq\widetilde{\varepsilon}\right\}.\] Using the change of variable \(u=\mathfrak{t}_{v}^{(n)}+s\) and that \(t(i-1)=t(i)-\ell(i-1)\), notice that \[a_{t}^{(n)}=\frac{e^{\lambda_{i}^{(n)}\left(\mathfrak{t}_{v}^{( n)}+s\right)}}{\psi_{n}(i)n^{t(i)}}\int_{t(i)-\gamma_{n}^{-1}(i)}^{t}2\alpha_{i-1} \left(n^{\ell(i-1)}\mu_{i-1}^{(n)}\right)\\ \cdot\frac{Z_{i-1}^{(n)}\left(\mathfrak{t}_{v}^{(n)}+s\right)}{n^ {v-t(i-1)}e^{\lambda_{0}s}\log^{\theta(i-1)}(n)}e^{\left(\lambda_{0}-\lambda_ {i}^{(n)}\right)\left(\mathfrak{t}_{v}^{(n)}+s\right)}\frac{\log(n)}{\lambda_ {0}}dv.\] Using that \(w_{i-1}\) is a non decreasing function it comes that under the event \(B_{\widetilde{\varepsilon}}^{(n)}\) \[a_{t}^{(n)}\leq 2\alpha_{i-1}\left(n^{\ell(i-1)}\mu_{i-1}^{(n)}\right)\left(Ww _{i-1}(t(i))+\widetilde{\varepsilon}\right)\frac{e^{\lambda_{i}^{(n)}\left( \mathfrak{t}_{v}^{(n)}+s\right)}}{\psi_{n}(i)n^{t(i)}}\int_{\mathfrak{t}_{t(i) -\gamma_{n}^{-1}(i)}^{t(n)}+s}^{t_{(n)}+s}e^{\left(\lambda_{0}-\lambda_{i}^{( n)}\right)u}du.\] Using similar computations as in (46) and (47), it follows (45). Now we will prove that (41) converges to \(0\). We start by introducing two lemma allowing to control in expectancy both the size of any mutant sub-population and the quadratic variation associated of the martingale \(M_{i}^{(n)}\). First, a natural upper bound on the mean of the growth of each mutant sub-population can be easily obtained. This is stated in the next Lemma. **Lemma 3.7**.: _For all \(i\in\mathbb{N}_{0}\) it exists \(C_{i}>0\) such that for all \(u\geq 0\)_ \[\mathbb{E}\left[Z_{i}^{(n)}(u)\right]\leq C_{i}\mu_{\otimes,i}^{(n)}u^{ \theta(i)}e^{\lambda_{0}u},\] _where \(\mu_{\otimes,i}^{(n)}:=\prod\limits_{j=1}^{i}\mu_{j-1}^{(n)}\) and \(C_{i}:=\prod\limits_{j=1}^{i}2\alpha_{j-1}\left(\mathbb{1}_{\{\lambda_{j}= \lambda_{0}\}}+\mathbb{1}_{\{\lambda_{j}<\lambda_{0}\}}\frac{1}{\lambda_{0}- \lambda_{j}}\right).\)_ Notice that there are \(3\) interesting components. The first one is the mutational cost to get such mutant cells encoded via the term \(\mu_{\otimes,i}^{(n)}\). Then the second one is given by the contribution over time of all neutral mutations in the path to the considered mutant population. And the last one is simply the exponential growth at rate \(\lambda_{0}\) given by the wild-type sub-population. Proof of Lemma 3.7.: First we have that \(\mathbb{E}\left[Z_{0}^{(n)}(u)\right]=e^{\lambda_{0}^{(n)}u}\leq e^{\lambda_{ 0}u}\), which is exactly the result for \(i=0\). Then for \(i\in\mathbb{N}\) assume that the result is true for \(i-1\). 
Then taking the expectation of the martingale \(M_{i}^{(n)}(u)\) defined in (29) and using the previous assumption we obtain the following equation \[\mathbb{E}\left[Z_{i}^{(n)}(u)\right] =e^{\lambda_{i}^{(n)}u}\int_{0}^{u}2\alpha_{i-1}\mu_{i-1}^{(n)}e^{- \lambda_{i}^{(n)}s}\mathbb{E}\left[Z_{i-1}^{(n)}(s)\right]ds\] \[\leq e^{\lambda_{i}u}\int_{0}^{u}2\alpha_{i-1}\mu_{i-1}^{(n)}e^{- \lambda_{i}s}\mathbb{E}\left[Z_{i-1}^{(n)}(s)\right]ds\] \[\leq C_{i-1}\mu_{\otimes,i}^{(n)}2\alpha_{i-1}\int_{0}^{u}e^{( \lambda_{0}-\lambda_{i})s}dsu^{\theta(i-1)}e^{\lambda_{i}u}\] \[\leq C_{i-1}\mu_{\otimes,i}^{(n)}2\alpha_{i-1}\left(\mathbb{1}_{ \left\{\lambda_{i}=\lambda_{0}\right\}}u+\mathbb{1}_{\left\{\lambda_{i}< \lambda_{0}\right\}}\frac{1}{\lambda_{0}-\lambda_{i}}e^{(\lambda_{0}-\lambda_ {i})u}\right)u^{\theta(i-1)}e^{\lambda_{i}u}\] \[=C_{i-1}\mu_{\otimes,i}^{(n)}2\alpha_{i-1}\left(\mathbb{1}_{ \left\{\lambda_{i}=\lambda_{0}\right\}}+\mathbb{1}_{\left\{\lambda_{i}< \lambda_{0}\right\}}\frac{1}{\lambda_{0}-\lambda_{i}}\right)u^{\theta(i)}e^{ \lambda_{0}u},\] which concludes the proof by induction. Second, using both the expression of the quadratic variation of the martingale associated to a mutant sub-population given in Equation (30) and the previous Lemma 3.7, a natural upperbound on its mean is obtained and summed up in the next Lemma. **Lemma 3.8**.: _Let \(0<t_{1}^{(n)}<t_{2}\) and \(s\in\mathbb{R}\). It exists \(N\in\mathbb{N}\) and \(C>0\) such that for all \(n\geq N\) we have_ \[\mathbb{E}\left[\left\langle M_{i}^{(n)}\right\rangle_{t_{2}^{(n) }+s}-\left\langle M_{i}^{(n)}\right\rangle_{t_{1}^{(n)}+s}\right]\] \[\leq C\mu_{\otimes,i}^{(n)}\Bigg{[}\mathbb{1}_{\left\{\lambda_{i }=\lambda_{0}\right\}}\frac{e^{-\lambda_{0}s}\left(t_{1}^{(n)}+s\right)^{ \theta(i)}}{n^{t_{1}^{(n)}}}+\left(\mathfrak{t}_{t_{2}}^{(n)}+s\right)^{ \theta(i)}\left(\mathbb{1}_{\left\{\lambda_{0}>2\lambda_{i}\right\}}e^{( \lambda_{0}-2\lambda_{i})\left(t_{2}^{(n)}+s\right)}\right.\] \[\qquad\qquad+\left.\mathbb{1}_{\left\{\lambda_{0}=2\lambda_{i} \right\}}\left(\mathfrak{t}_{t_{2}}^{(n)}+s\right)+\mathbb{1}_{\left\{ \lambda_{i}<\lambda_{0}<2\lambda_{i}\right\}}e^{-(2\lambda_{i}-\lambda_{0}) \left(t_{1}^{(n)}+s\right)}\right)\right].\] Proof.: In the proof \(C\) corresponds to a strictly positive constant that may change from line to line. Assume that \(\lambda_{i}=\lambda_{0}\). 
Applying Lemma 3.7, remembering that \(\lambda_{i}^{(n)}=\lambda_{0}-2\alpha_{i}\mu_{i}^{(n)}\), and using that it exists \(N_{1}\in\mathbb{N}\) such that for all \(n\geq N_{1}\) we have that \(e^{4\alpha_{i}\mu_{i}^{(n)}\left(t_{2}^{(i)}+s\right)}\leq 2\), we obtain that \[\int_{t_{1}^{(n)}+s}^{t_{2}^{(n)}+s}e^{-2\lambda_{i}^{(n)}u}\mathbb{E}\left[Z_ {i}^{(n)}(u)\right]du\leq C\mu_{\otimes,i}^{(n)}\int_{t_{1}^{(n)}+s}^{t_{2}^{(n )}+s}u^{\theta(i)}e^{-\lambda_{0}u}du.\] Using an integration by parts we obtain that \[\int_{t_{1}^{(n)}+s}^{t_{2}^{(n)}+s}u^{\theta(i)}e^{-\lambda_{0}u}du\leq\frac{ 1}{\lambda(0)}\left(\mathfrak{t}_{t_{1}^{(n)}}^{(n)}+s\right)^{\theta(i)}e^{- \lambda_{0}\left(t_{1}^{(n)}+s\right)}+\frac{1}{\lambda_{0}}\int_{t_{1}^{(n)} +s}^{t_{2}^{(n)}+s}u^{\theta(i)-1}e^{-\lambda_{0}u}du.\] Then using \(\theta(i)\) integration by parts, it exists \(N_{2}\in\mathbb{N}\) such that for \(n\geq N_{2}\) we have \[\int_{t_{1}^{(n)}+s}^{t_{2}^{(n)}+s}u^{\theta(i)}e^{-\lambda_{0}u}du\leq C \frac{e^{-\lambda_{0}s}}{n^{t_{1}^{(n)}}}\left(\mathfrak{t}_{t_{1}^{(n)}}^{(n)}+ s\right)^{\theta(i)}.\] It follows that for \(n\geq\max(N_{1},N_{2})\) \[\int_{t_{1}^{(n)}+s}^{t_{2}^{(n)}+s}e^{-2\lambda_{i}^{(n)}u}\mathbb{E}\left[Z_ {i}^{(n)}(u)\right]du\leq C\frac{e^{-\lambda_{0}s}}{n^{t_{1}^{(n)}}}\mu_{ \otimes,i}^{(n)}\left(\mathfrak{t}_{t_{1}^{(n)}}^{(n)}+s\right)^{\theta(i)}.\] Because the vertex \(i\) is assumed to be neutral it follows that \(\theta(i-1)=\theta(i)-1\). Using similar computation as the latter one, it exists \(N_{3}\in\mathbb{N}\) such that for \(n\geq N_{3}\) we have \[\int_{\begin{subarray}{c}t_{2}^{(n)}+s\\ t_{1}^{(n)}+s\end{subarray}}^{t_{2}^{(n)}+s}\mu_{i-1}^{(n)}e^{-2\lambda_{i}^{(n )}u}\mathbb{E}\left[Z_{i-1}^{(n)}(u)\right]du\leq C\frac{e^{-\lambda_{0}s}}{n^ {t_{1}^{(n)}}}\mu_{\otimes,i}^{(n)}\left(\mathfrak{t}_{t_{1}^{(n)}}^{(n)}+s \right)^{\theta(i)-1}.\] It follows that for all \(n\geq\max(N_{1},N_{2},N_{3})\) we have \[\mathbb{E}\left[\left\langle M_{i}^{(n)}\right\rangle_{\mathfrak{t}_{t_{2}}^{ (n)}+s}-\left\langle M_{i}^{(n)}\right\rangle_{\mathfrak{t}_{t_{1}^{(n)}}^{(n) }+s}\right]\leq C\frac{e^{-\lambda_{0}s}}{n^{t_{1}^{(n)}}}\mu_{\otimes,i}^{(n) }\left(\mathfrak{t}_{t_{1}^{(n)}}^{(n)}+s\right)^{\theta(i)}.\] We now deal with the case \(\lambda_{i}<\lambda_{0}\) by applying the same strategy. 
We obtain \[\int_{\begin{subarray}{c}t_{2}^{(n)}+s\\ t_{1}^{(n)}+s\end{subarray}}^{t_{2}^{(n)}+s}e^{-2\lambda_{i}^{(n)}u}\mathbb{ E}\left[Z_{i}^{(n)}(u)\right]du\] \[\leq C\mu_{\otimes,i}^{(n)}\int_{\mathfrak{t}_{t_{1}^{(n)}}^{(n) }+s}^{\mathfrak{t}_{t_{2}^{(n)}+s}^{(n)}}u^{\theta(i)}e^{(\lambda_{0}-2\lambda _{i})u}du\] \[\leq C\mu_{\otimes,i}^{(n)}\left(\mathfrak{t}_{t_{2}}^{(n)}+s \right)^{\theta(i)}\Bigg{[}\mathbb{1}_{\left\{\lambda_{0}>2\lambda_{i}\right\} }e^{\left(\lambda_{0}-2\lambda_{i}\right)\left(\mathfrak{t}_{t_{2}}^{(n)}+s \right)}\] \[\qquad\qquad\qquad\qquad\qquad+\mathbb{1}_{\left\{\lambda_{0}=2 \lambda_{i}\right\}}\mathfrak{t}^{(n)}\left(\mathfrak{t}_{t_{2}}^{(n)}+s \right)+\mathbb{1}_{\left\{\lambda_{i}<\lambda_{0}<2\lambda_{i}\right\}}e^{- \left(2\lambda_{i}-\lambda_{0}\right)\left(\mathfrak{t}_{t_{1}^{(n)}}^{(n)}+s \right)}\Bigg{]}.\] Then remembering that \(\theta(i-1)=\theta(i)\), we get that \[\int_{\begin{subarray}{c}t_{2}^{(n)}+s\\ t_{1}^{(n)}+s\end{subarray}}^{t_{1}^{(n)}+s}\mu_{i-1}^{(n)}e^{-2\lambda_{i}^{(n )}u}\mathbb{E}\left[Z_{i-1}^{(n)}(u)\right]du\] \[\leq C\mu_{\otimes,i}^{(n)}\left(\mathfrak{t}_{t_{2}}^{(n)}+s \right)^{\theta(i)}\Bigg{[}\mathbb{1}_{\left\{\lambda_{0}>2\lambda_{i}\right\} }e^{\left(\lambda_{0}-2\lambda_{i}\right)\left(\mathfrak{t}_{t_{2}}^{(n)}+s \right)}\] \[\qquad\qquad\qquad\qquad+\mathbb{1}_{\left\{\lambda_{0}=2\lambda _{i}\right\}}\mathfrak{t}^{(n)}\left(\mathfrak{t}_{t_{2}}^{(n)}+s\right)+ \mathbb{1}_{\left\{\lambda_{i}<\lambda_{0}<2\lambda_{i}\right\}}e^{-\left(2 \lambda_{i}-\lambda_{0}\right)\left(\mathfrak{t}_{t_{1}^{(n)}}^{(n)}+s \right)}\Bigg{]}.\] At the end we have \[\mathbb{E}\left[\left\langle M_{i}^{(n)}\right\rangle_{\mathfrak{t }_{t_{2}}^{(n)}+s}-\left\langle M_{i}^{(n)}\right\rangle_{\mathfrak{t}_{t_{1}^ {(n)}}^{(n)}+s}\right]\leq C\mu_{\otimes,i}^{(n)}\left(\mathfrak{t}_{t_{2}}^{(n )}+s\right)^{\theta(i)}\cdot\Bigg{[}\mathbb{1}_{\left\{\lambda_{0}>2\lambda _{i}\right\}}e^{\left(\lambda_{0}-2\lambda_{i}\right)\left(\mathfrak{t}_{t_{2 }}^{(n)}+s\right)}\] \[\qquad\qquad\qquad\qquad\qquad+\mathbb{1}_{\left\{\lambda_{0}=2 \lambda_{i}\right\}}\left(\mathfrak{t}_{t_{2}}^{(n)}+s\right)+\mathbb{1}_{ \left\{\lambda_{i}<\lambda_{0}<2\lambda_{i}\right\}}e^{-\left(2\lambda_{i}- \lambda_{0}\right)\left(\mathfrak{t}_{t_{1}^{(n)}}^{(n)}+s\right)}\Bigg{]}.\] Now we can prove that (41) converges to \(0\). 
Using the Maximal inequality, see [10] Chapter VI page 72, applied to the supermartingale \(\left[\frac{M_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)-M_{i}^{(n)} \left(\mathfrak{t}_{t(i)-\gamma_{i}^{n}(1)}^{(n)}+s\right)}{\psi_{n}(i)\log^ {\theta(i-1)}(n)e^{-\lambda_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}} \right]_{t\geq t(i)-\gamma_{i}^{n}(i)}\) it follows \[\eqref{eq:C1}\leq\frac{3}{\varepsilon\psi_{n}(i)\log^{\theta(i-1)}(n)}\sup_{t \in\left[t(i)-\gamma_{n}^{-1}(i),t(i)\right]}e^{\lambda_{i}^{(n)}\left( \mathfrak{t}_{t}^{(n)}+s\right)}\mathbb{E}\left[\left\langle M_{i}^{(n)} \right\rangle_{\mathfrak{t}_{t}^{(n)}+s}-\left\langle M_{i}^{(n)}\right\rangle_ {\mathfrak{t}_{t(i)-\gamma_{n}^{-1}(i)}^{(n)}+s}\right]^{\frac{1}{2}}.\] First notice that the function \[t\mapsto f^{(n)}(t):=e^{\lambda_{i}^{(n)}\left(\binom{t^{(n)}_{i}+s}{t^{(n)}}\right) \mathbb{E}\left[\left\langle M_{i}^{(n)}\right\rangle_{t^{(n)}_{i}+s}-\left\langle M _{i}^{(n)}\right\rangle_{t^{(n)}_{i}-\gamma_{n}^{-1}(i)}^{(n)}\right]^{\frac{1} {2}}}\] is a non-decreasing function implying that \[\sup_{t\in\left[t(i)-\gamma_{n}^{-1}(i),t(i)\right]}f^{(n)}(t)=f^{(n)}(t(i)).\] In the case \(\lambda_{i}=\lambda_{0}\), using that it exists a constant \(C\) such that \(e^{\lambda_{i}^{(n)}\left(\binom{t^{(n)}_{i}+s}{t^{(n)}_{i}}\right)}\leq Cn^{t( i)}e^{\lambda_{0}s}\) and according to Lemma 3.8 applied with \(t^{(n)}_{1}=t(i)-\gamma_{n}^{-1}(i)\) and \(t_{2}=t(i)\) we have that \[f^{(n)}(t(i))\leq C\left(e^{\lambda_{0}s}\left(n^{t(i)}\mu_{\otimes,i}^{(n)} \right)\right)^{\frac{1}{2}}n^{\frac{\gamma_{n}^{-1}(i)}{2}}\left(\binom{t^{( n)}_{i}+s}{t^{(n)}_{2}}\right..\] Notice that \[n^{t(i)}\mu_{\otimes,i}^{(n)}=\prod_{j=1}^{i}n^{t(j-1)}\mu_{j-1}^{(n)} \underset{n\rightarrow\infty}{\longrightarrow}\prod_{j=1}^{i}\mu_{j-1}<\infty.\] Then for \(n\) large enough, and remembering that \(\theta(i)=\theta(i-1)+1\) we have \[(\ref{eq:1})\leq C\left(\frac{e^{\frac{\log(n)}{\gamma_{n}(i)}}\log(n)}{ \psi_{n}^{2}(i)\log^{\theta(i-1)}(n)}\right)^{\frac{1}{2}}.\] And we have \[\frac{e^{\frac{\log(n)}{\gamma_{n}(i)}}\log(n)}{\psi_{n}^{2}(i)\log^{\theta(i -1)}(n)}=\frac{e^{\varphi_{n}(i)}\log(n)}{\psi_{n}^{2}(i)}.\] We obtain that (41) converges to \(0\) by hypothesis on \(\psi_{n}(i)\). 
In the case \(\lambda_{i}<\lambda_{0}\), using that it exists a constant \(C>0\) such that \(e^{\lambda_{i}^{(n)}\left(\binom{t^{(n)}_{i}+s}{t^{(n)}_{i}}\right)}\leq Ce^{ \lambda_{i}\left(\binom{s}{t^{(n)}_{i}+s}{t^{(n)}_{i}}+s\right)}\) and according to Lemma 3.8 applied with \(t^{(n)}_{1}=t(i)-\gamma_{n}^{-1}(i)\) and \(t_{2}=t(i)\) we have that \[f^{(n)}(t(i)) \leq Ce^{\lambda_{1}s}\left(n^{t(i)}\mu_{\otimes,i}^{(n)}\right) ^{\frac{1}{2}}\left(\mathbbm{1}_{\{\lambda_{0}>2\lambda_{i}\}}n^{\frac{\lambda _{0}-2\lambda_{i}}{\lambda_{0}}t(i)}e^{(\lambda_{0}-2\lambda_{i})s}\right.\] \[+\left.\mathbbm{1}_{\{\lambda_{0}=2\lambda_{i}\}}\left(\binom{n} {t^{(n)}_{i}+s}+\mathbbm{1}_{\{\lambda_{i}<\lambda_{0}<2\lambda_{i}\}}n^{-\frac {2\lambda_{i}-\lambda_{0}}{\lambda_{0}}\left(t(i)-\gamma_{n}^{-1}(i)\right)}e^ {-(2\lambda_{i}-\lambda_{0})s}\right)^{\frac{1}{2}}\right.\] \[\cdot n^{-\frac{\lambda_{0}-2\lambda_{i}}{2\lambda_{0}}t(i)}\left( \binom{t^{(n)}_{i}+s}{t^{(n)}_{2}}\right..\] Then for \(n\) large enough, and remembering that \(\theta(i)=\theta(i-1)\) we have \[(\ref{eq:1})\leq C\left(\mathbbm{1}_{\{\lambda_{0}>2\lambda_{i}\}}+\mathbbm{1 }_{\{\lambda_{0}=2\lambda_{i}\}}\log(n)+\mathbbm{1}_{\{\lambda_{i}<\lambda_{0}< 2\lambda_{i}\}}e^{\frac{2\lambda_{i}-\lambda_{0}}{\lambda_{0}}\frac{\log(n)}{ \gamma_{n}(i)}}\right)^{\frac{1}{2}}\left(\frac{1}{\psi_{n}^{2}(i)\log^{\theta(i -1)}(n)}\right)^{\frac{1}{2}}.\] Then by hypothesis on \((\psi_{n}(i),\gamma_{n}(i))\) it follows that (41) converges to \(0\). **Step 3: Convergence to 0 of (35):** Notice that for all \(t\geq t(i)\) \[\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{n^{t-t(i)} \log^{\theta(i)}(n)e^{\lambda_{0}s}} =\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)e^{-\lambda _{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}-Z_{i}^{(n)}\left(\mathfrak{t}_ {t(i)}^{(n)}+s\right)e^{-\lambda_{i}^{(n)}\left(\mathfrak{t}_{t(i)}^{(n)}+s \right)}}{n^{-t(i)}\log^{\theta(i)}(n)e^{\left(\lambda_{0}-\lambda_{i}^{(n)} \right)\left(\mathfrak{t}_{t}^{(n)}+s\right)}}\] \[\qquad\qquad\qquad+\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)}^{( n)}+s\right)}{\log^{\theta(i)}(n)e^{\left(\lambda_{0}-\lambda_{i}^{(n)} \right)\mathfrak{t}_{t-t(i)}^{(n)}}e^{\lambda_{0}s}}.\] Then it allows to write \[\mathbb{P}\left(\sup_{t\in[t(i),T]}\left|\frac{Z_{i}^{(n)}\left( \mathfrak{t}_{t}^{(n)}+s\right)}{n^{t-t(i)}\log^{\theta(i)}(n)e^{\lambda_{0} s}}-Ww_{i}(t)\right|\geq 3\varepsilon\right)\] \[\qquad\qquad\qquad\qquad\qquad\leq\mathbb{P}\left(\sup_{t\in[t(i),T]}\left|\frac{M_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)-M_{i}^{(n)} \left(\mathfrak{t}_{t(i)}^{(n)}+s\right)}{n^{-t(i)}\log^{\theta(i)}(n)e^{\left( \lambda_{0}-\lambda_{i}^{(n)}\right)\mathfrak{t}_{t}^{(n)}+s\right)}}\right| \geq\varepsilon\right) \tag{49}\] \[\qquad\qquad\qquad\qquad\qquad+\mathbb{P}\left(\sup_{t\in[t(i),T] }\left|\frac{\int_{\mathfrak{t}_{t(i)}^{(n)}+s}^{\mathfrak{t}_{t(i)}^{(n)}+s} 2\alpha_{i-1}\mu_{i-1}^{(n)}Z_{i-1}^{(n)}(u)e^{-\lambda_{i}^{(n)}}du}{n^{-t(i )}\log^{\theta(i)}(n)e^{\left(\lambda_{0}-\lambda_{i}^{(n)}\right)\left( \mathfrak{t}_{t}^{(n)}+s\right)}}-Ww_{i}(t)\right|\geq\varepsilon\right)\] (50) \[\qquad\qquad\qquad\qquad\qquad+\mathbb{P}\left(\sup_{t\in[t(i),T] }\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)}^{(n)}+s\right)}{\log^{\theta(i)}( n)e^{\left(\lambda_{0}-\lambda_{i}^{(n)}\right)\mathfrak{t}_{t-t(i)}^{(n)}e^{ \lambda_{0}s}}}\geq\varepsilon\right). \tag{51}\] We will show that every terms (49), (50) and (51) converge to \(0\) when \(n\) goes to infinity. 
Concerning the term (49), we use first that \(\lambda_{0}\geq\lambda_{i}^{(n)}\) to simplify the denominator. Then we apply Doob's inequality to the martingale \(\left(M_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)-M_{i}^{(n)}\left( \mathfrak{t}_{t(i)}^{(n)}+s\right)\right)_{t\geq t(i)}\), and we use that if \(M\) is a square integrable martingale then \(\mathbb{E}\left[(M(t)-M(s))^{2}\right]=\mathbb{E}[M^{2}(t)-M^{2}(s)]=\mathbb{E }[(M)_{t}-\langle M\rangle_{s}]\). Then using Equation (30), it follows \[\mathbb{P}\left(\sup_{t\in[t(i),T]}\left|\frac{M_{i}^{(n)}\left( \mathfrak{t}_{t}^{(n)}+s\right)-M_{i}^{(n)}\left(\mathfrak{t}_{t(i)}^{(n)}+s \right)}{n^{-t(i)}\log^{\theta(i)}(n)e^{\left(\lambda_{0}-\lambda_{i}^{(n)} \right)\left(\mathfrak{t}_{t(i)}^{(n)}+s\right)}}\right|\geq\varepsilon\right) \tag{52}\] \[\leq\mathbb{P}\left(\sup_{t\in[t(i),T]}\left|\frac{M_{i}^{(n)} \left(\mathfrak{t}_{t}^{(n)}+s\right)-M_{i}^{(n)}\left(\mathfrak{t}_{t(i)}^{(n)} +s\right)}{n^{-t(i)}\log^{\theta(i)}(n)}\right|\geq\varepsilon\right)\] \[\leq\frac{4n^{2t(i)}}{\varepsilon^{2}\log^{2\theta(i)}(n)}\mathbb{ E}\left[\left\{M_{i}^{(n)}\left(\mathfrak{t}_{T}^{(n)}+s\right)-M_{i}^{(n)} \left(\mathfrak{t}_{t(i)}^{(n)}+s\right)\right\}^{2}\right]\] \[=\frac{4n^{2t(i)}}{\varepsilon^{2}\log^{2\theta(i)}(n)}\mathbb{ E}\left[\left\langle M_{i}^{(n)}\right\rangle_{\mathfrak{t}_{T}^{(n)}+s}-\left\langle M _{i}^{(n)}\right\rangle_{\mathfrak{t}_{t(i)}^{(n)}+s}\right].\] Applying Lemma 3.8 at times \(t_{1}^{(n)}=t(i)\) and \(t_{2}=T\) we obtain that \[\mathbb{E}\left[\left\langle M_{i}^{(n)}\right\rangle_{\mathfrak{t}_{T}^{(n)}+s }-\left\langle M_{i}^{(n)}\right\rangle_{\mathfrak{t}_{t(i)}^{(n)}+s}\right] \leq C\frac{e^{-\lambda_{0}s}\left(\mathfrak{t}_{t(i)}^{(n)}+s\right)^{\theta(i) }\mu_{\otimes,i}^{(n)}}{n^{t(i)}}. \tag{53}\] Then combining (52) and (53) we get \[\mathbb{P}\left(\sup_{t\in[t(i),T]}\left|\frac{M_{(i}^{(n)}\left( \mathfrak{t}_{t}^{(n)}+s\right)-M_{i}^{(n)}\left(\mathfrak{t}_{t(i)}^{(n)}+s \right)}{n^{-t(i)}\log^{\theta(i)}(n)e^{\left(\lambda_{0}-\lambda_{i}^{(n)} \right)\left(\mathfrak{t}_{t}^{(n)}+s\right)}}\right|\geq\varepsilon\right) \leq\frac{4Ce^{-\lambda_{0}s}}{\varepsilon^{2}\log^{\theta(i)}(n)}\left(\frac{ \mathfrak{t}_{t(i)}^{(n)}+s}{\log(n)}\right)^{\theta(i)}n^{t(i)}\mu_{\otimes,i}^ {(n)}\] \[\underset{n\rightarrow\infty}{\longrightarrow}0,\] as \(\theta(i)\geq 1\) since the vertex \(i\) is assumed to be neutral. It ends the proof of the convergence to \(0\) of the term (49). The term (50) converges to \(0\) according to the following Lemma. **Lemma 3.9**.: _Assume Equation (15). Let \(i\in\mathbb{N}\), \(T\geq t(i)\) and \(s\in\mathbb{R}\). For \(i\geq 2\) we prove that if Proposition 3.1 is true for \(i-1\) then_ \[\mathbb{P}\left(\sup_{t\in[t(i),T]}\left|\frac{\int_{t_{(i)}^{(n)}+s}^{t_{(i) }^{(n)}+s}2\alpha_{i-1}\mu_{i-1}^{(n)}e^{-\lambda_{i}^{(n)}u}Z_{i-1}^{(n)}(u)du }{n^{-t(i)}\log^{\theta(i)}(n)e^{\left(\lambda_{0}-\lambda_{i}^{(n)}\right) \left(\mathfrak{t}_{t}^{(n)}+s\right)}-Ww_{i}(t)}\right|\geq\varepsilon\right) \underset{n\rightarrow\infty}{\longrightarrow}0. 
\tag{54}\] _For \(i=1\), we prove (54) without condition._ Proof of Lemma 3.9.: Let \(c_{n}(t,s):=e^{\left(\lambda_{0}-\lambda_{i}^{(n)}\right)\left(\mathfrak{t}_{ t}^{(n)}+s\right)}\) and \[a_{t}^{(n)}:=\frac{\int_{\mathfrak{t}_{t(i)}^{(n)}+s}^{t_{(i)}^{(n)}+s}2\alpha_ {i-1}\mu_{i-1}^{(n)}e^{-\lambda_{i}^{(n)}u}Z_{i-1}^{(n)}(u)du}{n^{-t(i)}\log^{ \theta(i)}(n)c_{n}(t,s)}.\] Our aim is to prove that for all \(\varepsilon>0\) \[\mathbb{P}\left(\sup_{t\in[t(i),T]}\left|a_{t}^{(n)}-Ww_{i}(t)\right|\leq \varepsilon\right)\underset{n\rightarrow\infty}{\longrightarrow}1.\] **(i) Case \(i=1\):** We have \[a_{t}^{(n)}=\frac{n^{t(1)}}{\log^{\theta(1)}(n)c_{n}(t,s)}\int _{\mathfrak{t}_{t(i)}^{(n)}+s}^{t_{(n)}^{(n)}+s}2\alpha_{0}\mu_{0}^{(n)}\Bigg{[} W+\left(e^{-\lambda_{0}u}Z_{0}(u)-W\right)\\ +\left(e^{-\lambda_{0}^{(n)}u}Z_{0}^{(n)}(u)-e^{-\lambda_{0}u}Z_{ 0}(u)\right)\Bigg{]}e^{\left(\lambda_{0}^{(n)}-\lambda_{1}^{(n)}\right)u}du.\] Let us set \[B_{\widetilde{\varepsilon}}^{(n)}:=\left\{\sup_{u\in\mathbb{R}^ {+}}\left|e^{-\lambda_{0}u}Z_{0}(u)-e^{-\lambda_{0}^{(n)}u}Z_{0}^{(n)}(u) \right|\leq\widetilde{\varepsilon}\right\},\] \[C_{x,\widetilde{\varepsilon}}:=\left\{\sup_{u\in[x,\infty]}|e^{- \lambda_{0}u}Z_{0}(u)-W|\leq\widetilde{\varepsilon}\right\}.\] According to Lemma 3.1 and by definition of \(W\) (see (19)) we both have that \(\mathbb{P}\left(B_{\widetilde{\varepsilon}}^{(n)}\right)\underset{n \rightarrow\infty}{\rightarrow}1\) and \(\mathbb{P}\left(C_{\sqrt{\log(n)},\widetilde{\varepsilon}}\right)\underset{n \rightarrow\infty}{\rightarrow}1\). Notice that when \(\lambda_{1}<\lambda_{0}\) we have the following bound \[\frac{1}{c_{n}(t,s)}\int_{\mathfrak{t}_{t(1)}^{(n)}+s}^{t_{(n)}^{(n)}+s}e^{ \left(\lambda_{0}-\lambda_{1}^{(n)}\right)u}du=\frac{1}{\lambda_{0}-\lambda_ {1}^{(n)}}\frac{c_{n}(t,s)-c_{n}(t(1),s)}{c_{n}(t,s)}\leq\frac{1}{\lambda_{0}- \lambda_{1}}, \tag{55}\] and that when \(\lambda_{1}=\lambda_{0}\) we have the one \[\frac{1}{c_{n}(t,s)}\int_{\mathfrak{t}^{(n)}_{\mathfrak{t}^{(n)}_{( 1)}+s}}^{\mathfrak{t}^{(n)}_{\mathfrak{t}^{(n)}_{(1)}+s}}e^{\big{(}\lambda_{0}- \lambda^{(n)}_{1}\big{)}u}du=\frac{1-e^{-\big{(}\lambda_{0}-\lambda^{(n)}_{1} \big{)}\mathfrak{t}^{(n)}_{\mathfrak{t}^{(n)}_{-t(1)}}}}{\lambda_{0}-\lambda^ {(n)}_{1}}\leq\mathfrak{t}^{(n)}_{\mathfrak{t}-t(1)},\] where for the last inequality we use (48) applied with \(a=\lambda_{0}-\lambda^{(n)}_{1}=\lambda_{0}-\lambda_{1}+2\alpha_{1}\mu^{(n)}_ {1}>0\) and \(x=\mathfrak{t}^{(n)}_{\mathfrak{t}-t(1)}\). 
It follows that for \(n\) sufficiently large (such that \(\mathfrak{t}^{(n)}_{\mathfrak{t}(1)}+s\geq\sqrt{\log(n)}\)) under the event \(B^{(n)}_{\widetilde{\varepsilon}}\cap C_{\sqrt{\log(n)},\widetilde{ \varepsilon}}\) we have that \[a^{(n)}_{t} \leq\frac{n^{t(1)}}{\log^{\theta(1)}(n)c_{n}(t)}\int_{\mathfrak{t }^{(n)}_{\mathfrak{t}(1)}+s}^{\mathfrak{t}^{(n)}_{\mathfrak{t}^{(n)}_{ \mathfrak{t}(1)}+s}}2\alpha_{0}\mu^{(n)}_{0}\Big{(}W+2\widetilde{ \varepsilon}\Big{)}e^{\big{(}\lambda_{0}-\lambda^{(n)}_{1}\big{)}u}du\] \[\leq 2\alpha_{0}\left(n^{t(1)}\mu^{(n)}_{0}\right)\left(W+2 \widetilde{\varepsilon}\right)\left(\mathbb{1}_{\left\{\lambda_{1}<\lambda_{ 0}\right\}}\frac{1}{\lambda_{0}-\lambda_{1}}+\mathbb{1}_{\left\{\lambda_{1}= \lambda_{0}\right\}}\frac{1}{\lambda_{0}}(t-t(1))\right),\] since \(\theta(1)=\mathbb{1}_{\left\{\lambda_{1}=\lambda_{0}\right\}}.\) By definition \(w_{1}(t)=2\alpha_{0}\mu_{0}\left(\mathbb{1}_{\left\{\lambda_{1}<\lambda_{0} \right\}}\frac{1}{\lambda_{0}-\lambda_{1}}+\mathbb{1}_{\left\{\lambda_{1}= \lambda_{0}\right\}}\frac{1}{\lambda_{0}}(t-t(1))\right)\). It implies that \[a^{(n)}_{t}-Ww_{1}(t)\leq\frac{w_{1}(t)}{\mu_{0}}W\left(n^{t(1) }\mu^{(n)}_{0}-\mu_{0}\right)+C\widetilde{\varepsilon},\] where \(C>0\) is a constant sufficiently large. Introduce the event \(D^{(n)}_{\widetilde{\varepsilon}}:=\left\{\sup_{t\in[t(1),T]}\left| \frac{w_{1}(t)}{\mu_{0}}W\left(n^{t(1)}\mu^{(n)}_{0}-\mu_{0}\right)\right| \leq\widetilde{\varepsilon}\right\}\). It satisfies \(\mathbb{P}\left(D^{(n)}_{\widetilde{\varepsilon}}\right)\underset{n\to \infty}{\rightarrow}1\) because \(W\) is finite almost surely and \(n^{t(1)}\mu^{(n)}_{0}\underset{n\to\infty}{\rightarrow}\mu_{0}\). Under \(B^{(n)}_{\widetilde{\varepsilon}}\cap C_{\sqrt{\log(n)},\widetilde{ \varepsilon}}\cap D^{(n)}_{\widetilde{\varepsilon}}\) we have for all \(t\in[t(1),T]\) \[a^{(n)}_{t}-Ww_{1}(t)\leq(C+1)\widetilde{\varepsilon}.\] With similar computations one can also obtain that under \(B^{(n)}_{\widetilde{\varepsilon}}\cap C_{\sqrt{\log(n)},\widetilde{ \varepsilon}}\cap D^{(n)}_{\widetilde{\varepsilon}}\) \[\sup_{t\in[t(1),T]}|a^{(n)}_{t}-Ww_{1}(t)|\leq(C+1)\widetilde{ \varepsilon},\] and choosing \(\widetilde{\varepsilon}>0\) such that \((C+1)\widetilde{\varepsilon}\leq\varepsilon\) we deduce that under the event \(B^{(n)}_{\widetilde{\varepsilon}}\cap C_{\sqrt{\log(n)},\widetilde{ \varepsilon}}\cap D^{(n)}_{\widetilde{\varepsilon}}\) \[\sup_{t\in[t(1),T]}|a^{(n)}_{t}-Ww_{1}(t)|\leq\varepsilon.\] It concludes the case \(i=1\) because \(\mathbb{P}\left(B^{(n)}_{\widetilde{\varepsilon}}\cap C_{\sqrt{\log(n)}, \widetilde{\varepsilon}}\cap D^{(n)}_{\widetilde{\varepsilon}}\right) \underset{n\to\infty}{\rightarrow}1\). **(ii) Case \(i\geq 2\):** Assume Proposition 3.1 is true for \(i-1\). 
In particular we have \(\mathbb{P}\left(B^{(n)}_{\widetilde{\varepsilon}}\right)\underset{n\to \infty}{\rightarrow}1\) with \[B^{(n)}_{\widetilde{\varepsilon}}:=\left\{\sup_{v\in[t(i),T]} \left|\frac{Z^{(n)}_{i-1}\left(\mathfrak{t}^{(n)}_{v}+s\right)}{n^{v-t(i-1)}e^ {\lambda_{0}s}\log^{\theta(i-1)}(n)}-Ww_{i-1}(v)\right|\leq\widetilde{ \varepsilon}\right\}.\] Using the change of variable \(u=\mathfrak{t}^{(n)}_{v}+s\) and that \(t(i-1)=t(i)-\ell(i-1)\) yields that \[a^{(n)}_{t}=\int_{t(i)}^{t}2\alpha_{i-1}\left(n^{\ell(i-1)}\mu^{(n)}_{i-1} \right)\frac{Z^{(n)}_{i-1}\left(\mathfrak{t}^{(n)}_{v}+s\right)}{n^{v-t(i-1)}e^ {\lambda_{0}s}\log^{\theta(i)}(n)}\frac{c_{n}(v,s)}{c_{n}(t,s)}\frac{\log(n)}{ \lambda_{0}}dv.\] Notice that when \(\lambda_{i}=\lambda_{0}\) we have that \(\theta(i-1)=\theta(i)-1\) and when \(\lambda_{i}<\lambda_{0}\) we have that \(\theta(i-1)=\theta(i)\). In addition we use that \(v\mapsto c_{n}(v,s)\) and \(w_{i-1}\) are non-decreasing functions and then applied similar computation as in (55) replacing the index \(1\) by i to find, under \(B_{\widetilde{\varepsilon}}^{(n)}\), that \[a_{t}^{(n)}\leq 2\alpha_{i-1}\left(n^{\ell(i-1)}\mu_{i-1}^{(n)}\right)\Bigg{[} \mathbbm{1}_{\{\lambda_{i}<\lambda_{0}\}}\frac{Ww_{i-1}(t)+\widetilde{ \varepsilon}}{\lambda_{0}-\lambda_{i}}+\mathbbm{1}_{\{\lambda_{i}=\lambda_{0} \}}W\frac{1}{\lambda_{0}}\int_{t(i)}^{t}\left(w_{i-1}(v)+\widetilde{ \varepsilon}\right)dv\Bigg{]}.\] By definition (see (9) and Remark 2.2) \[w_{i}(t)=2\alpha_{i-1}\mu_{i-1}\left(\mathbbm{1}_{\{\lambda_{i}<\lambda_{0}\} }\frac{w_{i-1}(t)}{\lambda_{0}-\lambda_{i}}+\mathbbm{1}_{\{\lambda_{i}=\lambda _{0}\}}\frac{1}{\lambda_{0}}\int_{t(i)}^{t}w_{i-1}(u)du\right),\] it follows that under the event \(C_{\widetilde{\varepsilon}}^{(n)}:=\left\{W|n^{\ell(i-1)}\mu_{i-1}^{(n)}-\mu_{ i-1}|\leq\widetilde{\varepsilon}\right\}\) we obtain that for all \(t\leq T\) \[a_{t}^{(n)}-Ww_{i}(t) \leq 2\alpha_{i-1}\Bigg{[}\mathbbm{1}_{\{\lambda_{i}<\lambda_{0} \}}\frac{1}{\lambda_{0}-\lambda_{1}}\left(w_{i-1}(T)+\left(n^{\ell(i-1)}\mu_{ i-1}^{(n)}\right)\right)\] \[\qquad\qquad\qquad\qquad+\mathbbm{1}_{\{\lambda_{i}=\lambda_{0} \}}\frac{1}{\lambda_{0}}\left(\int_{t(i)}^{T}w_{i-1}(u)du+T\left(n^{\ell(i-1)} \mu_{i-1}^{(n)}\right)\right)\Bigg{]}\widetilde{\varepsilon}\] \[\leq C\widetilde{\varepsilon},\] where \(C\) is a positive constant depending only on the parameters and on \(T\) but which is independent from \(n\). Recalling that \(n^{\ell(i-1)}\mu_{i-1}^{(n)}\) converges and that \(W\) is finite almost surely (see (20)) we obtain that \(C_{\widetilde{\varepsilon}}^{(n)}\) satisfies \(\mathbb{P}\left(C_{\widetilde{\varepsilon}}^{(n)}\right)\underset{n\to\infty}{ \rightarrow}1\). Then choosing \(\widetilde{\varepsilon}>0\) such that \(C\widetilde{\varepsilon}\leq\varepsilon\), we have shown that under \(B_{\widetilde{\varepsilon}}^{(n)}\cap C_{\widetilde{\varepsilon}}^{(n)}\) \[\sup_{t\in[t(i),T]}a_{t}^{(n)}-Ww_{i}(t)\leq\varepsilon.\] With similar computations one can also obtain that under \(B_{\widetilde{\varepsilon}}^{(n)}\cap C_{\widetilde{\varepsilon}}^{(n)}\) \[\sup_{t\in[t(i),T]}|a_{t}^{(n)}-Ww_{i}(t)|\leq\varepsilon.\] We conclude the proof with the fact that \(\mathbb{P}\left(C_{\widetilde{\varepsilon}}^{(n)}\right)\underset{n\to \infty}{\rightarrow}1\) and \(\mathbb{P}\left(B_{\widetilde{\varepsilon}}^{(n)}\right)\underset{n\to \infty}{\rightarrow}1\) by the induction assumption. 
Since \(\lambda_{0}\geq\lambda_{i}^{(n)}\), the term (51) satisfies \[\mathbb{P}\left(\sup_{t\in[t(i),T]}\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{(i)}^{ (n)}+s\right)}{\log^{\theta(i)}(n)e^{\left(\lambda_{0}-\lambda_{i}^{(n)}\right) \mathfrak{t}_{t-(i)}^{(n)}}e^{\lambda_{0}s}}\geq\varepsilon\right)\leq \mathbb{P}\left(\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{(i)}^{(n)}+s\right)}{\log ^{\theta(i)}(n)e^{\lambda_{0}s}}\geq\varepsilon\right)\underset{n\to \infty}{\longrightarrow}0,\] where the convergence is obtained applying Lemma 3.5, which is possible because assuming vertex \(i\) to be neutral gives that \(\theta(i)=\theta(i-1)+1\). This ends the proof of (27). Let us now deal with (28). **(ii) Deleterious case:** Assume that \(\lambda_{i}<\lambda_{0}\). Let \(0<T_{1}<T_{2}\). For all \(t\in[T_{1},T_{2}]\) \[\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)}{n^{t} \log^{\theta(i)}(n)e^{\lambda_{0}s}}=\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)+ t}^{(n)}+s\right)e^{-\lambda_{i}^{(n)}\left(\mathfrak{t}_{t(i)+t}^{(n)}+s \right)}-Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)}^{(n)}+s\right)e^{-\lambda_{i}^{(n )}\left(\mathfrak{t}_{t(i)}^{(n)}+s\right)}}{n^{-\frac{\lambda_{i}^{(n)}}{ \lambda_{0}}t(i)}e^{\left(\lambda_{0}-\lambda_{i}^{(n)}\right)\left(\mathfrak{t }_{t}^{(n)}+s\right)}\log^{\theta(i)}(n)}\] \[\qquad\qquad\qquad+\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)}^{(n )}+s\right)}{n^{\frac{\lambda_{0}-\lambda_{0}^{(n)}}{\lambda_{0}}}e^{\lambda_ {0}s}\log^{\theta(i)}(n)}\] \[=\frac{M_{i}^{(n)}\left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)-M_ {i}^{(n)}\left(\mathfrak{t}_{t(i)}^{(n)}+s\right)+\int_{\mathfrak{t}_{t(i)+s }^{(n)}+s}^{\mathfrak{t}_{t(i)}^{(n)}+s}2\alpha_{i-1}\mu_{i-1}^{(n)}e^{- \lambda_{i}^{(n)}s}Z_{i-1}^{(n)}(s)ds}{n^{-\frac{\lambda_{i}^{(n)}}{\lambda_{0 }}t(i)}e^{\left(\lambda_{0}-\lambda_{i}^{(n)}\right)\left(\mathfrak{t}_{t}^{(n) }+s\right)}\log^{\theta(i)}(n)}\] \[\qquad\qquad\qquad+\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)}^{(n )}+s\right)}{n^{\frac{\lambda_{0}-\lambda_{0}^{(n)}}{\lambda_{0}}}e^{\lambda_ {0}s}\log^{\theta(i)}(n)}.\] Then it allows to write \[\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)} \left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)}{n^{t}\log^{\theta(i)}(n)e^{\lambda _{0}s}}-Ww_{i}(t(i)+t)\right|\geq 3\varepsilon\right)\] \[\leq\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left|\frac{M_{i}^{(n) }\left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)-M_{i}^{(n)}\left(\mathfrak{t}_{t(i )}^{(n)}+s\right)}{n^{-\frac{\lambda_{0}^{(n)}}{\lambda_{0}}t(i)}e^{\left( \lambda_{0}-\lambda_{i}^{(n)}\right)\left(\mathfrak{t}_{t}^{(n)}+s\right)} \log^{\theta(i)}(n)}\right|\geq\varepsilon\right) \tag{56}\] \[\qquad\qquad+\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left|\frac{ \int_{\mathfrak{t}_{t(i)+s}^{(n)}+s}^{\mathfrak{t}_{t(i)}^{(n)}+s}2\alpha_{i-1 }\mu_{i-1}^{(n)}e^{-\lambda_{i}^{(n)}s}Z_{i-1}^{(n)}(s)ds}{n^{-\frac{\lambda _{0}^{(n)}}{\lambda_{0}}t(i)}n^{\frac{\lambda_{0}-\lambda_{i}^{(n)}}{\lambda_ {0}}}e^{\left(\lambda_{0}-\lambda_{i}^{(n)}\right)s}\log^{\theta(i)}(n)}-Ww_{i }(t(i)+t)\right|\geq\varepsilon\right)\] (57) \[\qquad\qquad+\mathbb{P}\left(\frac{Z_{i}^{(n)}\left(\mathfrak{t}_ {t(i)}^{(n)}+s\right)}{n^{T_{1}\frac{\lambda_{0}-\lambda_{0}^{(n)}}{\lambda_ {0}}}e^{\lambda_{0}s}\log^{\theta(i)}(n)}\geq\varepsilon\right) \tag{58}\] For the convergence to \(0\) of the term (56), we use first that \(\lambda_{i}^{(n)}\leq\lambda_{i}<\lambda_{0}\) to simplify the denominator. 
Then we apply the Maximal inequality to the martingale \(\left(M_{i}^{(n)}\left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)-M_{i}^{(n)}\left( \mathfrak{t}_{t(i)}^{(n)}+s\right)\right)_{t\geq 0}\) to obtain \[\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left|\frac{M_{i}^{(n)} \left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)-M_{i}^{(n)}\left(\mathfrak{t}_{t(i )}^{(n)}+s\right)}{n^{-\frac{\lambda_{0}^{(n)}}{\lambda_{0}}t(i)}e^{\left( \lambda_{0}-\lambda_{i}^{(n)}\right)\left(\mathfrak{t}_{t}^{(n)}+s\right)} \log^{\theta(i)}(n)}\right|\geq\varepsilon\right) \tag{59}\] \[\leq\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left|\frac{M_{i}^{(n) }\left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)-M_{i}^{(n)}\left(\mathfrak{t}_{t(i )}^{(n)}+s\right)}{n^{-\frac{\lambda_{0}^{(n)}}{\lambda_{0}}t(i)}e^{\left( \lambda_{0}-\lambda_{i}\right)\left(\mathfrak{t}_{t}^{(n)}+s\right)}\log^{ \theta(i)}(n)}\right|\geq\varepsilon\right)\] \[\leq\frac{3e^{-\left(\lambda_{0}-\lambda_{i}\right)s}}{\varepsilon \log^{\theta(i)}(n)}n^{\frac{\lambda_{i}}{\lambda_{0}}t(i)}\sup_{t\in[T_{1},T_ {2}]}n^{-t\frac{\lambda_{0}-\lambda_{i}}{\lambda_{0}}}\sqrt{\mathbb{E}\left[ \left\langle M_{i}^{(n)}\right\rangle_{\mathfrak{t}_{t(i)+t}^{(n)}+s}-\left \langle M_{i}^{(n)}\right\rangle_{\mathfrak{t}_{t(i)}^{(n)}+s}\right]}.\] Applying Lemma 3.8 at times \(t_{1}^{(n)}=t(i)\) and \(t_{2}^{(n)}=t(i)+t\) we obtain that \[\sqrt{\mathbb{E}\left[\left\langle M_{i}^{(n)}\right\rangle_{t_{1}^ {(n)}+s}-\left\langle M_{i}^{(n)}\right\rangle_{t_{1}^{(n)}+s}\right]} \tag{60}\] \[\leq C\left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)^{\frac{\theta(i)} {2}}\sqrt{\mu_{\otimes,i}^{(n)}}\cdot\bigg{[}\mathbbm{1}_{\{\lambda_{0}>2 \lambda_{i}\}}n^{\frac{\lambda_{0}-2\lambda_{i}}{2\lambda_{0}}(t(i)+t)}e^{ \frac{\lambda_{0}-2\lambda_{i}}{2}s}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\mathbbm{1}_{\{ \lambda_{0}=2\lambda_{i}\}}\sqrt{\mathfrak{t}_{t(i)+t}^{(n)}+s}+\mathbbm{1}_{ \{\lambda_{i}<\lambda_{0}<2\lambda_{i}\}}n^{-\frac{2\lambda_{i}-\lambda_{0}}{ 2\lambda_{0}}t(i)}e^{-\frac{2\lambda_{i}-\lambda_{0}}{2}s}\bigg{]}.\] For all \(t\in[T_{1},T_{2}]\) we have the following auxiliary computations that will be used to get the result \[n^{\frac{\lambda_{1}}{\lambda_{0}}t(i)}\sqrt{\mu_{\otimes,i}^{( n)}}n^{-t\frac{\lambda_{0}-\lambda_{i}}{\lambda_{0}}}n^{\frac{\lambda_{0}-2 \lambda_{i}}{2\lambda_{0}}t(i)+t}\leq\frac{\sqrt{n^{t(i)}\mu_{\otimes,i}^{(n)} }}{n^{\frac{T_{1}}{\lambda_{0}}}}, \tag{61}\] \[\mathbbm{1}_{\{\lambda_{0}=2\lambda_{i}\}}n^{\frac{\lambda_{1}}{ \lambda_{0}}t(i)}\sqrt{\mu_{\otimes,i}^{(n)}}n^{-t\frac{\lambda_{0}-\lambda_{ i}}{\lambda_{0}}}\leq\frac{\sqrt{n^{t(i)}\mu_{\otimes,i}^{(n)}}}{n^{\frac{T_{1}}{ \lambda_{0}}}},\] \[n^{\frac{\lambda_{1}}{\lambda_{0}}t(i)}\sqrt{\mu_{\otimes,i}^{( n)}}n^{-t\frac{\lambda_{0}-\lambda_{i}}{\lambda_{0}}}n^{-\frac{2\lambda_{i}- \lambda_{0}}{2\lambda_{0}}t(i)}\leq\frac{\sqrt{n^{t(i)}\mu_{\otimes,i}^{(n)}} }{n^{T_{1}\frac{\lambda_{0}-\lambda_{i}}{\lambda_{0}}}}.\] Then combining (59), (60) and (61) we obtain \[\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left|\frac{M_{i}^{(n)} \left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)-M_{i}^{(n)}\left(\mathfrak{t}_{t(i )}^{(n)}+s\right)}{n^{-\frac{\lambda^{(n)}}{\lambda_{0}}t(i)}e^{\left(\lambda_ {0}-\lambda_{i}^{(n)}\right)\left(\mathfrak{t}_{t}^{(n)}+s\right)}\log^{\theta (i)}(n)}\right|\geq\varepsilon\right)\] \[\leq\frac{3Ce^{-(\lambda_{0}-\lambda_{i})s}}{\varepsilon\log^{ \theta(i)/2}(n)}\left(\frac{\mathfrak{t}_{t(i)+T_{2}}^{(n)}+s}{\log(n)}\right)^ {\frac{\theta(i)}{2}}\] 
\[\cdot\sup_{t\in[T_{1},T_{2}]}n^{\frac{\lambda_{i}}{\lambda_{0}}t(i)}n^{-t\frac{\lambda_{0}-\lambda_{i}}{\lambda_{0}}}\sqrt{\mu_{\otimes,i}^{(n)}}\bigg{[}\mathbbm{1}_{\{\lambda_{0}>2\lambda_{i}\}}n^{\frac{\lambda_{0}-2\lambda_{i}}{2\lambda_{0}}(t(i)+t)}e^{\frac{\lambda_{0}-2\lambda_{i}}{2}s}+\mathbbm{1}_{\{\lambda_{0}=2\lambda_{i}\}}\sqrt{\mathfrak{t}_{t(i)+t}^{(n)}+s}\]
\[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\mathbbm{1}_{\{\lambda_{i}<\lambda_{0}<2\lambda_{i}\}}n^{-\frac{2\lambda_{i}-\lambda_{0}}{2\lambda_{0}}t(i)}e^{-\frac{2\lambda_{i}-\lambda_{0}}{2}s}\bigg{]}\]
\[\leq\frac{3Ce^{-(\lambda_{0}-\lambda_{i})s}}{\varepsilon\log^{\theta(i)/2}(n)}\left(\frac{\mathfrak{t}_{t(i)+T_{2}}^{(n)}+s}{\log(n)}\right)^{\frac{\theta(i)}{2}}\sqrt{n^{t(i)}\mu_{\otimes,i}^{(n)}}\]
\[\cdot\left(\mathbbm{1}_{\{\lambda_{0}>2\lambda_{i}\}}e^{\frac{\lambda_{0}-2\lambda_{i}}{2}s}+\mathbbm{1}_{\{\lambda_{0}=2\lambda_{i}\}}\frac{\sqrt{\mathfrak{t}_{t(i)+T_{2}}^{(n)}+s}}{n^{\frac{T_{1}}{2}}}+\mathbbm{1}_{\{\lambda_{i}<\lambda_{0}<2\lambda_{i}\}}\frac{e^{-\frac{2\lambda_{i}-\lambda_{0}}{2}s}}{n^{T_{1}\frac{\lambda_{0}-\lambda_{i}}{\lambda_{0}}}}\right)\]
\[\xrightarrow[n\to\infty]{}0.\]
The term (57) converges to \(0\) according to Lemma 3.9. The convergence to \(0\) of the term (58) is obtained by applying Lemma 3.5 with \(\psi_{n}=n^{T_{1}\frac{\lambda_{0}-\lambda_{i}}{\lambda_{0}}}e^{\lambda_{0}s}\). This ends the proof of (28) and thus the proof of Proposition 3.1.

#### 3.2.2 Uniform control on the parameter \(\mathbf{s}\)

In this subsection we will prove (11) and (12) from (27) and (28), using an idea from the proof of Lemma 3 of [15]. Define \(u_{s}^{(n)}:=t+\frac{s-M}{\log(n)}\lambda_{0}\), so that \(\mathfrak{t}_{t}^{(n)}+s=\mathfrak{t}_{u_{s}^{(n)}}^{(n)}+M\). Notice that \[0\leq t-u_{s}^{(n)}\leq\frac{2M}{\log(n)}\lambda_{0}.\] We start by showing (12). We will use that \[n^{t}\log^{\theta(i)}(n)e^{\lambda_{0}s}=n^{u_{s}^{(n)}}\log^{\theta(i)}(n)e^{\lambda_{0}M}.\] This gives
\[\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)}{n^{t}\log^{\theta(i)}(n)e^{\lambda_{0}s}}-Ww_{i}(t(i)+t)\right|\leq\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)+u_{s}^{(n)}}^{(n)}+M\right)}{n^{u_{s}^{(n)}}\log^{\theta(i)}(n)e^{\lambda_{0}M}}-Ww_{i}\left(t(i)+u_{s}^{(n)}\right)\right|\]
\[\qquad\qquad+W\Big{|}w_{i}(t(i)+t)-w_{i}\left(t(i)+u_{s}^{(n)}\right)\Big{|}.\]
Since \(w_{i}(t(i)+\cdot)\) is a polynomial function, there exists \(C_{i}>0\) such that for all \(t\leq T_{2}\) and \(s\in[-M,M]\)
\[\left|w_{i}(t(i)+t)-w_{i}\left(t(i)+u_{s}^{(n)}\right)\right|\leq\frac{C_{i}}{\log(n)}\underset{n\to\infty}{\longrightarrow}0.
\tag{62}\] Let \(0<\widetilde{T}_{1}<T_{1}\). Then, for \(n\) sufficiently large such that \(u_{s}^{(n)}\geq\widetilde{T}_{1}\) for all \((t,s)\in[T_{1},T_{2}]\times[-M,M]\), we have
\[\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)}{n^{t}\log^{\theta(i)}(n)e^{\lambda_{0}s}}-Ww_{i}(t(i)+t)\right|\leq\sup_{x\in[\widetilde{T}_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)+x}^{(n)}+M\right)}{n^{x}\log^{\theta(i)}(n)e^{\lambda_{0}M}}-Ww_{i}(t(i)+x)\right|\]
\[\qquad\qquad+W\frac{C_{i}}{\log(n)}.\]
Hence we get for \(n\) sufficiently large
\[\mathbb{P}\Bigg{(}\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)+t}^{(n)}+s\right)}{n^{t}\log^{\theta(i)}(n)e^{\lambda_{0}s}}-Ww_{i}(t(i)+t)\right|\geq 2\varepsilon\Bigg{)}\]
\[\leq\mathbb{P}\Bigg{(}\sup_{x\in[\widetilde{T}_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t(i)+x}^{(n)}+M\right)}{n^{x}\log^{\theta(i)}(n)e^{\lambda_{0}M}}-Ww_{i}(t(i)+x)\right|\geq\varepsilon\Bigg{)} \tag{63}\]
\[+\mathbb{P}\left(W\geq\frac{\varepsilon\log(n)}{C_{i}}\right), \tag{64}\]
from which (12) is obtained. Indeed, the term (63) converges to \(0\) according to Equation (28), and (64) converges to \(0\) since \(W\) is finite almost surely (see (20)). We now show Equation (11). We have
\[\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{i}^{(n)}(t,s)}-Ww_{i}(t)\right| \tag{65}\]
\[\leq\mathbb{1}_{\{t\in[0,t(i)-\gamma_{n}^{-1}(i))\}}Z_{i}^{(n)}\left(\mathfrak{t}_{u_{s}^{(n)}}^{(n)}+M\right)+\mathbb{1}_{\{t\in[t(i)-\gamma_{n}^{-1}(i),t(i))\}}\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{u_{s}^{(n)}}^{(n)}+M\right)}{\psi_{n}(i)\log^{\theta(i-1)}(n)}\]
\[\qquad\qquad+\mathbb{1}_{\{t\geq t(i),\,u_{s}^{(n)}<t(i)\}}\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{u_{s}^{(n)}}^{(n)}+M\right)}{\log^{\theta(i)}(n)e^{\lambda_{0}M}}e^{(t(i)-u_{s}^{(n)})\log(n)}-Ww_{i}(t)\right|\]
\[\qquad\qquad+\mathbb{1}_{\{u_{s}^{(n)}\geq t(i)\}}\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{u_{s}^{(n)}}^{(n)}+M\right)}{n^{u_{s}^{(n)}-t(i)}\log^{\theta(i)}(n)e^{\lambda_{0}M}}-Ww_{i}\left(u_{s}^{(n)}\right)\right|+\mathbb{1}_{\{u_{s}^{(n)}\geq t(i)\}}W\Big{|}w_{i}(t)-w_{i}\left(u_{s}^{(n)}\right)\Big{|}.\]
As in Eq. (62), there exists \(C_{i}\) such that for all \((t,s)\in[0,T]\times[-M,M]\)
\[\left|w_{i}(t)-w_{i}\left(u_{s}^{(n)}\right)\right|\leq\frac{C_{i}}{\log(n)}.\]
In the case \(t\geq t(i)\) and \(u_{s}^{(n)}<t(i)\), we have that \(t(i)-u_{s}^{(n)}\leq\frac{2M}{\log(n)}\lambda_{0}\), which in particular implies that \(e^{(t(i)-u_{s}^{(n)})\log(n)}\leq e^{2M\lambda_{0}}\). Then \(w_{i}\left(u_{s}^{(n)}\right)=0\) together with the latter inequality implies that \(w_{i}(t)\leq\frac{C_{i}}{\log(n)}\). Combining these arguments it follows that
\[\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{u_{s}^{(n)}}^{(n)}+M\right)}{\log^{\theta(i)}(n)e^{\lambda_{0}M}}e^{(t(i)-u_{s}^{(n)})\log(n)}-Ww_{i}(t)\right|\leq\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{u_{s}^{(n)}}^{(n)}+M\right)}{\log^{\theta(i)}(n)}e^{\lambda_{0}M}+W\frac{C_{i}}{\log(n)}.
\tag{66}\] Finally, using (65) and (66), we obtain for all \((t,s)\in[0,T]\times[-M,M]\)
\[\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{i}^{(n)}(t,s)}-Ww_{i}(t)\right|\leq\sup_{x\in\left[0,t(i)-\gamma_{n}^{-1}(i)\right)}Z_{i}^{(n)}\left(\mathfrak{t}_{x}^{(n)}+M\right)+\sup_{x\in\left[0,t(i)\right]}\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{x}^{(n)}+M\right)}{\psi_{n}(i)\log^{\theta(i-1)}(n)}\]
\[+\sup_{x\in[0,t(i)]}\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{x}^{(n)}+M\right)}{\log^{\theta(i)}(n)}e^{\lambda_{0}M}+W\frac{2C_{i}}{\log(n)}+\sup_{x\in[t(i),T]}\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{x}^{(n)}+M\right)}{n^{x-t(i)}\log^{\theta(i)}(n)e^{\lambda_{0}M}}-Ww_{i}(x)\right|.\]
Then it follows that
\[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{i}^{(n)}(t,s)}-Ww_{i}(t)\right|\geq 5\varepsilon\right)\]
\[\leq\mathbb{P}\left(\sup_{x\in[0,t(i)-\gamma_{n}^{-1}(i))}Z_{i}^{(n)}\left(\mathfrak{t}_{x}^{(n)}+M\right)\geq\varepsilon\right)+\mathbb{P}\left(\sup_{x\in[0,t(i)]}\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{x}^{(n)}+M\right)}{\psi_{n}(i)\log^{\theta(i-1)}(n)}\geq\varepsilon\right) \tag{67}\]
\[\qquad+\mathbb{P}\left(\sup_{x\in[0,t(i)]}\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{x}^{(n)}+M\right)}{e^{-\lambda_{0}M}\log^{\theta(i)}(n)}\right|\geq\varepsilon\right)+\mathbb{P}\left(W\geq\frac{\varepsilon\log(n)}{2C_{i}}\right)\] (68)
\[\qquad+\mathbb{P}\left(\sup_{x\in[t(i),T]}\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{x}^{(n)}+M\right)}{n^{x-t(i)}\log^{\theta(i)}(n)e^{\lambda_{0}M}}-Ww_{i}(x)\right|\geq\varepsilon\right)\] (69)
\[\underset{n\to\infty}{\longrightarrow}0,\]
where the different convergences to \(0\) are obtained in the following way:

* for the first term of Equation (67), see Lemma 3.4,
* for the second term of Equation (67) and the first term of Equation (68), see Lemma 3.5, where in the second case we apply it with \(\psi_{n}=e^{-\lambda_{0}M}\log(n)\), which is possible because \(\theta(i)=\theta(i-1)+1\),
* for the second term of Equation (68), we use that \(W\) is finite almost surely, see Equation (20),
* and for the term of Equation (69), see Step 3 of the Neutral case of the proof of Proposition 3.1.

Finally, we have proven Equations (11) and (12) in the particular case of the infinite mono-directional graph.

### First-order asymptotics of the mutant sub-populations in the random time-scale (Theorem 2.1 (ii))

In this subsection we first show that the random time at which the total population reaches the size \(n^{t}\) behaves asymptotically as the random time at which the wild-type population reaches the size \(n^{t}\). This result is obtained uniformly in the time parameter \(t\), conditional on \(\{W>0\}\).

**Proposition 3.2**.: _Assume Equation (15). Then for all \(\varepsilon>0\) and \(0<T_{1}<T_{2}\)_ \[\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left(\eta_{t}^{(n)}-\sigma_{t}^{(n)}\right)\leq\varepsilon\middle|W>0\right)\underset{n\to\infty}{\longrightarrow}1.\]

Proof.: The proof will be done in two steps. We start by showing the result under a restricted conditioning. **Step (i)** In this step we will show that for all \(0<\delta_{1}<\delta_{2}\) and \(\varepsilon>0\) we have \[\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left(\eta_{t}^{(n)}-\sigma_{t}^{(n)}\right)\geq\varepsilon\Big{|}\delta_{1}<W<\delta_{2}\right)\underset{n\to\infty}{\longrightarrow}0.
\tag{70}\] Let \(0<\delta_{1}<\delta_{2}\). Then there exists \(M\in\mathbb{R}^{+}\) such that \[\mathbb{P}\left(\left|\frac{\log(W)}{\lambda_{0}}\right|\leq M\Big{|}\delta_{1}<W<\delta_{2}\right)=1. \tag{71}\] For all \(\varepsilon>0\) introduce the event \(A_{\varepsilon}^{(n)}:=\left\{\sup_{t\in[T_{1},T_{2}]}\left(\eta_{t}^{(n)}-\sigma_{t}^{(n)}\right)\geq\varepsilon\right\}\). Assume that there exists \(\varepsilon>0\) such that the sequence \(\left(\mathbb{P}\left(A_{\varepsilon}^{(n)}\Big{|}\delta_{1}<W<\delta_{2}\right)\right)_{n\in\mathbb{N}}\) does not converge to \(0\). This means that there exist \(\eta>0\) and an infinite subset \(N\subset\mathbb{N}\) such that for all \(n\in N\), \(\mathbb{P}\left(A_{\varepsilon}^{(n)}\Big{|}\delta_{1}<W<\delta_{2}\right)\geq\eta\). For all \(\widetilde{\varepsilon}>0\) introduce the event \[B_{\widetilde{\varepsilon}}^{(n)}:=\left\{\sup_{t\in[T_{1},T_{2}]}\left|\eta_{t}^{(n)}-\left(\mathfrak{t}_{t}^{(n)}-\frac{\log(W)}{\lambda_{0}}\right)\right|\leq\widetilde{\varepsilon}\right\},\] which satisfies \(\mathbb{P}\left(B_{\widetilde{\varepsilon}}^{(n)}\Big{|}\delta_{1}<W<\delta_{2}\right)\underset{n\to\infty}{\longrightarrow}1\) according to Lemma 3.2. From this fact, and because \(\sigma_{t}^{(n)}\leq\eta_{t}^{(n)}\) for all \(t>0\) almost surely, it follows that under \(B_{\widetilde{\varepsilon}}^{(n)}\) we have \(\sigma_{t}^{(n)}<\infty\) for all \(t\in[T_{1},T_{2}]\). Moreover, it also follows that under \(B_{\widetilde{\varepsilon}}^{(n)}\) we have \(Z_{0}^{(n)}\left(\eta_{t}^{(n)}\right)=n^{t}\) for all \(t\in[T_{1},T_{2}]\). In particular, under \(A_{\varepsilon}^{(n)}\) there exists \(t_{n}\in[T_{1},T_{2}]\) such that \(\eta_{t_{n}}^{(n)}-\sigma_{t_{n}}^{(n)}\geq\varepsilon\), which implies that \(Z_{0}^{(n)}\left(\sigma_{t_{n}}^{(n)}\right)\leq n^{t_{n}}e^{-\lambda_{0}\frac{\varepsilon}{2}}\), because otherwise, by the strong Markov property, this would contradict \(A_{\varepsilon}^{(n)}\). Combining these arguments, it follows that under \(A_{\varepsilon}^{(n)}\cap B_{\widetilde{\varepsilon}}^{(n)}\) we have \[\sum_{i\geq 1}Z_{i}^{(n)}\left(\sigma_{t_{n}}^{(n)}\right)=Z_{tot}^{(n)}\left(\sigma_{t_{n}}^{(n)}\right)-Z_{0}^{(n)}\left(\sigma_{t_{n}}^{(n)}\right)\geq n^{t_{n}}\left(1-e^{-\lambda_{0}\frac{\varepsilon}{2}}\right). \tag{72}\] But the result on the mutant sub-populations says exactly that, due to the power law mutation rates regime, the mutant sub-populations have a negligible size compared to the wild-type sub-population. More precisely, under the event \(A_{\varepsilon}^{(n)}\cap B_{\widetilde{\varepsilon}}^{(n)}\), using (71) and Proposition 3.1, we have
\[\sum_{i\geq 1}Z_{i}^{(n)}\left(\sigma_{t_{n}}^{(n)}\right) \leq\sup_{u\in[T_{1},t_{n}]}\sum_{i\geq 1}Z_{i}^{(n)}\left(\eta_{u}^{(n)}\right) \tag{73}\]
\[\leq\sup_{u\in[T_{1},t_{n}]}\sup_{s\in[-(M+\widetilde{\varepsilon}),M+\widetilde{\varepsilon}]}\sum_{i\geq 1}Z_{i}^{(n)}\left(\mathfrak{t}_{u}^{(n)}+s\right)\]
\[=o\left(n^{t_{n}}\right).\]
There is a contradiction between (72) and (73), so we have proven (70) for all \(\varepsilon>0\) and \(0<\delta_{1}<\delta_{2}\). **Step (ii)** Using a similar method as in Step 2 of the proof of Lemma 3.2, one can show that for all \(\varepsilon>0\) \[\mathbb{P}\left(A_{\varepsilon}^{(n)}\Big{|}W>0\right)\underset{n\to\infty}{\longrightarrow}0,\] which concludes the proof. In the following, we will prove the next proposition.
**Proposition 3.3**.: _Assume Equation (15), and let \(0<T_{1}<T_{2}\) and \(M>0\). Then we have:_

_(i) If \(\lambda_{i}=\lambda_{0}\),_ \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}\left(\rho_{t}^{(n)}+s\right)}{d_{i}^{(n)}(t,s)}-\mathbb{1}_{\{W>0\}}w_{i}(t)\right|\geq\varepsilon\right)\underset{n\to\infty}{\longrightarrow}0.\]

_(ii) If \(\lambda_{i}<\lambda_{0}\),_ \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}\left(\rho_{t(i)+t}^{(n)}+s\right)}{n^{t}\log^{\theta(i)}(n)e^{\lambda_{0}s}}-\mathbb{1}_{\{W>0\}}w_{i}(t(i)+t)\right|\geq\varepsilon\right)\underset{n\to\infty}{\longrightarrow}0.\]

These results correspond to (13) and (14) in the case of the mono-directional graph. The proof will be done assuming \(\lambda_{i}=\lambda_{0}\); the case \(\lambda_{i}<\lambda_{0}\) can be treated using similar reasoning, and is left to the reader.

Proof of Proposition 3.3.: Rewrite the quantity of interest as
\[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}(\rho_{t}^{(n)}+s)}{d_{i}^{(n)}(t,s)}-w_{i}(t)\mathbb{1}_{\{W>0\}}\right|\geq\varepsilon\right)\]
\[\leq\mathbb{P}\left(\{W>0\}\cap\left\{\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}(\rho_{t}^{(n)}+s)}{d_{i}^{(n)}(t,s)}-w_{i}(t)\right|\geq\varepsilon\right\}\right) \tag{74}\]
\[+\mathbb{P}\left(\{W=0\}\cap\left(\left\{K_{0}^{(n)}\left(\rho_{T_{2}}^{(n)}+M\right)\geq 1\right\}\cup\left\{H_{0}^{(n)}\left(\rho_{T_{2}}^{(n)}+M\right)\geq 1\right\}\right)\right), \tag{75}\]
where for the term (75) we use that a necessary condition for the mutant population of trait \(i\) to be strictly positive is that at least one mutational event from the wild-type population has happened.

**Step 1:** The convergence to \(0\) of (75) follows from proving that \[\mathbb{P}\left(\left\{\sup_{t\in\mathbb{R}^{+}}K_{0}^{(n)}(t)=0\right\}\cap\left\{\sup_{t\in\mathbb{R}^{+}}H_{0}^{(n)}(t)=0\right\}\left|W=0\right)\underset{n\to\infty}{\longrightarrow}1.\] Let us first show that \[\mathbb{P}\left(\sup_{t\in\mathbb{R}^{+}}K_{0}^{(n)}(t)\geq 1\Big{|}W=0\right)\underset{n\to\infty}{\longrightarrow}0.\] Notice that almost surely \[K_{0}^{(n)}(t)\leq\widetilde{K}^{(n)}(t):=\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbb{1}_{\left\{\theta\leq 2\alpha_{0}\mu_{0}^{(n)}Z_{0}(s^{-})\right\}}N_{0}(ds,d\theta),\quad\forall t\in\mathbb{R}^{+}.\] Then it follows that
\[\mathbb{P}\left(\sup_{t\in\mathbb{R}^{+}}K_{0}^{(n)}(t)\geq 1\Big{|}W=0\right)\leq\mathbb{P}\left(\sup_{t\in\mathbb{R}^{+}}\widetilde{K}^{(n)}(t)\geq 1\Big{|}W=0\right)\leq\mathbb{E}\left[\sup_{t\in\mathbb{R}^{+}}\widetilde{K}^{(n)}(t)\wedge 1\Big{|}W=0\right]\underset{n\to\infty}{\longrightarrow}0,\]
by dominated convergence. Indeed, for all \(\omega\in\{W=0\}\) there exists \(T(\omega)\in\mathbb{R}^{+}\) such that for all \(t\geq T(\omega)\), \(Z_{0}(t)=0\); combined with \(\mu_{0}^{(n)}\to 0\), it follows that there exists \(N(\omega)\in\mathbb{N}\) such that for all \(n\geq N(\omega)\) we have \(\sup_{t\in\mathbb{R}^{+}}\widetilde{K}^{(n)}(t)=0\). One concludes the proof of Step 1 by showing that \(\mathbb{P}\left(\sup_{t\in\mathbb{R}^{+}}H_{0}^{(n)}(t)\geq 1\Big{|}W=0\right)\underset{n\to\infty}{\longrightarrow}0\) using a similar reasoning.

**Step 2:** We are going to show in three steps that (74) converges to \(0\).
**Step 2) (i)** We start by showing that for all \(\varepsilon>0\) and \(\eta>0\) we have \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}(\rho_{t}^{(n)}+s)}{d_{i}^{(n)}(t,s)}e^{-\lambda_{0}\left[\rho_{t}^{(n)}-\mathfrak{t}_{t}^{(n)}\right]}-Ww_{i}(t)\right|\geq\varepsilon\bigg{|}W>\eta\right)\underset{n\to\infty}{\longrightarrow}0.\] We have
\[\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left|\rho_{t}^{(n)}-\mathfrak{t}_{t}^{(n)}\right|\geq M\bigg{|}W>\eta\right)\leq\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left|\eta_{t}^{(n)}-\left(\mathfrak{t}_{t}^{(n)}-\frac{\log(W)}{\lambda_{0}}\right)\right|\geq M/2\bigg{|}W>\eta\right)\]
\[+\mathbb{P}\left(\frac{|\log(W)|}{\lambda_{0}}\geq M/2\bigg{|}W>\eta\right).\]
Using the distribution of \(W\) given in Equation (20) and Lemma 3.2, it follows that \[\forall\delta>0,\exists M\in\mathbb{R}^{+},\exists N_{1}\in\mathbb{N},\forall n\geq N_{1},\quad\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left|\rho_{t}^{(n)}-\mathfrak{t}_{t}^{(n)}\right|\geq M\bigg{|}W>\eta\right)\leq 2\delta. \tag{76}\] Now we can apply Theorem 2.1 (i), Eq. (11), to get that there exists \(N_{2}\in\mathbb{N}\) such that for all \(n\geq N_{2}\) \[\mathbb{P}\Bigg{(}\sup_{s\in[-M,M]}\sup_{s_{1}\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s+s_{1}\right)}{d_{i}^{(n)}(t,s+s_{1})}-Ww_{i}(t)\right|\geq\varepsilon\Bigg{)}\leq\delta. \tag{77}\] Consequently, using Equations (76) and (77), we have proven that for all \(\delta>0\) there exists \(N:=\max(N_{1},N_{2})\in\mathbb{N}\) such that for all \(n\geq N\) \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}\left(\rho_{t}^{(n)}+s\right)}{d_{i}^{(n)}(t,s)}e^{-\lambda_{0}\left[\rho_{t}^{(n)}-\mathfrak{t}_{t}^{(n)}\right]}-Ww_{i}(t)\right|\geq\varepsilon\bigg{|}W>\eta\right)\leq 3\delta,\] which ends Step 2) (i).
**Step 2) (ii):** Now we are going to prove that \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}(\rho_{t}^{(n)}+s)}{d_{i}^{(n)}(t,s)}-w_{i}(t)\right|\geq\varepsilon\bigg{|}W>\eta\right)\underset{n\to\infty}{\longrightarrow}0.\] Let \(\delta>0\) and \(0<\widetilde{\varepsilon}<\eta\). According to Remark 3.1, Proposition 3.2, and Step 2) (i), there exists \(N\in\mathbb{N}\) such that for all \(n\geq N\) we have \(\mathbb{P}\left(A_{\widetilde{\varepsilon}}^{(n)}\cap B_{\widetilde{\varepsilon}}^{(n)}|W>\eta\right)\geq 1-\delta\), where
\[A_{\widetilde{\varepsilon}}^{(n)}:=\Bigg{\{}\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}(\rho_{t}^{(n)}+s)}{d_{i}^{(n)}(t,s)}e^{-\lambda_{0}\left[\rho_{t}^{(n)}-\mathfrak{t}_{t}^{(n)}\right]}-Ww_{i}(t)\right|\leq\widetilde{\varepsilon}\Bigg{\}},\]
\[B_{\widetilde{\varepsilon}}^{(n)}:=\Bigg{\{}\sup_{t\in[T_{1},T_{2}]}\left|e^{-\lambda_{0}\left(\rho_{t}^{(n)}-\mathfrak{t}_{t}^{(n)}\right)}-W\right|\leq\widetilde{\varepsilon}\Bigg{\}}.\]
In particular, conditioned on \(\{W>\eta\}\), under the event \(A_{\widetilde{\varepsilon}}^{(n)}\cap B_{\widetilde{\varepsilon}}^{(n)}\) we have that for all \(t\in[T_{1},T_{2}]\) and all \(s\in[-M,M]\)
\[\frac{Z_{i}^{(n)}(\rho_{t}^{(n)}+s)}{d_{i}^{(n)}(t,s)}-w_{i}(t)\leq\left(\widetilde{\varepsilon}+w_{i}(t)W\right)e^{\lambda_{0}\left(\rho_{t}^{(n)}-\mathfrak{t}_{t}^{(n)}\right)}-w_{i}(t)\leq\frac{\widetilde{\varepsilon}}{W-\widetilde{\varepsilon}}+w_{i}(t)\left(\frac{W}{W-\widetilde{\varepsilon}}-1\right)\leq\left(1+w_{i}(T_{2})\right)\frac{\widetilde{\varepsilon}}{\eta-\widetilde{\varepsilon}}\underset{\widetilde{\varepsilon}\to 0}{\longrightarrow}0,\]
so that we can choose \(\widetilde{\varepsilon}\) arbitrarily small such that this upper bound is smaller than \(\varepsilon\). By a similar argument for the lower bound, we get that conditioned on \(\{W>\eta\}\), under the event \(A_{\widetilde{\varepsilon}}^{(n)}\cap B_{\widetilde{\varepsilon}}^{(n)}\), for all \(t\in[T_{1},T_{2}]\) and all \(s\in[-M,M]\)
\[\frac{Z_{i}^{(n)}(\rho_{t}^{(n)}+s)}{d_{i}^{(n)}(t,s)}-w_{i}(t)\geq-\left(1+w_{i}(T_{2})\right)\frac{\widetilde{\varepsilon}}{\eta-\widetilde{\varepsilon}}\underset{\widetilde{\varepsilon}\to 0}{\longrightarrow}0.\]
Consequently, by taking an adequate \(\widetilde{\varepsilon}>0\), we have shown that there exists \(N\in\mathbb{N}\) such that for all \(n\geq N\) we have \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}(\rho_{t}^{(n)}+s)}{d_{i}^{(n)}(t,s)}-w_{i}(t)\right|\leq\varepsilon\bigg{|}W>\eta\right)\geq 1-\delta.\]

**Step 2) (iii):** Introduce the notation \(A_{\varepsilon}^{(n)}:=\left\{\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{i}^{(n)}(\rho_{t}^{(n)}+s)}{d_{i}^{(n)}(t,s)}-w_{i}(t)\right|\geq\varepsilon\right\}\). To complete the proof of Step 2 we will show that \(\mathbb{P}\left(A_{\varepsilon}^{(n)}\cap\{W>0\}\right)\underset{n\to\infty}{\longrightarrow}0.\) We have \[\mathbb{P}\left(A_{\varepsilon}^{(n)}\cap\{W>0\}\right)\leq\mathbb{P}\left(A_{\varepsilon}^{(n)}\cap\{W>\eta\}\right)+\mathbb{P}\left(0<W<\eta\right).\] We obtain using Step 2) (ii) that \[\limsup_{n\to\infty}\mathbb{P}\left(A_{\varepsilon}^{(n)}\cap\{W>0\}\right)\leq\mathbb{P}\left(0<W<\eta\right),\] and taking the limit as \(\eta\to 0\) completes the proof.
## 4 First-order asymptotics of the mutant sub-populations for a general finite trait space (Theorem 2.1)

As in Section 3, the sequence \(\left(Z_{v}^{(n)},v\in V\right)_{n\in\mathbb{N}}\) is mathematically constructed using independent PPMs. In the construction, each population of trait \(v\) is decomposed as the sum of sub-populations indexed by the paths on the graph starting from trait \(0\) and leading to the trait \(v\). An exact definition will be given below. Notice in particular that, due to backward mutations, or more generally to cycles, there may be countably infinitely many paths from trait \(0\) to trait \(v\). Among wild-type individuals, we call _primary cell population_, denoted by \(\left(Z_{(0)}^{(n)}(t)\right)_{t\geq 0}\), all individuals that have no mutant among their ancestors going back to the initial cell. It plays a specific role in the mathematical analysis, and corresponds to \(Z_{0}^{(n)}\) in the case of the mono-directional graph.

**Definition 4.1**.: _(Paths and neighbors) Define the set of all paths on the graph \(V\) starting from trait \(0\) as \(\Gamma(V)\). For a trait \(v\in V\), the set of traits to which a cell of trait \(v\) may mutate is defined as_ \[N(v):=\{u\in V:(v,u)\in E\}.\] _For a path \(\gamma=(0,\cdots,\gamma(k))\in\Gamma(V)\), denote the last trait \(\gamma(k)\) visited by \(\gamma\) and the sub-path which does not visit this last trait by_
\[\gamma_{end}:=\gamma(k),\]
\[\overline{\gamma}:=\left(0,\cdots,\gamma(k-1)\right).\]
_Introduce the sets of tuples consisting of a path on \(V\) starting from trait \(0\) together with one, respectively two, neighbors of the last trait of \(\gamma\), as_
\[N_{\Gamma}:=\{(\gamma,v):\gamma\in\Gamma(V),v\in N(\gamma_{end})\},\]
\[M_{\Gamma}:=\{(\gamma,(v,u)):\gamma\in\Gamma(V),(v,u)\in N(\gamma_{end})\times N(\gamma_{end})\}.\]
Then introduce the birth, death and growth rates of any lineage of a cell of trait \(v\) as
\[\alpha^{(n)}(v)=\alpha(v)\left(1-\overline{\mu}^{(n)}(v)\right)^{2},\]
\[\beta^{(n)}(v)=\beta(v)+\alpha(v)\sum_{(u,w)\in N(v)\times N(v)}\mu^{(n)}(v,u)\mu^{(n)}(v,w),\]
\[\lambda^{(n)}(v)=\alpha^{(n)}(v)-\beta^{(n)}(v),\]
\[\overline{\mu}^{(n)}(v):=\sum_{u\in V:(v,u)\in E}\mu^{(n)}(v,u).\]
Notice that
\[\lambda^{(n)}(v)=\lambda(v)-2\alpha(v)\overline{\mu}^{(n)}(v)+\alpha(v)\left(\overline{\mu}^{(n)}(v)\right)^{2}-\alpha(v)\sum_{(u,w)\in N(v)^{2}}\mu^{(n)}(v,u)\mu^{(n)}(v,w)=\lambda(v)-2\alpha(v)\overline{\mu}^{(n)}(v).\]
Let \[Q^{b}_{(0)}(ds,d\theta),Q^{d}_{(0)}(ds,d\theta),\left(Q_{\gamma}(ds,d\theta)\right)_{\gamma\in\Gamma(V)},\left(Q_{\gamma,v}(ds,d\theta)\right)_{(\gamma,v)\in N_{\Gamma}},\left(Q_{\gamma,(v,u)}(ds,d\theta)\right)_{(\gamma,(v,u))\in M_{\Gamma}}\] be independent PPMs with intensity \(dsd\theta\).
The sub-population of primary cells is
\[Z^{(n)}_{(0)}(t):=1+\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbb{1}_{\left\{\theta\leq\alpha^{(n)}(0)Z^{(n)}_{(0)}(s^{-})\right\}}Q^{b}_{(0)}(ds,d\theta)-\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbb{1}_{\left\{\theta\leq\beta(0)Z^{(n)}_{(0)}(s^{-})\right\}}Q^{d}_{(0)}(ds,d\theta) \tag{78}\]
\[-\sum_{(v,u)\in N(0)\times N(0)}H^{(n)}_{(0),(v,u)}(t),\]
and for all \(\gamma\in\Gamma(V)\)
\[Z^{(n)}_{\gamma}(t):=\int_{0}^{t}\int_{\mathbb{R}^{+}}\Bigg{(}\mathbb{1}_{\left\{\theta\leq\alpha^{(n)}(\gamma_{end})Z^{(n)}_{\gamma}(s^{-})\right\}} \tag{79}\]
\[-\mathbb{1}_{\left\{\alpha^{(n)}(\gamma_{end})Z^{(n)}_{\gamma}(s^{-})\leq\theta\leq\left(\alpha^{(n)}(\gamma_{end})+\beta(\gamma_{end})\right)Z^{(n)}_{\gamma}(s^{-})\right\}}\Bigg{)}Q_{\gamma}(ds,d\theta)\]
\[+K^{(n)}_{\overline{\gamma},\gamma_{end}}(t)+2H^{(n)}_{\overline{\gamma},(\gamma_{end},\gamma_{end})}(t)+\sum_{v\in N(\overline{\gamma}_{end}),v\neq\gamma_{end}}\left(H^{(n)}_{\overline{\gamma},(\gamma_{end},v)}+H^{(n)}_{\overline{\gamma},(v,\gamma_{end})}\right)(t)\]
\[-\sum_{(v,u)\in N(\gamma_{end})\times N(\gamma_{end})}H^{(n)}_{\gamma,(v,u)}(t),\]
where \(\forall(\gamma,v)\in N_{\Gamma}\)
\[K^{(n)}_{\gamma,v}(t):=\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbb{1}_{\left\{\theta\leq 2\alpha(\gamma_{end})\mu^{(n)}(\gamma_{end},v)\left(1-\overline{\mu}^{(n)}(\gamma_{end})\right)Z^{(n)}_{\gamma}(s^{-})\right\}}Q_{\gamma,v}(ds,d\theta), \tag{80}\]
and \(\forall(\gamma,(v,u))\in M_{\Gamma}\)
\[H^{(n)}_{\gamma,(v,u)}(t):=\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbb{1}_{\left\{\theta\leq\alpha(\gamma_{end})\mu^{(n)}(\gamma_{end},v)\mu^{(n)}(\gamma_{end},u)Z^{(n)}_{\gamma}(s^{-})\right\}}Q_{\gamma,(v,u)}(ds,d\theta).\]
The process \(\left(K^{(n)}_{\gamma,v}(t)\right)_{t\in\mathbb{R}^{+}}\), resp. \(\left(H^{(n)}_{\gamma,\{v,u\}}(t):=H^{(n)}_{\gamma,(v,u)}(t)+H^{(n)}_{\gamma,(u,v)}(t)\right)_{t\in\mathbb{R}^{+}}\), counts the number of mutations up to time \(t\) from the sub-population indexed by \(\gamma\) leading to exactly one mutant daughter cell of trait \(v\), resp. two mutant daughter cells of traits \(\{v,u\}\). Hence the sub-population of trait \(v\in V\) is \[Z^{(n)}_{v}(t):=Z^{(n)}_{(0)}(t)\mathbb{1}_{\left\{v=0\right\}}+\sum_{\gamma\in P(v)}Z^{(n)}_{\gamma}(t),\] where \(P(v)\) is defined in Definition 2.3.
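Although the analysis is purely theoretical, the pathwise construction above is algorithmic and can be simulated directly. The following is a minimal Gillespie-type sketch, not part of the original analysis: it restricts to a mono-directional chain \(0\to 1\to\cdots\to d\), neglects the \(H\) events of order \((\mu^{(n)})^{2}\) in which both daughter cells mutate, and uses illustrative parameter values throughout. Under these simplifications, the algorithm is equal in law to the construction (78)-(80).

```python
import math
import random

# Minimal Gillespie-type sketch of the multitype birth-death process with
# mutation at division, restricted to a mono-directional chain 0 -> 1 -> ... -> d.
# Events where both daughters mutate (the H terms, of order mu^2) are neglected.
# alpha, beta, mu are placeholder lists for alpha(v), beta(v), mu^{(n)}(v, v+1).

def gillespie_chain(alpha, beta, mu, t_max, seed=0):
    """Simulate (Z_0, ..., Z_d) up to time t_max from one wild-type cell."""
    rng = random.Random(seed)
    d = len(alpha) - 1
    z = [1] + [0] * d
    t = 0.0
    while True:
        # Each cell of trait i divides at rate alpha[i] and dies at rate beta[i].
        rates = [(alpha[i] + beta[i]) * z[i] for i in range(d + 1)]
        total = sum(rates)
        if total == 0.0:
            break  # global extinction
        t += rng.expovariate(total)
        if t >= t_max:
            break
        # Choose the trait of the cell experiencing the next event.
        u, i = rng.random() * total, 0
        while u > rates[i]:
            u -= rates[i]
            i += 1
        if rng.random() < alpha[i] / (alpha[i] + beta[i]):
            # Division: with probability ~ 2*mu[i], exactly one of the two
            # daughters carries the mutation to trait i+1.
            if i < d and rng.random() < 2.0 * mu[i]:
                z[i + 1] += 1
            else:
                z[i] += 1
        else:
            z[i] -= 1  # death
    return z

# Example in the power-law mutation regime mu ~ n^{-ell}, observed at the
# time scale t*log(n)/lambda_0 with t = 1 (all values illustrative).
n, ell, lam0 = 10**4, 0.7, 1.0
print(gillespie_chain([1.5, 1.5], [0.5, 0.5], [n**(-ell), 0.0],
                      t_max=math.log(n) / lam0))
```

Averaging \(e^{-\lambda_{0}t}Z_{0}^{(n)}(t)\) over many independent runs of such a sketch also gives a crude Monte Carlo handle on the law of the limit \(W\) introduced below.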
Introduce the stopping time associated with the _primary population_ as \[\tau^{(n)}_{t}:=\inf\left\{u\in\mathbb{R}^{+}:Z^{(n)}_{(0)}(u)\geq n^{t}\right\}.\]

**Definition 4.2** (Limiting birth and death process for the primary population).: _Let \((Z_{(0)}(t))_{t\in\mathbb{R}^{+}}\) be the birth-death branching process with rates \(\alpha(0)\) and \(\beta(0)\) respectively, constructed in the following way:_
\[Z_{(0)}(t)=1+\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbb{1}_{\left\{\theta\leq\alpha(0)Z_{(0)}(s^{-})\right\}}Q^{b}_{(0)}(ds,d\theta)-\int_{0}^{t}\int_{\mathbb{R}^{+}}\mathbb{1}_{\left\{\theta\leq\beta(0)Z_{(0)}(s^{-})\right\}}Q^{d}_{(0)}(ds,d\theta).\]
_Notice that with such a construction, the monotone coupling_ \[\forall t\geq 0,\quad Z_{(0)}^{(n)}(t)\leq Z_{(0)}(t)\quad a.s.\] _immediately follows. Introduce the almost sure limit of the positive martingale \(\big{(}e^{-\lambda(0)t}Z_{(0)}(t)\big{)}_{t\in\mathbb{R}^{+}}\) as_ \[W:=\lim_{t\to\infty}e^{-\lambda(0)t}Z_{(0)}(t), \tag{81}\] _whose law is_ \[W\stackrel{{ law}}{{=}}\text{Ber}\left(\frac{\lambda(0)}{\alpha(0)}\right)\otimes\text{Exp}\left(\frac{\lambda(0)}{\alpha(0)}\right),\] _see [12] Section 1.1, or [11] Theorem 1._

**Lemma 4.1**.: _There exist \(C(\alpha(0),\lambda(0))>0\) and \(N\in\mathbb{N}\) such that for all \(\varepsilon>0\) and \(n\geq N\)_ \[\mathbb{P}\left(\sup_{t\in\mathbb{R}^{+}}\left|e^{-\lambda(0)t}Z_{(0)}(t)-e^{-\lambda^{(n)}(0)t}Z_{(0)}^{(n)}(t)\right|\geq\varepsilon\right)\leq\frac{C(\alpha(0),\lambda(0))}{\varepsilon^{2}}\overline{\mu}^{(n)}(0)\underset{n\to\infty}{\longrightarrow}0.\]

Proof.: Adapting the proof of Lemma 3.1 with \(\mu_{0}^{(n)}\) replaced by \(\overline{\mu}^{(n)}(0)\) gives the result.

**Lemma 4.2**.: _For all \(\varepsilon>0\), \((T_{1},T_{2})\in(\mathbb{R}^{+})^{2}\) and \(\varphi_{n}\) such that \(\log(n)=o(\varphi_{n})\) and \(\varphi_{n}=o\left(n^{\min\limits_{v\in N(0)}\ell(0,v)}\right)\), we have_ \[\mathbb{P}\left(\sup_{t\in\big{[}T_{1},T_{2}\frac{\varphi_{n}}{\log(n)}\big{]}}\left|\tau_{t}^{(n)}-\left(\mathfrak{t}_{t}^{(n)}-\frac{\log(W)}{\lambda(0)}\right)\right|\geq\varepsilon\bigg{|}W>0\right)\underset{n\to\infty}{\longrightarrow}0.\]

Proof.: Following the proof of Lemma 3.2 with \(Z_{0}^{(n)}\) and \(\eta_{t}^{(n)}\) replaced by \(Z_{(0)}^{(n)}\) and \(\tau_{t}^{(n)}\) gives the result.

In the next definition, we introduce an equivalence relation on \(\Gamma(V)\). Two paths are said to be equivalent if they are the same up to cycles (in particular, cycles formed by backward mutations are taken into account). More precisely, there exists a minimal path such that both paths use all of its edges, but potentially also some other edges forming cycles. The aim of this equivalence relation is to say that, within one equivalence class, only the path with minimal length may contribute to the asymptotics of the mutant sub-population sizes.

**Definition 4.3**.: _(Equivalence relation on \(\Gamma(V)\)) We say that two paths \(\gamma_{1}\) and \(\gamma_{2}\) in \(\Gamma(V)\) are equivalent, denoted by \(\gamma_{1}\sim\gamma_{2}\), if and only if there exist \(\gamma\in\Gamma(V)\) and, for all \(j\in\{1,2\}\), a map_
\[\sigma_{j}:[\![0,|\gamma|-1]\!]\to[\![0,|\gamma_{j}|-1]\!]^{2}\]
\[i\mapsto(\underline{\sigma}_{j}(i),\overline{\sigma}_{j}(i)),\]
_satisfying:_ 1. \(\forall j\in\{1,2\},\underline{\sigma}_{j}(0)=0\)_, and_ \(\overline{\sigma}_{j}(|\gamma|-1)=|\gamma_{j}|-1\)_,_ 2.
\(\forall i\in[\![0,|\gamma|-1]\!],\forall j\in\{1,2\},\ \underline{\sigma}_{j}(i)\leq\overline{\sigma}_{j}(i)\) _and_ \(\overline{\sigma}_{j}(i)+1=\underline{\sigma}_{j}(i+1)\)_,_ 3. \(\forall i\in[\![0,|\gamma|-1]\!],\forall j\in\{1,2\},\gamma(i)=\gamma_{j}(\underline{\sigma}_{j}(i))=\gamma_{j}(\overline{\sigma}_{j}(i))\)_._ _Because the graph is finite, there are only finitely many equivalence classes. For every path \(\gamma\in\Gamma(V)\), denote by \([\gamma]\) its equivalence class. For each equivalence class there is one natural representing candidate, namely the path with minimal length, which in the following we denote by \(\widetilde{\gamma}\). For all \(v\in V\), denote by \(C(v)\) the set of representing candidates of paths in \(P(v)\). Notice that \(|C(v)|<\infty\). An illustration of this definition on an example is given in Figure 6._

Now we have all the preliminary results and definitions to prove Theorem 2.1.

Proof of Theorem 2.1.: We show Equation (11); the proof of Equation (12) is similar and is left to the reader. Let \(\widetilde{\gamma}\) be a representing candidate of an equivalence class. Our first step is to prove that for all \(\varepsilon>0\) \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\left|\sum_{\gamma\in[\widetilde{\gamma}]}\frac{Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{\widetilde{\gamma}}^{(n)}(t,s)}-Ww_{\widetilde{\gamma}}(t)\right|\geq\varepsilon\right)\underset{n\to\infty}{\longrightarrow}0, \tag{82}\] using the results of Section 3, where for all \(\gamma\in\Gamma(V)\)
\[d_{\gamma}^{(n)}(t,s):=\mathbbm{1}_{\left\{t\in[0,t(\gamma)-\gamma_{n}^{-1})\right\}}+\mathbbm{1}_{\left\{t\in[t(\gamma)-\gamma_{n}^{-1},t(\gamma))\right\}}\psi_{n}\log^{\theta(\gamma)-1}(n)\]
\[+\mathbbm{1}_{\left\{t\in[t(\gamma),\infty)\right\}}n^{t-t(\gamma)}\log^{\theta(\gamma)}(n)e^{\lambda(0)s}.\]
Notice that
\[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\left|\sum_{\gamma\in[\widetilde{\gamma}]}\frac{Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{\widetilde{\gamma}}^{(n)}(t,s)}-Ww_{\widetilde{\gamma}}(t)\right|\geq\varepsilon\right)\]
\[\leq\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\left|\frac{Z_{\widetilde{\gamma}}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{\widetilde{\gamma}}^{(n)}(t,s)}-Ww_{\widetilde{\gamma}}(t)\right|\geq\varepsilon\right)\]
\[+\sum_{\gamma\in[\widetilde{\gamma}]\setminus\{\widetilde{\gamma}\}:t(\gamma)\leq T}\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\left|\frac{Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{\widetilde{\gamma}}^{(n)}(t,s)}\right|\geq\varepsilon\right)\]
\[+\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\sum_{\gamma\in[\widetilde{\gamma}]\setminus\{\widetilde{\gamma}\}:t(\gamma)>T}\left|\frac{Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{\widetilde{\gamma}}^{(n)}(t,s)}\right|\geq\varepsilon\right).\]
The first term of the r.h.s. converges to \(0\) by applying Equation (11) to the mono-directional graph given by the path \(\widetilde{\gamma}\), as proven in Section 3. More precisely, the mono-directional graph given by \(\widetilde{\gamma}\) is the graph composed of the successive sub-populations \(\left(Z_{(0)}^{(n)},Z_{(0,\widetilde{\gamma}(1))}^{(n)},\cdots,Z_{\widetilde{\gamma}}^{(n)}\right)\). The second term of the r.h.s.
converges to \(0\) as well since:

* the sum is over a finite set, because we are considering a finite graph with strictly positive labels on the edges,
* and for any \(\gamma\in[\widetilde{\gamma}]\backslash\{\widetilde{\gamma}\}\) we have \(t(\gamma)>t(\widetilde{\gamma})\), so Equation (11) applies to the mono-directional graph given by \(\gamma\).

The last term of the r.h.s. converges to \(0\) because \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\sum_{\gamma\in[\widetilde{\gamma}]\backslash\{\widetilde{\gamma}\}:t(\gamma)>T}Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)=0\right)\underset{n\to\infty}{\longrightarrow}1. \tag{83}\] Indeed, for \(\gamma\in[\widetilde{\gamma}]\backslash\{\widetilde{\gamma}\}\) such that \(t(\gamma)>T\) we have \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)=0\right)\underset{n\to\infty}{\longrightarrow}1, \tag{84}\] adapting Lemma 3.4 to the current situation. It remains to deal with the sum over the set \(A_{\widetilde{\gamma}}(T):=\{\gamma\in[\widetilde{\gamma}]\backslash\{\widetilde{\gamma}\}:t(\gamma)>T\}\). The easiest situation is when \(|A_{\widetilde{\gamma}}(T)|<\infty\), since the result then follows directly. Now consider the case \(|A_{\widetilde{\gamma}}(T)|=\infty\). In this case, even if Equation (84) holds for all \(\gamma\in A_{\widetilde{\gamma}}(T)\), it does not necessarily mean that Equation (83) is automatically satisfied. The result follows if one shows that there exists a finite subset \(B_{\widetilde{\gamma}}(T)\subset A_{\widetilde{\gamma}}(T)\) such that \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\sum_{\gamma\in A_{\widetilde{\gamma}}(T)\backslash B_{\widetilde{\gamma}}(T)}Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)=0\,\middle|\,\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\sum_{\gamma\in B_{\widetilde{\gamma}}(T)}Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)=0\right)=1. \tag{85}\] We now show that such a set \(B_{\widetilde{\gamma}}(T)\) exists. \([\widetilde{\gamma}]\) is composed of the paths where, for each vertex \(v\) visited by \(\widetilde{\gamma}\), there may be a cycle going back to \(v\). Because only a finite number of vertices are visited by \(\widetilde{\gamma}\), it follows that the number of paths \(\gamma\in A_{\widetilde{\gamma}}(T)\) for which we have to control the event that they do not have any cell up to time \(\mathfrak{t}_{T}^{(n)}+M\) is actually finite; this set of paths is denoted by \(B_{\widetilde{\gamma}}(T)\). Indeed, for all paths \(\gamma\in A_{\widetilde{\gamma}}(T)\backslash B_{\widetilde{\gamma}}(T)\) there exists a path \(\gamma_{1}\in B_{\widetilde{\gamma}}(T)\) such that cells from the sub-population \(Z_{\gamma}^{(n)}\) are cells that result from (potentially many) mutations of cells of the sub-population \(Z_{\gamma_{1}}^{(n)}\). Hence, if one controls that with high probability no cell of the sub-populations indexed by \(\gamma\in B_{\widetilde{\gamma}}(T)\) is generated up to time \(\mathfrak{t}_{T}^{(n)}+M\), which is indeed possible because \(B_{\widetilde{\gamma}}(T)\) is finite, then the mechanistic construction of the process implies that under this event, almost surely there is no cell of the sub-populations indexed by \(\gamma\in A_{\widetilde{\gamma}}(T)\backslash B_{\widetilde{\gamma}}(T)\), which is exactly Equation (85).
Notice that for \(\gamma\in A(v)\), where \(A(v)\) is defined in Definition 2.3, we have \(d_{\gamma}^{(n)}(t,s)=d_{v}^{(n)}(t,s)\), and moreover \(\gamma\) is the representing candidate \(\widetilde{\gamma}\) of its equivalence class. Then the proof is concluded thanks to
\[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\left|\frac{Z_{v}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{v}^{(n)}(t,s)}-Ww_{v}(t)\right|\geq\varepsilon\right)\]
\[\leq\sum_{\widetilde{\gamma}\in C(v):\widetilde{\gamma}\in A(v)}\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\left|\sum_{\gamma\in[\widetilde{\gamma}]}\frac{Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{\widetilde{\gamma}}^{(n)}(t,s)}-Ww_{\widetilde{\gamma}}(t)\right|\geq\varepsilon\right)\]
\[+\sum_{\widetilde{\gamma}\in C(v):\widetilde{\gamma}\notin A(v)}\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[0,T]}\left|\sum_{\gamma\in[\widetilde{\gamma}]}\frac{Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}+s\right)}{d_{v}^{(n)}(t,s)}\right|\geq\varepsilon\right).\]
Using Equation (82), and because both sums are finite, we obtain Equation (11). Now, following the proof of Proposition 3.2 with \(\eta_{t}^{(n)}\) replaced by \(\tau_{t}^{(n)}\) and with \(W\) defined as in (81) instead of (19), we obtain that for all \(0<T_{1}<T_{2}\) and all \(\varepsilon>0\) \[\mathbb{P}\left(\sup_{t\in[T_{1},T_{2}]}\left(\tau_{t}^{(n)}-\sigma_{t}^{(n)}\right)\leq\varepsilon\left|W>0\right)\underset{n\to\infty}{\longrightarrow}1.\] Then, adapting the different proofs from Subsection 3.3, we obtain that for all \(0<T_{1}<T_{2}\), \(M>0\) and \(\varepsilon>0\):

(i) If \(\lambda_{v}=\lambda_{0}\), \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{v}^{(n)}\left(\rho_{t}^{(n)}+s\right)}{d_{v}^{(n)}(t,s)}-\mathbb{1}_{\{W>0\}}w_{v}(t)\right|\geq\varepsilon\right)\underset{n\to\infty}{\longrightarrow}0.\]

(ii) If \(\lambda_{v}<\lambda_{0}\), \[\mathbb{P}\left(\sup_{s\in[-M,M]}\sup_{t\in[T_{1},T_{2}]}\left|\frac{Z_{v}^{(n)}\left(\rho_{t(v)+t}^{(n)}+s\right)}{n^{t}\log^{\theta(v)}(n)e^{\lambda_{0}s}}-\mathbb{1}_{\{W>0\}}w_{v}(t(v)+t)\right|\geq\varepsilon\right)\underset{n\to\infty}{\longrightarrow}0.\]

## 5 Convergence for the stochastic exponents (Theorem 2.2)

This last section is devoted to the proof of Theorem 2.2. Recall that the sequence \(\left(Z_{v}^{(n)},v\in V\right)_{n\in\mathbb{N}}\) is mathematically constructed in Section 4 (see (78), (79) and (80)). **Step 1)** We start by proving Theorem 2.2 conditioned on \(\{W=0\}\). Let \(0<T_{1}<T_{2}\); we are going to show that \[\mathbb{P}\left(\exists v\in V,\sup_{t\in[T_{1},T_{2}]}X_{v}^{(n)}(t)>0\Big{|}W=0\right)\underset{n\to\infty}{\longrightarrow}0. \tag{86}\] Introduce \(\tau_{(0)}:=\inf\left\{s\in\mathbb{R}^{+}:Z_{(0)}(s)=0\right\}\) and \(\tau_{(0)}^{(n)}:=\inf\left\{s\in\mathbb{R}^{+}:Z_{(0)}^{(n)}(s)=0\right\}\). Conditioned on \(\{W=0\}\) we have \(\tau_{(0)}^{(n)}\leq\tau_{(0)}<\infty\), and \(Z_{(0)}(t)\) tends to \(0\) as \(t\) goes to infinity, almost surely. In particular, one gets that for any function \(f(n)\to\infty\) and any \(\varepsilon>0\), \(\mathbb{P}\left(Z_{(0)}(f(n))\geq\varepsilon\right)\to 0\). Because \(Z_{(0)}\) is an integer-valued process, taking \(\varepsilon\) strictly smaller than \(1\) shows that \(\mathbb{P}\left(Z_{(0)}(f(n))=0\right)\to 1\).
In particular, it gives that \[\mathbb{P}\left(\tau_{(0)}^{(n)}\geq T_{1}\frac{\log(n)}{\lambda(0)}\Big{|}W=0\right)\underset{n\to\infty}{\longrightarrow}0.\] Moreover, we have
\[\mathbb{P}\left(\exists v\in V,\sup_{t\in[T_{1},T_{2}]}X_{v}^{(n)}(t)>0\Big{|}W=0\right)\leq\mathbb{P}\left(\tau_{(0)}^{(n)}>T_{1}\frac{\log(n)}{\lambda(0)}\Big{|}W=0\right)\]
\[+\mathbb{P}\left(\bigcup_{v\in N(0)}\left\{\sup_{t\geq 0}K_{(0),v}^{(n)}(t)>0\right\}\cup\bigcup_{(v,u)\in N(0)\times N(0)}\left\{\sup_{t\geq 0}H_{(0),\{v,u\}}^{(n)}(t)>0\right\}\Big{|}W=0\right). \tag{87}\]
Using a similar approach to that in Step 1 of the proof of Proposition 3.1, we prove that (87) converges to \(0\), which gives (86). **Step 2)** Now we are going to prove the result of Theorem 2.2 conditioned on \(\{W>0\}\). We begin with the initial phase, using the following lemma.

**Lemma 5.1**.: _By construction, \(\Delta_{1}=\min_{v\in N(0)}(\ell(0,v))>0\). Let \(0<\varepsilon<\frac{\Delta_{1}}{2}\). We have for all \(\widetilde{\varepsilon}>0\)_ \[\mathbb{P}\left(\left\{\sup_{v\in V\setminus\{0\}}\sup_{t\in[\varepsilon,\Delta_{1}-\varepsilon]}X_{v}^{(n)}(t)=0\right\}\cap\left\{\sup_{t\in[\varepsilon,\Delta_{1}-\varepsilon]}\Big{|}X_{0}^{(n)}(t)-\lambda_{0}t\Big{|}\leq\widetilde{\varepsilon}\right\}\right)\underset{n\to\infty}{\longrightarrow}1.\]

Proof.: Adapting the proof of Lemma 3.4 to the present situation, one can prove that \[\mathbb{P}\left(\sup_{v\in V\setminus\{0\}}\sup_{t\in[\varepsilon,\Delta_{1}-\varepsilon]}Z_{v}^{(n)}\left(\mathfrak{t}_{t}^{(n)}\right)=0\right)\underset{n\to\infty}{\longrightarrow}1,\] from which it follows, by definition of \(X_{v}^{(n)}\), that \[\mathbb{P}\left(\sup_{v\in V\setminus\{0\}}\sup_{t\in[\varepsilon,\Delta_{1}-\varepsilon]}X_{v}^{(n)}\left(t\right)=0\right)\underset{n\to\infty}{\longrightarrow}1.\] We also have, by definition of \(W\) as the almost sure limit of the positive martingale \(\left(e^{-\lambda(0)t}Z_{(0)}(t)\right)_{t\in\mathbb{R}^{+}}\) and using Lemma 4.1, that for all \(\widetilde{\varepsilon}>0\), \(\mathbb{P}\left(A_{\widetilde{\varepsilon}}^{(n)}\right)\underset{n\to\infty}{\longrightarrow}1\), where \[A_{\widetilde{\varepsilon}}^{(n)}:=\left\{\sup_{t\in\left[\mathfrak{t}_{\varepsilon}^{(n)},\mathfrak{t}_{\Delta_{1}-\varepsilon}^{(n)}\right]}\left|e^{-\lambda^{(n)}(0)t}Z_{0}^{(n)}(t)-W\right|\leq\widetilde{\varepsilon}\right\}.\] Indeed, as mentioned above, with high probability there is no mutational event from the lineage of cells \(Z_{(0)}^{(n)}\), meaning that with high probability \(Z_{0}^{(n)}(t)=Z_{(0)}^{(n)}(t)\) for all \(t\in\left[\mathfrak{t}_{\varepsilon}^{(n)},\mathfrak{t}_{\Delta_{1}-\varepsilon}^{(n)}\right].\) Let \(\delta>0\) and \(\widetilde{\varepsilon}<\delta\). Conditioned on \(\{W>\delta\}\), one obtains that for all \(\omega\in A_{\widetilde{\varepsilon}}^{(n)}\) and all \(t\in[\varepsilon,\Delta_{1}-\varepsilon]\) \[(W-\widetilde{\varepsilon})e^{\lambda^{(n)}(0)\mathfrak{t}_{t}^{(n)}}\leq Z_{0}^{(n)}\left(\mathfrak{t}_{t}^{(n)}\right)\leq(W+\widetilde{\varepsilon})e^{\lambda^{(n)}(0)\mathfrak{t}_{t}^{(n)}},\] which implies, because there exists \(N\in\mathbb{N}\), independent of the value of \(W\), such that \((W-\widetilde{\varepsilon})e^{\lambda^{(n)}(0)\mathfrak{t}_{t}^{(n)}}\geq 1\) for all \(n\geq N\), that \[\frac{\log(W-\widetilde{\varepsilon})\lambda(0)}{\log(n)}+\lambda^{(n)}(0)t\leq X_{0}^{(n)}(t)\leq\frac{\log(W+\widetilde{\varepsilon})\lambda(0)}{\log(n)}+\lambda^{(n)}(0)t.\] Then, because \(\lambda^{(n)}(0)\underset{n\to\infty}{\longrightarrow}\lambda(0)\), it follows that
\[\mathbb{P}\left(\left\{\sup_{t\in[\varepsilon,\Delta_{1}-\varepsilon]}\left|X_{0}^{(n)}(t)-\lambda(0)t\right|\leq\widetilde{\varepsilon}\right\}\cap\left\{\sup_{v\in V\setminus\{0\}}\sup_{t\in[\varepsilon,\Delta_{1}-\varepsilon]}X_{v}^{(n)}(t)=0\right\}\left|W>\delta\right)\underset{n\to\infty}{\longrightarrow}1.\]
The proof is completed by applying a reasoning similar to Step 2 of the proof of Lemma 3.2.

Next, we express how the asymptotic behavior of \(\left(X_{v}^{(n)}\right)_{v\in V}\) is controlled between the times \(\Delta_{j}\) for \(j\in\{1,\cdots,k^{*}\}\).

**Lemma 5.2**.: _Let \(j\in\{1,\cdots,k^{*}-1\}\) and \(0<\varepsilon<\frac{\Delta_{j+1}-\Delta_{j}}{2}\). Assume that \(\left(X_{v}^{(n)}(\Delta_{j}+\varepsilon)\right)_{v\in V}\) converges in probability to \((x_{v}(\Delta_{j}+\varepsilon))_{v\in V}\). Then we have for all \(\widetilde{\varepsilon}>0\)_ \[\mathbb{P}\left(\sup_{v\in V}\sup_{t\in[\Delta_{j}+\varepsilon,\Delta_{j+1}-\varepsilon]}\left|X_{v}^{(n)}(t)-x_{v}(t)\right|\leq\widetilde{\varepsilon}\right)\underset{n\to\infty}{\longrightarrow}1.\]

Proof.: The proof of this lemma is obtained by adapting that of Proposition 2 of [13], because the behaviors of the models are similar between two consecutive changes of slope. Indeed, in our case the branching property is satisfied: there is no interaction between individuals except mutational exchanges. In [13], although there are interactions between individuals, the model is well approximated by branching populations between two changes of slope (corresponding either to a newborn trait or to a change of the dominant trait), as in the present work. In their case, some assumptions on how the functions \((x_{v})_{v\in V}\) behave are added to prevent two different traits from becoming dominant simultaneously. This is due to technicalities arising when coupling the processes with a three-species Lotka-Volterra system. But in our case everything is branching, so this situation with a potential coupling with a competitive system does not arise, and our lemma is free from such assumptions.

Finally, we show how the asymptotic behavior of \(\left(X_{v}^{(n)}\right)_{v\in V}\) is controlled around the times \(\Delta_{j}\) for \(j\in\{1,\cdots,k^{*}\}\). By adapting Proposition 4 of [13] one shows the following lemma.

**Lemma 5.3**.: _Let \(j\in\{1,\cdots,k^{*}\}\) and \(0<\varepsilon<\frac{\Delta_{j}-\Delta_{j-1}}{2}\), and assume that \(\left(X_{v}^{(n)}\left(\Delta_{j}-\varepsilon\right)\right)_{v\in V}\) converges in probability to \(\left(x_{v}(\Delta_{j}-\varepsilon)\right)_{v\in V}\). Then there exists \(0<\varepsilon_{1}<\frac{\Delta_{j+1}-\Delta_{j}}{2}\) such that for all \(\widetilde{\varepsilon}>0\)_ \[\mathbb{P}\left(\sup_{v\in V}\sup_{t\in[\Delta_{j}-\varepsilon,\Delta_{j}+\varepsilon_{1}]}\left|X_{v}^{(n)}(t)-x_{v}(t)\right|\leq\widetilde{\varepsilon}\right)\underset{n\to\infty}{\longrightarrow}1,\] _where by convention one defines \(\Delta_{k^{*}+1}=\infty\)._

Proof.: The proof of this lemma is highly inspired by that of Proposition 4 of [13]. That proposition deals with the birth of a new trait, and the techniques of its proof remain valid if not only one but several new traits appear at the same time. However, in our case it may also happen that an already born trait \(v\) increases its slope due to a growth now driven by another trait \(u\neq v\). This was not studied in [13] because it never happens in their model.
But such an event occurs exactly when one of the sub-populations of trait \(v\) whose growth is driven by another trait \(u\neq v\) becomes dominant inside the total population of trait \(v\). In particular, it means that in order to deal with this kind of event, one has to track the births of all the sub-populations indexed by the paths on the graph from the origin. Hence the way the sequence \(\left(Z_{v}^{(n)},v\in V\right)_{n\in\mathbb{N}}\) is constructed, using sums of sub-populations over the paths on the graph from the origin, allows us to deal with this phenomenon by merely adapting Proposition 4 of [13]. Indeed, instead of taking the set of traits to be \(V\), we take the set of traits to be the set of all paths on the graph from the origin, \(\Gamma(V)\), and we apply the same reasoning, using Lemma 5.1, Lemma 5.2 and the adaptation of Proposition 4 of [13] mentioned above, to get the result for this new set of traits. With this in mind, we deduce that there exist \(\widetilde{\Delta}_{0}=0<\widetilde{\Delta}_{1}<\cdots<\widetilde{\Delta}_{\widetilde{k}^{*}}\) such that the convergence in probability is obtained for the populations \(\left(X_{\gamma}^{(n)}\right)_{\gamma\in\Gamma(V)}\), where \[X_{\gamma}^{(n)}:=\frac{\log_{+}\left(Z_{\gamma}^{(n)}\left(\mathfrak{t}_{t}^{(n)}\right)\right)}{\log(n)/\lambda(0)},\] to some deterministic functions \(\left(x_{\gamma}\right)_{\gamma\in\Gamma(V)}\). In particular, the set of traits is potentially infinite if there are cycles, meaning that \(\widetilde{k}^{*}\) may be infinite. But we obtain results on a finite time interval \([T_{1},T_{2}]\), so only a finite number of traits will have at least one cell in this time interval, meaning that the situation is similar to that of a finite trait space. Then, if in the original process a trait \(v\) increases its slope at time \(\Delta_{j}\), it means that there exists a time \(\widetilde{\Delta}_{\widetilde{j}}<\Delta_{j}\) such that the sub-population of trait \(v\) becoming dominant in the total population of trait \(v\) at time \(\Delta_{j}\) was born at time \(\widetilde{\Delta}_{\widetilde{j}}\) and has lived sufficiently long \(\left(\Delta_{j}-\widetilde{\Delta}_{\widetilde{j}}\right)\) to become dominant. This ends the proof of this lemma.

Acknowledgements. The author would like to thank Helene Leman for inspiring and helpful discussions and feedback.
2302.14756
Compact expansion of a repulsive suspension
Short-range repulsion governs the dynamics of matter from atoms to animals. Using theory, simulations, and experiments, we find that an ensemble of repulsive particles spreads compactly with a sharp boundary, in contrast to the diffusive spreading of Brownian particles. Starting from the pair interactions, at high densities, the many-body dynamics follow non-linear diffusion with a self-similar expansion, growing as $t^{1/4}$; at longer times, thermal motion dominates with the classic $t^{1/2}$ expansion. A logarithmic growth controlled by nearest-neighbor interactions connects the two self-similar regimes.
Matan Yah Ben Zion, Naomi Oppenheimer
2023-02-28T16:56:55Z
http://arxiv.org/abs/2302.14756v3
# Compact expansion of a repulsive suspension

###### Abstract

We find that a suspension of particles that mutually repel by short-ranged interactions spreads compactly. Unlike the diffusive boundary of a spreading drop of Brownian particles, here, the density is strictly zero beyond a cutoff distance. We identify that the drop expands in a self-similar fashion in the dense limit. Starting from the pair potential, we show that in the continuum limit, the suspension's expansion follows a nonlinear diffusion equation. At early times (dense limit), the density profile is parabolic, and the area of the ensemble grows as the square root of time. At later times (sparse limit), the dynamics slow down and transition to a logarithmic growth. We verify the approximations of the analytical predictions in the dense regime using exact numerical integration. We examine the dilute regime by monitoring the expansion of a charge-stabilized colloidal suspension and find the logarithmic expansion is consistent with the experiment. Using molecular dynamics simulations of thousands of particles, we see the crossover between the two regimes as the dynamics transition from self-similar to logarithmic.

Colloidal suspensions are everywhere -- from the ink we (used to) write with, the soy milk we drink, and the drugs we consume, to the very structure of most living systems. Life is built of microscopic particles suspended in a fluid. More often than not, the particles are charged and the electrostatic interactions are screened by the presence of ions in solution [1; 2]. Such is the case for charged proteins in a membrane [3; 4; 5; 6], vesicles in suspension [7], droplets in microfluidic devices [8], water purification, and plasma physics [9; 10]. In other cases, particles are not strictly charged, yet are repelled by short-range forces, e.g. globular polymers, colloidal particles coated by a brush shell, and sub-atomic particles interacting in the nucleus [11; 12; 13]. In what follows, we consider the expansion of a suspension of particles with repulsive, short-range interactions that dominate over thermal diffusion. We find that when the interaction has a typical decay length, the suspension expands compactly -- the concentration vanishes identically outside a core of finite size. Compact profiles are found in diverse physical systems including gas diffusion through porous media [14; 15], thin films with a free surface [16; 17], and even in population dynamics [18]. A family of compact solitons (called compactons) was found as solutions to a generalization of the Korteweg-de Vries (KdV) equation [19]. These systems were modeled using a continuum, hydrodynamic description, characterized by phenomenological parameters. Here we show analytically, starting from a microscopic basis, that a non-linear diffusion equation with a compact solution is generic for particles with short-range repulsion. We find under what conditions the continuum description breaks down, leading to a crossover in the dynamics. As we outline below, at high densities, the time evolution and distribution of the density field \(n(\mathbf{r},t)\) are determined by a non-linear diffusion equation, stemming from particle interactions, leading to a concentration-dependent diffusion of the form \[\frac{\partial n}{\partial t}=\nabla\cdot\left[D(n)\nabla n\right]=D(n)\nabla^{2}n+\alpha|\nabla n|^{2}, \tag{1}\] with \(D(n)=\alpha n\), and we derive \(\alpha\) from the microscopic pair potential.
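To make Eq. 1 concrete, the following is a minimal numerical sketch, not the integration scheme used in this work: it evolves the radially symmetric form of Eq. 1 with \(D(n)=\alpha n\) (Eq. 9 below), \(\partial_{t}n=(\alpha/r)\,\partial_{r}(rn\,\partial_{r}n)\), with an explicit finite-volume discretization. The grid size, time step, and initial compact profile are illustrative assumptions.

```python
import numpy as np

# Illustrative explicit finite-volume integration of Eq. 1 with D(n) = alpha*n
# in radial symmetry (Eq. 9 below): dn/dt = (alpha/r) d/dr ( r n dn/dr ).
# Grid, time step, and initial profile are assumptions for this sketch only.

alpha = 1.0
nr, rmax = 200, 15.0
dr = rmax / nr
r = (np.arange(nr) + 0.5) * dr      # cell centers (avoids r = 0)
r_face = np.arange(1, nr) * dr      # interior cell faces

n = np.maximum(1.0 - (r / 2.0) ** 2, 0.0)   # compact initial drop
dt = 0.1 * dr**2 / (4.0 * alpha * n.max())  # explicit stability margin

t, history = 0.0, []
while t < 20.0:
    n_face = 0.5 * (n[1:] + n[:-1])
    flux = alpha * r_face * n_face * (n[1:] - n[:-1]) / dr   # alpha * r n dn/dr
    dndt = np.zeros(nr)
    dndt[0] = flux[0] / (r[0] * dr)            # no flux through r = 0
    dndt[1:-1] = (flux[1:] - flux[:-1]) / (r[1:-1] * dr)
    dndt[-1] = -flux[-1] / (r[-1] * dr)        # no flux at the outer wall
    n += dt * dndt
    t += dt
    history.append((t, r[n > 1e-8].max()))     # position of the sharp front

# Once self-similarity sets in, the front radius grows roughly as t^(1/4):
for t_i, R_i in history[:: max(1, len(history) // 5)]:
    print(f"t = {t_i:6.2f}  R = {R_i:5.2f}  R/t^(1/4) = {R_i / t_i**0.25:5.2f}")
```

Because the flux vanishes where \(n=0\), this scheme preserves the compact support of the drop, which is exactly the qualitative feature discussed next.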
Since the effective diffusion coefficient \(D(n)\) is proportional to the density, particles at the drop's boundary have a lower diffusion constant than particles at the center. The solutions are inherently different from regular diffusion. For example, for diffusion, there is a Gaussian spread of the density. Here, the density profile is parabolic and is strictly zero beyond a maximal radius which grows with time. Unlike classic diffusion, here, the area of the drop does not grow linearly with time but as the square root of time.

Figure 1: (a) Schematics of the interaction of repulsive particles with short-range repulsion in the dense region (top), where each particle interacts with many others, and in the sparse region (bottom), where a particle interacts only with its nearest neighbors. (b) MD simulation of 10,000 particles starting from a dense random distribution. At early times we see the \(R\sim t^{1/4}\) scaling predicted by the self-similar solution. At long times there is a transition to the sparse limit with a logarithmic dependence on time.

At lower densities, when the distance between particles is larger than the characteristic repulsive distance, particle interactions are dominated by nearest neighbors. We show that in this limit, the suspension spreads logarithmically with time. The expansion of the repulsive suspension can be described by the time evolution of its radius. For exponential or screened electrostatic interactions, the asymptotic limits of the time evolution are given by \[R(t)\propto\begin{cases}t^{1/4}&\text{if }n\gg 1\\ \log(t)&\text{if }n\ll 1\end{cases}, \tag{2}\] where \(n=\rho/\rho_{c}\) is the non-dimensionalized density, \(\rho_{c}=1/(\pi l^{2})\), with \(l\) being the typical decay length of the short-range repulsion. The transition between the two types of expansion occurs when the typical distance between particles, \(L\), is equal to the decay length, \(l\). Our analysis addresses the athermal limit, where Brownian motion is negligible. This is valid when \(D=\alpha n\gg D_{0}\), where \(D_{0}\) is the self-diffusion coefficient originating from thermal fluctuations. Similarly, this is true when the distance between particles satisfies \(L\ll\sqrt{2\pi F_{0}l^{3}/k_{B}T}\), where \(F_{0}\) is the strength of the repulsive force, \(k_{B}\) is the Boltzmann constant, and \(T\) is temperature. Note that in the over-damped dynamics discussed here, this limit is independent of viscosity, which plays a dual role in both the deterministic and stochastic forces. Equations 1 and 2 are the main results of this work, which is structured as follows: first, we derive the two regimes analytically. Next, we compare analytic results with simulations and experiments. We show that at high densities (\(n\gg 1\)), both numerical integration of the mass conservation equation and Molecular Dynamics (MD) simulations using the pair interactions are quantitatively consistent with the approximate analytical solution; then we proceed to show that the low density limit (\(n\leq 1\)) indeed follows Eq. 2, as observed in both experiments on a concentrated charged colloidal suspension and MD simulations.

## III Governing equations

We examine particles in the overdamped limit, where inertia is negligible, and the force \(\mathbf{F}\) and velocity \(\mathbf{v}\) are proportional through a constant mobility, \(\mathbf{v}=\mu\mathbf{F}\). Individual particles follow deterministic dynamics (no Brownian motion) and interact through a pair interaction, \(F(r)\).
The interaction can be due to any short-ranged, isotropic, repulsive force -- from the sub-atomic Yukawa potential, through Pauli repulsion at the inter-atomic scale, screened Coulomb repulsion in an ionic solution or plasma, or even soft-core entropic repulsion in colloidal suspensions [10; 11; 20; 21; 22]. Our results are generic, but for simplicity, we consider exponential interactions in the main text, \(F(r)\sim e^{-r/l}\). In the Supplementary Information (SI), we show results for screened electrostatics.

To build intuition, let us start by examining a single particle, then two, and then many. For a single Brownian particle, the root-mean-square displacement grows as \(\sqrt{t}\). By contrast, a single repulsive, athermal particle is stationary. Unlike the separation between two Brownian particles, which grows as \(\sqrt{t}\), two athermal, strictly repulsive particles separate as \(\sim\log(t)\), since \(dr/dt=v_{0}e^{-r/l}\), where \(v_{0}\) is the magnitude of the velocity, given by \(v_{0}=\mu F_{0}\). In the ensemble limit, the radius of a drop of \(N\) diffusing particles grows as \(\sqrt{t}\). Repulsive particles move due to the sum of the interactions with all their neighbors. That is, the velocity of particle \(i\) is given by

\[\mathbf{v}_{i}(\mathbf{r}_{i})=\sum_{j\neq i}v_{0}e^{-\frac{|\mathbf{r}_{i}-\mathbf{r}_{j}|}{l}}\frac{\mathbf{r}_{i}-\mathbf{r}_{j}}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}, \tag{3}\]

where the sign convention is such that particle \(i\) is pushed away from its neighbors. Molecular dynamics simulations of Eq. 3 (see Fig. 1) show that an ensemble of \(N\) _repulsive particles_ grows as \(R\sim t^{1/4}\) in the dense limit (\(n\gg 1\)) and as \(\log(t)\) in the dilute limit (\(n\leq 1\)) -- that is, (a) subdiffusively, and (b) similarly to the spread of two repulsive particles in the dilute limit, but very differently in the dense limit.

Figure 2: Molecular dynamics simulations of repulsive particles at high density show a self-similar profile, as predicted in the continuum limit. Results from a simulation of 10,000 particles with exponentially repulsive interactions. We start from a high density and track the particles as they spread. a) Snapshots of the simulations at different times (\(t=0\), \(5{,}000\), \(20{,}000\), \(35{,}000\), \(50{,}000\)). b) Radius as a function of time showing the \(t^{1/4}\) scaling. c) Density as a function of radius for different times. Color goes from bright to dark as time progresses. d) Re-scaled density \(\sqrt{t}\,n\) as a function of \(r^{2}/\sqrt{t}\), showing that all the curves collapse to a single one, as predicted by Eq. 12.

When does the two-particle picture transition to the continuum description? We will now derive analytically the crossover between these two limits.
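Before doing so, here is a minimal sketch of the dynamics of Eq. (3), integrated with forward Euler. The particle number, \(v_{0}\), \(l\), time step, and initial configuration below are illustrative assumptions, not the paper's settings, which used 10,000 particles and an adaptive 5th-order Runge-Kutta scheme.

```python
import numpy as np

# Minimal sketch of the overdamped particle dynamics of Eq. (3),
# integrated with forward Euler.  All parameter values are illustrative.
rng = np.random.default_rng(0)
N, v0, l, dt, steps = 400, 1.0, 1.0, 0.05, 2000
pos = rng.uniform(-3.0, 3.0, size=(N, 2))     # dense random initial drop

for _ in range(steps):
    diff = pos[:, None, :] - pos[None, :, :]  # r_i - r_j
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)            # exclude self-interaction
    kernel = v0 * np.exp(-dist / l) / dist    # speed divided by distance
    pos += dt * np.einsum("ij,ijk->ik", kernel, diff)

# effective drop radius: rms distance from the center of mass
R = np.sqrt(np.mean(np.sum((pos - pos.mean(axis=0))**2, axis=1)))
print("rms drop radius:", R)
```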
**Analytic Results in the Dense Limit.** In the dense limit, \(n\gg 1\), we coarse-grain the velocity to derive a diffusion equation. The following procedure is analogous to a Fokker-Planck expansion with a mean-field closure [23; 24; 25; 26]. We start with the mass conservation equation for the number of particles,

\[\frac{\partial n}{\partial t}+\nabla\cdot(n\mathbf{v})=0, \tag{4}\]

where \(n\) is the normalized density of particles such that \(n=\rho/\rho_{c}\), \(\rho(\mathbf{r},t)=\sum_{i}\delta(\mathbf{r}-\mathbf{r}_{i}(t))\), and \(\rho_{c}=1/(\pi l^{2})\) is the critical density. We turn to find the coarse-grained velocity, \(\mathbf{v}(\mathbf{r},t)\). Since the interactions are purely repulsive, the velocity field is of the form \(\mathbf{v}(\mathbf{r})=v(r)\hat{r}\) [27], where \(v(r)\) is given by the short-ranged repulsive force felt from all other particles. In the limit of a continuous density of particles, Eq. 3 becomes

\[\mathbf{v}(\mathbf{X})=v_{0}\rho_{c}\int_{0}^{R}\int_{0}^{2\pi}n(\mathbf{Y})\frac{\mathbf{X}-\mathbf{Y}}{|\mathbf{X}-\mathbf{Y}|}e^{-|\mathbf{X}-\mathbf{Y}|/l}\,d^{2}\mathbf{Y}. \tag{5}\]

Equation 5, combined with the mass conservation equation, Eq. 4, can be solved numerically, as we show later on. To continue analytically, we must turn to approximations. In principle, the integration boundaries depend on the position of the particle \(\mathbf{X}\), but distances \(|\mathbf{X}-\mathbf{Y}|\gg l\) will hardly contribute due to the short-ranged nature of the forces. For particles away from the edge of the suspension, \(|R-X|\gg l\), we can extend the integration boundaries to the entire space. Thus the velocity is approximately

\[\mathbf{v}(\mathbf{X})\approx-\frac{v_{0}}{\pi}\int_{0}^{\infty}\int_{0}^{2\pi}n(\mathbf{X}+l\mathbf{s})\frac{\mathbf{s}}{s}e^{-s}\,d^{2}\mathbf{s}, \tag{6}\]

where we have changed integration variables to a normalized distance \(\mathbf{s}=(\mathbf{Y}-\mathbf{X})/l\). In polar coordinates \(\mathbf{s}=s(\cos\theta,\sin\theta)\), \(\mathbf{X}=r^{\prime}(\cos\phi,\sin\phi)\), with \(\theta\in(0,2\pi)\) and \(s\in(0,\infty)\). We perform a multipole expansion of Eq. 6 by expanding the density in a Taylor series,

\[n(\mathbf{X}+l\mathbf{s})\approx n(\mathbf{X})+l\mathbf{s}\cdot\nabla n(\mathbf{X})+\ldots \tag{7}\]

By symmetry, the first term of the moment expansion in Eq. 7 vanishes after integration in Eq. 6 (odd integrand over an even domain). The remaining leading term in the velocity is the concentration gradient,

\[\mathbf{v}(\mathbf{X})\approx-\frac{lv_{0}}{\pi}\nabla n(\mathbf{X})\cdot\int\mathbf{s}\hat{s}e^{-s}\,d^{2}\mathbf{s}=-\alpha\frac{\partial n}{\partial r}\hat{r}, \tag{8}\]

with \(\alpha=2v_{0}l/\pi\). More generally, \(\alpha=\frac{\rho_{c}}{\mu}\int\mathbf{r}\cdot\mathbf{F}(r/l)\,d^{2}r\), where \(\mathbf{F}(r/l)\) is the short-ranged force. For example, screened electrostatic interactions have the potential \(U=U_{0}l\exp(-r/l)/r\), and the speed has the form \(v=v_{0}(l/r)\exp(-r/l)+v_{0}(l^{2}/r^{2})\exp(-r/l)\), with \(v_{0}=\mu U_{0}/l\). The result is then \(\alpha=3v_{0}l/\pi\), changing the prefactor of the exponential case by \(3/2\). We show simulation results for such a case in the SI.

The full nonlinear diffusion equation (Eq. 1) is found by plugging the velocity approximation (Eq. 8) into the equation for mass conservation (Eq. 4), giving an effective diffusion coefficient that increases linearly with density, \(D=\alpha n(r)\), which takes the following form in polar coordinates:

\[\frac{\partial n}{\partial t}-\frac{\alpha}{r}\frac{\partial}{\partial r}\left(rn\frac{\partial n}{\partial r}\right)=0. \tag{9}\]

This equation is identical to the porous-medium equation [14], but here it is derived from the details of the pair interaction, allowing us to trace the effective parameters to their microscopic origins.

Self-similar solutions are given by dimensional analysis of Eq. 9 (see Refs. [16; 17]). We start by assuming a solution of the form \(n=At^{\gamma}f(Br/t^{\beta})=At^{\gamma}f(\eta)\). We can further link \(\gamma\) and \(\beta\) by recalling that the total number of particles, \(N\), is independent of time,

\[N=\rho_{c}\int n(r)\,r\,dr\,d\theta\propto t^{\gamma+2\beta}\int f(\eta)\,\eta\,d\eta, \tag{10}\]

which is constant only for \(\gamma=-2\beta\). The density therefore has the form \(n=At^{-2\beta}f(Br/t^{\beta})\).
Placing \(n\) in Eq. 9, we find that \(f\) is indeed a function of \(\eta\) alone (as was assumed), and we must have \(\beta=\frac{1}{4}\) and \(B^{2}=\frac{1}{8A\alpha}\). The equation for the self-similarity function, \(f\), is

\[2f+\eta f^{\prime}=-\frac{1}{2\eta}\frac{d}{d\eta}\left(\eta ff^{\prime}\right), \tag{11}\]

whose solution is parabolic, such that the concentration is

\[n=\frac{A}{\sqrt{t}}\left(1-\eta^{2}\right)=\frac{A}{\sqrt{t}}\left(1-B^{2}\frac{r^{2}}{\sqrt{t}}\right). \tag{12}\]

Finally, the prefactor, \(A\), is determined from the total number of particles (Eq. 10), giving \(A=\sqrt{3N/(8\pi\rho_{c}\alpha)}\). The self-similar profile is quadratic in \(\eta\), hence quadratic in distance, and its width is an increasing function of time. Note that the concentration of particles is _strictly zero_ beyond \(r=t^{1/4}/B\), meaning that the drop is compact.
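That the profile of Eq. (12) solves Eq. (9) is quickly checked symbolically; the following is a minimal sketch assuming SymPy, with \(B^{2}=1/(8A\alpha)\) as above.

```python
import sympy as sp

# Symbolic check that the self-similar profile of Eq. (12), with
# B**2 = 1/(8*A*alpha), solves the nonlinear diffusion equation (9).
r, t, A, alpha = sp.symbols("r t A alpha", positive=True)
B2 = 1 / (8 * A * alpha)
n = (A / sp.sqrt(t)) * (1 - B2 * r**2 / sp.sqrt(t))

residual = sp.diff(n, t) - (alpha / r) * sp.diff(r * n * sp.diff(n, r), r)
print(sp.simplify(residual))                 # prints 0
```

The residual vanishes identically, independent of the values of \(A\) and \(\alpha\).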
**Analytic Results in the Sparse Limit.** When the average distance between particles is larger than the decay length of the repulsive force, \(l\), we can assume only nearest neighbors contribute to the interaction, and the discrete nature of the suspension cannot be ignored. In such cases, we can no longer use Eq. 8 to find the density as a function of time. However, we can still approximate the radius of the drop as it spreads by considering the velocity of particles at the edge. Since in this limit particles predominantly interact with their nearest neighbors, their velocity can be readily approximated. Due to the repulsive interactions, the arrangement of particles is hexagonal. We can assume a particle at the edge of the drop has three equally spaced nearest neighbors. Due to the isotropic nature of the interactions, we can consider any such particle; without loss of generality, we take the particle positioned at \(\mathbf{r}=R\hat{x}\). The particle will move with velocity \(\mathbf{v}\left(R\hat{x}\right)=\frac{dR(t)}{dt}\hat{x}=2v_{0}e^{-R/(\sqrt{N/\pi}\,l)}\hat{x}\), where \(R/\sqrt{N/\pi}\) is the average distance between particles in the ensemble. The approximate radius is simply found by integrating the velocity,

\[R(t)=\sqrt{\frac{N}{\pi}}\,l\log(t/t_{0}+c), \tag{13}\]

with \(t_{0}=\sqrt{N/\pi}\,l/2v_{0}\) and \(c=\exp\big{(}R_{0}/(\sqrt{N/\pi}\,l)\big{)}\), where \(R_{0}\) is the initial radius. To test the validity of this result, we ran MD simulations and experiments of a concentrated suspension of charge-stabilized colloids in deionized water.

## Dense limit verification

**Molecular Dynamics Simulation in the Dense Limit.** We ran simulations of 10,000 particles with short-ranged exponential repulsion. We start from a random configuration in a circle of size \(R\) and let the system evolve over time using a \(5^{th}\)-order Runge-Kutta scheme, adapting the step size according to the distance between particles. As the drop evolves, it spreads, such that \(R=R(t)\). We find the local density of particles from the Voronoi tessellation, calculating the area of each cell, \(A_{\text{cell}}^{i}\) [28]; the density is given by \(n(r)=1/A_{\text{cell}}^{i}\). The upper left panel in Fig. 2 shows overlaid snapshots from the simulation at different times. Figure 2B shows the radius of the drop as a function of time; after a short transient, it indeed follows the expected power law \(R\propto t^{1/4}\). The density profile as a function of the radius of the drop, \(r\), at different times is presented in Fig. 2C. And lastly, Fig. 2D shows the re-scaled density \(\sqrt{t}\,n\) versus \(r^{2}/\sqrt{t}\), excluding the first few timesteps. Note how at longer times all the curves collapse onto a single straight line, in accordance with the scaling of Eq. 12.

**Numerical Solutions in the Dense Limit.** To test the validity of the asymptotic solution, we numerically integrated the mass conservation equation, Eq. 4, together with the full velocity given by Eq. 5. Note that this is a numerical solution of the _full_ partial differential equation. That is, without resorting to any of the assumptions used to derive Eq. 6 and the nonlinear diffusion equation, Eq. 9. Namely, without the asymptotic approximation of the integral in Eq. 5, and using the full density distribution (not only the first nonvanishing term in its Taylor expansion). To that end, we started with a narrow Gaussian distribution with a standard deviation \(\sigma=0.03\) on a grid of \(N\times N=128\times 128\), and with a timestep \(dt=1/N^{2}\). In each time step, we calculated the velocity of Eq. 5 by convolution in Fourier space, making sure the convolution is well behaved by zero-padding the arrays. The spatial part of the partial differential equation was thus solved using the FFT, and the time integration scheme chosen was leapfrog. We propagated the dynamics over \(t=32{,}000\) timesteps.

Figure 3: Full numerical solution of Eq. 4 combined with Eq. 5 using Fast Fourier Transforms with padded arrays. a) Snapshots of the density distribution at constant time intervals show that a repulsive suspension starting with an initially smooth Gaussian distribution spreads compactly. b) Radially averaged density profile as a function of the radius of the drop at different times. Inset shows a zoom-in on the edge of the drop, showing the sharpening of the cusp over time. Color goes from bright to dark as time progresses. c) Collapsed plot of the re-scaled distributions at later times (following Eq. 12). Density is scaled by \(\sqrt{t}\) and plotted against \(r^{2}/\sqrt{t}\), as predicted by Eq. 12 for the approximate velocity, Eq. 8. Inset shows a plot of the density versus distance used for the figure. d) Radius as a function of time showing the expected growth as \(t^{1/4}\).

The numerical results are presented in Fig. 3. In Fig. 3b, the density as a function of radius is shown at various times; the inset zooms in on the edge of the drop, showing that at later times a cusp forms, indicating the compactness of the solution. In Fig. 3c, the normalized density \(\sqrt{t}\,n(r)\) is plotted as a function of \(r^{2}/\sqrt{t}\), and all later times collapse onto a single straight curve, as expected from the nonlinear diffusion result, Eq. 12. Lastly, Fig. 3d shows the radius as a function of time, following the expected power law \(\propto t^{1/4}\).
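The core of the numerical scheme just described is the evaluation of the velocity of Eq. (5) as a convolution. A minimal sketch of that step (assuming NumPy; the grid size, \(l\), \(v_{0}\), and the Gaussian width are illustrative, and the leapfrog time stepping of Eq. (4) is omitted):

```python
import numpy as np

# Sketch of the velocity evaluation of Eq. (5) by FFT convolution on
# zero-padded arrays.  Parameters are illustrative; time stepping omitted.
Ngrid, L, l, v0 = 128, 2.0, 0.05, 1.0
dx = L / Ngrid
x = (np.arange(Ngrid) - Ngrid // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
n = np.exp(-(X**2 + Y**2) / (2 * 0.03**2))    # narrow Gaussian, sigma = 0.03

# kernel g(z) = (z/|z|) exp(-|z|/l), one Cartesian component per array
R = np.hypot(X, Y)
with np.errstate(invalid="ignore", divide="ignore"):
    gx = np.where(R > 0, X / R * np.exp(-R / l), 0.0)
    gy = np.where(R > 0, Y / R * np.exp(-R / l), 0.0)

def conv(a, b, N=Ngrid):
    # zero-padded linear convolution via FFT, cropped back to the grid
    c = np.real(np.fft.ifft2(np.fft.fft2(a, (2 * N, 2 * N)) *
                             np.fft.fft2(b, (2 * N, 2 * N))))
    return c[N // 2:N // 2 + N, N // 2:N // 2 + N]

rho_c = 1.0 / (np.pi * l**2)
vx = v0 * rho_c * conv(n, gx) * dx**2         # v = v0 rho_c (n * g)
vy = v0 * rho_c * conv(n, gy) * dx**2
print("max speed:", np.hypot(vx, vy).max())
```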
## IV Sparse limit verification

**Experiments of the Expansion of a Colloidal Suspension.** We tested the expansion of a colloidal suspension experimentally by concentrating particles using optical tweezers, then turning off the light and monitoring the spreading of the colloidal drop (see Fig. 4). Most commonly, optical tweezers have the laser light first enter the objective rear, coming to a tight focus at the imaging plane [29]. This creates a strong yet small trap that can typically host a single colloidal particle (\(\sim 1\ \mu\)m). To make a trap that can tweeze many particles, we built custom optical tweezers using a nearly collimated laser beam that first passes through the sample (and only then enters the objective through the collecting lens) [30; 31]. For this, a custom Galilean telescope was used to shrink a 3 mm IR laser source (1064 nm, IPG Photonics) into a nearly collimated beam with a diameter of \(D\approx 300\ \mu\)m at the sample (see Fig. 4A). We used \(d=3\ \mu\)m colloidal particles (Bangslabs) suspended in deionized water (Millipore, 18.2 M\(\Omega\)cm), with pH \(\approx 6.3\). The particles are charge stabilized and repel through a screened Coulomb interaction, which can be approximated using DLVO theory as \(U(r)\propto\exp\{d/l-r/l\}/r\), where \(l\) can be approximated from the Debye screening length, \(l\approx\lambda_{D}+d/2\), with \(\lambda_{D}\approx 1\ \mu\)m [32]. The suspension was loaded into a 100 \(\mu\)m tall, passivated capillary glass (Vitrotubes; see the Supporting Information for treatment details). Particles settle at the bottom surface and form a quasi-2D suspension (gravitational height \(h_{g}\approx 20\ \text{nm}<0.01d\)) with a filling fraction of \(Nd^{2}/D^{2}\lesssim 0.9\). Note that the high filling fraction is still in the sparse limit, as \(l\lesssim d\). With no light, individual particles display Brownian motion with a diffusion constant of \(D_{0}=0.11\pm 0.02\ \mu\)m\({}^{2}\)/s, as measured from the mean-square displacement of individually tracked particles (see Supporting Information).

Figure 4: Experimental results. (A) Large-field optical tweezer setup. Top inset shows the light-field intensity distribution of the image of the sample (scalebar 200 \(\mu\)m). Bottom inset shows an image of two colloidal particles on the surface (scalebar 10 \(\mu\)m). (B) Snapshots of a dense suspension with \(N\approx 10^{4}\) particles show a compact expansion, with particles spreading due to screened electrostatic repulsion. Scalebar 100 \(\mu\)m. (C) Overlapping figures with a color threshold. (D) Drop radius as a function of time. The radius shows a logarithmic dependence (green dashed line), as predicted by the theory for a sparse suspension.

When the laser is turned on, particles are softly attracted into the region of higher optical field, collecting approximately \(N\approx 10^{4}\) particles (see Fig. 4A). Particles are packed at an effective area fraction of 0.9. Given the individual diffusion constant and the effective area fraction, the suspension is expected to be in the dilute regime (\(n<1\)) described by Eq. 2, but above the diffusive regime. Once the beam is turned off, the suspension starts to spread. During the initial half hour of spreading, the suspension remains compact, with a sharp boundary. At later times, the boundary turns diffusive and the expansion is no longer compact (see Supporting Movie 1). We measure the size of the drop in the compact expansion regime by thresholding the movie (Fig. 4) and extracting the radius of the drop at each frame. We find that, as predicted by Eq. 2, the suspension expands logarithmically.

**Molecular Dynamics Simulation in the Sparse Limit.** Running simulations of 10,000 particles, either for very long times or starting from a sparse random configuration, results in a logarithmic growth of the radius of the drop as a function of time. Figure 1 shows a transition from \(R\sim t^{1/4}\) at early times, as predicted by the self-similar solution, to \(\sqrt{N/\pi}\,l\log(t)\), as predicted in the sparse limit, Eq. 13. Even though the analytic arguments made rough assumptions, namely taking the average distance between particles as a measure of spacing, the coefficient of the logarithm is correctly predicted. In our case \(l=0.005\), giving \(\sqrt{N/\pi}\,l=0.28\).
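Equation (13) itself is easy to verify directly. A minimal sketch (assuming SciPy; \(v_{0}\) and the initial radius \(R_{0}\) are illustrative choices, while \(N\) and \(l\) match the values quoted above) integrates \(dR/dt=2v_{0}e^{-R/(\sqrt{N/\pi}\,l)}\) and compares with the closed form:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Check of Eq. (13): integrate dR/dt = 2*v0*exp(-R/a), a = sqrt(N/pi)*l,
# and compare to the closed-form logarithmic solution.
N, l, v0, R0 = 10_000, 0.005, 1.0, 1.0
a = np.sqrt(N / np.pi) * l                 # = 0.28 for these values
t0, c = a / (2 * v0), np.exp(R0 / a)

sol = solve_ivp(lambda t, R: [2 * v0 * np.exp(-R[0] / a)], (0.0, 1e4), [R0],
                rtol=1e-10, atol=1e-12)
R_num = sol.y[0, -1]
R_exact = a * np.log(sol.t[-1] / t0 + c)
print(R_num, R_exact)                      # agree to integration tolerance
```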
**Conclusions.** We have characterized the compact expansion of a repulsive suspension using theory, numerical integration, simulations, and experiments. An ensemble of repulsive particles spreads in a subdiffusive manner: with a power law of \(t^{1/4}\) in the dense limit, when the distance between particles is smaller than the decay length of the repulsive potential, and logarithmically in the sparse limit, when particles only interact with their nearest neighbors. In our work we verified experimentally only the sparse limit, but the dense limit can also be accessed in a colloidal sample using recently developed protocols [33] where the screening length much exceeds the particle size. Our results link the microscopic pair potential to macroscopically observed dynamics in ensembles dominated by interactions.

**Author Contribution.** NO initiated the research, derived the theory, and developed the molecular dynamics simulations. MYBZ performed the experiments. Both authors wrote the numerical integration code, analyzed the data, and wrote the manuscript.

**Acknowledgments.** We thank Michael Shelley, Haim Diamant, and Philip Rosenau for helpful discussions.
2309.09313
Transportation cost spaces and their embeddings into $L_1$, a Survey
These notes present a basic survey on Transportation cost spaces (aka Lipschitz free spaces, Wasserstein spaces) and their bi-Lipschitz and linear embeddings into $L_1$ spaces. To make these notes as self-contained as possible, we added the proofs of several relevant results from computational graph theory in the appendix.
Thomas Schlumprecht
2023-09-17T16:00:30Z
http://arxiv.org/abs/2309.09313v1
# Transportation cost spaces and their embeddings into \(L_{1}\), a survey

###### Abstract.

These notes present a basic survey on Transportation cost spaces (aka Lipschitz free spaces, Wasserstein spaces) and their bi-Lipschitz and linear embeddings into \(L_{1}\) spaces. To make these notes as self-contained as possible, we added the proofs of several relevant results from computational graph theory in the appendix.

Key words and phrases: Transportation Cost Spaces, Lipschitz Free Spaces, \(L_{1}\)-distortion

2010 Mathematics Subject Classification: 46B85, 68R12, 46B20

The author was supported by the National Science Foundation under Grant Number DMS-2054443.
## 1. Basic facts about Lipschitz free spaces

Throughout, \((M,d)\) denotes a metric space with a distinguished base point \(0\in M\). For a Banach space \(X\), \(\operatorname{Lip}_{0}(M,X)\) denotes the space of Lipschitz maps \(f:M\to X\) with \(f(0)=0\), and \(\|f\|_{L}\) (also written \(\|f\|_{\operatorname{Lip}}\)) denotes the Lipschitz constant of \(f\). We write \(\operatorname{Lip}_{0}(M)=\operatorname{Lip}_{0}(M,\mathbb{R})\).

**Proposition 1.1**.: _Assume that \(M\) is a metric space and \(X\) a Banach space. Then \(\|\cdot\|_{L}\) is a norm on \(\operatorname{Lip}_{0}(M,X)\) which turns \(\operatorname{Lip}_{0}(M,X)\) into a Banach space._

Let \(\mathcal{M}(M)\) be the space of finite linear combinations of Dirac measures on \(M\). The elements of \(\mathcal{M}(M)\) can be seen as elements of the dual of \(\operatorname{Lip}_{0}(M)\), and we define \(\mathcal{F}(M)\) to be the closure of \(\mathcal{M}(M)\) in \(\operatorname{Lip}_{0}^{*}(M)\).
Thus \(\mathcal{F}(M)\) is the completion of \(\mathcal{M}(M)\) with respect to the norm

\[\|\mu\|_{\mathcal{F}}=\sup_{\begin{subarray}{c}f\in\operatorname{Lip}_{0}(M)\\ \|f\|_{\operatorname{Lip}}\leq 1\end{subarray}}\int f(x)\,d\mu(x)=\sup_{\begin{subarray}{c}f\in\operatorname{Lip}_{0}(M)\\ \|f\|_{\operatorname{Lip}}\leq 1\end{subarray}}\sum_{j=1}^{n}a_{j}f(x_{j})\text{ for }\mu=\sum_{j=1}^{n}a_{j}\delta_{x_{j}}\in\mathcal{M}(M).\]

\(\mathcal{F}(M)\) can be seen as a "linearization of \(M\)", as the following observation suggests.

**Proposition 1.2**.: _Let \(M\) be a metric space. The map \(\delta_{M}:M\to\mathcal{F}(M)\), \(m\mapsto\delta_{m}\), is an isometry._

_We will, from now on, identify elements of \(M\) with their image in \(\mathcal{F}(M)\) under \(\delta_{M}\)._

Proof.: For \(m,m^{\prime}\in M\),

\[\|\delta_{m}-\delta_{m^{\prime}}\|_{\operatorname{Lip}^{*}}=\sup_{f\in\operatorname{Lip}_{0}(M),\|f\|_{L}\leq 1}|f(m)-f(m^{\prime})|\leq d(m,m^{\prime}).\]

On the other hand, define for \(m\in M\)

\[f_{m}(m^{\prime})=d(m,m^{\prime})-d(m,0),\text{ for }m^{\prime}\in M;\]

then \(f_{m}\in\operatorname{Lip}_{0}(M)\) with \(\|f_{m}\|_{L}=1\), and

\[\langle\delta_{m^{\prime}}-\delta_{m},f_{m}\rangle=f_{m}(m^{\prime})-f_{m}(m)=d(m^{\prime},m),\]

and thus \(\|\delta_{m}-\delta_{m^{\prime}}\|_{\operatorname{Lip}^{*}}\geq d(m,m^{\prime})\).

**Remark 1.3**.: The measure \(\delta_{0}\), as an element of \(\operatorname{Lip}_{0}^{*}(M)\), is the \(0\)-functional. Therefore the family \((\delta_{m}:m\in M)\) is not linearly independent in \(\operatorname{Lip}_{0}^{*}(M)\). But it is easy to see that the family \((\delta_{m}:m\in M\setminus\{0\})\) is linearly independent, and its linear span is dense in \(\mathcal{F}(M)\).

Let \(\mathcal{M}_{0}(M)\) be the set of elements \(\mu\in\mathcal{M}(M)\) for which \(\mu(M)=0\). After adding an appropriate multiple of \(\delta_{0}\) to \(\mu\in\mathcal{M}(M)\), which does not change how \(\mu\) acts on elements of \(\operatorname{Lip}_{0}(M)\), we can assume that \(\mu\in\mathcal{M}_{0}(M)\), and thus we can also see \(\mathcal{F}(M)\) as the closure of \(\mathcal{M}_{0}(M)\) in \(\operatorname{Lip}_{0}^{*}(M)\).

We will show (cf. [19, Section 2]) that \(\operatorname{Lip}_{0}(M)\) is, in a natural way, isometrically equivalent to the dual of \(\mathcal{F}(M)\). The following observation is easy to see and crucial.
**Proposition 1.4**.: _Assume that \((f_{i})_{i\in I}\) is a net in \(\operatorname{Lip}_{0}(M)\) with \(\|f_{i}\|_{\operatorname{Lip}}\leq 1\), for \(i\in I\), and that \(f(m)=\lim_{i\in I}f_{i}(m)\) exists for every \(m\in M\). Then \(f\in\operatorname{Lip}_{0}(M)\), and \(\|f\|_{\operatorname{Lip}}\leq 1\)._

**Theorem 1.5**.: _Let \(M\) be a metric space. Then the canonical map_

\[\operatorname{Lip}_{0}(M)\to\mathcal{F}^{*}(M),\quad f\mapsto\chi_{f},\text{ with }\chi_{f}(\mu)=\langle\mu,f\rangle,\text{ for }\mu\in\mathcal{F}(M),\]

_is an isometry from \(\operatorname{Lip}_{0}(M)\) onto \(\mathcal{F}^{*}(M)\)._

We will need the following well-known result by Dixmier:

**Theorem 1.6**.: _[5, 18] Assume that \(U\) is a Banach space and \(V\) is a closed subspace of \(U^{*}\) so that \(B_{U}\), the unit ball in \(U\), is compact in the topology \(\sigma(U,V)\), the topology on \(U\) generated by \(V\)._

_Then \(V^{*}\) is isometrically isomorphic to \(U\), and the map_

\[T:U\to V^{*},\quad u\mapsto F_{u},\text{ with }F_{u}(v)=\langle u,v\rangle,\text{ for }u\in U\text{ and }v\in V,\]

_is an isometric isomorphism onto \(V^{*}\)._

Proof of Theorem 1.5.: We will verify that the assumptions of Theorem 1.6 hold for \(U=\operatorname{Lip}_{0}(M)\) and \(V=\mathcal{F}(M)\subset\operatorname{Lip}_{0}^{*}(M)\). Let \((f_{i})_{i\in I}\) be a net in \(B_{\operatorname{Lip}_{0}(M)}\). We have to show that there is a subnet \((g_{j})_{j\in J}\) of \((f_{i})_{i\in I}\) which converges to some element \(g\in B_{\operatorname{Lip}_{0}(M)}\) with respect to \(\sigma\big{(}\operatorname{Lip}_{0}(M),\mathcal{F}(M)\big{)}\). Let

\[K=\prod_{m\in M}[-d(0,m),d(0,m)],\]

which is compact in the product topology. It follows that some subnet \((g_{j})_{j\in J}\) of \((f_{i})_{i\in I}\) converges pointwise to some \(g\in K\), and from Proposition 1.4 it follows that \(g\in\operatorname{Lip}_{0}(M)\) and \(\|g\|_{\operatorname{Lip}_{0}(M)}\leq 1\). Thus \(\langle\mu,g_{j}\rangle\) converges to \(\langle\mu,g\rangle\) for each \(\mu\) in the linear span of \((\delta_{m}:m\in M)\), and since the net \((g_{j})_{j\in J}\) is bounded, it follows that \(\langle\mu,g_{j}\rangle\) converges to \(\langle\mu,g\rangle\) for all \(\mu\in\mathcal{F}(M)\). We deduce therefore our claim from Theorem 1.6.

### Example

**Example 1.7**.: Let \(M=\mathbb{R}\). We want to find a concrete representation of the space \(\mathcal{F}(\mathbb{R})\). Every Lipschitz function on \(\mathbb{R}\) is absolutely continuous and thus, by the Fundamental Theorem of Calculus [7, Theorem 3.35], almost everywhere differentiable. Moreover, the derivative

\[D:\operatorname{Lip}_{0}(\mathbb{R})\to L_{\infty}(\mathbb{R}),\quad f\mapsto f^{\prime},\]

is an isometry whose inverse is

\[I:L_{\infty}(\mathbb{R})\to\operatorname{Lip}_{0}(\mathbb{R}),\quad f\mapsto F,\text{ with }F(x)=\int_{0}^{x}f(t)\,dt\text{ for }x\in\mathbb{R}.\]

We claim that the map \(\delta_{x}\mapsto 1_{[0,x]}\), for \(x\geq 0\), and \(\delta_{x}\mapsto-1_{[x,0]}\), for \(x<0\), extends to an isometric isomorphism between \(\mathcal{F}(\mathbb{R})\) and \(L_{1}(\mathbb{R})\). Indeed, we represent the span of the \(\delta_{x}\), \(x\in\mathbb{R}\), by

\[\tilde{\mathcal{F}}=\left\{\sum_{j=-m}^{n-1}\xi_{j}(\delta_{x_{j+1}}-\delta_{x_{j}}):\ m,n\in\mathbb{N},\ x_{-m}<x_{-m+1}<\ldots<x_{-1}<x_{0}=0<x_{1}<\ldots<x_{n},\ (\xi_{j})_{j=-m}^{n-1}\subset\mathbb{R}\right\}.\]
Then, we consider the map

\[T:\tilde{\mathcal{F}}\to L_{1}(\mathbb{R}),\quad\sum_{j=-m}^{n-1}\xi_{j}(\delta_{x_{j+1}}-\delta_{x_{j}})\mapsto\sum_{j=-m}^{n-1}\xi_{j}1_{[x_{j},x_{j+1})}.\]

Since \(T\) has a dense image, we only need to show that \(T\) is an isometry. For \(\mu=\sum_{j=-m}^{n-1}\xi_{j}(\delta_{x_{j+1}}-\delta_{x_{j}})\in\tilde{\mathcal{F}}\) it follows that

\[\|T(\mu)\|_{1} =\sup_{f\in L_{\infty}(\mathbb{R}),\|f\|_{\infty}\leq 1}\Bigg{|}\sum_{j=-m}^{n-1}\xi_{j}\int_{x_{j}}^{x_{j+1}}f(t)\,dt\Bigg{|}\]
\[=\sup_{f\in L_{\infty}(\mathbb{R}),\|f\|_{\infty}\leq 1}\sum_{j=-m}^{n-1}\xi_{j}\big{(}I(f)(x_{j+1})-I(f)(x_{j})\big{)}\]
\[=\sup_{F\in\operatorname{Lip}_{0}(\mathbb{R}),\|F\|_{\operatorname{Lip}}\leq 1}\sum_{j=-m}^{n-1}\xi_{j}\big{(}F(x_{j+1})-F(x_{j})\big{)}=\|\mu\|_{\operatorname{Lip}^{*}},\]

which verifies our claim.

**Example 1.8**.: Similarly, the following can be shown. If \(M=\mathbb{Z}\) with its usual metric, then

\[T:\operatorname{Lip}_{0}(\mathbb{Z})\to\ell_{\infty}(\mathbb{Z}),\ f\mapsto x_{f},\ \text{with}\ x_{f}(n)=f(n)-f(n-1),\ \text{for}\ n\in\mathbb{Z},\]

is an isometric isomorphism onto \(\ell_{\infty}(\mathbb{Z})\), and it is the adjoint of the operator

\[S:\ell_{1}(\mathbb{Z})\to\mathcal{F}(\mathbb{Z}),\quad e_{n}\mapsto\delta_{n}-\delta_{n-1},\]

which is also an isometric isomorphism.

### The Universality property of \(\mathcal{F}(M)\)

An important property is the following extension property of \(\mathcal{F}(M)\).

**Proposition 1.9**.: _Let \(M\) and \(N\) be metric spaces, with special \(0\)-points \(0_{M}\) and \(0_{N}\). Then every Lipschitz map \(\varphi:M\to N\), with \(\varphi(0_{M})=0_{N}\), can be extended to a linear bounded map \(\hat{\varphi}:\mathcal{F}(M)\to\mathcal{F}(N)\) (meaning that \(\hat{\varphi}(\delta_{m})=\delta_{\varphi(m)}\), for \(m\in M\)), so that for the operator norm of \(\hat{\varphi}\) we have_

\[\|\hat{\varphi}\|_{\mathcal{F}(M)\to\mathcal{F}(N)}\leq\operatorname{Lip}(\varphi).\]

Proof.: Let \(\lambda=\operatorname{Lip}(\varphi)\). We define

\[\varphi^{\#}:\operatorname{Lip}_{0}(N)\to\operatorname{Lip}_{0}(M),\qquad f\mapsto f\circ\varphi.\]

Thus, \(\varphi^{\#}\) is a linear bounded operator, with \(\|\varphi^{\#}\|\leq\lambda\). We claim that \(\varphi^{\#}\) is also \(w^{*}\)-continuous; more precisely, \(\varphi^{\#}\) is \(\sigma(\operatorname{Lip}_{0}(N),\mathcal{F}(N))\)-\(\sigma(\operatorname{Lip}_{0}(M),\mathcal{F}(M))\)-continuous. Let \((f_{i})_{i\in I}\) be a net in \(\operatorname{Lip}_{0}(N)\) which \(w^{*}\)-converges to \(0\), and let \(\mu\in\mathcal{F}(M)\), say \(\mu=\lim_{k\to\infty}\mu_{k}\) in \(\mathcal{F}(M)\), and thus in \(\operatorname{Lip}_{0}^{*}(M)\), where \(\mu_{k}\) is of the form

\[\mu_{k}=\sum_{j=1}^{l_{k}}a_{(k,j)}\delta_{m_{(k,j)}},\ \text{with}\ (a_{(k,j)})_{j=1}^{l_{k}}\subset\mathbb{R}\ \text{and}\ (m_{(k,j)})_{j=1}^{l_{k}}\subset M.\]

Then

\[\tilde{\mu}_{k}=\sum_{j=1}^{l_{k}}a_{(k,j)}\delta_{\varphi(m_{(k,j)})}\]

converges in \(\mathcal{F}(N)\) to some element \(\tilde{\mu}\in\mathcal{F}(N)\) with the property that \(\langle\tilde{\mu},f\rangle=\langle\mu,f\circ\varphi\rangle\), for \(f\in\operatorname{Lip}_{0}(N)\).
Indeed, note that for \(k,k^{\prime}\in\mathbb{N}\),

\[\|\tilde{\mu}_{k}-\tilde{\mu}_{k^{\prime}}\|_{\operatorname{Lip}^{*}} =\sup_{f\in\operatorname{Lip}_{0}(N),\|f\|_{L}\leq 1}\langle f,\ \tilde{\mu}_{k}-\tilde{\mu}_{k^{\prime}}\rangle\]
\[=\sup_{f\in\operatorname{Lip}_{0}(N),\|f\|_{L}\leq 1}\langle f\circ\varphi,\ \mu_{k}-\mu_{k^{\prime}}\rangle\]
\[\leq\lambda\sup_{g\in\operatorname{Lip}_{0}(M),\|g\|_{L}\leq 1}\langle g,\mu_{k}-\mu_{k^{\prime}}\rangle=\lambda\|\mu_{k}-\mu_{k^{\prime}}\|_{\operatorname{Lip}^{*}},\]

so \((\tilde{\mu}_{k})_{k}\) is a Cauchy sequence in \(\mathcal{F}(N)\). It follows therefore that

\[\lim_{i\in I}\langle f_{i}\circ\varphi,\mu\rangle=\lim_{i\in I}\langle f_{i},\tilde{\mu}\rangle=0,\]

and thus we verified that \(\varphi^{\#}\) is \(w^{*}\)-continuous. It follows therefore that \(\varphi^{\#}\) is the adjoint of an operator \(\hat{\varphi}:\mathcal{F}(M)\to\mathcal{F}(N)\). Since for \(f\in\operatorname{Lip}_{0}(N)\) and \(m\in M\)

\[\langle\hat{\varphi}(\delta_{m}),f\rangle=\langle\delta_{m},\varphi^{\#}(f)\rangle=f(\varphi(m))=\langle\delta_{\varphi(m)},f\rangle,\]

it follows that \(\hat{\varphi}(\delta_{m})=\delta_{\varphi(m)}\), which finishes our proof.

The following extension result for \(\mathcal{F}(M)\) is the reason why Godefroy and Kalton coined the name _Lipschitz free space over \(M\)_ for \(\mathcal{F}(M)\).

**Proposition 1.10**.: _[19, Theorem 2.2.4] Let \(M\) be a metric space and \(X\) a Banach space, and let \(L:M\to X\) be a Lipschitz function with \(L(0)=0\). Then a unique linear and bounded extension \(\hat{L}:\mathcal{F}(M)\to X\) of \(L\) exists. This means that \(\hat{L}(\delta_{m})=L(m)\), for all \(m\in M\)._

_Moreover, we have in this case that \(\|\hat{L}\|_{\mathcal{F}(M)\to X}=\|L\|_{L}\)._

Proof.: Since the \(\delta_{m}\), \(m\in M\setminus\{0\}\), are linearly independent as elements of \(\mathcal{F}(M)\), we can at least extend \(L\) linearly to \(\operatorname{span}(\delta_{m}:m\in M\setminus\{0\})\). We denote this extension by \(\tilde{L}\), and we have to show that \(\tilde{L}\) extends to a bounded linear operator on \(\mathcal{F}(M)\) whose operator norm coincides with \(\|L\|_{\operatorname{Lip}}\).

Let \(\mu=\sum_{j=1}^{n}a_{j}\delta_{m_{j}}\in\operatorname{span}(\delta_{m}:m\in M)\). Then there is some \(x^{*}\in B_{X^{*}}\) so that \(\langle x^{*},\tilde{L}(\mu)\rangle=\|\tilde{L}(\mu)\|\). It follows that \(x^{*}\circ L\) is in \(\operatorname{Lip}_{0}(M)\) and \(\|x^{*}\circ L\|_{\operatorname{Lip}}\leq\|L\|_{\operatorname{Lip}}\), and thus

\[\|\tilde{L}(\mu)\|_{X}=\langle x^{*},\tilde{L}(\mu)\rangle\leq\|x^{*}\circ L\|_{\operatorname{Lip}}\cdot\|\mu\|_{\operatorname{Lip}^{*}}\leq\operatorname{Lip}(L)\cdot\|\mu\|_{\operatorname{Lip}^{*}}. \tag{1}\]

Thus, \(\tilde{L}\) can be extended to a linear bounded operator \(\hat{L}\) on all of \(\mathcal{F}(M)\). \(\hat{L}\) is of course also Lipschitz, with \(\operatorname{Lip}(\hat{L})=\|\hat{L}\|_{\mathcal{F}(M)\to X}\), and since \(M\) isometrically embeds into \(\mathcal{F}(M)\), it follows that

\[\|\hat{L}\|_{\mathcal{F}(M)\to X}=\operatorname{Lip}(\hat{L})\geq\sup_{m,m^{\prime}\in M,m\neq m^{\prime}}\frac{\|L(m)-L(m^{\prime})\|}{d(m,m^{\prime})}=\operatorname{Lip}(L),\]

and thus, together with (1), we deduce that \(\|\hat{L}\|_{\mathcal{F}(M)\to X}=\operatorname{Lip}(L)\).

**Proposition 1.11**.: _If \(M\) is a metric space and \(N\subset M\) (with \(0\in N\)), then \(\mathcal{F}(N)\) is (in the natural way) a subspace of \(\mathcal{F}(M)\)._

The proof of Proposition 1.11 follows from the following extension result for Lipschitz functions.
**Lemma 1.12**.: _Any Lipschitz function \(f:N\to\mathbb{R}\) can be extended to a Lipschitz function \(F:M\to\mathbb{R}\) by defining_

\[F(m)=\inf_{n\in N}\big{(}f(n)+\|f\|_{L}\,d(n,m)\big{)},\text{ for }m\in M.\]

_Moreover, it follows that \(\operatorname{Lip}(F)=\operatorname{Lip}(f)\)._

Proof.: Assume that \(m,m^{\prime}\in M\), and assume without loss of generality that \(F(m)\leq F(m^{\prime})\). Let \(\varepsilon>0\) and choose \(n\in N\) so that \(F(m)\geq f(n)+\operatorname{Lip}(f)d(n,m)-\varepsilon\). Then it follows that

\[0\leq F(m^{\prime})-F(m)\leq f(n)+\operatorname{Lip}(f)d(n,m^{\prime})-\big{(}f(n)+\operatorname{Lip}(f)d(n,m)\big{)}+\varepsilon=\operatorname{Lip}(f)\big{(}d(n,m^{\prime})-d(n,m)\big{)}+\varepsilon\leq\operatorname{Lip}(f)d(m,m^{\prime})+\varepsilon;\]

thus, we deduce the claim if we let \(\varepsilon\) tend to \(0\).

## 2. The Transportation cost norm

### The Duality Theorem of Kantorovich

This section presents an intrinsic definition of the space \(\mathcal{F}(M)\) for a metric space \((M,d)\), _i.e.,_ a definition which only uses the metric on \(M\). It represents \(\mathcal{F}(M)\) as a _Transportation Cost Space_.

Let \(\mu=\sum_{j=1}^{n}a_{j}\delta_{x_{j}}\in\mathcal{M}(M)\), with \((a_{j})_{j=1}^{n}\subset\mathbb{R}\) and \((x_{j})_{j=1}^{n}\subset M\setminus\{0\}\). Since \(\delta_{0}\equiv 0\) (as a functional acting on \(\operatorname{Lip}_{0}(M)\)), we can put \(a_{0}=-\sum_{j=1}^{n}a_{j}\) and write \(\mu\) as

\[\mu=a_{0}\delta_{0}+\sum_{j=1}^{n}a_{j}\delta_{x_{j}}.\]

Thus, from now on, we define

\[\tilde{\mathcal{F}}(M)=\Big{\{}\sum_{j=1}^{n}a_{j}\delta_{x_{j}}:n\in\mathbb{N},\ (a_{j})_{j=1}^{n}\subset\mathbb{R},\ (x_{j})_{j=1}^{n}\subset M,\text{ with }\sum_{j=1}^{n}a_{j}=0\Big{\}}.\]

**Proposition 2.1**.: _Every \(\mu\in\tilde{\mathcal{F}}(M)\) can be represented as_

\[\mu=\sum_{j=1}^{l}r_{j}(\delta_{x_{j}}-\delta_{y_{j}}),\text{ with }(x_{j})_{j=1}^{l},(y_{j})_{j=1}^{l}\subset M,\ (r_{j})_{j=1}^{l}\subset\mathbb{R}^{+}. \tag{2}\]

**Definition 2.2**.: For \(x,y\in M\) with \(x\neq y\), we call \(\delta_{x}-\delta_{y}\) a _molecule in \(\tilde{\mathcal{F}}(M)\)_, and (2) a _molecular representation of \(\mu\in\tilde{\mathcal{F}}(M)\)_. We will always assume in that case that \(x_{j}\neq y_{j}\).

Proof of Proposition 2.1.: We can write \(\mu\in\tilde{\mathcal{F}}(M)\) as

\[\mu=\sum_{i=1}^{m}a_{i}\delta_{x_{i}}-\sum_{j=1}^{n}b_{j}\delta_{y_{j}},\]

with \(a_{i},b_{j}>0\), \(1\leq i\leq m\), \(1\leq j\leq n\), \(S:=\sum_{i=1}^{m}a_{i}=\sum_{j=1}^{n}b_{j}\), and \((x_{i})_{i=1}^{m},(y_{j})_{j=1}^{n}\subset M\). Then

\[\mu=\frac{1}{S}\Big{(}\sum_{j=1}^{n}b_{j}\Big{)}\sum_{i=1}^{m}a_{i}\delta_{x_{i}}-\frac{1}{S}\Big{(}\sum_{i=1}^{m}a_{i}\Big{)}\sum_{j=1}^{n}b_{j}\delta_{y_{j}}=\frac{1}{S}\sum_{i=1}^{m}\sum_{j=1}^{n}a_{i}b_{j}\big{(}\delta_{x_{i}}-\delta_{y_{j}}\big{)}.\]

**Definition 2.3**.: If \(\mu\in\tilde{\mathcal{F}}(M)\) has the molecular representation

\[\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}}),\]

with \(r_{j}>0\), \(x_{j},y_{j}\in M\), for \(j=1,2,\ldots,n\), then we define

\[t\big{(}(r_{j})_{j=1}^{n},(x_{j})_{j=1}^{n},(y_{j})_{j=1}^{n}\big{)}:=\sum_{j=1}^{n}r_{j}d(x_{j},y_{j}), \tag{3}\]

and call it the _transportation cost of that representation_.

**Interpretation:** Let us assume that transporting \(a\) units of a product from \(x\) to \(y\) costs \(a\cdot d(x,y)\). Let \(\mu=\mu^{+}-\mu^{-}\in\tilde{\mathcal{F}}(M)\), where \(\mu^{+}\) is the positive and \(\mu^{-}\) the negative part of \(\mu\).
We interpret \(\mu^{+}\) as the distribution of the surplus and \(\mu^{-}\) as the distribution of the need of the product. Then, a molecular representation

\[\mu=\sum_{j=1}^{l}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\]

can be seen as a _transportation plan_ (and will be called as such) to move \(r_{j}\) units from \(x_{j}\) to \(y_{j}\), thereby balancing the surplus with the need. For such a transportation plan, \(t\big{(}(r_{j})_{j=1}^{l},(x_{j})_{j=1}^{l},(y_{j})_{j=1}^{l}\big{)}\) represents the total transportation cost. We define

\[\|\mu\|_{\rm tc}=\inf\Big{\{}t\big{(}(r_{j})_{j=1}^{n},(x_{j})_{j=1}^{n},(y_{j})_{j=1}^{n}\big{)}:\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\Big{\}}.\]

We will soon see that \(\|\cdot\|_{\rm tc}\) is a norm. We first want to show that the infimum in the definition of \(\|\cdot\|_{\rm tc}\) is attained for every \(\mu\in\tilde{\mathcal{F}}(M)\). To do that, we need the following proposition.

**Proposition 2.4**.: _Assume \(\mu\in\tilde{\mathcal{F}}(M)\) and that_

\[\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\]

_is a molecular representation._

_Then there exists a molecular representation_

\[\mu=\sum_{j=1}^{n^{\prime}}r_{j}^{\prime}(\delta_{x_{j}^{\prime}}-\delta_{y_{j}^{\prime}})\]

_for which the sets \(\{x_{j}^{\prime}:j=1,2,\ldots,n^{\prime}\}\) and \(\{y_{j}^{\prime}:j=1,2,\ldots,n^{\prime}\}\) are disjoint and_

\[t\big{(}(r_{j}^{\prime})_{j=1}^{n^{\prime}},(x_{j}^{\prime})_{j=1}^{n^{\prime}},(y_{j}^{\prime})_{j=1}^{n^{\prime}}\big{)}\leq t\big{(}(r_{j})_{j=1}^{n},(x_{j})_{j=1}^{n},(y_{j})_{j=1}^{n}\big{)}.\]

_In this case, we call_

\[\mu=\sum_{j=1}^{n^{\prime}}r_{j}^{\prime}(\delta_{x_{j}^{\prime}}-\delta_{y_{j}^{\prime}})\]

_a disjoint molecular representation._

Proof.: Assume \(\{x_{j}:j=1,2,\ldots,n\}\) and \(\{y_{j}:j=1,2,\ldots,n\}\) are not disjoint, and without loss of generality \(x_{n-1}=y_{n}\). We write

\[\mu=\sum_{j=1}^{n-2}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})+\underbrace{r_{n-1}(\delta_{x_{n-1}}-\delta_{y_{n-1}})+r_{n}(\delta_{x_{n}}-\delta_{y_{n}})}_{\nu}.\]

Assume \(r_{n-1}\geq r_{n}\) (a similar argument applies if \(r_{n-1}<r_{n}\)) and write \(\nu\) as

\[\nu=(r_{n-1}-r_{n})(\delta_{x_{n-1}}-\delta_{y_{n-1}})+r_{n}(\delta_{x_{n}}-\delta_{y_{n-1}}),\]

and note that, using \(x_{n-1}=y_{n}\) and the triangle inequality,

\[(r_{n-1}-r_{n})d(x_{n-1},y_{n-1})+r_{n}d(x_{n},y_{n-1}) =r_{n-1}d(x_{n-1},y_{n-1})+r_{n}\big{(}d(x_{n},y_{n-1})-d(x_{n-1},y_{n-1})\big{)}\]
\[=r_{n-1}d(x_{n-1},y_{n-1})+r_{n}\big{(}d(x_{n},y_{n-1})-d(y_{n},y_{n-1})\big{)}\]
\[\leq r_{n-1}d(x_{n-1},y_{n-1})+r_{n}d(x_{n},y_{n}).\]

Thus

\[\mu=\sum_{j=1}^{n-2}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})+(r_{n-1}-r_{n})(\delta_{x_{n-1}}-\delta_{y_{n-1}})+r_{n}(\delta_{x_{n}}-\delta_{y_{n-1}})\]

is a molecular representation eliminating the common point \(x_{n-1}=y_{n}\) without increasing the transportation cost. We can therefore iterate this procedure until we arrive at a disjoint representation of \(\mu\).

**Remark**.: Let

\[\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\]

be a disjoint molecular representation of \(\mu\).
Then it follows from the _Jordan Decomposition Theorem_ (which in the simple case of finite linear combinations of Dirac measures is trivial) that

\[\mu^{+}=\sum_{j=1}^{n}r_{j}\delta_{x_{j}}\text{ is the positive part and }\mu^{-}=\sum_{j=1}^{n}r_{j}\delta_{y_{j}}\text{ is the negative part of }\mu.\]

Put

\[A^{+}=\operatorname{supp}(\mu^{+})=\{x\in M:\mu^{+}(x)>0\}\text{ and }A^{-}=\operatorname{supp}(\mu^{-})=\{x\in M:\mu^{-}(x)>0\}.\]

A disjoint molecular decomposition is then always of the following form:

\[\mu=\sum_{x\in A^{+}}\sum_{y\in A^{-}}\nu(x,y)(\delta_{x}-\delta_{y}).\]

For \(x\in A^{+}\) and \(y\in A^{-}\),

\[\mu(x)=\mu^{+}(x)=\sum_{y^{\prime}\in A^{-}}\nu(x,y^{\prime})\text{ and }\mu^{-}(y)=\sum_{x^{\prime}\in A^{+}}\nu(x^{\prime},y).\]

Thus \(\nu\) (where we put \(\nu(x,y)=0\) if \((x,y)\notin A^{+}\times A^{-}\)) can be seen as a (positive) measure on \(M^{2}\) whose _marginals_ are \(\mu^{+}\) and \(\mu^{-}\). Also, note that

\[\nu(M^{2})=\sum_{(x,y)\in M^{2}}\nu(x,y)=\sum_{x\in A^{+}}\sum_{y\in A^{-}}\nu(x,y)=\mu^{+}(M)=\mu^{-}(M).\]

Let \(t(\nu)\) be the transportation cost of the representation

\[\mu=\sum_{x\in A^{+}}\sum_{y\in A^{-}}\nu(x,y)(\delta_{x}-\delta_{y}).\]

Then

\[t(\nu)=\sum_{x,y\in M}\nu(x,y)d(x,y)=\int_{M}\int_{M}d(x,y)\,d\nu(x,y),\]

and thus

\[\|\mu\|_{\text{tc}}=\inf\left\{\sum_{(x,y)\in M^{2}}\nu(x,y)d(x,y):\begin{array}{l}\nu\text{ a positive measure on }M^{2}\text{ with}\\ \mu^{+}(x)=\sum_{y^{\prime}\in M}\nu(x,y^{\prime})\text{ for }x\in M\text{ and}\\ \mu^{-}(y)=\sum_{x^{\prime}\in M}\nu(x^{\prime},y)\text{ for }y\in M\end{array}\right\}=\inf\left\{\sum_{(x,y)\in M^{2}}\nu(x,y)d(x,y):\begin{array}{l}\nu\text{ a positive measure on }M^{2},\ \nu(M^{2})=\mu^{+}(M),\\ \operatorname{supp}(\nu)\subset\operatorname{supp}(\mu^{+})\times\operatorname{supp}(\mu^{-}),\\ \mu^{+}(x)=\sum_{y^{\prime}\in M}\nu(x,y^{\prime})\text{ for }x\in M\text{ and}\\ \mu^{-}(y)=\sum_{x^{\prime}\in M}\nu(x^{\prime},y)\text{ for }y\in M\end{array}\right\}. \tag{4}\]

From compactness, we therefore deduce:

**Corollary 2.5**.: _For \(\mu\in\tilde{\mathcal{F}}(M)\), the infimum_

\[\|\mu\|_{\rm tc}=\inf\Big{\{}t\big{(}(r_{j})_{j=1}^{n},(x_{j})_{j=1}^{n},(y_{j})_{j=1}^{n}\big{)}:\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\Big{\}}\]

_is attained._

_We call a representation of \(\mu\) optimal if its transportation cost equals \(\|\mu\|_{\rm tc}\)._

**Corollary 2.6**.: _For \(x,y\in M\),_

\[\|\delta_{x}-\delta_{y}\|_{\rm tc}=d(x,y).\]

Proof.: By the above remark, \(\|\delta_{x}-\delta_{y}\|_{\rm tc}=t(\nu)\) for some positive measure \(\nu\) on \(M\times M\) with \(\operatorname{supp}(\nu)=\{(x,y)\}\) and \(\nu(M^{2})=1\); thus the trivial representation \(\delta_{x}-\delta_{y}=1\cdot(\delta_{x}-\delta_{y})\) is optimal, and \(\|\delta_{x}-\delta_{y}\|_{\rm tc}=d(x,y)\).
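On a finite metric space, Corollary 2.5 makes \(\|\mu\|_{\rm tc}\) the value of a small linear program over the measures \(\nu\) on \(M^{2}\) with marginals \(\mu^{+}\) and \(\mu^{-}\), as in (4). The following is a minimal sketch (assuming SciPy); the 4-point metric, four points on a line, is an illustrative choice.

```python
import numpy as np
from scipy.optimize import linprog

# Transportation cost norm of mu = (d_x0 + d_x2) - (d_x1 + d_x3) on a
# 4-point line metric, as the LP of (4).  The example data is illustrative.
D = np.array([[0., 1., 2., 3.],
              [1., 0., 1., 2.],
              [2., 1., 0., 1.],
              [3., 2., 1., 0.]])           # points of M on a line
mu_plus = np.array([1., 0., 1., 0.])      # mu^+
mu_minus = np.array([0., 1., 0., 1.])     # mu^-

m = len(D)
# equality constraints: row sums of nu = mu^+, column sums = mu^-
A_eq = np.zeros((2 * m, m * m))
for i in range(m):
    A_eq[i, i * m:(i + 1) * m] = 1.0       # sum_y nu(x_i, y)
    A_eq[m + i, i::m] = 1.0                # sum_x nu(x, y_i)
b_eq = np.concatenate([mu_plus, mu_minus])

res = linprog(D.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.fun)                             # = 2.0 = d(x0,x1) + d(x2,x3)
```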
**Theorem 2.7** (Duality Theorem of Kantorovich [14]).: _For \(\mu\in\tilde{\mathcal{F}}(M)\) it follows that \(\|\mu\|_{\mathcal{F}}=\|\mu\|_{\rm tc}\)._

Proof.: It is easy to see that \(\|\cdot\|_{\rm tc}\) is a seminorm on \(\tilde{\mathcal{F}}(M)\). By Corollary 2.6, \(\|\delta_{x}-\delta_{y}\|_{\rm tc}=d(x,y)\), for \(x,y\in M\).

**Claim.** For every norm \(\|\cdot\|\) on \(\tilde{\mathcal{F}}(M)\) with \(\|\delta_{x}-\delta_{y}\|=d(x,y)\), for all \(x,y\in M\), it follows that \(\|\mu\|_{\rm tc}\geq\|\mu\|\) for all \(\mu\in\tilde{\mathcal{F}}(M)\).

Indeed, let \(\mu=\sum_{x,y}\nu(x,y)(\delta_{x}-\delta_{y})\in\tilde{\mathcal{F}}(M)\) be a molecular representation. Then

\[t(\nu)=\sum_{x,y\in M}\nu(x,y)d(x,y)\geq\Big{\|}\sum_{x,y}\nu(x,y)(\delta_{x}-\delta_{y})\Big{\|}=\|\mu\|.\]

So the claim follows by taking the infimum over all representations.

Since \(\|\cdot\|_{\mathcal{F}}\) is such a norm, it follows that \(\|\cdot\|_{\mathcal{F}}\leq\|\cdot\|_{\rm tc}\), and in particular that \(\|\cdot\|_{\rm tc}\) is also a norm. Let \(X\) be the completion of \(\tilde{\mathcal{F}}(M)\) with respect to \(\|\cdot\|_{\rm tc}\). The map \(L:M\to X\), \(x\mapsto\delta_{x}\), is an isometric embedding, which by the extension property of \(\mathcal{F}(M)\) (Proposition 1.10) can be extended to a linear operator \(\bar{L}:\mathcal{F}(M)\to X\) with \(\|\bar{L}\|_{\mathcal{F}(M)\to X}=1\). This means that \(\|\mu\|_{\mathcal{F}}\geq\|\mu\|_{\rm tc}\), and thus \(\|\mu\|_{\mathcal{F}}=\|\mu\|_{\rm tc}\), for \(\mu\in\tilde{\mathcal{F}}(M)\). Therefore \(X=\mathcal{F}(M)\), and the norms \(\|\cdot\|_{\rm tc}\) and \(\|\cdot\|_{\mathcal{F}}\) coincide.

**Corollary 2.8**.: _Let \(\mu\in\tilde{\mathcal{F}}(M)\). Then a representation \(\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\) is optimal if and only if there is an \(f\in{\rm Lip}_{0}(M)\), with \(\|f\|_{\rm Lip}=1\), for which_

\[f(x_{j})-f(y_{j})=d(x_{j},y_{j}),\mbox{ for }j=1,2,\ldots,n. \tag{5}\]

Proof.: If the representation \(\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\), with \(r_{j}>0\) and \(x_{j},y_{j}\in M\), for \(j=1,2,\ldots,n\), is optimal, then it follows from the Hahn-Banach theorem and Theorem 2.7 that there is an \(f\in{\rm Lip}_{0}(M)\), \(\|f\|_{\rm Lip}=1\), for which

\[\int f\,d\mu=\sum_{j=1}^{n}r_{j}\big{(}f(x_{j})-f(y_{j})\big{)}=\|\mu\|_{\rm tc}=\sum_{j=1}^{n}r_{j}d(x_{j},y_{j}).\]

Since \(\|f\|_{\mathrm{Lip}}=1\), it follows that \(f(x_{j})-f(y_{j})\leq d(x_{j},y_{j})\), and thus \(f(x_{j})-f(y_{j})=d(x_{j},y_{j})\), for all \(j=1,2,\ldots,n\).

Conversely, if (5) holds for some \(f\in\mathrm{Lip}_{0}(M)\), with \(\|f\|_{\mathrm{Lip}}=1\), then by Theorem 2.7

\[\int f\,d\mu=\sum_{j=1}^{n}r_{j}\big{(}f(x_{j})-f(y_{j})\big{)}\leq\|\mu\|_{\mathcal{F}}=\|\mu\|_{\rm tc}. \tag{6}\]

On the other hand, by the definition of \(\|\mu\|_{\rm tc}\), it follows that

\[\|\mu\|_{\rm tc}\leq\sum_{j=1}^{n}r_{j}d(x_{j},y_{j})=\sum_{j=1}^{n}r_{j}\big{(}f(x_{j})-f(y_{j})\big{)},\]

and thus the inequality in (6) is an equality, and the representation is optimal.

### The Extreme Points of \(B_{\mathcal{F}(M)}\)

**Definition 2.9**.: Let \(\mu\in\tilde{\mathcal{F}}(M)\) and let

\[\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\]

be a molecular representation with \(r_{j}>0\), \(j=1,2,\ldots,n\). We call a sequence \((j_{i})_{i=1}^{l}\) of pairwise distinct elements of \(\{1,2,\ldots,n\}\) a _path_ for the above representation if \(x_{j_{i+1}}=y_{j_{i}}\), for \(i=1,2,\ldots,l-1\), and we call it a _circle_ if, moreover, \(y_{j_{l}}=x_{j_{1}}\).

Not every optimal representation needs to be disjoint. Nevertheless, if it is not disjoint, something special has to happen.

**Proposition 2.10**.: _If for \(\mu\in\tilde{\mathcal{F}}(M)\) the representation_

\[\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}}) \tag{7}\]

_is optimal, then for any path \((j_{i})_{i=1}^{l}\) it follows that_

\[d(x_{j_{1}},y_{j_{l}})=\sum_{i=1}^{l}d(x_{j_{i}},y_{j_{i}})=d(x_{j_{1}},y_{j_{1}})+\sum_{i=2}^{l}d(y_{j_{i-1}},y_{j_{i}}).\]

_In particular, it does not contain a circle._

Intuitively, the claim is clear.

Proof.: Without loss of generality (after reordering), we can assume that \(j_{i}=n-l+i\), for \(i=1,2,\ldots,l\). We put \(\varepsilon=\min\{r_{j}:n-l<j\leq n\}\).
Then we write

\[\mu=\sum_{j=1}^{n-l}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})+\sum_{j=n-l+1}^{n}(r_{j}-\varepsilon)(\delta_{x_{j}}-\delta_{y_{j}})+\varepsilon\underbrace{\sum_{j=n-l+1}^{n}(\delta_{x_{j}}-\delta_{y_{j}})}_{\equiv\nu}=\sum_{j=1}^{n-l}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})+\sum_{j=n-l+1}^{n}(r_{j}-\varepsilon)(\delta_{x_{j}}-\delta_{y_{j}})+\varepsilon(\delta_{x_{j_{1}}}-\delta_{y_{j_{l}}}),\]

since \(\nu\) telescopes along the path to \(\delta_{x_{j_{1}}}-\delta_{y_{j_{l}}}\). The new representation has transportation cost smaller by \(\varepsilon\big{(}\sum_{i=1}^{l}d(x_{j_{i}},y_{j_{i}})-d(x_{j_{1}},y_{j_{l}})\big{)}\), so from the optimality of the representation (7) we obtain \(d(x_{j_{1}},y_{j_{l}})\geq\sum_{i=1}^{l}d(x_{j_{i}},y_{j_{i}})\), while the triangle inequality yields the reverse estimate; hence

\[d(x_{j_{1}},y_{j_{l}})=\sum_{i=1}^{l}d(x_{j_{i}},y_{j_{i}}).\]

We deduce:

**Corollary 2.11**.: _Assume \(x,y\in M\), \(x\neq y\), and there is no \(z\in M\setminus\{x,y\}\) for which \(d(x,y)=d(x,z)+d(z,y)\). Then every optimal representation of \(\delta_{x}-\delta_{y}\) is disjoint, i.e., by the remark after Proposition 2.4, it is of the form_

\[\delta_{x}-\delta_{y}=\sum_{j=1}^{n}r_{j}(\delta_{x}-\delta_{y}),\ \text{with}\ \sum_{j=1}^{n}r_{j}=1.\]

Proof.: Note that a representation

\[\delta_{x}-\delta_{y}=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\]

which is not disjoint would have a path \((j_{i})_{i=1}^{l}\), \(l\geq 2\), with \(x=x_{j_{1}}\) and \(y=y_{j_{l}}\); but this would, by the assumption on \(x\) and \(y\) and by Proposition 2.10, mean that this representation is not optimal.

The following result was shown in the general case (_i.e.,_ in the case that \(M\) is not finite) by Aliaga and Pernecká in [1]. Recall that for a Banach space \(E\) and a subset \(C\subset E\), an element \(x\in C\) is called an _extreme point of \(C\)_ if \(x\) cannot be written as \(x=\alpha y+(1-\alpha)z\), with \(0<\alpha<1\) and \(y\neq z\), \(z,y\in C\setminus\{x\}\). In other words, if

\[x=\sum_{j=1}^{n}\alpha_{j}x_{j},\ 0<\alpha_{j}<1,\text{ and }x_{j}\in C,\ j=1,2,\ldots,n,\quad\text{with}\ \sum_{j=1}^{n}\alpha_{j}=1,\]

then \(x_{1}=x_{2}=\ldots=x_{n}=x\).

Recall the Krein-Milman Theorem: every convex and compact subset \(C\) of \(E\) is the closed convex hull of its extreme points.

**Theorem 2.12**.: _Let \((M,d)\) be a finite metric space. Then \(\mu\in B_{\mathcal{F}(M)}\) is an extreme point if and only if \(\mu\) is of the form_

\[\mu=\frac{\delta_{x}-\delta_{y}}{d(x,y)},\text{ with }x\neq y\text{, and there is no }z\in M\setminus\{x,y\}\text{ for which }d(x,y)=d(x,z)+d(z,y).\]

Proof.: (We are using that \(\mathcal{F}(M)=\tilde{\mathcal{F}}(M)\), since \(M\) is finite.) Assume that \(\mu\in B_{\mathcal{F}(M)}\) is an extreme point, and let

\[\mu=\sum_{j=1}^{l}a_{j}\frac{\delta_{x_{j}}-\delta_{y_{j}}}{d(x_{j},y_{j})}\]

be its optimal representation. Thus it follows that

\[1=\|\mu\|_{\mathcal{F}}=\sum_{j=1}^{l}a_{j},\]

so that \(\mu\) is a convex combination of the elements \((\delta_{x_{j}}-\delta_{y_{j}})/d(x_{j},y_{j})\in B_{\mathcal{F}(M)}\). Since \(\mu\) is an extreme point, it follows that \(x_{i}=x_{j}=x\) and \(y_{i}=y_{j}=y\) for all \(i,j\in\{1,2,\ldots,l\}\), and thus \(\mu=\frac{\delta_{x}-\delta_{y}}{d(x,y)}\). There cannot be a \(z\in M\setminus\{x,y\}\) so that \(d(x,y)=d(x,z)+d(z,y)\), because otherwise we could write

\[\mu=\frac{d(x,z)}{d(x,y)}\frac{\delta_{x}-\delta_{z}}{d(x,z)}+\frac{d(y,z)}{d(x,y)}\frac{\delta_{z}-\delta_{y}}{d(y,z)},\]

and note that this is also an optimal representation, exhibiting \(\mu\) as a proper convex combination of two distinct elements of \(B_{\mathcal{F}(M)}\), contradicting extremality.
Conversely, assume that \(x\neq y\) are in \(M\), that there is no \(z\in M\setminus\{x,y\}\) for which \(d(x,y)=d(x,z)+d(z,y)\), and assume we write

\[\frac{\delta_{x}-\delta_{y}}{d(x,y)}=\alpha\mu+(1-\alpha)\nu,\text{ with }\mu,\nu\in B_{\mathcal{F}(M)},\text{ and }0<\alpha<1.\]

Write the optimal decompositions of \(\mu\) and \(\nu\) as

\[\mu=\sum_{j=1}^{l}a_{j}\frac{\delta_{x_{j}}-\delta_{y_{j}}}{d(x_{j},y_{j})}\text{ and }\nu=\sum_{j=l+1}^{m+l}a_{j}\frac{\delta_{x_{j}}-\delta_{y_{j}}}{d(x_{j},y_{j})},\]

with \(a_{j}\geq 0\), \(j=1,2,\ldots,l+m\). By Corollary 2.11,

\[1=\Big\|\frac{\delta_{x}-\delta_{y}}{d(x,y)}\Big\|_{\mathcal{F}}\leq\alpha\|\mu\|_{\mathcal{F}}+(1-\alpha)\|\nu\|_{\mathcal{F}}=\alpha\sum_{j=1}^{l}a_{j}+(1-\alpha)\sum_{j=l+1}^{l+m}a_{j}=1.\]

This implies that

\[\frac{\delta_{x}-\delta_{y}}{d(x,y)}=\sum_{j=1}^{m+l}b_{j}\frac{\delta_{x_{j}}-\delta_{y_{j}}}{d(x_{j},y_{j})},\]

with \(b_{j}=\alpha a_{j}\), if \(j=1,2,\ldots,l\), and \(b_{j}=(1-\alpha)a_{j}\), if \(j=l+1,l+2,\ldots,l+m\), is an optimal representation of \(\frac{\delta_{x}-\delta_{y}}{d(x,y)}\) (its cost is \(\sum_{j=1}^{m+l}b_{j}=1\)), and thus, from Corollary 2.11, it follows that \(x_{j}=x\) and \(y_{j}=y\), for \(j=1,2,\ldots,m+l\). Hence \(\mu\) and \(\nu\) are nonnegative multiples of \(\frac{\delta_{x}-\delta_{y}}{d(x,y)}\), and since \(\alpha\|\mu\|_{\mathcal{F}}+(1-\alpha)\|\nu\|_{\mathcal{F}}=1\) forces \(\|\mu\|_{\mathcal{F}}=\|\nu\|_{\mathcal{F}}=1\), we get \(\mu=\nu=\frac{\delta_{x}-\delta_{y}}{d(x,y)}\). This implies that \(\frac{\delta_{x}-\delta_{y}}{d(x,y)}\) is an extreme point of \(B_{\mathcal{F}(M)}\).

### Some Notational Remarks

In the literature there are several names for the space \(\mathcal{F}(M)\): other than the _Lipschitz free space over \(M\)_, \(\mathcal{F}(M)\) is also called

* _Transportation Cost Space_,
* _Wasserstein Space_ or, more precisely, _Wasserstein 1-Space_,
* _Arens-Eells Space_ (denoted by \(\AE(M)\)),
* _Earthmover Space_.

Let us introduce some more notation. Denote the set of measures on \(M\) with finite support by \(\mathcal{M}(M)\), _i.e.,_

\[\mathcal{M}(M)=\Big\{\sum_{j=1}^{n}a_{j}\delta_{x_{j}}:n\in\mathbb{N},a_{j}\in\mathbb{R},x_{j}\in M,\text{ for }j=1,2,\ldots,n\Big\}\]

(actually \(\mathcal{M}(M)=\tilde{\mathcal{F}}(M)\)). Let \(\mathcal{M}^{+}(M)\) denote the positive measures and \(\mathcal{P}(M)\) the probabilities on \(M\) with finite support.

For \(\sigma,\tau\in\mathcal{P}(M)\) define the _Wasserstein distance of \(\sigma\) and \(\tau\)_ by

\[d_{\text{Wa}}(\sigma,\tau)=\|\sigma-\tau\|_{\mathcal{F}}=\|\sigma-\tau\|_{\text{tc}}.\]

Thus, if we let \(\mu=\sigma-\tau\), it follows by the Remark after Proposition 2.4 that

\[d_{\text{Wa}}(\sigma,\tau)=\inf\left\{\sum_{(x,y)\in M^{2}}\nu(x,y)d(x,y):\nu\in\mathcal{M}^{+}(M^{2}),\ {\mu^{+}(x)=\sum_{y^{\prime}\in M}\nu(x,y^{\prime})\text{ for }x\in M\text{ and }\atop\mu^{-}(y)=\sum_{x^{\prime}\in M}\nu(x^{\prime},y)\text{ for }y\in M}\right\}.\]

We claim that

\[d_{\text{Wa}}(\sigma,\tau)=\inf\left\{\sum_{(x,y)\in M^{2}}\pi(x,y)d(x,y):{\pi\in\mathcal{P}(M^{2}),\atop{\sigma(x)=\sum_{y^{\prime}\in M}\pi(x,y^{\prime})\text{ for }x\in M\text{ and }\atop\tau(y)=\sum_{x^{\prime}\in M}\pi(x^{\prime},y)\text{ for }y\in M}}\right\}\]

(which is the usual definition of the Wasserstein distance). Indeed, \(\sigma-\tau=\mu=\mu^{+}-\mu^{-}\) and thus, for every \(x\in M\),

\[\rho(x):=\sigma(x)-\mu^{+}(x)=\tau(x)-\mu^{-}(x)\]

(note that \(\rho(x)\geq 0\), since \(\mu^{+}\leq\sigma\)). Let \(\nu\in\mathcal{M}^{+}(M^{2})\) be such that

\[\mu^{+}(x)=\sum_{y^{\prime}\in M}\nu(x,y^{\prime}),\text{ for }x\in M\text{ and }\mu^{-}(y)=\sum_{x^{\prime}\in M}\nu(x^{\prime},y),\text{ for }y\in M. \tag{8}\]

Then define \(\pi=\nu+\sum_{x\in M}\rho(x)\delta_{(x,x)}\) and note that

\[\sigma(x)=\sum_{y^{\prime}\in M}\pi(x,y^{\prime}),\text{ for }x\in M\text{ and }\tau(y)=\sum_{x^{\prime}\in M}\pi(x^{\prime},y),\text{ for }y\in M,\]

and thus \(\pi\in\mathcal{P}(M^{2})\).
Moreover, since \(d(x,x)=0\) for all \(x\in M\), we have

\[\sum_{(x,y)\in M^{2}}\nu(x,y)d(x,y)=\sum_{(x,y)\in M^{2}}\pi(x,y)d(x,y). \tag{9}\]

Similarly, if \(\pi\in\mathcal{P}(M^{2})\) has marginals \(\sigma\) and \(\tau\), then \(\nu=\pi-\sum_{x\in M}\pi(x,x)\delta_{(x,x)}\) satisfies (8).

We define for \(\sigma,\tau\in\mathcal{P}(M)\) the _Transition Probabilities from \(\sigma\) to \(\tau\)_

\[\mathcal{P}(\sigma,\tau)=\Big\{\pi\in\mathcal{P}(M^{2}):\sigma(x)=\sum_{y^{\prime}\in M}\pi(x,y^{\prime}),\tau(y)=\sum_{x^{\prime}\in M}\pi(x^{\prime},y),\text{ for }x,y\in M\Big\}\]

and can rewrite \(d_{\text{Wa}}(\sigma,\tau)\) as

\[d_{\text{Wa}}(\sigma,\tau)=\min\Big\{\sum_{x,y\in M}\pi(x,y)d(x,y):\pi\in\mathcal{P}(\sigma,\tau)\Big\}. \tag{10}\]

**Definition 2.13**.: We call \(\mathcal{P}(M)\) together with the metric \(d_{\text{Wa}}(\cdot,\cdot)\) the _Wasserstein space_, and denote it by \(\text{Wa}(M)\).

More generally, if \(\mu_{1},\mu_{2}\in\mathcal{M}^{+}(M)\), with \(\mu_{1}(M)=\mu_{2}(M)\), we put

\[\mathcal{M}^{+}(\mu_{1},\mu_{2})=\Big\{\nu\in\mathcal{M}^{+}(M^{2}):\mu_{1}(x)=\sum_{y^{\prime}\in M}\nu(x,y^{\prime}),\mu_{2}(y)=\sum_{x^{\prime}\in M}\nu(x^{\prime},y),\text{ for }x,y\in M\Big\}\]

and conclude that

\[\|\mu_{1}-\mu_{2}\|_{\mathcal{F}}=\min\Big\{\underbrace{\sum_{x,y\in M}\nu(x,y)d(x,y)}_{=\int_{M^{2}}d(x,y)\,d\nu(x,y)}:\nu\in\mathcal{M}^{+}(\mu_{1},\mu_{2})\Big\}. \tag{11}\]

### Uniform Distributions

For \(A\subset M\) we denote the uniform distribution on \(A\) by \(\mu_{A}\), _i.e.,_ \(\mu_{A}(x)=\frac{1}{|A|}\chi_{A}(x)\).

**Proposition 2.14**.: _If \(A,B\subset M\) with \(n=|A|=|B|\), then there exists a bijection \(f:A\to B\) so that_

\[d_{\text{Wa}}(\mu_{A},\mu_{B})=\frac{1}{n}\sum_{x\in A}d(x,f(x)).\]

_In other words, the representation_

\[\mu_{A}-\mu_{B}=\frac{1}{n}\sum_{x\in A}\big(\delta_{x}-\delta_{f(x)}\big)\]

_is optimal._

The proof is a corollary of the following theorem of Birkhoff (see the appendix for a proof).

**Theorem 2.15**.: _(Birkhoff) Assume \(n\in\mathbb{N}\) and that \(A=(a_{i,j})_{i,j=1}^{n}\) is a doubly stochastic matrix, i.e.,_

\[0\leq a_{i,j}\leq 1\text{ for all }1\leq i,j\leq n\text{,}\]

\[\sum_{j=1}^{n}a_{i,j}=1\text{ for }i=1,2,\ldots,n\text{, and }\sum_{i=1}^{n}a_{i,j}=1\text{ for }j=1,2,\ldots,n\text{.}\]

_Then \(A\) is a convex combination of permutation matrices, i.e., matrices which have in each row and each column exactly one entry equal to \(1\) and which vanish elsewhere._

Proof of Proposition 2.14.: Let \(A=\{x_{1},x_{2},\ldots,x_{n}\}\) and \(B=\{y_{1},y_{2},\ldots,y_{n}\}\). We note that for every \(\pi\in\mathcal{P}(\mu_{A},\mu_{B})\) the matrix \(M=(n\pi(x_{i},y_{j}):1\leq i,j\leq n)\) is doubly stochastic (since \(\sum_{x\in A}\pi(x,y)=\mu_{B}(y)=\frac{1}{|B|}=\frac{1}{|A|}=\mu_{A}(x)=\sum_{y\in B}\pi(x,y)\)). Thus, by (10),

\[d_{\text{Wa}}(\mu_{A},\mu_{B})=\frac{1}{n}\min\Big\{\sum_{i,j=1}^{n}M_{i,j}d(x_{i},y_{j}):M\in\text{DS}_{n}\Big\}.\]

Since the map

\[\text{DS}_{n}\to[0,\infty),\quad M\mapsto\sum_{i,j=1}^{n}M_{i,j}d(x_{i},y_{j})\]

is linear, it achieves its minimum at an extreme point; our claim follows from Theorem 2.15.

## 3. Embeddings of Transportation cost spaces over trees into \(L_{1}\)

### Distortion

Let \((M,d)\) and \((M^{\prime},d^{\prime})\) be two metric spaces.
For \(f:M\to M^{\prime}\) the _distortion of \(f\)_ is defined by

\[\operatorname{dist}(f)=\underbrace{\sup_{x\neq y,x,y\in M}\frac{d^{\prime}(f(x),f(y))}{d(x,y)}}_{\operatorname{Lip}(f)}\cdot\underbrace{\sup_{x\neq y,x,y\in M}\frac{d(x,y)}{d^{\prime}(f(x),f(y))}}_{\operatorname{Lip}(f^{-1})},\]

where \(f^{-1}\) is defined on \(f(M)\) if \(f\) is injective, and otherwise \(\operatorname{dist}(f):=\infty\). We define the \(M^{\prime}\)_-distortion of \(M\)_ by

\[c_{M^{\prime}}(M)=\inf\big\{\operatorname{dist}(f)\,|\,f:M\to M^{\prime}\big\}.\]

Let \(\mathcal{M}^{\prime}\) be a family of metric spaces. We define the \(\mathcal{M}^{\prime}\)_-distortion of \(M\)_ by

\[c_{\mathcal{M}^{\prime}}(M)=\inf_{M^{\prime}\in\mathcal{M}^{\prime}}c_{M^{\prime}}(M).\]

The main question we want to address:

**Problem**.: Let \((M,d)\) be a finite metric space. Find upper and lower estimates of

* \(c_{(\ell_{1}^{n}:n\in\mathbb{N})}(\mathcal{F}(M))\),
* \(c_{(\ell_{1}^{n}:n\in\mathbb{N})}(\operatorname{Wa}(M))\),
* \(\inf\big\{\|T\|_{\mathcal{F}(M)\to L_{1}}\cdot\|T^{-1}\|_{T(\mathcal{F}(M))\to\mathcal{F}(M)}:T:\mathcal{F}(M)\to L_{1}\text{ linear and bounded}\big\}\).

Our families of metric spaces are usually closed under scaling, _i.e.,_ if \((M^{\prime},d^{\prime})\in\mathcal{M}^{\prime}\) and \(\lambda>0\), then also \((M^{\prime},\lambda\cdot d^{\prime})\in\mathcal{M}^{\prime}\). In that case

\[c_{\mathcal{M}^{\prime}}(M)=\inf\big\{\|f\|_{\operatorname{Lip}}:M^{\prime}\in\mathcal{M}^{\prime},\ f:M\to M^{\prime}\text{ is expansive}\big\},\]

where \(f:(M,d)\to(M^{\prime},d^{\prime})\) is called _expansive_ if

\[d^{\prime}(f(x),f(z))\geq d(x,z),\text{ for }x,z\in M.\]

### Lipschitz embeddings of \(\operatorname{Wa}(M)\) into \(L_{1}\) imply linear embeddings of \(\mathcal{F}(M)\) into \(\ell_{1}^{N}\)

Throughout this subsection, we assume that \((M,d)\) is a finite metric space and let \(n=|M|\).

**Theorem 3.1**.: _Assume that_

\[F:\mathcal{P}(M)\to L_{1}[0,1]\]

_has the property that for some \(L\geq 1\)_

\[d_{\operatorname{Wa}}(\sigma,\tau)\leq\|F(\sigma)-F(\tau)\|_{1}\leq Ld_{\operatorname{Wa}}(\sigma,\tau)\text{ for }\sigma,\tau\in\mathcal{P}(M).\]

_Then, there exists a Lipschitz map_

\[H:\mathcal{F}(M)\to L_{1}[0,1]\]

_for which_

\[\|\mu-\nu\|_{\mathcal{F}}\leq\|H(\mu)-H(\nu)\|_{1}\leq 3L\|\mu-\nu\|_{\mathcal{F}},\text{ for }\mu,\nu\in\mathcal{F}(M).\]

**Remark**.: Using more sophisticated tools, one can obtain a bounded linear operator

\[T:\mathcal{F}(M)\to L_{1}[0,1]\]

for which

\[\|\mu\|_{\mathcal{F}}\leq\|T(\mu)\|_{1}\leq L\|\mu\|_{\mathcal{F}}.\]

Here, we are following a more elementary but also more technical proof by Naor and Schechtman [17].

Proof.: After scaling we can assume that \(d(u,v)\geq 1\) for all \(u\neq v\) in \(M\), and secondly (after replacing \(F\) by \(F-F(\mu_{0})\)) we can assume that the image of the uniform distribution, \(\mu_{0}=\frac{1}{n}\sum_{x\in M}\delta_{x}\), under \(F\) vanishes.

Note that for \(\mu\in\mathcal{F}(M)\)

\[\|\mu\|_{\infty}=\max_{x\in M}|\mu(x)|\leq\|\mu\|_{\mathcal{F}}.\]

Indeed, if \(\nu\in\mathcal{M}^{+}(\mu^{+},\mu^{-})\), then

\[\int_{M\times M}d(x,y)\,d\nu(x,y)\geq\int_{M\times M}1\,d\nu(x,y)=\int_{M}1\,d\mu^{+}(x)=\mu^{+}(M)=\mu^{-}(M)\geq\|\mu\|_{\infty},\]

and thus, taking the infimum over all \(\nu\in\mathcal{M}^{+}(\mu^{+},\mu^{-})\), we deduce our claim.
We put

\[B=\{\mu\in\mathcal{F}(M):\|\mu\|_{\infty}\leq 1\}\]

and define

\[\Psi:B\to\mathcal{P}(M),\quad\mu\mapsto\sum_{x\in M}\frac{1+\mu(x)}{n}\delta_{x}\in\mathcal{P}(M).\]

For any \(f\in\operatorname{Lip}_{0}(M)\) and \(\mu,\nu\in B\),

\[\int_{M}f(x)\,d(n\Psi(\mu)-n\Psi(\nu))=\sum_{x\in M}f(x)(\mu(x)-\nu(x))=\int_{M}f(x)\,d(\mu-\nu),\]

and thus

\[\|\mu-\nu\|_{\mathcal{F}(M)}=n\|\Psi(\mu)-\Psi(\nu)\|_{\mathcal{F}(M)}=nd_{\operatorname{Wa}}\big(\Psi(\mu),\Psi(\nu)\big).\]

Then define

\[h:B\to L_{1}[0,1],\quad\mu\mapsto nF\circ\Psi(\mu).\]

Then \(h(0)=0\) (because the image of the uniform distribution under \(F\) was assumed to vanish) and for \(\mu,\nu\in B\)

\[\|h(\mu)-h(\nu)\|_{1}=n\Big\|F\Big(\sum_{x\in M}\frac{1}{n}(1+\mu(x))\delta_{x}\Big)-F\Big(\sum_{x\in M}\frac{1}{n}(1+\nu(x))\delta_{x}\Big)\Big\|_{1}\geq n\Big\|\sum_{x\in M}\frac{1}{n}(1+\mu(x))\delta_{x}-\sum_{x\in M}\frac{1}{n}(1+\nu(x))\delta_{x}\Big\|_{\mathcal{F}}=\|\mu-\nu\|_{\mathcal{F}}\quad(\text{expansiveness}), \tag{12}\]

and

\[\|h(\mu)-h(\nu)\|_{1}=n\Big\|F\Big(\sum_{x\in M}\frac{1}{n}(1+\mu(x))\delta_{x}\Big)-F\Big(\sum_{x\in M}\frac{1}{n}(1+\nu(x))\delta_{x}\Big)\Big\|_{1} \tag{13}\]

\[\leq nLd_{\text{Wa}}\Big(\sum_{x\in M}\frac{1}{n}(1+\mu(x))\delta_{x},\sum_{x\in M}\frac{1}{n}(1+\nu(x))\delta_{x}\Big)=L\|\mu-\nu\|_{\mathcal{F}}. \tag{14}\]

We define \(\chi(f):[0,1]\times\mathbb{R}\to\{-1,0,1\}\), for \(f\in L_{1}[0,1]\), by

\[\chi(f)(s,t)=\text{sign}\big(f(s)\big)1_{[0,|f(s)|]}(t)=\begin{cases}1&\text{if $f(s)>0$ and $0\leq t\leq f(s)$},\\ -1&\text{if $f(s)<0$ and $0\leq t\leq-f(s)$},\\ 0&\text{else}.\end{cases}\]

Note that (by Fubini's Theorem)

\[\|\chi(f)-\chi(g)\|_{L_{1}([0,1]\times\mathbb{R})}=\|f-g\|_{L_{1}[0,1]},\text{ for }f,g\in L_{1}[0,1]. \tag{15}\]
Indeed, note that \(f(s)\geq g(s)\iff\chi(f)(s,t)\geq\chi(g)(s,t)\) for all \(t\), and that, since \(\chi(f)\) and \(\chi(g)\) are \(\{-1,0,1\}\)-valued, we have

\[\int_{0}^{1}\int_{-\infty}^{\infty}|\chi(f)(s,t)-\chi(g)(s,t)|\,dt\,ds=\int_{\{s:f(s)>g(s)\}}\int_{-\infty}^{+\infty}\chi(f)(s,t)-\chi(g)(s,t)\,dt\,ds+\int_{\{s:f(s)<g(s)\}}\int_{-\infty}^{+\infty}\chi(g)(s,t)-\chi(f)(s,t)\,dt\,ds\]

\[=\int_{\{s:f(s)>g(s)\}}f(s)-g(s)\,ds+\int_{\{s:f(s)<g(s)\}}g(s)-f(s)\,ds=\|f-g\|_{L_{1}[0,1]}.\]

Now we put (recall that \(\mu/\|\mu\|_{\mathcal{F}}\in B\), for \(\mu\in\mathcal{F}(M)\))

\[H:\mathcal{F}(M)\to L_{1}([0,1]\times\mathbb{R}),\quad\mu\mapsto\|\mu\|_{\mathcal{F}}\cdot\chi\circ h\Big(\frac{\mu}{\|\mu\|_{\mathcal{F}}}\Big)\text{ if }\mu\neq 0,\text{ and }H(0)=0.\]

It follows for \(\mu,\nu\in\mathcal{F}(M)\), with \(\|\mu\|_{\mathcal{F}}\geq\|\nu\|_{\mathcal{F}}\) (w.l.o.g.), that pointwise on \([0,1]\times\mathbb{R}\)

\[\big|H(\mu)-H(\nu)\big|=\Big|\|\nu\|_{\mathcal{F}}\Big(\chi\circ h\Big(\frac{\mu}{\|\mu\|_{\mathcal{F}}}\Big)-\chi\circ h\Big(\frac{\nu}{\|\nu\|_{\mathcal{F}}}\Big)\Big)+(\|\mu\|_{\mathcal{F}}-\|\nu\|_{\mathcal{F}})\cdot\chi\circ h\Big(\frac{\mu}{\|\mu\|_{\mathcal{F}}}\Big)\Big|\]

\[=\|\nu\|_{\mathcal{F}}\Big|\chi\circ h\Big(\frac{\mu}{\|\mu\|_{\mathcal{F}}}\Big)-\chi\circ h\Big(\frac{\nu}{\|\nu\|_{\mathcal{F}}}\Big)\Big|+(\|\mu\|_{\mathcal{F}}-\|\nu\|_{\mathcal{F}})\Big|\chi\circ h\Big(\frac{\mu}{\|\mu\|_{\mathcal{F}}}\Big)\Big|\]

(check the possible values \(\chi\circ h(\mu/\|\mu\|_{\mathcal{F}}),\chi\circ h(\nu/\|\nu\|_{\mathcal{F}})\in\{-1,0,1\}\)). Thus, integrating and applying (15) (also with \(g=0\)),

\[\big\|H(\mu)-H(\nu)\big\|_{L_{1}([0,1]\times\mathbb{R})}=\|\nu\|_{\mathcal{F}}\Big\|h\Big(\frac{\mu}{\|\mu\|_{\mathcal{F}}}\Big)-h\Big(\frac{\nu}{\|\nu\|_{\mathcal{F}}}\Big)\Big\|_{1}+(\|\mu\|_{\mathcal{F}}-\|\nu\|_{\mathcal{F}})\Big\|h\Big(\frac{\mu}{\|\mu\|_{\mathcal{F}}}\Big)\Big\|_{1}\]

\[\geq\|\nu\|_{\mathcal{F}}\Big\|\frac{\mu}{\|\mu\|_{\mathcal{F}}}-\frac{\nu}{\|\nu\|_{\mathcal{F}}}\Big\|_{\mathcal{F}}+\|\mu\|_{\mathcal{F}}-\|\nu\|_{\mathcal{F}}\quad(\text{by (12), since }h(0)=0)\]

\[\geq\|\mu-\nu\|_{\mathcal{F}}-\Big\|\mu\Big(1-\frac{\|\nu\|_{\mathcal{F}}}{\|\mu\|_{\mathcal{F}}}\Big)\Big\|_{\mathcal{F}}+\|\mu\|_{\mathcal{F}}-\|\nu\|_{\mathcal{F}}=\|\mu-\nu\|_{\mathcal{F}}.\]
We also get

\[\big\|H(\mu)-H(\nu)\big\|_{L_{1}([0,1]\times\mathbb{R})}=\|\nu\|_{\mathcal{F}}\Big\|h\Big(\frac{\mu}{\|\mu\|_{\mathcal{F}}}\Big)-h\Big(\frac{\nu}{\|\nu\|_{\mathcal{F}}}\Big)\Big\|_{1}+(\|\mu\|_{\mathcal{F}}-\|\nu\|_{\mathcal{F}})\Big\|h\Big(\frac{\mu}{\|\mu\|_{\mathcal{F}}}\Big)\Big\|_{1}\]

\[\leq L\|\nu\|_{\mathcal{F}}\Big\|\frac{\mu}{\|\mu\|_{\mathcal{F}}}-\frac{\nu}{\|\nu\|_{\mathcal{F}}}\Big\|_{\mathcal{F}}+L(\|\mu\|_{\mathcal{F}}-\|\nu\|_{\mathcal{F}})\quad(\text{by (14)})\]

\[=L\Big\|\frac{\|\nu\|_{\mathcal{F}}}{\|\mu\|_{\mathcal{F}}}\mu-\nu\Big\|_{\mathcal{F}}+L(\|\mu\|_{\mathcal{F}}-\|\nu\|_{\mathcal{F}})\leq L\Big(\Big\|\frac{\|\nu\|_{\mathcal{F}}}{\|\mu\|_{\mathcal{F}}}\mu-\mu\Big\|_{\mathcal{F}}+\|\mu-\nu\|_{\mathcal{F}}\Big)+L(\|\mu\|_{\mathcal{F}}-\|\nu\|_{\mathcal{F}})\]

\[=2L(\|\mu\|_{\mathcal{F}}-\|\nu\|_{\mathcal{F}})+L\|\mu-\nu\|_{\mathcal{F}}\leq 3L\|\mu-\nu\|_{\mathcal{F}}.\]

Thus \(H\) is a bi-Lipschitz map from \(\mathcal{F}(M)\) to \(L_{1}([0,1]\times\mathbb{R})\equiv L_{1}[0,1]\) of distortion not larger than \(3L\).

**Remark**.: If \(F:\mathcal{P}(M)\to L_{1}[0,1]\) is Lipschitz, we could perturb \(F\) a bit, to a map \(\tilde{F}\) having a finite-dimensional image and almost the same distortion. If we could then produce a Lipschitz map \(H:\mathcal{F}(M)\to L_{1}[0,1]\) which also has a finite-dimensional image, we would only need Rademacher's Theorem to linearize \(H\). The following result is a generalization of Rademacher's Theorem by Heinrich and Mankiewicz.

Let \(X\) and \(Z\) be Banach spaces and let \(f:X\to Z^{*}\) be a Lipschitz map. We say that \(f\) is \(w^{*}\)-_differentiable_ at a point \(x_{0}\) if for all \(x\in X\) the limit

\[(D^{*}f)_{x_{0}}(x)=w^{*}\text{-}\lim_{\lambda\to 0}\frac{f(x_{0}+\lambda x)-f(x_{0})}{\lambda}\]

exists. We call \((D^{*}f)_{x_{0}}\) the \(w^{*}\)-derivative of \(f\) at \(x_{0}\).

**Theorem 3.2**.: _[12, Theorem 3.2] Let \(X\) and \(Z\) be Banach spaces, \(Z\) being separable, and \(f:X\to Z^{*}\) be a Lipschitz map. Then there is a dense set \(D\subset X\) so that for all \(x_{0}\in D\) the \(w^{*}\)-derivative \((D^{*}f)_{x_{0}}\) exists._

_Moreover:_

1. _For every_ \(x_{0}\in D\)_,_ \((D^{*}f)_{x_{0}}\) _is a bounded linear operator from_ \(X\) _to_ \(Z^{*}\) _and_ \(\|(D^{*}f)_{x_{0}}\|\leq\|f\|_{\mathrm{Lip}}\)_._
2. _If, moreover,_ \(f\) _is bi-Lipschitz, then_ \((D^{*}f)_{x_{0}}\) _is an isomorphic embedding and_ \(\|(D^{*}f)_{x_{0}}^{-1}\|\leq\|f^{-1}\|_{\mathrm{Lip}}\)_._

**Corollary 3.3**.: _Assume that_

\[F:\mathcal{P}(M)\to L_{1}[0,1]\]

_has the property that_

\[d_{\mathrm{Wa}}(\sigma,\tau)\leq\|F(\sigma)-F(\tau)\|_{1}\leq Ld_{\mathrm{Wa}}(\sigma,\tau),\text{ for }\sigma,\tau\in\mathcal{P}(M), \tag{16}\]

_and let \(\varepsilon>0\). Then there exist an \(N\in\mathbb{N}\) and a linear embedding \(T\) of \(\mathcal{F}(M)\) into \(\ell_{1}^{N}\) so that_

\[\|\mu\|_{\mathcal{F}}\leq\|T(\mu)\|_{1}\leq(3L+\varepsilon)\|\mu\|_{\mathcal{F}}.\]

Proof.: Let \(H:\mathcal{F}(M)\to L_{1}[0,1]\) be defined as in Theorem 3.1. Since \(L_{1}[0,1]\) is isometrically a subspace of \(C^{*}[0,1]\), we can apply Theorem 3.2 and obtain an isomorphic embedding \(S\) of \(\mathcal{F}(M)\) into \(C^{*}[0,1]\), with

\[\|\mu\|_{\mathcal{F}}\leq\|S(\mu)\|\leq 3L\|\mu\|_{\mathcal{F}}.\]

Since \(\mathcal{F}(M)\) is finite-dimensional, \(S(\mathcal{F}(M))\) is also finite-dimensional. \(C^{*}[0,1]\) is a \(\mathcal{L}_{1}\)-space of constant \(1\), which means that for every finite-dimensional subspace \(F\) of \(C^{*}[0,1]\) there is a finite-dimensional subspace \(G\) of \(C^{*}[0,1]\) which contains \(F\), which is \((1+\varepsilon)\)-isomorphic to \(\ell_{1}^{N}\), for some \(N\), and which is \((1+\varepsilon)\)-complemented in \(C^{*}[0,1]\) (the complementation is not needed here). We deduce from this our claim.

**Exercise 3.4**.: Prove that \(C^{*}[0,1]\) is a \(\mathcal{L}_{1}\)-space.
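The layer-cake identity (15), the key device in the proof of Theorem 3.1, can also be checked numerically. The following is a minimal Python sketch, in which the grid sizes and the random samples are illustrative assumptions and not part of the argument; it discretizes \(\chi\) and compares both sides of (15):

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_t = 200, 4000                 # grid resolutions in the s- and t-directions
f = rng.uniform(-1.0, 1.0, n_s)      # samples of two functions on [0, 1]
g = rng.uniform(-1.0, 1.0, n_s)
t = np.arange(n_t) / n_t             # chi(h) vanishes for t < 0 and for t > 1 >= |h|

def chi(h):
    # chi(h)(s, t) = sign(h(s)) * 1_{[0, |h(s)|]}(t), discretized on the grid
    return np.sign(h)[:, None] * (t[None, :] <= np.abs(h)[:, None])

ds, dt = 1.0 / n_s, 1.0 / n_t
lhs = np.abs(chi(f) - chi(g)).sum() * ds * dt   # ||chi(f) - chi(g)||_{L1([0,1] x R)}
rhs = np.abs(f - g).sum() * ds                  # ||f - g||_{L1[0,1]}
print(lhs, rhs)                                 # agree up to O(1/n_t) discretization error
```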
### Geodesic Graphs

An undirected graph \(G\) is a pair \(G=(V(G),E(G))\) with

\[E(G)\subset[V(G)]^{2}=\{e\subset V(G):|e|=2\}.\]

For \(v\in V(G)\) we call

\[\deg(v)=\big|\big\{e\in E(G):v\in e\big\}\big|\]

the _degree of \(v\)_.

A _walk_ in a graph \(G\) is a graph \(W=(V(W),E(W))\) with \(V(W)\subset V(G)\) and \(E(W)\subset E(G)\), such that \(V(W)\) can be ordered into \(\{x_{j}:j=0,1,2,\ldots,n\}\) (where the \(x_{j}\) are not necessarily distinct) so that \(E(W)=\big\{\{x_{j-1},x_{j}\}:j=1,2,\ldots,n\big\}\). In that case we call \(W\) _a walk from \(x_{0}\) to \(x_{n}\)_, and also write \(W=(x_{j})_{j=0}^{n}\). In the case that \(x_{j}\neq x_{i}\) whenever \(i\neq j\) in \(\{0,1,2,\ldots,n\}\), we call \(W\) a _path_. If \(x_{0}=x_{n}\) and \(x_{j}\neq x_{i}\) unless \(\{i,j\}=\{0,n\}\), we call \(W\) a _cycle_.

We call a graph \(G=(V(G),E(G))\) _connected_ if for each \(u,v\in V(G)\) there is a walk (and thus also a path) from \(u\) to \(v\). A connected graph that does not contain a cycle is called a _tree_. In that case, a unique path exists between any two vertices \(u\) and \(v\); we denote that unique path by \([u,v]_{G}\).

**Proposition 3.5**.: _Let \(G=(V(G),E(G))\). The following statements are equivalent:_

1. \(G\) _is a tree,_
2. \(G\) _is a minimal connected graph, i.e., for every_ \(e\in E(G)\)_, the graph_ \(G^{\prime}=(V(G),E(G)\setminus\{e\})\) _is not connected,_
3. \(G\) _is a maximal graph without a cycle, i.e., for every_ \(e\in[V(G)]^{2}\setminus E(G)\)_, the graph_ \(\tilde{G}=(V(G),E(G)\cup\{e\})\) _has a cycle,_

_and, in the case that \(n=|V(G)|<\infty\),_

4. \(G\) _is connected and_ \(|E(G)|=n-1\)_._

Proof.: Exercise.

**Definition**.: Let \(T=(V(T),E(T))\) be a tree. We call \(v\in V(T)\) a _leaf of \(T\)_ if \(\deg(v)=1\).

**Exercise 3.6**.: Every finite tree with at least two vertices has leaves.

If \(G\) is a connected graph and \(d_{G}\) is a metric on \(V(G)\), we call \(d_{G}\) a _geodesic metric on \(G\)_ if

\[d_{G}(u,v)=\min\{\text{length}_{d_{G}}(p):\text{$p$ is a path from $u$ to $v$}\},\text{ for }u,v\in V(G),\]

where for a path \(p=(x_{j})_{j=0}^{n}\) in \(G\) we define the length of \(p\) by

\[\text{length}_{d_{G}}(p)=\sum_{j=1}^{n}d_{G}(x_{j-1},x_{j}).\]

In that case we call the pair \((G,d_{G})\) a _geodesic graph_. For \(e=\{u,v\}\in E(G)\) we put \(d_{G}(e)=d_{G}(u,v)\). If \(G\) is a cycle, a path, or a tree, we call it a _geodesic cycle_, _geodesic path_, or a _geodesic tree_, respectively.

Assume that \(w:E(G)\to\mathbb{R}^{+}\) is a function. Define for \(u,v\in V(G)\)

\[d_{G}(u,v):=\min\Big\{\sum_{j=1}^{n}w(\{x_{j-1},x_{j}\}):(x_{j})_{j=0}^{n}\text{ is a path from }u\text{ to }v\Big\}.\]

Then \(d_{G}\) is a geodesic metric on \(G\), and we call it the _metric generated by the weight function \(w\)_. This is why geodesic graphs are often referred to as _weighted graphs_.

### Transportation cost spaces over geodesic trees

Let \((M,d)\) be a metric space. Recall the following notation from 2.3: for \(\mu=\mu^{+}-\mu^{-}\in\tilde{\mathcal{F}}(M)\),

\[\mathcal{M}^{+}(\mu^{+},\mu^{-})=\Big\{\nu\in\mathcal{M}^{+}(M^{2}):\mu^{+}(x)=\sum_{y^{\prime}\in M}\nu(x,y^{\prime}),\mu^{-}(y)=\sum_{x^{\prime}\in M}\nu(x^{\prime},y),\text{ for }x,y\in M\Big\}.\]

We showed

\[\|\mu\|_{\mathcal{F}}=\|\mu\|_{\text{tc}}=\min\Big\{\sum_{x,y\in M}\nu(x,y)d(x,y):\nu\in\mathcal{M}^{+}(\mu^{+},\mu^{-})\Big\}.\]

Now let \(T=(V(T),E(T))\) be a tree with a geodesic metric \(d_{T}\) generated by a weight function \(w:E(T)\to(0,\infty)\), and fix a root \(v_{0}\in V(T)\).

1. For \(u,v\in V(T)\) we write \(u\preceq v\) if \(u\in V([v_{0},v]_{T})\); this defines a partial order on \(V(T)\).
2. We direct every edge \(e=\{u,v\}\in E(T)\) as \(e=(e^{-},e^{+})\), where \(e^{-}\preceq e^{+}\), and denote the set of directed edges by \(E_{d}(T)\).
3. (Tripod property of trees) If \(u,v\in V(T)\) then there exists a (unique) \(z\in V(T)\) for which \([v_{0},z]_{T}=[v_{0},u]_{T}\cap[v_{0},v]_{T}\), which we denote by \(\min(u,v)\). Moreover, it follows in this case that \(d_{T}(u,v)=d_{T}(u,z)+d_{T}(z,v)\).

**Proposition 3.8**.: _The map \(f:(V(T),d_{T})\to\ell_{1}(E(T),w)\), \(v\mapsto 1_{E([v_{0},v]_{T})}\), is an isometry._

_Here \(1_{E([v_{0},v]_{T})}:E(T)\to\mathbb{R}\) is seen as an element of \(\ell_{1}(E(T),w)\), defined by_

\[1_{E([v_{0},v]_{T})}(e):=\begin{cases}1&\text{if $e\in E([v_{0},v]_{T})$,}\\ 0&\text{else.}\end{cases}\]

Proof.: Let \(u,v\in V(T)\) and let \(z=\min(u,v)\). Since \([u,z]_{T}\cup[z,v]_{T}\) is the (unique!) path from \(u\) to \(v\), it follows that

\[d_{T}(u,v)=\sum_{e\in E([u,z])}w(e)+\sum_{e\in E([z,v])}w(e)=\sum_{e\in E(T)}|1_{E([u,z])}(e)-1_{E([z,v])}(e)|w(e)=\sum_{e\in E(T)}|1_{E([v_{0},u])}(e)-1_{E([v_{0},v])}(e)|w(e)=\|f(u)-f(v)\|_{1}.\]

We now use the extension property of \(\mathcal{F}(V(T),d_{T})\) and denote the unique linear extension of \(f\) by \(F\), and recall that \(\|F\|_{\mathcal{F}(V(T),d_{T})\to\ell_{1}(E(T),w)}=1\). By linearity of \(F:\mathcal{F}(V(T),d_{T})\to\ell_{1}(E(T),w)\) we deduce, for any \(\mu=\mu^{+}-\mu^{-}\) and any \(\nu\in\mathcal{M}^{+}(\mu^{+},\mu^{-})\), that

\[F(\mu)=F\Big(\sum_{x,y\in V(T)}\nu(x,y)(\delta_{x}-\delta_{y})\Big)=\sum_{x,y\in V(T)}\nu(x,y)(f(x)-f(y)),\]

and thus, since \(\|f(x)-f(y)\|_{1}=d_{T}(x,y)\),

\[\|F(\mu)\|_{1}\leq\inf_{\nu\in\mathcal{M}^{+}(\mu^{+},\mu^{-})}\sum_{x,y\in V(T)}\nu(x,y)d_{T}(x,y)=\|\mu\|_{\rm tc}=\|\mu\|_{\mathcal{F}}. \tag{17}\]

For our next step, we introduce another notation: let \(e=(e^{-},e^{+})\in E_{d}(T)\), and put

\[V_{e}=\{v\in V(T):e^{+}\preceq v\}.\]

We note that \(T_{e}=(V_{e},E(T)\cap[V_{e}]^{2})\) and its complement \(T_{e}^{c}=(V(T)\setminus V_{e},E(T)\cap[V(T)\setminus V_{e}]^{2})\) are subtrees of \(T\).

**Proposition 3.9**.: _For any \(\mu\in\mathcal{F}(V(T),d_{T})\) it follows that_

\[\|F(\mu)\|_{1}=\sum_{e\in E(T)}w_{e}|\mu(T_{e})|.\]

**Remark**.: Here we write \(\mu(T_{e}):=\mu(V(T_{e}))\). Since for \(e\in E(T)\) the indicator function \(1_{T_{e}}\), as a function on \(V(T)\), is a Lipschitz function whose Lipschitz norm is \(\frac{1}{w(e)}\), the term \(\mu(T_{e})\) is well defined, even if \(T\) is not a finite tree.

Proof.: W.l.o.g. \(\mu\in\tilde{\mathcal{F}}(V(T),d_{T})\).
By linearity,

\[F(\mu)=\sum_{v\in V(T)}\mu(v)f(v)=\sum_{v\in V(T)}\mu(v)1_{E([v_{0},v])},\]

and thus

\[\|F(\mu)\|_{1}=\sum_{e\in E(T)}w(e)\Big|\sum_{v\in V(T)}\mu(v)\underbrace{1_{E([v_{0},v])}(e)}_{=1\iff e^{+}\preceq v\iff v\in V_{e}}\Big|=\sum_{e\in E(T)}w(e)|\mu(T_{e})|.\]

We are now able to provide an explicit formula for \(\|\mu\|_{\rm tc}\), for \(\mu\in\mathcal{F}(V(T),d_{T})\).

**Theorem 3.10**.: _For \(\mu\in\mathcal{F}(V(T),d_{T})\),_

\[\|\mu\|_{\rm tc}=\sum_{e\in E(T)}w_{e}|\mu(T_{e})|. \tag{18}\]

Proof.: It is enough to show (18) for \(\mu\in\tilde{\mathcal{F}}(V(T),d_{T})\). From the inequality (17) and Proposition 3.9 we deduce

\[\sum_{e\in E(T)}w_{e}|\mu(T_{e})|=\|F(\mu)\|_{1}\leq\|\mu\|_{\rm tc}.\]

We need to show that there is a specific transportation plan \(\nu\in\mathcal{M}^{+}(V(T)^{2})\) representing \(\mu\) (i.e., with \(\mu=\sum_{x,y}\nu(x,y)(\delta_{x}-\delta_{y})\)) for which

\[t(\nu)=\sum_{x,y\in V(T)}\nu(x,y)d_{T}(x,y)=\sum_{e\in E(T)}w_{e}|\mu(T_{e})|.\]

We put

\[S_{\mu}=\bigcup\{V([v_{0},v]_{T}):v\in\operatorname{supp}(\mu)\}.\]

Note that \(T_{\mu}=(S_{\mu},E(T)\cap[S_{\mu}]^{2})\) is a finite subtree of \(T\). We put \(n_{\mu}=|S_{\mu}|\) and we will verify our claim by induction over all values of \(n_{\mu}\). If \(n_{\mu}=0\), it follows that \(\mu=0\); thus, our claim is trivial.

Assume that \(n_{\mu}=n+1\) and that our claim is true as long as \(n_{\mu}\leq n\). We choose a leaf \(v\) of \(T_{\mu}\) with \(v\neq v_{0}\) (this is possible, since \(n_{\mu}\geq 2\) and every finite tree with at least two vertices has at least two leaves), and let \(u\) be its immediate predecessor. Then we define \(\mu^{\prime}=\mu-\mu(v)\delta_{v}+\mu(v)\delta_{u}\). It follows that

1. \(S_{\mu^{\prime}}\subset S_{\mu}\setminus\{v\}\), and thus \(n_{\mu^{\prime}}<n_{\mu}\),
2. \(\mu(T_{(u,v)})=\mu(v)\) and \(\mu^{\prime}(T_{(u,v)})=0\),
3. \(\mu(T_{e})=\mu^{\prime}(T_{e})\) for all \(e\in E(T)\setminus\{(u,v)\}\).

Using the induction hypothesis, let \(\nu^{\prime}\) be a plan representing \(\mu^{\prime}\) such that

\[\|\mu^{\prime}\|_{\rm tc}=\sum_{x,y}\nu^{\prime}(x,y)d_{T}(x,y)=\sum_{e\in E(T)}w_{e}|\mu^{\prime}(T_{e})|=\sum_{e\in E(T),e\neq(u,v)}w_{e}|\mu(T_{e})|.\]

If \(\mu(v)>0\) we put \(\nu:=\nu^{\prime}+\mu(v)\delta_{(v,u)}\), and if \(\mu(v)<0\) we put \(\nu:=\nu^{\prime}+|\mu(v)|\delta_{(u,v)}\); in both cases \(\nu\) represents \(\mu\), and

\[\sum_{x,y\in V(T)}\nu(x,y)d_{T}(x,y)=\sum_{x,y\in V(T)}\nu^{\prime}(x,y)d_{T}(x,y)+|\mu(v)|d_{T}(u,v)=\sum_{e\in E(T)}w(e)|\mu(T_{e})|.\]

**Corollary 3.11**.: \(\mathcal{F}(V(T),d_{T})\) _is isometrically isomorphic to \(\ell_{1}(E(T),w)\) via the operator_

\[I:\mathcal{F}(V(T),d_{T})\to\ell_{1}(E(T),w),\quad\sigma\mapsto\big(\sigma(T_{e}):e\in E(T)\big).\]

Proof.: For \(\mu\in\mathcal{F}(V(T),d_{T})\) we have, by Theorem 3.10, \(\|I(\mu)\|_{\ell_{1}(E(T),w)}=\sum_{e\in E(T)}w_{e}|\mu(T_{e})|=\|\mu\|_{\rm tc}\). Since, for \(e\in E(T)\),

\[I\Big(\frac{\delta_{e^{+}}-\delta_{e^{-}}}{d_{T}(e)}\Big)=\frac{1}{w_{e}}e_{e},\]

where \((e_{e}:e\in E(T))\) denotes the unit vector basis of \(\ell_{1}(E(T),w)\), \(I\) is also surjective.

**Remark**.: A. Godard [8] proved Corollary 3.11 for more general trees (\(\mathbb{R}\)-trees); we followed a simplified version of his arguments. The most general version of Corollary 3.11 can be found in [4]. In [8], Godard also proved a converse, namely that every metric space \((M,d)\) for which \(\mathcal{F}(M,d)\) is isometric to a subspace of \(L_{1}\), \(\ell_{1}\) or \(\ell_{1}^{N}\) is isometrically equivalent to a subset of an \(\mathbb{R}\)-tree. We will prove that fact for a finite metric space (in which case the proof is much simpler).
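Formula (18) also makes the transportation cost norm on a tree effectively computable: every edge contributes \(w_{e}|\mu(T_{e})|\), where \(\mu(T_{e})\) is the net mass in the subtree below \(e\). Here is a minimal Python sketch; the rooted tree, the weights, and the measure below are illustrative assumptions, not taken from the text:

```python
# Each vertex v != root determines the directed edge e = (parent(v), v) and the
# subtree T_e below it; by (18), ||mu||_tc = sum over e of w_e * |mu(T_e)|.
parent = {"a": "root", "b": "root", "c": "a", "d": "a"}        # child -> parent
weight = {"a": 1.0, "b": 2.0, "c": 0.5, "d": 3.0}              # w_e for e = (parent(v), v)
mu = {"root": 0.0, "a": 0.0, "b": 1.0, "c": -0.5, "d": -0.5}   # note: mu(V(T)) = 0

def subtree_mass(v):
    # mu(T_e) for the edge entering v: the mass at v plus that of all descendants
    return mu[v] + sum(subtree_mass(u) for u, p in parent.items() if p == v)

norm_tc = sum(weight[v] * abs(subtree_mass(v)) for v in parent)
print(norm_tc)   # = 1*|-1| + 2*|1| + 0.5*|-0.5| + 3*|-0.5| = 4.75
```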
**Proposition 3.12**.: _Assume \((M,d)\) is a finite metric space and \(\mathcal{F}(M)\) is isometrically equivalent to \(\ell_{1}^{n}\) for some \(n\in\mathbb{N}\)._

_Then \((M,d)\) is isometrically equivalent to a geodesic tree, meaning there is an \(E\subset[M]^{2}\) so that \(T=(M,E)\) is a tree and \(d\) is a geodesic metric for \(T\)._

We will use the following fact:

**Proposition 3.13**.: _A Banach space \(E\) of dimension \(n\in\mathbb{N}\) is isometric to \(\ell_{1}^{n}\) if and only if there are \(x_{1},x_{2},\ldots,x_{n}\in S_{E}\) for which \(\{\pm x_{j}:j=1,2,\ldots,n\}\) are the extreme points of \(B_{E}\)._

Proof of Proposition 3.12.: Define

\[E=\big\{\{x,y\}\in[M]^{2}:\text{ there does not exist }z\in M\setminus\{x,y\}\text{ with }d(x,y)=d(x,z)+d(z,y)\big\}.\]

Then \(G=(M,E)\) is a connected graph and \(d\) is a geodesic metric with respect to \(G\) (since \(M\) is finite, every pair \(u\neq v\) can be joined by a path in \(G\) whose length realizes \(d(u,v)\)). Thus, by Theorem 2.12, the extreme points of \(B_{\mathcal{F}(M)}\) are

\[\Big\{\pm\frac{\delta_{u}-\delta_{v}}{d(u,v)}:\{u,v\}\in E\Big\}.\]

Assume that \(\mathcal{F}(M)\) is isometrically equivalent to \(\ell_{1}^{n}\). Thus \(\dim(\mathcal{F}(M))=n\), which implies that \(|M|=n+1\), and by Proposition 3.13, \(B_{\mathcal{F}(M)}\) has \(2n\) extreme points, which means that the cardinality of \(E\) must be \(n\). So \((M,E)\) is a connected graph with \(n+1\) vertices and \(n\) edges. This implies by Proposition 3.5 that \((M,E)\) is a tree.

## 4. Stochastic Embeddings of metric spaces into trees

### Definition of Stochastic Embeddings, Examples

**Definition**.: Let \(\mathcal{M}\) be a class of metric spaces, and let \((M,d_{M})\) be a metric space. A family \((f_{i})_{i\in I}\), with \(I\subset\mathbb{N}\), of maps \(f_{i}:M\to M_{i}\), with \((M_{i},d_{i})\in\mathcal{M}\), together with numbers \(\mathbb{P}=(p_{i})_{i\in I}\subset(0,1]\), with \(\sum_{i\in I}p_{i}=1\), is called a \(D\)_-stochastic embedding of \(M\) into elements of the class \(\mathcal{M}\)_ if for all \(x,y\in M\) and \(i\in I\),

\[d_{M}(x,y)\leq d_{i}(f_{i}(x),f_{i}(y))\ (\text{expansiveness}), \tag{19}\]

\[\mathbb{E}_{\mathbb{P}}\big(d_{i}(f_{i}(x),f_{i}(y))\big)=\sum_{i\in I}p_{i}d_{i}\big(f_{i}(x),f_{i}(y)\big)\leq Dd_{M}(x,y). \tag{20}\]

In that case we say that \((M,d_{M})\) \(D\)_-stochastically embeds into \(\mathcal{M}\)_. If moreover the maps \(f_{i}:M\to M_{i}\) are bijections, we say that \((f_{i})_{i\in I}\) together with \((p_{i})_{i\in I}\) is a _bijective \(D\)-stochastic embedding of \(M\) into elements of the class \(\mathcal{M}\)_.

**Remark**.: Of course, if \((f_{i})_{i\in I}\), \(f_{i}:M\to M_{i}\), together with the numbers \((p_{i})_{i\in I}\), is a bijective \(D\)-stochastic embedding of \(M\) into elements of the class \(\mathcal{M}\), we can assume that as sets \(M_{i}=M\), and that \(d_{i}\) is a metric on \(M\).

We will mainly be interested in how a finite metric graph can be stochastically embedded into trees.

**Example 4.1**.: Let \(C_{n}=(V(C_{n}),E(C_{n}))\) be a cycle of length \(n\). We can write \(V(C_{n})\) and \(E(C_{n})\) as \(V(C_{n})=\mathbb{Z}/n\mathbb{Z}\) and \(E(C_{n})=\big\{\{j-1,j\}:j=1,2,\ldots,n\big\}\), and let \(d\) be the geodesic metric generated by the constant weight function \(1\).
Consider, for \(j_{0}=1,2,\ldots,n\), the path \(P_{j_{0}}\) defined by \(V(P_{j_{0}})=V(C_{n})=\{0,1,2,\ldots,n-1\}\) and

\[E(P_{j_{0}})=E(C_{n})\setminus\big\{\{j_{0}-1,j_{0}\}\big\}.\]

We consider on \(P_{j_{0}}\) the usual _path distance_, generated by the weight function \(w(e)=1\), for \(e\in E(P_{j_{0}})\), and denote it by \(d_{j_{0}}\). It follows that

\[d_{j_{0}}(j_{0}-1,j_{0})=n-1,\]

and thus it follows for the identity \(Id:(V(C_{n}),d_{C_{n}})\to(V(C_{n}),d_{j_{0}})\) that \(\operatorname{dist}(Id)=n-1\).

Let \(\mathcal{T}\) be the set of all metric trees. It can be shown that the \(\mathcal{T}\)-distortion of a cycle \(C_{n}\) of length \(n\) is of the order \(n\), _i.e.,_ \(c_{\mathcal{T}}(C_{n})\geq c\cdot n\). In other words, there are no embeddings of cycles into trees with sublinear (with respect to \(n\)) distortion.

Nevertheless, the distortion of stochastic embeddings of \(C_{n}\) into trees is not larger than \(2\): let \(I=\{1,2,\ldots,n\}\), \(p_{i}=\frac{1}{n}\), and for \(i\in I\) let \(P_{i}\) be the above introduced path. Then \(d_{C_{n}}(u,v)\leq d_{i}(u,v)\), for \(u,v\in V(C_{n})\), and for \(e=\{j-1,j\}\in E(C_{n})\) it follows (since \(d_{i}(j-1,j)=1\) for \(i\neq j\), while \(d_{j}(j-1,j)=n-1\)) that

\[\sum_{i=1}^{n}p_{i}d_{i}(j-1,j)=\frac{1}{n}\big(2(n-1)\big)<2.\]

Since all the metrics involved are geodesic, verifying condition (20) on the edges of \(C_{n}\) suffices.

### Stochastic Embeddings into Trees: the Theorem of Fakcharoenphol, Rao, and Talwar

**Theorem 4.2**.: _[6] Let \(M\) be a metric space with \(n\in\mathbb{N}\) elements. Then there is an \(O(\log n)\)-stochastic embedding of \(M\) into the class of weighted trees._

A complete proof of Theorem 4.2 is given in the Appendix. Here, we only want to define the stochastic embedding. Let \(M=\{x_{1},x_{2},\ldots,x_{n}\}\). After rescaling, we can assume that \(d(x,y)>1\) for all \(x\neq y\) in \(M\). We introduce the following notation.

1. \(B_{r}(x)=\{z\in M:d(z,x)\leq r\}\) for \(x\in M\) and \(r>0\).
2. For \(A\subset M\), \(\operatorname{diam}(A)=\max_{x,y\in A}d(x,y)\). We choose \(k\in\mathbb{N}\) so that \(2^{k-1}<\operatorname{diam}(M)\leq 2^{k}\).
3. \(\mathcal{P}_{M}\) denotes the set of all partitions of \(M\). A partition \(P=\{A_{1},A_{2},\ldots,A_{l}\}\in\mathcal{P}_{M}\) is called an \(r\)-partition if \(\operatorname{diam}(A_{j})\leq r\), for \(j=1,2,\ldots,l\). For \(B\subset M\), let \(P|_{B}=(A_{1}\cap B,A_{2}\cap B,\ldots,A_{l}\cap B)\).
4. For two partitions \(P=\{A_{1},A_{2},\ldots,A_{m}\}\) and \(Q=\{B_{1},B_{2},\ldots,B_{n}\}\) we say that \(Q\) _subdivides_ \(P\), and write \(Q\succeq P\), if for each \(j=1,2,\ldots,n\) there is an \(i=1,2,\ldots,m\) with \(B_{j}\subset A_{i}\).
5. Let \(\Omega\) be the set of all sequences \((P^{(j)})_{j=0}^{k}\) so that \(P^{(j)}\) is a \(2^{k-j}\)-partition of \(M\), with \(P^{(0)}=\{M\}\) and \(P^{(j)}\succeq P^{(j-1)}\). Note that the elements of \(P^{(k)}\) are singletons (since \(d(x,y)>1\) for \(x\neq y\)).
6. Every \((P^{(j)})_{j=0}^{k}\in\Omega\) defines a tree \(T=(V(T),E(T))\) as follows: \(V(T)=\{(j,A):j=0,1,2,\ldots,k,\text{ and }A\in P^{(j)}\}\), and \(E(T)=\big\{\{(j,B),(j-1,A)\}:j=1,2,\ldots,k,\ B\in P^{(j)},\ A\in P^{(j-1)},\text{ and }B\subset A\big\}\). The weight function is defined by \(w(\{(j,B),(j-1,A)\})=2^{k-j}\) if \(\{(j,B),(j-1,A)\}\in E(T)\).

For each such tree \(T\), we can define a map from \(M\) to the leaves of \(T\) by assigning to each \(x\) the element \((k,\{x\})\in V(T)\). We now have to define a probability on \(\Omega\).
We first define for each \(R\) a probability \(\mathbb{P}^{R}\) on \(R\)-bounded partitions. Let \(\Pi_{n}\) be the set of all permutations of \(\{1,2,\ldots,n\}\). We consider on \(\Pi_{n}\times[R/4,R/2]\) the product of the uniform distribution on \(\Pi_{n}\) and the uniform distribution on \([R/4,R/2]\). Each pair \((\pi,r)\in\Pi_{n}\times[R/4,R/2]\) defines the following \(R\)-bounded partition \((C_{j}(\pi,r))_{j=1}^{l}\): \(\tilde{C}_{1}(\pi,r)=B_{r}(x_{\pi(1)})\), and assuming \(\tilde{C}_{1}(\pi,r),\tilde{C}_{2}(\pi,r),\ldots,\tilde{C}_{j-1}(\pi,r)\) are defined, we put \(\tilde{C}_{j}(\pi,r)=B_{r}(x_{\pi(j)})\setminus\bigcup_{i=1}^{j-1}\tilde{C}_{i}(\pi,r)\). Then let \(C(\pi,r)=\{C_{j}(\pi,r):j=1,2,\ldots,l\}\) be the nonempty elements of \((\tilde{C}_{j}(\pi,r))_{j=1}^{n}\), and let \(\mathbb{P}^{R}\) be the image distribution of the mapping \((\pi,r)\mapsto C(\pi,r)\). For \(B\subset M\) let \(\mathbb{P}^{(R,B)}\) be the image distribution of the mapping \((\pi,r)\mapsto C(\pi,r)|_{B}\).

Let \(\mathbb{P}\) be the probability on \(\Omega\) uniquely defined by

\[\mathbb{P}\big((P^{(j)})_{j=0}^{k}:P^{(0)}=\{M\}\big)=1, \tag{21}\]

and, for \(i=1,2,\ldots,k\), \(B\subset M\) with \(\operatorname{diam}(B)\leq 2^{k-(i-1)}\), and any \(\mathcal{A}\subset\mathcal{P}_{M}\),

\[\mathbb{P}\big(P^{(i)}|_{B}\in\mathcal{A}\,\big|\,B\in P^{(i-1)}\big)=\mathbb{P}^{(2^{k-i},B)}(\mathcal{A}). \tag{22}\]

### Bijective embeddings into trees: The Restriction Theorem of Gupta

Our goal is to show that, for some universal constant \(c\), every metric space with \(n\) elements can be bijectively \(c\log(n)\)-stochastically embedded into trees. This will follow from Theorem 4.2 and the following result by Gupta.

**Theorem 4.3**.: _[10, Theorem 1.1] Let \(T=(V,E,W)\) be a weighted tree and \(V^{\prime}\subset V\). Then there are \(E^{\prime}\subset[V^{\prime}]^{2}\) and \(W^{\prime}:E^{\prime}\to[0,\infty)\) so that \(T^{\prime}=(V^{\prime},E^{\prime},W^{\prime})\) is a tree and_

\[\frac{1}{4}\leq\frac{d_{T^{\prime}}(x,y)}{d_{T}(x,y)}\leq 2,\text{ for }x,y\in V^{\prime}. \tag{23}\]

A proof of Theorem 4.3 is given in the appendix.

**Corollary 4.4**.: _If a finite metric space \((M,d)\) \(D\)-stochastically embeds into geodesic trees, then it bijectively \(8D\)-stochastically embeds into geodesic trees._

_In particular, by Theorem 4.2, there is a constant \(c>0\) so that every metric space \((M,d)\) with \(n\) elements bijectively \(c\log(n)\)-stochastically embeds into weighted trees._

**Exercise 4.5**.: The estimate in Corollary 4.4 is optimal in the following sense: let \(\mathbb{G}_{n}=(\mathbb{Z}/n\mathbb{Z})^{2}\) be the \(n\times n\) discrete torus; then there is a constant \(c>0\) so that for all \(n\in\mathbb{N}\) the following holds: If \(T\) is a tree and \(f:V(\mathbb{G}_{n})\to V(T)\) is an injective map so that

\[1\leq d_{T}(f(x),f(y))\text{ for all }x,y\in\mathbb{G}_{n}\text{ with }\{x,y\}\in E(\mathbb{G}_{n}),\]

then

\[\frac{1}{|E(\mathbb{G}_{n})|}\sum_{e=\{x,y\}\in E(\mathbb{G}_{n})}d_{T}(f(x),f(y))\geq c\log(n).\]

The same is true for the family of diamond graphs \((D_{n})_{n\in\mathbb{N}}\) (see [11, Theorem 5.6]).

**Remark**.: Assume that \((G,d_{G})\) is a finite geodesic graph, and that the geodesic trees \((T_{i},d_{i})\) and expansive maps \(f_{i}:V(G)\to V(T_{i})\), \(i=1,2,\ldots,n\), together with the probability \(\mathbb{P}=(p_{i})_{i=1}^{n}\), form a bijective \(D\)-stochastic embedding of \(V(G)\). Then we can assume that actually \(V(T_{i})=V(G)\), and thus \(E(T_{i})\subset[V(G)]^{2}\), and that \(f_{i}\) is the identity.
For \(i=1,2,\ldots,n\) and \(e=\{x,y\}\in E(T_{i})\) (which need not be in \(E(G)\)), let

\[w_{i}^{\prime}(e)=d_{G}(x,y)=\min\big\{\text{length}_{d_{G}}(P):\text{$P$ is a path in $G$ from $x$ to $y$}\big\}\leq d_{i}(e),\]

and let \(d_{i}^{\prime}\) be the geodesic metric on \(V(T_{i})=V(G)\) generated by \(w_{i}^{\prime}\). We note that \(d_{i}^{\prime}(x,y)\leq d_{i}(x,y)\), for \(x,y\in V(G)\), but that the identity \((V(G),d_{G})\to(V(G),d_{i}^{\prime})\) is still expansive (by the triangle inequality in \(d_{G}\)). It follows therefore that if we replace \(d_{i}\) by \(d_{i}^{\prime}\), we do not increase the stochastic distortion, but may only reduce it.

### Application: Embedding Transportation Cost Spaces into \(\ell_{1}\)

**Theorem 4.6**.: _[3, 16] Let \((M,d)\) be a countable metric space which \(D\)-stochastically embeds into geodesic trees. Then, there is an isomorphic embedding_

\[\Phi:\mathcal{F}(M)\to\ell_{1}\]

_with_

\[\|\mu\|_{\mathcal{F}}\leq\|\Phi(\mu)\|_{1}\leq D\|\mu\|_{\mathcal{F}}\text{ for all }\mu\in\mathcal{F}(M).\]

We will use the following easy fact. Let \(\mathbb{P}\) be a probability on a finite or countably infinite set \(I\), with \(\mathbb{P}(i)>0\), for \(i\in I\). Put

\[L_{1}(\mathbb{P},\ell_{1})=\{f:I\to\ell_{1}:\|f\|_{L_{1}(\mathbb{P},\ell_{1})}<\infty\},\]

where, for \(f\in L_{1}(\mathbb{P},\ell_{1})\), we write \(f(i)=\sum_{j=1}^{\infty}e_{j}f(i,j)\), with \((e_{j})\) denoting the unit vector basis of \(\ell_{1}\), and put

\[\|f\|_{L_{1}(\mathbb{P},\ell_{1})}=\int_{I}\|f\|_{1}\,d\mathbb{P}=\sum_{i\in I}\|f(i)\|_{1}\mathbb{P}(i)=\sum_{i\in I}\sum_{j=1}^{\infty}|f(i,j)|\mathbb{P}(i).\]

Then

\[L_{1}(\mathbb{P},\ell_{1})\to\ell_{1}(I\times\mathbb{N}),\quad f\mapsto\big(f(i,j)\mathbb{P}(i):i\in I,j\in\mathbb{N}\big)\]

is an onto isometry.

Proof of Theorem 4.6.: For \(i\in I\) let \(T_{i}\) be a tree with a geodesic metric \(d_{i}\), let \(\mathbb{P}=(p_{i})_{i\in I}\) be a (strictly positive) probability on \(I\), and let \(\phi_{i}:M\to V(T_{i})\) be so that

\[d_{i}(\phi_{i}(x),\phi_{i}(y))\geq d(x,y)\text{ for }i\in I\text{ and }x,y\in M,\text{ and }\sum_{i\in I}p_{i}d_{i}(\phi_{i}(x),\phi_{i}(y))\leq Dd(x,y)\text{ for }x,y\in M.\]

We can assume that the \(T_{i}\) are countable trees. By Corollary 3.11, \(\mathcal{F}(V(T_{i}),d_{i})\) is isometrically isomorphic to \(\ell_{1}(E(T_{i}),w_{i})\), for \(i\in I\), and thus there are isometric embeddings \(E_{i}:\mathcal{F}(V(T_{i}),d_{i})\to\ell_{1}\), for \(i\in I\).
We define

\[\Psi_{i}:\tilde{\mathcal{F}}(M,d)\to\mathcal{F}(V(T_{i}),d_{i}),\quad\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\mapsto\sum_{j=1}^{n}r_{j}(\delta_{\phi_{i}(x_{j})}-\delta_{\phi_{i}(y_{j})}).\]

Note that if \(\mu\in\tilde{\mathcal{F}}(M,d)\) is represented by \(\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\), then \(\Psi_{i}(\mu)\) is represented by \(\Psi_{i}(\mu)=\sum_{j=1}^{n}r_{j}(\delta_{\phi_{i}(x_{j})}-\delta_{\phi_{i}(y_{j})})\), and thus

\[\|\Psi_{i}(\mu)\|_{\rm tc}=\inf\Big\{\sum_{j=1}^{n}r_{j}d_{i}(u_{j},v_{j}):\Psi_{i}(\mu)=\sum_{j=1}^{n}r_{j}(\delta_{u_{j}}-\delta_{v_{j}}),\,(r_{j})_{j=1}^{n}\subset\mathbb{R}^{+}\Big\}=\inf\Big\{\sum_{j=1}^{n}r_{j}d_{i}(\phi_{i}(x_{j}),\phi_{i}(y_{j})):\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}}),\,(r_{j})_{j=1}^{n}\subset\mathbb{R}^{+}\Big\}\geq\|\mu\|_{\rm tc}.\]

For the second equality note that "\(\leq\)" is clear and that "\(\geq\)" follows from the fact (see Proposition 2.4) that among the optimal representations of \(\Psi_{i}(\mu)\) there always must be one whose support is the support of \(\Psi_{i}(\mu)\), which is a subset of \(\phi_{i}(M)\); the last inequality follows from the expansiveness (19).

We define

\[\Psi:\tilde{\mathcal{F}}(M,d)\to L_{1}(\mathbb{P},\ell_{1})\equiv\ell_{1},\quad\mu\mapsto\Psi(\mu):I\ni i\mapsto E_{i}\circ\Psi_{i}(\mu)\in\ell_{1}.\]

Then it follows for \(\mu\in\tilde{\mathcal{F}}(M,d)\) that

\[\|\Psi(\mu)\|_{L_{1}(\mathbb{P},\ell_{1})}=\sum_{i\in I}p_{i}\|\Psi_{i}(\mu)\|_{\rm tc}\geq\|\mu\|_{\rm tc},\]

and, on the other hand, if \(\mu=\sum_{j=1}^{n}r_{j}(\delta_{x_{j}}-\delta_{y_{j}})\) is an optimal representation of \(\mu\), then

\[\|\Psi(\mu)\|_{L_{1}(\mathbb{P},\ell_{1})}=\sum_{i\in I}p_{i}\|\Psi_{i}(\mu)\|_{\rm tc}\leq\sum_{i\in I}p_{i}\sum_{j=1}^{n}r_{j}d_{i}(\phi_{i}(x_{j}),\phi_{i}(y_{j}))\leq D\sum_{j=1}^{n}r_{j}d(x_{j},y_{j})=D\|\mu\|_{\rm tc}.\]

Since \(\tilde{\mathcal{F}}(M,d)\) is dense in \(\mathcal{F}(M,d)\), \(\Psi\) extends to an embedding \(\Phi:\mathcal{F}(M)\to\ell_{1}\) with the claimed bounds.

### Application: Extension of Integral Operators from Conservative Vector Fields to all Vector Fields, and Complemented Embeddings of Transportation Cost Spaces into \(L_{1}\)

We recall some notation from _discrete calculus_. We are given a finite graph \(G=(V(G),E(G))\) with a geodesic metric \(d_{G}\). We put \(\bar{E}(G)=\big\{(x,y),(y,x):\{x,y\}\in E(G)\big\}\). A map \(f:\bar{E}(G)\to\mathbb{R}\) is called a _vector field on \(G\)_ if \(f(x,y)=-f(y,x)\) for all \(\{x,y\}\in E(G)\), and we put

\[\|f\|_{\infty}=\sup_{e=(x,y)\in\bar{E}(G)}|f(x,y)|.\]

We denote the space of vector fields on \(G\) by \(\operatorname{VF}(G)\). Together with this norm, \(\operatorname{VF}(G)\equiv\ell_{\infty}(E(G))\).

If \(W=(x_{j})_{j=0}^{n}\) is a walk in \(G\) and \(f\in\operatorname{VF}(G)\), we call

\[\int_{W}f(e)\,d_{G}(e)=\sum_{j=1}^{n}f(x_{j-1},x_{j})d_{G}(x_{j-1},x_{j})\]

the _integral of \(f\) along \(W\)_. A vector field \(f\) on \(G\) is called _conservative_ if the integral along any cycle \(C\) vanishes, _i.e.,_ if \(C=(x_{j})_{j=0}^{n}\) is a walk with \(x_{n}=x_{0}\), then

\[\int_{C}f(e)\,d_{G}(e)=\sum_{j=1}^{n}f(x_{j-1},x_{j})d_{G}(x_{j-1},x_{j})=0;\]

equivalently, \(f\) is conservative if for any \(x,y\in V(G)\) and any two paths \(P\) and \(Q\) from \(x\) to \(y\) it follows that

\[\int_{P}f(e)\,d_{G}(e)=\int_{Q}f(e)\,d_{G}(e).\]

We denote the space of conservative vector fields on \(G\) by \(\operatorname{CVF}(G)\).

**Remark.** If \(T\) is a tree, then \(\operatorname{VF}(T)=\operatorname{CVF}(T)\).
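The remark fails as soon as the graph contains a cycle: the "rotational" field on a cycle has nonvanishing circulation. Here is a minimal Python sketch of the definitions above, where the \(4\)-cycle with unit weights is an illustrative assumption:

```python
# A vector field assigns f(x, y) = -f(y, x) to each directed edge; it is
# conservative iff its integral along every cycle vanishes.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                 # the 4-cycle C_4
d = {e: 1.0 for e in edges}
d.update({(y, x): w for (x, y), w in list(d.items())})   # d_G is symmetric

f = {(x, y): 1.0 for (x, y) in edges}                    # +1 along the orientation
f.update({(y, x): -1.0 for (x, y) in edges})             # -1 against it

def integral(field, walk):
    # integral of the field along the walk (x_0, ..., x_n)
    return sum(field[(walk[j - 1], walk[j])] * d[(walk[j - 1], walk[j])]
               for j in range(1, len(walk)))

print(integral(f, [0, 1, 2, 3, 0]))   # 4.0 != 0, so f is not in CVF(C_4)
```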
For a map \(F:V(G)\to\mathbb{R}\) we define the _gradient of \(F\)_ by

\[\nabla F=\nabla_{d}F:\bar{E}(G)\to\mathbb{R},\quad(x,y)\mapsto\frac{F(y)-F(x)}{d_{G}(x,y)}.\]

**Proposition 4.7**.: _For \(F:V(G)\to\mathbb{R}\),_

\[\|F\|_{\operatorname{Lip}}=\|\nabla F\|_{\infty}.\]

Proof.: We need to observe that

\[\sup_{x,y\in V(G),x\neq y}\frac{|F(y)-F(x)|}{d_{G}(x,y)}=\sup_{\{x,y\}\in E(G)}\frac{|F(y)-F(x)|}{d_{G}(x,y)}.\]

Indeed, let \(x,y\in V(G)\), \(x\neq y\), and let \(P=(x_{j})_{j=0}^{n}\) be a path of shortest metric length from \(x\) to \(y\); then

\[\frac{|F(y)-F(x)|}{d_{G}(x,y)}\leq\frac{\big|\sum_{j=1}^{n}F(x_{j})-F(x_{j-1})\big|}{\sum_{j=1}^{n}d_{G}(x_{j-1},x_{j})}\leq\frac{\sum_{j=1}^{n}|F(x_{j})-F(x_{j-1})|}{\sum_{j=1}^{n}d_{G}(x_{j-1},x_{j})}\leq\max_{j=1,2,\ldots,n}\frac{|F(x_{j})-F(x_{j-1})|}{d_{G}(x_{j-1},x_{j})},\]

where the last inequality follows from iteratively applying the inequality

\[\frac{a+b}{c+d}\leq\max\Big(\frac{a}{c},\frac{b}{d}\Big),\quad a,b\geq 0,\ c,d>0.\]

**Proposition 4.8**.: _Fix a point \(0\in V(G)\). Then for every \(F\in\operatorname{Lip}_{0}(V(G))\) it follows that \(\nabla F\in\operatorname{CVF}(G)\) and_

\[\|F\|_{\operatorname{Lip}}=\|\nabla F\|_{\infty}.\]

_Moreover,_

\[\nabla:\operatorname{Lip}_{0}(V(G))\to\operatorname{CVF}(G)\]

_is a surjective isometry whose inverse is given by the integral operator_

\[I(f)(x)=\int_{P}f(e)\,d_{G}(e),\text{ for }x\in V(G),\]

_where \(P\) is any path from \(0\) to \(x\)._

**Question:** How well is \(\operatorname{CVF}(G)\) complemented in \(\operatorname{VF}(G)\)? Equivalently, what is the smallest constant \(C\geq 1\) so that the operator

\[I:\operatorname{CVF}(G)\to\operatorname{Lip}_{0}(V(G),d)\]

can be extended to an operator

\[\tilde{I}:\operatorname{VF}(G)\to\operatorname{Lip}_{0}(V(G),d)\]

with \(\|\tilde{I}\|\leq C\)?

**Theorem 4.9**.: _Assume that \((V(G),d_{G})\) is finite and \(D\)-stochastically embeds into geodesic trees. Then the integral operator \(I:\operatorname{CVF}(G)\to\operatorname{Lip}_{0}(V(G),d_{G})\) can be extended to an operator \(\tilde{I}:\operatorname{VF}(G)\to\operatorname{Lip}_{0}(V(G),d_{G})\), with \(\|\tilde{I}\|\leq 8D\)._

Proof.: By Corollary 4.4 and the assumption, \((V(G),d_{G})\) bijectively \(8D\)-stochastically embeds into geodesic trees: we can find \(n\in\mathbb{N}\), geodesic trees \((T_{i},d_{i})\), with \(V(T_{i})=V(G)\), for \(i=1,2,\ldots,n\), and a probability \(\mathbb{P}=(p_{i})_{i=1}^{n}\), so that

\[d_{G}(x,y)\leq d_{i}(x,y)\text{ for all }i=1,2,\ldots,n\text{ and }x,y\in V(G), \tag{24}\]

\[\sum_{i=1}^{n}p_{i}d_{i}(x,y)\leq 8Dd_{G}(x,y),\text{ for all }x,y\in V(G). \tag{25}\]

By the Remark after Exercise 4.5, we can assume that for each \(i=1,2,\ldots,n\) and any \(e=\{x,y\}\in E(T_{i})\) it follows that \(d_{i}(x,y)=d_{G}(x,y)\), and we choose a path \(P_{e}\) in \(G\) from \(x\) to \(y\) so that

\[d_{i}(x,y)=\operatorname{length}_{d_{G}}(P_{e})=d_{G}(x,y).\]

For \(f\in\operatorname{VF}(G)\) and \(i=1,2,\ldots,n\) we define \(f^{(i)}\in\operatorname{VF}(T_{i})\) (\(=\operatorname{CVF}(T_{i})\)) by

\[f^{(i)}(e)=\frac{1}{d_{i}(x,y)}\int_{P_{e}}f(e^{\prime})\,d_{G}(e^{\prime})=\frac{1}{d_{G}(x,y)}\int_{P_{e}}f(e^{\prime})\,d_{G}(e^{\prime})\text{ for }e\in E(T_{i}).\]

From (24) we deduce that

\[|f^{(i)}(e)|\leq\|f\|_{\infty}\frac{d_{G}(x,y)}{d_{i}(x,y)}\leq\|f\|_{\infty}\text{ for all }i=1,2,\ldots,n\text{ and }e\in E(T_{i}). \tag{26}\]
Denote the (unique) path from \(x\) to \(y\) in \(T_{i}\) by \([x,y]_{i}\). We define \(\tilde{I}(f)\in\operatorname{Lip}_{0}(V(G),d_{G})\) by

\[\tilde{I}(f)(x):=\sum_{i=1}^{n}p_{i}\int_{[0,x]_{i}}f^{(i)}(e)\,d_{i}(e)=\sum_{i=1}^{n}p_{i}\sum_{e\in E([0,x]_{i})}f^{(i)}(e)d_{i}(e)=\sum_{i=1}^{n}p_{i}\sum_{e\in E([0,x]_{i})}\int_{P_{e}}f(e^{\prime})\,d_{G}(e^{\prime}).\]

We note:

* If \(f\) is a conservative field, then \(\tilde{I}(f)=I(f)\). Indeed, denote for \(i=1,2,\ldots,n\) and \(x\in V(G)\) the walk in \(G\) from \(0\) to \(x\), obtained by concatenating the paths \(P_{e}\), \(e\in E([0,x]_{i})\), by \(W_{i}\). Then
\[\tilde{I}(f)(x)=\sum_{i=1}^{n}p_{i}\sum_{e\in E([0,x]_{i})}\int_{P_{e}}f\,d_{G}(e)=\sum_{i=1}^{n}p_{i}\int_{W_{i}}f(e)\,d_{G}(e)=I(f)(x).\]
* For \(e=\{x,y\}\in E(G)\), it follows from (26) that
\[|\tilde{I}(f)(y)-\tilde{I}(f)(x)|=\Big|\sum_{i=1}^{n}p_{i}\int_{[x,y]_{i}}f^{(i)}(e)\,d_{i}(e)\Big|\leq\sum_{i=1}^{n}p_{i}\Big|\int_{[x,y]_{i}}f^{(i)}(e)\,d_{i}(e)\Big|\leq\|f\|_{\infty}\sum_{i=1}^{n}p_{i}\sum_{e\in E([x,y]_{i})}d_{i}(e)=\|f\|_{\infty}\sum_{i=1}^{n}p_{i}d_{i}(x,y)\leq 8Dd_{G}(x,y)\|f\|_{\infty},\]
and, thus, by Proposition 4.7, \(\|\tilde{I}(f)\|_{\operatorname{Lip}}\leq 8D\|f\|_{\infty}\).

**Corollary 4.10**.: _If \((V(G),d_{G})\) is a finite geodesic graph which \(D\)-stochastically embeds into geodesic trees, then \(\mathcal{F}(V(G),d_{G})\) is \(8D\)-complemented in \(\ell_{1}(E(G))\)._

Proof.: By Theorem 4.9, \(\operatorname{Lip}_{0}(V(G),d_{G})\equiv\operatorname{CVF}(G)\) is \(8D\)-complemented in \(\operatorname{VF}(G)\equiv\ell_{\infty}(E(G))\). Passing to duals, we obtain that \(\mathcal{F}(V(G),d_{G})\) is \(8D\)-complemented in \(\ell_{1}(E(G))\).

Now assume that \(G=(V(G),E(G))\) is a countable graph with a geodesic metric \(d_{G}\) which is \(D\)-stochastically embeddable into trees. We cannot use Gupta's result, Theorem 4.3, in this setting; therefore, we must also assume that \((G,d_{G})\) is bijectively \(D\)-stochastically embeddable into trees. So let \(I\subset\mathbb{N}\), \(\mathbb{P}=(p_{i})_{i\in I}\subset(0,1]\) with \(\sum_{i\in I}p_{i}=1\), and for \(i\in I\) let \(T_{i}=(V(T_{i}),E(T_{i}))\) be a tree with \(V(T_{i})=V(G)\) and a geodesic metric \(d_{i}\), with \(d_{i}(e)=d_{G}(e)\) if \(e\in E(T_{i})\), so that

\[d_{i}(x,y)\geq d_{G}(x,y),\text{ for }i\in I\text{ and }x,y\in V(G), \tag{27}\]

\[\mathbb{E}_{\mathbb{P}}\big(d_{i}(x,y)\big)=\sum_{i\in I}p_{i}d_{i}(x,y)\leq Dd_{G}(x,y),\text{ for }x,y\in V(G). \tag{28}\]

Choose for each \(i\in I\) a root \(v_{i}\in V(T_{i})\) and define the tree \(T\) by _gluing the trees \(T_{i}\) together at the \(v_{i}\)_, _i.e.,_ put \(V(T)=\bigcup_{i\in I}\{i\}\times V(T_{i})\), identify \((i,v_{i})\) with \((j,v_{j})\) for \(i,j\in I\), denote this point by \(0\) (the root of \(T\)), and put \(E(T)=\bigcup_{i\in I}\big\{\{(i,x),(i,y)\}:\{x,y\}\in E(T_{i})\big\}\). We put \(w_{e}=d_{i}(x,y)\), for \(e=\{(i,x),(i,y)\}\in E(T)\) (and thus \(\{x,y\}\in E(T_{i})\)). This defines a weight function on \(E(T)\), which generates a geodesic metric \(d_{T}\) with the property that \(d_{T}(e)=d_{i}(x,y)\), for \(e=\{(i,x),(i,y)\}\in E(T)\). We direct the edges of \(E(T)\) by choosing the orientation \(((i,x),(i,y))\) for \(\{(i,x),(i,y)\}\in E(T)\) if \(d_{T}(0,(i,x))<d_{T}(0,(i,y))\).
We now consider the maps \(f_{i}:V(G)\to V(T)\), \(x\mapsto(i,x)\), which satisfy, for \(x,y\in V(G)\),

\[d_{T}\big(f_{i}(x),f_{i}(y)\big)\geq d_{G}(x,y)\text{ and }\mathbb{E}_{\mathbb{P}}\big(d_{T}\big(f_{i}(x),f_{i}(y)\big)\big)\leq Dd_{G}(x,y). \tag{29}\]

Again, as in Theorem 4.9, we can choose for each \(e\in E_{d}(T)\), say \(e=\big((i,x),(i,y)\big)\) with \(i\in I\), a path in \(G\) from \(x\) to \(y\), which we denote by \(P_{e}\), with \(\operatorname{length}_{d_{G}}(P_{e})=d_{T}(e)=d_{G}(x,y)\). For \(f\in\operatorname{VF}(G)\), \(i\in I\), and \(e=\{(i,x),(i,y)\}\in E(T)\) we define

\[f^{(i)}(e)=\frac{1}{d_{T}(e)}\int_{P_{e}}f(e^{\prime})\,d_{G}(e^{\prime})=\frac{1}{d_{G}(x,y)}\int_{P_{e}}f(e^{\prime})\,d_{G}(e^{\prime}).\]

We deduce that

\[|f^{(i)}(e)|\leq\|f\|_{\infty}\frac{d_{G}(x,y)}{d_{T}(e)}\leq\|f\|_{\infty}\text{ for all }i\in I\text{ and }e=((i,x),(i,y))\in E_{d}(T). \tag{30}\]

For \(x,y\in V(G)\) and \(i\in I\) denote the (unique) path from \((i,x)\) to \((i,y)\) in \(T\) by \([x,y]_{i}\). We define \(\tilde{I}(f)\in\operatorname{Lip}_{0}(V(G),d_{G})\) by

\[\tilde{I}(f)(x):=\sum_{i\in I}p_{i}\int_{[0,x]_{i}}f^{(i)}(e)\,d_{T}(e)=\sum_{i\in I}p_{i}\sum_{e\in E([0,x]_{i})}f^{(i)}(e)d_{T}(e)=\sum_{i\in I}p_{i}\sum_{e\in E([0,x]_{i})}\int_{P_{e}}f(e^{\prime})\,d_{G}(e^{\prime}).\]

We note:

* If \(f\) is a conservative field, then \(\tilde{I}(f)=I(f)\). Indeed, denote for \(i\in I\) and \(x\in V(G)\) the walk in \(G\) from \(0\) to \(x\), obtained by concatenating the paths \(P_{e}\), \(e\in E([0,x]_{i})\), by \(W_{i}\). Then
\[\tilde{I}(f)(x)=\sum_{i\in I}p_{i}\sum_{e\in E([0,x]_{i})}\int_{P_{e}}f\,d_{G}(e)=\sum_{i\in I}p_{i}\int_{W_{i}}f(e)\,d_{G}(e)=I(f)(x).\]
* For \(e=\{x,y\}\in E(G)\), it follows from (30) that
\[|\tilde{I}(f)(y)-\tilde{I}(f)(x)|=\Big|\sum_{i\in I}p_{i}\int_{[x,y]_{i}}f^{(i)}(e)\,d_{T}(e)\Big|\leq\sum_{i\in I}p_{i}\Big|\int_{[x,y]_{i}}f^{(i)}(e)\,d_{T}(e)\Big|\leq\|f\|_{\infty}\sum_{i\in I}p_{i}\sum_{e\in E([x,y]_{i})}d_{T}(e)=\|f\|_{\infty}\sum_{i\in I}p_{i}d_{i}(x,y)\leq Dd_{G}(x,y)\|f\|_{\infty},\]
and, thus, \(\|\tilde{I}(f)\|_{\operatorname{Lip}}\leq D\|f\|_{\infty}\). As in Corollary 4.10, it follows that \(\mathcal{F}(V(G),d_{G})\) is \(D\)-complemented in \(\ell_{1}(E(G))\).

## 5. Lower estimates for embeddings of \(\mathcal{F}(M)\) into \(L_{1}\)

In this last section, we want to formulate a criterion on geodesic graphs \((G,d)\) which implies that the distortion of embeddings of \(\mathcal{F}(V(G),d)\) into \(L_{1}\) has to satisfy lower estimates. This will lead to sequences of geodesic graphs \((G_{n},d_{n})\) for which

\[c_{L_{1}}\big(\mathcal{F}(V(G_{n}),d_{n})\big)\geq C\sqrt{\log(|V(G_{n})|)}.\]

Among these sequences are, for example, the sequence of discrete tori \(\big((\mathbb{Z}/n\mathbb{Z})^{2}:n\in\mathbb{N}\big)\) [17] and the sequence of diamond graphs \((D_{n}:n\in\mathbb{N})\) [2]. The idea of the proof goes back to a result of Kislyakov from 1975 [15]. This result of Kislyakov implies, for example, that \(\mathcal{F}(\mathbb{R}^{2})\) is not isomorphic to \(\mathcal{F}(\mathbb{R})\) (which is isometrically isomorphic to \(L_{1}(\mathbb{R})\)).

Throughout this section, \((G,d)\) is a finite geodesic graph and \(\nu\) a probability on \(E(G)\) whose support is all of \(E(G)\).
### Isoperimetric dimension and the Sobolev inequality

We define the probability on \(V(G)\) _induced by \(\nu\)_ as follows:

\[\mu(v)=\mu_{\nu}(v)=\frac{1}{2}\sum_{e\in E(G),v\in e}\nu(e),\quad v\in V(G).\]

Note that

\[\sum_{v\in V(G)}\mu(v)=\sum_{v\in V(G)}\frac{1}{2}\sum_{e\in E(G),v\in e}\nu(e)=\frac{1}{2}\sum_{e\in E(G)}\sum_{v\in e}\nu(e)=1,\]

which shows that \(\mu\) is indeed a probability on \(V(G)\).

For \(A\subset V(G)\) we define the _boundary_ of \(A\) to be

\[\partial_{G}A:=\{\{x,y\}\in E(G)\colon|\{x,y\}\cap A|=1\}\]

and the _perimeter_ of \(A\) to be

\[\mathrm{Per}_{\nu,d}(A):=\sum_{e\in\partial_{G}A}\frac{\nu(e)}{d(e)}.\]

**Definition 5.1** (Isoperimetric dimension).: For \(\delta\in[1,\infty)\) and \(C_{iso}\in(0,\infty)\), we say that \((G,d)\) has \(\nu\)_-isoperimetric dimension_ \(\delta\) with constant \(C_{iso}\) if for every \(A\subset V(G)\)

\[\min\{\mu(A),\mu(A^{c})\}^{\frac{\delta-1}{\delta}}\leq C_{iso}\mathrm{Per}_{\nu,d}(A). \tag{31}\]

**Definition 5.2** (Sobolev Norm).: For \(f\colon(V(G),d)\to\mathbb{R}\) and \(p\in[1,\infty]\), we define the \((1,p)\)-Sobolev norm (with respect to \(\nu\) and \(d\)) of \(f\) by

\[\|f\|_{W^{1,p}(\nu,d)}=\|\nabla_{d}f\|_{L_{p}(\nu)}=\mathbb{E}_{\nu}[|\nabla_{d}f|^{p}]^{1/p}=\left[\int_{E(G)}|\nabla_{d}f(e)|^{p}\,d\nu(e)\right]^{1/p}=\left[\sum_{e=\{u,v\}\in E(G)}\frac{|f(u)-f(v)|^{p}}{d(u,v)^{p}}\nu(e)\right]^{1/p},\]

with the usual convention when \(p=\infty\). Note that if \(\nu(e)>0\) for all \(e\in E(G)\), then

\[\|f\|_{W^{1,\infty}(\nu,d)}=\max_{e=\{u,v\}\in E(G)}\frac{|f(u)-f(v)|}{d(u,v)}=\|f\|_{\mathrm{Lip}}.\]

**Theorem 5.3** (Sobolev inequality from isoperimetric inequality).: _Assume that \((G,d)\) has \(\nu\)-isoperimetric dimension \(\delta\) with constant \(C\). Then for every map \(f:(V(G),d)\to\mathbb{R}\),_

\[\|f-\mathbb{E}_{\mu}f\|_{L_{\delta^{\prime}}(\mu)}\leq 2C\|f\|_{W^{1,1}(\nu,d)}, \tag{32}\]

_where \(\mathbb{E}_{\mu}f=\int_{V(G)}f(x)\,d\mu(x)\), and \(\delta^{\prime}\) is the Hölder conjugate exponent of \(\delta\), i.e., \(\frac{1}{\delta}+\frac{1}{\delta^{\prime}}=1\)._

Proof.: Exercise. Hint: (32) follows immediately from (31) if \(f=1_{A}\).

### Lipschitz-spectral profile of a graph

Before we define what we mean by a "Lipschitz-spectral profile" of a graph, we want to motivate it with an example.

**Example 5.4**.: Consider the finite abelian group \(G=(\mathbb{Z}/n\mathbb{Z})^{2}\), with the metric

\[d\big((v_{1},v_{2}),(u_{1},u_{2})\big)=\frac{1}{n}\max\big(|v_{1}-u_{1}|,|v_{2}-u_{2}|\big).\]

The _characters_ of \(G\) are the group homomorphisms \(\chi:G\to T=\{e^{i2\pi x}:0\leq x\leq 1\}\). These characters can be represented as follows: \(\chi:G\to T\) is a character if and only if, for some \(k,m\in\{0,1,2,\ldots,n-1\}\),

\[\chi=\chi_{(k,m)}:(\mathbb{Z}/n\mathbb{Z})^{2}\to T,\quad(x,y)\mapsto e^{\frac{2\pi i}{n}(xk+ym)}.\]

Note the following properties of \((\chi_{(k,m)}:0\leq k,m<n)\):

* \((\chi_{(k,m)}:0\leq k,m<n)\) is an orthonormal basis in \(L_{2}((\mathbb{Z}/n\mathbb{Z})^{2},\mu)\), where \(\mu\) is the uniform distribution,
### Lipschitz-spectral profile of a graph

Before we define what we mean by a "Lipschitz-spectral profile of a graph", we want to motivate it with an example:

**Example 5.4**.: Consider the finite abelian group \(G=(\mathbb{Z}/n\mathbb{Z})^{2}\), with the metric
\[d\big((v_{1},v_{2}),(u_{1},u_{2})\big)=\frac{1}{n}\max\big(|v_{1}-u_{1}|,|v_{2}-u_{2}|\big).\]
The _characters_ of \(G\) are the group homomorphisms \(\chi:G\to T=\{e^{i2\pi x}:0\leq x\leq 1\}\). These characters can be represented as follows: \(\chi:G\to T\) is a character if and only if for some \(k,m\in\{0,1,2,\ldots,n-1\}\)
\[\chi=\chi_{(k,m)}:(\mathbb{Z}/n\mathbb{Z})^{2}\to T,\quad(x,y)\mapsto e^{\frac{2\pi i}{n}(xk+ym)}.\]
Note the following properties of \((\chi_{(k,m)}:0\leq k,m<n)\):
* \((\chi_{(k,m)}:0\leq k,m<n)\) is an orthonormal basis of \(L_{2}((\mathbb{Z}/n\mathbb{Z})^{2},\mu)\), where \(\mu\) is the uniform distribution,
* \(\|\chi_{(k,m)}\|_{L_{\infty}(\mu)}=\|\chi_{(k,m)}\|_{L_{1}(\mu)}=1\), for \(0\leq k,m<n\),
* \(\|\chi_{(k,m)}\|_{\mathrm{Lip}}\leq C\max(k,m)\), and thus
\[\big|\{(k,m)\in(\mathbb{Z}/n\mathbb{Z})^{2}:\|\chi_{(k,m)}\|_{\mathrm{Lip}}\leq L\}\big|\geq cL^{2},\;\text{for }L=1,2,\ldots,n.\]

**Definition 5.5** (Lipschitz-spectral profile).: Let \(\delta\in[1,\infty)\), \(\beta\in[1,\infty)\), and \(C\geq 1\). We say that \((G,d)\) has \((\mu,d)\)-_Lipschitz-spectral profile of dimension \(\delta\) and bandwidth \(\beta\) with constant \(C\)_ if there exists a collection of functions \(F=\{f_{i}\colon V(G)\to\mathbb{R}\}_{i\in I}\) satisfying:
1. \(C^{-1}\leq\inf_{i\in I}\|f_{i}\|_{L_{1}(\mu)}\leq\sup_{i\in I}\|f_{i}\|_{L_{\infty}(\mu)}\leq C\),
2. \(\{f_{i}\}_{i\in I}\) is an orthogonal family in \(L_{2}(\mu)\), and
3. for every \(s\in[1,\beta]\), \(|\{i\in I\colon\mathrm{Lip}(f_{i})\leq s\}|\geq C^{-1}s^{\delta}\).

### Main result

**Theorem 5.6**.: _Let \(\delta_{iso}\in[2,\infty)\), \(\delta_{spec}\in[1,\infty)\), and \(C\geq 1\). If \(G\) has \((\nu,d)\)-isoperimetric dimension \(\delta_{iso}\) with constant \(C\), and a Lipschitz-spectral profile of dimension \(\delta_{spec}\) and bandwidth \(\beta\) with constant \(C\), then any \(D\)-isomorphic embedding from the Lipschitz-free space \(\mathcal{F}(V(G),d)\) into a finite-dimensional \(L_{1}\)-space \(\ell_{1}^{N}\) satisfies_
\[D\geq\frac{1}{2C^{5}}\left(\int_{1}^{\beta}s^{\delta_{spec}-\delta_{iso}-1}\,ds\right)^{\frac{1}{\delta_{iso}}}. \tag{33}\]

_Sketch._ Assume that \(T:\mathcal{F}(V(G),d)\to\ell_{1}^{N}\) is such that \(\|\mu\|_{\mathcal{F}}\leq\|T(\mu)\|_{1}\leq D\|\mu\|_{\mathcal{F}}\) for all \(\mu\in\mathcal{F}(V(G),d)\). We need to find a lower estimate for \(D\). Passing to the adjoint \(T^{*}:\ell_{\infty}^{N}\twoheadrightarrow\operatorname{Lip}_{0}(V(G),d)\), which is a surjection, it follows that
\[B_{\operatorname{Lip}_{0}(V(G),d)}\subset T^{*}(B_{\ell_{\infty}^{N}})\text{ and }\|T^{*}\|\leq D. \tag{34}\]
We define several operators:
* \(\iota_{1}:\operatorname{Lip}_{0}(V(G),d)\to W^{1,1}(V(G),d,\nu)\), the identity, which is a 1-summing operator with \(\pi_{1}(\iota_{1})=1\). (Note that \(\nabla_{d}:\operatorname{Lip}_{0}(V(G),d)\to\ell_{\infty}(E(G))\equiv L_{\infty}(E(G),\nu)\) and \(\nabla_{d}:W^{1,1}(V(G),d,\nu)\to L_{1}(\nu)\) are isometric embeddings.)
* \(\iota_{2}:W^{1,1}(d,\nu)\to L_{\delta^{\prime}_{iso}}(\mu)\), the identity. By the Sobolev inequality, \(\|\iota_{2}\|\leq 2C\).
* \(FT:g\mapsto\big(\mathbb{E}_{\mu}(g\cdot f_{j})\big)_{j\in J}\), where \(J:=\{i\in I:\operatorname{Lip}(f_{i})\leq\beta\}\). By the uniform bound on the \(L_{\infty}\)-norms of \((f_{j})_{j\in J}\),
\[\max_{j\in J}\big|\mathbb{E}_{\mu}(g\cdot f_{j})\big|\leq C\|g\|_{L_{1}(\mu)},\text{ for all }g\in L_{1}(\mu),\]
and thus \(\|FT\|_{L_{1}(\mu)\to\ell_{\infty}(J)}\leq C\). Since \((f_{j})_{j\in J}\) is orthogonal in \(L_{2}(\mu)\) with \(\|f_{j}\|_{L_{2}(\mu)}\leq C\), we also have \(\|FT\|_{L_{2}(\mu)\to\ell_{2}(J)}\leq C\). We deduce, therefore, from the Riesz–Thorin interpolation theorem (recall that \(2\leq\delta_{iso}<\infty\), and thus \(1<\delta^{\prime}_{iso}\leq 2\)) that \(\|FT\|_{L_{\delta^{\prime}_{iso}}(\mu)\to\ell_{\delta_{iso}}(J)}\leq C\).

Then we consider the composition of all these operators,
\[R:\ell_{\infty}^{N}\xrightarrow{\;T^{*}\;}\operatorname{Lip}_{0}(d)\xrightarrow{\;\iota_{1}\;}W^{1,1}(d,\nu)\xrightarrow{\;\iota_{2}\;}L_{\delta^{\prime}_{iso}}(\mu)\xrightarrow{\;FT\;}\ell_{\delta_{iso}}(J).\]
Since the summing norm \(\pi_{1}\) is an _ideal norm_, we deduce that
\[\pi_{1}(R)\leq\|FT\|\cdot\|\iota_{2}\|\cdot\pi_{1}(\iota_{1})\cdot\|T^{*}\|\leq 2DC^{2}.\]
An important property of 1-summing operators is that 1-summing operators between two Banach lattices are _lattice bounded_.
In our case, this means the following: Let \(R_{j}:\ell_{\infty}^{N}\to\mathbb{R}\) be the \(j\)-th component of \(R\), _i.e.,_ \(R(x)=\sum_{j\in J}R_{j}(x)e_{j}\), where \(e_{j}\) is the \(j\)-th unit vector of \(\ell_{\delta_{iso}}(J)\). Then there exists a \(b\in\ell_{\delta_{iso}}^{+}(J)\) with \(\|b\|_{\delta_{iso}}\leq\pi_{1}(R)\leq 2DC^{2}\) so that for every \(x\in\ell_{\infty}^{N}\)
\[|R_{j}(x)|\leq b_{j}\|x\|_{\infty}.\]
Now, using that \(B_{\operatorname{Lip}_{0}(V(G),d)}\subset T^{*}(B_{\ell_{\infty}^{N}})\), choose for every \(j\in J\) an \(x_{j}\in B_{\ell_{\infty}^{N}}\) so that
\[T^{*}(x_{j})=\frac{f_{j}}{\|f_{j}\|_{\mathrm{Lip}}},\text{ for all }j\in J.\]
It follows that
\[R(x_{j})=\frac{FT(f_{j})}{\|f_{j}\|_{\mathrm{Lip}}}=\frac{\|f_{j}\|_{L_{2}(\mu)}^{2}}{\|f_{j}\|_{\mathrm{Lip}}}\,e_{j}\ \text{ and }\ |R_{j}(x_{j})|\leq b_{j}\text{ for all }j\in J.\]
Therefore we obtain
\[2C^{2}D\geq\|b\|_{\ell_{\delta_{iso}}}=\Big(\sum_{j\in J}b_{j}^{\delta_{iso}}\Big)^{1/\delta_{iso}}\geq\Big(\sum_{j\in J}|R_{j}(x_{j})|^{\delta_{iso}}\Big)^{1/\delta_{iso}}=\Big(\sum_{j\in J}\big(\|f_{j}\|_{L_{2}(\mu)}^{2}/\|f_{j}\|_{\mathrm{Lip}}\big)^{\delta_{iso}}\Big)^{1/\delta_{iso}}\geq C^{-2}\Big(\sum_{j\in J}\Big(\frac{1}{\|f_{j}\|_{\mathrm{Lip}}}\Big)^{\delta_{iso}}\Big)^{1/\delta_{iso}},\]
where in the last step we used \(\|f_{j}\|_{L_{2}(\mu)}\geq\|f_{j}\|_{L_{1}(\mu)}\geq C^{-1}\). Thus
\[D\geq\frac{1}{2C^{4}}\Big(\sum_{j\in J}\Big(\frac{1}{\|f_{j}\|_{\mathrm{Lip}}}\Big)^{\delta_{iso}}\Big)^{1/\delta_{iso}}. \tag{35}\]
From here, we calculate the sum by applying the classical formula
\[\int_{\Omega}|h|^{p}\,d\sigma=p\int_{0}^{\infty}t^{p-1}\sigma(\{|h|>t\})\,dt\]
with \(\Omega=J\) and \(\sigma\) the counting measure:
\[\sum_{j\in J}\frac{1}{\mathrm{Lip}(f_{j})^{\delta_{iso}}}=\delta_{iso}\int_{0}^{\infty}t^{\delta_{iso}-1}\Big|\Big\{j\in J\colon\frac{1}{\mathrm{Lip}(f_{j})}>t\Big\}\Big|\,dt=\delta_{iso}\int_{0}^{\infty}\frac{1}{s^{\delta_{iso}-1}}\Big|\Big\{j\in J\colon\frac{1}{\mathrm{Lip}(f_{j})}>\frac{1}{s}\Big\}\Big|\frac{1}{s^{2}}\,ds=\delta_{iso}\int_{0}^{\infty}\frac{1}{s^{\delta_{iso}+1}}\Big|\Big\{j\in J\colon\mathrm{Lip}(f_{j})<s\Big\}\Big|\,ds\geq\delta_{iso}\int_{1}^{\beta}\frac{1}{s^{\delta_{iso}+1}}\frac{s^{\delta_{spec}}}{C}\,ds, \tag{36}\]
where we used property (3) of the Lipschitz-spectral profile in the last inequality. Together with (35), this yields our claim (33).

### Examples satisfying Theorem 5.6

**Lemma 5.7**.: _Let \(\Omega\) be a_

## 6. Appendix

### Proof of Birkhoff's Theorem 2.15

**Theorem 6.1**.: _(Birkhoff) Assume \(n\in\mathbb{N}\) and that \(A=(a_{i,j})_{i,j=1}^{n}\) is a doubly stochastic matrix, i.e.,_
\[0\leq a_{i,j}\leq 1\text{ for all }1\leq i,j\leq n,\]
\[\sum_{j=1}^{n}a_{i,j}=1\text{ for }i=1,2,\ldots,n,\text{ and }\sum_{i=1}^{n}a_{i,j}=1\text{ for }j=1,2,\ldots,n.\]
_Then \(A\) is a convex combination of permutation matrices, i.e., of matrices which have in each row and each column exactly one entry equal to \(1\) and vanish elsewhere._

Proof.: Clearly, the set \(\text{DS}_{n}\) of all doubly stochastic \(n\times n\) matrices is closed, bounded, and convex in \(\mathbb{R}^{n\times n}\), hence compact, and is thus the convex hull of its extreme points. It is, therefore, enough to show that a matrix \(A\in\text{DS}_{n}\) which has entries that are not integers cannot be an extreme point of \(\text{DS}_{n}\).
So assume that for some \(r_{1}\) and \(s_{1}\) in \(\{1,2,\ldots,n\}\) we have \(0<a_{r_{1},s_{1}}<1\). Since the row \(r_{1}\) adds up to \(1\), there must be an \(s_{2}\neq s_{1}\) in \(\{1,2,\ldots,n\}\) with \(0<a_{r_{1},s_{2}}<1\). Again, since the column \(s_{2}\) adds up to \(1\), there must be an \(r_{2}\neq r_{1}\) in \(\{1,2,\ldots,n\}\) with \(0<a_{r_{2},s_{2}}<1\). We continue this way, and eventually, for some \(k\), either \((r_{k},s_{k})\) or \((r_{k},s_{k+1})\) must be among the previously chosen pairs \((r_{j},s_{j})\) or \((r_{j},s_{j+1})\). Possibly by changing the starting point, relabeling, and exchanging rows with columns, we obtain a cycle which is either of the form
\[(r_{1},s_{1}),(r_{1},s_{2}),(r_{2},s_{2}),\ldots,(r_{k-1},s_{k}),(r_{k},s_{k})=(r_{1},s_{1})\]
(implying the cycle is of even length) or of the form
\[(r_{1},s_{1}),(r_{1},s_{2}),(r_{2},s_{2}),\ldots,(r_{k},s_{k}),(r_{k},s_{k+1})=(r_{1},s_{1})\]
(implying the cycle is of odd length), so that \(r_{j}\neq r_{j+1}\) and \(s_{j}\neq s_{j+1}\) (if the cycle is of the first form, we put \(s_{i}=s_{i\bmod(k-1)}\) if \(i>k-1\), and if it is of the second form, we put \(s_{i}=s_{i\bmod k}\) if \(i>k\)) and \(0<a_{r_{j},s_{j}},a_{r_{j},s_{j+1}}<1\). Let us assume we have chosen a cycle of minimal length. Then we claim it must be of the first form, _i.e.,_ it must be of even length. Indeed, assume it is of the second form; then \((r_{k},s_{k+1})=(r_{1},s_{1})\) and \((r_{1},s_{2})\) are in the same row, and therefore
\[(r_{2},s_{2}),(r_{2},s_{3}),(r_{3},s_{3}),\ldots,(\underbrace{r_{k}}_{=r_{1}},s_{k}),(r_{1},s_{2})\]
is a shorter cycle, which is a contradiction; thus, our shortest cycle is of the first form. Let now \(\varepsilon\in(0,1)\) be small enough so that for all \(j\leq k\)
\[\varepsilon<\min(a_{r_{j},s_{j}},a_{r_{j},s_{j+1}},1-a_{r_{j},s_{j}},1-a_{r_{j},s_{j+1}}).\]
Then define
\[b_{s,t}^{(1)}=\begin{cases}a_{r_{j},s_{j}}+\varepsilon&\text{ if }(s,t)=(r_{j},s_{j})\text{ for some }j\\ a_{r_{j},s_{j+1}}-\varepsilon&\text{ if }(s,t)=(r_{j},s_{j+1})\text{ for some }j\\ a_{s,t}&\text{ otherwise,}\end{cases}\]
and
\[b_{s,t}^{(2)}=\begin{cases}a_{r_{j},s_{j}}-\varepsilon&\text{ if }(s,t)=(r_{j},s_{j})\text{ for some }j\\ a_{r_{j},s_{j+1}}+\varepsilon&\text{ if }(s,t)=(r_{j},s_{j+1})\text{ for some }j\\ a_{s,t}&\text{ otherwise.}\end{cases}\]
It follows that \(B^{(1)}=(b^{(1)}_{s,t})_{s,t=1}^{n}\) and \(B^{(2)}=(b^{(2)}_{s,t})_{s,t=1}^{n}\) are in \(\text{DS}_{n}\) and
\[A=\frac{1}{2}\big(B^{(1)}+B^{(2)}\big),\]
which implies that \(A\) cannot be an extreme point.

Proof of Proposition 2.14.: Let \(A=\{x_{1},x_{2},\ldots,x_{n}\}\) and \(B=\{y_{1},y_{2},\ldots,y_{n}\}\). We note that for every \(\pi\in\mathcal{P}(\sigma,\tau)\) the matrix \(M=(n\pi(x_{i},y_{j}):1\leq i,j\leq n)\) is doubly stochastic (since \(\sum_{x\in A}\pi(x,y)=\tau(y)=\frac{1}{|B|}=\frac{1}{|A|}=\sigma(x)=\sum_{y\in B}\pi(x,y)\)). Thus, by (10),
\[d_{\text{Wa}}(\mu_{A},\mu_{B})=\frac{1}{n}\min\Big\{\sum_{i,j=1}^{n}M_{i,j}d(x_{i},y_{j}):M\in\text{DS}_{n}\Big\}.\]
Since the map
\[\text{DS}_{n}\to[0,\infty),\quad M\mapsto\sum_{i,j=1}^{n}M_{i,j}d(x_{i},y_{j})\]
is linear, it attains its minimum at an extreme point, and our claim follows from Theorem 2.15.
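Proposition 2.14 has a direct computational reading: since the linear functional \(M\mapsto\sum_{i,j}M_{i,j}d(x_{i},y_{j})\) attains its minimum over \(\text{DS}_{n}\) at a permutation matrix, computing \(d_{\text{Wa}}(\mu_{A},\mu_{B})\) for two \(n\)-point sets reduces to an assignment problem. Here is a minimal Python sketch (illustrative only; the random point sets are made up, and we use SciPy's assignment solver):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Two n-point subsets of the plane, with the Euclidean metric.
rng = np.random.default_rng(0)
n = 5
A = rng.random((n, 2))
B = rng.random((n, 2))

# Cost matrix d(x_i, y_j).
C = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)

# By Birkhoff's theorem, the minimum over DS_n is attained at a
# permutation matrix, so d_Wa(mu_A, mu_B) is an assignment problem.
row, col = linear_sum_assignment(C)
d_wa = C[row, col].sum() / n
print(d_wa)
```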
### Proof of Theorem 4.2 on stochastic embeddings of finite metric spaces into trees

**Theorem 6.2**.: _[6] Let \(M\) be a metric space with \(n\in\mathbb{N}\) elements. Then there is an \(O(\log n)\)-stochastic embedding of \(M\) into the class of weighted trees._

We need some notation. We fix a metric space \((M,d)\).
* \(B_{r}(x)=\{y\in M:d(x,y)\leq r\}\), for \(x\in M\) and \(r>0\).
* For \(A\subset M\), the _diameter of \(A\)_ is \(\operatorname{diam}(A)=\sup_{x,y\in A}d(x,y)\).
* The set of partitions of \(M\) is denoted by \(\mathcal{P}_{M}\). The elements of a partition \(P\) are called _clusters of \(P\)_. Let \(P=(A_{1},A_{2},\ldots,A_{n})\in\mathcal{P}_{M}\). For \(r\geq 0\), \(P\) is called \(r\)-_bounded_ if \(\operatorname{diam}(A_{j})=\max_{x,z\in A_{j}}d(x,z)<r\) for all \(j=1,2,\ldots,n\). For \(x\in M\) and a partition \(P=(A_{1},A_{2},\ldots,A_{n})\) of \(M\), we denote the unique \(A_{j}\in P\) which contains \(x\) by \(P_{x}\).
* A _stochastic decomposition_ is a probability measure \(\mathbb{P}\) on \(\mathcal{P}_{M}\), and its _support_ is given by \(\operatorname{supp}(\mathbb{P})=\{P\in\mathcal{P}_{M}:\mathbb{P}(P)>0\}\).

**Lemma 6.3**.: _Let \(R>0\). There is a probability measure \(\mathbb{P}\) on \(\mathcal{P}_{M}\) so that_
\[\operatorname{supp}(\mathbb{P})\subset\big\{P\in\mathcal{P}_{M}:P\text{ is }R\text{-bounded}\big\}\ \text{ and} \tag{37}\]
\[\mathbb{P}\big(P\in\mathcal{P}_{M}:B_{t}(x)\subset P_{x}\big)\geq\Big(\frac{|B_{R/8}(x)|}{|B_{R}(x)|}\Big)^{8t/R}\text{ for all }0<t\leq R/8\text{ and }x\in M. \tag{38}\]

Proof.: Let \(M=\{x_{1},x_{2},\ldots,x_{n}\}\), let \(\pi\) be a permutation of \(\{1,2,\ldots,n\}\), and let \(r\in\big[\frac{R}{4},\frac{R}{2}\big)\). We define a partition \(\tilde{P}(\pi,r)=\big(\tilde{C}_{i}(\pi,r)\big)_{i=1}^{n}\) as follows: \(\tilde{C}_{1}(\pi,r)=B_{r}(x_{\pi(1)})\), and assuming \(\tilde{C}_{1}(\pi,r),\tilde{C}_{2}(\pi,r),\ldots,\tilde{C}_{j-1}(\pi,r)\) have been defined, let \(\tilde{C}_{j}(\pi,r)=B_{r}(x_{\pi(j)})\setminus\bigcup_{i=1}^{j-1}B_{r}(x_{\pi(i)})\). Some of the \(\tilde{C}_{j}(\pi,r)\), \(j=1,2,\ldots,n\), could be empty, and we let \(P(\pi,r)=\big(C_{j}(\pi,r)\big)_{j=1}^{m}\), with \(1\leq m\leq n\), be the nonempty members of \(\big(\tilde{C}_{j}(\pi,r)\big)_{j=1}^{n}\), in the order inherited from \(\tilde{P}(\pi,r)\). Let \(\mu\) be the uniform distribution on \(\Pi_{n}\), the set of all permutations of \(\{1,2,\ldots,n\}\) (thus \(\mu(\pi)=\frac{1}{n!}\) for \(\pi\in\Pi_{n}\)), and let \(\nu\) be the uniform distribution on \(\big[\frac{R}{4},\frac{R}{2}\big)\). Finally, let \(\mathbb{P}\) be the image distribution of \(\mu\otimes\nu\) under the map \((\pi,r)\mapsto\tilde{P}(\pi,r)\), and thus
\[\mathbb{P}(A)=\mu\otimes\nu\big(\big\{(\pi,r):\tilde{P}(\pi,r)\in A\big\}\big)\text{ for }A\subset\mathcal{P}_{M}.\]
It follows from Fubini's theorem that
\[\mathbb{P}(A)=\int_{R/4}^{R/2}\int_{\Pi_{n}}1_{\{(\pi^{\prime},r^{\prime}):\tilde{P}(\pi^{\prime},r^{\prime})\in A\}}(\pi,r)\,d\mu(\pi)\,d\nu(r)=\int_{R/4}^{R/2}\mu(\{\pi\in\Pi_{n}:\tilde{P}(\pi,r)\in A\})\,d\nu(r)=\frac{4}{R}\int_{R/4}^{R/2}\mu(\{\pi\in\Pi_{n}:\tilde{P}(\pi,r)\in A\})\,dr. \tag{39}\]
**Claim.** For \(\frac{R}{4}\leq r<\frac{R}{2}\), \(0<t\leq r\), and \(x\in M\), it follows that
\[\mu\big(\{\pi\in\Pi_{n}:B_{t}(x)\subset\tilde{P}_{x}(\pi,r)\}\big)\geq\frac{|B_{r-t}(x)|}{|B_{r+t}(x)|}. \tag{40}\]
In order to prove the claim, we order \(B_{r+t}(x)\) into \((y_{i})_{i=1}^{a}\) and \((y_{i})_{i=a+1}^{b}\) so that
\[0=d(x,y_{1})\leq d(x,y_{2})\leq\ldots\leq d(x,y_{a})\leq r-t<d(x,y_{a+1})\leq\ldots\leq d(x,y_{b})\leq r+t.\]
Thus \(|B_{r-t}(x)|=a\), \(|B_{r+t}(x)|=b\), and \(y_{i}\in B_{r-t}(x)\) if \(1\leq i\leq a\), while \(y_{i}\in B_{r+t}(x)\setminus B_{r-t}(x)\) if \(a<i\leq b\). Define
\[E_{i}:=\left\{\pi\in\Pi_{n}:\begin{matrix}y_{\pi(s)}\not\in\{y_{1},y_{2},\ldots,y_{b}\}&\text{for }s=1,2,\ldots,i-1\\ y_{\pi(i)}\in\{y_{1},y_{2},\ldots,y_{a}\}\end{matrix}\right\}.\]
In other words, \(E_{i}\) is the event that \(i\) is the smallest \(j\leq n\) for which \(y_{\pi(j)}\in B_{r+t}(x)\), intersected with the event that \(y_{\pi(i)}\in B_{r-t}(x)\). Since for \(\pi\in E_{i}\) and \(s<i\) we have \(d(y_{\pi(s)},x)>r+t\) and \(d(y_{\pi(i)},x)\leq r-t\), it follows that \(B_{t}(x)\subset\tilde{P}_{x}(\pi,r)\). For \(i=1,2,\ldots,n\) let
\[A_{i}=\left\{\pi\in\Pi_{n}:\begin{matrix}y_{\pi(s)}\not\in\{y_{1},y_{2},\ldots,y_{b}\}&\text{for }s=1,2,\ldots,i-1\\ y_{\pi(i)}\in\{y_{1},y_{2},\ldots,y_{b}\}\end{matrix}\right\}.\]
Then the sets \((A_{i})_{i=1}^{n}\) form a partition of \(\Pi_{n}\), \(E_{i}\subset A_{i}\), and moreover \(\mu(E_{i}|A_{i})=\frac{a}{b}\) (since \(\mu(E_{i}|A_{i})\) is the probability that \(y_{\pi(i)}\in\{y_{1},\ldots,y_{a}\}\) given that \(y_{\pi(i)}\in\{y_{1},\ldots,y_{b}\}\)). Thus
\[\mu(\{\pi\in\Pi_{n}:B_{t}(x)\subset\tilde{P}_{x}(\pi,r)\})\geq\mu\Big(\bigcup_{i=1}^{n}E_{i}\Big)=\sum_{i=1}^{n}\mu(A_{i})\mu(E_{i}|A_{i})=\frac{a}{b}=\frac{|B_{r-t}(x)|}{|B_{r+t}(x)|},\]
which proves our claim.

To finish the proof of the lemma, we deduce from (39) and (40) for \(0<t\leq R/8\) that
\[\mathbb{P}(\{P\in\mathcal{P}_{M}:B_{t}(x)\subset P_{x}\})=\frac{4}{R}\int_{R/4}^{R/2}\mu\big(\{\pi\in\Pi_{n}:B_{t}(x)\subset\tilde{P}_{x}(\pi,r)\}\big)\,dr\geq\frac{4}{R}\int_{R/4}^{R/2}\frac{|B_{r-t}(x)|}{|B_{r+t}(x)|}\,dr=\frac{4}{R}\int_{R/4}^{R/2}e^{h(r-t)-h(r+t)}\,dr,\]
where \(h(s)=\log\big(|B_{s}(x)|\big)\). By Jensen's inequality, this is
\[\geq\exp\Big(\frac{4}{R}\int_{R/4}^{R/2}h(r-t)-h(r+t)\,dr\Big)=\exp\Big(\frac{4}{R}\int_{\frac{R}{4}-t}^{\frac{R}{2}-t}h(s)\,ds-\frac{4}{R}\int_{\frac{R}{4}+t}^{\frac{R}{2}+t}h(s)\,ds\Big)\geq\exp\Big(\frac{8t}{R}\Big(h\Big(\frac{R}{4}-t\Big)-h\Big(\frac{R}{2}+t\Big)\Big)\Big)\geq\exp\Big(\frac{8t}{R}\Big(h\Big(\frac{R}{8}\Big)-h(R)\Big)\Big)=\Bigg(\frac{|B_{R/8}(x)|}{|B_{R}(x)|}\Bigg)^{\frac{8t}{R}}.\]

**Corollary 6.4**.: _For the probability measure \(\mathbb{P}\) defined in Lemma 6.3, it follows that_
\[\mathbb{P}\big(\{P\in\mathcal{P}_{M}:B_{t}(x)\not\subset P_{x}\}\big)\leq\frac{8}{R}t\log\Big(\frac{|B_{R}(x)|}{|B_{R/8}(x)|}\Big),\text{ for all }t\in(0,R/8).\]

Proof.: We deduce from Lemma 6.3 that
\[\mathbb{P}\big(\{P\in\mathcal{P}_{M}:B_{t}(x)\not\subset P_{x}\}\big)=1-\mathbb{P}\big(\{P\in\mathcal{P}_{M}:B_{t}(x)\subset P_{x}\}\big)\leq 1-\Big(\frac{|B_{R/8}(x)|}{|B_{R}(x)|}\Big)^{\frac{8t}{R}}\leq\log\Bigg(\Big(\frac{|B_{R}(x)|}{|B_{R/8}(x)|}\Big)^{\frac{8t}{R}}\Bigg)=\frac{8}{R}t\log\Big(\frac{|B_{R}(x)|}{|B_{R/8}(x)|}\Big),\]
where the last inequality follows from the fact that \(1-\frac{1}{z}\leq\log(z)\) for \(z\geq 1\).
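The random partition in the proof of Lemma 6.3 is explicit enough to implement directly. Here is a small Python sketch (not from the source; the function name and sampling interface are our own): it shuffles the points, draws \(r\) uniformly from \([R/4,R/2)\), and greedily assigns to each center the not-yet-covered points of its \(r\)-ball, so every cluster lies in a ball of radius \(r\) and hence has diameter at most \(2r<R\).

```python
import random

def random_bounded_partition(points, dist, R, rng=random):
    """Sample one R-bounded partition P(pi, r) as in the proof of Lemma 6.3."""
    centers = list(points)
    rng.shuffle(centers)                  # a uniformly random permutation pi
    r = R / 4 + rng.random() * (R / 4)    # r uniform in [R/4, R/2)
    remaining = set(points)
    partition = []
    for c in centers:
        # C_j(pi, r) = B_r(x_{pi(j)}) minus the previously formed clusters
        cluster = {x for x in remaining if dist(c, x) <= r}
        if cluster:                       # keep only the nonempty clusters
            partition.append(cluster)
            remaining -= cluster
    return partition
```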
Proof of Theorem 6.2.: After rescaling, we can assume that \(d(x,y)\geq 1\) for all \(x\neq y\) in \(M\), and choose \(k\in\mathbb{N}\) so that \(\operatorname{diam}(M)\in[2^{k-1},2^{k})\). We first introduce some notation. For two partitions \(P\) and \(Q\) of the same set \(S\), we say \(P\) _subdivides_ \(Q\), and write \(P\succeq Q\), if every cluster \(A\in Q\) is a union of clusters in \(P\). If \(A\subset S\), then the _restriction of \(P\) onto \(A\)_ is
\[P|_{A}=\{B\cap A:B\in P\}\setminus\{\emptyset\}.\]
For a metric space \(M\) and any \(R>0\), we denote the probability measure on the set of \(R\)-bounded partitions constructed in Lemma 6.3 by \(\mathbb{P}^{(M,R)}\). Our probability will be defined on the set
\[\Omega=\big\{(P^{(j)})_{j=0}^{k}\subset\mathcal{P}_{M}:P^{(j)}\text{ is }2^{k-j}\text{-bounded and }P^{(j)}\succeq P^{(j-1)},\text{ for }j=1,2,\ldots,k\big\}.\]
We let \(\mathbb{P}\) be the probability measure on the subsets of \(\Omega\) uniquely defined by the following properties:
\[\mathbb{P}\big(\{\bar{P}=(P^{(j)})_{j=0}^{k}\in\Omega:P^{(0)}=M\}\big)=1, \tag{41}\]
and for \(i=1,2,\ldots,k\), \(B\subset M\) with \(\operatorname{diam}(B)\leq 2^{k-(i-1)}\), and \(\mathcal{A}\subset\mathcal{P}_{B}\), we have
\[\mathbb{P}\big(P^{(i)}|_{B}\in\mathcal{A}\,\big|\,B\text{ is a cluster of }P^{(i-1)}\big)=\mathbb{P}^{(B,2^{k-i})}(\mathcal{A}). \tag{42}\]
Condition (42) means the following: under the condition that \(P^{(i-1)}\) is given and contains a cluster \(B\subset M\), the distribution of \(P^{(i)}\) restricted to \(B\) is \(\mathbb{P}^{(B,2^{k-i})}\). So we can think of \((P^{(j)})_{j=0}^{k}\) as a stochastic process with values in \(\mathcal{P}_{M}\) whose distribution is determined by transition probabilities: \(P^{(0)}=M\) (the trivial partition), and given \(P^{(i-1)}\), we consider each cluster \(B\) of \(P^{(i-1)}\) and randomly divide \(B\) according to the distribution \(\mathbb{P}^{(B,2^{k-i})}\). Since \(d(x,y)\geq 1\) for \(x\neq y\), and since \(\operatorname{diam}(M)<2^{k}\), it follows that \(P^{(k)}\) is the _finest partition_, _i.e.,_ \(P^{(k)}=\big\{\{x\}:x\in M\big\}\). By induction on \(k\in\mathbb{N}\), it can easily be seen that the probability \(\mathbb{P}\) exists and is uniquely defined by the above properties.

For each \(\bar{P}=(P^{(j)})_{j=0}^{k}\in\Omega\), we define the following weighted tree \(T=T(\bar{P})=(V(\bar{P}),E(\bar{P}),W(\bar{P}))\), where
\[V=\bigcup_{j=0}^{k}V_{j}\text{ with }V_{j}=\{(j,B):B\text{ is a cluster of }P^{(j)}\},\text{ for }j=0,1,\ldots,k,\]
\[E=\bigcup_{j=1}^{k}E_{j}\text{ with }E_{j}=\big\{\{(j,A),(j-1,B)\}:A\in V_{j},\,B\in V_{j-1},\text{ and }A\subset B\big\}\text{ for }j=1,\ldots,k,\]
\[W:E\to\mathbb{R},\quad e\mapsto 2^{k-j}\text{ if }e\in E_{j}.\]
One might ask why we did not simply define the vertices of \(T\) as the set of all the clusters of the \(P^{(j)}\). The problem is that the partitions \(P^{(j)}\) and \(P^{(j-1)}\) could share the same clusters, and we need to distinguish them. We note that \(T\) is a tree and that \((k,\{x\})\), \(x\in M\), are the leaves of \(T\). For \(\bar{P}\) we define
\[f_{\bar{P}}:M\to T(\bar{P}),\quad x\mapsto(k,\{x\}).\]
We claim that, for some universal constant \(D\), the family \((f_{\bar{P}})_{\bar{P}\in\Omega}\) with the coefficients \((\mathbb{P}(\bar{P}))_{\bar{P}\in\Omega}\) is a \(D\log(n)\) stochastic embedding.
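The tree \(T(\bar{P})\) just defined can be written down mechanically. The following Python sketch (illustrative; representing clusters as frozensets and the function name are our own choices) builds the vertex set, edge set, and weights from a chain \(P^{(0)},\ldots,P^{(k)}\) of partitions, each subdividing the previous one:

```python
def partition_chain_to_tree(chain, k):
    """Build the weighted tree T(P_bar) from the proof of Theorem 6.2.

    `chain` is a list [P0, ..., Pk]; each P is a list of frozensets,
    P0 = [frozenset(M)] is the trivial partition, and each level
    subdivides the previous one.  Vertices are pairs (j, B), so equal
    clusters appearing on different levels stay distinct.
    """
    V = [(j, B) for j, P in enumerate(chain) for B in P]
    E, W = [], {}
    for j in range(1, len(chain)):
        for A in chain[j]:
            # the unique cluster B of P^(j-1) with A a subset of B
            B = next(Bp for Bp in chain[j - 1] if A <= Bp)
            e = ((j, A), (j - 1, B))
            E.append(e)
            W[e] = 2 ** (k - j)           # W(e) = 2^(k-j) for e in E_j
    return V, E, W
```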
Let us first show the lower bound. Assume that \(\bar{P}=(P^{(j)})_{j=0}^{k}\in\Omega\) and \(x\neq y\) are in \(M\). Then there is an \(i\in\{1,2,\ldots,k\}\) so that \(x,y\) are in the same cluster of \(P^{(i-1)}\) but in two different clusters of \(P^{(i)}\). This implies that \(d(x,y)\leq 2^{k-(i-1)}=2^{k+1-i}\), and for the tree metric of \(T(\bar{P})\) it follows that
\[d_{T(\bar{P})}(f_{\bar{P}}(x),f_{\bar{P}}(y))=2\sum_{j=i}^{k}2^{k-j}=2\sum_{j=0}^{k-i}2^{j}=2(2^{k+1-i}-1)\geq d(x,y).\]
To get the upper estimate, first note that from Corollary 6.4 it follows for \(i=1,2,\ldots,k\) and \(x,y\in M\) with \(d(x,y)<2^{k-i-3}\) that
\[\mathbb{P}(P_{x}^{(i-1)}\!=\!P_{y}^{(i-1)}\text{ and }P_{x}^{(i)}\neq P_{y}^{(i)})=\mathbb{P}(P_{x}^{(i)}\neq P_{y}^{(i)}\,\big|\,P_{x}^{(i-1)}\!=\!P_{y}^{(i-1)})\,\mathbb{P}(P_{x}^{(i-1)}\!=\!P_{y}^{(i-1)})\leq\mathbb{P}(P_{x}^{(i)}\neq P_{y}^{(i)}\,\big|\,P_{x}^{(i-1)}=P_{y}^{(i-1)})=\mathbb{P}^{(P_{x}^{(i-1)},2^{k-i})}(P\in\mathcal{P}_{M}:y\not\in P_{x}),\]
where we applied (42) to \(B=P_{x}^{(i-1)}=P_{y}^{(i-1)}\), and hence
\[\mathbb{P}(P_{x}^{(i-1)}\!=\!P_{y}^{(i-1)}\text{ and }P_{x}^{(i)}\neq P_{y}^{(i)})\leq\mathbb{P}^{(P_{x}^{(i-1)},2^{k-i})}(P\in\mathcal{P}_{M}:B_{d(x,y)}(x)\not\subset P_{x})\leq\frac{8}{2^{k-i}}\log\Big(\frac{|B_{2^{k-i}}(x)|}{|B_{2^{k-i-3}}(x)|}\Big)d(x,y). \tag{43}\]
It is also true that if \(d(x,y)>2^{k-i+1}\), then
\[\mathbb{P}(P_{x}^{(i-1)}=P_{y}^{(i-1)}\text{ and }P_{x}^{(i)}\neq P_{y}^{(i)})\leq\mathbb{P}(P_{x}^{(i-1)}=P_{y}^{(i-1)})=0.\]
Secondly, note that for \(\bar{P}\in\Omega\) with \(P_{x}^{(i-1)}=P_{y}^{(i-1)}\) and \(P_{x}^{(i)}\neq P_{y}^{(i)}\), it follows that
\[d_{T(\bar{P})}(f(x),f(y))=2\sum_{j=0}^{k-i}2^{j}=2(2^{k-i+1}-1)<2^{k+2-i}.\]
Let \(s\leq k\) be so that \(2^{s-1}<d(x,y)\leq 2^{s}\). We compute
\[\mathbb{E}_{\mathbb{P}}(d_{T(\bar{P})}(f(x),f(y)))\leq\sum_{i=1}^{k}2^{k+2-i}\mathbb{P}(P_{x}^{(i-1)}=P_{y}^{(i-1)}\text{ and }P_{x}^{(i)}\neq P_{y}^{(i)})=\underbrace{\sum_{i=1}^{k-s-3}2^{k+2-i}\mathbb{P}(P_{x}^{(i-1)}=P_{y}^{(i-1)}\text{ and }P_{x}^{(i)}\neq P_{y}^{(i)})}_{=\Sigma_{1}}+\underbrace{\sum_{i=k-s-2}^{k-s}2^{k+2-i}\mathbb{P}(P_{x}^{(i-1)}=P_{y}^{(i-1)}\text{ and }P_{x}^{(i)}\neq P_{y}^{(i)})}_{=\Sigma_{2}}+\underbrace{\sum_{i=k-s+1}^{k}2^{k+2-i}\mathbb{P}(P_{x}^{(i-1)}=P_{y}^{(i-1)}\text{ and }P_{x}^{(i)}\neq P_{y}^{(i)})}_{=\Sigma_{3}}.\]
Since \(d(x,y)>2^{s-1}\), it follows that \(\Sigma_{3}=0\). Secondly,
\[\Sigma_{2}\leq 3\cdot 2^{k+2-(k-s-2)}=3\cdot 2^{s+4}\leq 3\cdot 2^{5}d(x,y).\]
To estimate \(\Sigma_{1}\), we are able to use (43), since \(d(x,y)\leq 2^{s}=\frac{1}{8}2^{k-(k-s-3)}\):
\[\Sigma_{1}\leq\sum_{i=1}^{k}2^{k+2-i}\frac{8}{2^{k-i}}\log\Big(\frac{|B_{2^{k-i}}(x)|}{|B_{2^{k-i-3}}(x)|}\Big)d(x,y)=32\log\Big(\prod_{i=1}^{k}\frac{|B_{2^{k-i}}(x)|}{|B_{2^{k-i-3}}(x)|}\Big)d(x,y)\leq 32\log(n^{3})d(x,y)=96\log(n)d(x,y),\]
and thus
\[\mathbb{E}_{\mathbb{P}}(d_{T(\bar{P})}(f(x),f(y)))\leq 96\log(n)d(x,y)+3\cdot 2^{5}d(x,y).\]

### Proof of Theorem 4.3

Our goal is to show that, for some universal constant \(c\), every metric space with \(n\) elements can be bijectively \(c\log(n)\)-stochastically embedded into trees. This will follow from Theorem 4.2 and the following result by Gupta.

**Theorem 6.5**.: _[10, Theorem 1.1] Let \(T=(V,E,W)\) be a weighted finite tree and \(V^{\prime}\subset V\). Then there are \(E^{\prime}\subset[V^{\prime}]^{2}\) and \(W^{\prime}:E^{\prime}\to(0,\infty)\) so that \(T^{\prime}=(V^{\prime},E^{\prime},W^{\prime})\) is a tree with_
\[\frac{1}{4}\leq\frac{d_{T^{\prime}}(x,y)}{d_{T}(x,y)}\leq 2,\text{ for }x,y\in V^{\prime}. \tag{44}\]

We first show the claim of Theorem 6.5 in the special case that \(V^{\prime}\) consists of leaves of \(T\).
Recall that in a tree \(T=(V,E)\),
\[\operatorname{Leaf}(T)=\{v\in V:\deg_{T}(v)=1\}.\]

**Lemma 6.6**.: _Let \(T=(V,E,W)\) be a weighted finite tree and \(V^{\prime}\subset\operatorname{Leaf}(T)\), and let \(d_{T}\) be the geodesic metric generated by \(W:E\to(0,\infty)\). Then there are \(E^{\prime}\subset[V^{\prime}]^{2}\) and \(W^{\prime}:E^{\prime}\to(0,\infty)\) so that \(T^{\prime}=(V^{\prime},E^{\prime},W^{\prime})\) is a tree and_
\[\frac{1}{4}\leq\frac{d_{T^{\prime}}(x,y)}{d_{T}(x,y)}\leq 2,\text{ for }x,y\in V^{\prime}.\]

Proof.: We start with a weighted tree \(T=(V,E,W)\) and a subset \(V^{\prime}\) of its leaves. By successively eliminating every leaf of \(T\) which is not in \(V^{\prime}\), we can assume that \(V^{\prime}=\operatorname{Leaf}(T)\). Unless \(V^{\prime}=V\) (in which case we are done), there must be an element \(x_{0}\in V\setminus V^{\prime}\), and the degree of this element must be at least \(2\). Denote the partial order defined by letting \(x_{0}\) be the root of \(T\) by \(\succeq\). For \(x,y\in V\) we denote by \(x\wedge y\) the _minimum of \(x\) and \(y\)_, meaning the maximal vertex \(z\) with respect to \(\succeq\) for which \(x\succeq z\) and \(y\succeq z\). Let \(v_{0}\in V^{\prime}\) be such that \(r_{0}:=d_{T}(x_{0},v_{0})\) is minimal. Let \(\tilde{E}\subset E\) be the set of edges \(e=\{a,b\}\) in \(E\) for which \(d_{T}(x_{0},a)<r_{0}/2\leq d_{T}(x_{0},b)\). Order \(\tilde{E}\) into \(\{e_{1},e_{2},\ldots,e_{n}\}\), \(e_{i}=\{a_{i},b_{i}\}\), with \(d_{T}(x_{0},a_{i})<r_{0}/2\leq d_{T}(x_{0},b_{i})\). One of the edges in \(\tilde{E}\) must be contained in \([v_{0},x_{0}]\), and we assume that \(\tilde{E}\) was ordered so that \(e_{1}\subset[v_{0},x_{0}]\).

We now define new trees \(T_{1},T_{2},\ldots,T_{n}\), which are, up to possibly one additional element, subtrees of \(T\). If \(r_{0}/2=d_{T}(x_{0},b_{j})\), we let \(T_{j}\) be the subtree \(T_{j}=(V_{j},E_{j},W_{j})\) with
\[V_{j}=\{x\in V:x\succeq b_{j}\},\quad E_{j}=E\cap[V_{j}]^{2}\text{ and }W_{j}=W|_{E_{j}}.\]
In that case we put \(x_{j}=b_{j}\). If \(r_{0}/2<d_{T}(x_{0},b_{j})\), then \(T_{j}=(V_{j},E_{j},W_{j})\), with \(V_{j}=\{x\in V:x\succeq b_{j}\}\cup\{x_{j}\}\), where \(x_{j}\) is an element not in \(V\) and distinct from all the other \(x_{i}\), and we let
\[E_{j}=E\cap[V_{j}]^{2}\cup\big\{\{x_{j},b_{j}\}\big\}\text{ and }W_{j}(e)=\begin{cases}W(e)&\text{if }e\in E\cap[V_{j}]^{2},\\ d_{T}(x_{0},b_{j})-r_{0}/2&\text{if }e=\{b_{j},x_{j}\}.\end{cases}\]
We also define the following tree \(\bar{T}=(\bar{V},\bar{E},\bar{W})\), which contains \(T\) and \(T_{1},T_{2},\ldots,T_{n}\) isometrically:
\[\bar{V}=V\cup\{x_{j}:j=1,2,\ldots,n\}=V\,\dot{\cup}\,\{x_{j}:j=1,2,\ldots,n,\ d_{T}(x_{0},b_{j})>r_{0}/2\},\]
\[\bar{E}=(E\setminus\tilde{E})\cup\big\{\{a_{j},x_{j}\}:j=1,2,\ldots,n\big\}\cup\big\{\{b_{j},x_{j}\}:j=1,2,\ldots,n,\ b_{j}\neq x_{j}\big\},\]
\[\bar{W}:\bar{E}\to(0,\infty),\quad e\mapsto\begin{cases}W(e)&\text{if }e\in E\setminus\tilde{E},\\ r_{0}/2-d_{T}(a_{j},x_{0})&\text{if }e=\{a_{j},x_{j}\},\\ d_{T}(b_{j},x_{0})-r_{0}/2&\text{if }e=\{b_{j},x_{j}\}\text{ and }b_{j}\neq x_{j}.\end{cases}\]
We note that the inclusions \((V,d_{T})\subset(\bar{V},d_{\bar{T}})\) and \((V_{i},d_{T_{i}})\subset(\bar{V},d_{\bar{T}})\) are isometric embeddings. Let us make some observations:
1. Since for \(j=1,\ldots,n\) the vertex \(a_{j}\) lies on the path \([x_{0},b_{j}]\) connecting \(x_{0}\) and \(b_{j}\), it follows that if \(b_{i}=b_{j}\), then also \(a_{i}=a_{j}\), and thus \(i=j\).
Thus, all the \(b_{j}\), \(j=1,\ldots,n\), are pairwise distinct, and the \(V_{j}\) are pairwise disjoint.
2. We claim that \(n\) is at least \(2\). Indeed, let \(z_{1}\) and \(z_{2}\) be two direct successors of \(x_{0}\) (the degree of \(x_{0}\) is at least \(2\)), let \(w_{1}\) be a leaf in \(\{x\in V:x\succeq z_{1}\}\) and \(w_{2}\) a leaf in \(\{x\in V:x\succeq z_{2}\}\), and let, for \(s=1,2\), \(e_{i_{s}}=\{a_{i_{s}},b_{i_{s}}\}\) be the edge in \([x_{0},w_{s}]\) for which \(d_{T}(x_{0},a_{i_{s}})<r_{0}/2\leq d_{T}(x_{0},b_{i_{s}})\) (such edges exist because of the minimality of \(r_{0}\)). Since the path from \(w_{1}\) to \(w_{2}\) must go through \(x_{0}\), it follows that \(b_{i_{1}}\neq b_{i_{2}}\) (it could be possible that \(a_{i_{1}}=a_{i_{2}}=x_{0}\)!).
3. Put \(V^{\prime}_{j}=V^{\prime}\cap V_{j}\) for \(j=1,\ldots,n\). Since \(d_{T}(x_{0},w)\geq r_{0}\) for all \(w\in V^{\prime}\), it follows that \(V^{\prime}=\bigcup_{j=1}^{n}V^{\prime}_{j}\) and that the \(V^{\prime}_{j}\) are the leaves of \(T_{j}\).
4. For \(i\neq j\), \(v\in V^{\prime}_{i}\), and \(w\in V^{\prime}_{j}\), we observe that \(a_{i}\) and \(a_{j}\) have to lie on the path connecting \(v\) with \(w\), and thus
\[d_{T}(v,w)\geq d_{T}(v,a_{i})+d_{T}(w,a_{j}).\]
For \(j=1,2,\ldots,n\), we have \(T_{j}=(V_{j},E_{j},W_{j})\) with \(E_{j}=\bar{E}\cap[V_{j}]^{2}\) and \(W_{j}=\bar{W}|_{E_{j}}\), and \(V^{\prime}_{j}\) is of strictly smaller cardinality than \(V^{\prime}\). Moreover, for \(x,y\in V_{j}\) we have \(d_{\bar{T}}(x,y)=d_{T_{j}}(x,y)\). For \(j=1,\ldots,n\), put \(r_{j}=\min_{v\in V^{\prime}_{j}}d_{T_{j}}(x_{j},v)\), and choose \(v_{j}\in V^{\prime}_{j}\) so that \(d_{T_{j}}(v_{j},x_{j})=r_{j}\). Note that \(r_{1}=r_{0}/2=d_{T_{1}}(v_{0},x_{1})\), and thus we can assume that \(v_{1}=v_{0}\). Then we can continue to decompose \(T_{1},T_{2},\ldots,T_{n}\) until we arrive at trees that consist of one single element of \(V^{\prime}\), by applying inductively the following claim.

**Claim:** Assume that for \(j=1,2,\ldots,n\), \(T_{j}\) satisfies the following condition: There is a tree \(T^{\prime}_{j}=(V^{\prime}_{j},E^{\prime}_{j},W^{\prime}_{j})\) on \(V^{\prime}_{j}\) so that
\[d_{T^{\prime}_{j}}(x,v_{j})\leq 2d_{T_{j}}(x,x_{j})-r_{j},\text{ for }x\in V^{\prime}_{j}, \tag{45}\]
\[\frac{1}{4}\leq\frac{d_{T^{\prime}_{j}}(x,y)}{d_{T_{j}}(x,y)}\leq 2,\text{ for }x,y\in V^{\prime}_{j}. \tag{46}\]
Then construct \(T^{\prime}=(V^{\prime},E^{\prime},W^{\prime})\) by _gluing_ \(T^{\prime}_{1},T^{\prime}_{2},\ldots,T^{\prime}_{n}\) together, connecting \(v_{1}=v_{0}\) to the other \(v_{j}\). More precisely, we put
\[E^{\prime}=\bigcup_{j=1}^{n}E^{\prime}_{j}\cup\big\{\{v_{1},v_{j}\}:2\leq j\leq n\big\}\]
and use the weight
\[W^{\prime}(e)=W^{\prime}_{j}(e)\text{ if }e\in E^{\prime}_{j},\text{ and }W^{\prime}(\{v_{1},v_{j}\})=d_{T_{j}}(x_{j},v_{j})=r_{j}.\]
We claim that it follows that
\[d_{T^{\prime}}(x,v_{0})\leq 2d_{T}(x,x_{0})-r_{0},\text{ for }x\in V^{\prime}, \tag{47}\]
\[\frac{1}{4}\leq\frac{d_{T^{\prime}}(x,y)}{d_{T}(x,y)}\leq 2,\text{ for }x,y\in V^{\prime}. \tag{48}\]
Since (47) and (48) are clearly satisfied if \(V^{\prime}\) is a singleton, the lemma follows by induction from the claim.
We first verify the first inequality in (48). For \(x,y\in V^{\prime}\), either \(x\) and \(y\) both lie in \(V_{j}^{\prime}\) for some \(j\), in which case we deduce our claim from the induction hypothesis, or \(x\in V^{\prime}_{i}\) and \(y\in V^{\prime}_{j}\) with \(1\leq i,j\leq n\), \(i\neq j\), and then without loss of generality \(j\neq 1\). In the latter case,
\[d_{T}(x,y)\leq d_{T}(x,v_{i})+d_{T}(v_{i},x_{0})+d_{T}(v_{j},x_{0})+d_{T}(v_{j},y)\leq 4d_{T^{\prime}}(x,v_{i})+4d_{T^{\prime}}(v_{j},y)+d_{T_{i}}(v_{i},x_{i})+d_{T_{j}}(v_{j},x_{j})+d_{\bar{T}}(x_{i},x_{0})+d_{\bar{T}}(x_{j},x_{0})\]
(by the induction hypothesis (46))
\[=4d_{T^{\prime}}(x,v_{i})+4d_{T^{\prime}}(v_{j},y)+r_{i}+r_{j}+r_{0}=4(d_{T^{\prime}}(x,v_{i})+d_{T^{\prime}}(v_{j},y))+r_{i}+r_{j}+2r_{1}\leq 4(d_{T^{\prime}}(x,v_{i})+d_{T^{\prime}}(v_{j},y))+\begin{cases}3r_{1}+r_{j}&\text{if }i=1\\ 2r_{i}+2r_{j}&\text{if }i\neq 1\end{cases}\leq 4d_{T^{\prime}}(x,y),\]
where for \(i\neq 1\) we used that \(r_{1}\leq r_{i},r_{j}\). Secondly, we verify the second inequality in (48). Let \(x,y\in V^{\prime}\). If \(x\) and \(y\) both lie in \(V_{j}^{\prime}\) for some \(j\), we deduce our claim again from the induction hypothesis. So assume that \(x\in V^{\prime}_{i}\) and \(y\in V^{\prime}_{j}\), \(1\leq i,j\leq n\), \(i\neq j\), and also assume that \(j\neq 1\). We deduce that
\[d_{T^{\prime}}(x,y)=d_{T^{\prime}_{i}}(x,v_{i})+d_{T^{\prime}}(v_{i},v_{j})+d_{T^{\prime}_{j}}(y,v_{j})\leq d_{T^{\prime}_{i}}(x,v_{i})+d_{T^{\prime}_{j}}(y,v_{j})+r_{i}+r_{j}\leq 2d_{T_{i}}(x,x_{i})-r_{i}+2d_{T_{j}}(y,x_{j})-r_{j}+r_{i}+r_{j}=2d_{T_{i}}(x,x_{i})+2d_{T_{j}}(y,x_{j})\leq 2d_{T}(x,a_{i})+2d_{T}(y,a_{j})\leq 2d_{T}(x,y).\]
We finally verify (47). Let \(x\in V^{\prime}\), and thus \(x\in V_{j}^{\prime}\) for some \(j=1,2,\ldots,n\).

**Case 1:** \(j=1\). Then \(x\) and \(v_{1}=v_{0}\) are in \(T_{1}\), and it follows that
\[d_{T^{\prime}}(x,v_{1})=d_{T^{\prime}_{1}}(x,v_{1})\ \text{(by the definition of }T^{\prime}\text{)}\leq 2d_{T_{1}}(x,x_{1})-r_{1}\ \text{(by (45))}=2(d_{T}(x,x_{0})-r_{0}/2)-r_{1}\leq 2d_{T}(x,x_{0})-r_{0}.\]

**Case 2:** \(j>1\).
Then
\[d_{T^{\prime}}(x,v_{1})=d_{T^{\prime}_{j}}(x,v_{j})+d_{T^{\prime}}(v_{j},v_{1})\leq 2d_{T_{j}}(x,x_{j})-r_{j}+r_{j}\ \text{(by (45))}=2d_{T_{j}}(x,x_{j})=2\big(d_{T}(x,x_{0})-r_{0}/2\big)=2d_{T}(x,x_{0})-r_{0}.\]
This proves (47) in both cases, which completes the proof of the claim and hence of the lemma.
Proof.: The existence of the \(n\) sets \(E_{i}\subset[M]^{2}\), metrics \(d_{i}\), and numbers \(p_{i}\in(0,1]\), for \(i=1,2,\ldots,n\), follows from Theorem 6.2 and Theorem 6.5. To see the moreover part, let us denote, for \(i=1,2,\ldots,n\), the geodesic distance on \(M\) generated by the above defined weight function \(w_{i}\) by \(\tilde{d}_{i}\). From the triangle inequality it follows that \(\tilde{d}_{i}(x,y)\geq d(x,y)\). We note that from (49) it follows that for every \(i=1,2,\ldots,n\) and any \(e=\{u,v\}\in E_{i}\), we have \(d(u,v)\leq d_{i}(u,v)\), and thus \(d(x,y)\leq\tilde{d}_{i}(x,y)\leq d_{i}(x,y)\) for any \(x,y\in M\), which implies (50).
2301.13569
NP-Match: Towards a New Probabilistic Model for Semi-Supervised Learning
Semi-supervised learning (SSL) has been widely explored in recent years, and it is an effective way of leveraging unlabeled data to reduce the reliance on labeled data. In this work, we adjust neural processes (NPs) to the semi-supervised image classification task, resulting in a new method named NP-Match. NP-Match is suited to this task for two reasons. Firstly, NP-Match implicitly compares data points when making predictions, and as a result, the prediction of each unlabeled data point is affected by the labeled data points that are similar to it, which improves the quality of pseudo-labels. Secondly, NP-Match is able to estimate uncertainty that can be used as a tool for selecting unlabeled samples with reliable pseudo-labels. Compared with uncertainty-based SSL methods implemented with Monte-Carlo (MC) dropout, NP-Match estimates uncertainty with much less computational overhead, which can save time at both the training and the testing phases. We conducted extensive experiments on five public datasets under three semi-supervised image classification settings, namely, the standard semi-supervised image classification, the imbalanced semi-supervised image classification, and the multi-label semi-supervised image classification, and NP-Match outperforms state-of-the-art (SOTA) approaches or achieves competitive results on them, which shows the effectiveness of NP-Match and its potential for SSL. The codes are at https://github.com/Jianf-Wang/NP-Match
Jianfeng Wang, Xiaolin Hu, Thomas Lukasiewicz
2023-01-31T11:44:45Z
http://arxiv.org/abs/2301.13569v2
# NP-Match: Towards a New Probabilistic Model for Semi-Supervised Learning ###### Abstract Semi-supervised learning (SSL) has been widely explored in recent years, and it is an effective way of leveraging unlabeled data to reduce the reliance on labeled data. In this work, we adjust neural processes (NPs) to the semi-supervised image classification task, resulting in a new method named NP-Match. NP-Match is suited to this task for two reasons. Firstly, NP-Match implicitly compares data points when making predictions, and as a result, the prediction of each unlabeled data point is affected by the labeled data points that are similar to it, which improves the quality of pseudo-labels. Secondly, NP-Match is able to estimate uncertainty that can be used as a tool for selecting unlabeled samples with reliable pseudo-labels. Compared with uncertainty-based SSL methods implemented with Monte-Carlo (MC) dropout, NP-Match estimates uncertainty with much less computational overhead, which can save time at both the training and the testing phases. We conducted extensive experiments on five public datasets under three semi-supervised image classification settings, namely, the standard semi-supervised image classification, the imbalanced semi-supervised image classification, and the multi-label semi-supervised image classification, and NP-Match outperforms state-of-the-art (SOTA) approaches or achieves competitive results on them, which shows the effectiveness of NP-Match and its potential for SSL. Neural Processes, NP-Match, Semi-Supervised Image Classification, Imbalanced Semi-Supervised Image Classification, Multi-Label Semi-Supervised Image Classification ## 1 Introduction Deep neural networks have been widely used in computer vision tasks [1, 2, 3, 4, 5, 6] due to their strong performance. Training deep neural networks relies on large-scale labeled datasets, but annotating large-scale datasets is time-consuming, which encourages researchers to explore semi-supervised learning (SSL). SSL aims to learn from a small amount of labeled data and a large amount of unlabeled data, and it has been a long-standing problem in computer vision and machine learning [7, 8, 9, 10, 11, 12, 13, 14]. In this work, we focus on SSL for image classification. Most recent approaches to SSL for image classification are based on the combination of consistency regularization and pseudo-labeling [7, 8, 9, 10, 15, 16, 17]. They can be further classified into two categories, namely, deterministic [7, 8, 10, 15, 16, 17] and probabilistic ones [9]. A deterministic approach aims at directly making predictions, while a probabilistic approach additionally tries to model the predictive distribution, for instance via Bayesian neural networks implemented with Monte-Carlo (MC) dropout [18]. As a result, the former cannot estimate the uncertainty of the model's prediction, and unlabeled samples are selected only based on high-confidence predictions. In contrast, the latter can give uncertainties for unlabeled samples, and the uncertainties can be combined with high-confidence predictions for picking or refining pseudo-labels. Current SOTA methods for the semi-supervised image classification task are deterministic, including FixMatch [7], CoMatch [15], and FlexMatch [8], which have achieved promising results on public benchmarks.
In contrast, progress on probabilistic approaches lags behind, which is mainly shown by the fact that there are only a few studies on this task, and MC dropout has become the only option for implementing the probabilistic model [9]. In addition, MC dropout also dominates the uncertainty-based approaches to other SSL tasks [19, 20, 21, 22, 23, 24]. MC dropout, however, is time-consuming, requiring several feedforward passes to obtain uncertainty estimates at both the training and the test stages, which is especially costly when large models are used. To overcome this drawback and to further promote related research, we need to find better probabilistic approaches for SSL. Considering that MC dropout is an approximation to the Gaussian process (GP) model [18], we turn to another approximation model called neural processes (NPs) [25], which can be regarded as a neural-network-based formulation that approximates GPs. Similarly to a GP, an NP is also a probabilistic model that defines distributions over functions. Thus, an NP is able to rapidly adapt to new observations, with the advantage of estimating the uncertainty of each observation. There are two main aspects that motivate us to investigate NPs in SSL. Firstly, GPs have been preliminarily explored for some SSL tasks [26, 27, 28], because of the property that their kernels are able to compare labeled data with unlabeled data when making predictions. NPs share this property, since it has been proved that NPs can learn non-trivial implicit kernels from data [25]. As a result, NPs are able to make predictions for target points conditioned on context points. This feature is highly relevant to SSL, which must learn from limited labeled samples to make predictions for unlabeled data, similarly to how NPs are able to impute unknown pixel values (i.e., target points) when given only a small number of known pixels (namely, context points) [25]. Due to the learned implicit kernels in NPs [25] and the successful application of GPs to different SSL tasks [26, 27, 28], NPs could be a suitable probabilistic model for SSL, as the kernels can compare labeled data with unlabeled data to improve the quality of pseudo-labels for the unlabeled data at the training stage. Secondly, previous GP-based works for SSL do not explore the semi-supervised large-scale image classification task, since GPs are computationally expensive, usually incurring an \(\mathcal{O}(n^{3})\) runtime for \(n\) training points. Unlike GPs, NPs are computationally much more efficient, providing the possibility of applying them to this task. NPs are also computationally significantly more efficient than current MC-dropout-based approaches to SSL, since, given an input image, they only need to perform one feedforward pass to obtain the prediction with an uncertainty estimate. In this work, we take the first step to explore NPs in large-scale semi-supervised image classification, and propose a new probabilistic method called NP-Match. NP-Match still rests on the combination of consistency regularization and pseudo-labeling, but it incorporates NPs on top of deep neural networks, and therefore it is a probabilistic approach. Compared to the previous probabilistic method for semi-supervised image classification [9], NP-Match not only makes predictions and estimates uncertainty more efficiently, inheriting the advantages of NPs, but also achieves better performance on public benchmarks.
Summarizing, the main contributions are: * We propose NP-Match, which adjusts NPs to SSL, and explore its use in semi-supervised large-scale image classification. To our knowledge, this is the first such work. In addition, NP-Match has the potential to break the monopoly of MC dropout as the probabilistic model in SSL. * We experimentally show that the Kullback-Leibler (KL) divergence in the evidence lower bound (ELBO) of NPs [25] is not a good choice in the context of SSL, which may negatively impact the learning of global latent variables. To tackle this problem, we propose a new uncertainty-guided skew-geometric Jensen-Shannon (JS) divergence (\(JS^{G_{\alpha_{u}}}\)) for NP-Match. * We show that NP-Match outperforms SOTA results or achieves competitive results on public benchmarks, demonstrating its effectiveness for SSL. We also show that NP-Match estimates uncertainty faster than the MC-dropout-based probabilistic model, which can improve the training and the test efficiency. Some preliminary results have been presented at a conference [29], and we extend the previous work [29] by introducing more experiments on two new tasks. First, we supplement experiments on imbalanced semi-supervised image classification, which is more challenging, since deep models may overfit to frequent classes, leading to inaccurate pseudo-labels. Accordingly, we also present related works in this field. Second, we add additional results on another important task, i.e., multi-label semi-supervised image classification. Multi-label classification increases the risk of making wrong predictions for unlabeled data, thereby offering a tougher and more realistic scenario for evaluating our model. As uncertainty estimation is vital in safety-critical areas, such as medical applications, we choose a Chest X-Ray dataset as our multi-label classification benchmark. The new experiments and analysis provide more solid evidence for our claims and contributions. The rest of this paper is organized as follows. In Section 2, we review related methods. Section 3 presents NP-Match and the uncertainty-guided skew-geometric JS divergence (\(JS^{G_{\alpha_{u}}}\)), followed by the experimental settings and results in Section 4. In Section 5, we give a summary and an outlook on future research. The source code is available at: [https://github.com/Jianf-Wang/NP-Match](https://github.com/Jianf-Wang/NP-Match) ## 2 Related Work We now briefly review related works, including _semi-supervised learning (SSL) for image classification_, _imbalanced semi-supervised image classification_, _Gaussian processes (GPs) for SSL_, and _neural processes (NPs)_. **SSL for image classification.** Most methods for semi-supervised image classification in the past few years are based on pseudo-labeling and consistency regularization. Pseudo-labeling approaches rely on the high confidence of pseudo-labels, which can be added to the training data set as labeled data, and these approaches can be classified into two classes, namely, disagreement-based models and self-training models. The former models aim to train multiple learners and exploit the disagreement during the learning process [30, 31], while the latter models aim at training the model on a small amount of labeled data, and then using its predictions on the unlabeled data as pseudo-labels [32, 33, 10, 34].
Consistency-regularization-based approaches work by performing different transformations on an input image and adding a regularization term to make their predictions consistent [35, 36, 37, 38, 13]. Based on these two approaches, FixMatch [7] is proposed, which achieves new state-of-the-art (SOTA) results on the most commonly-studied SSL benchmarks. FixMatch [7] combines the merits of these two approaches: given an unlabeled image, weak data augmentation and strong data augmentation are performed on the image, leading to two versions of the image; FixMatch then produces a pseudo-label from the weakly-augmented version, subject to a preset confidence threshold, and this pseudo-label is used as the true label for the strongly-augmented version to train the whole framework. The success of FixMatch inspired several subsequent methods [8, 9, 10, 15, 16, 17]. For instance, Li et al. [15] additionally design the classification head and the projection head for generating a class probability and a low-dimensional embedding, respectively. The projection head and the classification head are jointly optimized during training. Specifically, the former is learnt with contrastive learning on pseudo-label graphs to encourage the embeddings of samples with similar pseudo-labels to be close, and the latter is trained with pseudo-labels that are smoothed by aggregating information from nearby samples in the embedding space. Zhang et al. [8] propose to use dynamic confidence thresholds that are automatically adjusted according to the model's learning status of each class. Rizve et al. [9] propose an uncertainty-aware pseudo-label selection (UPS) framework for semi-supervised image classification. The UPS framework introduces MC dropout to obtain uncertainty estimates, which are then leveraged as a tool for selecting pseudo-labels. This is the first work using MC dropout for semi-supervised image classification. **Imbalanced semi-supervised image classification.** Training deep neural networks requires large-scale datasets, and one fundamental characteristic of these datasets in practice is that the data distribution over different categories is imbalanced (e.g., long-tailed), as it is common that the images of some categories are difficult to collect. Training deep models on imbalanced datasets makes the models overfit to majority classes, and several methods have been proposed to ameliorate this issue [39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52]. According to Yang and Xu [53], semi-supervised learning benefits imbalanced learning, as using extra data during training can reduce label bias, which greatly improves the final classifier. Therefore, imbalanced semi-supervised image classification has been drawing extensive attention in recent years. Specifically, Hyun et al. [54] analyze how class imbalance affects SSL by looking into the topography of the decision boundary, and then they propose a suppressed consistency loss (SCL) to suppress the influence of consistency loss on infrequent classes. Kim et al. [55] design an iterative procedure, called distribution aligning refinery of pseudo-labels (DARP), to solve a convex optimization problem that aims at refining biased pseudo-labels so that their distribution can match the true class distribution of unlabeled data. Wei et al. [56] introduce a class-rebalancing self-training scheme (CReST) to re-sample pseudo-labeled data and to add them to the labeled set for refining the whole model.
Inspired by the concept of decoupled learning [57], namely, training a feature extractor and a classifier separately, He et al. [58] propose a bi-sampling method, which integrates two data samplers with various sampling strategies for decoupled learning, thereby benefiting both the feature extractor and the classifier. Lee et al. [59] design an auxiliary balanced classifier (ABC) that is trained by using a mask that re-balances the class distribution within every mini-batch. Besides, Oh et al. [60] solve the imbalanced semi-supervised image classification problem by proposing a new framework called distribution-aware semantics-oriented (DASO) pseudo-labels. DASO focuses on debiasing pseudo-labels through blending two complementary types of pseudo-labels, which enables the framework to deal with the imbalanced semi-supervised image classification task, even when the class distribution of unlabeled data is inconsistent with that of labeled data. In this work, we evaluate our method for imbalanced semi-supervised image classification based on the DASO framework. **GPs for SSL.** Since NPs are also closely related to GPs, we review the application of GPs to different SSL tasks in this part. GPs, which are non-parametric models, have been preliminarily investigated in different semi-supervised learning tasks. For example, Sindhwani et al. [26] introduce a semi-supervised GP classifier, which incorporates the information of relationships among labeled and unlabeled data into the kernel. Their approach, however, has high computational costs and is thus only evaluated for a simple binary classification task on small datasets. Deep kernel learning [61] also lies on the spectrum between neural networks and GPs, and has been integrated into a new framework for the semi-supervised regression task, named semi-supervised deep kernel learning [27], which aims to minimize the predictive variance for unlabeled data, encouraging unlabeled embeddings to be near labeled embeddings. Semi-supervised deep kernel learning, however, has not been applied to semi-supervised image classification, and (similarly to semi-supervised GPs) also comes with a high (cubic) computational complexity. Recently, Yasarla et al. [28] propose to combine GPs with UNet [62] for SSL image deraining. Here, GPs are used to get pseudo-labels for unlabeled samples based on the feature representations of labeled and unlabeled images. GPs have also been combined with graph convolutional networks for semi-supervised learning on graphs [63, 64, 65]. Although many previous works explore GPs in different semi-supervised learning tasks, none of them investigates the application of GPs to semi-supervised large-scale image classification. **NPs.** The origin of NPs can be traced back to conditional NPs [66], which define conditional distributions over functions given a set of observations. Conditional NPs, however, do not introduce global latent variables for observations, which led to the birth of NPs [25]. In NPs, the mean-aggregator is used to summarize the encoded inputs of a task into a global latent variable, which is then used to make predictions on targets in the task. In recent years, several NP variants have been proposed to better approximate stochastic processes. For example, Kim et al. [67] consider that the mean-aggregator may cause difficulties for the decoder to pick information relevant for making predictions, and they introduce a differentiable attention mechanism to solve this issue, resulting in new attentive NPs.
Gordon et al. [68] argue that translation equivariance is important for prediction problems and should be taken into account. Therefore, they incorporate translation equivariance into NPs and design a new model, called convolutional conditional NPs. Besides, Louizos et al. [69] consider that using global latent variables is not flexible for encoding inductive biases. Thus, they propose to use local latent variables along with a dependency structure among them, resulting in new functional NPs. Lee et al. [70] point out the limitation of using a single Gaussian latent variable to model functional uncertainty. To address this limitation, they propose to use bootstrapping for inducing functional uncertainty, leading to a new NP variant, called Bootstrapping Neural Processes (BNP). Currently, NPs and their variants have been widely used in many different settings, including meta-learning [71, 72, 73] and sequential data modelling [74], but they have not been studied in SSL. Our work is the first to leverage NPs for semi-supervised large-scale image recognition. We choose the most basic model from [25] (i.e., the original NPs), and we expect that future works can further study the application of other variants to this task.

## 3 Methodology

In this section, we provide a brief introduction to neural processes (NPs) and a detailed description of our NP-Match.

### _NPs_

NPs approximate stochastic processes via finite-dimensional marginal distributions [25]. Formally, given a probability space \((\Omega,\Sigma,\Pi)\) and an index set \(\mathcal{X}\), a stochastic process can be written as \(\{F(x,\omega):x\in\mathcal{X}\}\), where \(F(\cdot,\omega)\) is a sample function mapping \(\mathcal{X}\) to another space \(\mathcal{Y}\) for any point \(\omega\in\Omega\). For each finite sequence \(x_{1:n}\), a marginal joint distribution function can be defined on the function values \(F(x_{1},\omega),F(x_{2},\omega),\ldots,F(x_{n},\omega)\), which satisfies two conditions given by the Kolmogorov Extension Theorem [75], namely, exchangeability and consistency. Assuming that a density \(\pi\) with \(d\Pi=\pi d\mu\) and the likelihood density \(p(y_{1:n}|F(\cdot,\omega),x_{1:n})\) exist, the marginal joint distribution function can be written as:

\[p(y_{1:n}|x_{1:n})=\int\pi(\omega)p(y_{1:n}|F(\cdot,\omega),x_{1:n})d\mu(\omega). \tag{1}\]

The exchangeability condition requires the joint distributions to be invariant to permutations of the elements, i.e., \(p(y_{1:n}|x_{1:n})=p(\varphi(y_{1:n})|\varphi(x_{1:n}))\), where \(\varphi\) is a permutation of \(\{1,\ldots,n\}\). The consistency condition expresses that if a part of the sequence is marginalised out, then the resulting marginal distribution is consistent with that defined on the original sequence. Letting \((\Omega,\Sigma)\) be \((\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))\), where \(\mathcal{B}(\mathbb{R}^{d})\) denotes the _Borel \(\sigma\)-algebra_ of \(\mathbb{R}^{d}\), NPs parameterize the function \(F(\cdot,\omega)\) with a high-dimensional random vector \(z\) sampled from a multivariate Gaussian distribution. Then, \(F(x_{i},\omega)\) can be replaced by \(g(x_{i},z)\), where \(g(\cdot)\) denotes a neural network, and Eq. (1) becomes:

\[p(y_{1:n}|x_{1:n})=\int\pi(z)p(y_{1:n}|g(x_{1:n},z),x_{1:n})d\mu(z). \tag{2}\]
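In practice, the integral in Eq. (2) is intractable and is typically approximated by Monte Carlo sampling of the latent variable. The following sketch illustrates this approximation for the classification setting used later in the paper; `decoder`, `classifier`, and all shapes are illustrative assumptions rather than the authors' released implementation.

```python
import torch

def marginal_log_likelihood(decoder, classifier, x, y, z_mu, z_sigma, T=10):
    """Hedged sketch: Monte Carlo estimate of Eq. (2),
    log p(y|x) ~= log[(1/T) * sum_t p(y | g(x, z_t), x)],  z_t ~ N(z_mu, z_sigma^2)."""
    probs = []
    for _ in range(T):
        z = z_mu + z_sigma * torch.randn_like(z_sigma)   # reparameterized latent sample
        logits = classifier(decoder(x, z))               # g(x, z) followed by the softmax head
        probs.append(torch.softmax(logits, dim=-1))
    p = torch.stack(probs).mean(dim=0)                   # average over latent draws
    return torch.log(p.gather(-1, y.unsqueeze(-1)) + 1e-12)
```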
The training objective of NPs is to maximize \(p(y_{1:n}|x_{1:n})\), and the learning procedure reflects the NPs' property that they are able to make predictions for target points conditioned on context points [25].

### _NP-Match_

As Figure 1 shows, NP-Match is mainly composed of two parts: a deep neural network and an NP model. The deep neural network is leveraged for obtaining feature representations of input images, while the NP model is built upon the network to receive the representations for classification.

Fig. 1: Overview of NP-Match: it contains a convolutional neural network (CNN) and an NP model that is shown in the red dotted box. The feature vectors come from the global average pooling layer in the CNN.

#### 3.2.1 NP Model for Semi-supervised Image Classification

Since we extend the original NPs [25] to the classification task, \(p(y_{1:n}|g(x_{1:n},z),x_{1:n})\) in Eq. (2) should define a categorical distribution rather than a Gaussian distribution. Therefore, we parameterize the categorical distribution by probability vectors from a classifier that contains a weight matrix (\(\mathcal{W}\)) and a softmax function (\(\Phi\)):

\[p(y_{1:n}|g(x_{1:n},z),x_{1:n})=categorical(\Phi(\mathcal{W}g(x_{1:n},z))). \tag{3}\]

Note that \(g(\cdot)\) can be learned via amortised variational inference; to use this method, two steps are needed: (1) parameterize a variational distribution over \(z\), and (2) find the evidence lower bound (ELBO) as the learning objective. For the first step, we let \(q(z|x_{1:n},y_{1:n})\) be a variational distribution defined on the same measure space, which can be parameterized by a neural network. For the second step, given a finite sequence of length \(n\), we assume that there are \(m\) context points (\(x_{1:m}\)) and \(r\) target points (\(x_{m+1:m+r}\)) in it, i.e., \(m+r=n\). Then, the ELBO is given by (with proof in the supplementary material):

\[\begin{split}\log p(y_{1:n}|x_{1:n})\geq\ &\mathbb{E}_{q(z|x_{m+1:m+r},y_{m+1:m+r})}\Big[\sum_{i=m+1}^{m+r}\log p(y_{i}|z,x_{i})\\ &-\log\frac{q(z|x_{m+1:m+r},y_{m+1:m+r})}{q(z|x_{1:m},y_{1:m})}\Big]+const.\end{split} \tag{4}\]

To learn the NP model, one can maximize this ELBO. Under the setting of SSL, we consider that only labeled data can be treated as context points, and either labeled or unlabeled data can be treated as target points, since the target points are what the NP model makes predictions for.

#### 3.2.2 NP-Match Pipeline

We now introduce the NP-Match pipeline. We first focus on the configuration of the NP model, which is shown in the red dotted box in Figure 1. The NP model is mainly constructed from MLPs, memory banks, and a classifier. Specifically, the classifier is composed of the weight matrix (\(\mathcal{W}\)) and the softmax function (\(\Phi\)). Similarly to the original implementation of NPs, we build two paths with the memory banks and MLPs, namely, the latent path and the deterministic path. The decoder \(g(\cdot)\) is also implemented with MLPs. The workflows of NP-Match at the training stage and the inference stage are different, as shown in Figure 1, and they are introduced separately as follows.
**Training mode.** Given a batch of \(B\) labeled images \(\mathcal{L}=\{(x_{i},y_{i}):i\in\{1,\ldots,B\}\}\) and a batch of unlabeled images \(\mathcal{U}=\{x_{i}^{u}:i\in\{1,\ldots,\mu B\}\}\) at each iteration, where \(\mu\) determines the relative size of \(\mathcal{U}\) to \(\mathcal{L}\), we apply weak augmentation (i.e., crop-and-flip) to the labeled and unlabeled samples, and strong augmentation (i.e., RandAugment [76]) to only the unlabeled samples. After the augmentation is applied, the images are passed through the deep neural network, and the features are input to the NP model, which finally outputs the predictions and associated uncertainties. The detailed process can be summarized as follows. At the start of each iteration, NP-Match is switched to inference mode, and it makes predictions for the weakly-augmented unlabeled data. Then, inference mode is turned off, and these predictions are treated as pseudo-labels for the unlabeled data. After receiving the features, real labels, and pseudo-labels, the NP model first duplicates the labeled samples and treats them as context points, while all the labeled and unlabeled samples in the original batches are treated as target points, since the NP model needs to make a prediction for them. Thereafter, the target points and context points are separately fed to the latent path and the deterministic path. As for the latent path, target points are concatenated with their corresponding real labels or pseudo-labels, and processed by MLPs to get new representations. Then, the representations are averaged by a mean aggregator along the batch dimension, leading to an order-invariant representation, which implements the exchangeability and the consistency condition; the representations are simultaneously stored in the latent memory bank, which is updated with a first-in-first-out strategy. After the mean aggregator, the order-invariant representation is further processed by two other MLPs in order to get the mean vector and the variance vector, which are used for sampling latent vectors via the reparameterization trick; the number of latent vectors sampled at each feed-forward pass is denoted \(T\). As for the deterministic path, context points are input to this path and are processed in the same way as the target points, until an order-invariant representation is procured from the mean aggregator. We also introduce a memory bank to the deterministic path for storing representations. Subsequently, each target point is concatenated with the \(T\) latent vectors and the order-invariant representations from the deterministic path (note that, practically, the target point and the order-invariant representations from the deterministic path must be copied \(T\) times). After the concatenation operation, the \(T\times r\) feature representations are fed into the decoder \(g(\cdot)\) and then the classifier, which outputs \(T\) probability distributions over classes for each target point. The final prediction for each target point can be obtained by averaging the \(T\) predictions, and the uncertainty is computed as the entropy of the average prediction [77]. The ELBO (Eq. (4)) shows the learning objective. Specifically, the first term can be achieved by using the cross-entropy loss on the labeled and unlabeled data with their corresponding real labels and pseudo-labels, while the second term is the KL divergence between \(q(z|x_{m+1:m+r},y_{m+1:m+r})\) and \(q(z|x_{1:m},y_{1:m})\).
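As a concrete illustration of the latent path just described, the following PyTorch-style sketch encodes (feature, label) pairs, mean-aggregates them, and draws \(T\) global latent vectors via the reparameterization trick. Layer sizes, names, and the exact architecture are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class LatentPath(nn.Module):
    """Hedged sketch of the latent path: encode, mean-aggregate, reparameterize."""

    def __init__(self, feat_dim, num_classes, hidden=128, z_dim=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(feat_dim + num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, z_dim)      # mean vector of q(z | .)
        self.to_logvar = nn.Linear(hidden, z_dim)  # log-variance vector of q(z | .)

    def forward(self, feats, labels_onehot, T=5):
        # Encode each (feature, label) pair, then average along the batch
        # dimension to obtain an order-invariant representation.
        r = self.encode(torch.cat([feats, labels_onehot], dim=-1)).mean(dim=0)
        mu, logvar = self.to_mu(r), self.to_logvar(r)
        std = torch.exp(0.5 * logvar)
        # Reparameterization trick: draw T global latent vectors.
        z = mu + std * torch.randn(T, mu.shape[-1])
        return z, mu, std
```

The deterministic path can be sketched analogously, with its aggregated representation stored in (and, at inference time, read from) the corresponding memory bank.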
**Inference mode.** Test images are first passed through the deep neural network to obtain their feature representations. Then, they are treated as target points and are fed to the NP model. Since the labels of test data are not available, it is impossible to obtain the order-invariant representation from the test data themselves. In this case, the stored features in the two memory banks can be directly used. As the bottom diagram of Figure 1 shows, after the order-invariant representations are obtained from the memory banks, the target points are leveraged in the same way as in the training mode to generate concatenated feature representations for the decoder \(g(\cdot)\) and then the classifier.

#### 3.2.3 Uncertainty-guided Skew-Geometric JS Divergence

NP-Match, like many SSL approaches, relies on the use of pseudo-labels for the unlabeled samples. Pseudo-labels, however, are sometimes inaccurate and can lead to the neural network learning poor feature representations. In our pipeline, this can go on to impact the representation procured from the mean-aggregator and hence the model's estimated mean vector, variance vector, and global latent vectors (see "Latent Path" in Figure 1). To remedy this, similarly to how the KL divergence term in the ELBO (Eq. (4)) is used to learn global latent variables [25], we propose a new distribution divergence, called the uncertainty-guided skew-geometric JS divergence (\(JS^{G_{\alpha_{u}}}\)). We first formalize the definition of \(JS^{G_{\alpha_{u}}}\):

**Definition 1**.: _Let \((\Omega,\Sigma)\) be a measurable space, where \(\Omega\) denotes the sample space, and \(\Sigma\) denotes the \(\sigma\)-algebra of measurable events. \(P\) and \(Q\) are two probability measures defined on the measurable space. Concerning a positive measure, denoted \(\mu\), the uncertainty-guided skew-geometric JS divergence (\(JS^{G_{\alpha_{u}}}\)) can be defined as:_

\[JS^{G_{\alpha_{u}}}(p,q)=(1-\alpha_{u})\int p\,\log\frac{p}{G(p,q)_{\alpha_{u}}}d\mu+\alpha_{u}\int q\,\log\frac{q}{G(p,q)_{\alpha_{u}}}d\mu, \tag{5}\]

_where \(p\) and \(q\) are the Radon-Nikodym derivatives of \(P\) and \(Q\) with respect to \(\mu\), the scalar \(\alpha_{u}\in[0,1]\) is calculated based on the uncertainty, and \(G(p,q)_{\alpha_{u}}=p^{1-\alpha_{u}}q^{\alpha_{u}}/\int_{\Omega}p^{1-\alpha_{u}}q^{\alpha_{u}}d\mu\). The dual form of \(JS^{G_{\alpha_{u}}}\) is given by:_

\[JS^{G_{\alpha_{u}}}_{*}(p,q)=(1-\alpha_{u})\int G(p,q)_{\alpha_{u}}\,\log\frac{G(p,q)_{\alpha_{u}}}{p}d\mu+\alpha_{u}\int G(p,q)_{\alpha_{u}}\,\log\frac{G(p,q)_{\alpha_{u}}}{q}d\mu. \tag{6}\]

The proposed \(JS^{G_{\alpha_{u}}}\) is an extension of the skew-geometric JS divergence first proposed by Nielsen et al. [78]. Specifically, Nielsen et al. [78] generalize the JS divergence with abstract means (quasi-arithmetic means [79]), in which a scalar \(\alpha\) is defined to control the degree of divergence skew.2 By selecting the weighted geometric mean \(p^{1-\alpha}q^{\alpha}\), this generalized JS divergence becomes the skew-geometric JS divergence, which can be easily applied to Gaussian distributions because the weighted product of exponential family distributions stays in the exponential family [80]. Our \(JS^{G_{\alpha_{u}}}\) extends this divergence by incorporating the uncertainty into the scalar \(\alpha\) to dynamically adjust the divergence skew.
We assume the real variational distribution of the global latent variable under supervised learning to be \(q^{*}\). If the framework is trained with real labels, the condition \(q(z|x_{m+1:m+r},y_{m+1:m+r})=q(z|x_{1:m},y_{1:m})=q^{*}\) will hold after training, since they are all marginal distributions of the same stochastic process. However, as for SSL, \(q(z|x_{m+1:m+r},y_{m+1:m+r})\) and \(q(z|x_{1:m},y_{1:m})\) are no longer equal to \(q^{*}\), as some low-quality representations are involved during training, which affect the estimation of \(q(z|x_{m+1:m+r},y_{m+1:m+r})\) and \(q(z|x_{1:m},y_{1:m})\). Our proposed \(JS^{G_{\alpha_{u}}}\) addresses this issue by introducing an intermediate distribution that is calculated via \(G(q(z|x_{1:m},y_{1:m}),q(z|x_{m+1:m+r},y_{m+1:m+r}))_{\alpha_{u}}\), where \(\alpha_{u}=u_{c_{avg}}/(u_{c_{avg}}+u_{t_{avg}})\). Here, \(u_{c_{avg}}\) denotes the average value over the uncertainties of the predictions of context points, and \(u_{t_{avg}}\) represents the average value over that of target points. With this setting, the intermediate distribution is usually close to \(q^{*}\). For example, when \(u_{c_{avg}}\) is large and \(u_{t_{avg}}\) is small, which means that there are many low-quality feature representations involved in calculating \(q(z|x_{1:m},y_{1:m})\), while \(q(z|x_{m+1:m+r},y_{m+1:m+r})\) is closer to \(q^{*}\), then \(G(q(z|x_{1:m},y_{1:m}),q(z|x_{m+1:m+r},y_{m+1:m+r}))_{\alpha_{u}}\) will be close to \(q(z|x_{m+1:m+r},y_{m+1:m+r})\); as a result, the network is optimized to learn the distribution of the global latent variable in the direction of \(q^{*}\), which mitigates the issue to some extent.3 Since the variational distribution is assumed to be a Gaussian distribution, we introduce the following theorem (with proof in the supplementary material) for calculating \(JS^{G_{\alpha_{u}}}\) on Gaussian distributions:

Footnote 2: The divergence skew means how closely related the intermediate distribution (the abstract mean of \(p\) and \(q\)) is to \(p\) or \(q\).

Footnote 3: As long as one of \(q(z|x_{1:m},y_{1:m})\) and \(q(z|x_{m+1:m+r},y_{m+1:m+r})\) is close to \(q^{*}\), the proposed \(JS^{G_{\alpha_{u}}}\) mitigates the issue, but \(JS^{G_{\alpha_{u}}}\) still has difficulties solving the problem when both of their calculations involve many low-quality representations.
**Theorem 1**.: _Given two multivariate Gaussians \(\mathcal{N}_{1}(\mu_{1},\Sigma_{1})\) and \(\mathcal{N}_{2}(\mu_{2},\Sigma_{2})\), the following holds:_

\[\begin{split} JS^{G_{\alpha_{u}}}(\mathcal{N}_{1},\mathcal{N}_{2})=\ &\frac{1}{2}\Big(tr\big(\Sigma_{\alpha_{u}}^{-1}((1-\alpha_{u})\Sigma_{1}+\alpha_{u}\Sigma_{2})\big)\\ &+(1-\alpha_{u})(\mu_{\alpha_{u}}-\mu_{1})^{T}\Sigma_{\alpha_{u}}^{-1}(\mu_{\alpha_{u}}-\mu_{1})\\ &+\alpha_{u}(\mu_{\alpha_{u}}-\mu_{2})^{T}\Sigma_{\alpha_{u}}^{-1}(\mu_{\alpha_{u}}-\mu_{2})\\ &+\log\frac{det[\Sigma_{\alpha_{u}}]}{det[\Sigma_{1}]^{1-\alpha_{u}}det[\Sigma_{2}]^{\alpha_{u}}}-D\Big),\\ JS^{G_{\alpha_{u}}}_{*}(\mathcal{N}_{1},\mathcal{N}_{2})=\ &\frac{1}{2}\Big(\log\frac{det[\Sigma_{1}]^{1-\alpha_{u}}det[\Sigma_{2}]^{\alpha_{u}}}{det[\Sigma_{\alpha_{u}}]}+\alpha_{u}\mu_{2}^{T}\Sigma_{2}^{-1}\mu_{2}\\ &-\mu_{\alpha_{u}}^{T}\Sigma_{\alpha_{u}}^{-1}\mu_{\alpha_{u}}+(1-\alpha_{u})\mu_{1}^{T}\Sigma_{1}^{-1}\mu_{1}\Big),\end{split} \tag{7}\]

_where \(\Sigma_{\alpha_{u}}=((1-\alpha_{u})\Sigma_{1}^{-1}+\alpha_{u}\Sigma_{2}^{-1})^{-1}\) and \(\mu_{\alpha_{u}}=\Sigma_{\alpha_{u}}((1-\alpha_{u})\Sigma_{1}^{-1}\mu_{1}+\alpha_{u}\Sigma_{2}^{-1}\mu_{2})\), \(D\) denotes the number of dimensions, and \(det[\cdot]\) represents the determinant._

With Theorem 1, one can calculate \(JS^{G_{\alpha_{u}}}\) or its dual form \(JS^{G_{\alpha_{u}}}_{*}\) based on the mean vector and the variance vector, and use \(JS^{G_{\alpha_{u}}}\) or \(JS^{G_{\alpha_{u}}}_{*}\) to replace the original KL divergence term in the ELBO (Eq. (4)) for training the whole framework. When the two distributions are diagonal Gaussians, \(\Sigma_{1}\) and \(\Sigma_{2}\) can be implemented by diagonal matrices with the variance vectors for calculating \(JS^{G_{\alpha_{u}}}\) or \(JS^{G_{\alpha_{u}}}_{*}\).
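For the diagonal-Gaussian case, the closed-form expressions of Theorem 1 reduce to element-wise operations on the mean and variance vectors. The sketch below illustrates this; it assumes diagonal covariances stored as variance vectors, and all names are illustrative rather than the authors' implementation.

```python
import torch

def js_g_alpha(mu1, var1, mu2, var2, u_c, u_t, eps=1e-8):
    """Hedged sketch of Theorem 1 for diagonal Gaussians.
    u_c, u_t: average prediction uncertainties of context/target points."""
    alpha = u_c / (u_c + u_t + eps)             # uncertainty-guided skew alpha_u
    prec = (1 - alpha) / var1 + alpha / var2    # (1-a)*Sigma1^-1 + a*Sigma2^-1 (diagonal)
    var_a = 1.0 / prec                          # Sigma_alpha
    mu_a = var_a * ((1 - alpha) * mu1 / var1 + alpha * mu2 / var2)
    tr = (((1 - alpha) * var1 + alpha * var2) / var_a).sum()
    quad = ((1 - alpha) * (mu_a - mu1) ** 2 / var_a).sum() \
         + (alpha * (mu_a - mu2) ** 2 / var_a).sum()
    logdet = (torch.log(var_a) - (1 - alpha) * torch.log(var1)
              - alpha * torch.log(var2)).sum()
    D = mu1.shape[-1]
    return 0.5 * (tr + quad + logdet - D)
```

In NP-Match, `u_c` and `u_t` would correspond to \(u_{c_{avg}}\) and \(u_{t_{avg}}\), the average prediction entropies described in Section 3.2.3.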
During training, we followed previous work [7, 8, 9, 15] to utilize the exponential moving average (EMA) technique. Note that, in the implementation, NP-Match only preserves the averaged representation over all representations in each memory bank after training, which just takes up negligible storage space. ## 4 Experiments We now report our experiments on five public image classification benchmarks under three different semi-supervised image classification settings. For readability, implementation details are given in the supplementary material. ### _Datasets_ For standard semi-supervised image classification, we conducted our experiments on four widely used public SSL benchmarks, including CIFAR-10 [82], CIFAR-100 [82], STL-10 [83], and ImageNet [84]. CIFAR-10 and CIFAR-100 contain 50,000 images of size \(32\times 32\) from 10 and 100 classes, respectively. We evaluated NP-match on these two datasets following the evaluation settings used in previous works [7, 8, 15]. The STL-10 dataset has 5000 labeled samples with size \(96\times 96\) from 10 classes and 100,000 unlabeled samples, and it is more difficult than CIFAR, since STL-10 has a number of out-of-distribution images in the unlabeled set. We follow the experimental settings for STL-10 as detailed in [8]. Finally, ImageNet contains around 1.2 million images \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Dataset & \multicolumn{3}{c}{CIFAR-10} & \multicolumn{3}{c}{STL-10} \\ \hline Label Amount & 40 & 250 & 4000 & 40 & 250 & 1000 & 40 & 250 & 1000 \\ \hline MixMatch [13] & 36.19 (\(\pm\)6.48) & 13.63 (\(\pm\)0.59) & 6.66 (\(\pm\)0.26) & 67.59 (\(\pm\)0.66) & 39.76 (\(\pm\)0.48) & 27.78 (\(\pm\)0.29) & 54.93 (\(\pm\)0.96) & 34.52 (\(\pm\)0.32) & 21.70 (\(\pm\)0.68) \\ RekMatch [14] & 9.88 (\(\pm\)1.03) & 5.06 (\(\pm\)0.05) & 4.84 (\(\pm\)0.01) & 42.75 (\(\pm\)1.05) & 26.03 (\(\pm\)0.35) & **20.02** (\(\pm\)0.27) & 32.12 (\(\pm\)6.24) & 12.49 (\(\pm\)1.28) & 6.74 (\(\pm\)0.14) \\ UDA [38] & 10.62 (\(\pm\)3.75) & 5.16 (\(\pm\)0.06) & 4.29 (\(\pm\)0.07) & 4.39 (\(\pm\)1.99) & 27.73 (\(\pm\)0.21) & 22.49 (\(\pm\)0.23) & 37.42 (\(\pm\)6.84) & 9.72 (\(\pm\)1.15) & 6.64 (\(\pm\)0.17) \\ CoMatch [15] & 6.88 (\(\pm\)0.92) & 4.90 (\(\pm\)0.35) & 4.06 (\(\pm\)0.03) & 40.02 (\(\pm\)1.11) & 27.01 (\(\pm\)0.21) & 21.83 (\(\pm\)0.23) & 31.77 (\(\pm\)2.56) & 11.56 (\(\pm\)1.27) & 8.66 (\(\pm\)0.41) \\ SemCo [16] & 7.87 (\(\pm\)0.22) & 2.12 (\(\pm\)0.27) & **3.89** (\(\pm\)0.08) & 44.11 (\(\pm\)1.18) & 31.93 (\(\pm\)0.33) & 24.45 (\(\pm\)0.12) & 34.17 (\(\pm\)2.87) & 12.23 (\(\pm\)1.40) & 7.49 (\(\pm\)0.29) \\ Meta Pseudo Labels [10] & 6.93 (\(\pm\)0.17) & 4.94 (\(\pm\)0.01) & 3.89 (\(\pm\)0.07) & 44.23 (\(\pm\)0.09) & 27.68 (\(\pm\)0.02) & 22.48 (\(\pm\)0.18) & 34.29 (\(\pm\)3.29) & 9.90 (\(\pm\)0.96) & 6.45 (\(\pm\)0.26) \\ FlexMatch [8] & 4.96 (\(\pm\)0.06) & 4.98 (\(\pm\)0.09) & 4.19 (\(\pm\)0.01) & 39.94 (\(\pm\)1.62) & 26.49 (\(\pm\)0.20) & 21.90 (\(\pm\)0.15) & 29.15 (\(\pm\)4.16) & **8.23** (\(\pm\)0.39) & 5.77 (\(\pm\)0.18) \\ SimMatch [81] & 5.63 (\(\pm\)0.72) & 4.50 (\(\pm\)0.04) & 3.97 (\(\pm\)0.03) & 39.29 (\(\pm\)0.55) & **25.21** (\(\pm\)0.17) & 20.63 (\(\pm\)0.05) & 25.13 (\(\pm\)0.76) & 8.72 (\(\pm\)0.45) & 6.11 (\(\pm\)0.19) \\ UPS [9] & 5.26 (\(\pm\)0.29) & 5.11 (\(\pm\)0.05) & 4.25 (\(\pm\)0.05) & 41.07 (\(\pm\)1.66) & 27.14 (\(\pm\)0.20) & 21.97 (\(\pm\)0.23) & 30.82 (\(\pm\)1.26) & 9.77 (\(\pm\)0.44) & 6.02 (\(\pm\)0.28) \\ FixMatch [7] & 7.47 (\(\pm\)0.28) & **4.86** (\(\pm\)0.05) & 4.21 (\(\pm\)0.08) & 46.42 
(\(\pm\)0.88) & 28.03 (\(\pm\)0.16) & 22.20 (\(\pm\)0.12) & 35.96 (\(\pm\)1.44) & 9.81 (\(\pm\)1.04) & 6.25 (\(\pm\)0.33) \\ NP-Match (ours) & **4.91** (\(\pm\)0.04) & 4.96 (\(\pm\)0.06) & 4.11 (\(\pm\)0.02) & **38.91** (\(\pm\)0.99) & 26.03 (\(\pm\)0.26) & 21.22 (\(\pm\)0.13) & **14.20** (\(\pm\)0.67) & 9.51 (\(\pm\)0.37) & **5.59** (\(\pm\)0.24) \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison with SOTA results on CIFAR-10, CIFAR-100, and STL-10. The error rates are reported with standard deviation. \begin{table} \begin{tabular}{c|c c c} \hline \hline Dataset & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{STL-10} \\ \hline Label Amount & 40 & 250 & 4000 & 40 & 250 & 1000 \\ \hline UPS (MC Dropout) & 7.96 & 7.02 & **5.82** & 17.23 & 9.65 & 5.69 \\ NP-Match & **7.23** & **6.85** & 5.89 & **12.45** & **8.72** & **5.28** \\ \hline \hline \end{tabular} \end{table} TABLE II: Expected UCEs (%) of the MC-dropout-based model (i.e., UPS [9]) and of NP-Match on the test sets of CIFAR-10 and STL-10. Fig. 3: Time consumption of estimating uncertainty for the MC-dropout-based model (i.e., UPS [9]) and NP-Match. The horizontal axis refers to the number of predictions used for the uncertainty quantification, and the vertical axis indicates the time consumption (sec). \begin{table} \begin{tabular}{c|c c c} \hline \hline & Method & Top-1 & Top-5 \\ \hline \multirow{2}{*}{Deterministic Methods} & FixMatch [7] & 43.66 & 21.80 \\ & FlexMatch [8] & 41.85 & 19.48 \\ & CoMatch [15] & 42.17 & 19.64 \\ \hline \multirow{2}{*}{Probabilistic Methods} & UPS [9] & 42.69 & 20.23 \\ & NP-Match & **41.78** & **19.33** \\ \hline \hline \end{tabular} \end{table} TABLE III: Error rates of SOTA methods on ImageNet. Fig. 2: Analysis of class-wise uncertainty and accuracy on CIFAR-10 and STL-10. For each class, all samples are collected, and their average uncertainty and the accuracy are calculated. from 1000 classes. Following the experimental settings in [8], we used 100K labeled data, namely, 100 labels per class. For the imbalanced semi-supervised image classification task, we still chose CIFAR-10 [82], CIFAR-100 [82], and STL-10 [83], but with imbalanced class distribution for both labeled and unlabeled data. By following [60], the imbalanced settings were achieved by exponentially decreasing the number of samples within each class. Specifically, the head class size is denoted as \(N_{1}\) (\(M_{1}\)) and the imbalance ratio is denoted as \(\gamma_{l}\) (\(\gamma_{u}\)) for labeled (unlabeled) data separately, where \(\gamma_{l}\) and \(\gamma_{u}\) are independent from each other. For the multi-label semi-supervised image classification task, we chose a widely-used medical dataset, named Chest X-Ray14. Chest X-Ray14 is a collection of 112,120 chest X-ray images from 30,805 patients, with 14 labels (each label is a disease) and _No Finding_ class. Note that each patient can have more than one label, leading to a multi-label classification problem. We employed the official training and testing data split, and we followed [85] to use area under the ROC curve (AUC) as the evaluation metric. ### _Main Results_ #### 4.2.1 Semi-Supervised Image Classification Experimental Results In the following, we report the experimental results on the accuracy, the average uncertainty, the expected uncertainty calibration error, and the running time of NP-Match compared with SOTA approaches. First, in Table I, we compare NP-Match with SOTA semi-supervised image classification methods on CIFAR-10, CIFAR-100, and STL-10. 
We see that NP-Match outperforms SOTA results or achieves competitive results under different SSL settings. We highlight two key observations. First, NP-Match outperforms all other methods by a wide margin on all three benchmarks under the most challenging settings, where the number of labeled samples is smallest. Second, NP-Match is compared to UPS4, since the UPS framework is the MC-dropout-based probabilistic model for semi-supervised image classification, and NP-Match completely outperforms it on all three benchmarks. This suggests that NPs can be a good alternative to MC dropout in probabilistic approaches to semi-supervised learning tasks.

Footnote 4: Note that UPS [9] does not use strong augmentations, thus we re-implemented it with RandAugment [76] for fair comparisons.

Second, we analyse the relationship between the average class-wise uncertainty and accuracy at the test phase on CIFAR-10 and STL-10. From Figure 2, we empirically observe that: (1) when more labeled data are used for training, the average uncertainty of samples' predictions for each class decreases, which is consistent with the property of NPs and GPs that the model is less uncertain about its prediction when more real and correct labels are leveraged for training; and (2) classes with higher average uncertainties have lower accuracy, meaning that the uncertainty is a good standard for choosing unlabeled samples.

\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
 & \multicolumn{4}{c}{CIFAR-10-LT} & \multicolumn{4}{c}{CIFAR-100-LT} \\
 & \multicolumn{2}{c}{\(\gamma=\gamma_{l}=\gamma_{u}=100\)} & \multicolumn{2}{c}{\(\gamma=\gamma_{l}=\gamma_{u}=150\)} & \multicolumn{2}{c}{\(\gamma=\gamma_{l}=\gamma_{u}=10\)} & \multicolumn{2}{c}{\(\gamma=\gamma_{l}=\gamma_{u}=20\)} \\
\cline{2-9}
Methods & \(N_{1}\)=500 & \(N_{1}\)=1500 & \(N_{1}\)=500 & \(N_{1}\)=1500 & \(N_{1}\)=50 & \(N_{1}\)=150 & \(N_{1}\)=50 & \(N_{1}\)=150 \\
 & \(M_{1}\)=4000 & \(M_{1}\)=3000 & \(M_{1}\)=4000 & \(M_{1}\)=3000 & \(M_{1}\)=400 & \(M_{1}\)=300 & \(M_{1}\)=400 & \(M_{1}\)=300 \\
\hline
DARP [55] & 74.50 (\(\pm\)0.78) & 77.80 (\(\pm\)0.63) & 67.20 (\(\pm\)0.32) & 73.60 (\(\pm\)0.73) & 49.40 (\(\pm\)0.20) & 58.10 (\(\pm\)0.44) & 43.40 (\(\pm\)0.87) & 52.20 (\(\pm\)0.66) \\
CReST [56] & 76.30 (\(\pm\)0.86) & 78.10 (\(\pm\)0.42) & 67.50 (\(\pm\)0.45) & 73.70 (\(\pm\)0.34) & 44.50 (\(\pm\)0.94) & 57.40 (\(\pm\)0.18) & 40.10 (\(\pm\)1.28) & 52.10 (\(\pm\)0.21) \\
DASO [60] & 76.00 (\(\pm\)0.37) & 79.10 (\(\pm\)0.75) & 70.10 (\(\pm\)1.81) & 75.10 (\(\pm\)0.77) & 49.80 (\(\pm\)0.24) & 59.20 (\(\pm\)0.35) & 43.60 (\(\pm\)0.09) & 52.90 (\(\pm\)0.32) \\
ABC [59] + DASO [60] & 80.10 (\(\pm\)1.16) & 83.40 (\(\pm\)0.31) & 70.60 (\(\pm\)0.80) & 80.40 (\(\pm\)0.56) & 50.20 (\(\pm\)0.26) & 60.00 (\(\pm\)0.32) & 44.50 (\(\pm\)0.25) & **53.50** (\(\pm\)0.53) \\
LA [52] + DARP [55] & 76.60 (\(\pm\)0.92) & 80.80 (\(\pm\)0.62) & 68.20 (\(\pm\)0.94) & 76.70 (\(\pm\)1.13) & 50.50 (\(\pm\)0.78) & 59.90 (\(\pm\)0.32) & 44.40 (\(\pm\)0.65) & 53.80 (\(\pm\)0.43) \\
LA [52] + CReST [56] & 76.70 (\(\pm\)1.13) & 81.10 (\(\pm\)0.57) & 70.90 (\(\pm\)1.18) & 77.90 (\(\pm\)0.21) & -- & 57.10 (\(\pm\)0.55) & 46.00 (\(\pm\)0.55) & 52.30 (\(\pm\)0.20) \\
LA [52] + DASO [60] & 77.90 (\(\pm\)0.88) & 82.50 (\(\pm\)0.08) & 70.10 (\(\pm\)1.68) & 79.00 (\(\pm\)2.23) & 50.70 (\(\pm\)0.51) & **66.00** (\(\pm\)0.71) & 41.10 (\(\pm\)0.61) & 55.10 (\(\pm\)0.72) \\
\hline
DASO w. UPS [9] & 75.44 (\(\pm\)0.79) & 78.11 (\(\pm\)0.43) & 69.64 (\(\pm\)1.01) & 74.39 (\(\pm\)0.83) & 49.16 (\(\pm\)0.33) & 57.87 (\(\pm\)0.33) & 43.02 (\(\pm\)0.38) & 52.23 (\(\pm\)0.61) \\
LA + DASO w. UPS [9] & 78.89 (\(\pm\)0.24) & 81.24 (\(\pm\)0.54) & 71.39 (\(\pm\)0.78) & 77.83 (\(\pm\)0.94) & 50.04 (\(\pm\)0.47) & 58.92 (\(\pm\)0.49) & 43.95 (\(\pm\)0.54) & 53.98 (\(\pm\)0.82) \\
ABC + DASO w. UPS [9] & 79.22 (\(\pm\)0.31) & 81.02 (\(\pm\)0.39) & 71.67 (\(\pm\)0.65) & 78.61 (\(\pm\)0.88) & 50.39 (\(\pm\)0.67) & 58.55 (\(\pm\)0.69) & 44.07 (\(\pm\)0.38) & 54.12 (\(\pm\)0.77) \\
\hline
DASO w. NPs & 76.06 (\(\pm\)0.11) & 79.23 (\(\pm\)0.42) & 70.13 (\(\pm\)1.39) & 75.17 (\(\pm\)0.81) & 49.46 (\(\pm\)0.42) & 57.66 (\(\pm\)0.55) & 43.32 (\(\pm\)0.83) & 51.96 (\(\pm\)0.51) \\
LA + DASO w. NPs & **80.44** (\(\pm\)0.42) & **84.02** (\(\pm\)0.23) & **73.24** (\(\pm\)0.94) & **81.25** (\(\pm\)0.87) & **50.97** (\(\pm\)0.55) & 58.77 (\(\pm\)0.69) & **44.65** (\(\pm\)0.66) & 53.86 (\(\pm\)0.31) \\
\hline \hline
\end{tabular}
\end{table} TABLE IV: Comparison with SOTA results on long-tailed CIFAR-10 (CIFAR-10-LT) and long-tailed CIFAR-100 (CIFAR-100-LT) under the \(\gamma_{l}=\gamma_{u}\) setup. The accuracy is reported with standard deviation.

TABLE V: Comparison with SOTA results under the \(\gamma_{l}\neq\gamma_{u}\) setup.

Third, the expected uncertainty calibration error (UCE) of our method is also calculated to evaluate the uncertainty estimation. The expected UCE is used to measure the miscalibration of uncertainty [91], which is an analogue to the expected calibration error (ECE) [92, 93]. A low expected UCE indicates that the model is certain when making accurate predictions and that the model is uncertain when making inaccurate predictions. More details about the expected UCE can be found in previous works [91, 94]. The results of NP-Match and the MC-dropout-based model (i.e., UPS [9]) are shown in Table II; their comparison shows that NP-Match can output more reliable and well-calibrated uncertainty estimates.

Furthermore, we compare the running time of NP-Match and the MC-dropout-based model (i.e., UPS [9]). We use a batch of 16 samples and two network architectures that are widely used in previous works [15, 8, 7, 95], namely, WRN-28-2 on CIFAR-10 (Figure 3 (a)) and WRN-28-8 on CIFAR-100 (Figure 3 (b)). In (a), we observe that when the number of predictions (\(T\)) increases, the time cost of the UPS framework rises quickly, while the time cost of NP-Match grows slowly. In (b), we observe that the time cost gap between these two methods is even larger when a larger model is tested on a larger dataset. This demonstrates that NP-Match is significantly more computationally efficient than MC-dropout-based methods.

Finally, Table III shows the experiments conducted on ImageNet. Here, NP-Match achieves a SOTA performance, suggesting that it is effective at handling challenging large-scale datasets. Note that previous works usually evaluate their frameworks under distinct SSL settings, and thus it is hard to compare different methods directly. Therefore, we followed the training details in the previous work [8] due to our limited computational resources, and we also re-evaluated two other recently proposed methods under the same SSL setting with the same training details, namely, UPS and CoMatch.
#### 4.2.2 Imbalanced Semi-Supervised Image Classification Experimental Results

Concerning the imbalanced semi-supervised image classification task, a recent SOTA method called the distribution-aware semantics-oriented (DASO) framework [60] is used to validate our method, by simply incorporating the NP model into it. We followed the experimental settings in the previous work [60], and the results are shown in Tables IV and V. Note that the pipeline of how the NP model generates and selects pseudo-labels for DASO is the same as described in Section 3.2.2; we therefore refer to the framework that combines NP-Match with DASO for imbalanced semi-supervised image classification as "DASO w. NPs". Based on the accuracies in Tables IV and V, we summarize our findings as follows. First of all, the NP model does not perform well when both labeled and unlabeled data have the same imbalanced distribution, since we can observe minor performance drops in some experimental settings in Table IV when DASO is equipped with NPs. However, the logit adjustment strategy [52] benefits "DASO w. NPs" more than the original DASO framework, achieving new SOTA results in all imbalanced settings on CIFAR-10 and competitive results on CIFAR-100. Second, concerning another MC-dropout-based probabilistic method for SSL, namely, UPS [9], we combined it with DASO and found that it harms the performance in most imbalanced settings. Even though we used two different strategies [52, 59] to rectify the bias towards majority classes, the performance of "DASO w. UPS" is still worse than that of "DASO w. NPs" in most cases, which demonstrates the superiority of the NP model over UPS and MC dropout for imbalanced semi-supervised image classification. Third, the NP model has a strong capability to handle the situation where the imbalanced distribution of labeled data and that of unlabeled data differ; as shown in Table V, it not only improves the accuracy of DASO in most imbalanced settings, but also outperforms UPS [9] by a healthy margin.
\begin{table}
\begin{tabular}{c|c c c c|c c c c}
\hline \hline
Method Type & \multicolumn{4}{c|}{Consistency based} & \multicolumn{4}{c}{Pseudo-labelling} \\
\hline
Method & MT [87] & SRC-MT [88] & \(S^{2}\)MTS\({}^{2}\) [89] & GraphXNet [90] & UPS [9] & ACPL [85] & ACPL-UPS & ACPL-NPs \\
\hline
Atelectasis & 75.12 & 75.38 & 77.45 & 72.03 & 76.87 & 77.25 & 77.08 & **77.54** \\
Cardiomegaly & 87.37 & 87.70 & 86.84 & **88.21** & 86.01 & 84.68 & 85.26 & 85.30 \\
Effusion & 80.81 & 81.58 & 82.11 & 79.52 & 81.12 & **83.13** & 82.27 & 82.95 \\
Infiltration & 70.67 & 70.40 & 70.32 & **71.64** & 71.02 & 71.26 & 71.04 & 71.03 \\
Mass & 77.72 & 78.03 & **82.82** & 80.29 & 81.59 & 81.68 & 81.79 & 82.39 \\
Nodule & 73.27 & 73.64 & 75.29 & 71.13 & **76.89** & 76.00 & 76.42 & 75.85 \\
Pneumonia & 69.17 & 69.27 & 72.66 & **76.28** & 71.44 & 73.66 & 73.11 & 72.78 \\
Pneumothorax & 85.63 & 86.12 & 86.78 & 84.24 & 86.02 & 86.08 & 86.12 & **86.80** \\
Consolidation & 72.51 & 73.11 & 74.21 & 73.24 & 74.38 & **74.48** & **74.48** & 74.34 \\
Edema & 82.72 & 82.94 & 84.23 & 81.77 & 82.88 & 84.23 & 83.91 & **84.61** \\
Emphysema & 88.16 & 88.98 & 91.55 & 84.89 & 90.17 & 92.47 & 91.99 & **92.69** \\
Fibrosis & 78.24 & 79.22 & 81.29 & 81.25 & 80.54 & **81.97** & 81.55 & 80.94 \\
Pleural Thickening & 74.43 & 75.63 & 77.02 & 76.23 & 76.13 & 76.92 & 76.53 & **77.05** \\
Hernia & 87.74 & 87.27 & 85.64 & 86.89 & 84.12 & 84.49 & 85.10 & **89.08** \\
\hline
Mean & 78.83 & 79.23 & 80.58 & 79.12 & 79.94 & 80.59 & 80.48 & **80.95** \\
\hline \hline
\end{tabular}
\end{table} TABLE VI: Class-level AUC testing set results comparison on Chest X-Ray14 based on DenseNet-169 [86], under the 20% of labelled data setting. Bold numbers denote the best result per class, and underlined numbers the second-best result in each class.

#### 4.2.3 Multi-Label Semi-Supervised Image Classification Experimental Results

We adopted a recent SOTA method, named anti-curriculum pseudo-labelling (ACPL) [85], and integrated our NP model into it6, which is denoted as "ACPL-NPs" in Table VI. Similarly, the pipeline of how the NP model generates and selects pseudo-labels for ACPL is the same as described in Section 3.2.2; we therefore regard "ACPL-NPs" as the framework that combines NP-Match with ACPL for multi-label semi-supervised image classification. For a fair comparison with another probabilistic approach (i.e., MC dropout), we combined UPS [9] with ACPL, which is denoted as "ACPL-UPS".

Footnote 6: The details about how the NP model is combined with ACPL are shown in the supplementary material.

According to Table VI, we can summarize the following observations. First, after incorporating the NP model, ACPL-NPs brings a direct improvement over ACPL in terms of the mean AUC, and also outperforms other SOTA methods, indicating that the NP model and our NP-Match pipeline introduce more reliable pseudo-labels for the label ensemble process. Besides, compared against ACPL, ACPL-NPs is able to provide uncertainty estimates, based on which clinicians can further double-check the results in a real-world scenario. Second, compared to ACPL, ACPL-UPS performs worse due to the introduced MC dropout. Since the pseudo-labels from the model are selected based on both confidence scores and uncertainty, we empirically consider that the performance drop is caused by the inconsistency between the accuracy and the uncertainty, which is supported by the results in Table II to some extent.
Third, it is crucial to notice the performance gap between ACPL-NPs and ACPL-UPS, which demonstrates that NP-Match is a superior probabilistic model for SSL.

### _Ablation Studies_

We only conducted our ablation studies on the standard semi-supervised image classification task, and the results on CIFAR-10, CIFAR-100, and STL-10 are reported. The ablation studies contain two parts. First, we evaluated the uncertainty-guided skew-geometric JS divergence and its dual form to verify which one is more suitable for SSL. Second, we ran experiments on the hyperparameters related to the NP model in NP-Match, in order to explore how performance is affected by changes in the hyperparameters, which may provide some hints to readers for applying NP-Match to other datasets.

#### 4.3.1 Uncertainty-guided Skew-geometric JS Divergence

We evaluate our uncertainty-guided skew-geometric JS divergence (\(JS^{G_{\alpha_{u}}}\)) as well as its dual form (\(JS^{G_{\alpha_{u}}}_{*}\)), and compare them to the original KL divergence in NPs. In Table VII, we see that NP-Match with KL divergence consistently underperforms relative to our proposed \(JS^{G_{\alpha_{u}}}\) and \(JS_{*}^{G_{\alpha_{u}}}\). This suggests that our uncertainty-guided skew-geometric JS divergence can mitigate the problem caused by low-quality feature representations. Between the two, \(JS^{G_{\alpha_{u}}}\) and \(JS_{*}^{G_{\alpha_{u}}}\) achieve a comparable performance across the three benchmarks, and thus we select \(JS^{G_{\alpha_{u}}}\) to replace the original KL divergence in the ELBO (Eq. (4)) for the comparisons to previous SOTA methods in Section 4.2.

\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline \hline
Dataset & \multicolumn{3}{c}{CIFAR-10} & \multicolumn{3}{c}{CIFAR-100} & \multicolumn{3}{c}{STL-10} \\
\hline
Label Amount & 40 & 250 & 4000 & 400 & 2500 & 10000 & 40 & 250 & 1000 \\
\hline
NP-Match with KL & 5.32 (\(\pm\)0.06) & 5.20 (\(\pm\)0.02) & 4.36 (\(\pm\)0.03) & 39.15 (\(\pm\)1.53) & 26.48 (\(\pm\)0.23) & 21.51 (\(\pm\)0.17) & 14.67 (\(\pm\)0.38) & 9.92 (\(\pm\)0.24) & 6.21 (\(\pm\)0.23) \\
NP-Match with \(JS^{G_{\alpha_{u}}}_{*}\) & 4.93 (\(\pm\)0.02) & **4.87** (\(\pm\)0.03) & 4.19 (\(\pm\)0.04) & **38.67** (\(\pm\)1.29) & 26.24 (\(\pm\)0.17) & 21.33 (\(\pm\)0.10) & 14.45 (\(\pm\)0.55) & **9.48** (\(\pm\)0.28) & **5.47** (\(\pm\)0.19) \\
NP-Match with \(JS^{G_{\alpha_{u}}}\) & **4.91** (\(\pm\)0.04) & 4.96 (\(\pm\)0.06) & **4.11** (\(\pm\)0.02) & 38.91 (\(\pm\)0.99) & **26.03** (\(\pm\)0.26) & **21.22** (\(\pm\)0.13) & **14.20** (\(\pm\)0.67) & 9.51 (\(\pm\)0.37) & 5.59 (\(\pm\)0.24) \\
\hline \hline
\end{tabular}
\end{table} TABLE VII: Ablation studies of the proposed uncertainty-guided skew-geometric JS divergence and its dual form.

Fig. 4: Performance for different hyperparameters. Four hyperparameters are explored, including the uncertainty threshold (\(\tau_{u}\)), the length of memory banks (\(\mathcal{Q}\)), the coefficient of \(JS^{G_{\alpha_{u}}}\) (\(\beta\)), and the number of sampled latent vectors (\(T\)).

#### 4.3.2 Hyperparameter Exploration

For the hyperparameter exploration, we consider four hyperparameters in total, including the uncertainty threshold (\(\tau_{u}\)), the length of memory banks (\(\mathcal{Q}\)), the coefficient of \(JS^{G_{\alpha_{u}}}\) (\(\beta\)), and the number of sampled latent vectors (\(T\)). As Figure 4(a) shows, a reasonable \(\tau_{u}\) is important.
Specifically, a lower \(\tau_{u}\) usually leads to worse performance, because a lower \(\tau_{u}\) forces NP-Match to select a limited number of unlabeled data during training, which is equivalent to training the whole framework with a small dataset. Conversely, when \(\tau_{u}\) is too large, more uncertain unlabeled samples are chosen, whose pseudo-labels might be incorrect, and using these uncertain samples to train the framework can also lead to poor performance. Furthermore, the difficulty of a training set also affects the setting of \(\tau_{u}\), as a more difficult dataset usually has more classes and hard samples (e.g., ImageNet), which makes the uncertainties of predictions large, so that \(\tau_{u}\) should be adjusted accordingly. From Figure 4(b), the performance improves as \(\mathcal{Q}\) increases. When more context points are used, more information is available for inference, and the NP model can better estimate the global latent variable and make predictions. This observation is consistent with the experimental results where the original NPs are used for image completion [25]. Figure 4(c) shows the ablation study of \(\beta\), which controls the contribution of \(JS^{G_{\alpha_{u}}}\) to the total loss function; when \(\beta=0.01\), we obtain the best accuracy on both datasets. As Figure 4(d) shows, the performance drops if \(T\) is smaller than 5, but further increasing \(T\) does not greatly influence the performance of NP-Match.

## 5 Summary and Outlook

In this work, we proposed the application of neural processes (NPs) to semi-supervised learning (SSL), designing a new framework called NP-Match, and explored its use in semi-supervised large-scale image classification. To our knowledge, this is the first such work. To better adapt NP-Match to the SSL task, we proposed a new divergence term, which we call the uncertainty-guided skew-geometric JS divergence, to replace the original KL divergence in NPs. We demonstrated the effectiveness of NP-Match and the proposed divergence term for SSL in extensive experiments, and also showed that NP-Match can be a good alternative to MC dropout in SSL. Future works will explore the following two directions. First, given the successful application of NPs to semi-supervised image classification, it is valuable to explore NPs in other SSL tasks, such as object detection and segmentation. Second, many successful NP variants have been proposed since the original NPs [25] (see Section 2). We will also explore these in SSL for image classification.

## Acknowledgments

This work was partially supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1, by the AXA Research Fund, and by the EPSRC grant EP/R013667/1. We also acknowledge the use of the EPSRC-funded Tier 2 facility JADE (EP/P02075/1) and GPU computing support by Scan Computers International Ltd.
2309.10642
Correcting Selection Bias in Standardized Test Scores Comparisons
This paper addresses the issue of sample selection bias when comparing countries using international assessments like PISA (Program for International Student Assessment). Despite its widespread use, PISA rankings may be biased due to different attrition patterns in different countries, leading to inaccurate comparisons. This study proposes a methodology to correct for sample selection bias using a quantile selection model. Applying the method to PISA 2018 data, I find that correcting for selection bias significantly changes the rankings (based on the mean) of countries' educational performances. My results highlight the importance of accounting for sample selection bias in international educational comparisons.
Onil Boussim
2023-09-19T14:22:26Z
http://arxiv.org/abs/2309.10642v4
# Testing and correcting sample selection in academic achievement comparisons

###### Abstract.

Country comparisons using standardized test scores may in some cases be misleading unless we make sure that the potential sample selection bias created by drop-outs and non-enrollment patterns does not alter the analysis. In this paper, I propose an answer to this issue, which consists of identifying the counterfactual distribution of achievement (that is, the distribution of achievement if there were hypothetically no selection) from the observed distribution of achievements. International comparison measures like means, quantiles, and inequality measures have to be computed using that counterfactual distribution, which is statistically closer to the observed one when the proportion of out-of-school children is low. I identify the quantiles of that latent distribution by readjusting the percentile levels of the observed quantile function of achievement. Because the data on test scores are by nature truncated, I have to rely on auxiliary data to borrow identification power. I finally apply my method to compute selection-corrected means using PISA 2018 and PASEC 2019, and I find that rankings/comparisons can change.

_Keywords_: Pisa, Pasec, Sample selection, International student achievement tests, Learning poverty. _JEL codes_: C34, C83, I20

The first version is TBD. This version is of October 19, 2023.

## 1. Introduction

Standardized test1 data on student achievement have been widely used for economic analysis and country comparisons (see Nagy (1996), Martin et al. (2000), McEwan and Marshall (2004), Cromley (2009), Tienken (2008), McGaw (2008), and Jakubowski and Pokropek (2015)). Also, the rankings derived from those tests are of great interest to policymakers in the area of education: they can sometimes be used as a motivation to adjust programs or as a justification to transfer educational reforms from other countries (see Feniger and Lefstein (2014), Taylor and Henry (2007), and Suleyman (2020)). Even within a given country, most measures of inequality in educational achievement are based on distributions of standardized test scores. However, many critics suggest that the data considered in the comparisons may suffer from sample selectivity due to countries' differences in enrollment patterns, and can therefore lead to inaccurate comparisons (see Rotberg (1995), Berliner (1993), Ferreira and Gignoux (2014)). In fact, standardized test samples may in various cases not be representative of the underlying population of children in the age group of comparison if a non-negligible proportion of them are out of school. This cannot be ignored, given that enrollment is correlated with student characteristics that are also supportive of higher test scores (family socioeconomic background, area of living, and some unobserved factors). For example, Rotberg (1990) found that student selectivity is positively correlated with higher results on science and maths achievement tests.

Footnote 1: PISA (Programme for International Student Assessment), PIRLS (Progress in International Reading Literacy Study), TIMSS (Trends in International Maths and Science Study), PASEC (Programme for the Analysis of Education Systems), ...

On the other hand, Hanushek and Woessmann (2011) found a positive correlation between the enrollment rate and the mean of test scores by combining PISA (2000, 2003) and TIMSS (1995, 1999, 2003) data.
This finding may seem counter-intuitive, since we just explained that students with potentially low achievement are more likely to leave school, which would suggest that a higher out-of-school proportion should result in a higher mean (upward bias). However, the authors explain this discrepancy with the following arguments. Firstly, almost all developed countries have a 100% enrollment rate, so we should only be concerned with selection bias when comparing countries at different levels of development. Secondly, at the country level, any bias due to low enrollment rates may be outweighed by the fact that such rates are a sign of a generally underdeveloped or dysfunctional education system. Because of these two views, it is not clear whether we should worry about selection bias when comparing countries or simply ignore it. One may think that if the distribution of counterfactual achievement is statistically the same as the distribution of observed achievement, then selection bias is not relevant and has no impact on any economic analysis or comparisons.

On top of that, using standardized test scores on reading, the World Bank developed the concept of learning poverty (see Azevedo (2020)). It is a simple index whose aim is to evaluate the effectiveness of the primary school system of a country. The idea of the index is to measure the proportion of children unable to read by age 10 as a proxy of effectiveness. It is computed by first multiplying the share of children in school who have not achieved minimum reading proficiency by the probability of being enrolled in school, and then adjusting the result by adding the proportion of children who are out of school (all of whom are assumed unable to read proficiently). This assumption may be strong in a context where some children have dropped out before reaching the considered grade of analysis but still have a good reading ability, or are simply home-schooled. Identifying the counterfactual distribution of achievements if all the students were schooled may help define another measure of primary school effectiveness in the same logic as learning poverty, by a simple adjustment of that index without the mentioned implicit assumption. Hence, we would be able to compute the probability of being below the minimum proficiency level based on that distribution.

Considering the reasons given above, the identification of the counterfactual distribution of achievement becomes a relevant question. It is an interesting econometric problem, and the first challenge is that we are confronted with truncated data: non-enrolled children are not observed at all in the data on test scores the researcher has access to. In that sense, it differs from the context of correcting the distribution of wages on the basis of surveys that contain information on both labor force participants and non-participants, which is a censored data problem (see Heckman (1974), Arellano and Bonhomme (2017), Chernozhukov et al. (2023), ...). The second challenge is that most methods rely on the existence of a valid instrument, while in our case there is no justifiable good instrument (at least to my knowledge). Because of these two challenges, the identification power is limited, and thus auxiliary data is essential. The econometrics literature offers a vast array of data combination techniques (see Ridder and Moffitt (2007) for an exhaustive survey).
I focus on quantiles, and I show that selection-corrected quantiles (quantiles of the counterfactual distribution) can be obtained by suitably shifting the percentile levels of the observed conditional (on selection) quantile function. I obtain non-parametric partial identification of the selection-corrected quantiles with minimal assumptions, and I use a parametric assumption to get point identification. I finally apply my method to the PISA 2018 and PASEC 2019 data, and I find that the counterfactual distribution of achievement is in many cases different from the one observed in the data, and that rankings/comparisons can change because of that.

The rest of the paper is as follows. In Section 2, I present the selection model. In Section 3, I discuss the identification and implementation of the test. In Section 4, I present the application, and I conclude in Section 5. All detailed proofs can be found in the appendix.

## 2. Selection Model

Let \(Y^{*}\) be the counterfactual achievement (test score in our context) if there is no selection in school. The distribution of \(Y^{*}\) is fully characterized by its quantile function \(q_{Y^{*}}(U)\), where \(U\sim U[0,1]\) is the rank. I consider the following sample selection model:

\[Y^{*}=q_{Y^{*}}(U)\]
\[Y=Y^{*}\text{ if }S=1\]

I will refer to \(U\) as the rank variable. \(S\) is the indicator variable that takes the value \(1\) if the individual has been enrolled in school in the considered year of analysis. We will make more assumptions about \(S\) later.

**Assumption 1** (Existence of auxiliary data).: \(p\equiv\mathbb{P}(S=1)\) _can be identified from auxiliary data._

Assumption 1 states that there exists a dataset that allows us to retrieve the enrollment/coverage rate for the population of interest. This can be any nationally representative survey.

## 3. Identification and Test

In this section, I explain how to recover \(q_{Y^{*}}\). My strategy involves applying a suitable shifting of percentile levels of the already identified quantile function \(q_{Y}\). For \(u\in[0,1]\), define \(G(u)\equiv\mathbb{P}(U\leq u|S=1)\) as the selection-corrected rank. The first important result of this paper is the following lemma, which reveals the shifting of percentile levels:

**Lemma 1**.: _For all \(u\in[0,1]\), we have:_
\[q_{Y^{*}}(u)=q_{Y}(G(u))\]

In the absence of more restrictive assumptions, our quantiles are generally partially identified.

### Partial Identification

In this subsection, I derive sharp bounds for our quantile function.

**Lemma 2**.: _Under Assumption 1, for all \(u\in[0,1]\),_
\[q_{Y}\left(\frac{\max\{u+p-1,0\}}{p}\right)\leq q_{Y^{*}}(u)\leq q_{Y}\left(\frac{\min\{u,p\}}{p}\right)\]

This lemma follows from an application of the Fréchet-Hoeffding inequality to the probability of a joint event. The construction of the bounds only necessitates knowledge of the propensity score. Assuming that I can get the propensity score from auxiliary data, I can compute the bounds. However, one can make the bounds more informative by adding a reasonable structure to the unobservables.
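To make the bounds of Lemma 2 concrete, the following sketch computes them at a given level \(u\) from a sample of observed scores and an enrollment rate \(p\); the plug-in empirical quantile and the simulated scores are assumptions of this illustration, not part of the paper's empirical work.

```python
import numpy as np

def quantile_bounds(scores, p, u):
    """Hedged sketch of the Lemma 2 bounds on q_{Y*}(u):
    q_Y(max{u+p-1,0}/p) <= q_{Y*}(u) <= q_Y(min{u,p}/p)."""
    lower_level = max(u + p - 1.0, 0.0) / p
    upper_level = min(u, p) / p
    q = lambda lvl: np.quantile(scores, min(max(lvl, 0.0), 1.0))
    return q(lower_level), q(upper_level)

# Example: bounds on the median of the counterfactual score distribution.
rng = np.random.default_rng(0)
observed = rng.normal(500, 100, size=5000)   # stand-in for observed test scores
print(quantile_bounds(observed, p=0.75, u=0.5))
```

Under the stochastic dominance assumption introduced next, the upper bound tightens to \(q_{Y}(u)\) (Theorem 1 below).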
**Assumption 2** (Stochastic Dominance).:
\[\forall\,u\in[0,1],\ \mathbb{P}(U\leq u|S=1)\leq\mathbb{P}(U\leq u|S=0)\]
_or equivalently_
\[\forall\,u\in[0,1],\ q_{Y^{*}|S=1}(u)\geq q_{Y^{*}|S=0}(u)\]

This assumption says that, conditional on being enrolled in school, the distribution of potential achievement stochastically dominates the distribution of potential achievement conditional on non-enrollment. In other words, schooling has a non-negative effect on the potential achievements of children. Given Assumption 2, I derive new bounds in the following theorem.

**Theorem 1** (Partial Identification).: _Under Assumptions 1 and 2, the following bounds are valid and sharp: for all \(u\in[0,1]\),_
\[q_{Y^{*}}(u)\in\left[q_{Y}\left(\frac{\max\{u+p-1,0\}}{p}\right),\ q_{Y}(u)\right]\]

This theorem states that, given the model and the assumptions, our latent quantile function for a given \(u\in[0,1]\) lies in the above interval, whose bounds cannot be improved upon without further assumptions. This interval is smaller than the one derived in Lemma 2, since its upper bound is smaller. One interpretation is that the observed quantile function will be exactly equal to the latent quantile function if the selection \(S\) is not relevant. This result clarifies why the observed quantiles are upward biased.

### Point Identification

In the absence of a good instrument, point identification of our quantile function is almost impossible if we do not rely on parametric assumptions. Therefore, in order to make the model more tractable, I make the following assumptions:

**Assumption 3** (Structure on \(S\)).:
_(1) \(S=\mathbb{1}\{V\leq U\}\). (2) \(V\sim P_{\theta_{0}}\) with support contained in \([0,1]\). (3) \(\theta_{0}\) is identified by \(\mathbb{P}(S=1)=\mathbb{P}(V\leq U)\)._

The variables \(U\) and \(V\) are unobserved; \(U\) can be seen as a return and \(V\) as a cost, so the first condition says that children are enrolled in school when their return to schooling is at least as high as their cost. In the second condition, I choose to parameterize the distribution of \(V\) to reach point identification. Assumption 3 gives us a moment condition that is enough to identify the parameter in the distribution of \(V\).

**Theorem 2** (Identification of \(q_{Y^{*}}\)).: _Under Assumptions 1 and 3, \(q_{Y^{*}}\) is identified and we have:_
\[q_{Y^{*}}(u)=q_{Y}\left(\frac{1}{p}\left(uF_{V,\theta_{0}}(u)-\int_{0}^{u}vdF_{V,\theta_{0}}(v)\right)\right)\]
_where \(\theta_{0}\) is such that:_
\[E_{\theta_{0}}(V)=1-p\]

This theorem gives us a practical way to correct for selection. We just need to make sure that the parametric distribution assumed for \(V\) allows us to identify the parameter of interest. If, for example, we choose \(V\sim\beta(1,\theta_{0})\), we have
\[\frac{1}{1+\theta_{0}}=1-p,\]
which leads to
\[\theta_{0}=\frac{1}{1-p}-1\]

From the quantiles, we have:
\[F_{Y^{*}}(y)=\int_{0}^{1}\mathbb{1}\{q_{Y^{*}}(u)\leq y\}du\]
\[\mathbb{E}(Y^{*})=\int_{0}^{1}q_{Y^{*}}(u)du\]

Let \(\bar{y}\) be the minimum proficiency level. One can define a new measure of learning poverty, which we call adjusted learning poverty:
\[ALp=F_{Y^{*}}(\bar{y})\]
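As an illustration of Theorem 2 under the \(V\sim\beta(1,\theta_{0})\) example, note that integration by parts gives \(uF_{V}(u)-\int_{0}^{u}vdF_{V}(v)=\int_{0}^{u}F_{V}(t)dt\), and for \(F_{V}(t)=1-(1-t)^{\theta_{0}}\) this integral has a closed form. The sketch below uses a plug-in empirical quantile function and simulated scores, both of which are assumptions of this illustration rather than the paper's estimation procedure.

```python
import numpy as np

def corrected_quantiles(scores, p, u_grid):
    """Hedged sketch of Theorem 2 with V ~ Beta(1, theta0), theta0 = 1/(1-p) - 1.
    Returns q_{Y*}(u) = q_Y(G(u)) on a grid of levels u."""
    theta = 1.0 / (1.0 - p) - 1.0
    # G(u) = (1/p) * int_0^u F_V(t) dt with F_V(t) = 1 - (1-t)^theta,
    # equal to the Theorem 2 expression after integration by parts.
    G = (u_grid - (1.0 - (1.0 - u_grid) ** (theta + 1.0)) / (theta + 1.0)) / p
    return np.quantile(scores, np.clip(G, 0.0, 1.0))

rng = np.random.default_rng(0)
observed = rng.normal(500, 100, size=5000)   # stand-in for observed test scores
u = np.linspace(0.005, 0.995, 199)
q_star = corrected_quantiles(observed, p=0.75, u_grid=u)
print(q_star.mean())   # approximates the selection-corrected mean E(Y*)
```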
### Test of selection We can test for the relevance of selection for economic analysis or comparisons. The idea is to use a non-parametric test of whether the distribution of \(Y^{*}\) is statistically different from the distribution of \(Y\). Various tests can be used, such as the Kolmogorov-Smirnov test, the Anderson-Darling test, or the Cramér-von Mises test, which assess whether two underlying one-dimensional probability distributions are the same.
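Under the Beta specification, the test can be illustrated end-to-end in a few lines. This sketch is my own, runs on simulated data, and ignores estimation error in \(q_{Y}\) and \(p\) (which a serious implementation would have to account for); it simply compares the observed and counterfactual samples with a two-sample Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
y_obs = rng.normal(500.0, 100.0, 5_000)      # hypothetical observed (selected) scores Y
p = 0.7
theta = p / (1.0 - p)                        # theta0 implied by E[V] = 1 - p

u = rng.uniform(size=5_000)                  # ranks of the full population
g = (u - (1.0 - (1.0 - u) ** (theta + 1.0)) / (theta + 1.0)) / p   # G(u) from Theorem 2
y_star = np.quantile(y_obs, np.clip(g, 0.0, 1.0))                  # draws from Y* = q_Y(G(U))

stat, pval = ks_2samp(y_obs, y_star)         # H0: distributions of Y and Y* coincide
print(stat, pval)
```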
## 4. Applications In this section, I compute estimates of the selection-corrected means to make educational achievement comparisons using PISA 2018 and PASEC 2019 data. I only consider rankings among the countries chosen for the application, and for both applications \(V\sim\beta(1,\theta_{0})\). ### PISA 2018 The Programme for International Student Assessment (PISA) is an international assessment conducted by the Organisation for Economic Co-operation and Development (OECD) that aims to evaluate and compare the educational outcomes of 15-year-old students from different countries around the world. PISA is designed to provide insights into how well students are prepared to meet real-world challenges and to inform education policies and practices. Here I have selected 10 countries that participated in PISA 2018 and I illustrate with them how rankings can change after correcting for selection. Table 1 summarizes the results. From there, we can see that the corrected means are all below the observed ones, evidence that selection creates an upward bias of the mean of the distribution of test scores. Because of that, ranks can change, as we observe in the last column of the table. We can analyze how rankings are affected after the correction of selection. The pattern is that lower values of \(p\) correspond to a larger decrease in the counterfactual mean, which is the key to changing the ranking. Table 1. PISA 2018 ### PASEC 2019 I also use a dataset from the Programme for the Analysis of Education Systems (PASEC) conducted in 2019. It is a large international learning assessment conducted in 14 countries. In this application, I focus on 6 countries, which are: Benin, Burkina Faso, Cote d'Ivoire, Niger, Senegal, and Togo. The sampling is carried out in such a way as to ensure representativity for the population of enrolled children. The children are evaluated in reading and maths in a multiple-choice question format in the country's official language(s) of instruction (see Pasec (2019)). I use the harmonized survey on household living conditions (EHCVM) data to identify \(p=\mathbb{P}(S=1)\). It is a harmonized survey carried out in the period 2018-2019 in WAEMU countries in order to produce nationally representative household survey data in those countries. Here is a table summarizing the results. Since I am interested in comparing grade 6 students (corresponding to late primary school), the corresponding age group is \([10,14]\) according to the World Bank (see Azevedo (2020)). In Table 2, one can see that before the correction, for reading, Cote d'Ivoire had a higher mean (499.37) than Togo (492.10). But after the correction, Togo has a higher mean (473.5) against (463.0) for Cote d'Ivoire. That can be explained by the fact that Togo (almost 90% enrollment) is much less affected by selection than Cote d'Ivoire (around 76% enrollment), while their observed means are close in value. Also, there is a shift of rank in maths for Cote d'Ivoire and Niger. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Country & p & subject & mean (\(Y\)) & mean (\(Y^{*}\)) & rank shift \\ \hline Benin & 0.703 & Maths & 527.59 & 491.3 & 3-3 \\ & & Reading & 576.16 & 533.3 & 1-1 \\ \hline Burkina-Faso & 0.713 & Maths & 551.19 & 506.6 & 2-2 \\ & & Reading & 556.10 & 509.8 & 3-3 \\ \hline Cote d’Ivoire & 0.763 & Maths & 452.88 & 426.8 & 6-5 \\ & & Reading & 499.37 & 463.0 & 4-5 \\ \hline Niger & 0.561 & Maths & 464.03 & 411.5 & 5-6 \\ & & Reading & 476.51 & 415.2 & 6-6 \\ \hline Senegal & 0.695 & Maths & 554.64 & 510.6 & 2-2 \\ & & Reading & 567.63 & 521.9 & 1-1 \\ \hline Togo & 0.896 & Maths & 491.14 & 473.6 & 4-4 \\ & & Reading & 491.82 & 473.5 & 5-4 \\ \hline \end{tabular} \end{table} Table 2. Results PASEC ## 5. Conclusion In this paper, I have introduced a method to account for selection in country comparisons using test score data. The idea consists of estimating quantiles of the counterfactual distribution of achievement if there were no selection, and using that distribution to make comparisons. Under different assumptions, I explain how one can partially identify and point identify the quantiles of that distribution. The results of my application to some countries in PISA 2018 and PASEC 2019 suggest that the observed quantiles are upward biased and that comparisons can actually be affected. ## Appendix A Proofs of the results in the main text ### Proof of Lemma 1 \[G(u) = \mathbb{P}(U\leq u|S=1)\] \[= \mathbb{P}(Y^{*}\leq q_{Y^{*}}(u)|S=1)\] \[= F_{Y^{*}|S=1}(q_{Y^{*}}(u))\] \[= F_{Y}(q_{Y^{*}}(u))\] From there, we have that: \[q_{Y^{*}}(u)=q_{Y}(G(u))\] ### Proof of Lemma 2 \[G(u) = \mathbb{P}(U\leq u|S=1)\] \[= \frac{\mathbb{P}(U\leq u,S=1)}{p}\] Now we apply the Fréchet-Hoeffding bounds to the joint probability, and we obtain: \[\frac{\max\{u+p-1,0\}}{p}\leq G(u)\leq\frac{\min\{u,p\}}{p}\] Combining this with Lemma 1 and using the monotonicity of \(q_{Y}\), we finally obtain: \[q_{Y}\left(\frac{\max\{u+p-1,0\}}{p}\right)\leq q_{Y^{*}}(u)\leq q_{Y}\left(\frac{\min\{u,p\}}{p}\right)\] ### Proof of Theorem 1 **STEP 1**: Validity. First, we need to prove the validity of the inequalities.
By Lemma 2, we already know that: \[q_{Y^{*}}(u)\geq q_{Y}\left(\frac{\max\{u+p-1,0\}}{p}\right)\] We also know that: \[u=\mathbb{P}\left(U\leq u|S=1\right)p+\mathbb{P}\left(U\leq u|S=0\right)(1-p)\] Using the stochastic dominance assumption, we get: \[\mathbb{P}\left(U\leq u|S=1\right)p+\mathbb{P}\left(U\leq u|S=1\right)(1-p) \leq \mathbb{P}\left(U\leq u|S=1\right)p+\mathbb{P}\left(U\leq u|S=0\right)(1-p) = u\] which is simply: \[\mathbb{P}\left(U\leq u|S=1\right)\leq u\] Using these probability bounds and the monotonicity of \(q_{Y}\), we have that \[q_{Y^{*}}(u) \leq q_{Y}(u)\] **STEP 2**: Sharpness. - The dependence between \(U\) and \(S\) can be a counter-monotonic relationship and still be consistent with the data we observe. In that case, we have: \[G(u)=\frac{\max\{u+p-1,0\}}{p}\] - Independence between \(U\) and \(S\) can also produce the observables. In that case: \[G(u)=u\] ### Proof of Theorem 2 We have that: \[G(u) = \mathbb{P}(U\leq u|S=1)\] \[= \frac{1}{p}\left(\mathbb{P}(U\leq u,S=1)\right)\] \[= \frac{1}{p}\left(\mathbb{P}(U\leq u,U\geq V)\right)\] \[= \frac{1}{p}\left(\int\mathbb{P}(U\leq u,U\geq v)dF_{V,\theta_{0}}(v)\right)\] \[= \frac{1}{p}\left(\int_{0}^{u}(u-v)dF_{V,\theta_{0}}(v)\right)\] \[= \frac{1}{p}\left(uF_{V,\theta_{0}}(u)-\int_{0}^{u}vdF_{V,\theta_{0}}(v)\right)\] Now we also have that: \[p = \mathbb{P}(U\geq V)\] \[= \int\mathbb{P}(U\geq v)dF_{V,\theta_{0}}(v)\] \[= \int(1-v)dF_{V,\theta_{0}}(v)\] \[= 1-\mathbb{E}_{\theta_{0}}(V)\] From there, we have: \(\mathbb{E}_{\theta_{0}}(V)=1-p\).
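As a quick numerical sanity check of this derivation (my own sketch, not part of the paper), one can simulate the model of Assumption 3 with \(V\sim\beta(1,\theta)\) and compare the empirical \(G(u)\) with the closed form above; the parameter values here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
theta, n = 2.0, 1_000_000
U = rng.uniform(size=n)
V = rng.beta(1.0, theta, size=n)     # F_V(v) = 1 - (1 - v)**theta on [0, 1]
S = V <= U                            # enrollment rule from Assumption 3

p_hat = S.mean()                      # should approach P(V <= U) = theta / (1 + theta)
u = 0.3
G_emp = (U[S] <= u).mean()            # empirical G(u) = P(U <= u | S = 1)
G_thy = (u - (1.0 - (1.0 - u) ** (theta + 1.0)) / (theta + 1.0)) / p_hat
print(p_hat, G_emp, G_thy)            # G_emp and G_thy should agree closely
```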
2309.16227
Fast rare events in exit times distributions of jump processes
Rare events in the first-passage distributions of jump processes are capable of triggering anomalous reactions or series of events. Estimating their probability is particularly important when the jump probabilities have broad-tailed distributions, and rare events are therefore not so rare. We formulate a general approach for estimating the contribution of fast rare events to the exit probabilities in the presence of fat tailed distributions. Using this approach, we study three jump processes that are used to model a wide class of phenomena ranging from biology to transport in disordered systems, ecology and finance: discrete time random-walks, L\'evy walks and the L\'evy-Lorentz gas. We determine the exact form of the scaling function for the probability distribution of fast rare events, in which the jump process exits from an interval in a very short time at a large distance opposite to the starting point. In particular, we show that events occurring on time scales orders of magnitude smaller than the typical time scale of the process can make a significant contribution to the exit probability. Our results are confirmed by extensive numerical simulations.
Alessandro Vezzani, Raffaella Burioni
2023-09-28T08:09:01Z
http://arxiv.org/abs/2309.16227v2
# Fast rare events in exit times distributions of jump processes ###### Abstract Rare events in the first-passage distributions of jump processes are capable of triggering anomalous reactions or series of events. Estimating their probability is particularly important when the jump probabilities have broad-tailed distributions, and rare events are therefore not so rare. We study three jump processes that are used to model a wide class of stochastic processes ranging from biology to transport in disordered systems, ecology and finance. We consider discrete time random-walks, continuous time random-walks and the Lévy-Lorentz gas and determine the exact form of the scaling function for the probability distribution of fast rare events, in which the jump process exits in a very short time at a large distance opposite to the starting point. For this estimation we use the so-called big jump principle, and we show that in the regime of fast rare events the exit time distributions are not exponentially suppressed, even in the case of normal diffusion. This implies that fast rare events are actually much less rare than predicted by the usual estimates of large deviations and can occur on timescales orders of magnitude shorter than expected. Our results are confirmed by extensive numerical simulations. A stochastic process that reaches a certain threshold value for the first time can trigger many events: a chemical reaction occurs [1], a target is reached [2; 3; 4; 5; 6; 7], the dynamics of a financial process starts [8], biological and ecological processes take place [9; 10]. The study of these triggering events and the probability of their occurrence is based on the knowledge of first passage probabilities [11]. As history-dependent quantities, first-passage probabilities are difficult to determine: general results are typically available for the average first-passage time and its lower-order moments [11; 12], but knowledge of complete first-passage distributions is generally limited. Given the 'trigger' property of exit times, the tails of these distributions are particularly important as they allow the probability of rare anomalous events to be estimated, such as, for example, an exit in a very short time [13]. These estimates are particularly relevant in the case of broad-tailed distributions, where rare events are not so rare. In particular, only a few results are available for first-passage probabilities of jump processes, which are stochastic processes that involve random jumps between different states or positions, occurring at random times and with random magnitudes [14]. Jump processes are so important in modelling the dynamics of stochastic processes in many fields that even their one-dimensional formulation is of great relevance [15; 16; 17]. In the one-dimensional case, exit time probabilities refer to the process leaving a particular state or interval within a specified time. In stochastic processes with two alternative outcomes, the exit side can also be of importance [10], for example to quantify observables such as transmission or backscattering probabilities [18]. A well-known example of jump processes is that of random walks (RWs), in discrete and continuous time [19; 20], for which recent results have been obtained [21; 22; 23]. An interesting question is to determine the exit probabilities from the side of a domain opposite to the starting point.
A rare event in this case corresponds to a fast walker exiting in a very short time, that is, in a time for which the dynamical scaling length of the process is still much shorter than the size of the domain. To exit in a short time, the trajectory of the walker should correspond to a long jump, allowing it to travel far from the starting point. When fat-tailed distributions for jumps are present in the stochastic process, these very fast events may not be exponentially suppressed and may occur on timescales orders of magnitude shorter than expected, proving crucial in predicting anomalous events. Recently, we have investigated the role played in rare events by the so-called big jump principle [24]. The principle explains extreme events in a wide class of systems with heavy tailed distributions not in terms of an accumulation of many small subevents but as an effect of only the biggest event, the big jump [25]. The principle has been successfully applied to characterize the tail of the probability distribution, at distances much larger than the scaling length, for a wide class of jump processes for RWs involving Lévy and sub-exponential statistics for space and time [26], even in disordered settings such as the Lévy-Lorentz gas [27; 28]. This is a system of scatterers placed randomly in one dimension, with a broadly distributed spacing. A walker moves ballistically and is randomly transmitted or reflected when it reaches a scatterer. The Lévy-Lorentz gas is a relevant model in the study of transport in disordered media [29; 30], and there is currently no known first-passage probability estimate for this system. In this letter, we consider three well-known models of one-dimensional jump processes: the discrete time RW, the continuous time RW, and the Lévy-Lorentz gas. Taking advantage of the big jump approach, we estimate the exit probability from one side of a domain. In particular, for a walker starting at the origin, we analytically determine the form of the probability distribution to reach for the first time a distance \(X\) without being absorbed at the origin, in the asymptotic limit where \(X\) is much larger than the scaling length of the process (i.e. in the limit of very fast atypical events). We show that in this regime the three exit time distributions are not exponentially suppressed, even when the scaling length of the process is diffusive. This implies that, in these processes where jumps follow a subexponential statistics, fast rare events are actually much less rare than predicted by the usual estimates of large deviations. We test our results by comparing them with extensive numerical simulations, with very good agreement. _The discrete time random walk._ In discrete time RWs [20], at each discrete time step the walker moves with probability \(1/2\) to the right or to the left, with the step length \(r\) drawn from the probability density function (PDF) \(p(r)\). A finite second moment of \(p(r)\) implies standard diffusion while, if the second moment diverges, the walker performs a symmetric Lévy flight. We consider a PDF \(p(r)\) with a power law decay at large \(r\) and a cut off at short \(r\): \[p(r)=\begin{cases}\frac{\alpha r_{0}^{\alpha}}{r^{1+\alpha}}&\text{if }r>r_{0}\\ 0&\text{if }r<r_{0}\end{cases} \tag{1}\] In this case, standard diffusion or a Lévy process are observed for \(\alpha>2\) and \(0<\alpha<2\), respectively. Our results apply to any distribution that features the same behaviour at large \(r\).
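Since all three models below rely on drawing from this kind of power-law tail, here is a minimal sampling sketch (my own illustration in Python; the function name is hypothetical) using inverse-transform sampling of Eq. (1).

```python
import numpy as np

def sample_steps(alpha, r0, size, rng):
    """Draw step lengths from Eq. (1): p(r) = alpha * r0**alpha / r**(1+alpha), r > r0.
    The CDF is F(r) = 1 - (r0/r)**alpha, so inverting 1 - F = u gives r = r0 * u**(-1/alpha)."""
    return r0 * rng.uniform(size=size) ** (-1.0 / alpha)

rng = np.random.default_rng(0)
steps = sample_steps(alpha=1.2, r0=1.0, size=100_000, rng=rng)
```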
We denote by \(x(n)\) the position of the walker after \(n\) steps (\(x(0)=0\)) and we consider the exit time probability \(P_{X}(n)\), that is, the probability that at step \(n\) the walker reaches for the first time a distance larger than \(X\) without being absorbed at the origin (\(x(n)>X\) and \(0<x(k)<X\) for \(0<k<n\)). Numerical results in Figure 1 show \(P_{X}(n)\) as a function of the rescaled exit time \(n/X^{2}\). For \(\alpha=2.5\) (i.e. RWs with finite variance of the step length) the typical exit time is driven by the growth of the scaling length \(\ell(n)\) of standard diffusion: \(X\sim\ell(n)\sim n^{1/2}\). The scaling of \(P_{X}(n)\) and the analytic results displayed by the dashed line have been obtained in [23] using a continuous limit. However, very fast events, which occur on much shorter timescales (i.e. for \(X\gg\ell(n)\sim n^{1/2}\)), are also observed in the figure with non-negligible probability. These are the rare events we wish to describe.

Figure 1: Exit probability \(P_{X}(n)\) for a Lévy flight. \(P_{X}(n)\) is multiplied by \(X^{3}\) and plotted as a function of the rescaled exit time \(n/X^{2}\). For \(\alpha>2\), the typical dynamics is diffusive and, at a large number of steps \(n\), converges slowly to the theoretical prediction in [23], with no free parameters. Very fast events at short times are however significantly present.

If \(X\) is much larger than the typical length scale \(\ell(n)\), we expect that \(P_{X}(n)\) can be evaluated using the single big jump principle [24; 25; 26; 31; 32]. The principle states that for sub-exponential \(p(r)\), a distance \(X\gg\ell(n)\) in a time \(n\) is reached by a single event involving a large jump of order \(X\), while smaller distances of order \(\ell(n)\) are reached in multiple steps. For the process we are considering here, the scaling lengths are known, and read \(\ell(n)\sim n^{1/2}\) for diffusive processes and \(\ell(n)\sim n^{1/\alpha}\) for the Lévy flight. However, we stress again that the big jump principle is expected to be valid for any sub-exponential step length PDF \(p(r)\) [26]. In this case, the RW exits at time \(n\) at a distance \(X\gg\ell(n)\) if it is not absorbed at the origin in the first \(n-1\) steps, and if exactly at step \(n\) the walker performs a jump in the positive direction, of length larger than \(X\). In the spirit of the big jump approach, the distance covered in the previous \(n-1\) steps can be neglected, as it is of the order of \(\ell(n)\). The asymptotic behavior of the survival probability \(\pi(n)\) after \(n-1\) steps on a semi-infinite line is analytically known by the Sparre Andersen theorem [33; 34; 35], i.e. \(\pi(n-1)\sim(\pi n)^{-1/2}\), while the probability to perform a jump larger than \(X\) is directly obtained from the single step length distribution. Therefore the asymptotic behavior of the exit time probability for \(n\gg 1\) and \(X\gg\ell(n)\) is given by: \[P_{X}(n)\sim\frac{1}{2}\pi(n-1)\int_{X}^{\infty}dr\,p(r)\sim\frac{1}{2(\pi n)^{1/2}}\cdot\frac{r_{0}^{\alpha}}{X^{\alpha}} \tag{2}\] where the factor \(1/2\) takes into account the fact that the last long jump is covered in the positive direction with probability \(1/2\). The results in Eq. (2) are compared with numerical simulations in Figure 2, showing an excellent agreement in the asymptotic regime at large \(X\), both for Lévy flights (\(\alpha<2\)) and for normal diffusion (\(\alpha>2\)).
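The big jump estimate in Eq. (2) can be checked directly against a brute-force simulation of the walk with an absorbing origin. The following sketch is my own construction (parameter choices are arbitrary), not the authors' simulation code.

```python
import numpy as np

def exit_time_histogram(alpha, r0, X, walkers, n_max, rng):
    """Monte Carlo estimate of P_X(n): fraction of walkers that first cross X at step n
    without having crossed the absorbing origin at an earlier step."""
    counts = np.zeros(n_max + 1)
    for _ in range(walkers):
        x = 0.0
        for n in range(1, n_max + 1):
            r = r0 * rng.uniform() ** (-1.0 / alpha)       # step length from Eq. (1)
            x += r if rng.uniform() < 0.5 else -r
            if x <= 0.0:                                    # absorbed at the origin
                break
            if x >= X:                                      # first exit on the right
                counts[n] += 1.0
                break
    return counts / walkers

rng = np.random.default_rng(1)
alpha, r0, X, n_max = 1.2, 1.0, 100.0, 200
P = exit_time_histogram(alpha, r0, X, walkers=100_000, n_max=n_max, rng=rng)
n = np.arange(1, n_max + 1)
eq2 = r0**alpha / (2.0 * np.sqrt(np.pi * n) * X**alpha)   # Eq. (2); valid where X >> n**(1/alpha)
```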
On the other hand, the peaks of the distributions at shorter \(X\) are expected to occur when the system size is comparable to the scaling length of the system \(\ell(n)\), according to the analytical predictions in [23], as shown in Figure 1 for \(\alpha>2\). _The continuous time random walk._ The continuous time RW [20; 36] is a process in which a walker performs random steps of duration \(t\) at constant velocity \(v\). In one dimension, each step is covered with equal probability in the positive or the negative direction, and the duration of the step follows the PDF \(p(t)\), with \[p(t)=\begin{cases}\frac{\alpha}{t^{1+\alpha}}&\text{if }t>1\\ 0&\text{if }t<1\end{cases} \tag{3}\] The scaling length of the stochastic process is \(\ell(t)\sim t^{\gamma}\), with \(\gamma=1\) for \(\alpha<1\), \(\gamma=1/\alpha\) for \(1<\alpha<2\) and \(\gamma=1/2\) for \(\alpha>2\) [36]. Moreover, the single step contains another scaling length that grows linearly with time, due to the constant velocity motion. Therefore, for \(\alpha<1\), in a single step the walker cannot reach a distance much larger than \(\ell(t)\) and the single big jump approach cannot be applied. On the other hand, for \(\alpha>1\), as well as for other sub-exponential PDFs (e.g. Weibull), the principle is valid and can be used efficiently [24; 26]. Analogously to the discrete case, \(P_{X}(T)dT\) represents the probability to reach for the first time a distance \(X\) from the origin in a time between \(T\) and \(T+dT\), without being absorbed at the starting point \(x=0\). For \(X\gg\ell(T)\), according to the big jump principle, this process is realized by a single stochastic event, a big jump carrying the walker to a large distance. In particular, the last step should be taken at time \(T_{w}=T-X/v\) and the duration of this last step should be larger than \(X/v\), so that the walker reaches the distance \(X\) exactly at time \(T\). Once again, the motion up to time \(T_{w}\) can be neglected. Moreover, the survival probability can again be evaluated using the Sparre Andersen theorem. In particular, the number of steps performed by the walker in a time \(T_{w}\) is \(T_{w}/\langle t\rangle\), where \(\langle t\rangle=\int dt^{\prime}t^{\prime}p(t^{\prime})\) is the average duration of the steps, so that the survival probability in a time \(T_{w}\) is \(\pi(T_{w})\sim(T_{w}/\langle t\rangle)^{-1/2}\). Summing up, we obtain: \[P_{X}(T) \sim \frac{1}{2\langle t\rangle}\pi((T-X/v)/\langle t\rangle)\int_{X/v}^{\infty}dt\,p(t) \tag{4}\] \[\sim \frac{1}{(vT)^{\alpha+1/2}}\frac{\langle t\rangle^{-1/2}t_{0}^{\alpha}}{2(\pi(1-y))^{1/2}y^{\alpha}}.\] In the first expression the prefactor \(1/2\) takes into account the probability that the big jump is covered in the positive direction, while \(1/\langle t\rangle\) is the jump rate of the process [24; 26] (in the discrete time process, the rate is naturally fixed to one). Notice that in Eq. (4), \(y=X/(vT)\) represents the natural scaling of the distance, defined by the ballistic motion performed in the big jump. Notice also that \(vT\) is the maximum distance that can be covered in a time \(T\); this means that \(y<1\). In Figure 3 we show that \(P_{X}(T)\) plotted against \(X/(vT)\) indeed converges to Eq. (4) at large times, so that the big jump estimate is correct.
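For the continuous time case, a trajectory-level simulation must handle crossings that occur mid-step, because the motion within a step is ballistic. A minimal sketch (again my own, with \(v\) and the cutoff `t_cap` chosen arbitrarily) is:

```python
import numpy as np

def ctrw_first_exit(alpha, v, X, rng, t_cap=1e6):
    """Simulate one continuous time RW with step durations t ~ p(t) of Eq. (3) (t0 = 1)
    and speed v; return the first time the walker reaches X, or None if it is absorbed
    at the origin (or survives past t_cap without exiting)."""
    x, T = 0.0, 0.0
    while T < t_cap:
        t = rng.uniform() ** (-1.0 / alpha)      # Pareto(alpha) step duration
        if rng.uniform() < 0.5:                   # ballistic step to the right
            if x + v * t >= X:
                return T + (X - x) / v            # crosses X during this step
            x += v * t
        else:                                     # ballistic step to the left
            if x - v * t <= 0.0:
                return None                       # hits the absorbing origin mid-step
            x -= v * t
        T += t
    return None

rng = np.random.default_rng(2)
samples = [ctrw_first_exit(1.5, 1.0, 50.0, rng) for _ in range(50_000)]
exit_times = np.array([T for T in samples if T is not None])  # histogram vs Eq. (4)
```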
_The Lévy-Lorentz gas._ The Lévy-Lorentz gas introduced in [27; 28] is a system of scatterers randomly placed in one dimension. The distances between scatterers are drawn again from a power law PDF \(p(l)\): \[p(l)=\begin{cases}\frac{\alpha l_{0}^{\alpha}}{l^{1+\alpha}}&\text{if }l>l_{0}\\ 0&\text{if }l<l_{0}\end{cases} \tag{5}\]

Figure 2: Exit probability \(P_{X}(n)\) for a Lévy flight. \(P_{X}(n)\) is multiplied by \(n^{1/2}\) and plotted as a function of the distance \(X\). Upper panel: \(\alpha=1.2\); lower panel: \(\alpha=2.5\). The dashed line shows the asymptotic analytical prediction in Eq. (2).

A RW is naturally defined on the Lévy-Lorentz gas: the walker moves at constant speed \(v\) and it is reflected with probability \(\epsilon\) (\(0<\epsilon<1\)) when it hits a scatterer [37]. We will focus on the case where the walker starts at \(t=0\) at a scattering site placed at \(x=0\) [27; 28]. For this model, the PDF of the walker position \(x\) as a function of time has been recently studied [37; 38; 39; 40; 28], and its asymptotic behaviour at large distances has been estimated using the big jump principle [24; 28; 41]. However, no results are yet known for the exit time probabilities. In the Lévy-Lorentz gas, if a walker crosses a scatterer that has never been crossed before, the choice of the distance of the subsequent scattering point is a random process independent of the RW history. Hence in this event we can successfully apply the big jump principle. The scaling length \(\ell(T)\) is the typical distance covered by the walker up to time \(T\). We have \(\ell(T)\sim T^{\frac{1}{1+\tilde{\alpha}}}\), where \(\tilde{\alpha}=\alpha\) if \(\alpha<1\) and \(\tilde{\alpha}=1\) if \(\alpha>1\) [28]. According to the principle, the walker exits from the boundary at a distance \(X\gg\ell(T)\) only if at a time \(T_{w}=T-X/v\) the walker crosses a scattering point separated from the subsequent scatterer by a distance larger than \(X\). The definition of \(T_{w}\) takes into account the finite velocity. In order to evaluate \(P_{X}(T)\) we need the probability of not being absorbed at the origin before \(T_{w}\) and the rate at which, at time \(T_{w}\), new scattering sites are crossed by the walker. The rate has been calculated in [24] and it decays with \(T_{w}\) as \(T_{w}^{-\frac{1}{1+\tilde{\alpha}}}\). The survival probability can be estimated using again the Sparre Andersen theorem, by evaluating the number of scattering events \(N(T_{w})\) occurring up to time \(T_{w}\). According to [42], the number of different scattering sites visited up to time \(T_{w}\) is \(S(T_{w})\sim\ell(T_{w})^{\tilde{\alpha}}\sim T_{w}^{\frac{\tilde{\alpha}}{1+\tilde{\alpha}}}\). Since the discrete process occurring on the scatterers, without taking into account the distances among the sites, is a simple diffusion on a lattice, we obtain \(N(T_{w})\sim S(T_{w})^{2}\sim T_{w}^{\frac{2\tilde{\alpha}}{1+\tilde{\alpha}}}\) (for \(\alpha>1\) the average distance between sites is finite and we recover the expected result that the number of scattering events is proportional to the elapsed time). Now we can evaluate the exit time probability: \[P_{X}(T) \sim \frac{1}{N(T_{w})^{1/2}}\cdot\frac{1}{T_{w}^{\frac{1}{1+\tilde{\alpha}}}}\cdot\int_{X}^{\infty}dl\,p(l) \tag{6}\] \[\sim \frac{1}{(T-X/v)^{\frac{\tilde{\alpha}}{1+\tilde{\alpha}}}}\cdot\frac{1}{(T-X/v)^{\frac{1}{1+\tilde{\alpha}}}}\cdot\int_{X}^{\infty}dl\,p(l)\] \[\sim \frac{1}{T^{\alpha+1}}\frac{1}{(1-y)y^{\alpha}}.\]

Figure 3: Exit probability \(P_{X}(T)\) for a continuous time RW. \(P_{X}(T)\) is multiplied by \((vT)^{1/2+\alpha}\) and plotted as a function of the rescaled distance \(X/(vT)\), \(\alpha=1.5\). The continuous line shows the asymptotic analytical prediction in Eq. (4).
Figure 4: Exit probability \(P_{X}(T)\) for the Lévy-Lorentz gas. \(P_{X}(T)\) is multiplied by \((vT)^{1+\alpha}\) and plotted as a function of the rescaled distance \(X/(vT)\). Upper panel: \(\alpha=0.6\); lower panel: \(\alpha=1.5\). The continuous line shows the asymptotic analytical prediction in Eq. (6). In this case a multiplicative constant has been optimized in order to reproduce the numerical data.

In the first line of Eq. (6), the term \(N(T_{w})^{-1/2}\) represents the decay of the survival probability at \(T_{w}\) according to the Sparre Andersen result; the term \(T_{w}^{-\frac{1}{1+\tilde{\alpha}}}\) is the jumping rate at time \(T_{w}\), and the integral is the probability to have a jump larger than \(X\). We also introduced the scaled distance \(y=X/(vT)\), according to the ballistic motion characterizing the single jump evolution. Eq. (6) contains a multiplicative constant that in this case cannot be simply evaluated, since the rate and the number of jumps have been calculated only by means of scaling arguments. Eq. (6) is compared with numerical simulations in Figure 4, showing an excellent agreement. _Conclusions._ A correct estimation of rare events in first-passage distributions is crucial for predicting the occurrence of reactions, the attainment of a target, or the extinction of a population or a financial asset. In this letter, using the big jump principle, we derived the analytical form of the tail of the exit distributions on the opposite side of an interval from the starting point for three fundamental jump processes: the discrete and the continuous time RWs and the Lévy-Lorentz gas. In a nutshell, our result shows that, in the presence of power law jump distributions, anomalous exit events can occur on time scales that are orders of magnitude smaller than typical exit times, even when the process is Gaussian. The result is heuristic and based on the application of the big jump principle, yet comparison with detailed numerical simulations suggests that the estimate is essentially correct, thus opening the way to a rigorous derivation. We obtained our result for a power law distribution, but we expect such estimates to hold for a large class of sub-exponential jump processes, providing a boost to the study of rare events in first-passage probabilities. ###### Acknowledgements. We warmly thank Olivier Benichou and Jeremie Klinger for interesting discussions.
2309.11548
Studying [CII] Emission in Low-mass Galaxies at z ~ 7
We report on a $\rm{[CII]}_{158\mu\rm{m}}$ search using the Atacama Large Millimeter/submillimeter Array (ALMA) on three lensed, confirmed Ly$\alpha$ emitting galaxies at $z \sim 7$. Our targets are ultra-violet (UV) faint systems with stellar masses on the order of $M_{*} \sim 10^{9} M_{\odot}$. We detect a single [CII] line emission ($4\sigma$) from the brightest ($L \sim 2.4 \times 10^{10}L_{\odot}$) galaxy in our sample, MACS0454-1251. We determine a systemic redshift ($z_{\rm{[CII]}} = 6.3151 \pm 0.0005$) for MACS0454-1251 and measure a Ly$\alpha$ velocity offset of $\Delta v \approx 300 \pm 70 \rm{km\,s}^{-1}$. For the remaining two galaxies we detect no [CII] but provide $3 \sigma$ upper limits on their [CII] line luminosities, which we use to investigate the $L_{\textrm{[CII]}} - \rm{SFR}$ relation. Overall our single [CII] detection shows agreement with the relation for dwarf and local starburst galaxies. Our [CII] deficient galaxies could potentially be exhibiting low metallicities ($Z<Z_{\odot}$). Another possible explanation for weaker [CII] emission could be strong feedback from star formation disrupting molecular clouds. We do not detect continuum emission in any of the sources, placing upper limits on their dust masses. Assuming a single dust temperature of $T_{d}=35 \rm{K}$, dust masses ($M_{\rm{dust}}$) range from $< 4.8 \times 10^{7} M_{\odot} $ to $2.3 \times 10^{8} M_{\odot}$. Collectively, our results suggest faint reionization era sources could be metal poor and/or could have strong feedback suppressing [CII] emission.
Kelsey Glazer, Marusa Bradac, Ryan L. Sanders, Seiji Fujimoto, Patricia Bolan, Andrea Ferrara, Victoria Strait, Tucker Jones, Brian C. Lemaux, Livia Vallini, Russell Ryan
2023-09-20T18:00:02Z
http://arxiv.org/abs/2309.11548v2
# Studying [CII] Emission in Low-mass Galaxies at \(z\sim 7\) ###### Abstract We report on a [CII]\({}_{158\mu m}\) search using the Atacama Large Millimeter/submillimeter Array (ALMA) on three lensed, confirmed Ly\(\alpha\) emitting galaxies at \(z\sim 7\). Our targets are ultra-violet (UV) faint systems with stellar masses on the order of \(M_{*}\sim 10^{9}M_{\sun}\). We detect a single [CII] line emission (\(4\sigma\)) from the brightest (\(L\sim 2.4\times 10^{10}L_{\sun}\)) galaxy in our sample, MACS0454-1251. We determine a systemic redshift (\(z_{\rm[CII]}=6.3151\pm 0.0005\)) for MACS0454-1251 and measure a Ly\(\alpha\) velocity offset of \(\Delta v\approx 300\pm 70\)km s\({}^{-1}\). For the remaining two galaxies we detect no [CII] but provide \(3\sigma\) upper limits on their [CII] line luminosities, which we use to investigate the \(L_{\rm[CII]}-\) SFR relation. Overall our single [CII] detection shows agreement with the relation for dwarf and local starburst galaxies. Our [CII] deficient galaxies could potentially be exhibiting low metallicities (\(Z<Z_{\sun}\)). Another possible explanation for weaker [CII] emission could be strong feedback from star formation disrupting molecular clouds. We do not detect continuum emission in any of the sources, placing upper limits on their dust masses. Assuming a single dust temperature of \(T_{d}=35\)K, dust masses (\(M_{\rm dust}\)) range from \(<4.8\times 10^{7}M_{\sun}\) to \(2.3\times 10^{8}M_{\sun}\). Collectively, our results suggest faint reionization era sources could be metal poor and/or could have strong feedback suppressing [CII] emission. keywords: galaxies:high-redshift - gravitational lensing:strong ## 1 Introduction In the past decade, Atacama Large Millimeter/submillimeter Array (ALMA) observations of metal fine structure lines such as the [CII]\({}_{158\mu m}\) line have opened up studies into the epoch of reionization (EoR; \(z>6\)) by providing an unobscured view of galaxies. With the advent of the James Webb Space Telescope (JWST), which can also provide a similar view of non-resonant optical lines (e.g., H\(\alpha\)), ALMA observations still provide a complementary view for far infrared (FIR) emission lines. FIR lines hold immense value for EoR studies because they are not affected by dust extinction, in contrast to rest-optical lines accessible with JWST. There have been numerous [CII] detections in UV-bright, high-\(z\) (\(z>6\)) galaxies (e.g., Willott et al., 2015; Carniani et al., 2017; Laporte et al., 2017; Smit et al., 2018; Matthee et al., 2019; Harikane et al., 2019; Bakx et al., 2020). However, there are considerably fewer recorded [CII] detections for characteristically faint (\(L<L^{*}\), where \(L^{*}\) is the characteristic luminosity) EoR galaxies (e.g., Schaerer et al., 2015; Watson et al., 2015; Knudsen et al., 2016; Bradac et al., 2017; Fujimoto et al., 2021; Laporte et al., 2021). Faint (\(L<L^{*}\)) galaxies are much more numerous than bright galaxies (e.g. Bouwens et al., 2022; Bolan et al., 2022) and, as such, can be a key driver of reionization. However, this connection has been heavily debated (e.g. Finkelstein et al., 2019; Naidu et al., 2020; Robertson, 2022; Endsley et al., 2023). In order to resolve this question, we need to study the physical properties of these fainter primordial systems; this step is key to understanding their role in cosmic reionization. Here we use [CII] observations to study \(z\sim 7\) galaxies.
The [CII] line is of particular interest because it predominantly traces the dense neutral gas in photodissociation regions (PDRs, Wolfire et al., 2022) associated with molecular clouds, and the diffuse neutral gas (Wolfire et al., 2003; Hollenbach & Tielens, 1999). It is the most luminous line in the FIR band (\(\sim 0.1-1\)% of the FIR luminosity, Baier-Soto et al., 2022) and one of the strongest emission lines of star-forming galaxies at FIR/radio wavelengths (Carilli & Walter, 2013; Stacey et al., 1991, 2010). Additionally, the [CII] line can be used to trace the systemic redshift, and therefore velocity, of the host galaxy because it is optically thin and not affected by dust extinction. When paired with detected Ly\(\alpha\) emission, we can estimate interstellar medium (ISM) properties of the galaxy under some assumptions. For instance, we can estimate the amount of neutral gas assuming the offset stems from Ly\(\alpha\) resonant scattering in ISM gas (Mason et al., 2018).1 Ly\(\alpha\) photons travelling through neutral ISM scatter more frequently and emerge with a larger \(\Delta v\) compared to photons traveling through less neutral ISM (Yang et al., 2016, 2017; Guaita et al., 2017). Previous observations of \(z>5\) galaxies have typically measured \(\Delta v\leq 500\,\rm km\,s^{-1}\) (Cassata et al., 2020; Carniani et al., 2018; Pentericci et al., 2016; Bunker et al., 2023; Prieto-Lyon et al., 2023), with the largest \(\Delta v\approx 1000\,\rm km\,s^{-1}\) recorded by Baier-Soto et al. (2022). Cassata et al. (2020) find that galaxies (\(4.4<z<6\)) with smaller \(\Delta v\) have larger Ly\(\alpha\) rest-frame equivalent widths and \(f_{\rm esc}({\rm Ly}\alpha)\). However, for intrinsically faint systems (\(M_{\rm UV}\gtrsim-20.5\)) at \(z>6\), a limited sample size restricts our ability to make robust conclusions (see Endsley et al., 2022 and references therein). Footnote 1: We note that outflows might also generate comparable velocity offsets. In addition to velocity offsets, we can also measure [CII] line luminosities (\(L_{\rm[CII]}\)) and evaluate the empirical [CII] line luminosity to star formation rate (\({\rm L_{[CII]}-SFR}\)) relation. This tight relation is well established for a wide range of local galaxy types (e.g., Boselli et al., 2002; De Looze et al., 2011, 2014; Sargsyan et al., 2012; Pineda et al., 2014; Herrera-Camus et al., 2015), but initial [CII] searches in \(z>5\) galaxies revealed lower than expected or missing [CII] emission from "normal" (\({\rm SFR}<100{\rm M}_{\odot}{\rm yr}^{-1}\)) star-forming galaxies (see Carniani et al., 2020 and references therein). The so-called "[CII] deficit" problem brought into question whether [CII] could remain a good tracer of SFR in higher-\(z\) systems, as well as the applicability of the low-\(z\) \({\rm L_{[CII]}-SFR}\) relation. Observational studies (e.g., Maiolino et al., 2015; Capak et al., 2015; Knudsen et al., 2016; Matthee et al., 2017; Carniani et al., 2018; Matthee et al., 2019; Schaerer et al., 2020; Romano et al., 2022) have focused on understanding the \({\rm L_{[CII]}-SFR}\) relation and whether there is indeed missing [CII] emission at \(z\sim 7\). Theoretical studies have also extensively modeled the under-luminous or undetected [CII] emission (e.g., Vallini et al., 2013, 2015; Olsen et al., 2017; Katz et al., 2017, 2019; Pallottini et al., 2017, 2022; Lupi & Bovino, 2020).
The analytical approach by Ferrara et al. (2019) concluded that under-luminous [CII] systems can result from a combination of factors: (a) large upward deviations from the Kennicutt-Schmidt relation (corresponding to bursty phases); (b) low metallicity; (c) low gas density, at least for the most extreme sources. These results have been supported by cosmological numerical simulations (Vallini et al., 2015; Pallottini et al., 2019). As mentioned previously, the majority of EoR studies investigating [CII] have targeted UV-bright galaxies, which traditionally have higher SFRs. This, however, gives a biased view, especially since fainter galaxies with lower SFRs outnumber the larger and brighter ones. In this paper we report on a [CII] study carried out with ALMA that targeted a rare set of lensed, sub-\(L_{*}\) galaxies with SFRs\(<\)20\(M_{\odot}\)yr\({}^{-1}\), confirmed to be in the EoR via spectroscopically detected Ly\(\alpha\) emission. The paper is organized as follows. Section 2 explains the various observations used to compile our sample and the specific ALMA data reduction performed on our data. In Section 3 we discuss the measurements and derived properties from the ALMA data, as well as the spectral energy distribution (SED) fitting performed for galaxy property estimates. In Section 4 we analyze and discuss our results alongside literature findings. In Section 5 we summarize our conclusions. Throughout this paper we assume a \(\Lambda\)CDM concordance cosmology with \(\Omega_{m}=0.27\), \(\Omega_{\Lambda}=0.73\) and Hubble constant \(H_{0}=73\)km s\({}^{-1}\) Mpc\({}^{-1}\). Coordinates are given for the epoch J2000.0, magnitudes are in the AB system, and we use the Chabrier (2003) initial mass function (IMF). ## 2 Observations and Data Reduction The lensing galaxy clusters of our sample have been extensively studied in past works. All three clusters have imaging from the _Hubble Space Telescope (HST)_. The Cluster Lensing and Supernova Survey with Hubble (CLASH; Postman et al., 2012) program observed MACS2129 and RXJ1347. The two CLASH clusters were also spectroscopically observed with the Grism Lens-Amplified Survey from Space (GLASS; Treu et al., 2015) program. MACS0454 was imaged with the _HST_-GO-11591 (PI: Kneib)/GO-9836 (PI: Ellis)/GO-9722 (PI: Ebeling) programs. Additionally, the three clusters were imaged with the _Spitzer_ UltRa Faint SUrvey Program (SURFS UP; Bradac et al., 2014), which observed a total of ten lensing clusters. Spectroscopic follow-ups using the Deep Imaging Multi-Object Spectrograph (DEIMOS, Faber et al., 2003) and Multi-Object Spectrometer for InfraRed Exploration (MOSFIRE, McLean et al., 2010) instruments on _Keck_ confirmed Ly\(\alpha\) emission in the galaxies that make up our sample for this work (Huang et al., 2016; Hoag et al., 2019). ALMA observations (Proposal ID: 2019.1.00003.S) were carried out between March of 2020 and April 2021 using ALMA Band 6 with 43 12-m antennae in array configurations C43-4 and C43-5. The precipitable water vapor (PWV) ranged from 0.5mm to 2.1mm. The spectral setup consisted of one spectral window centered on the expected observed frequency of [CII]\({}_{158\mu\rm m}\) estimated from the Ly\(\alpha\) redshift. The remaining spectral windows were used for continuum measurements. The on-source time varied from 20 to 35 min. per target. The data were reduced and calibrated with the Common Astronomy Software Applications (CASA) package, version 6.1.1.15, following standard procedures.
We reimaged the data with the CASA task TCLEAN, setting ROBUST=2.0, UVTAPER=0".6, and adopting Briggs weighting. The angular resolution for each object is equal to the beam size, which differed slightly from object to object. All three targets were unresolved in their final data products and were spectrally binned into 25-km s\({}^{-1}\) velocity channels. To ensure positional accuracy, we astrometrically calibrated _HST_ reference images to GAIA DR2 when comparing [CII] to UV emission. Continuum maps were made with all spectral windows.

Figure 1: MACS0454-1251 velocity-integrated [CII] line intensity overlaid on a _HST_/WFC3 F160W image. The contours are shown in red and are spaced linearly by intervals of \(1\sigma\), ranging from \(2-4\sigma\). The beam is given in the bottom right, with a 3”\(\times\)3” zoom-in shown in the bottom left. Blue cross hairs mark the target in the center.

## 3 Measurements and Derived Properties We observed three highly magnified (\(\mu\sim 4-20\)) Lyman-alpha emitting (LAE) galaxies with ALMA at \(z_{\rm Ly\alpha}=6.323\pm 0.003\) (MACS0454-1251), \(6.846\pm 0.001\) (MACS2129-1412), and \(7.161\pm 0.001\) (RXJ1347-018). We report a single [CII] detection (\(4\sigma\), Fig 1) from MACS0454-1251. We extracted the spectrum over an extended elliptical aperture (semi-axes \(\sim 1.\arcsec 0\times 0.\arcsec 5\), pitch angle \(\sim 35^{\circ}\)) to account for the slightly oblong shape in the [CII] contours of the moment0 map (see Fig 1). We fit a Gaussian (see Fig 2) to estimate the peak line flux \(S_{\rm line}=3.7\pm 0.9\) mJy. We calculate the integrated line flux \(S_{\rm line}\Delta v\) as \(0.64\pm 0.15\) Jy km s\({}^{-1}\), with a FWHM of \(163\pm 46\) km s\({}^{-1}\). We determine the systemic redshift \(z_{\rm[CII]}=6.3151\pm 0.0005\), which is in close agreement with the redshift found via Ly\(\alpha\) (\(z_{\rm spec}=6.323\)). The calculated [CII] line luminosity (eq. 18, Casey et al. 2014) is \(L_{\rm[CII]}=1.5^{+0.5}_{-0.4}\times 10^{8}L_{\odot}\), and we find a velocity offset (the difference between Ly\(\alpha\) and [CII]: \(v_{\rm Ly\alpha}-v_{\rm[CII]}\)) of \(\Delta v=320\pm 70\) km s\({}^{-1}\). The remaining two galaxies, MACS2129-1412 and RXJ1347-018, yielded non-detections for [CII] emission (see Fig 3). Because their intrinsic [CII] line widths are unknown, we estimate their \(L_{\rm[CII]}\) (\(3\sigma\)) upper limits by integrating over the same width of channels (9 channels, 225 km s\({}^{-1}\)) as was done for MACS0454-1251. In the absence of known systemic redshifts, we use Ly\(\alpha\) redshifts to predict the expected [CII] emitting frequency of each object.2 The RMS uncertainty was calculated using the \(1\sigma\) intensity value extracted from a \(0.\arcsec 5\) aperture centered on the HST imaging centroid of the target. We list the calculated \(L_{\rm[CII]}\) (\(3\sigma\)) upper limits in Table 2. Footnote 2: This assumes there is no velocity offset between Ly\(\alpha\) and [CII] emission for these two galaxies.
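For readers who want to reproduce the quoted line luminosity, a short sketch of the standard flux-to-luminosity conversion (eq. 18 of Casey et al. 2014) is given below. It uses astropy with the cosmology adopted in Section 1; the magnification correction argument is my own addition for illustration, not the authors' pipeline.

```python
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=73.0, Om0=0.27)   # cosmology adopted in this paper

def line_luminosity(S_dv_Jy_kms, z, nu_rest_GHz=1900.537, mu=1.0):
    """Line luminosity in L_sun from the velocity-integrated flux (Jy km/s):
    L = 1.04e-3 * S dv * nu_obs[GHz] * D_L[Mpc]**2,
    optionally divided by a lensing magnification mu."""
    nu_obs = nu_rest_GHz / (1.0 + z)                      # observed [CII] frequency
    D_L = cosmo.luminosity_distance(z).to("Mpc").value
    return 1.04e-3 * S_dv_Jy_kms * nu_obs * D_L**2 / mu

# MACS0454-1251: S dv = 0.64 Jy km/s, z = 6.3151, mu_best = 4.4
print(line_luminosity(0.64, 6.3151, mu=4.4))   # ~1.5e8 L_sun, matching the value above
```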
We constructed a continuum map for MACS0454-1251 from all four spectral windows (SPWs), masking out the channels expected to contain [CII] emission. As seen in Fig 2, the [CII] line of MACS0454-1251 appears to be extended over roughly nine channels, which corresponds to a width of 225 km s\({}^{-1}\). All nine channels were masked when producing the final continuum map, from which we derived an upper limit on \(L_{\rm FIR}\). We assume a wavelength range of \(8-1000\mu\)m for the \(L_{\rm FIR}\) calculation. For the remaining non-detection galaxies, we constructed continuum maps in the same manner, excluding nine channels centered on the central frequency.

Figure 2: Extracted spectrum showing the detected [CII] emission. Flux (left axis) shown as a function of frequency (bottom axis) and velocity (top axis). The red dashed line represents the best-fit Gaussian and the grey dashed line represents the Lyman-\(\alpha\) redshift with boxed \(1\sigma\) uncertainty.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline Target ID & \(z_{\rm Ly\alpha}\) & \(\mu_{\rm best}^{a}\) & Ref\({}^{b}\) & \(M_{\rm stellar}\times f_{\mu}^{c,d}\) & SFR\({}_{\rm SED}\times f_{\mu}^{c,d}\) & SFR\({}_{\rm UV}\times f_{\mu}^{c,e}\) & Age\({}^{d}\) & \(M_{\rm UV}-2.5\log(f_{\mu})^{c}\) & \(L^{f}\) \\ – & – & – & – & \(10^{9}M_{\odot}\) & \(M_{\odot}\,yr^{-1}\) & \(M_{\odot}\,yr^{-1}\) & Myr & mag & \(L^{*}\) \\ \hline MACS0454-1251 & \(6.323\pm 0.002\)\({}^{g}\) & \(4.4\pm 0.4\) & [1] & \(5.52^{+1.6}_{-1.5}\) & \(13.5^{+4.1}_{-3.9}\) & \(8.82\pm 0.15\) & \(55.0^{+58}_{-29}\) & \(-20.9\pm 0.1\) & \(0.7\pm 0.09\) \\ MACS2129-1412 & \(6.846\pm 0.001\) & \(11^{+0.1}_{-1.3}\) & [2] & \(1.73^{+4.2}_{-0.61}\) & \(6.93^{+5.9}_{-2.4}\) & \(1.06\pm 0.04\) & \(216^{+330}_{-80}\) & \(-18.6\pm 0.2\) & \(0.1\pm 0.02\) \\ RXJ1347-018 & \(7.161\pm 0.001\) & \(21.4^{+1.7}_{-1.3}\) & [3] & \(2.49^{+3.0}_{-2.0}\) & \(7.48^{+11.8}_{-5.7}\) & \(0.292\pm 0.01\) & \(249^{+135}_{-147}\) & \(-17.2\pm 0.2\) & \(0.03\pm 0.01\) \\ \hline \end{tabular} \({}^{a}\) To use a different magnification factor \(\mu\), simply use \(f_{\mu}\equiv\mu/\mu_{\rm best}\), where \(\mu_{\rm best}\) is the magnification factor we adopt for each object. \({}^{b}\) References for photometry, lens modeling, and Ly\(\alpha\): [1] Huang et al. (2016), [2] Huang et al. (2016); Hoag et al. (2019), [3] Hoag et al. (2019). \({}^{c}\) Values have been corrected for lensing using \(\mu_{\rm best}\). \({}^{d}\) Values derived from SED fitting described in Section 3. \({}^{e}\) SFR\({}_{\rm UV}\) calculated from lens-corrected \(M_{UV}\) magnitudes with Eq. 1 from Kennicutt (1998a), converted to a Chabrier IMF via \(0.63\times{\rm SFR(Salpeter)_{UV}}={\rm SFR(Chabrier)_{UV}}\). \({}^{f}\) We adopt the characteristic magnitude (\(M^{*}\)), where \(M^{*}=-21.13\pm 0.08\) from Bouwens et al. (2022). \({}^{g}\) Measured on two different DEIMOS slitmasks at \(z=6.32403\) and \(z=6.32228\), with \(\langle z\rangle=6.3232\pm 0.0018\); the uncertainty reflects the difference between the mask measurements. \end{table} Table 1: Properties of observed galaxies.

To estimate \(L_{\rm FIR}\), we used a grey-body spectral energy distribution model (Casey 2012), assuming a spectral index of \(\beta=1.5\). In the absence of multi-band observations, we assumed a dust temperature (\(T_{\rm d}=35\)K) across all three objects and estimated the \(3\sigma\) \(L_{\rm FIR}\), which is recorded in Table 2. Alternative literature values for the assumed \(T_{d}\) range from \(38-90\)K (Fujimoto et al. 2021; Fujimoto et al. 2022; Fudamoto et al. 2023). A recent study by Sommovigo et al. (2022) showed \(z\sim 7\) galaxies having an average dust temperature of \(\langle T_{d}\rangle=47\pm 6\)K. Because SFR\({}_{\rm FIR}\) and \(M_{\rm dust}\) are extremely sensitive to the assumed \(T_{d}\), assuming a lower \(T_{\rm d}\) than the average found by Sommovigo et al. (2022) provides a conservative upper limit on \(M_{\rm dust}\). Assuming conservatively \(T_{d}=35\)K for all three galaxies, we estimated SFR\({}_{\rm FIR}\) using eq. 3 from Kennicutt (1998b), converted from a Salpeter IMF to a Chabrier IMF, and calculated \(3\sigma\) dust mass (\(M_{\rm dust}\)) estimates for each object. We assumed a dust mass absorption coefficient of \(\kappa=\kappa_{0}(\nu/\nu_{0})^{\beta}\), where \(\kappa_{0}=0.232{\rm m^{2}kg^{-1}}\) at \(\nu_{0}=250\mu\)m (Draine, 2003; Bianchi, 2013). The resulting SFR\({}_{\rm FIR}\) and \(M_{\rm dust}\) limits are listed in Table 2.
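The dust mass limits follow from the standard optically thin grey-body assumption stated above. The snippet below is my own illustration of that calculation, not the authors' code; in particular, the observed frequency of \(\sim 260\) GHz is an assumed placeholder for the Band 6 tuning, which in practice differs per target.

```python
import numpy as np
from astropy import units as u
from astropy import constants as const
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=73.0, Om0=0.27)

def planck_nu(nu, T):
    """Planck function B_nu(T)."""
    x = (const.h * nu / (const.k_B * T)).to("").value
    return (2.0 * const.h * nu**3 / const.c**2) / np.expm1(x)

def dust_mass(S_nu, nu_obs, z, T_d=35.0 * u.K, beta=1.5, mu=1.0):
    """Optically thin dust mass, M = S_nu D_L^2 / ((1+z) kappa_rest B_nu_rest(T_d)),
    with kappa = 0.232 m^2/kg at 250 micron, scaled as nu**beta (Draine 2003; Bianchi 2013)."""
    nu_rest = nu_obs * (1.0 + z)
    nu0 = (const.c / (250.0 * u.um)).to("GHz")
    kappa = 0.232 * u.m**2 / u.kg * (nu_rest / nu0).to("").value ** beta
    D_L = cosmo.luminosity_distance(z)
    M = S_nu * D_L**2 / ((1.0 + z) * kappa * planck_nu(nu_rest, T_d))
    return (M / mu).to("Msun")

# 3-sigma continuum limit for MACS0454-1251 (1380 uJy, mu = 4.4) -> roughly 2e8 Msun
print(dust_mass(1380.0 * u.uJy, 260.0 * u.GHz, z=6.3151, mu=4.4))
```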
To estimate the age and stellar mass (\(M_{\rm stellar}\)) of the sample, we use Bayesian Analysis of Galaxies for Physical Inference and Parameter EStimation (BAGPIPES, Carnall et al., 2018). BAGPIPES fits physical parameters using the MultiNest sampling algorithm (Feroz & Hobson, 2008; Feroz et al., 2009). We use the default set of stellar population templates from Bruzual and Charlot (Bruzual & Charlot, 2003, BC03). The SED fitting is done assuming the Kroupa (2001) IMF, which we convert to Chabrier (2003), a metallicity of \(0.02{\rm Z_{\odot}}\), the Calzetti dust law (Calzetti et al., 2000), and a constant star formation history. We allow dust extinction to range from \(A_{\nu}=0-3\) magnitudes. The \(M_{\rm stellar}\) values reported in Table 1 have been converted to a Chabrier IMF via the conversion factor 0.923. We take the general prescription for the fitting from Strait et al. (2020). See Bolan et al. in prep. for more information on the SED fitting. ## 4 Results ### Velocity and Spatial Offset Observational studies have used [CII] emission as a tracer of systemic velocities (e.g. Pentericci et al., 2016; Matthee et al., 2019). Using the peak [CII] emission from the extracted spectrum of MACS0454-1251, we find \(\Delta{\rm V_{Ly\alpha-[CII]}}\approx 320\pm 70\,\rm km\,s^{-1}\). This is shown in Fig 2, where the velocity axis is centered on the [CII] emission and the grey dashed line represents Ly\(\alpha\). The magnitude of the offset falls within literature ranges quoted for \(z=2\sim 3\) (Erb et al., 2014) and high-\(z\) galaxies (e.g., Cassata et al., 2020; Endsley et al., 2022). The spectrum shows a clear redshift of Ly\(\alpha\) emission. Given the resonant nature of Ly\(\alpha\), revealing the direct cause of the offset is both a complex and active area of research. Redshifted Ly\(\alpha\) could indicate scattered emission from outflowing (or expanding) gas from the galaxy. Outflows may originate from strong star formation feedback, which could reduce the covering fraction of neutral gas in the ISM and boost Ly\(\alpha\) escape (Jones et al., 2013; Trainor et al., 2015; Leethochawalit et al., 2016). Ly\(\alpha\) could also be redshifted by neutral hydrogen inside a galaxy's ISM, where the emerging \(\Delta v\) would be a proxy for the column density of neutral hydrogen (Yang et al., 2016, 2017; Guaita et al., 2017). Additionally, the model put forth by Mason et al. (2018) used \(\Delta v\) as a way to measure the intergalactic medium (IGM) neutral fraction. A more neutral IGM will scatter Ly\(\alpha\) photons more, causing Ly\(\alpha\) to emerge at a higher \(\Delta v\) relative to systemic. It is not uncommon for \(z>5\) galaxies to have [CII] emission tracing the systemic redshift of the galaxy but spatially offset from the UV component (Maiolino et al., 2015; Willott et al., 2015; Capak et al., 2015; Carniani et al., 2017, 2018; Matthee et al., 2019; Fujimoto et al., 2022).
In fact, spatial offsets between Ly\(\alpha\) and UV have also been found at \(z>5\) (Hoag et al., 2019; Lemaux et al., 2021). Carniani et al. (2018) show that most of the [CII] spatial offsets are indeed physically motivated, but further observations are needed to understand the mechanisms causing the offsets. Recently, Fujimoto et al. (2022) were able to determine the necessity of past outflow activity in a galaxy at \(z\sim 8.5\) based on dual observations with ALMA and JWST. We roughly estimate the [CII]-UV spatial offset in MACS0454-1251 using the brightest pixels found in the [CII] moment0 map and the centroid of the _HST_ rest-UV image. The lensed spatial offset is \(\sim 0.\arcsec 5\), which is greater than the ALMA astrometric accuracy of \(\sim 0.\arcsec 08\).3 Taking into account the magnification, we estimate the lens-corrected offset to be \(\sim 1.4\) kpc.4 The offset could be physically associated with intrinsic ISM properties (e.g. a different distribution in the ionized vs neutral gas phase). A possible explanation for the spatial offset could also be the ejection of material by galactic outflows or galaxy mergers (e.g., Maiolino et al., 2015; Vallini et al., 2015; Pallottini et al., 2017; Katz et al., 2017; Gallerani et al., 2018; Kohandel et al., 2019). Footnote 3: Calculated from the ALMA technical handbook [https://almascience.nrao.edu/documents-and-tools/cycle18/alm-technical-handbook](https://almascience.nrao.edu/documents-and-tools/cycle18/alm-technical-handbook). Footnote 4: For isotropic lensing distortion, we spatially scale with \(1/\sqrt{\mu}\). ### \(L_{\rm[CII]}\) - SFR Relation In Fig 4 we show the \(L_{\rm[CII]}\) - SFR\({}_{\rm UV}\) relation for our galaxies (stars) alongside available \(z>6\) observations from the literature. The reported SFR\({}_{\rm UV}\) values for our objects can be found in Table 1, along with SFR\({}_{\rm SED}\) values that were derived through the SED fitting described in Section 3.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline Target ID & \(z_{\rm[CII]}\) & \(S_{\rm line}\) & FWHM & \(S_{\rm line}\Delta v\) & \(L_{\rm[CII]}\times f_{\mu}\) & Continuum\({}^{a}\) & \(L_{\rm FIR}\times f_{\mu}^{a,b}\) & SFR\({}_{\rm FIR}^{a,b,c}\) & \(M_{\rm dust}^{a,c}\) \\ – & – & (mJy) & (km s\({}^{-1}\)) & (Jy km s\({}^{-1}\)) & (\(\times 10^{8}L_{\odot}\)) & (\(\mu\)Jy) & (\(\times 10^{10}L_{\odot}\)) & \(M_{\odot}\)yr\({}^{-1}\) & \(\times 10^{8}M_{\odot}\) \\ \hline MACS0454-1251 & \(6.3151\pm 0.0005\) & \(3.7^{+0.9}_{-0.9}\) & \(163\pm 46\) & \(0.64\pm 0.15\) & \(1.5^{+0.5}_{-0.4}\) & \(<1380\) & \(<32\) & \(<35\) & \(<2.3\) \\ MACS2129-1412 & – & – & – & – & \(<0.043\) & \(<18.3\) & \(<0.19\) & \(<0.21\) & \(<0.93\) \\ RXJ1347-018 & – & – & – & – & \(<0.062\) & \(<47.0\) & \(<1.1\) & \(<1.2\) & \(<0.48\) \\ \hline \end{tabular} \({}^{a}\) All limits are \(3\sigma\). \({}^{b}\) Calculated from Kennicutt (1998b) Eq. 3, converted to a Chabrier IMF via \(0.63\times{\rm SFR(Salpeter)_{FIR}}={\rm SFR(Chabrier)_{FIR}}\). \({}^{c}\) Assumed \(T_{d}=35\)K. \end{table} Table 2: Results from ALMA.

SFR\({}_{\rm UV}\) is calculated assuming no dust attenuation and therefore should be considered a lower limit, as it does not include obscured star formation. We also recognize that in the event of a more top-heavy IMF (i.e., if the median mass-to-light ratio of the stars being born decreases), SFR\({}_{\rm UV}\) would be an overestimate, assuming constant dust properties.
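As a cross-check on the Table 1 footnote, a few lines suffice to go from a lens-corrected \(M_{\rm UV}\) to SFR\({}_{\rm UV}\). This is my own sketch of the standard Kennicutt (1998a) conversion with the 0.63 Chabrier factor stated in the table footnotes, not the authors' code.

```python
import numpy as np

def sfr_uv(M_UV):
    """SFR from the absolute UV magnitude via Kennicutt (1998a) Eq. 1,
    SFR[Msun/yr] = 1.4e-28 * L_nu[erg/s/Hz], times 0.63 for a Chabrier IMF.
    L_nu follows from the AB magnitude at the 10 pc reference distance."""
    d_cm = 10.0 * 3.0857e18                                   # 10 pc in cm
    L_nu = 4.0 * np.pi * d_cm**2 * 10.0 ** (-0.4 * (M_UV + 48.6))
    return 0.63 * 1.4e-28 * L_nu

# lens-corrected M_UV = -20.9 for MACS0454-1251 -> ~8.8 Msun/yr, consistent with Table 1
print(sfr_uv(-20.9))
```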
Our single [CII] detection, MACS0454-1251, falls within the \(1\sigma\) scatter of the De Looze et al. (2014) relation for dwarf (\(\sim 0.2\) dex below) and local starburst galaxies (\(\sim 0.2\) dex above). We note, however, that MACS0454-1251 is the most UV luminous galaxy (\(L=0.74L^{*}\)) in our sample. The detection is also consistent with the C1 model put forth by the Vallini et al. (2015) simulations, which corresponds to a galaxy with constant solar metallicity. If we compare this to our non-detections, the upper limit set by RXJ1347-018 is also consistent with the De Looze et al. (2014) relation, having a scatter of \(\sim 0.3\) dex from the average dwarf and local starburst relations. The upper limit set by MACS2129-1412 is not consistent with the De Looze et al. (2014) relations, falling \(\sim 0.5\) dex from starburst galaxies and \(\sim 0.6\) dex from the dwarf galaxies. The only displayed Vallini et al. (2015) model it could possibly support is the C005 model, which represents a galaxy with a constant metallicity of \(Z=0.05\)Z\({}_{\odot}\). The possibility of sub-solar metallicities is not surprising for these two low-mass galaxies and aligns with recent work (Curti et al., 2023). Probing a wide stellar mass range (\(M_{\rm stellar}/M_{\odot}\simeq 10^{6.5}-10^{8.5}\)), Curti et al. (2023) show that their \(z>6\) galaxies exhibit sub-solar metallicities (\(Z<Z_{\odot}\)), with a scaling relation drawn near \(\sim 0.1\)Z\({}_{\odot}\). While we do not have the observations to determine the true metallicities of our sample, a low metallicity could explain the absence of [CII] in our non-detections and would not contradict their respective upper limits. Another possible reason for the lack of detected [CII] emission could be negative feedback disrupting molecular clouds (MCs, Vallini et al., 2015). For example, [CII] emission predominantly originates from PDRs. Negative feedback disrupting MCs would reduce the PDR layer at the edge of the MC, where most of the [CII] emission comes from. Additionally, we know stellar feedback is efficient in galaxy centers, places where there is very active star formation. In fact, studies done at lower-\(z\) have supported a connection between feedback efficiency and SFR surface density (e.g., Hayward and Hopkins, 2016; Heckman et al., 2011). Since high-\(z\) galaxies are more compact than local analogs, this would also support negative feedback suppressing [CII] emission at the systemic redshift of the galaxy. ## 5 Conclusions We present new ALMA observations investigating [CII] 158\(\mu\)m line emission in three lensed galaxies at \(z>6\). We find a \(4\sigma\) [CII] detection in our most luminous galaxy (\(L\sim 0.7L^{*}\)), MACS0454-1251, and calculate the systemic redshift to be \(z_{\rm[CII]}=6.3151\pm 0.0005\). We measure \(\Delta v=320\pm 70\) km s\({}^{-1}\) and calculate \(L_{\rm[CII]}=1.5^{+0.5}_{-0.4}\times 10^{8}L_{\odot}\). For the remaining two galaxies we do not detect [CII] emission, but provide \(3\sigma\) upper limits for their \(L_{\rm[CII]}\). Our main findings are: (i) MACS0454-1251 exhibits both a velocity and a spatial offset between ALMA [CII] and rest-frame HST UV emission. The spatial offset is larger than the ALMA astrometric uncertainty, which could mean the offset is physically motivated by intrinsic ISM properties. (ii) Feedback is a very important process in both the emission of [CII] and Ly\(\alpha\). On one hand, strong feedback can suppress [CII] emission by disrupting MCs (Vallini et al., 2015).
On the other hand, strong feedback from star formation can drive outflows, which redshift the Ly\(\alpha\) line. A possibility that ties together the spatial offset and the [CII] deficit is that feedback destroys the emitting MCs at the center, allowing only displaced MCs to contribute to the [CII] emission. (iii) Our single [CII] detection in MACS0454-1251 falls within the 1\(\sigma\) scatter of the De Looze et al. (2014) \(L_{\rm[CII]}-\) SFR relations. As such, it would support the applicability of the \(L_{\rm[CII]}-\) SFR relation put forth by De Looze et al. (2014) for \(z>6\) galaxies. That being said, the upper limits set by RXJ1347-018 and MACS2129-1412 argue for a less conclusive picture, especially with the \(3\sigma\) upper limit of MACS2129-1412 still falling well below the De Looze et al. (2014) relations. In general, more observations of low-mass, UV-faint galaxies are needed in order to break this degeneracy. (iv) Low metallicity is a possible explanation for fainter galaxies falling below the De Looze et al. (2014) \(L_{\rm[CII]}-\) SFR relation (Vallini et al., 2013; Ferrara et al., 2019). Based on recent work (Curti et al., 2023), we would expect lower mass (\(M_{\rm stellar}\lesssim 10^{9.5}\)) galaxies at \(z>6\) to exhibit sub-solar metallicities. A possible scenario for [CII]-deficient systems with very low metallicity is powerful feedback, involving the destruction of star-forming sites during bursty evolutionary phases in relatively chemically unevolved systems (Vallini et al., 2015; Ferrara et al., 2019). (v) While we lack the observations needed to determine the true metallicity of MACS0454-1251, the Vallini et al. (2015) models suggest that MACS0454-1251 could have roughly solar metallicity. The work shown in this paper aims to shed light on the properties of faint EoR galaxies, a population that has historically been significantly undersampled. Figure 3: MACS2129-1412 (left) and RXJ1347-018 (right) velocity-integrated [CII] line intensity overlaid on the _HST_/WFC3 F160W images. The contours are shown in red and linearly spaced at 1\(\sigma\) intervals from 1 – 4\(\sigma\). The beam is given in the bottom right with a 3\({}^{\prime\prime}\)\(\times\)3\({}^{\prime\prime}\) zoom-in shown in the bottom left. The galaxies are outlined by the blue crosshairs on the _HST_ images. ## Acknowledgements We acknowledge support from the program HST-GO-16667, provided through a grant from the STScI under NASA contract NAS5-26555. MB also acknowledges support from the ERC Advanced Grant FIRSTLIGHT and the Slovenian national research agency ARRS through grants N1-0238 and P1-0188. RLS acknowledges support provided by NASA through the NASA Hubble Fellowship grant #HST-HF2-51469.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. AF acknowledges support from the ERC Advanced Grant INTERSTELLAR H2020/740120. ## Data Availability The original data used in this work can be found and downloaded from the ALMA archive [https://almascience.nrao.edu](https://almascience.nrao.edu) using the science project ID: 2019.1.00003.S. The reduced data generated in this research will be shared upon reasonable request to the corresponding author.
2309.09307
A Global Method for Relaxation for Multi-levelled Structured Deformations
We prove an integral representation result for a class of variational functionals appearing in the framework of hierarchical systems of structured deformations via a global method for relaxation. Some applications to specific relaxation problems are also provided.
Ana Cristina Barroso, José Matias, Elvira Zappale
2023-09-17T15:44:53Z
http://arxiv.org/abs/2309.09307v3
# A global method for relaxation for multi-levelled structured deformations ###### Abstract. We prove an integral representation result for a class of variational functionals appearing in the framework of hierarchical systems of structured deformations via a global method for relaxation. Some applications to specific relaxation problems are also provided. Key words and phrases:global method for relaxation, hierarchical system of structured deformations, multiscale geometry, elasticity, disarrangements, integral representation 2020 Mathematics Subject Classification: 49J45, 46E30, 74A60, 74M99, 74B20 ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Notation * 2.2 Approximation theorem for hierarchical (first-order) structured deformations * 3 The global method * 4 Applications ## 1. Introduction Our purpose in this paper is to establish a global method for relaxation applicable in the context of multi-levelled structured deformations. The aim is to provide an integral representation for a class of functionals, defined in the set of \((L+1)\)-level (first-order) structured deformations (see Definition 2.2), via the study of a related local Dirichlet-type problem, and to identify the corresponding relaxed energy densities (see [6, 7]). First-order structured deformations were introduced by Del Piero and Owen [11] in order to provide a mathematical framework that captures the effects at the macroscopic level of smooth deformations and of non-smooth deformations (disarrangements) at one sub-macroscopic level. In the classical theory of mechanics, the deformation of the body is characterised exclusively by the macroscopic deformation field \(g\) and its gradient \(\nabla g\). In the framework of structured deformations, an additional geometrical field \(G\) is introduced with the intent to capture the contribution at the macroscopic scale of smooth sub-macroscopic changes, while the difference \(\nabla g-G\) captures the contribution at the macroscopic scale of non-smooth sub-macroscopic changes, such as slips and separations (referred to as _disarrangements_[12]). The field \(G\) is called the deformation without disarrangements, and, heuristically, the disarrangement tensor \(M\coloneqq\nabla g-G\) is an indication of how "non-classical" a structured deformation is. This broad theory is rich enough to address mechanical phenomena such as elasticity, plasticity and the behaviour of crystals with defects. The variational formulation for first-order structured deformations in the \(SBV\) setting was first addressed by Choksi and Fonseca [10] where a (first-order) structured deformation is defined to be a pair \((g,G)\in SBV(\Omega;\mathbb{R}^{d})\times L^{p}(\Omega;\mathbb{R}^{d\times N}), \ p\geqslant 1.\) Departing from a functional which associates to any deformation \(u\) of the body an energy featuring a bulk contribution, which measures the deformation (gradient) throughout the whole body, and an interfacial contribution, accounting for the energy needed to fracture the body, an integral representation for the "most economical way" to approach a given structured deformation was derived. The theory of first-order structured deformations was broadened by Owen and Paroni [18] to second-order structured deformations, which also account for other geometrical changes, such as curvature, at one sub-macroscopic level. The variational formulation in the \(SBV^{2}\) setting for second-order structured deformations was carried out by Barroso, Matias, Morandotti and Owen [3]. 
This last formulation allows for jumps on both the approximating fields, as well as on their gradients. In the recent contribution [13], Deseri and Owen took a further step, extending the theory of [11] to hierarchical systems of structured deformations in order to include the effects of disarrangements at more than one sub-macroscopic level. In fact, many natural and man-made materials, for example, muscles, cartilage, bones, plants and some biomedical materials, exhibit different levels of disarrangements whose mechanical behaviour can be addressed within the generalized field theory proposed in [13]. In the setting of hierarchical systems of structured deformations, a first-order structured deformation \((g,G)\) corresponds to a two-level hierarchical system (the macroscopic level, \(g\), plus one microscopic level, \(G\)), while, for \(L>1\), an \(L+1\)-level hierarchical system consists of \((g,G_{1},\dots,G_{L})\), where each \(G_{i},i=1,\dots L\), provides the effects at the macro-level of the corresponding sub-macroscopic level \(i\). A first approach to the mathematical formulation of hierarchical systems of structured deformations was considered in [4], where the authors provide an approximation theorem for an \((L+1)\)-level structured deformation \((g,G_{1},\dots G_{L})\) and propose the assignment of an energy to this multi-levelled structured deformation by means of a well-posed recursive relaxation process, consisting of iterated applications of the integral representation result for first-order structured deformations in [10]. In [16], and the references therein, the interested reader can find a comprehensive survey about the theory of structured deformations, as well as applications. The global method for relaxation, introduced by Bouchitte, Fonseca and Mascarenhas [6] in the \(BV\) setting, and later addressed in the \(SBV^{p}\) setting by Bouchitte, Fonseca, Leoni and Mascarenhas [7], provides a general method for the identification of the integral representation of a class of functionals defined on \(BV(\Omega;\mathbb{R}^{d})\times\mathcal{O}(\Omega)\), where \(\mathcal{O}(\Omega)\) represents the family of open sets contained in \(\Omega\). Since its inception, this global method for relaxation has known numerous applications, in particular, very recently it was used in the context of variable exponent spaces, see [19], spaces of bounded deformations, see [9] and [5], and second-order structured deformations in the space \(BH(\Omega;\mathbb{R}^{d})\times\mathcal{O}(\Omega)\) by Fonseca, Hagerty and Paroni [14]. Note that in the \(BH\) case, only jumps on gradients are allowed. In this work we obtain a global method for relaxation appropriate for the study of functionals defined on the space of \(L+1\)-levelled hierarchical systems of first-order structured deformations of any order \(L\geqslant 1\). As an application of our general theorem, we are able to extend the integral representation for first-order structured deformations proved in [10] to the case of Caratheodory bulk energy densities. In particular, we recover the formulae in [10], and later generalized in [3], [17] to allow for an explicit dependence on the space variable. This extension holds when the bulk energy density satisfies a growth condition of order \(p>1\), with the restriction that the deformation \(g\) belongs to \(L^{\infty}(\Omega;\mathbb{R}^{d})\); in the case \(p=1\) this extra requirement on \(g\) is no longer needed. 
We further show that, in the case \(p>1\), the cell formula for the interfacial relaxed energy density in [10] and [17], still holds when the bulk energy density is Caratheodory (see Theorems 4.1 and 4.2). Another natural application of our abstract result is to homogenization problems, such as the one considered in [1]. In this case, we depart from an energy of the form \[E_{\varepsilon}(u)\coloneqq\int_{\Omega}W(x/\varepsilon,\nabla u(x))\, \mathrm{d}x+\int_{\Omega\cap S_{u}}\psi(x/\varepsilon,[u](x),\nu_{u}(x))\, \mathrm{d}\mathcal{H}^{N-1}(x),\] where \(\varepsilon\to 0^{+}\). Besides a periodicity condition, the densities \(W\) and \(\psi\) satisfy other hypotheses (cf. Theorem 4.4) which ensure that the relaxed functional (4.11) can be placed in the setting of our main theorem. As a further application, we recover the integral representation for one of the relaxed energies in [3]. In this paper, an integral representation for the relaxation of an energy arising in the context of second-order structured deformations is obtained. A simple argument allows for the decomposition of the relaxed functional \(I\) as the sum of two terms, \(I=I_{1}+I_{2}\). Although \(I_{1}\) does not fit the scope of our global method result, due to the topology which is considered in its definition, we will show that Theorem 3.2 applies to \(I_{2}\) and we recover the same expressions for the relaxed energy densities that were deduced in [3]. A relaxation result for hierarchical systems of first-order structured deformations with an arbitrary number of levels \(L\), and the comparison with this abstract formulation, will be the subject of a forthcoming work. The paper is organized as follows: in Section 2 we set the notation which will be used in the sequel and recall the notion of a multi-levelled structured deformation, as well as the approximation theorem for these deformations. In Section 3 we state and prove our main theorem (see Theorem 3.2), whereas Section 4 is devoted to the aforementioned applications of our abstract result. ## 2. 
Preliminaries ### Notation We will use the following notation: * \(\mathbb{N}\) denotes the set of natural numbers without the zero element; * \(\Omega\subset\mathbb{R}^{N}\) is a bounded, connected open set with Lipschitz boundary; * \(\mathbb{S}^{N-1}\) denotes the unit sphere in \(\mathbb{R}^{N}\); * \(Q\coloneqq(-\frac{1}{2},\frac{1}{2})^{N}\) denotes the open unit cube of \(\mathbb{R}^{N}\) centred at the origin; for any \(\nu\in\mathbb{S}^{N-1}\), \(Q_{\nu}\) denotes any open unit cube in \(\mathbb{R}^{N}\) with two faces orthogonal to \(\nu\); * for any \(x\in\mathbb{R}^{N}\) and \(\delta>0\), \(Q(x,\delta)\coloneqq x+\delta Q\) denotes the open cube in \(\mathbb{R}^{N}\) centred at \(x\) with side length \(\delta\); likewise \(Q_{\nu}(x,\delta)\coloneqq x+\delta Q_{\nu}\); * \(\mathcal{O}(\Omega)\) is the family of all open subsets of \(\Omega\), whereas \(\mathcal{O}_{\infty}(\Omega)\) is the family of all open subsets of \(\Omega\) with Lipschitz boundary; * \(\mathcal{L}^{N}\) and \(\mathcal{H}^{N-1}\) denote the \(N\)-dimensional Lebesgue measure and the \((N-1)\)-dimensional Hausdorff measure in \(\mathbb{R}^{N}\), respectively; the symbol \(\mathrm{d}x\) will also be used to denote integration with respect to \(\mathcal{L}^{N}\); * \(\mathcal{M}(\Omega;\mathbb{R}^{d\times N})\) is the set of finite matrix-valued Radon measures on \(\Omega\); \(\mathcal{M}^{+}(\Omega)\) is the set of non-negative finite Radon measures on \(\Omega\); given \(\mu\in\mathcal{M}(\Omega;\mathbb{R}^{d\times N})\), the measure \(|\mu|\in\mathcal{M}^{+}(\Omega)\) denotes the total variation of \(\mu\); * \(SBV(\Omega;\mathbb{R}^{d})\) is the set of vector-valued _special functions of bounded variation_ defined on \(\Omega\). Given \(u\in SBV(\Omega;\mathbb{R}^{d})\), its distributional gradient \(Du\) admits the decomposition \(Du=D^{a}u+D^{s}u=\nabla u\,\mathcal{L}^{N}+[u]\otimes\nu_{u}\,\mathcal{H}^{N-1}\lfloor S_{u}\), where \(S_{u}\) is the jump set of \(u\), \([u]\) denotes the jump of \(u\) on \(S_{u}\), and \(\nu_{u}\) is the unit normal vector to \(S_{u}\); finally, \(\otimes\) denotes the dyadic product; * \(L^{p}(\Omega;\mathbb{R}^{d\times N})\) is the set of matrix-valued \(p\)-integrable functions; * for \(p\geqslant 1\), \(SD^{p}(\Omega)\coloneqq SBV(\Omega;\mathbb{R}^{d})\times L^{p}(\Omega;\mathbb{R}^{d\times N})\) is the space of structured deformations \((g,G)\) (notice that \(SD^{1}(\Omega)\) is the space \(SD(\Omega)\) introduced in [10]); the norm in \(SD(\Omega)\) is defined by \(\left\|(g,G)\right\|_{SD(\Omega)}\coloneqq\left\|g\right\|_{BV(\Omega;\mathbb{R}^{d})}+\left\|G\right\|_{L^{1}(\Omega;\mathbb{R}^{d\times N})}\); * \(C\) represents a generic positive constant that may change from line to line. A detailed exposition on \(BV\) functions is presented in [2]. The following result, whose proof may be found in [15], will be used in the proof of Theorem 3.2. **Lemma 2.1**.: _Let \(\lambda\) be a non-negative Radon measure in \(\mathbb{R}^{N}\). For \(\lambda\)-a.e. 
\(x_{0}\in\mathbb{R}^{N}\), for every \(0<\delta<1\) and for every \(\nu\in\mathbb{S}^{N-1}\), the following holds_ \[\limsup_{\varepsilon\to 0^{+}}\frac{\lambda(Q_{\nu}(x_{0},\delta\varepsilon))}{\lambda(Q_{\nu}(x_{0},\varepsilon))}\geqslant\delta^{N},\] _so that_ \[\lim_{\delta\to 1^{-}}\limsup_{\varepsilon\to 0^{+}}\frac{\lambda(Q_{\nu}(x_{0},\delta\varepsilon))}{\lambda(Q_{\nu}(x_{0},\varepsilon))}=1.\] ### Approximation theorem for hierarchical (first-order) structured deformations **Definition 2.2**.: _For \(L\in\mathbb{N}\), \(p\geqslant 1\) and \(\Omega\subset\mathbb{R}^{N}\) a bounded connected open set, we define_ \[HSD_{L}^{p}(\Omega)\coloneqq SBV(\Omega;\mathbb{R}^{d})\times\underbrace{L^{p}(\Omega;\mathbb{R}^{d\times N})\times\cdots\times L^{p}(\Omega;\mathbb{R}^{d\times N})}_{\text{$L$-times}}\] _the set of \((L+1)\)-level (first-order) structured deformations on \(\Omega\)._ In the case \(L=1\) and \(p=1\), this space was introduced and studied in [10], where it was denoted by \(SD(\Omega)\). In particular, the following approximation result was shown (see [10, Theorem 2.12]). **Theorem 2.3** (Approximation Theorem in \(SD(\Omega)\)).: _For every \((g,G)\in SD(\Omega)\), there exists a sequence \(\{u_{n}\}\) in \(SBV(\Omega;\mathbb{R}^{d})\) which converges to \((g,G)\) in the sense that_ \[u_{n}\to g\quad\text{in }L^{1}(\Omega;\mathbb{R}^{d})\qquad\text{and}\qquad\nabla u_{n}\stackrel{{*}}{{\rightharpoonup}}G\quad\text{in }\mathcal{M}(\Omega;\mathbb{R}^{d\times N}).\] We now present the definition of convergence of a sequence of \(SBV\) functions to an \((L+1)\)-level structured deformation \((g,G_{1},\dots,G_{L})\) belonging to either \(HSD_{L}(\Omega)\) or \(HSD_{L}^{p}(\Omega)\). **Definition 2.4**.: _Let \(L\in\mathbb{N}\), let \(p>1\), let \((g,G_{1},\dots,G_{L})\in HSD_{L}^{p}(\Omega)\), and let \(\mathbb{N}^{L}\ni(n_{1},\dots,n_{L})\mapsto u_{n_{1},\dots,n_{L}}\in SBV(\Omega;\mathbb{R}^{d})\) be a (multi-indexed) sequence. We say that \(\{u_{n_{1},\dots,n_{L}}\}\) converges in the sense of \(HSD_{L}^{p}(\Omega)\) to \((g,G_{1},\dots,G_{L})\) if_ 1. \(\lim_{n_{1}\to+\infty}\cdots\lim_{n_{L}\to+\infty}u_{n_{1},\dots,n_{L}}=g\)_, with each of the iterated limits in the sense of_ \(L^{1}(\Omega;\mathbb{R}^{d})\)_;_ 2. _for all_ \(\ell=1,\dots,L-1\)_,_ \(\lim_{n_{\ell+1}\to+\infty}\cdots\lim_{n_{L}\to+\infty}u_{n_{1},\dots,n_{L}}=:g_{n_{1},\dots,n_{\ell}}\in SBV(\Omega;\mathbb{R}^{d})\) _and_ \[\lim_{n_{1}\to+\infty}\cdots\lim_{n_{\ell}\to+\infty}\nabla g_{n_{1},\dots,n_{\ell}}=G_{\ell},\] _with each of the iterated limits in the sense of weak convergence in_ \(L^{p}(\Omega;\mathbb{R}^{d\times N})\)_;_ 3. 
\(\lim_{n_{1}\to+\infty}\cdots\lim_{n_{L}\to+\infty}\nabla u_{n_{1},\dots,n_{L}} =G_{L}\) _with each of the iterated limits in the sense of weak convergence in_ \(L^{p}(\Omega;\mathbb{R}^{d\times N})\)_;_ _we use the notation \(u_{n_{1},\dots,n_{L}}\stackrel{{ p}}{{\rightharpoonup}}(g,G_{1}, \dots,G_{L})\) to indicate this convergence._ _In the case \(p=1\), if \((g,G_{1},\dots,G_{L})\in HSD_{L}^{1}(\Omega)\) and if the weak \(L^{p}\) convergences above are replaced by weak-* convergences in \(\mathcal{M}(\Omega;\mathbb{R}^{d\times N})\), then we say that \(\{u_{n_{1},\dots,n_{L}}\}\) converges in the sense of \(HSD_{L}^{1}(\Omega)\) to \((g,G_{1},\dots,G_{L})\) and we use the notation \(u_{n_{1},\dots,n_{L}}\stackrel{{*}}{{\rightharpoonup}}(g,G_{1}, \dots,G_{L})\) to indicate this convergence._ The sequential application of the idea behind the Approximation Theorem 2.3 provides the method for constructing a (multi-indexed) sequence \(\{u_{n_{1},\dots,n_{L}}\}\) that approximates an \((L+1)\)-level structured deformation \((g,G_{1},\dots,G_{L})\). We thus obtain the following result, whose proof may be found in [4]. **Theorem 2.5** (Approximation Theorem for \((L+1)\)-level structured deformations).: _Let \((g,G_{1},\dots,G_{L})\) belong to \(HSD_{L}^{p}(\Omega)\). Then there exists a sequence \((n_{1},\dots,n_{L})\mapsto u_{n_{1},\dots,n_{L}}\in SBV(\Omega;\mathbb{R}^{d})\) converging to \((g,G_{1},\dots,G_{L})\) in the sense of Definition 2.4._ ## 3. The global method Motivated by the Approximation Theorem 2.5, let \(p\geqslant 1\) and let \(\mathcal{F}:HSD_{L}^{p}(\Omega)\times\mathcal{O}(\Omega)\to[0,+\infty]\) be a functional satisfying the following hypotheses 1. for every \((g,G_{1},\dots,G_{L})\in HSD_{L}^{p}(\Omega)\), \(\mathcal{F}(g,G_{1},\dots,G_{L};\cdot)\) is the restriction to \(\mathcal{O}(\Omega)\) of a Radon measure; 2. for every \(O\in\mathcal{O}(\Omega)\), if \(p>1\), \(\mathcal{F}(\cdot,\dots,\cdot O)\) is \(HSD_{L}^{p}\)-lower semicontinuous, i.e., if \((g,G_{1},\dots,G_{L})\in HSD_{L}^{p}(\Omega)\) and \(\{(g^{n},G_{1}^{n},\dots,G_{L}^{n})\}\subset HSD_{L}^{p}(\Omega)\) are such that \(g_{n}\to g\) in \(L^{1}(\Omega;\mathbb{R}^{d})\), \(G_{i}^{n}\rightharpoonup G^{i}\) in \(L^{p}(\Omega;\mathbb{R}^{d\times N})\), as \(n\to+\infty\), for every \(i=1,\dots L\), then \[\mathcal{F}(g,G_{1},\dots,G_{L};O)\leqslant\liminf_{n\to+\infty}\mathcal{F}(g^{ n},G_{1}^{n},\dots,G_{L}^{n};O);\] the same holds in the case \(p=1\), replacing the weak convergences \(G_{i}^{n}\rightharpoonup G^{i}\) in \(L^{p}(\Omega;\mathbb{R}^{d\times N})\), as \(n\to+\infty\), for every \(i=1,\dots L\), with weak star convergences in the sense of measures \(\mathcal{M}(\Omega;\mathbb{R}^{d\times N})\); 3. for all \(O\in\mathcal{O}(\Omega)\), \(\mathcal{F}(\cdot,\dots,\cdot;O)\) is local, that is, if \(g=u\), \(G_{1}=U_{1}\),..., \(G_{L}=U_{L}\) a.e. in \(O\), then \(\mathcal{F}(g,G_{1},\dots,G_{L};O)=\mathcal{F}(u,U_{1},\dots,U_{L};O)\); 4. there exists a constant \(C>0\) such that \[\frac{1}{C}\left(\sum_{i=1}^{L}\||G_{i}|^{p}\|_{L^{1}(O;\mathbb{R }^{d\times N})}+|Dg|(O)\right) \leqslant\mathcal{F}(g,G_{1},\dots,G_{L};O)\] \[\leqslant C\left(\mathcal{L}^{N}(O)+\sum_{i=1}^{L}\||G_{i}|^{p}\| _{L^{1}(O;\mathbb{R}^{d\times N})}+|Dg|(O)\right),\] for every \((g,G_{1},\dots,G_{L})\in HSD_{L}^{p}(\Omega)\) and every \(O\in\mathcal{O}(\Omega)\). 
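Before proceeding, a minimal one-dimensional illustration (a sketch of ours, not taken from [4] or [10]) may help to fix ideas about the convergence in Definition 2.4 and about the objects on which \(\mathcal{F}\) acts. Let \(\Omega=(0,1)\) and \(d=N=1\). For \(L=1\), the staircase sequence \(u_{n}(x)=\lfloor nx\rfloor/n\) satisfies \(u_{n}\to g(x)=x\) in \(L^{1}(0,1)\) and \(\nabla u_{n}\equiv 0\), so \(\{u_{n}\}\) converges to the structured deformation \((g,G)=(x,0)\): the whole macroscopic gradient \(\nabla g-G=1\) is generated by the many small jumps (disarrangements). For \(L=2\), the doubly indexed sequence \[u_{n_{1},n_{2}}(x)=\frac{1}{2}\left(\frac{\lfloor n_{1}x\rfloor}{n_{1}}+\frac{\lfloor n_{2}x\rfloor}{n_{2}}\right)\] satisfies \(\lim_{n_{2}\to+\infty}u_{n_{1},n_{2}}=:g_{n_{1}}(x)=\frac{1}{2}\big(\frac{\lfloor n_{1}x\rfloor}{n_{1}}+x\big)\) in \(L^{1}(0,1)\), with \(\nabla g_{n_{1}}\equiv\frac{1}{2}\) and \(\nabla u_{n_{1},n_{2}}\equiv 0\), so that, in the sense of Definition 2.4, \(u_{n_{1},n_{2}}\) converges to the \(3\)-level structured deformation \((g,G_{1},G_{2})=(x,\frac{1}{2},0)\): half of the macroscopic gradient is due to disarrangements at the first sub-macroscopic level (\(\nabla g-G_{1}=\frac{1}{2}\)) and the other half to disarrangements at the second (\(G_{1}-G_{2}=\frac{1}{2}\)).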
**Remark 3.1**.: Due to hypotheses (H1) and (H4), given any \((u,U_{1},\ldots,U_{L})\in HSD^{p}_{L}(\Omega)\) and any open sets \(O_{1}\subset\subset O_{2}\subseteq\Omega\), it follows that \[\mathcal{F}(u,U_{1},\ldots,U_{L};O_{2})\leqslant\mathcal{F}(u,U_{1},\ldots,U_{L};O_{1})+C\left(\mathcal{L}^{N}(O_{2}\setminus O_{1})+\sum_{i=1}^{L}\||U_{i}|^{p}\|_{L^{1}(O_{2}\setminus O_{1};\mathbb{R}^{d\times N})}+|Du|(O_{2}\setminus O_{1})\right).\] Indeed, for \(\varepsilon>0\) small enough, let \(O_{\varepsilon}:=\{x\in O_{1}:\operatorname{dist}(x,\partial O_{1})>\varepsilon\}\) and notice that \(O_{2}\) is covered by the union of the two open sets \(O_{1}\) and \(O_{2}\setminus\overline{O_{\varepsilon}}\). Thus, by (H1) and (H4) we have \[\mathcal{F}(u,U_{1},\ldots,U_{L};O_{2})\leqslant\mathcal{F}(u,U_{1},\ldots,U_{L};O_{1})+\mathcal{F}(u,U_{1},\ldots,U_{L};O_{2}\setminus\overline{O_{\varepsilon}})\leqslant\mathcal{F}(u,U_{1},\ldots,U_{L};O_{1})+C\left(\mathcal{L}^{N}(O_{2}\setminus\overline{O_{\varepsilon}})+\sum_{i=1}^{L}\||U_{i}|^{p}\|_{L^{1}(O_{2}\setminus\overline{O_{\varepsilon}};\mathbb{R}^{d\times N})}+|Du|(O_{2}\setminus\overline{O_{\varepsilon}})\right).\] To conclude the result it suffices to let \(\varepsilon\to 0^{+}\). 

Given \((g,G_{1},\ldots,G_{L})\in HSD^{p}_{L}(\Omega)\) and \(O\in\mathcal{O}_{\infty}(\Omega)\), we introduce the space of test functions \[\mathcal{C}_{HSD^{p}_{L}}(g,G_{1},\ldots,G_{L};O):=\left\{(u,U_{1},\ldots,U_{L})\in HSD^{p}_{L}(\Omega):u=g\text{ in a neighbourhood of }\partial O,\ \int_{O}(G_{i}-U_{i})\,\mathrm{d}x=0,\ i=1,\ldots,L\right\}, \tag{3.1}\] and we let \(m:HSD^{p}_{L}(\Omega)\times\mathcal{O}_{\infty}(\Omega)\to[0,+\infty]\) be the functional defined by \[m(g,G_{1},\ldots,G_{L};O):=\inf\left\{\mathcal{F}(u,U_{1},\ldots,U_{L};O):(u,U_{1},\ldots,U_{L})\in\mathcal{C}_{HSD^{p}_{L}}(g,G_{1},\ldots,G_{L};O)\right\}. \tag{3.2}\] Following the ideas of the global method of relaxation introduced in [6], our aim in this section is to prove the theorem below. 

**Theorem 3.2**.: _Let \(p\geqslant 1\) and let \(\mathcal{F}:HSD^{p}_{L}(\Omega)\times\mathcal{O}(\Omega)\to[0,+\infty]\) be a functional satisfying (H1)-(H4). Then, for every \((u,U_{1},\ldots,U_{L})\in HSD^{p}_{L}(\Omega)\) and every \(O\in\mathcal{O}(\Omega)\),_ \[\mathcal{F}(u,U_{1},\ldots,U_{L};O)=\int_{O}f(x,u(x),\nabla u(x),U_{1}(x),\ldots,U_{L}(x))\,\mathrm{d}x+\int_{O\cap S_{u}}\Phi(x,u^{+}(x),u^{-}(x),\nu_{u}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x),\] _where, for all \(x_{0}\in\Omega\), \(u_{0},\lambda,\theta\in\mathbb{R}^{d}\), \(\xi,B_{1},\ldots,B_{L}\in\mathbb{R}^{d\times N}\) and \(\nu\in\mathbb{S}^{N-1}\),_ \[f(x_{0},u_{0},\xi,B_{1},\ldots,B_{L}):=\limsup_{\varepsilon\to 0^{+}}\frac{m(u_{0}+\xi(\cdot-x_{0}),B_{1},\ldots,B_{L};Q(x_{0},\varepsilon))}{\varepsilon^{N}}, \tag{3.3}\] \[\Phi(x_{0},\lambda,\theta,\nu):=\limsup_{\varepsilon\to 0^{+}}\frac{m(v_{\lambda,\theta,\nu}(\cdot-x_{0}),0,\ldots,0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}, \tag{3.4}\] _with \(v_{\lambda,\theta,\nu}(y):=\lambda\) if \(y\cdot\nu>0\) and \(v_{\lambda,\theta,\nu}(y):=\theta\) if \(y\cdot\nu\leqslant 0\)._ 

**Remark 3.3**.: If, in addition, \(\mathcal{F}\) is invariant under translations in its first argument, that is, \(\mathcal{F}(u+a,U_{1},\ldots,U_{L};O)=\mathcal{F}(u,U_{1},\ldots,U_{L};O)\) for every \(a\in\mathbb{R}^{d}\), then \(f\) in (3.3) does not depend on \(u_{0}\) and \(\Phi\) in (3.4) depends on \((\lambda,\theta)\) only through the difference \(\lambda-\theta\); with a slight abuse of notation we will then write \(f(x_{0},\xi,B_{1},\ldots,B_{L})\) and \(\Phi(x_{0},\lambda-\theta,\nu)\). 

The proof of Theorem 3.2 is based on several auxiliary results and follows the reasoning presented in [6, Theorem 3.7] and [14, Theorem 4.6]. For this reason we do not provide the arguments in full detail but point out only the main differences that arise in our setting. 
We start by proving the following lemma which is used to obtain Theorem 3.6. **Lemma 3.4**.: _Assume that (H1) and (H4) hold. For any \((u,U_{1},\ldots,U_{L})\in HSD^{p}_{L}(\Omega)\) it follows that_ * _if_ \(p>1\)_,_ \[\limsup_{\delta\to 0^{+}}m(u,U_{1},\ldots,U_{L};Q_{\nu}(x_{0},(1-\delta)r)) \leqslant m(u,U_{1},\ldots,U_{L};Q_{\nu}(x_{0},r)),\] _where_ \(Q_{\nu}(x_{0},r)\) _is any cube centred at_ \(x_{0}\) _with side length_ \(r\)_, two faces orthogonal to_ \(\nu\) _and contained in_ \(\Omega\)_;_ * _if_ \(p=1\)_,_ \[\limsup_{\delta\to 0^{+}}m(u,U_{1},\ldots,U_{L};O_{\delta})\leqslant m(u,U_{1}, \ldots,U_{L};O),\] _where_ \(O_{\delta}=\{x\in O:\operatorname{dist}(x,\partial O)>\delta\}\) _and_ \(O\in\mathcal{O}(\Omega)\)_._ Proof.: Suppose first that \(p>1\). Without loss of generality we can assume that \(x_{0}=0\), \(r=1\), \(\nu=\mathbf{e_{1}}\) and \(Q\subset\Omega\). For every \(\varepsilon>0\) there exists \((v,V_{1},\ldots V_{L})\in\mathcal{C}_{HSD^{p}_{L}}(u,U_{1},\ldots U_{L};Q)\) such that \[\mathcal{F}(v,V_{1},\ldots,V_{L};Q)\leqslant m(u,U_{1},\ldots,U_{L};Q)+\varepsilon. \tag{3.5}\] Let \(0<\delta<1\) be small enough so that \(u=v\) in a neighbourhood of \(Q\setminus Q(1-2\delta)\), and let \(\delta<\alpha(\delta)<2\delta\) be such that \(\lim_{\delta\to 0^{+}}\alpha(\delta)=0\), \(Q(1-\alpha(\delta))\subset\subset Q(1-\delta)\) and \[\frac{\mathcal{L}^{N}(Q\setminus Q(1-\alpha(\delta)))}{\mathcal{L}^{N}(Q(1- \delta)\setminus Q(1-\alpha(\delta)))}\leqslant C, \tag{3.6}\] where the constant \(C\) depends only on the space dimension \(N\) and is, therefore, independent of \(\delta\). For every \(i=1,\ldots,L\), define \[\overline{V}_{i}=\begin{cases}V_{i},&\text{in }Q(1-\alpha(\delta))\\ \frac{1}{\mathcal{L}^{N}(Q(1-\delta)\setminus Q(1-\alpha(\delta)))}\left( \int_{Q(1-\delta)}U_{i}\,\mathrm{d}x-\int_{Q(1-\alpha(\delta))}V_{i}\,\mathrm{ d}x\right),&\text{in }Q(1-\delta)\setminus Q(1-\alpha(\delta))\\ U_{i},&\text{in }\Omega\setminus Q(1-\delta)\end{cases}\] and \[\overline{v}=\begin{cases}v,&\text{in }Q(1-\alpha(\delta))\\ u,&\text{in }\Omega\setminus Q(1-\alpha(\delta)).\end{cases}\] It is easily verified that \((\overline{v},\overline{V}_{1},\ldots,\overline{V}_{L})\in\mathcal{C}_{HSD^{p}_{L}}(u,U_{1}, \ldots,U_{L};Q(1-\delta))\). Thus, by Remark 3.1, by (H1) and by (3.5), we have \[m(u,U_{1},\ldots,U_{L};Q(1-\delta)) \leqslant\mathcal{F}(\overline{v},\overline{V}_{1},\ldots, \overline{V}_{L};Q(1-\delta))\] \[\leqslant\mathcal{F}(v,V_{1},\ldots,V_{L};Q(1-\alpha(\delta)))+C \Big{[}\mathcal{L}^{N}(Q_{1-\delta}\setminus Q(1-\alpha(\delta)))\] \[\qquad\qquad+|D\overline{v}|(Q(1-\delta)\setminus Q(1-\alpha( \delta)))+\sum_{i=1}^{L}\||\overline{V}_{i}|^{p}\|_{L^{i}(Q(1-\delta)\setminus Q (1-\alpha(\delta));\mathbb{R}^{d\times N})}\Big{]}\] \[\leqslant\mathcal{F}(v,V_{1},\ldots,V_{L};Q)+C\Big{[}\mathcal{L} ^{N}(Q_{1-\delta}\setminus Q(1-\alpha(\delta)))\] \[\qquad\qquad+|D\overline{v}|(Q(1-\delta)\setminus Q(1-\alpha( \delta)))+\sum_{i=1}^{L}\||\overline{V}_{i}|^{p}\|_{L^{i}(Q(1-\delta)\setminus Q (1-\alpha(\delta));\mathbb{R}^{d\times N})}\Big{]}\] \[\leqslant m(u,U_{1},\ldots,U_{L};Q)+\varepsilon+C\Big{[} \mathcal{L}^{N}(Q_{1-\delta}\setminus Q(1-\alpha(\delta)))\] \[\qquad\qquad+|D\overline{v}|(Q(1-\delta)\setminus Q(1-\alpha( \delta)))+\sum_{i=1}^{L}\||\overline{V}_{i}|^{p}\|_{L^{i}(Q(1-\delta)\setminus Q (1-\alpha(\delta));\mathbb{R}^{d\times N})}\Big{]}. 
\tag{3.7}\] Clearly, \(\lim\limits_{\delta\to 0^{+}}\mathcal{L}^{N}(Q_{1-\delta}\setminus Q(1-\alpha( \delta)))=0\) and, since \(u=v\) on \(\partial Q(1-\alpha(\delta))\), it also follows that \[\lim\limits_{\delta\to 0^{+}}|D\overline{v}|(Q(1-\delta)\setminus Q(1-\alpha( \delta)))=0.\] On the other hand, for every \(i=1,\ldots,L\), we have \[\begin{split}&\|\overline{V}_{i}|^{p}\|_{L^{1}(Q(1-\delta) \setminus Q(1-\alpha(\delta));\mathbb{R}^{d\times N})}=\frac{1}{(\mathcal{L} ^{N}(Q(1-\delta)\setminus Q(1-\alpha(\delta))))^{p-1}}\left|\int_{Q(1-\delta) }U_{i}\,\mathrm{d}x-\int_{Q(1-\alpha(\delta))}V_{i}\,\mathrm{d}x\right|^{p}\\ &\qquad\qquad=\frac{1}{(\mathcal{L}^{N}(Q(1-\delta)\setminus Q(1 -\alpha(\delta))))^{p-1}}\left|\int_{Q(1-\alpha(\delta))}U_{i}-V_{i}\,\mathrm{ d}x+\int_{Q(1-\delta)\setminus Q(1-\alpha(\delta))}U_{i}\,\mathrm{d}x\right|^{p} \\ &\qquad\qquad\leqslant\frac{C}{(\mathcal{L}^{N}(Q(1-\delta) \setminus Q(1-\alpha(\delta))))^{p-1}}\left(\left|\int_{Q(1-\alpha(\delta))}U _{i}-V_{i}\,\mathrm{d}x\right|^{p}+\left|\int_{Q(1-\delta)\setminus Q(1- \alpha(\delta))}U_{i}\,\mathrm{d}x\right|^{p}\right).\end{split} \tag{3.8}\] Recalling that \(\int_{Q}U_{i}-V_{i}\,\mathrm{d}x=0\), \(\forall i=1,\ldots,L\), the first term in (3.8) can be estimated by using Holder's inequality yielding \[\begin{split}&\frac{C}{(\mathcal{L}^{N}(Q(1-\delta)\setminus Q(1 -\alpha(\delta))))^{p-1}}\left|\int_{Q(1-\alpha(\delta))}U_{i}-V_{i}\,\mathrm{ d}x\right|^{p}\\ &\qquad\qquad=\frac{C}{(\mathcal{L}^{N}(Q(1-\delta)\setminus Q(1 -\alpha(\delta))))^{p-1}}\left|\int_{Q\setminus Q(1-\alpha(\delta))}U_{i}-V_{i }\,\mathrm{d}x\right|^{p}\\ &\qquad\qquad\leqslant\frac{C}{(\mathcal{L}^{N}(Q(1-\delta) \setminus Q(1-\alpha(\delta))))^{p-1}}\|U_{i}-V_{i}\|_{L^{p}(Q\setminus Q(1- \alpha(\delta));\mathbb{R}^{d\times N})}^{p}(\mathcal{L}^{N}(Q\setminus Q(1- \alpha(\delta))))^{p-1}.\end{split}\] By (3.6) and the fact that \(\lim\limits_{\delta\to 0^{+}}\mathcal{L}^{N}(Q\setminus Q(1-\alpha(\delta)))=0\) we conclude that \[\lim\limits_{\delta\to 0^{+}}\frac{C}{(\mathcal{L}^{N}(Q(1-\delta)\setminus Q(1 -\alpha(\delta))))^{p-1}}\left|\int_{Q(1-\alpha(\delta))}U_{i}-V_{i}\,\mathrm{ d}x\right|^{p}=0.\] Regarding the second term in (3.8), a similar argument using Holder's inequality leads to \[\begin{split}\lim\limits_{\delta\to 0^{+}}\frac{C}{(\mathcal{L}^{N}(Q(1- \delta)\setminus Q(1-\alpha(\delta))))^{p-1}}\left|\int_{Q(1-\delta)\setminus Q (1-\alpha(\delta))}U_{i}\,\mathrm{d}x\right|^{p}\\ \leqslant\lim\limits_{\delta\to 0^{+}}C\|U_{i}\|_{L^{p}(Q(1- \delta)\setminus Q(1-\alpha(\delta));\mathbb{R}^{d\times N})}^{p}=0.\end{split}\] Therefore, from (3.7), we obtain \[\limsup\limits_{\delta\to 0^{+}}m(u,U_{1},\ldots,U_{L};Q(1-\delta))\leqslant m(u,U_{1},\ldots,U_{L};Q)+\varepsilon\] and it suffices to let \(\varepsilon\to 0^{+}\) to complete the proof in the case \(p>1\). When \(p=1\) the proof is similar and we omit the details. In this case the estimate of the last term in (3.7) is simpler and does not require the use of Holder's inequality. Also, in this case, more general sets other than cubes may be considered as there is no need to use inequality (3.6) (see also [14]). 
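For the reader's convenience, we spell out the mean-value matching behind the admissibility of \((\overline{v},\overline{V}_{1},\ldots,\overline{V}_{L})\) claimed in the proof above: by the very definition of \(\overline{V}_{i}\) on the middle layer, \[\int_{Q(1-\delta)}\overline{V}_{i}\,\mathrm{d}x=\int_{Q(1-\alpha(\delta))}V_{i}\,\mathrm{d}x+\left(\int_{Q(1-\delta)}U_{i}\,\mathrm{d}x-\int_{Q(1-\alpha(\delta))}V_{i}\,\mathrm{d}x\right)=\int_{Q(1-\delta)}U_{i}\,\mathrm{d}x,\] so the integral constraint in (3.1) holds on \(Q(1-\delta)\) for every \(i=1,\ldots,L\), while \(\overline{v}=u\) in a neighbourhood of \(\partial Q(1-\delta)\) because \(u=v\) in a neighbourhood of \(Q\setminus Q(1-2\delta)\).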
Following [6, 7], for a fixed \((u,U_{1},\ldots,U_{L})\in HSD^{p}_{L}(\Omega)\), we set \(\mu:=\mathcal{L}^{N}\lfloor\Omega+|D^{s}u|\) and we define \[\mathcal{O}^{\star}(\Omega):=\{Q_{\nu}(x,\varepsilon):x\in\Omega,\nu\in\mathbb{S}^{N-1},\varepsilon>0\},\] and, for \(O\in\mathcal{O}(\Omega)\) and \(\delta>0\), we let \[m^{\delta}(u,U_{1},\ldots,U_{L};O):=\inf\Big\{\sum_{i=1}^{\infty}m(u,U_{1},\ldots,U_{L};Q_{i}):Q_{i}\in\mathcal{O}^{\star}(\Omega),Q_{i}\subseteq O,Q_{i}\cap Q_{j}=\emptyset\text{ if }i\neq j,\ \operatorname{diam}(Q_{i})<\delta,\ \mu\Big(O\setminus\bigcup_{i=1}^{\infty}Q_{i}\Big)=0\Big\}.\] Since \(m^{\delta}(u,U_{1},\ldots,U_{L};O)\) increases as \(\delta\) decreases, we now define \[m^{\star}(u,U_{1},\ldots,U_{L};O):=\sup_{\delta>0}m^{\delta}(u,U_{1},\ldots,U_{L};O)=\lim_{\delta\to 0^{+}}m^{\delta}(u,U_{1},\ldots,U_{L};O).\] Adapting the reasoning given in [14, Lemma 4.2 and Theorem 4.3], with an even easier argument due to our hypothesis (H2) and to the fact that our fields \(u\) have bounded variation, we obtain the two results below. 

**Lemma 3.5**.: _Let \(p\geqslant 1\) and assume that (H1)-(H4) hold. Then, for all \((u,U_{1},\ldots,U_{L})\in HSD^{p}_{L}(\Omega)\) and all \(O\in\mathcal{O}(\Omega)\), we have_ \[\mathcal{F}(u,U_{1},\ldots,U_{L};O)=m^{\star}(u,U_{1},\ldots,U_{L};O).\] 

**Theorem 3.6**.: _Let \(p\geqslant 1\) and assume that hypotheses (H1), (H2) and (H4) hold. Then, for every \(\nu\in\mathbb{S}^{N-1}\) and for every \((u,U_{1},\ldots,U_{L})\in HSD^{p}_{L}(\Omega)\), we have_ \[\lim_{\varepsilon\to 0^{+}}\frac{\mathcal{F}(u,U_{1},\ldots,U_{L};Q_{\nu}(x_{0},\varepsilon))}{\mu(Q_{\nu}(x_{0},\varepsilon))}=\lim_{\varepsilon\to 0^{+}}\frac{m(u,U_{1},\ldots,U_{L};Q_{\nu}(x_{0},\varepsilon))}{\mu(Q_{\nu}(x_{0},\varepsilon))}\] _for \(\mu\)-a.e. \(x_{0}\in\Omega\), where \(\mu:=\mathcal{L}^{N}\lfloor\Omega+|D^{s}u|\)._ 

We now present the proof of our main result of this section. Proof of Theorem 3.2.: We begin by proving that, for \(\mathcal{L}^{N}\)-a.e. \(x_{0}\in\Omega\), \[\frac{d\mathcal{F}(u,U_{1},\ldots,U_{L};\cdot)}{d\mathcal{L}^{N}}(x_{0})=f(x_{0},u(x_{0}),\nabla u(x_{0}),U_{1}(x_{0}),\ldots,U_{L}(x_{0})). \tag{3.9}\] Let \(x_{0}\) be a fixed point in \(\Omega\) satisfying the following properties \[\lim_{\varepsilon\to 0^{+}}\frac{1}{\varepsilon}\,\fint_{Q(x_{0},\varepsilon)}|u(x)-u(x_{0})-\nabla u(x_{0})(x-x_{0})|\,\mathrm{d}x=0; \tag{3.10}\] \[\lim_{\varepsilon\to 0^{+}}\frac{1}{\varepsilon^{N}}|Du|(Q(x_{0},\varepsilon))=|\nabla u(x_{0})|,\quad\lim_{\varepsilon\to 0^{+}}\frac{1}{\varepsilon^{N}}|D^{s}u|(Q(x_{0},\varepsilon))=0; \tag{3.11}\] \[\lim_{\varepsilon\to 0^{+}}\,\fint_{Q(x_{0},\varepsilon)}|U_{i}(x)-U_{i}(x_{0})|\,\mathrm{d}x=0,\ \forall i=1,\ldots,L; \tag{3.12}\] \[\frac{d\mathcal{F}(u,U_{1},\ldots,U_{L};\cdot)}{d\mathcal{L}^{N}}(x_{0})=\lim_{\varepsilon\to 0^{+}}\frac{\mathcal{F}(u,U_{1},\ldots,U_{L};Q(x_{0},\varepsilon))}{\varepsilon^{N}}=\lim_{\varepsilon\to 0^{+}}\frac{m(u,U_{1},\ldots,U_{L};Q(x_{0},\varepsilon))}{\varepsilon^{N}}; \tag{3.13}\] \[\frac{d\mathcal{F}(v_{a},U_{1}(x_{0}),\ldots,U_{L}(x_{0});\cdot)}{d\mathcal{L}^{N}}(x_{0})=\lim_{\varepsilon\to 0^{+}}\frac{m(v_{a},U_{1}(x_{0}),\ldots,U_{L}(x_{0});Q(x_{0},\varepsilon))}{\varepsilon^{N}}; \tag{3.14}\] where we are denoting by \(v_{a}\) the function defined in \(\Omega\) by \(v_{a}(x):=u(x_{0})+\nabla u(x_{0})(x-x_{0})\). It is well known that the above properties hold for \(\mathcal{L}^{N}\)-a.e. 
point \(x_{0}\) in \(\Omega\), taking also into consideration Theorem 3.6 in (3.13) and (3.14). Having fixed \(x_{0}\) as above, let \(\delta\in(0,1)\) and let \(\varepsilon>0\) be small enough so that \(Q(x_{0},\varepsilon)\subset\Omega\). Given the definition of the density \(f\) in (3.3), due to (3.13) and (3.14), we want to show that \[\lim_{\varepsilon\to 0^{+}}\frac{m(v_{a},U_{1}(x_{0}),\ldots,U_{L}(x_{0});Q(x_{0},\varepsilon))}{\mathcal{L}^{N}(Q(x_{0},\varepsilon))}-\lim_{\varepsilon\to 0^{+}}\frac{m(u,U_{1},\ldots,U_{L};Q(x_{0},\varepsilon))}{\mathcal{L}^{N}(Q(x_{0},\varepsilon))}=0. \tag{3.15}\] Let \((\widetilde{u},\widetilde{U}_{1},\ldots,\widetilde{U}_{L})\in\mathcal{C}_{HSD^{p}_{L}}(v_{a},U_{1}(x_{0}),\ldots,U_{L}(x_{0});Q(x_{0},\delta\varepsilon))\) be such that \[\varepsilon^{N+1}+m(v_{a},U_{1}(x_{0}),\ldots,U_{L}(x_{0});Q(x_{0},\delta\varepsilon))\geqslant\mathcal{F}(\widetilde{u},\widetilde{U}_{1},\ldots,\widetilde{U}_{L};Q(x_{0},\delta\varepsilon)). \tag{3.16}\] Then, as \(\widetilde{u}=v_{a}\) on \(\partial Q(x_{0},\delta\varepsilon)\), we have \[|\mathrm{tr}\,u-\mathrm{tr}\,\widetilde{u}|(\partial Q(x_{0},\delta\varepsilon)):=\int_{\partial Q(x_{0},\delta\varepsilon)}|\widetilde{u}(x)-u(x)|\,\mathrm{d}\mathcal{H}^{N-1}(x)=\int_{\partial Q(x_{0},\delta\varepsilon)}|v_{a}(x)-u(x)|\,\mathrm{d}\mathcal{H}^{N-1}(x). \tag{3.17}\] Let \(\delta^{\prime}\in(\delta,1)\) be such that \(Q(x_{0},\delta\varepsilon)\subset\subset Q(x_{0},\delta^{\prime}\varepsilon)\) and define \[\widetilde{v}_{\varepsilon}:=\left\{\begin{array}{ll}\widetilde{u},&\text{in }Q(x_{0},\delta\varepsilon),\\ u,&\text{in }\Omega\setminus Q(x_{0},\delta\varepsilon)\end{array}\right.\] and, for every \(i\in\{1,\ldots,L\}\), let
Hence, by Remark 3.1, (H4) and (3.16), we have \[m(u,U_{1},\ldots,U_{L};Q(x_{0},\varepsilon)) \leqslant\mathcal{F}(\widetilde{v}_{\varepsilon},\widetilde{V}_{ \varepsilon}^{1},\ldots,\widetilde{V}_{\varepsilon}^{L};Q(x_{0},\varepsilon))\] \[\leqslant\mathcal{F}(\widetilde{v}_{\varepsilon},\widetilde{V}_{ \varepsilon}^{1},\ldots,\widetilde{V}_{\varepsilon}^{L};Q(x_{0},\delta^{ \prime}\varepsilon))+\mathcal{F}(\widetilde{v}_{\varepsilon},\widetilde{V}_{ \varepsilon}^{1},\ldots,\widetilde{V}_{\varepsilon}^{L};Q(x_{0},\varepsilon) \setminus\overline{Q(x_{0},\delta\varepsilon)})\] \[\leqslant\mathcal{F}(\widetilde{v}_{\varepsilon},\widetilde{V}_{ \varepsilon}^{1},\ldots,\widetilde{V}_{\varepsilon}^{L};Q(x_{0},\delta \varepsilon))+\mathcal{F}(\widetilde{v}_{\varepsilon},\widetilde{V}_{ \varepsilon}^{1},\ldots,\widetilde{V}_{\varepsilon}^{L};Q(x_{0},\varepsilon )\setminus\overline{Q(x_{0},\delta\varepsilon)})\] \[\qquad\qquad+C\Big{(}\mathcal{L}^{N}(Q(x_{0},\delta^{\prime} \varepsilon)\setminus Q(x_{0},\delta\varepsilon))+|D\widetilde{v}_{ \varepsilon}|(Q(x_{0},\delta^{\prime}\varepsilon)\setminus Q(x_{0},\delta \varepsilon))\] \[\qquad\qquad\qquad+\sum_{i=1}^{L}\int_{Q(x_{0},\delta^{\prime} \varepsilon)\setminus Q(x_{0},\delta\varepsilon)}|\widetilde{V}_{ \varepsilon}^{i}|^{p}\,\mathrm{d}x\Big{)}\] \[\leqslant\mathcal{F}(\widetilde{u},\widetilde{U}_{1},\ldots, \widetilde{U}_{L};Q(x_{0},\delta\varepsilon))+C\Big{(}\mathcal{L}^{N}(Q(x_{0},\varepsilon)\setminus Q(x_{0},\delta\varepsilon))\] \[\qquad\qquad\qquad+\sum_{i=1}^{L}\int_{Q(x_{0},\varepsilon) \setminus Q(x_{0},\delta\varepsilon)}|\widetilde{V}_{\varepsilon}^{i}|^{p}\, \mathrm{d}x+|D\widetilde{v}_{\varepsilon}|(Q(x_{0},\varepsilon)\setminus Q(x_ {0},\delta\varepsilon))\Big{)}\] \[\leqslant\varepsilon^{N+1}+m(v_{a},U_{1}(x_{0}),\ldots,U_{L}(x_{ 0});Q(x_{0},\delta\varepsilon))\] \[\qquad\qquad+C\Big{(}\varepsilon^{N}(1-\delta^{N})+|Du|(Q(x_{0}, \varepsilon)\setminus\overline{Q(x_{0},\delta\varepsilon)})+|\mathrm{tr} \,\widetilde{u}-\mathrm{tr}\,u|(\partial Q(x_{0},\delta\varepsilon))\] \[\qquad\qquad\qquad+\sum_{i=1}^{L}\int_{Q(x_{0},\varepsilon) \setminus Q(x_{0},\delta\varepsilon)}|\widetilde{V}_{\varepsilon}^{i}|^{p}\, \mathrm{d}x\Big{)}. \tag{3.18}\] We observe that for every \(i\in\{1,\ldots,L\}\) we have \[\int_{Q(x_{0},\varepsilon)\setminus Q(x_{0},\delta\varepsilon)}| \widetilde{V}_{\varepsilon}^{i}|^{p}\,\mathrm{d}x\leqslant\frac{1}{ \varepsilon^{N(p-1)}(1-\delta^{N})^{p-1}}\left|\int_{Q(x_{0},\varepsilon)}U_{i }(x)\,\mathrm{d}x-\int_{Q(x_{0},\delta\varepsilon)}U_{i}(x_{0})\,\mathrm{d}x \right|^{p}\] \[\leqslant\frac{C}{\varepsilon^{N(p-1)}(1-\delta^{N})^{p-1}} \left(\left|\int_{Q(x_{0},\varepsilon)\setminus Q(x_{0},\delta\varepsilon)}U_ {i}(x)\,\mathrm{d}x\right|^{p}+\left|\int_{Q(x_{0},\delta\varepsilon)}(U_{i}(x )-U_{i}(x_{0}))\,\mathrm{d}x\right|^{p}\right)\] \[\leqslant\frac{C\varepsilon^{Np}}{\varepsilon^{N(p-1)}(1-\delta ^{N})^{p-1}}\left(\left|\int_{\overline{Q(x_{0},\varepsilon)}}U_{i}(x)\, \mathrm{d}x-\delta^{N}\int_{\overline{Q(x_{0},\delta\varepsilon)}}U_{i}(x)\, \mathrm{d}x\right|^{p}+\left|\delta^{N}\int_{\overline{Q(x_{0},\delta \varepsilon)}}U_{i}(x)-U_{i}(x_{0})\,\mathrm{d}x\right|^{p}\right). 
\tag{3.19}\] Thus, to obtain (3.15), taking into account (3.18), (3.19) and Lemma 2.1, we have \[\lim_{\varepsilon\to 0^{+}}\frac{m(u,U_{1},\ldots,U_{L};Q(x_{0}, \varepsilon))}{\mathcal{L}^{N}(Q(x_{0},\varepsilon))}-\lim_{\varepsilon\to 0^{+}}\frac{m(v_{a},U_{1}(x_{0}), \ldots,U_{L}(x_{0});Q(x_{0},\varepsilon))}{\mathcal{L}^{N}(Q(x_{0}, \varepsilon))}\] \[\leqslant\lim_{\varepsilon\to 0^{+}}\frac{m(u,U_{1},\ldots,U_{L};Q(x_{0}, \varepsilon))}{\varepsilon^{N}}-\limsup_{\delta\to 1^{-}}\lim_{\varepsilon\to 0^{+}}\frac{m(v_{a},U_{1}(x_{0}), \ldots,U_{L}(x_{0});Q(x_{0},\delta\varepsilon))}{\varepsilon^{N}}\] \[\qquad\qquad\leqslant\limsup_{\delta\to 1^{-}}\limsup_{ \varepsilon\to 0^{+}}\left(\varepsilon+C(1-\delta^{N})+\frac{|Du|(Q(x_{0}, \varepsilon)\setminus\overline{Q(x_{0},\delta\varepsilon)}+|\mathrm{tr}\, \widetilde{u}-\mathrm{tr}\,u|(\partial Q(x_{0},\delta\varepsilon))}{ \varepsilon^{N}}+\right.\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.\frac{C}{(1-\delta^{N })^{p-1}}\sum_{i=1}^{L}|U_{i}(x_{0})-\delta^{N}U_{i}(x_{0})|^{p}\right) \tag{3.20}\] where in the last line we have used the fact that \(x_{0}\) is a Lebesgue point for \(U_{i}\), see (3.12). Using (3.11) and [2, (5.79)] yields \[\limsup_{\delta\to 1^{-}}\limsup_{\varepsilon\to 0^{+}}\frac{|Du|(Q(x_{0}, \varepsilon)\setminus\overline{Q(x_{0},\delta\varepsilon)})}{\varepsilon^{N}} \leqslant\lim_{\delta\to 1^{-}}|\nabla u(x_{0})|(1-\delta^{N})=0. \tag{3.21}\] On the other hand, by (3.17) and a change of variables, we can apply [6, Lemma 2.3] to conclude that \[\limsup_{\varepsilon\to 0^{+}}\frac{|\mathrm{tr}\,\widetilde{u}- \mathrm{tr}\,u|(\partial Q(x_{0},\delta\varepsilon))}{\varepsilon^{N}} =\limsup_{\varepsilon\to 0^{+}}\delta^{N}\frac{|\mathrm{tr}\,v_{a}- \mathrm{tr}\,u|(\partial Q(x_{0},\delta\varepsilon))}{\delta^{N}\varepsilon^ {N}}\] \[=\limsup_{\varepsilon\to 0^{+}}\delta^{N}\int_{\partial Q}| \mathrm{tr}(\mathrm{u}_{\varepsilon\delta}-\nabla u(x_{0})y)|\,\mathrm{d} \mathcal{H}^{N-1}(y)=0, \tag{3.22}\] since, denoting by \(u_{\varepsilon\delta}(y):=\frac{u(x_{0}+\delta\varepsilon y)-u(x_{0})}{ \delta\varepsilon}\), it follows from (3.10) and (3.11) that \(u_{\varepsilon\delta}\to\nabla u(x_{0})y\) in \(L^{1}(Q;\mathbb{R}^{d})\) and \(|Du_{\varepsilon\delta}|(Q)\to|\nabla u(x_{0})|\), as \(\varepsilon\to 0^{+}\). Taking into account (3.20), (3.21) and (3.22) we conclude that \[\lim_{\varepsilon\to 0^{+}}\frac{m(u,U_{1},\ldots,U_{L};Q(x_{0},\varepsilon))}{ \varepsilon^{N}}\leqslant\lim_{\varepsilon\to 0^{+}}\frac{m(v_{a},U_{1}(x_{0}), \ldots,U_{L}(x_{0});Q(x_{0},\varepsilon))}{\varepsilon^{N}}.\] Interchanging the roles of \((u,U_{1},\ldots,U_{L})\) and \((v_{a},U_{1}(x_{0}),\ldots,U_{L}(x_{0}))\), the reverse inequality is proved in a similar fashion. This completes the proof of (3.9). 
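For orientation, we note why \(\varepsilon^{N}\) is the right normalisation in this first part of the proof: at the points \(x_{0}\) fixed above, (3.11) gives \[\frac{\mu(Q(x_{0},\varepsilon))}{\varepsilon^{N}}=1+\frac{|D^{s}u|(Q(x_{0},\varepsilon))}{\varepsilon^{N}}\longrightarrow 1\quad\text{as }\varepsilon\to 0^{+},\] so the \(\mu\)-derivatives provided by Theorem 3.6 coincide with the \(\varepsilon^{N}\)-normalised limits used in (3.13) and (3.14).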
Next we want to prove that, for \(\mathcal{H}^{N-1}\)-a.e. \(x_{0}\in S_{u}\), \[\frac{d\mathcal{F}(u,U_{1},\ldots,U_{L};\cdot)}{d\mathcal{H}^{N-1}\lfloor S_{u}}(x_{0})=\Phi(x_{0},u^{+}(x_{0}),u^{-}(x_{0}),\nu_{u}(x_{0})).\] For simplicity of notation we denote by \(\nu\) the unit vector \(\nu_{u}\) and by \(v_{j}\) the function defined in \(\Omega\) by \[v_{j}(x)=v_{u^{+}(x_{0}),u^{-}(x_{0}),\nu(x_{0})}(x-x_{0}):=\left\{\begin{array}{ll}u^{+}(x_{0})&\mbox{ if }(x-x_{0})\cdot\nu(x_{0})>0,\\ u^{-}(x_{0})&\mbox{ if }(x-x_{0})\cdot\nu(x_{0})\leqslant 0.\end{array}\right.\] It is well known that, for \(\mathcal{H}^{N-1}\)-a.e. \(x_{0}\in S_{u}\), the following hold: \[\lim_{\varepsilon\to 0^{+}}\frac{1}{\varepsilon^{N}}\int_{Q_{\nu}(x_{0},\varepsilon)}|u(x)-v_{j}(x)|\,\mathrm{d}x=0; \tag{3.23}\] \[\lim_{\varepsilon\to 0^{+}}\frac{|Du|(Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}=|[u]|(x_{0})=\lim_{\varepsilon\to 0^{+}}\frac{\mu(Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}; \tag{3.24}\] \[\lim_{\varepsilon\to 0^{+}}\frac{1}{\varepsilon^{N-1}}\int_{Q_{\nu}(x_{0},\varepsilon)}|U_{i}(x)|^{p}\,\mathrm{d}x=0,\quad\forall i=1,\ldots,L; \tag{3.25}\] \[\frac{d\mathcal{F}(u,U_{1},\ldots,U_{L};\cdot)}{d\mathcal{H}^{N-1}\lfloor S_{u}}(x_{0})=\lim_{\varepsilon\to 0^{+}}\frac{\mathcal{F}(u,U_{1},\ldots,U_{L};Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}=\lim_{\varepsilon\to 0^{+}}\frac{m(u,U_{1},\ldots,U_{L};Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}; \tag{3.26}\] \[\Phi(x_{0},u^{+}(x_{0}),u^{-}(x_{0}),\nu(x_{0}))=\lim_{\varepsilon\to 0^{+}}\frac{m(v_{j},0,\ldots,0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}; \tag{3.27}\] in (3.26) we also used Theorem 3.6 together with (3.24), and in (3.27) the \(\limsup\) in (3.4) is in fact a limit, by the analogue of (3.26) applied to \((v_{j},0,\ldots,0)\). Hence, in order to prove the claim it suffices to show that \[\lim_{\varepsilon\to 0^{+}}\frac{m(u,U_{1},\ldots,U_{L};Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}=\lim_{\varepsilon\to 0^{+}}\frac{m(v_{j},0,\ldots,0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}. \tag{3.28}\] Let \(0<\delta<1\) and let \(\varepsilon>0\) be small enough so that \(Q_{\nu}(x_{0},\varepsilon)\subset\Omega\), and let \((\widetilde{u},\widetilde{U}_{1},\ldots,\widetilde{U}_{L})\in\mathcal{C}_{HSD^{p}_{L}}(v_{j},0,\ldots,0;Q_{\nu}(x_{0},\delta\varepsilon))\) be such that \[\varepsilon^{N}+m(v_{j},0,\ldots,0;Q_{\nu}(x_{0},\delta\varepsilon))\geqslant\mathcal{F}(\widetilde{u},\widetilde{U}_{1},\ldots,\widetilde{U}_{L};Q_{\nu}(x_{0},\delta\varepsilon)). \tag{3.29}\] Then, as \(\widetilde{u}=v_{j}\) on \(\partial Q_{\nu}(x_{0},\delta\varepsilon)\), we have \[|\mathrm{tr}\,u-\mathrm{tr}\,\widetilde{u}|(\partial Q_{\nu}(x_{0},\delta\varepsilon))=\int_{\partial Q_{\nu}(x_{0},\delta\varepsilon)}|v_{j}(x)-u(x)|\,\mathrm{d}\mathcal{H}^{N-1}(x). \tag{3.30}\] Let \(\delta^{\prime}\in(\delta,1)\) be such that \(Q_{\nu}(x_{0},\delta\varepsilon)\subset\subset Q_{\nu}(x_{0},\delta^{\prime}\varepsilon)\). 
Thus, \((\widetilde{v}_{\varepsilon},\widetilde{V}_{\varepsilon}^{1},\ldots, \widetilde{V}_{\varepsilon}^{L})\) belongs to the class of admissible test functions \(\mathcal{C}_{HSD_{L}^{p}}(u,U_{1},\ldots,U_{L});Q_{\nu}(x_{0},\varepsilon))\) and therefore we obtain, using also Remark 3.1, (H4) and (3.29), \[m(u,U_{1},\ldots,U_{L};Q_{\nu}(x_{0},\varepsilon)) \leqslant\mathcal{F}(\widetilde{v}_{\varepsilon},\widetilde{V}_{ \varepsilon}^{1},\ldots,\widetilde{V}_{\varepsilon}^{L};Q_{\nu}(x_{0}, \varepsilon))\] \[\leqslant\mathcal{F}(\widetilde{v}_{\varepsilon},\widetilde{V}_{ \varepsilon}^{1},\ldots,\widetilde{V}_{\varepsilon}^{L};Q_{\nu}(x_{0},\delta \varepsilon))+\mathcal{F}(\widetilde{v}_{\varepsilon},\widetilde{V}_{ \varepsilon}^{1},\ldots,\widetilde{V}_{\varepsilon}^{L};Q_{\nu}(x_{0}, \varepsilon)\setminus\overline{Q_{\nu}(x_{0},\delta\varepsilon)})\] \[\qquad+C\Big{(}\mathcal{L}^{N}(Q_{\nu}(x_{0},\delta^{\prime} \varepsilon)\setminus Q_{\nu}(x_{0},\delta\varepsilon))+|D\widetilde{v}_{ \varepsilon}|(Q_{\nu}(x_{0},\delta^{\prime}\varepsilon)\setminus Q_{\nu}(x_{0},\delta\varepsilon))\] \[\qquad\qquad+\sum_{i=1}^{L}\int_{Q_{\nu}(x_{0},\delta^{\prime} \varepsilon)\setminus Q_{\nu}(x_{0},\delta\varepsilon)}|\widetilde{V}_{ \varepsilon}^{i}|^{p}\,\mathrm{d}x\Big{)}\] \[\leqslant\mathcal{F}(\widetilde{u},\widetilde{U}_{1},\ldots, \widetilde{U}_{L};Q_{\nu}(x_{0},\delta\varepsilon))+C\Big{(}\mathcal{L}^{N}(Q _{\nu}(x_{0},\varepsilon)\setminus Q_{\nu}(x_{0},\delta\varepsilon))\] \[\qquad\qquad+\sum_{i=1}^{L}\int_{Q_{\nu}(x_{0},\varepsilon) \setminus Q_{\nu}(x_{0},\delta\varepsilon)}|\widetilde{V}_{\varepsilon}^{i}| ^{p}\,\mathrm{d}x+|D\widetilde{v}_{\varepsilon}|(Q_{\nu}(x_{0},\varepsilon) \setminus Q_{\nu}(x_{0},\delta\varepsilon))\Big{)}\] \[\leqslant\varepsilon^{N}+m(v_{j},0,\ldots,0;Q_{\nu}(x_{0},\delta \varepsilon))\] \[\qquad+C\Big{(}\varepsilon^{N}(1-\delta^{N})+|Du|(Q_{\nu}(x_{0}, \varepsilon)\setminus\overline{Q_{\nu}(x_{0},\delta\varepsilon)})+|\mathrm{ tr}\,\widetilde{u}-\mathrm{tr}\,u|(\partial Q_{\nu}(x_{0},\delta\varepsilon))\] \[\qquad\qquad+\sum_{i=1}^{L}\int_{Q_{\nu}(x_{0},\varepsilon) \setminus Q_{\nu}(x_{0},\delta\varepsilon)}|\widetilde{V}_{\varepsilon}^{i}| ^{p}\,\mathrm{d}x\Big{)}. \tag{3.31}\] For every \(i\in\{1,\ldots,L\}\) we have, using Holder's inequality, \[\int_{Q_{\nu}(x_{0},\varepsilon)\setminus Q_{\nu}(x_{0}, \delta\varepsilon)}|\widetilde{V}_{\varepsilon}^{i}|^{p}\,\mathrm{d}x \leqslant\frac{1}{\varepsilon^{N(p-1)}(1-\delta^{N})^{p-1}} \left|\int_{Q_{\nu}(x_{0},\varepsilon)}U_{i}(x)\,\mathrm{d}x\right|^{p}\] \[\leqslant\frac{\varepsilon^{N(p-1)}}{\varepsilon^{N(p-1)}(1- \delta^{N})^{p-1}}\|U_{i}\|_{L^{p}(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d \times N})}^{p}\] \[=\frac{1}{(1-\delta^{N})^{p-1}}\|U_{i}\|_{L^{p}(Q_{\nu}(x_{0}, \varepsilon);\mathbb{R}^{d\times N})}^{p}. 
\tag{3.32}\] Hence, from (3.31) and (3.32), taking into account (3.27) and Lemma 2.1, it follows that \[\lim_{\varepsilon\to 0^{+}}\frac{m(u,U_{1},\ldots U_{L};Q_{\nu}(x_{0}, \varepsilon))}{\varepsilon^{N-1}}\leqslant \limsup_{\delta\to 1^{-}}\limsup_{\varepsilon\to 0^{+}}C\Big{(} \varepsilon+\frac{m(v_{j},0,\ldots,0;Q_{\nu}(x_{0},\delta\varepsilon))}{ \varepsilon^{N-1}}\] \[\qquad+\varepsilon(1-\delta^{N})+\frac{1}{(1-\delta^{N})^{p-1}} \sum_{i=1}^{L}\frac{1}{\varepsilon^{N-1}}\|U_{i}\|_{L^{p}(Q_{\nu}(x_{0}, \varepsilon);\mathbb{R}^{d\times N})}^{p}\] \[\qquad+\frac{|Du|(Q_{\nu}(x_{0},\varepsilon)\setminus\overline{Q _{\nu}(x_{0},\delta\varepsilon)})}{\varepsilon^{N-1}}+\frac{|\mathrm{tr} \tilde{u}-\mathrm{tr}|(\partial Q_{\nu}(x_{0},\delta\varepsilon))}{\varepsilon^{ N-1}}\Big{)}\] \[\leqslant\lim_{\varepsilon\to 0^{+}}\frac{m(v_{j},0,\ldots 0;Q_{\nu}(x_{0}, \varepsilon))}{\varepsilon^{N-1}}+\limsup_{\delta\to 1^{-}}(1-\delta^{N})|[u]|(x_{0})\] \[=\lim_{\varepsilon\to 0^{+}}\frac{m(v_{j},0,\ldots 0;Q_{\nu}(x_{0}, \varepsilon))}{\varepsilon^{N-1}} \tag{3.33}\] since, by [2, (5.79)] and (3.24), \[\lim_{\varepsilon\to 0^{+}}\frac{|Du|(Q_{\nu}(x_{0},\varepsilon)\setminus \overline{Q_{\nu}(x_{0},\delta\varepsilon)})}{\varepsilon^{N-1}}\leqslant(1- \delta^{N})|[u]|(x_{0}).\] and \[\lim_{\varepsilon\to 0^{+}}\frac{|\mathrm{tr}\tilde{u}-\mathrm{tr}u|(\partial Q _{\nu}(x_{0},\delta\varepsilon))}{\varepsilon^{N-1}}=0. \tag{3.34}\] To prove this last fact we change variables and use (3.30) to obtain \[\lim_{\varepsilon\to 0^{+}}\frac{|\mathrm{tr}\tilde{u}-\mathrm{tr}u|( \partial Q_{\nu}(x_{0},\delta\varepsilon))}{\varepsilon^{N-1}} =\lim_{\varepsilon\to 0^{+}}\delta^{N-1}\int_{\partial Q_{\nu}}| \mathrm{tr}(v_{j}(x_{0}+\delta\varepsilon y)-u(x_{0}+\delta\varepsilon y))| \,\mathrm{d}\mathcal{H}^{N-1}(y)\] \[=\lim_{\varepsilon\to 0^{+}}\delta^{N-1}\int_{Q_{\nu}}| \mathrm{tr}(v_{u^{+}(x_{0}),u^{-}(x_{0}),\nu(x_{0})}(y)-u_{\delta\varepsilon} (y))|\,\mathrm{d}\mathcal{H}^{N-1}(y).\] where \(u_{\delta\varepsilon}(y)=u(x_{0}+\delta\varepsilon y)\). Then (3.23) and (3.24) yield \[u_{\delta\varepsilon}\to v_{u^{+}(x_{0}),u^{-}(x_{0}),\nu(x_{0})}\text{ in }L^{1}(Q_{\nu};\mathbb{R}^{d})\text{ as } \varepsilon\to 0^{+}\] and \[|Du_{\delta\varepsilon}|(Q_{\nu})=\frac{1}{(\delta\varepsilon)^{N-1}}|Du|(Q_{ \nu}(x_{0},\delta\varepsilon))\to|[u]|(x_{0})=|Dv_{u^{+}(x_{0}),u^{-}(x_{0}), \nu(x_{0})}|(Q_{\nu})\text{ as }\varepsilon\to 0^{+}.\] Hence (3.34) follows from [6, Lemma 2.3] and this completes the proof of inequality (3.33). The reverse inequality can be shown in a similar way by interchanging the roles of \((u,U_{1},\ldots,U_{L})\) and \((v_{j},0,\ldots,0)\) leading to the conclusion stated in (3.28). Theorem 3.2 is thus proved. ## 4. Applications In this section we present some applications of the global method for relaxation obtained in Theorem 3.2. The first application concerns the case of a two-level structured deformation, that is, we take \(L=1\) in Definition 2.2. In this setting, given a deformation \(u\in SBV(\Omega;\mathbb{R}^{d})\), and two non-negative functions \(W\colon\Omega\times\mathbb{R}^{d\times N}\to[0,+\infty)\) and \(\psi\colon\Omega\times\mathbb{R}^{d}\times\mathbb{S}^{N-1}\to[0,+\infty)\), we consider the initial energy of \(u\) defined by \[E(u)\coloneqq\int_{\Omega}W(x,\nabla u(x))\,\mathrm{d}x+\int_{\Omega\cap S_{u }}\psi(x,[u](x),\nu_{u}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x), \tag{4.1}\] which is determined by the bulk and surface energy densities \(W\) and \(\psi\), respectively. 
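For instance, the prototypical (homogeneous) choice \(W(x,A)=|A|^{p}\) and \(\psi(x,\lambda,\nu)=|\lambda|\), which satisfies the hypotheses considered below, gives \[E(u)=\int_{\Omega}|\nabla u(x)|^{p}\,\mathrm{d}x+\int_{\Omega\cap S_{u}}|[u](x)|\,\mathrm{d}\mathcal{H}^{N-1}(x)=\|\nabla u\|^{p}_{L^{p}(\Omega;\mathbb{R}^{d\times N})}+|D^{s}u|(\Omega),\] since \(|[u]\otimes\nu_{u}|=|[u]|\) on \(S_{u}\); this is the model case to keep in mind for the relaxation below.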
Then, as justified by the Approximation Theorem 2.5, we assign an energy to a structured deformation \((g,G)\in HSD^{p}_{1}(\Omega)\), which is equivalent to saying that \((g,G)\in SD(\Omega)\) and \(G\in L^{p}(\Omega;\mathbb{R}^{d\times N})\), via \[I_{p}(g,G)\coloneqq\inf\Big\{\liminf_{n\to\infty}E(u_{n}):u_{n}\in SBV(\Omega;\mathbb{R}^{d}),u_{n}\stackrel{{*}}{{\rightharpoonup}}(g,G)\Big\}. \tag{4.2}\] To simplify notation, here and in what follows, we write \(u_{n}\stackrel{{*}}{{\rightharpoonup}}(g,G)\) to mean \(u_{n}\to g\) in \(L^{1}(\Omega;\mathbb{R}^{d})\) and \(\nabla u_{n}\rightharpoonup G\) in \(L^{p}(\Omega;\mathbb{R}^{d\times N})\), if \(p>1\), and \(\nabla u_{n}\stackrel{{*}}{{\rightharpoonup}}G\) in \(\mathcal{M}(\Omega;\mathbb{R}^{d\times N})\), if \(p=1\). Notice that this notion of convergence coincides, in the case \(L=1\), with the one given in Definition 2.4. Under our coercivity hypothesis (3) below, the definition of \(I_{p}\) coincides with the one considered in [10], see [10, Remark 2.15]. The functional in (4.2) was studied in [10], in the homogeneous case, and later in [17], in the case of a uniformly continuous \(x\) dependence, where, under certain hypotheses on \(W\) and \(\psi\) (cf. [17, Theorem 5.1]), it was shown that \(I_{p}\) admits an integral representation, that is, that there exist functions \(H_{p}\colon\Omega\times\mathbb{R}^{d\times N}\times\mathbb{R}^{d\times N}\to[0,+\infty)\) and \(h_{p}\colon\Omega\times\mathbb{R}^{d}\times\mathbb{S}^{N-1}\to[0,+\infty)\) such that \[I_{p}(g,G)=\int_{\Omega}H_{p}(x,\nabla g(x),G(x))\,\mathrm{d}x+\int_{\Omega\cap S_{g}}h_{p}(x,[g](x),\nu_{g}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x). \tag{4.3}\] In order to present the expressions of the relaxed energy densities \(H_{p}\) and \(h_{p}\) we start by introducing some notation. For \(A,B\in\mathbb{R}^{d\times N}\) let \[\mathcal{C}_{p}^{\mathrm{bulk}}(A,B)\coloneqq\bigg\{u\in SBV(Q;\mathbb{R}^{d}):u|_{\partial Q}(x)=Ax,\int_{Q}\nabla u\,\mathrm{d}x=B,|\nabla u|\in L^{p}(Q)\bigg\}, \tag{4.4}\] and for \(\lambda\in\mathbb{R}^{d}\) and \(\nu\in\mathbb{S}^{N-1}\) let \(u_{\lambda,\nu}\) be the function defined by \[u_{\lambda,\nu}(x)\coloneqq\begin{cases}\lambda&\text{if }x\cdot\nu\geqslant 0,\\ 0&\text{if }x\cdot\nu<0,\end{cases} \tag{4.5}\] and consider the classes given by \[\mathcal{C}_{p}^{\mathrm{surf}}(\lambda,\nu)\coloneqq\Big\{u\in SBV(Q_{\nu};\mathbb{R}^{d}):u|_{\partial Q_{\nu}}(x)=u_{\lambda,\nu}(x),\nabla u(x)=0\text{ for }\mathcal{L}^{N}\text{-a.e. }x\in Q_{\nu}\Big\},\] for \(p>1\), and for \(p=1\), \[\mathcal{C}_{1}^{\mathrm{surf}}(\lambda,\nu)\coloneqq\Big\{u\in SBV(Q_{\nu};\mathbb{R}^{d}):u|_{\partial Q_{\nu}}(x)=u_{\lambda,\nu}(x),\int_{Q_{\nu}}\nabla u\,\mathrm{d}x=0\Big\}.\] Then, the functions \(H_{p}\) and \(h_{p}\) appearing in (4.3) are given by (cf. [17, (5.6), (5.7)]) \[H_{p}(x_{0},A,B)\coloneqq\inf\bigg\{\int_{Q}W(x_{0},\nabla u(x))\,\mathrm{d}x+\int_{Q\cap S_{u}}\psi(x_{0},[u](x),\nu_{u}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x):u\in\mathcal{C}_{p}^{\mathrm{bulk}}(A,B)\bigg\}, \tag{4.6}\] for all \(x_{0}\in\Omega\) and \(A,B\in\mathbb{R}^{d\times N}\), and, for all \(x_{0}\in\Omega\), \(\lambda\in\mathbb{R}^{d}\) and \(\nu\in\mathbb{S}^{N-1}\), \[h_{p}(x_{0},\lambda,\nu)\coloneqq\inf\bigg\{\delta_{1}(p)\int_{Q_{\nu}}W(x_{0},\nabla u(x))\,\mathrm{d}x+\int_{Q_{\nu}\cap S_{u}}\psi(x_{0},[u](x),\nu_{u}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x):u\in\mathcal{C}_{p}^{\mathrm{surf}}(\lambda,\nu)\bigg\}, \tag{4.7}\] where \(\delta_{1}(p)=1\) if \(p=1\) and \(\delta_{1}(p)=0\) if \(p>1\). 

We will consider the following hypotheses on the energy densities: 1. (\(p\)-growth) there exists \(C_{W}>0\) such that, for all \(x\in\Omega\) and \(A\in\mathbb{R}^{d\times N}\), \[W(x,A)\leqslant C_{W}(1+|A|^{p});\] 2. (Lipschitz condition) there exists \(C>0\) such that, for all \(x\in\Omega\) and \(A_{1},A_{2}\in\mathbb{R}^{d\times N}\), \[|W(x,A_{1})-W(x,A_{2})|\leqslant C|A_{1}-A_{2}|\left(1+|A_{1}|^{p-1}+|A_{2}|^{p-1}\right);\] 3. (coercivity) there exists \(C_{W}>0\) such that, for all \(x\in\Omega\) and \(A\in\mathbb{R}^{d\times N}\), \[W(x,A)\geqslant\frac{1}{C_{W}}|A|^{p};\] 4. there exists a continuous function \(\omega_{W}\colon[0,+\infty)\to[0,+\infty)\) with \(\omega_{W}(s)\to 0\) as \(s\to 0^{+}\) such that, for every \(x_{0},x_{1}\in\Omega\) and \(A\in\mathbb{R}^{d\times N}\), \[|W(x_{1},A)-W(x_{0},A)|\leqslant\omega_{W}(|x_{1}-x_{0}|)(1+|A|^{p});\] 5. (Lipschitz condition) there exists \(C>0\) such that, for all \(x\in\Omega\), \(\lambda_{1},\lambda_{2}\in\mathbb{R}^{d}\) and \(\nu\in\mathbb{S}^{N-1}\), \[|\psi(x,\lambda_{1},\nu)-\psi(x,\lambda_{2},\nu)|\leqslant C|\lambda_{1}-\lambda_{2}|;\] 6. (growth) there exists \(C_{\psi}>0\) such that, for all \(x\in\Omega\), \(\lambda\in\mathbb{R}^{d}\) and \(\nu\in\mathbb{S}^{N-1}\), \[\frac{1}{C_{\psi}}|\lambda|\leqslant\psi(x,\lambda,\nu)\leqslant C_{\psi}|\lambda|;\] 7. (positive \(1\)-homogeneity) for all \(x\in\Omega\), \(\lambda\in\mathbb{R}^{d}\), \(\nu\in\mathbb{S}^{N-1}\) and \(t>0\), \[\psi(x,t\lambda,\nu)=t\psi(x,\lambda,\nu);\] 8. (sub-additivity) for all \(x\in\Omega\), \(\lambda_{1},\lambda_{2}\in\mathbb{R}^{d}\) and \(\nu\in\mathbb{S}^{N-1}\), \[\psi(x,\lambda_{1}+\lambda_{2},\nu)\leqslant\psi(x,\lambda_{1},\nu)+\psi(x,\lambda_{2},\nu);\] 9. there exists a continuous function \(\omega_{\psi}\colon[0,+\infty)\to[0,+\infty)\) with \(\omega_{\psi}(s)\to 0\) as \(s\to 0^{+}\) such that, for every \(x_{0},x_{1}\in\Omega\), \(\lambda\in\mathbb{R}^{d}\) and \(\nu\in\mathbb{S}^{N-1}\), \[|\psi(x_{1},\lambda,\nu)-\psi(x_{0},\lambda,\nu)|\leqslant\omega_{\psi}(|x_{1}-x_{0}|)|\lambda|.\] Under this set of hypotheses we prove the following theorem. 

**Theorem 4.1**.: _Let \(p\geqslant 1\) and let \(\Omega\subset\mathbb{R}^{N}\) be a bounded, open set. Consider \(E\) given by (4.1) where \(W\colon\Omega\times\mathbb{R}^{d\times N}\to[0,+\infty)\) and \(\psi\colon\Omega\times\mathbb{R}^{d}\times\mathbb{S}^{N-1}\to[0,+\infty)\) satisfy (1)-(4) and (5)-(9), respectively. Let \((g,G)\in SD(\Omega)\), with \(G\in L^{p}(\Omega;\mathbb{R}^{d\times N})\), and \(g\in L^{\infty}(\Omega;\mathbb{R}^{d})\) if \(p>1\), and assume that \(I_{p}(g,G)\) is defined by (4.2)._ _Then, there exist \(f:\Omega\times\mathbb{R}^{d\times N}\times\mathbb{R}^{d\times N}\to[0,+\infty)\), \(\Phi:\Omega\times\mathbb{R}^{d}\times\mathbb{S}^{N-1}\to[0,+\infty)\) such that_ \[I_{p}(g,G)=\int_{\Omega}f(x,\nabla g(x),G(x))\,\mathrm{d}x+\int_{\Omega\cap S_{g}}\Phi(x,[g](x),\nu_{g}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x), \tag{4.8}\] _where the relaxed energy densities are given by_ \[f(x_{0},\xi,B):=\limsup_{\varepsilon\to 0^{+}}\frac{m(\xi(\cdot-x_{0}),B;Q(x_{0},\varepsilon))}{\varepsilon^{N}}, \tag{4.9}\] \[\Phi(x_{0},\lambda,\theta,\nu):=\limsup_{\varepsilon\to 0^{+}}\frac{m(u_{\lambda-\theta,\nu}(\cdot-x_{0}),0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}, \tag{4.10}\] _for all \(x_{0}\in\Omega\), \(\theta,\lambda\in\mathbb{R}^{d}\), \(\xi,B\in\mathbb{R}^{d\times N}\) and \(\nu\in\mathbb{S}^{N-1}\). 
_In the above expressions \(0\) denotes the zero \(\mathbb{R}^{d\times N}\) matrix, \(u_{\lambda-\theta,\nu}(y):=\begin{cases}\lambda-\theta,&\text{if }y\cdot\nu>0\\ 0,&\text{if }y\cdot\nu\leqslant 0,\end{cases}\) the functional \(m\colon SD(\Omega)\times\mathcal{O}_{\infty}(\Omega)\to[0,+\infty)\) is given by (3.2) with \(L=1\) and \(\mathcal{F}=I_{p}\), and \(\mathcal{C}_{HSD_{1}^{p}}(g,G;O)\) is given by (3.1), taking into account that \(HSD_{1}^{p}(\Omega)\) in Definition 2.2 coincides with the set of fields \((g,G)\in SD(\Omega)\) such that \(G\in L^{p}(\Omega;\mathbb{R}^{d\times N})\)._

_Furthermore, if \(p>1\), then \(\Phi(x_{0},\lambda,\theta,\nu)=h_{p}(x_{0},\lambda-\theta,\nu)\), for every \(x_{0}\in\Omega\), \(\theta,\lambda\in\mathbb{R}^{d}\) and \(\nu\in\mathbb{S}^{N-1}\), where \(h_{p}\) is the function given in (4.7)._

Proof.: We divide the proof into two parts, according to the cases \(p>1\) and \(p=1\). In both cases we apply the global method for relaxation obtained in Theorem 3.2. Given \(O\in\mathcal{O}(\Omega)\) and \((g,G)\in SD(\Omega)\), with \(G\in L^{p}(\Omega;\mathbb{R}^{d\times N})\), we introduce the localized version of \(I_{p}(g,G)\), namely \[I_{p}(g,G;O)\coloneqq\inf\Big\{\liminf_{n\to\infty}E(u_{n}):u_{n}\in SBV(O;\mathbb{R}^{d}),u_{n}\xrightarrow[SD]{*}(g,G)\text{ in }O\Big\}.\] Our goal is to verify that \(I_{p}(g,G;O)\) satisfies assumptions (H1)-(H4) of Theorem 3.2 in the case \(L=1\), assuming also that \(g\in L^{\infty}(\Omega;\mathbb{R}^{d})\) in the case \(p>1\). Indeed, if \(p>1\) this hypothesis on \(g\) is crucial to conclude that (H1) holds. We follow the arguments presented in [10] and introduce, for every \(g\in L^{\infty}(\Omega;\mathbb{R}^{d})\cap SBV(\Omega;\mathbb{R}^{d})\) and every \(G\in L^{p}(\Omega;\mathbb{R}^{d\times N})\), the functional \[I_{p}^{\infty}(g,G;O)\coloneqq\inf\Big\{\liminf_{n\to\infty}E(u_{n}):u_{n}\in SBV(O;\mathbb{R}^{d}),u_{n}\xrightarrow[SD]{*}(g,G)\text{ in }O,\sup_{n}\|u_{n}\|_{L^{\infty}(O;\mathbb{R}^{d})}<+\infty\Big\}.\] Using a truncation argument as in [10, Lemma 2.20], which still holds in the non-homogeneous case, we can show that \[I_{p}(g,G;O)=I_{p}^{\infty}(g,G;O).\] Next, using a slicing technique following [10, Lemma 2.21], which can be proved by arguments entirely similar to those adopted in the homogeneous setting, we obtain a nested subadditivity result stating that, if \(U,V,S\) are open subsets of \(\Omega\) such that \(U\Subset V\Subset S\), then \[I_{p}(g,G;S)\leqslant I_{p}(g,G;V)+I_{p}(g,G;S\setminus\overline{U}).\] Once again, the hypothesis \(g\in L^{\infty}(\Omega;\mathbb{R}^{d})\) is important at this stage, when \(p>1\). From here, the reasoning in [10, Proposition 2.22], which is still valid with the same proof in the non-homogeneous case, yields (H1). To show that (H2) holds, we argue as in [10, Proposition 5.1]. Indeed, we can prove lower semicontinuity of \(I_{p}(\cdot,\cdot;O)\) along sequences \((g_{n},G_{n})\) converging in \(L^{1}(O;\mathbb{R}^{d})_{\text{strong}}\times L^{p}(O;\mathbb{R}^{d\times N})_{\text{weak}}\) (the second convergence being weak star in \(\mathcal{M}(O;\mathbb{R}^{d\times N})\), if \(p=1\)). (H3) is an immediate consequence of the previous lower semicontinuity property in \(O\), as observed in [6, eq. (2.2)], whereas (H4) follows by standard arguments (as in [10, Lemma 2.18]) from (1), (2), (3) and (6) above, and by the lower semicontinuity of integral functionals of power type and of the total variation along weakly converging sequences.
We point out that, in order to obtain the lower bound in (H4), we can replace, without loss of generality, \(W\) by \(W+\frac{1}{C_{W}}\). Hence, Theorem 3.2 can be applied to conclude that, for every \((g,G)\in SD(\Omega)\) with \(G\in L^{p}(\Omega;\mathbb{R}^{d\times N})\), and \(g\in L^{\infty}(\Omega;\mathbb{R}^{d})\) if \(p>1\), we have \[I_{p}(g,G)=\int_{\Omega}f(x,g(x),\nabla g(x),G(x))\,\mathrm{d}x+\int_{\Omega\cap S_{g}}\Phi(x,g^{+}(x),g^{-}(x),\nu_{g}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x),\] where the relaxed densities \(f\) and \(\Phi\) are given by \[f(x_{0},a,\xi,B)=\limsup_{\varepsilon\to 0^{+}}\frac{m(a+\xi(\cdot-x_{0}),B;Q(x_{0},\varepsilon))}{\varepsilon^{N}},\] and \[\Phi(x_{0},\lambda,\theta,\nu)=\limsup_{\varepsilon\to 0^{+}}\frac{m(v_{\lambda,\theta,\nu}(\cdot-x_{0}),0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}},\] for all \(x_{0}\in\Omega\), \(a,\theta,\lambda\in\mathbb{R}^{d}\), \(\xi,B\in\mathbb{R}^{d\times N}\), \(\nu\in\mathbb{S}^{N-1}\).

It is easy to see that the functional \(I_{p}\) is invariant under translations in the first variable, that is, \[I_{p}(g+a,G;O)=I_{p}(g,G;O),\ \forall(g,G)\in SD(\Omega),O\in\mathcal{O}(\Omega),a\in\mathbb{R}^{d}.\] Indeed, it suffices to notice that if \(\{u_{n}\}\) is admissible for \(I_{p}(g,G;O)\), then the sequence \(\{u_{n}+a\}\) is admissible for \(I_{p}(g+a,G;O)\). Hence, taking into account Remark 3.3 and the abuse of notation stated therein, we obtain (4.8) with \(f\) and \(\Phi\) given by (4.9) and (4.10), respectively.

On the other hand, Theorem 3.6 and the fact that \(\mathcal{F}=I_{p}\) yield, for every \(x_{0}\in\Omega\), \(\lambda,\theta\in\mathbb{R}^{d}\) and \(\nu\in\mathbb{S}^{N-1}\), \[\begin{split}\Phi(x_{0},\lambda,\theta,\nu)&=\Phi(x_{0},\lambda-\theta,\nu)=\limsup_{\varepsilon\to 0^{+}}\frac{m(u_{\lambda-\theta,\nu}(\cdot-x_{0}),0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}\\ &=\limsup_{\varepsilon\to 0^{+}}\frac{I_{p}(u_{\lambda-\theta,\nu}(\cdot-x_{0}),0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}\\ &=\limsup_{\varepsilon\to 0^{+}}\frac{1}{\varepsilon^{N-1}}\inf\Bigg\{\liminf_{n\to+\infty}\bigg[\int_{Q_{\nu}(x_{0},\varepsilon)}W(x,\nabla u_{n}(x))\,\mathrm{d}x\\ &\qquad+\int_{Q_{\nu}(x_{0},\varepsilon)\cap S_{u_{n}}}\psi(x,[u_{n}](x),\nu_{u_{n}}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x)\bigg]:\\ &\qquad u_{n}\in SBV(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d}),\,u_{n}\stackrel{{*}}{{\rightharpoonup}}(u_{\lambda-\theta,\nu}(\cdot-x_{0}),0)\text{ in }Q_{\nu}(x_{0},\varepsilon)\Bigg\}\\ &\geqslant\limsup_{\varepsilon\to 0^{+}}\frac{1}{\varepsilon^{N-1}}\inf\Bigg\{\liminf_{n\to+\infty}\int_{Q_{\nu}(x_{0},\varepsilon)\cap S_{u_{n}}}\psi(x,[u_{n}](x),\nu_{u_{n}}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x):\\ &\qquad u_{n}\in SBV(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d}),\,u_{n}\to u_{\lambda-\theta,\nu}(\cdot-x_{0})\text{ in }L^{1}(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d}),\\ &\qquad\nabla u_{n}\rightharpoonup 0\text{ in }L^{p}(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d\times N})\Bigg\}\\ &=\limsup_{\varepsilon\to 0^{+}}\frac{1}{\varepsilon^{N-1}}\inf\Bigg\{\liminf_{n\to+\infty}\int_{Q_{\nu}(x_{0},\varepsilon)\cap S_{u_{n}}}\psi(x_{0},[u_{n}](x),\nu_{u_{n}}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x):\\ &\qquad u_{n}\in SBV(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d}),\,u_{n}\to u_{\lambda-\theta,\nu}(\cdot-x_{0})\text{ in }L^{1}(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d}),\\ &\qquad\nabla u_{n}\rightharpoonup 0\text{ in }L^{p}(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d\times N})\Bigg\},\end{split}\] where the nonnegativity of \(W\) was used in the inequality, and the uniform continuity of \(\psi\) in the first variable (hypothesis (9)) was used in the final equality. We now argue as in [10, Propositions 4.2 and 4.4], in order to replace each weakly converging sequence \(\{u_{n}\}\) by one whose gradients converge strongly to \(0\) in \(L^{p}\). In this way, we are led to the conclusion that \[\Phi(x_{0},\lambda,\theta,\nu)\geqslant h_{p}(x_{0},\lambda-\theta,\nu).\] We have thus proved that, for \(p>1\), \[\Phi(x_{0},\lambda,\theta,\nu)=h_{p}(x_{0},\lambda-\theta,\nu),\] for every \(x_{0}\in\Omega\), \(\lambda,\theta\in\mathbb{R}^{d}\) and \(\nu\in\mathbb{S}^{N-1}\), where \(h_{p}\) is the function given in (4.7).

If \(p=1\), the functional \(I_{p}(g,G;\cdot)\) is the restriction to \(\mathcal{O}(\Omega)\) of a Radon measure even if \(g\not\in L^{\infty}(\Omega;\mathbb{R}^{d})\), as proven in [10, Proposition 2.22]. On the other hand, all the other arguments applied in the case \(p>1\) remain valid, so we can invoke Theorem 3.2 to obtain the integral representation in (4.8).

**Theorem 4.2**.: _Let \(p\geqslant 1\). Under the conditions of the previous theorem, if, in addition to the hypotheses stated therein, the density \(W\) also satisfies_

* _(H) there exists a continuous function_ \(\omega_{W}:[0,+\infty)\to[0,+\infty)\) _such that_ \(\lim_{t\to 0^{+}}\omega_{W}(t)=0\) _and_ \[|W(x_{1},A)-W(x_{0},A)|\leqslant\omega_{W}(|x_{1}-x_{0}|)(1+|A|^{p}),\;\forall x_{0},x_{1}\in\Omega,A\in\mathbb{R}^{d\times N},\]

_then (4.8) holds for every \((g,G)\in SD(\Omega)\) such that \(G\in L^{p}(\Omega;\mathbb{R}^{d\times N})\), with \(f(x_{0},\xi,B)=H_{p}(x_{0},\xi,B)\), for every \(x_{0}\in\Omega\), \(\xi,B\in\mathbb{R}^{d\times N}\), where \(H_{p}\) is given by (4.6)._

Proof.: Assuming (H), [17, Theorem 5.1] ensures that \(I_{p}(g,G;\cdot)\) is the restriction to \(\mathcal{O}(\Omega)\) of a Radon measure when \(g\in SBV(\Omega;\mathbb{R}^{d})\). Consequently, putting together Theorem 3.6, (4.9) and [17, Theorem 5.1], it follows that, for every \(x_{0}\in\Omega\) and \(\xi,B\in\mathbb{R}^{d\times N}\), we have \[f(x_{0},\xi,B)=H_{p}(x_{0},\xi,B).\]

**Remark 4.3**.: We observe that, under the additional hypothesis (H), and given that the relaxed functional \(I_{p}\) admits an integral representation, by the uniqueness of the relaxed densities, in the above theorem we recover the integral representation obtained in [17, Theorem 5.1], even when \(g\not\in L^{\infty}\).
On the other hand, Theorem 3.2 provides an integral representation also in the more general Carathéodory setting, for any \(p\geqslant 1\). When \(p>1\), the hypothesis \(g\in L^{\infty}\) is needed to ensure the validity of (H1), and an explicit formula for \(\Phi\) in terms of \(h_{p}\) as in (4.7) holds (see [17, (5.7)]). In the case \(p=1\), the assumption \(g\in L^{\infty}\) is not required to conclude (H1), and Theorem 3.2 yields an integral representation for the relaxed functional \(I_{1}\); however, in this case, no explicit expressions for the relaxed densities in terms of the original energy densities \(W\) and \(\psi\) are known.

We point out that each step of the recursive relaxation procedure presented in [4], whose densities, at each stage, satisfy hypotheses (1)-(9), also fits into the scope of Theorem 3.2.

Our method also applies to homogenization problems like the one considered in [1]; indeed, the following result holds.

**Theorem 4.4**.: _Let \(p>1\) and let \(\Omega\subset\mathbb{R}^{N}\) be a bounded, open set. For \(\varepsilon>0\), consider \(E_{\varepsilon}\) given by_ \[E_{\varepsilon}(u)\coloneqq\int_{\Omega}W(x/\varepsilon,\nabla u(x))\,\mathrm{d}x+\int_{\Omega\cap S_{u}}\psi(x/\varepsilon,[u](x),\nu_{u}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x),\] _where \(W\colon\Omega\times\mathbb{R}^{d\times N}\to[0,+\infty)\) and \(\psi\colon\Omega\times\mathbb{R}^{d}\times\mathbb{S}^{N-1}\to[0,+\infty)\) are continuous functions, \(Q\)-periodic in the first variable, satisfying (1)-(3) and (H), and (5)-(9), respectively. Let \((g,G)\in SD(\Omega)\), with \(G\in L^{p}(\Omega;\mathbb{R}^{d\times N})\) and \(g\in L^{\infty}(\Omega;\mathbb{R}^{d})\), and let \(I_{p,hom}\) be the functional defined by_ \[I_{p,hom}(g,G)\coloneqq\inf\Big\{\liminf_{\varepsilon\to 0}E_{\varepsilon}(u_{\varepsilon}):u_{\varepsilon}\in SBV(\Omega;\mathbb{R}^{d}),u_{\varepsilon}\stackrel{{*}}{{\rightharpoonup}}(g,G)\Big\}. \tag{4.11}\] _Then, there exist \(f_{hom}:\mathbb{R}^{d\times N}\times\mathbb{R}^{d\times N}\to[0,+\infty)\) and \(\Phi_{hom}:\mathbb{R}^{d}\times\mathbb{S}^{N-1}\to[0,+\infty)\) such that_ \[I_{p,hom}(g,G)=\int_{\Omega}f_{hom}(\nabla g(x),G(x))\,\mathrm{d}x+\int_{\Omega\cap S_{g}}\Phi_{hom}([g](x),\nu_{g}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x),\] _where the limiting energy densities are given by_ \[f_{hom}(x_{0},\xi,B):=\limsup_{\varepsilon\to 0^{+}}\frac{m(\xi(\cdot-x_{0}),B;Q(x_{0},\varepsilon))}{\varepsilon^{N}}, \tag{4.12}\] \[\Phi_{hom}(x_{0},\lambda,\theta,\nu):=\limsup_{\varepsilon\to 0^{+}}\frac{m(v_{\lambda,\theta,\nu}(\cdot-x_{0}),0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}, \tag{4.13}\] _for all \(x_{0}\in\Omega\), \(\lambda,\theta\in\mathbb{R}^{d}\), \(\xi,B\in\mathbb{R}^{d\times N}\) and \(\nu\in\mathbb{S}^{N-1}\)._
_In the above expressions \(0\) denotes the zero \(\mathbb{R}^{d\times N}\) matrix, \(v_{\lambda,\theta,\nu}(y):=\begin{cases}\lambda,&\text{if }y\cdot\nu>0\\ \theta,&\text{if }y\cdot\nu\leqslant 0,\end{cases}\) the functional \(m\colon SD(\Omega)\times\mathcal{O}_{\infty}(\Omega)\to[0,+\infty)\) is given by (3.2) with \(L=1\) and \(\mathcal{F}=I_{p,hom}\), and \(\mathcal{C}_{HSD_{1}^{p}}(g,G;O)\) is given by (3.1), taking into account that \(HSD_{1}^{p}(\Omega)\) in Definition 2.2 coincides with the set of fields \((g,G)\in SD(\Omega)\) such that \(G\in L^{p}(\Omega;\mathbb{R}^{d\times N})\)._

As in the case of Theorem 4.1, the proof of this theorem amounts to the verification that the functional \(I_{p,hom}\) satisfies all of the assumptions of Theorem 3.2; we omit the details. However, we point out that \(f_{hom}\) and \(\Phi_{hom}\) are, in fact, independent of \(x_{0}\); this is due to the fact that \(I_{p,hom}\) verifies the condition of [6, Lemma 4.3.3], which in turn can be proven in full analogy with [8, Lemma 3.7]. Notice that, a posteriori, relying on the results in [1], the restriction \(g\in L^{\infty}\) can be avoided. Furthermore, the densities given by (4.12) and (4.13) coincide necessarily with the bulk and surface energy densities \(H_{hom}\) and \(h_{hom}\) obtained in [1, eq. (1.11) and (1.12), respectively]. In particular, (4.13) admits the equivalent representation (see [1, Proposition 3.5]) \[\Phi_{hom}(\lambda,\theta,\nu)\coloneqq\limsup_{T\to+\infty}\frac{1}{T^{N-1}}\inf\bigg\{\int_{(TQ_{\nu})\cap S_{u}}\psi(x,[u](x),\nu_{u}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x):u\in SBV(TQ_{\nu};\mathbb{R}^{d}),\\ u|_{\partial(TQ_{\nu})}=v_{\lambda,\theta,\nu},\ \nabla u=0\ \mathcal{L}^{N}\text{-a.e. in }TQ_{\nu}\bigg\}.\]

Finally, our method also covers the relaxation for second-order structured deformations considered in [3]. In this setting, the initial energy is defined on fields \(v\in SBV(\Omega;\mathbb{R}^{d\times N})\) through a bulk density \(W\colon\Omega\times\mathbb{R}^{d\times N}\times\mathbb{R}^{d\times N^{2}}\to[0,+\infty)\) and a surface density \(\Psi\colon\Omega\times\mathbb{R}^{d\times N}\times\mathbb{S}^{N-1}\to[0,+\infty)\), where \(W\) is assumed to satisfy hypotheses (I)-(IV) below; (I) is a continuity condition in the second variable which, together with (IV), guarantees that the recession function \(W^{\infty}\) does not depend on that variable, while (II)-(IV) read as follows:
Considering the localized version of \(I_{2}\), defined in \(SD(\Omega;\mathbb{R}^{d\times N})\times\mathcal{O}(\Omega)\) by \[I_{2}(G,\Gamma;O):=\inf\bigg{\{}\liminf\limits_{n\to+\infty} \left[\int_{O}W(x,v_{n}(x),\nabla v_{n}(x))\,\mathrm{d}x+\int_{S_{v_{n}}\cap O }\Psi(x,[v_{n}](x),\nu_{v_{n}}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x)\right]:\] \[v_{n}\in SBV(O;\mathbb{R}^{d\times N}),v_{n}\to G\text{ in }L^{1}(O; \mathbb{R}^{d\times N}),\nabla v_{n}\overset{*}{\rightharpoonup}\Gamma\text{ in }\mathcal{M}(O;\mathbb{R}^{d\times N^{2}})\bigg{\}},\] it was shown in [3, Theorem 4.5] that \(I_{2}\) is the restriction to the open subsets of \(\Omega\) of a Radon measure. Standard diagonalization arguments prove that it is sequentially lower semicontinuous in \(L^{1}(O;\mathbb{R}^{d\times N})\times\mathcal{M}(O;\mathbb{R}^{d\times N^{2}})\), from which locality follows. Taking into account the following alternative characterization of \(I_{2}\), proved in [3, Proposition 4.6], \[I_{2}(G,\Gamma):=\inf\bigg{\{}\liminf\limits_{n\to+\infty} \left[\int_{\Omega}W(x,G(x),\nabla v_{n}(x))\,\mathrm{d}x+\int_{S_{v_{n}}} \Psi(x,[v_{n}](x),\nu_{v_{n}}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x)\right]:\] \[v_{n}\in SBV(\Omega;\mathbb{R}^{d\times N}),v_{n}\to G\text{ in }L^{1}( \Omega;\mathbb{R}^{d\times N}),\nabla v_{n}\overset{*}{\rightharpoonup}\Gamma \text{ in }\mathcal{M}(\Omega;\mathbb{R}^{d\times N^{2}})\bigg{\}}, \tag{4.14}\] it is also easy to see that the growth hypothesis (H4) from Theorem 3.2 holds. Hence, denoting by \(m\) the functional defined, in the matrix setting, by (3.2) with \(L=1\), this result applies to yield the following integral representation: \[I_{2}(G,\Gamma)=\int_{\Omega}f(x,G(x),\nabla G(x),\Gamma(x))\, \mathrm{d}x+\int_{\Omega\cap S_{G}}\Phi(x,[G](x),\nu_{G}(x))\,\mathrm{d} \mathcal{H}^{N-1}(x), \tag{4.15}\] where \[f(x_{0},A,B,D):=\limsup\limits_{\varepsilon\to 0^{+}}\frac{m(A+B(\cdot-x_{0}),D; Q(x_{0},\varepsilon))}{\varepsilon^{N}},\] and \[\Phi(x_{0},\lambda-\theta,\nu):=\limsup\limits_{\varepsilon\to 0^{+}}\frac{m(u_{ \lambda-\theta,\nu},0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}},\] for all \(x_{0}\in\Omega\), \(A,\lambda,\theta\in\mathbb{R}^{d\times N}\), \(B,D\in\mathbb{R}^{d\times N^{2}}\), \(\nu\in\mathbb{S}^{N-1}\), where \(0\) denotes the zero \(\mathbb{R}^{d\times N^{2}}\) matrix and \(u_{\lambda-\theta,\nu}(y):=\begin{cases}\lambda-\theta,&\text{if }y\cdot\nu>0\\ 0,&\text{if }y\cdot\nu\leqslant 0.\end{cases}\) We will now show that the relaxed densities \(f\) and \(\Phi\) coincide with those obtained in [3, Theorems 3.2 and 5.7], to this end we use the alternative characterization of \(I_{2}\) given in (4.14). 
Theorems 3.2 and 5.7 in [3] provide the following integral representation for \(I_{2}\): \[I_{2}(G,\Gamma)=\int_{\Omega}W_{2}(x,G(x),\nabla G(x),\Gamma(x))\,\mathrm{d}x+\int_{\Omega\cap S_{G}}\gamma_{2}(x,[G](x),\nu_{G}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x),\] where \[W_{2}(x_{0},A,B,D)=\inf_{u\in SBV(Q;\mathbb{R}^{d\times N})}\left\{\int_{Q}W(x_{0},A,\nabla u(y))\,\mathrm{d}y+\int_{S_{u}\cap Q}\Psi(x_{0},[u](y),\nu_{u}(y))\,\mathrm{d}\mathcal{H}^{N-1}(y):\right.\\ \left.u|_{\partial Q}(y)=B\cdot y,\,\int_{Q}\nabla u(y)\,\mathrm{d}y=D\right\}, \tag{4.16}\] \[\gamma_{2}(x_{0},A,\lambda-\theta,\nu)=\inf_{u\in SBV(Q_{\nu};\mathbb{R}^{d\times N})}\left\{\int_{Q_{\nu}}W^{\infty}(x_{0},A,\nabla u(y))\,\mathrm{d}y+\int_{S_{u}\cap Q_{\nu}}\Psi(x_{0},[u](y),\nu_{u}(y))\,\mathrm{d}\mathcal{H}^{N-1}(y):\right.\\ \left.u|_{\partial Q_{\nu}}=u_{\lambda-\theta,\nu},\,\int_{Q_{\nu}}\nabla u(y)\,\mathrm{d}y=0\right\}. \tag{4.17}\] In order to show that the densities in (4.15) are given by (4.16) and (4.17), that is, \[f(x_{0},A,B,D)=W_{2}(x_{0},A,B,D)\quad\text{ and }\quad\Phi(x_{0},\lambda-\theta,\nu)=\gamma_{2}(x_{0},A,\lambda-\theta,\nu),\] for all \(x_{0}\in\Omega\), \(A,\lambda,\theta\in\mathbb{R}^{d\times N}\), \(B,D\in\mathbb{R}^{d\times N^{2}}\), \(\nu\in\mathbb{S}^{N-1}\), we begin by stressing the fact that the dependence of \(\gamma_{2}\) on \(A\) is fictitious. Indeed, assumptions (I) and (IV) guarantee that \(W^{\infty}\) does not depend on \(A\), i.e. \[W^{\infty}(x,A,M)=W^{\infty}(x,0,M),\text{ for a.e. }x\in\Omega,\forall A\in\mathbb{R}^{d\times N},M\in\mathbb{R}^{d\times N^{2}},\] where \(0\) represents the zero matrix in \(\mathbb{R}^{d\times N}\). As \(x_{0}\) and \(A\) are fixed, we may invoke Propositions 3.1 and 4.1 in [10] to conclude that \[W_{2}(x_{0},A,B,D)=\inf\Bigg\{\liminf_{n\to\infty}\left[\int_{Q}W(x_{0},A,\nabla u_{n}(y))\,\mathrm{d}y+\int_{S_{u_{n}}\cap Q}\Psi(x_{0},[u_{n}](y),\nu_{u_{n}}(y))\,\mathrm{d}\mathcal{H}^{N-1}(y)\right]:\\ u_{n}\in SBV(Q;\mathbb{R}^{d\times N}),u_{n}\to B\cdot y\text{ in }L^{1}(Q;\mathbb{R}^{d\times N}),\nabla u_{n}\overset{*}{\rightharpoonup}D\text{ in }\mathcal{M}(Q;\mathbb{R}^{d\times N^{2}})\Bigg\},\] and \[\gamma_{2}(x_{0},\lambda-\theta,\nu)=\inf\Bigg\{\liminf_{n\to\infty}\left[\int_{Q_{\nu}}W^{\infty}(x_{0},0,\nabla u_{n}(y))\,\mathrm{d}y+\int_{S_{u_{n}}\cap Q_{\nu}}\Psi(x_{0},[u_{n}](y),\nu_{u_{n}}(y))\,\mathrm{d}\mathcal{H}^{N-1}(y)\right]:\\ u_{n}\in SBV(Q_{\nu};\mathbb{R}^{d\times N}),u_{n}\to u_{\lambda-\theta,\nu}\text{ in }L^{1}(Q_{\nu};\mathbb{R}^{d\times N}),\nabla u_{n}\overset{*}{\rightharpoonup}0\text{ in }\mathcal{M}(Q_{\nu};\mathbb{R}^{d\times N^{2}})\Bigg\}, \tag{4.18}\] respectively.
Reasoning as in the proof of Theorem 4.1, by Theorem 3.6 and the fact that \(\mathcal{F}=I_{2}\), for every \(x_{0}\in\Omega\), \(\lambda,\theta\in\mathbb{R}^{d\times N}\) and \(\nu\in\mathbb{S}^{N-1}\), we have \[\begin{split}\Phi(x_{0},\lambda,\theta,\nu)&=\Phi(x_{0},\lambda-\theta,\nu)=\limsup_{\varepsilon\to 0^{+}}\frac{m(u_{\lambda-\theta,\nu}(\cdot-x_{0}),0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}\\ &=\limsup_{\varepsilon\to 0^{+}}\frac{I_{2}(u_{\lambda-\theta,\nu}(\cdot-x_{0}),0;Q_{\nu}(x_{0},\varepsilon))}{\varepsilon^{N-1}}\\ &=\limsup_{\varepsilon\to 0^{+}}\frac{1}{\varepsilon^{N-1}}\inf\Bigg\{\liminf_{n\to+\infty}\Bigg[\int_{Q_{\nu}(x_{0},\varepsilon)}W(x,u_{\lambda-\theta,\nu}(x-x_{0}),\nabla v_{n}(x))\,\mathrm{d}x\\ &\qquad+\int_{Q_{\nu}(x_{0},\varepsilon)\cap S_{v_{n}}}\Psi(x,[v_{n}](x),\nu_{v_{n}}(x))\,\mathrm{d}\mathcal{H}^{N-1}(x)\Bigg]:\\ &\qquad v_{n}\in SBV(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d\times N}),\,\nabla v_{n}\stackrel{{*}}{{\rightharpoonup}}0\text{ in }\mathcal{M}(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d\times N^{2}}),\\ &\qquad v_{n}\to u_{\lambda-\theta,\nu}(\cdot-x_{0})\text{ in }L^{1}(Q_{\nu}(x_{0},\varepsilon);\mathbb{R}^{d\times N})\Bigg\}\\ &=\limsup_{\varepsilon\to 0^{+}}\inf\Bigg\{\liminf_{n\to+\infty}\Bigg[\int_{Q_{\nu}}\varepsilon W\left(x_{0}+\varepsilon y,u_{\lambda-\theta,\nu}(y),\frac{1}{\varepsilon}\nabla u_{n}(y)\right)\,\mathrm{d}y\\ &\qquad+\int_{Q_{\nu}\cap S_{u_{n}}}\Psi(x_{0}+\varepsilon y,[u_{n}](y),\nu_{u_{n}}(y))\,\mathrm{d}\mathcal{H}^{N-1}(y)\Bigg]:\\ &\qquad u_{n}\in SBV(Q_{\nu};\mathbb{R}^{d\times N}),\,u_{n}\to u_{\lambda-\theta,\nu}\text{ in }L^{1}(Q_{\nu};\mathbb{R}^{d\times N}),\,\nabla u_{n}\stackrel{{*}}{{\rightharpoonup}}0\text{ in }\mathcal{M}(Q_{\nu};\mathbb{R}^{d\times N^{2}})\Bigg\},\end{split}\] where in the last equality we performed a change of variables. Using hypotheses (III) and (9) first, and then (IV), we obtain \[\begin{split}\Phi(x_{0},\lambda-\theta,\nu)&=\limsup_{\varepsilon\to 0^{+}}\inf\Bigg\{\liminf_{n\to+\infty}\Bigg[\int_{Q_{\nu}}\varepsilon W\left(x_{0},u_{\lambda-\theta,\nu}(y),\frac{1}{\varepsilon}\nabla u_{n}(y)\right)\,\mathrm{d}y\\ &\qquad+\int_{Q_{\nu}\cap S_{u_{n}}}\Psi(x_{0},[u_{n}](y),\nu_{u_{n}}(y))\,\mathrm{d}\mathcal{H}^{N-1}(y)\Bigg]:\\ &\qquad u_{n}\in SBV(Q_{\nu};\mathbb{R}^{d\times N}),\,u_{n}\to u_{\lambda-\theta,\nu}\text{ in }L^{1}(Q_{\nu};\mathbb{R}^{d\times N}),\,\nabla u_{n}\stackrel{{*}}{{\rightharpoonup}}0\text{ in }\mathcal{M}(Q_{\nu};\mathbb{R}^{d\times N^{2}})\Bigg\}\\ &=\inf\Bigg\{\liminf_{n\to+\infty}\Bigg[\int_{Q_{\nu}}W^{\infty}\left(x_{0},u_{\lambda-\theta,\nu}(y),\nabla u_{n}(y)\right)\,\mathrm{d}y+\int_{Q_{\nu}\cap S_{u_{n}}}\Psi(x_{0},[u_{n}](y),\nu_{u_{n}}(y))\,\mathrm{d}\mathcal{H}^{N-1}(y)\Bigg]:\\ &\qquad u_{n}\in SBV(Q_{\nu};\mathbb{R}^{d\times N}),\,u_{n}\to u_{\lambda-\theta,\nu}\text{ in }L^{1}(Q_{\nu};\mathbb{R}^{d\times N}),\,\nabla u_{n}\stackrel{{*}}{{\rightharpoonup}}0\text{ in }\mathcal{M}(Q_{\nu};\mathbb{R}^{d\times N^{2}})\Bigg\}\\ &=\gamma_{2}(x_{0},\lambda-\theta,\nu),\end{split}\] where the last equality follows from (4.18), since, as observed above, \(W^{\infty}\) does not depend on its second variable. This shows that the densities in (4.15) coincide with those in (4.16) and (4.17).

**Acknowledgements.** The research of JM was supported by GNAMPA, Programma Professori Visitatori, year 2022, and by FCT/Portugal through CAMGSD, IST-ID, projects UIDB/04459/2020 and UIDP/04459/2020. He also gratefully acknowledges the support and hospitality of Sapienza-University of Rome through the Programma Professori Visitatori, year 2023.
EZ is a member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica "F. Severi" (INdAM). She also acknowledges partial funding from the GNAMPA Project 2023 _Prospettive nelle scienze dei materiali: modelli variazionali, analisi asintotica e omogeneizzazione_.
2309.17139
Review of Neutrino Experiments Searching for Astrophysical Neutrinos
Over the last two decades, we have intensified our search for a ghost particle, with the hope that it would provide us with information on the darkest places of our Universe. This quest has been conducted from the deep caves of the Earth, up to the upper layers of our atmosphere, and from one Pole to another. In this review, I will summarize the odyssey of the search for astrophysical neutrinos. I will focus on the recent discoveries and technical developments that led us to the point where we stand now. I will highlight the different types of neutrino detectors, and their performances enabling the discovery of high-energy astrophysical neutrinos, and the understanding of their sources behind. Finally, I will present some possible paths to the remaining uncharted territory of the ultra-high-energy neutrino astronomy.
Valentin Decoene
2023-09-29T11:09:09Z
http://arxiv.org/abs/2309.17139v1
# Review of Neutrino Experiments Searching for Astrophysical Neutrinos

###### Abstract:

Over the last two decades, we have intensified our search for a ghost particle, with the hope that it would provide us with information on the darkest places of our Universe. This quest has been conducted from the deep caves of the Earth, up to the upper layers of our atmosphere, and from one Pole to another. In this review, I will summarize the odyssey of the search for astrophysical neutrinos. I will focus on the recent discoveries and technical developments that led us to the point where we stand now. I will highlight the different types of neutrino detectors, and their performances enabling the discovery of high-energy astrophysical neutrinos, and the understanding of the sources behind them. Finally, I will present some possible paths to the remaining uncharted territory of ultra-high-energy neutrino astronomy.

###### Contents

* 1 Introduction
* 2 Underground detectors: a window to our local neighborhood
  * 2.1 Catching a ghost particle
  * 2.2 Illustration with Kamiokande and Super-Kamiokande
  * 2.3 Some loud neighbors
* 3 Large scale detectors: a glimpse at the high-energy neutrino sky
  * 3.1 Getting bigger
  * 3.2 High-energy neutrino detectors
  * 3.3 A quiet neighborhood
* 4 Ultra-high-energy neutrinos: an uncharted territory
  * 4.1 Alternative detection methods
  * 4.2 Alternative detection strategies
* 5 Summary

## 1 Introduction

Neutrinos are elusive particles that hold key answers to long-standing questions in astrophysics and fundamental physics [1]. As detailed below, astrophysical neutrinos are a crucial piece of this puzzle, because they cover a large energy range, far beyond what can be achieved by accelerators on Earth, and because they are expected to be produced in the most energetic sources of the Universe.

Astrophysical neutrinos are produced in various types of environments, and therefore through various production channels, leading to different expected energies. Therefore, detecting neutrinos in different energy ranges allows for probing different types of production channels, hence different types of environments and sources. Typically, two main channels of neutrino production can be distinguished: beta decay reactions, and interactions of accelerated cosmic rays. These interactions, with photon and baryon backgrounds, lead to the creation of charged pions, kaons and charmed hadrons that can decay and produce subsequent neutrinos [2]. This distinction typically draws a line between the MeV-GeV energy range and energies above the TeV range. In reality, the two mechanisms can be interlinked, and interactions of cosmic rays can lead to subsequent beta-decay reactions. However, this distinction holds in the sense that some sources cannot accelerate cosmic rays but may still produce MeV-GeV neutrinos via beta decay reactions.

The left panel of Figure 1 presents an illustration of various fluxes expected from different types of neutrino sources. For instance, in the MeV energy range, the main sources of astrophysical neutrinos are the Sun and core-collapse supernovae, both resulting from beta decay nuclear reactions. At higher energy, in the TeV-PeV regime, also called high-energy, neutrinos are produced through the interactions of accelerated cosmic rays, either in steady-state sources, such as active galactic nuclei, or in transient sources. These sources could also produce ultra-high-energy neutrinos (above PeV).
Transient sources are particularly promising candidates at the highest energies, as they can inject a huge amount of energy over short time scales, thus enabling the production of astroparticles with fluxes detectable on Earth [2] (see the right panel of Figure 1). Furthermore, in the ultra-high-energy regime, so-called cosmogenic neutrinos are also expected to be produced by ultra-high-energy cosmic rays propagating through the Universe and interacting with photon backgrounds.

Because of their low cross sections, neutrinos are an excellent probe of the deep Universe, and can escape from opaque astrophysical sources, unlike light. Hence, they can trace processes and mechanisms hidden to standard astronomy, such as hadronic interactions and acceleration mechanisms.

From a practical point of view, their theoretically predicted energies and fluxes vary greatly from source to source. From the left panel of Figure 1, it can be noticed that each source flux follows a broken power law, and the overall fluxes tend to diminish drastically with increasing energy. The main challenge in detecting astrophysical neutrinos is to achieve the detection volume required to reach those fluxes, while accommodating the large energy ranges between the various sources. To illustrate this, let us mention that the typical effective volume required to detect Solar neutrinos, in the MeV regime, is on the order of a thousand cubic meters of water, which corresponds to a kiloton of target, while in order to detect neutrinos in the TeV regime, as expected from active galactic nuclei, the required effective volume of water reaches a cubic kilometer, corresponding to a gigaton of water. Finally, the detection of ultra-high-energy neutrinos, as expected from cosmogenic fluxes, would require several tens of gigatons of target.

In this review, I will present how these challenges have been tackled by the community through the last few decades, and across the different energy ranges of astrophysical neutrinos. In particular, I will summarize the detection principles that have allowed us to detect the first astrophysical neutrinos. Finally, I will present some new ideas and techniques that are studied today to pursue our exploration of the neutrino Universe towards the highest energies.

Figure 1: _Left:_ neutrino energy spectra for various types of sources (from chapter 17 of [3]). _Right:_ neutrino maximal energy, as a function of the bolometric luminosity and the time variability, for different types of source. For more details, see [2].

## 2 Underground detectors: a window to our local neighborhood

Neutrinos in the MeV-GeV range have been extensively targeted by underground detectors, using water Cherenkov, liquid scintillator and radiochemical techniques (see chapter 8 of [3]).
We can in particular emphasize on the detection of the Solar neutrino fluxes, the atmospheric neutrinos, the neutrino oscillation effect and the famous observation of the neutrino burst from supernova SN1987A. The detection strategies developed by this first generation of astrophysical neutrino detectors, and their results, are presented in the following subsections. ### Catching a ghost particle The detection strategy to catch neutrinos in the energy regime of a few hundreds of MeV to a few GeV relies mostly on the scattering of these neutrinos with the particles of the active target used in the detector. The subsequent energy transfer from the neutrino to these particles leads to two main emission mechanisms of light: * Emission via scintillation, which happens when charged and neutral particles travel through a material, and interact via radiation interaction mechanisms. For charged particles, continuous interactions with the electrons of the material (Coulomb interactions) result in atomic excitation and ionization. For neutral particles, direct interactions occur, leading to proton recoil or fragment spallation, inducing a transfer of energy into the medium, and similar atomic excitation and ionization, as for charged particles (see chapter 3 of [3]). * Another well known emission mechanism is called Cherenkov emission and happens when charged particles move faster than the speed of light in a dielectric medium. The atoms of this medium are then polarized around the direction of propagation. In order to return to equilibrium (i.e., the ground energy state), the atoms release energy through the emission of photons. The wavefront of these light emissions move at the phase speed allowed by the medium, which is \(c/n\), where \(c\) is the speed of light and \(n\) the refractive index of the medium. The wavefronts interfere coherently resulting in the accumulation of on-phase waves, because the disturbance, created by the moving particle, propagates at a speed greater than this phase velocity. It results in shock waves of light forming a cone oriented along the particle's propagation direction and with an aperture \(\theta\) determined by \(\cos\theta=1/\beta n\), with \(\beta=v/c\) the normalized particle speed. This effect is often compared to the sonic boom produced by objects moving faster than the speed of sound [4]. The light produced by either or both mechanisms is then measured via photomultiplier tubes (PMTs) from which the time and amplitude information can be used to reconstruct the neutrino properties. However, due to the large volume of the detectors, the background noise can dominate the signal produced by the neutrinos. The majority of the noise stems from energetic particles produced either by the natural radioactivity of the environment, or by cosmic rays interacting within the atmosphere. The particles with sufficient energy that cross the detector can lead to scintillation or Cherenkov emissions similar to neutrino signals. Two strategies exist to control these noises: shield the detector to prevent the incoming particles to enter, and track the passing particles to discriminate them against the expected signal. The shielding method is the reason why detectors are deployed in deep underground. The rock on-top provides a natural way to block the propagation of the majority of the particles produced by cosmic rays, at a cost of an increase of the surrounding radioactivity from the rock. 
The tracking method can be achieved by deploying a detector "around" the detector (often made of several sub-detectors). This set of detectors can combine different methods, such as plastic scintillators and water tanks equipped with PMTs, to track the Cherenkov and scintillation light produced by the incoming particles. These surrounding detectors can also provide additional shielding from the natural radioactivity.

### 2.2 Illustration with Kamiokande and Super-Kamiokande

In the 1970s, motivated by the advent of the Grand Unification Theories, which predicted the decay of protons and neutrons, several underground detectors were developed (see e.g., Soudan II and IMB). At that time, neutrinos were considered (mostly) as a background to understand and remove from the data.

**Kamiokande.** The Kamiokande II detector was installed in this context, in the old mines of the Kamioka region, and started to take data in 1986 [5]. The inner detector was composed of 3 kton (2140 tons fiducial) of pure water, monitored by 948 20-inch PMTs, arranged in a grid with 1 m spacing covering the inner part of the detector. The outer part of the detector was covered by a 123-PMT Cherenkov counter, used to veto against incoming particles and to absorb \(\gamma\)-ray emissions from the radioactivity of the surrounding rock. Additionally, a slow-muon monitor was installed to veto their potential decays in the detector. Neutrinos are detected through scattering interactions, such as \(\nu+e\rightarrow\nu+e\), where the electron recoil kinematics allow for the tracking of the neutrino characteristics. This channel leads to an angular reconstruction with an average error of 28\({}^{\circ}\). Another channel that can be used to detect neutrinos consists in neutrino scattering off free protons in water, \(\bar{\nu}_{e}+p\to e^{+}+n\). This channel has a cross-section about two orders of magnitude higher than the electron scattering channel, but leads to an isotropic emission of \(e^{+}\); hence no directional information can be extracted. Energy calibration was achieved through the measurement of muon decays into electrons (\(\mu\to e\)), and by the use of Compton-scattered electrons from \(\gamma\) rays produced by a neutron source interacting with a nickel target. A trigger was issued if at least 20 PMTs fired within 100 ns. In that case, the charge and time information from each triggered channel was recorded. Typically, the trigger efficiency was 50% for electrons with energy \(5-8\) MeV, and about 90% for energies above 14 MeV. Consequently, the trigger rate was 0.60 Hz, of which 0.37 Hz was from cosmic muons, and the rest from radioactive contamination in the water. The reconstruction procedure was processed in two steps: first, a reconstruction of the vertex location was achieved by triangulation from the PMT positions and times (a toy illustration of such a time-based fit is given below); then a fit was performed to obtain the direction of the electron.

**Super-Kamiokande.** The successor of Kamiokande started in 1996 [6]. Its concept is very similar, but bigger in size and number of sensors. It is buried at a depth of \(\sim 1\) km in the Kamioka mine in Japan. Its design is a cylinder of 42.2 m height and 39.6 m diameter. The complete detector consists of 50 kton of water, with 22.5 kton of fiducial volume. The inner part is monitored by 11000 PMTs (50-cm diameter). The outer detector part, used for shielding and identification of the incoming particles, is made of a \(2-3\) m thick water layer monitored by 1885 of the same PMTs as inside.
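The vertex triangulation mentioned above can be illustrated with a toy least-squares fit: each hit PMT constrains the vertex position and emission time through the light travel time in water. The sketch below uses invented PMT positions and a \(\sim 1\) ns timing jitter; it is a minimal illustration of the principle, not Kamiokande's actual geometry or reconstruction code.

```python
import numpy as np
from scipy.optimize import least_squares

C_WATER = 0.2254  # speed of light in water, m/ns (c / 1.33)

def residuals(params, pmt_pos, t_hit):
    # params = (x, y, z, t0): candidate vertex and emission time;
    # predicted arrival time at each PMT: t0 + |r_pmt - r_vtx| / (c/n)
    r, t0 = params[:3], params[3]
    return t0 + np.linalg.norm(pmt_pos - r, axis=1) / C_WATER - t_hit

# toy event: light flash at (1, -2, 3) m at t0 = 0 ns, seen by 8 PMTs
rng = np.random.default_rng(0)
pmt_pos = rng.uniform(-10.0, 10.0, size=(8, 3))
t_hit = (np.linalg.norm(pmt_pos - np.array([1.0, -2.0, 3.0]), axis=1) / C_WATER
         + rng.normal(0.0, 1.0, size=8))  # ~1 ns PMT timing jitter

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0, 0.0], args=(pmt_pos, t_hit))
print("reconstructed vertex [m]:", np.round(fit.x[:3], 2))
```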
In addition to the enlarged size, many improvements of the electronics were planned at the construction of the detector and also during its operation. In particular, the new electronics allow for an improvement of the detection rate of supernova bursts by a factor of 100. Consequently, the detection efficiency for \(\mu\to e\) decays reaches 100% for the first microsecond, and the time accuracy is at the nanosecond level for any event. Super-Kamiokande can detect neutrinos from 3.5 MeV up to 100 GeV, from Solar up to atmospheric neutrinos, and covers the supernova range (single events and relics). The energy resolution for solar and supernova neutrinos amounts to 14.2% at 10 MeV, while for atmospheric single muons it is of order 2.4% and degrades with increasing energies. The angular resolution is kinematically limited to \(\sim 18^{\circ}\) for neutrinos at 10 MeV, but in practice, due to scattering of recoiled electrons, it is closer to \(\sim 20^{\circ}\). For interactions at higher energy, such as \(\nu_{\mu}+X\rightarrow\mu+Y\), the resolution is worse, with 30\({}^{\circ}\) at 1 GeV, while for upward-going muons it reaches down to 2\({}^{\circ}\). These two detectors are historically recognized for their pioneering experimental developments and scientific results. They are to be followed by another ambitious and promising successor: Hyper-Kamiokande [7].

### 2.3 Some loud neighbors

Kamiokande and Super-Kamiokande have opened a neutrino window on our local environment. The most famous discovery is probably the neutrino burst from supernova SN1987A, followed by the solar and atmospheric neutrino oscillations. In the following, these discoveries are briefly summarized.

**The Galactic supernova SN1987A.** On 23 February 1987, at 7h35min35s (\(\pm 1\) min), a neutrino burst was observed in the Kamiokande detector (as well as in IMB and Baksan). The burst of 11 neutrinos lasted for 13 s, and the signal was consistent with energies from \(7.5-36\) MeV. The first two neutrinos pointed back at the Large Magellanic Cloud, with angles \(18\pm 18^{\circ}\) and \(15\pm 27^{\circ}\). The association is supported by the time structure and energy distribution of the events, in addition to the correlation in direction. The events occurred 18 h prior to any optical detection. After correction for the detector response, it is possible to estimate the integrated neutrino flux to be about \(10^{10}\)\(\bar{\nu}_{e}\,\mathrm{cm}^{-2}\) for energies above 8.8 MeV. This can be extrapolated to the source output, and leads to a neutrino energy content of \(8\times 10^{52}\) ergs for an average neutrino energy of \(\sim 15\) MeV, consistent with the theoretical expectations [5]. The discovery led, at that time, to a limit on the decay lifetime of electron neutrinos and anti-neutrinos of more than \(10^{5}\,\mathrm{yr}/(E_{\nu}/m_{\nu})\), where \(E_{\nu}\) and \(m_{\nu}\) are the neutrino energy and mass, respectively. This observation also shed light on the solar neutrino puzzle, still unresolved at that time.

**Solar neutrinos.** The Sun produces the largest neutrino flux detectable on Earth. Solar neutrinos are produced in nuclear reactions occurring in the core of the Sun (and of any star above a certain mass). The basic reaction consists in the nuclear fusion of hydrogen nuclei and is called the p-p chain: \(4p\rightarrow\,^{4}He+2e^{+}+2\nu_{e}+26.2\,\mathrm{MeV}\). Other reactions may take place, depending on the composition of the star and its evolutionary state (see e.g., p-e-p, \({}^{7}Be\), \({}^{8}B\)...)
During these reactions, most of the energy is transferred through kinematic interactions to nearby charged particles and photons. After roughly \(10,000\) years, the photons escape the Sun (due to multiple scattering inside the Sun's plasma), resulting in a luminosity of \(3.9\times 10^{33}\) ergs/s. Only 3% of the energy from the nuclear reactions is carried away by neutrinos, which escape only 2 s after their creation.

The discovery of Solar neutrinos was first achieved by the Homestake experiment in the 1960s, which observed only \(1/3\) of the expected flux. At that time, the possible explanations were: systematic errors, issues with solar models, and oscillations (still at the hypothesis stage). In 1989, Kamiokande observed 55% of the expected flux, thus confirming, at least partially, the results of the Homestake Chlorine experiment. Other experiments, such as SAGE and GALLEX, also confirmed the solar neutrino deficit. This was called the solar neutrino puzzle. In 2001, two studies were published by the Super-Kamiokande collaboration on the solar neutrino flux and spectrum; however, the results were not significant enough to conclude on the oscillation of neutrinos from the Sun. Shortly after, the SNO experiment (a 1 kton water Cherenkov experiment in Canada), sensitive only to electronic neutrinos, combined its results with Super-Kamiokande, reaching a significance of 3.3 \(\sigma\) on the solar neutrino oscillation hypothesis. Kamiokande and Super-Kamiokande brought critical experimental data towards the understanding of the physics at play in our Sun. Nowadays, some remaining questions, such as the day/night effect on the neutrino flux (oscillation-in-matter effects), still need to be settled with better statistics.

**Atmospheric neutrinos.** In the 1980s, many experiments started to study the neutrino oscillation effects. Most of them used neutrinos produced by accelerators and reactors, but could not find any evidence of neutrino oscillations. This can be explained by the fact that the neutrino energy range was about 1 GeV (1 MeV) and the flight length about 1 km (\(<100\) m) for accelerator experiments (reactor experiments). Since the oscillation probability is given by \(P\big(\nu_{\mu}\to\nu_{\mu}\big)\propto\sin^{2}\big(1.27\,\delta m^{2}\,L/E_{\nu}\big)\), the probed parameter space for masses is \(\delta m^{2}\gtrsim 0.1-1\,\mathrm{eV}^{2}\), which is too large. In order to measure smaller values of mass, larger oscillation lengths are required. Neutrinos generated in the atmosphere by cosmic-ray interactions, via meson decays \(\pi/K\to\mu+\nu_{\mu}\to e+2\nu_{\mu}+\nu_{e}\) (taking into account pion/kaon charges and neutrino/anti-neutrino states), can travel up to the Earth diameter (\(\sim 12700\,\mathrm{km}\)) [8] (a numerical illustration is given below). The first atmospheric neutrino experiments started in the 1960s, with in particular two experiments located in deep mines in South Africa and India. In 1978, the first results were published, which showed no evidence of neutrino oscillation, partly due to the large systematic effects on the data. In the meantime, in the 1970s, the Grand Unified Theories predicted the decay of nucleons, and several large-volume experiments started to try to detect it: for instance, the Kamiokande and IMB experiments, for which atmospheric neutrinos were a background that needed to be correctly understood in order to search for nucleon decays. In 1986, the first evidence of a deficit of muon-neutrino events with respect to expectations was brought to the community, and later confirmed by other experiments.
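To make the baseline argument above concrete, the following Python sketch evaluates the two-flavor survival probability \(P(\nu_{\mu}\to\nu_{\mu})=1-\sin^{2}(2\theta)\sin^{2}(1.27\,\delta m^{2}L/E)\) for a mass splitting close to the atmospheric one. The chosen \(\delta m^{2}\) and maximal mixing are illustrative assumptions; in a real analysis the probability is also averaged over energy and zenith angle.

```python
import math

def survival_prob(dm2_ev2, L_km, E_gev, sin2_2theta=1.0):
    # two-flavor nu_mu survival probability
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

DM2 = 2.5e-3  # eV^2, close to the atmospheric mass splitting
for label, L_km in [("accelerator baseline", 1.0),
                    ("down-going atmospheric", 15.0),
                    ("up-going atmospheric", 12700.0)]:
    p = survival_prob(DM2, L_km, E_gev=1.0)
    print(f"{label:24s} L = {L_km:8.0f} km -> P(nu_mu -> nu_mu) = {p:.3f}")
```

Only the up-going, Earth-crossing baseline develops a visible deficit at GeV energies, which is why the zenith distribution was the smoking gun.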
It was only in 1998, with the successor of Kamiokande, Super-Kamiokande, that the official announcement of the discovery of neutrino oscillations in the atmospheric neutrino flux was made, at the 18th International Conference on Neutrino Physics and Astrophysics. The precise measurements of the ratio of muon and electron neutrino fluxes, and the zenith distributions (which are independent of the flux models), showed a deficit of upward-going neutrinos, only explained by neutrino oscillation, with a significance of \(6\,\sigma\) [6]. These results, with deep implications in particle physics, were possible thanks to atmospheric neutrinos, produced by cosmic particles. Nowadays, the detailed study of these oscillations, and the question of the mass hierarchy between the different flavors of neutrinos, remain open. New experiments, such as JUNO [9] or KM3NeT (see section 3), intend to shed light on them.

Ultimately, the underground detectors are limited in size, with a maximal cross-section of the order of \(\sim 1000\,\mathrm{m}^{2}\), for practical reasons. Thus, they cannot reach the neutrino fluxes from cosmic accelerators in the TeV-PeV range.

## 3 Large scale detectors: a glimpse at the high-energy neutrino sky

The fluxes of astrophysical neutrinos become drastically low when the energy increases, because the sources are located farther away from us. Indeed, at lower energies the neutrino fluxes are dominated by nearby sources, such as the Sun and the atmospheric neutrinos. In addition, Galactic supernovae can be considered as relatively nearby sources. However, the sources producing neutrinos at higher energy are dispersed over large distances across the Universe, as for instance active galactic nuclei. Therefore, the fluxes are drastically diluted across the Universe. Consequently, the volume of active target required to statistically detect neutrinos in the energy range of TeV to PeV becomes so large that it is no longer possible to scale up the Kamiokande or Super-Kamiokande techniques (see section 2.2). Therefore, large-scale detectors are required, which make use of large natural active targets such as seas, ice, or lakes. These allow for a scale-up of the underground detection technique, based on water Cherenkov light, in order to reach the low fluxes expected in the high-energy neutrino range.

### 3.1 Getting bigger

The first principles of such detection strategies were proposed by M. Markov in 1960 [10]. The idea relies on the charged-current and neutral-current interactions of a neutrino with a nucleon of the target: \(\nu_{l}+X\to l+Y\), and \(\nu_{l}+X\to\nu_{l}+Y\), with \(l=e,\mu,\tau\). In the case of a charged-current interaction, the produced lepton \(l\) emits a light track from Cherenkov emission, while in the case of a neutral-current interaction, the produced nucleus \(Y\) can decay and also leave a Cherenkov light imprint (due to the charged particles in the cascade). This Cherenkov light can be measured thanks to PMTs, similarly as described in section 2.1. The charged particles produced during the neutrino interaction can travel up to the detector and then either cross it, leaving a light track, or decay within it (or nearby), also leaving a light signature. Therefore, the effective volume is much larger than the detector volume itself. In particular, muon tracks with upward-going directions can efficiently be identified as neutrino signatures, since no other particle can cross the Earth at these energies.
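The statement that the effective volume grows with energy can be quantified with the standard average muon energy-loss parametrization \(\mathrm{d}E/\mathrm{d}x\simeq a+bE\), which gives a muon range \(R=(1/b)\ln(1+bE/a)\). The values of \(a\) and \(b\) below are typical textbook numbers for water, used here only for illustration.

```python
import math

A_ION = 0.26    # GeV per meter water equivalent (ionization term)
B_RAD = 3.5e-4  # 1 / m.w.e. (radiative-loss term)

def muon_range_mwe(E_gev):
    # continuous-slowing-down range for dE/dx = a + b*E
    return math.log(1.0 + B_RAD * E_gev / A_ION) / B_RAD

for E in [10.0, 100.0, 1e3, 1e4]:
    print(f"E_mu = {E:8.0f} GeV -> range ~ {muon_range_mwe(E):7.0f} m w.e.")
```

A TeV muon travels kilometers of water before stopping, so neutrino interactions occurring far outside the instrumented volume still produce detectable tracks.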
This detection strategy is well suited for high-energy neutrinos, in particular since (see chapter 17 of [3]):

* both the neutrino cross-section and the muon range increase with energy, leading to larger effective volumes at higher energies,
* the mean angular deviation between the neutrino direction and the muon direction goes as \(E^{-0.5}\), hence a better tracking of the sources and a better reconstruction at higher energies (with a better discrimination between up-going/down-going events),
* above TeV energies, the Cherenkov light yield increases significantly, allowing for a better reconstruction of the muon energy, as good as \(\sigma\left(\log\left(E_{\mu}\right)\right)\sim 0.3\), hence a better estimate of the neutrino energy (through the unfolding of the lepton spectra into a neutrino spectrum).

Above \(\sim 100\,\)TeV, the Earth becomes opaque to neutrinos, leading to neutrino-induced muons arriving preferentially near the horizon (see the transmission estimate below). At energies in the EeV regime, the opacity of the Earth forces neutrino-induced muons to arrive from the horizon, greatly reducing the field of view. However, muons from astrophysical neutrinos with energies in the PeV-EeV range deposit a large amount of energy, which can be used as a handle to distinguish them from atmospheric muons, on a statistical basis. In addition, down-going muon tracks or cascades starting within the detector volume are necessarily produced by neutrinos.

In order to reach the effective volumes required to detect the high-energy neutrino fluxes, large volumes of natural targets, such as glaciers, lakes and seas, are used. The general design consists of PMTs housed in pressure-resistant transparent glass spheres, called optical modules (OMs), spread over a large volume of ice/water along strings deployed in ice or anchored at the bottom of the sea or lake. The sphere spacing is typically \(10-25\,\)m, and the strings are spaced by \(60-200\,\)m. Compared to underground detectors, this is larger by a large factor. This technique allows for the coverage of a large volume of target, but makes the detector insensitive to events with energy below \(\sim 10\,\)GeV, due to light absorption, unless a denser sub-array is present.

For such kinds of detector, there are three main sources of background for the identification of astrophysical neutrinos:

* atmospheric muons: down-going muons produced by atmospheric cosmic-ray interactions,
* random background: PMT dark counts, \({}^{40}K\) decay in seawater, and bioluminescence in water,
* atmospheric neutrinos: produced in cosmic-ray interactions in the atmosphere.

The background generated by atmospheric muons traveling downward to the detectors can be shielded against by deploying the detector at deep locations below the water (sea and lakes) or ice surface. This shielding method follows the underground-detector shielding technique, however with a lower efficiency, due to the lower density of water and ice compared to rock. Therefore, most of the detectors are deployed at least below a depth of \(\sim 1\,\)km to suppress the majority of the background from atmospheric muons. In addition, the reconstruction of the arrival direction can help discriminate against the atmospheric muons. Similarly to rock, water contains radioactive isotopes, such as potassium \({}^{40}K\), responsible for a constant background. In addition, transient luminescence phenomena can happen in water, mostly from biological sources.
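The Earth-opacity statements above can be estimated with a homogeneous-Earth toy model: the survival probability along a chord is \(\exp(-N_{\mathrm{nucl}}\,\sigma)\). The power-law cross-section used below (\(\sigma\propto E^{0.363}\), normalized so that the vertical column is about one interaction length in the few-tens-of-TeV range) is a rough parametrization inspired by standard fits; it is an assumption for illustration, not a number taken from this review.

```python
import math

N_A = 6.022e23        # nucleons per gram (A ~ 1)
R_EARTH_CM = 6.371e8
RHO_MEAN = 5.5        # mean Earth density, g/cm^3 (ignores the core/mantle profile)

def sigma_cc_cm2(E_tev):
    # rough nu-N charged-current cross-section above ~10 TeV (assumption)
    return 2.4e-34 * (E_tev / 40.0) ** 0.363

def transmission(E_tev, cos_zenith):
    # chord through a homogeneous Earth for an up-going neutrino (cos_zenith < 0)
    chord_cm = -2.0 * R_EARTH_CM * cos_zenith
    n_nucl = RHO_MEAN * chord_cm * N_A
    return math.exp(-n_nucl * sigma_cc_cm2(E_tev))

for E in [10.0, 100.0, 1000.0]:  # TeV
    print(f"E = {E:6.0f} TeV: vertical {transmission(E, -1.0):.2f}, "
          f"30 deg below horizon {transmission(E, -0.5):.2f}")
```

Even in this crude model, PeV neutrinos essentially never cross the Earth vertically, while trajectories skimming the horizon remain viable.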
Such random backgrounds — dark counts, \({}^{40}K\) decays, and bioluminescence — can be mitigated by using vetoes based on local coincidence conditions between PMTs. Finally, atmospheric neutrinos can be separated from astrophysical neutrinos only on a statistical basis. The neutrino fluxes from astrophysical sources are expected to be harder than the atmospheric neutrino fluxes, leading to a higher signal-to-background discrimination as the energy increases. Down-going atmospheric neutrinos, interacting within the detector volume, can be rejected by looking for accompanying muons (muon bundles) produced in the same shower. It has to be noted that atmospheric neutrinos provide a convenient way to calibrate the detector, since their fluxes are related to cosmic rays, which have been extensively characterized in this energy range. Furthermore, at lower energies, they are used to study the physics of neutrinos, such as oscillations or the mass hierarchy, similarly to underground detectors.

### 3.2 High-energy neutrino detectors

During the last decades, intense efforts have been pursued in order to achieve the detection of high-energy neutrinos, and open a window on the high-energy Universe. The amazing results that we have witnessed these last two decades have been possible thanks to the developments of several pioneering experiments, started more than 40 years ago. The origins of the current large-scale detectors of high-energy neutrinos trace back to a concept developed in the 1960s, called DUMAND, for Deep Underwater Muon And Neutrino Detector. It was located close to Hawai'i, about \(30\,\)km away from Big Island, at a depth of \(4.8\,\)km. In 1993, 24 OMs were deployed but failed due to water leakage. This drawback and financial issues led to the end of the project in 1996 (see chapter 17 of [3]). From this starting point, various concepts emerged, and deployment was investigated in various locations, such as ice caps, deep lakes and seas.

**Ice experiments.** The concept of detectors deployed in deep ice was investigated on the American side with **AMANDA** (**Antarctic Muon And Neutrino Detection Array**). The ice layer, of \(3\,\mathrm{km}\) thickness at the South Pole, is used as a target and a detection medium. It is located a few hundred meters away from the Amundsen-Scott station, and is still within the actual IceCube array (see the next paragraph). A first shallow test in 1993-1994, at a depth of \(800-1000\,\mathrm{m}\), showed that at these depths the effective scattering length is only \(40-80\,\mathrm{cm}\), because of remnant bubbles, making the reconstruction of muon tracks impossible. Subsequent ice measurements, at different depths and locations, found that the bubbles would disappear below some depth. Therefore, a second test array was deployed at \(1500-2000\,\mathrm{m}\), where the scattering length is about \(20\,\mathrm{m}\), much worse than in water but enough to perform track reconstructions. The array was then gradually increased until its final size in 2000, with 19 strings and 677 OMs. If the scattering length is rather small, on the other hand, the absorption length is larger than in water, allowing for a better photon collection from Cherenkov light, hence a better sensitivity. The energy threshold was \(\sim 50\,\mathrm{GeV}\). However, the resolution of the angular reconstruction for muon tracks was only about \(2-2.5^{\circ}\), because of the strong ice scattering, which blurred the photon information from the Cherenkov light cones.
Cascade reconstruction was even worse, with an angular resolution of about \(25^{\circ}\). The last analysis from AMANDA used 6595 neutrinos collected during the period 2000-2006. The detector was finally switched off in 2009. During the operation of AMANDA, it was found that the ice properties improve greatly below a major dust layer located at around \(2000-2100\,\mathrm{m}\) depth. This motivated the deployment of a larger-scale detector following the design of AMANDA. The **IceCube** collaboration successfully built the first gigaton neutrino detector, thanks to the experience and developments of its precursor AMANDA. It is located at the South Pole and was completed in December 2010. It is composed of 5160 Digital Optical Modules (DOMs) along 86 strings between \(1450-2450\,\mathrm{m}\) depth in the Antarctic ice. In addition, 320 DOMs are placed at the surface in the IceTop array, just above the strings. Each string carries 60 DOMs, each housing a 10-inch PMT, connected in pairs in order to perform fast local-coincidence triggering. AMANDA was first integrated as a low-energy sub-detector, and then replaced by DeepCore, a high-density six-string sub-array in the deep ice at the center of the IceCube array. The energy threshold is about \(\sim 300\,\mathrm{GeV}\) for IceCube, and \(\sim 10\,\mathrm{GeV}\) for DeepCore [2]. PMT pulses are sent to the surface, but only local coincidences result in full waveform readout, in order to reduce the data throughput from noise hits. The data rate is about \(100\,\mathrm{GB/day}\), written on tapes. Online processing and analysis, performed on local computer farms, extract the interesting events, such as up-going muons, cascades, high-energy events, coincidences between IceTop and IceCube, follow-up events, etc. The refined data amount to about \(20\,\mathrm{GB/day}\) and are sent via satellite communications to the outside world. Calibration is achieved thanks to LEDs placed on the flasher board. The DOMs cannot be retrieved from the ice since they are frozen into it; however, the main digital electronics board hosts a Field-Programmable Gate Array (FPGA), to which new functionalities can be uploaded. This enables a continuous update of the triggering and recording capabilities of the sensors. Each DOM has a local clock, synchronized every few seconds to a central GPS clock, allowing for a \(\sim 2\,\mathrm{ns}\) time resolution. The muon track angular resolution is about \(1^{\circ}\) at 1 TeV, and below \(0.5^{\circ}\) above 10 TeV. In particular, the very deep ice, of better quality, greatly improves the reconstruction performance. On the other hand, cascades have a limited angular resolution of \(\sim 10^{\circ}\) [2], mostly due to scattering in the ice. IceCube is the only detector of its kind to benefit from the co-location of a permanently operating surface detector. The IceTop array is made of tanks filled with ice, each equipped with 2 DOMs, placed at the top of each string. The array detects air-showers; the coincident detection of muons by IceCube allows for the calibration of the absolute pointing and angular resolution of IceCube. IceTop can measure the energy spectrum of air-showers up to \(10^{18}\) eV, and the mass range of the primary particles can be estimated thanks to the combination of both detectors (sensitive to both the electron and muon components of the air-shower). IceCube can also detect supernova neutrino bursts, thanks to its low dark count (static noise).
It can measure the faint increase in counting rate resulting from millions of MeV neutrino interactions, as expected from the burst of neutrinos produced by a supernova. The detector records the counting rate every millisecond; therefore, even a supernova in the Large Magellanic Cloud (e.g., SN1987A) would be detected with a significance of \(5\,\sigma\), early enough to trigger the SuperNova Early Warning System (SNEWS). Thanks to criteria on the event configuration and vetoing conditions, High-Energy Starting Events (HESE) can be safely discriminated from atmospheric neutrinos and muons. Over 6 years of data, this sub-category of events alone has evidenced the existence of an astrophysical neutrino flux with a significance of more than \(7\,\sigma\). IceCube has been taking data in its final configuration since January 2011, with a duty cycle larger than 99%, detecting about \(10^{5}\) neutrino events every year, of which 99.9% are atmospheric. The failure rate is about 1 DOM per year out of the 5160 DOMs in total.

**Lake experiments.** The deployment of neutrino detectors in deep lakes was initiated by the **Baikal Neutrino Telescope NT200**, a prototype array of 192 OMs installed in the southern part of Lake Baikal and completed in 1998. A few extensions and upgrades followed the initial prototype, until the Baikal collaboration initiated the stepwise installation of a cubic-kilometer-scale detector array in Lake Baikal. The project is called the **Baikal Gigaton Volume Detector** (**Baikal-GVD**). It is based on a modular structure of several clusters. The first cluster was deployed in 2016, about 4 km offshore. The detector volume increases at a rate of \(1-2\) clusters per season and, as of 2023, 12 clusters have been deployed. It is currently the largest water Cherenkov detector in operation in the Northern Hemisphere. The first phase of GVD consists of 8 clusters totaling 2304 OMs, for a detector volume of about \(0.3-0.4\) km\({}^{3}\); a second phase will bring the total volume to \(1-2\) km\({}^{3}\). Each cluster is a fully functional sub-detector, working both in standalone and in full-array mode. Each is made of 8 strings, seven arranged in a circle around a central one. The OMs, each housing a 10-inch PMT, are placed on vertical strings anchored at the bottom of the lake, with 36 OMs per string, hence 288 per cluster. The OMs are spaced by 15 m, with the lowest OM at a depth of 1275 m (100 m above the bottom of the lake) and the highest at 750 m below the surface. The clusters are separated by 300 m. A pair of neighboring OMs in coincidence can issue a local trigger, which is sent as a request to their section. The requests from the three sections arrive at the Control Module (CoM) of the string, which transfers them to the cluster DAQ (data acquisition system), producing a global trigger. Calibrations are performed using LEDs and laser stimulation: LEDs located in each OM perform amplitude and time calibrations of the OMs and between different sections, while high-power lasers located within each cluster calibrate both the whole cluster and the adjacent ones. Finally, the OMs are positioned by means of an acoustic system within each cluster, with 4 acoustic modems per string, allowing for a positioning accuracy of about 2 cm. The energy threshold is \(\sim 100\) GeV; the mean angular resolution is \(<1^{\circ}\) for tracks and \(4.5^{\circ}\) for cascades, both above 10 TeV [2].
A total of 10 cascades with energies \(>60\) TeV have been selected over the period 2018-2020, making them the best astrophysical neutrino candidates so far. Furthermore, multi-messenger follow-ups have been set up, and alerts are planned to be issued soon.

**Sea experiments.** Two experiments initiated the efforts towards the deployment of detectors in the deep sea: **NESTOR** (**Neutrino Extended Submarine Telescope with Oceanographic Research Project**) and **NEMO** (**Neutrino Mediterranean Observatory**). NESTOR was located off the Greek coast, at about 3800 m depth in the Ionian Sea, and NEMO was deployed close to Sicily, 100 km from Capo Passero. Both experiments measured the atmospheric muon fluxes and paved the way for a larger-scale detector. **ANTARES** (**Astronomy with a Neutrino Telescope and Abyss Environmental Research**) was the first water detector with a size comparable to AMANDA. It consisted of 12 strings anchored to the seabed and kept vertical by buoys. The strings were separated by about 60 m; each string was composed of 25 storeys, spaced by 14.5 m, with the lowest at 100 m above the seabed and the highest at about 460 m above it. Each storey was composed of three 10-inch PMTs. The storeys were connected with electro-optical cables (21 optical fibers for digital communications). Each string was divided into 5 sectors, each containing 5 storeys. The storeys were controlled by a Local Control Module (LCM), which handled data communications between its sector and the shore station. The signals from the PMTs were digitized with sub-nanosecond precision, thanks to an interplay between the clock of the LCM and the master clock at shore. Time calibration was performed with pulses between the shore clock and the LCM clocks, and with LED beacons that simultaneously fire the digitizing system electrically and the PMTs optically. Because the strings were exposed to sea currents, their positions varied in time; a calibration of the positions was therefore achieved with compasses in each storey, tilt meters along the strings, and an acoustic triangulation system composed of transmitters at the bottom of the strings and hydrophones along them. An overall precision of a few cm on the relative positions of the OMs was achieved. The energy threshold was 20 GeV for tracks and 1 TeV for cascades [2]. The angular resolution for muon tracks, estimated from Monte Carlo simulations, was \(\sim 0.2^{\circ}\) at 10 TeV, 0.7\({}^{\circ}\) at 1 TeV, and 1.8\({}^{\circ}\) at 100 GeV. Towards lower energies, kinematic effects between the neutrino and the muon limit the resolution. For cascades, a median mismatch of \(\sim 10^{\circ}\) was relatively easily achieved, while refined methods and cuts allowed a resolution down to \(\sim 3^{\circ}\). The full configuration of ANTARES was completed in 2008, and the detector was switched off in 2022.

The next generation of European large-scale water Cherenkov detector is the **cubic KiloMetre NEutrino Telescope** (**KM3NeT**). It is currently under deployment at the bottom of the Mediterranean Sea at two main locations: offshore Toulon, France (\(\sim 2450\) m), and offshore Capo Passero, Sicily, Italy (\(\sim 3500\) m). The complete detector will consist of 3 building blocks, each made of 115 strings carrying 18 DOMs each. Offshore Sicily, the detector is called **ARCA** (**Astroparticle Research with Cosmics in the Abyss**); it focuses on the study of high-energy cosmic neutrinos in a similar energy range to IceCube.
It will consist of 2 building blocks in a sparse layout, in order to reach a detector volume of a cubic kilometer. It will complement the field of view of IceCube by looking at the Southern sky, where the Galactic Center is visible. Furthermore, a better angular resolution for tracks is expected than with IceCube, because the scattering length is larger in water than in ice. The targeted energy threshold is \(\sim 100\,\mathrm{GeV}\) for tracks and \(\sim 1\,\mathrm{TeV}\) for cascades [2]. The envisioned angular resolution is \(0.1^{\circ}\) at \(1\,\mathrm{PeV}\) for muon tracks, and \(\sim 1.5^{\circ}\) for cascades [2]. Offshore Toulon, the detector is called **ORCA** (**Oscillation Research with Cosmics in the Abyss**). It is composed of the third building block, with a high-density layout, in order to focus on low-energy atmospheric neutrinos (down to \(\sim 4\,\mathrm{GeV}\)) for precise oscillation measurements. Each DOM is made of a \(43\,\mathrm{cm}\) pressure-resistant glass sphere housing \(31\) PMTs of \(7.5\,\mathrm{cm}\) diameter each. This configuration has several advantages over a single large PMT: the photocathode area is more than three times larger than that of a single \(25\,\mathrm{cm}\) PMT; the individual readout of each PMT allows for a good separation between different photoelectrons, which helps filter the data; and the configuration provides information on the direction of emission. This concept was validated with in-situ prototypes. For each pulse seen by a PMT that passes preset thresholds, the leading-edge time and the time over threshold are digitized and sent as a hit, instead of digitizing the whole waveform. Each hit therefore amounts to only \(6\) bytes of data. All hits are sent to shore, following the concept called "All data to shore". The total data rate from a single building block (of \(64{,}170\) PMTs) is \(\sim 25\,\mathrm{Gb}\)/s, sent to shore through optical fibers using wavelength multiplexing to optimize the data transfer. Once at shore, each event is filtered and discriminated against noise, reducing the incoming data flux by a factor of \(10^{5}\). Each saved event contains all the data recorded during that event, in order to retain the maximum information for offline analysis, and is stored on disk. KM3NeT is considered the European counterpart of IceCube, focusing on the Southern Hemisphere. In principle, a complete ARCA detector could detect the IceCube astrophysical neutrino flux at the \(5\,\sigma\) level within \(1\) year of operation. Thanks to its location, its best sensitivity is toward the Galactic Center, where several TeV \(\gamma\)-ray sources have been detected. In less than \(4\) years, the predicted neutrino flux should be probed and constraints set. Furthermore, its good angular resolution, as well as its field of view, should allow for multi-messenger follow-ups. Finally, ORCA could determine the neutrino mass hierarchy with a significance of \(3\,\sigma\) in \(3\) years of data taking. To complete the picture, let us mention P-ONE (Pacific Ocean Neutrino Experiment), which builds on a similar concept to KM3NeT and Baikal-GVD but is to be deployed in the Pacific Ocean. It will increase the event statistics from the Southern sky and complement the field of view and follow-ups of the current experiments (see [1] for more details).
Last but not least, IceCube-Gen2 is a large, multi-instrument extension planned for IceCube; it is detailed in section 4.

### A quiet neighborhood

In 2013, IceCube detected for the first time a diffuse astrophysical neutrino flux, now established with a significance larger than \(7\,\sigma\): a breakthrough in the field. The origin of this diffuse flux still remains unclear. Recently, three major discoveries made by IceCube have shed new light on this enigma. In 2018, a possible detection of a neutrino in coincidence with electromagnetic radiation (X-ray, \(\gamma\)-ray, and optical) was evidenced above the 3 \(\sigma\) level for the first time. The neutrino was identified as coming from the direction of the blazar TXS0506+056, which was in an active state at that time. This result hinted that active galactic nuclei could be a source of high-energy neutrinos. It was confirmed last year by the discovery of a neutrino excess in the direction of the active galactic nucleus NGC1068. This excess was observed over a time period of 3186 days and represents \(79^{+22}_{-20}\) neutrinos above the atmospheric and cosmic neutrino backgrounds, leading to a significance of \(4.2\,\sigma\). The neutrino flux associated with the source is \(\Phi^{\rm 1TeV}_{\nu_{\mu}+\bar{\nu}_{\mu}}=(5.0\pm 1.5_{\rm stat})\times 10^{-11} \,{\rm TeV}^{-1}\,{\rm cm}^{-2}\,{\rm s}^{-1}\). This flux can be converted into a total neutrino luminosity at the source, taking into account the flavor ratio, the emission mechanism and the distance to NGC1068 (a back-of-the-envelope version of this conversion is sketched below). The isotropic, redshift-corrected neutrino luminosity is \(L_{\nu}(1.5\to 15\,{\rm TeV})=(2.9\pm 1.1_{\rm stat})\times 10^{42}\,{\rm erg \,s}^{-1}\). Interestingly, this neutrino luminosity is more than a factor of 10 higher than the \(\gamma\)-ray luminosity observed in the energy range of 100 MeV to 100 GeV, which is \(L_{\gamma}=1.6\times 10^{41}\,{\rm erg\,s}^{-1}\) (and higher than the limits above 200 GeV). This result suggests a dense environment around or within the source, which absorbs the \(\gamma\)-rays but not the neutrinos. The consequence of this discovery, added to the evidence from TXS0506+056, is that active galactic nuclei could contribute significantly to the overall diffuse neutrino flux, considering that each of these sources individually contributes around 1% of this flux in its respective energy range. It has to be noted that the two aforementioned sources are both active galactic nuclei, but they most likely emitted neutrinos via different mechanisms: the former is a blazar that was in a flaring state, while the latter is a Seyfert galaxy and a steady-state neutrino emitter. Finally, this year, the detection of neutrinos coming from the Galactic plane has evidenced a new origin for part of the observed diffuse neutrino flux. Indeed, diffuse neutrino emission from the Galactic plane is expected from the interactions of cosmic rays inside the Galaxy. Detecting these neutrinos makes it possible to locate the interaction sites and infer the energetics at play. IceCube's location in the Southern Hemisphere is not ideal for observing the Galactic Center, since atmospheric muon tracks pollute the expected signal of muon tracks produced by astrophysical neutrinos. However, by using cascade events it is possible to greatly reduce the expected background.
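Before turning to the Galactic plane in detail, the flux-to-luminosity conversion quoted above for NGC1068 can be illustrated with a minimal sketch. The spectral index (\(\gamma\simeq 3.2\)) and the distance (\(\simeq 14.4\) Mpc) used below are not given in the text and are taken here as assumptions from the IceCube analysis of NGC1068; the factor of 3 assumes equal flavor ratios at Earth.

```python
import numpy as np
from scipy.integrate import quad

# Back-of-the-envelope neutrino luminosity of NGC 1068.
# PHI0 is the quoted nu_mu + anti-nu_mu flux normalization at 1 TeV;
# GAMMA and the distance are assumptions (IceCube analysis values).
PHI0 = 5.0e-11            # TeV^-1 cm^-2 s^-1 at 1 TeV
GAMMA = 3.2               # assumed spectral index
D_CM = 14.4 * 3.086e24    # assumed distance: 14.4 Mpc in cm
TEV_TO_ERG = 1.602        # unit conversion

# Energy flux in 1.5-15 TeV for dN/dE = PHI0 * (E / 1 TeV)^-GAMMA
energy_flux, _ = quad(lambda e: e * PHI0 * e ** (-GAMMA), 1.5, 15.0)

# Isotropic-equivalent luminosity; the factor 3 assumes equal flavor ratios
lum = 3.0 * 4.0 * np.pi * D_CM**2 * energy_flux * TEV_TO_ERG
print(f"L_nu(1.5-15 TeV) ~ {lum:.1e} erg/s")
```

With these assumptions the sketch returns \(\sim 2.9\times 10^{42}\,\mathrm{erg\,s^{-1}}\), consistent with the quoted luminosity.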
Returning to the Galactic-plane search: thanks to a novel hybrid method, which involves a complete parametrization of the detector response with a neural network, the number of retained events could be increased by a factor of \(\sim 30\) (from 1980 to 59592 events, spanning 500 GeV to several PeV). Furthermore, the angular resolution could be improved by a factor of 2 at TeV energies. Consequently, diffuse neutrino emission from the Galactic plane was observed with a significance of \(4.71\,\sigma\) for the best model [11]. In the last decade, IceCube has made significant discoveries from the Northern sky, supported by the constraints set by ANTARES on the Southern sky. Their successors, namely IceCube-Gen2 and KM3NeT, should deepen these results by significantly increasing the number of detected neutrinos. Finally, no cosmogenic neutrinos, expected from the interaction of ultra-high-energy cosmic rays with cosmic photons, have been discovered yet. New detectors of several cubic kilometers seem to be needed to reach the fluxes expected from this component of the ultra-high-energy astrophysical neutrinos.

## 4 Ultra-high-energy neutrinos: an uncharted territory

Neutrino fluxes become extremely low towards ultra-high energies, and require effective volumes of up to hundreds of gigatons to achieve detection. As an illustration, a detector scale larger than 100 km\({}^{3}\) is needed to detect more than a handful of cosmogenic neutrinos, whose typical energy range is \(100\,\mathrm{PeV}\) to \(10\,\mathrm{EeV}\). In principle, such detectors can be designed with the same technique and strategy as the detectors of high-energy neutrinos: technically, the monitoring of several cubic kilometers of ice or sea can be conducted with the standard detection of the Cherenkov tracks and cascades induced by neutrinos. However, the practical deployment of such detectors becomes extremely cost-prohibitive. Therefore, many projects have investigated alternative methods which, in fact, have already been used and matured by other fields of the astroparticle community.

### Alternative detection methods

The detectors need to monitor natural targets as big as possible in order to reach the low fluxes at ultra-high energies. The biggest accessible volumes on Earth are the Earth itself, the atmosphere, and the polar ice caps. On a side note, it is also possible to monitor the Moon from the Earth, an approach investigated by a few experiments over the past decade. For these volumes, scaling up the standard techniques starts to become cost-prohibitive (though doable in principle). Interestingly, new techniques become competitive with standard Cherenkov tracking, such as air-shower imaging (already well established by the cosmic-ray community) and radio detection (which has an excellent duty cycle). These methods no longer rely on the detection of the track produced by the neutrino-induced lepton, but instead focus on the particle cascade (shower) resulting from the decay of that lepton. Ultra-high-energy particles have a higher chance to decay and induce a particle shower, whatever the detection medium used. In the air, these showers, when produced by Earth-skimming neutrinos, propagate over tens to hundreds of kilometers and emit Cherenkov radiation, fluorescence light and electromagnetic radiation, all detectable on Earth, along with the particles themselves reaching the ground.
In denser media, such as ice or water, the showers extend over smaller distances, of the order of a few meters for the longitudinal profile and of a few centimeters for the lateral one. Nevertheless, these showers also emit Cherenkov radiation, and even radio waves, except in liquid water environments.

**Cherenkov imaging.** Particles from an air-shower move at relativistic speeds and therefore produce Cherenkov light along their path when propagating in the atmosphere. The imaging technique relies on so-called Cherenkov telescopes, similar to the telescopes used in standard astronomy: a primary mirror imaged by pixelated cameras. The system aims at imaging the nanosecond-scale Cherenkov flashes seen in the atmosphere on dark, moonless nights. This technique thus makes use of a vast natural medium, the atmosphere, to produce Cherenkov light. However, due to possible light contamination from the Sun, the Moon, and human activities, the duty cycle is rather limited. The technique has been successfully used with ground-based telescopes and onboard stratospheric balloons, and is envisioned for deployment in space.

**Fluorescence imaging.** Extensive air-showers produce fluorescence light when traveling through the atmosphere. The charged particles of the shower, mainly electrons and positrons, deposit energy in the molecules of the air, in the form of ionization and excitation [12]. Most of the fluorescence of an air-shower results from the excitation of two electronic states of the nitrogen molecule. Part of this excitation energy is then released as visible and UV light, with emission peaking in the \(300-400\,\mathrm{nm}\) band. The fluorescence technique allows for the most direct measurement of the development of the longitudinal profile of the air-shower [13]. It is therefore very well suited for the reconstruction of the energy and direction of the primary particle. However, it is critical to correctly understand the local atmospheric conditions and to model the fluorescence photon yield. The photon yield connects the detected photons to the energy deposited in the atmosphere by the shower, from which the energy of the primary particle is reconstructed. This quantity depends strongly on the atmospheric conditions: for instance, the emitted light undergoes Rayleigh scattering off atmospheric molecules, which absorbs part of the photon energy. Similarly to Cherenkov imaging, this technique has been successfully used on the ground and onboard balloons, and is envisioned for deployment in space.

**Radio detection.** The interaction of the shower particles with their environment also leads to the emission of radio waves [14]. The macroscopic features of this radio emission depend strongly on the type of medium where the cascade develops, since the medium affects both the typical size of the shower and the propagation of the radio emission. For instance, in-ice showers have dimensions on the order of tens to hundreds of centimeters, while air-showers reach tens to hundreds of kilometers. In particular, the particle front of the shower, also called the "pancake", has a typical thickness of a few centimeters in ice (a few meters in air) and a diameter of tens of centimeters in ice (tens of meters in air). For air-showers, the radiation results from two main mechanisms, with an intensity peaking in the MHz regime:
1. The geomagnetic emission: it is due to the deflection by the Lorentz force of the lightest charged particles in the shower, i.e., electrons and positrons, in opposite directions. This force induces a current that varies in time as the particle content of the shower evolves, leading to a radio signal polarized along the \(-\mathbf{v}\times\mathbf{B}\) direction (with \(\mathbf{B}\) being the direction of the magnetic field and \(\mathbf{v}\) the direction of the shower).
2. The charge-excess or Askaryan emission: while the shower propagates, electrons from air molecules (or water molecules in ice) are struck by high-energy shower particles and then travel along with the shower front. This, combined with positron annihilation, leads to the build-up of a net negative charge in the shower front. This negative charge excess can reach \(20-30\%\), and induces a dipole between the positively charged plasma behind the shower front and the electrons in the front, producing a signal radially polarized in the plane perpendicular to the shower axis.

The geomagnetic emission is dominant in the air, where the charge excess accounts for only \(\sim 1-20\%\) of the signal (depending on the geomagnetic orientation). It is, however, negligible in denser media such as ice (or rock), where the charge excess is the dominant emission mechanism. This can be explained by the extension of the shower and the density of the surrounding medium. At a specific angle, the radiation from the shower arrives simultaneously at the observer. The observed signal is therefore composed of a very brief and intense pulse in the time domain, and an extended emission in the frequency domain [14]. This geometrical time-compression effect is called the Cherenkov effect, and confines the emission to a cone with a typical aperture angle (the Cherenkov angle) of \(1^{\circ}\) for an air-shower and of roughly \(40^{\circ}\) to \(60^{\circ}\) for an in-ice shower (a numerical check of these apertures is given below). For extensive air-showers, the coherence of the signal is maintained outside the Cherenkov cone, even though the radio pulses get broader due to the differences in optical paths. For the same reason, but in a more critical regime because of the larger refractive index, the coherence is lost outside the Cherenkov cone for in-ice radio emission. The radio detection of the emission from particle showers has been extensively studied over the past decades, and benefits in particular from the long-standing experience of the radio telecommunication and radio astronomy fields (see e.g., [15] for a complete review). Thanks to the long attenuation length of radio waves, both in air (\(\sim 1000\,\mathrm{km}\)) and in ice (\(\sim 1\,\mathrm{km}\)), this detection technique is competitive with, and even superior to, optical techniques above \(10\,\mathrm{PeV}\).

### Alternative detection strategies

The great variety of environments and topographies around the Earth has been used to investigate different concepts of detection strategies and techniques: from deep ice caps up to space.

**Radio detection in the ice.** The ice provides a denser interaction medium than the air, which, in principle, results in a larger effective volume for in-ice radio detection than for the in-air technique. However, this is balanced by the shorter attenuation length of radio waves in ice compared to air. In addition, the emission is only coherent along the Cherenkov cone, and the variation of the refractive index with depth leads to ray-bending and refraction effects.
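Since the aperture of the radio Cherenkov cone is set by the refractive index of the medium, the angles quoted above can be checked directly from \(\cos\theta_{C}=1/(n\beta)\). A minimal sketch, assuming ultra-relativistic particles (\(\beta\approx 1\)) and representative refractive indices (\(n\approx 1.0003\) for air at ground level, \(n\approx 1.78\) for deep ice; both are illustrative values, not taken from the text):

```python
import numpy as np

# Cherenkov aperture angle: cos(theta_C) = 1 / (n * beta), with beta ~ 1.
# The refractive indices below are representative assumptions.
for medium, n in [("air", 1.0003), ("deep ice", 1.78)]:
    theta_c = np.degrees(np.arccos(1.0 / n))
    print(f"{medium}: theta_C ~ {theta_c:.1f} deg")
```

This yields \(\sim 1.4^{\circ}\) in air and \(\sim 56^{\circ}\) in deep ice, consistent with the \(\sim 1^{\circ}\) and \(40^{\circ}-60^{\circ}\) apertures quoted above.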
Together, the shorter attenuation length, the confinement of coherent emission to the Cherenkov cone, and ray bending can reduce the effective volume of in-ice radio detectors and complicate the reconstruction and interpretation of the data. The ice caps offer a gigantic interaction volume for neutrinos, which, in principle, makes possible the deployment of a detector with an effective volume of the order of several gigatons in a relatively radio-quiet environment. This detection strategy has been investigated at the South Pole by the pioneering experiments **RICE** (**Radio Ice Cherenkov Experiment**, 1995-2005) and **AURA** (**Antarctic Under-ice Radio Array**, 2003-2009) [16, 17]. They demonstrated the feasibility of the technique for two concepts: subsurface and deep-ice antenna arrays. The sub-surface arrays, while obviously more convenient to deploy, face more refraction and ray-bending effects than deep-ice ones, where the ice is colder and more stable. Following these two concepts, the **ARA** (**Askaryan Radio Array**) and **ARIANNA** (**Antarctic Ross Ice Shelf Antenna Neutrino Array**) experiments [18, 19] applied the same techniques on a larger scale. ARA is a deep-ice antenna concept, operating since 2010 near the Amundsen-Scott station at the South Pole, and is the evolution of AURA. Each station is situated at \(\sim 200\,\mathrm{m}\) depth and made of 4 strings carrying 16 cylindrical antennas operated in interferometric mode in the \(200-850\,\mathrm{MHz}\) band, instrumenting a volume of \(20\times 20\times 20\,\mathrm{m}^{3}\). The energy threshold is about \(\sim 10\,\mathrm{PeV}\), with an angular resolution of \(\sim 5^{\circ}\) [2]. The envisioned layout of 37 stations, arranged in a hexagon with a \(2\,\mathrm{km}\) spacing, would lead to an effective volume of \(200\,\mathrm{km}^{3}\) at \(1\,\mathrm{PeV}\). This volume, while the largest monitored by in-ice radio antennas, remains well below the required values: a volume at least 4 times larger would be needed to reach the ultra-high-energy fluxes predicted by cosmogenic models. ARIANNA is a sub-surface antenna experiment, originally located in Moore's Bay, \(110\,\mathrm{km}\) from McMurdo station, and in operation since 2012. Its goal is to observe the \(\sim 570\,\mathrm{m}\) thick ice layer above the Ross Sea. The initial detector concept relies on a layout of \(36\times 36\) stations on a \(1\,\mathrm{km}\) grid. Each station is made of 8 down-looking and 2 upward-looking antennas (for calibration and cosmic-ray veto), and operates between \(100-1300\,\mathrm{MHz}\). The detector profits from the radio reflections at the water-ice interface at the bottom of the ice shelf, which allows both direct and reflected signals to be detected, increasing the field of view and the effective volume by a factor of almost 2. However, since the antennas are deployed close to the upper layers of the ice (the firn), the attenuation length is only about \(400-500\,\mathrm{m}\). Consequently, the energy threshold is of order \(30\,\mathrm{PeV}\), and the angular resolution is between \(2.9^{\circ}-3.8^{\circ}\) [2]. A hexagonal array of pilot stations has been deployed since 2012 and gradually completed with up to 7 stations, plus a few test stations deployed in 2018. In addition, two stations have also been deployed at the South Pole. Yet, only a few dozen stations were deployed in total, far from the \(\sim 1000\) stations initially deemed necessary to reach the fluxes expected for cosmogenic neutrinos.
The two concepts depicted above have recently merged, combining resources and advantages, in the **Radio Neutrino Observatory in Greenland** [20] (**RNO-G**). It relies on the developments of both ARA and ARIANNA in order to reach the effective volume needed to detect ultra-high-energy neutrinos, while minimizing the required number of antenna stations. The array is located at Summit Station, profiting from the \(3\,\mathrm{km}\) thick layer of deep ice in central Greenland. Each station is composed of 24 antennas in total: a deep-ice log-periodic array (\(150-600\,\mathrm{MHz}\)), à la ARA, designed with a phased trigger system, and a set of subsurface antennas (\(100-1300\,\mathrm{MHz}\)), à la ARIANNA, oriented so as to fully measure the polarization of any shower-induced signal. The combination of the two techniques should improve the reconstruction capabilities of the station. The deep-ice array is made of 3 strings plunging into the ice sheet down to \(100\,\mathrm{m}\), and monitors a volume of ice of roughly \(1\,\mathrm{km}^{3}\). The expected energy threshold is \(50\,\mathrm{PeV}\), and the envisioned angular resolution is \(\sim 2^{\circ}\times 10^{\circ}\) [2]. The final configuration should consist of 35 independent stations separated by \(1.5\,\mathrm{km}\). The design of RNO-G will serve as a reference for building the IceCube-Gen2 radio detector (see the last paragraph of this section).

**Radio detection from the stratosphere.** The radio signals induced by particle cascades in the ice can be refracted up to the surface and propagate over long distances, thanks to the large attenuation length of radio waves in the air. Following this idea, it is possible to benefit from a high-altitude vantage point to monitor a huge volume of ice. From stratospheric altitudes (on average \(40\,\mathrm{km}\)), it is possible to scan up to \(\sim 650\,\mathrm{km}\) away (a simple geometric estimate of this range is sketched below), providing an equivalent detector volume of a million gigatons of ice, at the cost of a neutrino energy threshold in the EeV range. This strategy was followed by the **ANITA** missions (**ANtarctic Impulsive Transient Antenna**, see e.g., [21] and references therein), a series of NASA missions consisting of several flights of stratospheric balloons above the Antarctic ice cap, profiting from the wind vortex at the South Pole. The detector onboard the payload was designed to detect both in-ice and in-air showers, by measuring the signals from geomagnetic (in-air) and Askaryan (in-ice) radio emissions. Its location in the atmosphere allows for detecting both down-going and up-going trajectories, and both direct and reflected signals. A total of 5 missions (2006, 2008, 2009, 2014 and 2016) were launched, with successive improvements to the design. ANITA IV flew for \(\sim 30\,\mathrm{days}\) with the following payload: a radio instrument made of 48 dual-polarized quad-ridged horn antennas operating in the \(180-1200\,\mathrm{MHz}\) band. The array layout was designed on 3 cylindrical layers, covering the complete azimuth range in 16 sectors. The RF signal chain allowed for a "threshold-riding" trigger system constantly adjusting to the background, with an event rate up to \(50\,\mathrm{Hz}\). The energy threshold was \(0.1\,\mathrm{EeV}\), with an angular resolution of \(2.8^{\circ}\) [2].
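The \(\sim 650\,\mathrm{km}\) viewing range quoted above follows from simple line-of-sight geometry: from an altitude \(h\), the distance to the horizon is \(d=\sqrt{2R_{\oplus}h+h^{2}}\). A minimal sketch, assuming a mean Earth radius \(R_{\oplus}\approx 6371\,\mathrm{km}\) and neglecting atmospheric refraction:

```python
import numpy as np

# Line-of-sight horizon distance from altitude h: d = sqrt(2*R*h + h**2).
# R_EARTH is an assumed mean Earth radius; refraction is neglected.
R_EARTH = 6371.0  # km
h = 40.0          # km, typical balloon float altitude
d = np.sqrt(2.0 * R_EARTH * h + h**2)
print(f"horizon distance ~ {d:.0f} km")
```

The geometric horizon comes out at \(\sim 700\,\mathrm{km}\), of the same order as the quoted \(\sim 650\,\mathrm{km}\); the exact figure depends on the actual float altitude and on the viewing geometry of the antennas.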
The ANITA IV offline analysis used interferometry between the antennas to improve the signal-to-noise ratio, bringing the angular resolution down to \(0.1-0.2^{\circ}\) (Askaryan) and \(\sim 1^{\circ}\) (geomagnetic). Previous ANITA flights had observed anomalous up-going events [22] with very steep trajectories. These events do not show any phase inversion, unlike other cosmic rays observed below the horizon (due to the ice reflection), and still remain to be explained. During the last ANITA (IV) flight, 27 cosmic-ray events were clearly identified, among which 23 have the polarity expected from their geometry. However, 4 near-horizon events do not present any polarity inversion. These events are therefore inconsistent with cosmic-ray signals reflected off the ice, even though they are identified as coming out of the ice sheet. Two of these events present only a \(1-2\,\sigma\) significance on their arrival direction, and could thus in fact originate from above the horizon, in which case they would be compatible with cosmic rays. However, constraints on the propagation and coherence of the radio signal make this hypothesis unlikely. A detailed simulation study over the complete set of detected cosmic rays has shown a significance of \(3.3\pm 0.5\,\sigma\) for the detected anomalous events. This confidence level does not allow for a firm conclusion, but it suggests a new class of cosmic-ray-like events with Earth-skimming trajectories. Another possibility is that these events were produced by the decay in the atmosphere of a tau lepton induced by an Earth-skimming tau neutrino. If that were the case, current limits on the diffuse flux of tau neutrinos would rather suggest a point-source origin. In addition to these unprecedented observations, the ANITA missions have set the most stringent constraints on astrophysical neutrinos at GZK energies: \(E^{-2}\times 1.3\times 10^{-7}\,\)GeV cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\) at 90% C.L. for an \(E^{-2}\) spectrum in the range \(E_{\nu}=10^{18}-10^{23.5}\) eV.

The future mission of this kind is called **PUEO** [23] (**Payload for Ultrahigh Energy Observations**). It plans to improve by almost two orders of magnitude on the combined sensitivity of all previous ANITA missions (from ANITA I to ANITA IV), while keeping the same constraints on detector size (because of the balloon requirements). To tackle this challenge, PUEO will improve the ANITA detector design with recent developments in hardware and firmware, and split the instrument into two sub-instruments:

* the main instrument is composed of 108 quad-ridged horn antennas (more than twice as many as ANITA IV) in the \(300-1200\,\)MHz band, plus 12 antennas canted towards the ground and dedicated to disentangling the very steep trajectories. Furthermore, it will run a phased trigger system, allowing for a drastic reduction of the trigger threshold;
* the low-frequency instrument is composed of 8 sinuous antennas in the \(50-300\,\)MHz band. This instrument is dedicated to the detection of the radio emission from extensive air-showers induced by Earth-skimming tau neutrinos. For this channel, its effective area is twice that of the main instrument.

These performances are achieved thanks to the large collecting aperture of each antenna (about \(1.9\,\)m in diameter),
as well as to the chosen frequency band, in which the radio beam induced by the air-shower is larger by almost a factor of 2 compared to the band of the main instrument. Thanks to the split of the frequency band, the number of channels is doubled for the same detector size, and the radio noise (more intense in the low-frequency band) is restricted to only one instrument. The expected energy threshold will be similar to ANITA IV, and the online angular reconstruction will be improved thanks to the phasing of the antennas. Finally, PUEO is planning to fly during the austral summer of 2025-2026 from McMurdo, for a nominal flight time of 30 days.

**Imaging from the air and space.** Stratospheric balloons have also been investigated to monitor the atmosphere itself, using Cherenkov and fluorescence imaging. The large volume of atmosphere compensates for the more constrained duty cycle imposed by these light-detection techniques, which require very dark environments. The **Extreme Universe Space Observatory** (**EUSO**) is a succession of near-space and space-based missions whose goals are to validate the detection strategy of POEMMA (see below), which relies on the detection of fluorescence and Cherenkov emissions from ultra-high-energy particles [24]. These detection techniques have been intensively used by cosmic-ray experiments such as the **Pierre Auger Observatory** and the **Telescope Array**, which have also investigated the potential for neutrino detection through direct measurements of the particle content of neutrino-induced air-showers (see e.g., [1]). The latest mission of this type, called **EUSO-SPB2** (for **EUSO aboard a Super Pressure Balloon 2**), is the follow-up of **EUSO-Balloon** (2014) and **EUSO-SPB1** (2017) [25]. The mission is composed of two telescopes: a fluorescence telescope, pointing downward and measuring microsecond-scale fluorescence light from ultra-high-energy cosmic-ray tracks; and a Cherenkov telescope, pointing towards the limb and measuring nanosecond-scale Cherenkov emission produced by Earth-skimming neutrinos. It was launched from Wanaka, New Zealand, in the spring of 2023. The goals of this mission were to quantify the airglow background of the night sky near the Earth's limb for a future space mission, to measure hundreds of direct cosmic rays per hour, and to test the reconstruction procedures. Finally, a target-of-opportunity program was also planned for follow-ups of high-energy transient events. Unfortunately, due to a hole in the NASA balloon, the payload could not complete its mission, and the flight lasted only a few hours. Nevertheless, the technology could still be tested and validated to some extent. Furthermore, NASA offered a new flight in compensation, and the collaboration is already developing a new payload for a future mission. The **Probe of Extreme Multi-messenger Astrophysics** (**POEMMA**) is a NASA astrophysics probe-class space-based mission, and a potential candidate for a future NASA probe announcement of opportunity [24]. Its goal is to measure ultra-high-energy cosmic rays and cosmic neutrinos, using a wide field of view in combination with the Earth and its atmosphere as neutrino targets. To do so, it aims at detecting the optical signals from the extensive air-showers resulting from neutrino interactions.
The design of the mission relies on two identical spacecraft flying in loose formation, separated by 300 km at an altitude of 525 km, on an orbit inclined by \(28.5^{\circ}\). Each spacecraft will carry a Schmidt telescope with an optical collecting area of \(6\,\mathrm{m}^{2}\) and a field of view of \(45^{\circ}\). The focal plane of each telescope will be divided into 2 sections: one optimized for a fluorescence camera and receiving 80% of the mirror's light, and the other optimized for a Cherenkov camera (receiving 20%). The fluorescence camera will target the fluorescence light emitted by ultra-high-energy cosmic rays, and the Cherenkov camera the Cherenkov light emitted by Earth-skimming tau neutrinos. For the latter, the telescopes have to be sufficiently tilted in order to watch the Earth's limb. The planned energy threshold and angular resolution are \(10\,\mathrm{PeV}\) and \(0.4^{\circ}\) for the Cherenkov channel; for the fluorescence channel the threshold should be higher, with an angular resolution of \(1^{\circ}\) [2]. The separation between the spacecraft can be reduced to \(25\,\mathrm{km}\) to observe the light from upward-going air-showers, hence reducing the energy threshold for the detection of the neutrinos. The stereo observation mode allows for a more precise monitoring of \(10^{4}\) gigatons of atmosphere. The telescopes can repoint within 8 minutes, allowing for efficient follow-up searches across the sky. Finally, the orbital period of \(\sim 95\,\mathrm{minutes}\) provides full-sky coverage for sources of ultra-high-energy cosmic rays and neutrinos.

**Radio detection from the ground.** Ground-based radio detectors are also envisioned to detect ultra-high-energy neutrinos. The detection strategy relies on monitoring gigantic volumes of atmosphere to catch the air-showers induced by tau decays. In order to reach this goal, it must be demonstrated that sparse and extended arrays can be deployed and operated autonomously. Several radio experiments have demonstrated the feasibility of detecting (cosmic-ray-induced) air-showers: **CODALEMA**, **LOPES**, **AERA**, and **LOFAR** [26]. However, autonomous radio detection was investigated by only a few of them, such as **AERA** and **TREND** (the **Tianshan Radio-array for Neutrino Detection** [27], 2009-2013). TREND took advantage of the radio-quiet environment and the mountainous topography of a remote valley in the Tianshan mountains of Western China to pave the way for the autonomous radio detection of air-showers. This achievement has opened the way to a large-scale radio array called **GRAND** [28] (**Giant Radio array for Neutrino Detection**). GRAND is a planned large-scale radio experiment dedicated to the detection of ultra-high-energy messengers with energies above \(50\,\mathrm{PeV}\), with a main focus on ultra-high-energy neutrinos. It will consist of a radio array of \(\sim 200\,000\) antennas over \(200\,000\,\mathrm{km}^{2}\), deployed in several mountainous regions around the world. The focus is put on very inclined air-showers, in order to detect tau neutrinos on Earth-skimming trajectories that cross a dense medium, such as a mountain or, for up-going trajectories, the Earth's surface. The deployment of the GRAND experiment is expected to be staged, i.e., divided into 3 main steps: GRANDProto300, GRAND10k and GRAND200k.
GRANDProto300 (GP300) is the deployment of the first 300 antennas over \(\sim 200\,\mathrm{km}^{2}\) in China, to detect cosmic rays and possibly gamma rays in the \(10^{16.5}-10^{18}\,\mathrm{eV}\) energy range. It will serve as a test bench for the GRAND experiment, validating the feasibility of detecting and reconstructing highly inclined showers (\(\theta>80^{\circ}\)), probing autonomous radio detection on large-scale arrays, and targeting an angular reconstruction below \(0.1^{\circ}\). GRAND10k, expected for 2028, will consist of two sites, GRAND North (Gobi Desert, China) and GRAND South (Argentina), with 5-10k antennas each, to work on issues related to large-scale arrays and to detect the first ultra-high-energy neutrinos in the case of optimistic fluxes. Two sites are ideal for full-sky coverage, and allow testing various types of environments and the related technical issues. Finally, GRAND200k will consist of 20 sub-arrays of 10 000 antennas spread around the world, to reach the sensitivity necessary to ensure the detection of the ultra-high-energy neutrino fluxes. At the moment, three prototypes are deployed:

* GP300 (300 antennas) in the Gobi Desert in China,
* GRAND@Auger (10 antennas) at the Pierre Auger Observatory, for cross-calibration,
* GRAND@Nancay (4 antennas) at the Nancay Radio Observatory in France, as a test bench.

With these prototypes, the collaboration will validate the autonomous radio detection technique, calibrate its antennas, and develop efficient reconstruction methods for very inclined air-showers. An alternative to the concept of sparse radio arrays is followed by **BEACON** [29] (**Beamforming Elevated Array for COsmic Neutrinos**). This project plans to use the radio interferometry technique in the \(30-80\,\mathrm{MHz}\) range to detect events induced by Earth-skimming tau neutrinos. The concept consists in deploying antenna stations atop high-elevation mountains, in order to increase the field of view towards the ground. This strategy increases the collecting area for radio signals from events emerging below the horizon and propagating on upward trajectories. The expected energy threshold is \(30\,\mathrm{PeV}\), and the angular resolution should lie between \(0.3^{\circ}-1^{\circ}\) [2]. BEACON thus builds on two key elements: first, the radio-interferometry technique, extensively used in radio astronomy for high-sensitivity observations; and second, its mountaintop sites, which provide a large field of view. Preliminary simulation studies have shown that, in principle, BEACON could reach the fluxes of ultra-high-energy neutrinos with a thousand antenna stations. Currently, a prototype station made of 8 antennas is deployed at the Barcroft Station in the White Mountains of California. The prototype is used as a test bench for calibration and for data analysis of cosmic-ray observations. From this prototype, a phase of gradual upscaling should follow, consisting in the deployment of a thousand stations.

**Hybrid detector.** In principle, nothing prevents combining different detection methods within the same detector. This strategy is followed by **IceCube-Gen2** [30], the second generation of the IceCube Neutrino Observatory, which will target neutrinos in the TeV to EeV energy range.
In order to achieve this, it will rely on a design made of three subcomponents: an optical detector focusing on high-energy neutrinos, a large and sparse radio array targeting ultra-high-energy neutrinos, and a hybrid surface detector dedicated to the detection and veto of cosmic-ray-induced extensive air-showers. The optical component (Gen2-Optical) will encompass a detection volume of \(8\,\mathrm{km}^{3}\), including the existing optical array. It will be made of 120 strings with 80 DOMs per string (totaling 8160 DOMs, compared to the present 5160). The strings will be deployed between \(1344-2689\,\mathrm{m}\) below the surface. The string spacing will be \(240\,\mathrm{m}\) (instead of \(125\,\mathrm{m}\) in the present IceCube), in order to significantly increase the volume of the instrument while maintaining a competitive energy threshold (\(\sim 5\,\mathrm{TeV}\)) and reconstruction performance (\(\sim 0.3^{\circ}-10^{\circ}\)) [2]. The strings will be deployed in a sunflower pattern to improve the azimuthal homogeneity. Finally, the DOMs will have a photon collection about three times larger than the current one, thanks to a multi-PMT design inspired by other experiments. The sparse radio detector (Gen2-radio) will extend the energy reach up to the EeV regime, and will be located next to the optical Cherenkov detector. At the South Pole, the ice near the surface has an attenuation length of \(2\,\mathrm{km}\). The radio array will cover a surface of about \(500\,\mathrm{km}^{2}\). It will be composed of two types of radio detectors: hybrid stations à la RNO-G at \(\sim 150\,\mathrm{m}\) below the surface, and antenna arrays à la ARIANNA close to the surface, with one dipole at \(\sim 15\,\mathrm{m}\) below the surface. The expected energy threshold for the radio detector is \(\sim 10\,\mathrm{PeV}\), with an angular resolution comparable to that of RNO-G [2]. Several other projects, which cannot be detailed here, illustrate the great diversity of concepts in the field. They employ various detection strategies, such as interferometry of in-ice or lunar radio emission (e.g., **TAROGE-M** and **SKA**), detection through radar echoes (**RET**), arrays of Cherenkov tanks deployed in deep valleys (**TAMBO**), or ground-based telescopes using Cherenkov imaging (**TRINITY**). Their descriptions can be found in [1].

## 5 Summary

Over the last decades, many technical and technological developments have been accomplished in order to detect the first astrophysical neutrinos. One of the greatest challenges in this endeavor, besides the low interaction probability of the neutrino, is the large range of expected fluxes and energies. Consequently, the detection strategies and techniques are tailored to each targeted energy range. As seen in this review, and illustrated in Figure 2, these strategies and techniques vary greatly: from dense and compact underground detectors to gigantic radio arrays on the ground and below the ice, not forgetting near-space and space-borne instrumented payloads. Thanks to these efforts, the neutrino sky is starting to become visible to us. In particular, underground detectors were the first to detect a Galactic supernova, and to confirm in situ the chains of nuclear reactions taking place in the core of the Sun. However, their limited size and compactness, which grant us access to these "low-energy" neutrinos, do not allow the detection of neutrinos at higher energies.
Neutrinos in this high-energy regime are expected from the interactions of accelerated particles; hence, they hold some of the answers about the acceleration mechanisms at work within the sources. The neutrino sky in this energy range has become accessible thanks to the scaling of underground detectors up to cubic-kilometer volumes, using natural targets such as ice, sea and lake water. For these detectors, the origin of the observed neutrino sky remains largely unknown. Only a few sources have been identified so far, and much effort is still required to reveal the origins of what is seen in the current data. Great prospects are therefore expected from the new detectors currently under construction, as they will increase the event statistics and open new regions of the sky. At the most extreme energies, no neutrinos have been detected yet. The uncertain predictions for these drastically low fluxes are a challenge for the design of the experiments. Consequently, a great diversity of detector concepts is being tested and matured. These concepts push back the technical limits of astroparticle detection and have renewed the detection strategies. Besides the expected and exciting detection of the first ultra-high-energy neutrinos, these experiments will also provide many technological advances and novel analysis methods. Finally, a promising avenue for neutrino astronomy is the interplay with multi-messenger astronomy. In addition to being a crucial element of multi-messenger astrophysics, neutrino detectors can greatly benefit from alerts and coincident detections, as illustrated by the case of TXS0506+056. Since transient sources are expected to contribute significantly in the high-energy and ultra-high-energy regimes, optimal synergies are required between the various neutrino detectors and the electromagnetic telescopes. The curious reader is strongly encouraged to look at Tables 1 and 2 of [2], which exhaustively review these potential synergies. The neutrino sky has only just started to be revealed to us, and high-energy astrophysics and particle physics have already been marked by its imprint. In less than 30 years, neutrino experiments have evolved from simple detectors to complete telescopes, and have started the new field of neutrino astronomy.

Figure 2: Timeline of the detectors, and of the few known neutrino sources, discussed in this review, as a function of their peak energy. Only a few of the many experiments designed have succeeded in detecting the high-energy neutrino fluxes. A handful of sources have been identified, and the ultra-high-energy realm remains untouched. This illustrates where the efforts of the next decades might focus.

**Acknowledgement** I wish to thank my closest scientific colleagues and friends for their careful reading, suggestions, and discussions: Claire Guepin, Kumiko Kotera, and Olivier Martineau-Huynh. I also thank my colleagues Richard Dallier and Lilian Martin for discussions on the content of these proceedings.
2309.14430
The Bethe Ansatz as a Quantum Circuit
The Bethe ansatz represents an analytical method enabling the exact solution of numerous models in condensed matter physics and statistical mechanics. When a global symmetry is present, the trial wavefunctions of the Bethe ansatz consist of plane wave superpositions. Previously, it has been shown that the Bethe ansatz can be recast as a deterministic quantum circuit. An analytical derivation of the quantum gates that form the circuit was lacking, however. Here we present a comprehensive study of the transformation that brings the Bethe ansatz into a quantum circuit, which leads us to determine the analytical expression of the circuit gates. As a crucial step of the derivation, we present a simple set of diagrammatic rules that define a novel Matrix Product State network building Bethe wavefunctions. Remarkably, this provides a new perspective on the equivalence between the coordinate and algebraic versions of the Bethe ansatz.
Roberto Ruiz, Alejandro Sopena, Max Hunter Gordon, Germán Sierra, Esperanza López
2023-09-25T18:00:06Z
http://arxiv.org/abs/2309.14430v2
# The Bethe Ansatz as a Quantum Circuit

###### Abstract

The Bethe ansatz represents an analytical method enabling the exact solution of numerous models in condensed matter physics and statistical mechanics. When a global symmetry is present, the trial wavefunctions of the Bethe ansatz consist of plane wave superpositions. Previously, it has been shown that the Bethe ansatz can be recast as a deterministic quantum circuit. An analytical derivation of the quantum gates that form the circuit was lacking, however. Here we present a comprehensive study of the transformation that brings the Bethe ansatz into a quantum circuit, which leads us to determine the analytical expression of the circuit gates. As a crucial step of the derivation, we present a simple set of diagrammatic rules that define a novel Matrix Product State network building Bethe wavefunctions. Remarkably, this provides a new perspective on the equivalence between the coordinate and algebraic versions of the Bethe ansatz.

## I Introduction

In 1931 Hans Bethe introduced an ansatz for the eigenstates of the antiferromagnetic Heisenberg Hamiltonian in a closed spin-1/2 chain [1]. This ansatz involves the summation of permutations of plane waves, with their quasi-momenta coupled through transcendental equations known as Bethe equations. This ground-breaking study marked the inception of exactly solvable models in condensed matter physics and statistical mechanics [2; 3; 4]. Subsequently, the Bethe ansatz was expanded to encompass the XXZ model [5], the one-dimensional Bose gas with delta function interactions [6], the Hubbard model [7], and a plethora of other systems. At the core of the exact solutions for these models lies their integrability, which denotes the presence of an infinite number of conserved quantities. The latter naturally emerge within the framework of the Algebraic Bethe Ansatz (ABA), pioneered by Faddeev's school in the 1970s and 1980s [8]. This approach unveils the algebraic underpinnings of the Bethe ansatz, which are based on an \(R\) matrix that fulfils the Yang-Baxter equation. In recent years the study of quantum many body systems has been strongly influenced by quantum information theory and quantum computing. In turn, this has led to a more profound understanding of the structure of many body wavefunctions, in particular with regards to entanglement structure. It was discovered that tensor networks provide the natural language with which to express entanglement structure in many body wavefunctions. A one-dimensional tensor network is a Matrix Product State (MPS); for a summary see [9; 10; 11]. In view of these developments, it was natural to revisit the Bethe ansatz and explore how it could be reframed in the language of tensor networks. This was undertaken in [12], where a huge variety of exact integrable quantum chains were expressed in a unified way using matrix product ansätze, which are essentially equivalent to the ABA [13; 14]. In parallel, the quantum computing community pursued the construction of efficient quantum circuits to prepare many body wavefunctions on quantum computers, taking advantage of the exact solutions derived for models such as the XY and Ising models. This was achieved through the mapping of these models to free fermions [15; 16]. However, the challenge of determining how to construct quantum circuits for models that involve interactions remains. In this work, we directly address this problem.
We do so by considering a highly pertinent question: could the Bethe ansatz, which describes interacting models, be adapted to the newly available quantum computers [17]? Among the reasons behind this inquiry is the potential to compute traditionally inaccessible quantities via measurement, such as correlation functions with arbitrary ranges and higher orders [18]. This might lead to a quantum advantage within this particular framework, which could be realised on currently available quantum computers. Progress in this direction was made in [19; 20], where an algorithm to prepare eigenstates of the XXZ model was introduced. The algorithm complexity was polynomial in both the number of \(T\) gates and circuit depth. However, its probabilistic nature causes the success rate to diminish exponentially [21]. Its applicability was moreover confined to real-valued solutions of the Bethe equations, which presented a significant limitation. A different pathway for creating a quantum circuit linked to the Bethe ansatz was introduced in [22], by using its algebraic representation. In contrast to the previously discussed approach rooted in the original plane wave ansatz, this alternative method leverages the interpretation of the ABA as an MPS. Consequently, it could be transformed into a quantum circuit using conventional techniques from the field of tensor networks. The resulting circuit, termed Algebraic Bethe Circuit (ABC), offers the advantages of being deterministic and applicable to any solution, whether real or complex, of the Bethe equations. ABCs are formulated in terms of multiqubit quantum gates. The algorithm complexity therefore translates into the efficiency of their decomposition in terms of one-qubit and two-qubit unitaries. While an efficient decomposition was proposed for the noninteracting XX chain, no conclusive answer was found for the XXZ model. The difficulty inherent to this problem was exacerbated by the lack of analytical expressions in a closed form for the gates comprising the circuit. Indeed, closed expressions were only obtained for scenarios involving one or two magnons, while more complicated cases were addressed numerically. In this paper we comprehensively study the transformation leading to the ABC and arrive at an analytical expression for the quantum gates that form it. Instead of relying on the MPS interpretation of the ABA, we use a simpler MPS structure underlying the Bethe ansatz. This structure turns out to be naturally linked to the Bethe plane wave ansatz, also known as Coordinate Bethe Ansatz (CBA). Remarkably, the CBA-related MPS can be encoded in a set of diagrammatic rules, which we exploit to present our findings. The shift from the ABA to the CBA ultimately enables us to provide complete analytical expressions for the unitary matrices that comprise the quantum circuit. These results are summarised in the following figure (1). The equivalence between the ABA and the CBA was proven soon after the systematic analysis of quantum integrability began. This equivalence, which holds independently of the Bethe equations, is highly nontrivial [23]. Interestingly, the MPS reformulation of the coordinate and algebraic versions of the Bethe ansatz will lead us to novel insights into their equivalence, from which we recover the integrability constraints contained in the Yang-Baxter equation.
The paper is organised as follows: Section II provides a review of the Bethe wavefunction for the XXZ model, its representation through the ABA, and introduces the quantum circuit structure outlined in [22]. Section III presents the Hilbert space structure based on the \(U(1)\) symmetry of the model, highlighting the resulting gate decomposition within the quantum circuit. Section IV introduces the proposed analytic form of the general ABC gates, drawing inspiration from the simple one-magnon case. Section V introduces a set of diagrammatic rules defining a MPS that builds Bethe wavefunctions. Section VI implements the transformation of this MPS into canonical form [9; 10; 11], such that valid quantum gates can be distilled. Section VII shows how to obtain a deterministic quantum circuit. Section VIII offers a proof of the unitarity of the proposed circuit. Section IX proposes an alternative understanding of the equivalence between the ABA and the CBA. Section X specialises the construction to the XX model, and presents an efficient decomposition of the associated ABC gates into two-qubit unitaries. Section XI concludes by summarising key findings and contributions. Several appendices collect technical details.

## II Algebraic Bethe Circuits

In this section we introduce the trial wavefunctions that underlie the Bethe ansatz, and review the construction of their quantum circuit representation proposed in [22].

### Bethe wavefunctions

The trial wavefunctions of the Bethe ansatz are linear superpositions of spin waves, also called magnons. Their construction relies upon the presence of a \(U(1)\) symmetry whose conserved quantity is the projection of the total spin along a chosen direction, which customarily is taken to be the \(z\) axis. The \(U(1)\) symmetry implies the conservation of the total number of magnons. Due to integrability, the interaction among magnons factorises into two-body scattering events that preserve the value of their individual momenta. Throughout the paper we focus on a spin-\(1/2\) chain. Together with the requirements of integrability, \(U(1)\) symmetry, and nearest-neighbour interactions, this leads to the XXZ Heisenberg model described by the Hamiltonian \[H=\sum_{j=1}^{N}\left(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{y}\sigma_{j+1}^{y}+\Delta\,\sigma_{j}^{z}\sigma_{j+1}^{z}\right)\,, \tag{2}\] where \(\sigma_{j}^{\alpha}\) (\(\alpha=x,y,z\)) are the Pauli matrices acting at the site \(j=1,\ldots,N\), and \(\sigma_{N+1}^{\alpha}=\sigma_{1}^{\alpha}\) for periodic boundary conditions. One spin-\(1/2\) site and one qubit share the same Hilbert space, namely, \(\mathcal{H}=\mathbb{C}^{2}\). We will then identify the spin basis with the computational basis of a qubit by \[\ket{\uparrow}\equiv\ket{0}\,,\quad\ket{\downarrow}\equiv\ket{1}. \tag{3}\] The \(U(1)\) symmetry allows us to choose \(\ket{0_{N}}=\ket{0}^{\otimes N}\) as a reference state, where \(N\) is the total number of sites of the chain. Magnons are spin waves over this reference state, such that the number of magnons counts the number of sites in the state \(\ket{1}\). Imposing periodic boundary conditions on the trial wavefunctions leads to the Bethe equations for the magnon momenta. In this paper, however, we will only be concerned with the preparation of the trial wavefunctions on a quantum computer. We refer to them as Bethe _wavefunctions_, as opposed to Bethe _eigenstates_ which are the result of applying the Bethe equations.
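As a concrete companion to these conventions, the following minimal script (our own illustration, not part of the original construction; sites are indexed from 0 and all matrices are dense) builds the Hamiltonian (2) with periodic boundary conditions and verifies the \(U(1)\) symmetry, i.e. that the total magnetization commutes with \(H\):

```python
# Build the XXZ Hamiltonian (2) and check [H, Sz] = 0 (U(1) symmetry).
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(op, j, N):
    """Embed a single-site operator at site j of an N-site chain."""
    return reduce(np.kron, [op if k == j else I2 for k in range(N)])

def xxz_hamiltonian(N, Delta):
    H = np.zeros((2**N, 2**N), dtype=complex)
    for j in range(N):                      # periodic: site N+1 is site 1
        jn = (j + 1) % N
        H += op_at(sx, j, N) @ op_at(sx, jn, N)
        H += op_at(sy, j, N) @ op_at(sy, jn, N)
        H += Delta * op_at(sz, j, N) @ op_at(sz, jn, N)
    return H

N, Delta = 6, 0.7
H = xxz_hamiltonian(N, Delta)
Sz = sum(op_at(sz, j, N) for j in range(N))   # conserved magnetization
print(np.linalg.norm(H @ Sz - Sz @ H))        # ~1e-13: the commutator vanishes
```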
The boundary conditions, therefore, must be addressed separately. As an example let us consider the Bethe wavefunction describing two magnons supported by \(N\) qubits. It reads \[|\Psi_{N}^{(2)}\rangle=\sum_{\begin{subarray}{c}n_{1},n_{2}=1\\ n_{1}<n_{2}\end{subarray}}^{N}\Bigl(s_{21}\,x_{1}^{n_{1}-1}x_{2}^{n_{2}-1}-s_{12}\,x_{1}^{n_{2}-1}x_{2}^{n_{1}-1}\Bigr)|n_{1}n_{2}\rangle\,, \tag{4}\] where we defined the variables \(x_{a}=e^{ip_{a}}\), with \(p_{a}\) the magnon momenta, and \(|n_{1}n_{2}\rangle\) denotes the state of the computational basis of \(N\) qubits with \(|1\rangle\) at positions \(n_{1}\) and \(n_{2}\). The only input of the construction is the scattering amplitude \[s_{12}=s(p_{1},p_{2})\,,\qquad s_{21}=s(p_{2},p_{1})\,, \tag{5}\] which encodes the scattering \(S\) matrix between two magnons \[S(p_{1},p_{2})=-\frac{s_{12}}{s_{21}}. \tag{6}\] In the XXZ model the function \(s_{12}\) is given by \[s_{12}=1+x_{1}x_{2}-2\Delta\,x_{2}\,, \tag{7}\] where \(\Delta\) is the coupling constant appearing in the Heisenberg Hamiltonian (2). If the scattering amplitude is symmetric, the \(S\) matrix reduces to a sign flip, in which case there exists a Jordan-Wigner transformation that maps the system to free fermions [24]. This result is consistent with the antisymmetry of the wavefunction under the exchange of momenta \(p_{1}\) and \(p_{2}\). For \(M\) magnons, the Bethe wavefunction is \[|\Psi_{N}^{(M)}\rangle=\sum_{\begin{subarray}{c}n_{m}=1\\ n_{m}<n_{m+1}\end{subarray}}^{N}\sum_{a_{m}=1}^{M}\epsilon_{a_{1}\ldots a_{M}}\prod_{\begin{subarray}{c}p,q=1\\ p>q\end{subarray}}^{M}s_{a_{p}a_{q}}\,x_{a_{1}}^{n_{1}-1}\cdots x_{a_{M}}^{n_{M}-1}\,|n_{1}\ldots n_{M}\rangle\,, \tag{8}\] where \(\epsilon_{a_{1}\ldots a_{M}}\) is the Levi-Civita symbol. The ABA offers an alternative construction of the same wavefunctions: up to a proportionality factor, the state (9) is obtained by repeatedly acting on the reference state with creation operators built from the \(R\) matrices of the model. This construction can be recast as a network (11) in which copies of a tensor \(\mathscr{R}_{T}\) act on the \(N\) physical qubits together with \(M\) ancillary qubits, prepared in \(|1_{M}\rangle\) and projected onto \(|0_{M}\rangle\) at the end. In [22] a procedure was presented to distil unitary gates out of \(\mathscr{R}_{T}\), and at the same time eliminate the ancillary qubits. The procedure eventually supplies the following deterministic quantum circuit (12) which was named Algebraic Bethe Circuit or ABC. The gates \(P_{k}\) are the solution to certain recursion relations involving \(\mathscr{R}_{T}\). These recursion relations were solved numerically in [22], although analytical expressions were obtained for wavefunctions describing one and two magnons. In this paper we follow a different approach. Instead of focusing on the recursion relations, we will propose an informed ansatz for the ABC gates. A partial proof and strong numerical arguments will then be given in favour of its validity. In this way, we derive the complete solution of the quantum circuit preparing the wavefunction (8) for an arbitrary number of magnons and sites.
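For reference, the unnormalized wavefunction (8) can be generated by brute force. The following sketch is ours, not the paper's; it has exponential cost and is intended only for cross-checking small cases. It uses the XXZ amplitude (7) and labels computational basis states by the integer convention of (17) below:

```python
# Direct construction of the unnormalized Bethe wavefunction (8).
import numpy as np
from itertools import combinations, permutations

def levi_civita(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    sign, seen = 1, list(perm)
    for i in range(len(seen)):
        while seen[i] != i:
            j = seen[i]
            seen[i], seen[j] = seen[j], seen[i]
            sign = -sign
    return sign

def bethe_state(N, p, Delta):
    """Amplitudes of |Psi_N^(M)> in (8) on the full 2^N-dimensional space."""
    M = len(p)
    x = np.exp(1j * np.array(p))
    s = lambda a, b: 1 + x[a] * x[b] - 2 * Delta * x[b]   # s_{ab}, eq. (7)
    psi = np.zeros(2**N, dtype=complex)
    for ns in combinations(range(N), M):       # magnon positions n_1 < ... < n_M
        amp = 0.0
        for a in permutations(range(M)):       # sum over the labels a_1, ..., a_M
            coeff = levi_civita(a)
            for pi in range(M):
                for qi in range(pi):
                    coeff *= s(a[pi], a[qi])   # product over p > q of s_{a_p a_q}
            amp += coeff * np.prod(x[list(a)] ** np.array(ns))
        psi[sum(2**n for n in ns)] = amp       # basis index with |1> at positions ns
    return psi
```

For \(M=2\) this reproduces the amplitudes of (4), which is a convenient unit test for any circuit implementation.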
Although integrability motivates (8) and determines the scattering amplitude to have the form (7), we note that the construction presented in the next sections applies to an arbitrary scattering amplitude \(s_{12}\).

## III Symmetry sectors

We introduce here our notation, which relies on the assumed \(U(1)\) symmetry of the integrable chain. We first present an efficient way to index the computational basis states belonging to different symmetry sectors. We then exploit both the symmetry and the staircase structure of the ABC to break down the \(P_{k}\) gates into a collection of restricted maps.

### Hilbert space

The Hilbert space of \(k\) qubits, \(\mathcal{H}_{k}=(\mathbb{C}^{2})^{\otimes k}\), is spanned by the computational basis \[\mathcal{H}_{k}=\text{Span}\left\{|i_{1}\ldots i_{k}\rangle\;:\;i_{j}=0,1\right\}\,. \tag{13}\] We stress that we order qubits from left to right. The Hilbert space \(\mathcal{H}_{k}\) decomposes into symmetry sectors with definite total spin along the \(z\) axis, which, by the assumption of \(U(1)\) symmetry, will be preserved by the quantum gates. The decomposition is \(\mathcal{H}_{k}=\oplus_{r=0}^{k}\mathcal{H}_{r,k}\), where \(\mathcal{H}_{r,k}\) is the subspace with \(r\) qubits at \(|1\rangle\), \[\mathcal{H}_{r,k}=\text{Span}\left\{|i_{1}\ldots i_{k}\rangle\;:\;\sum_{j=1}^{k}i_{j}=r\right\}. \tag{14}\] The restriction of the computational basis to the \(r\) symmetry sector can be equivalently described with the notation \(|n_{1}\ldots n_{r}\rangle\) of Section II, which singles out the location of qubits at \(|1\rangle\) (15). When we need to graphically represent the state of the qubits, we shall use a red line for \(|1\rangle\) and a black line for \(|0\rangle\). The dimension of \(\mathcal{H}_{r,k}\) is the binomial coefficient \[d_{r,k}=\frac{k!}{r!(k-r)!}\,, \tag{16}\] whose sum, using the binomial expansion, gives the dimension of \(\mathcal{H}_{k}\). To describe the symmetry structure of the quantum gates, it is convenient to label the basis states of each sector with an integer \(a=1,\ldots,d_{r,k}\). This is easily achieved as follows. Let the tuple \((i_{1}\ldots i_{k})_{r}\) describe a computational basis state belonging to \(\mathcal{H}_{r,k}\). We can use its binary digits to define an integer \[(i_{1}\ldots i_{k})_{r}\;\rightarrow\;I=\sum_{j=1}^{k}\,i_{j}\;2^{j-1}. \tag{17}\] We sort every possible \(I\) in an ordered list and assign to each \(I\) the number \(a\) that indexes its position in the list. We illustrate the map with the example \(k=4\) and \(r=2\) in the table below. \[\begin{array}{cccccccc}i_{1}&i_{2}&i_{3}&i_{4}&I&a&n_{1}&n_{2}\\ \hline 1&1&0&0&3&1&1&2\\ 1&0&1&0&5&2&1&3\\ 1&0&0&1&9&4&1&4\\ 0&1&1&0&6&3&2&3\\ 0&1&0&1&10&5&2&4\\ 0&0&1&1&12&6&3&4\\ \hline\end{array} \tag{18}\]

### Gate structure

The quantum circuit (12) creates a Bethe wavefunction supported by \(N\) qubits. It contains \(N-1\) gates \(P_{k}\) which can be classified into two types. When \(k\) is smaller than the number of magnons, \(M\), the gates \(P_{k}\) act on a number of qubits that grows as \(k+1\). We will refer to them as short gates. They are crucial for obtaining a deterministic quantum circuit. In contrast, the gates with \(k\geq M\) act on a fixed number, \(M+1\), of qubits. We will call them long gates. First we describe the symmetry structure of the long gates. Their rightmost input qubit is by construction always in \(|0\rangle\), as can be seen in (12).
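Before specifying these maps, the sector labelling above and the block structure that the \(U(1)\) symmetry imposes on the gates can be made concrete in a few lines of code (our illustration; random unitary blocks stand in for the actual gates, and the block-diagonal form assembled here is the one written below in (22)):

```python
# Sector labelling of Section III.A and the U(1)-induced block structure.
import numpy as np
from itertools import combinations
from scipy.stats import unitary_group

def sector_indices(r, k):
    """Sorted integers I = sum_j i_j 2^(j-1) labelling H_{r,k}, cf. (17)-(18)."""
    return sorted(sum(2**(j - 1) for j in ones)
                  for ones in combinations(range(1, k + 1), r))

# Reproduce the labelling of table (18) for k = 4, r = 2.
for a, I in enumerate(sector_indices(2, 4), start=1):
    print(f"I = {I:2d}  ->  a = {a}")

# Assemble a (k+1)-qubit block-diagonal gate and check it is unitary.
k = 3
n = k + 1
P = np.zeros((2**n, 2**n), dtype=complex)
for r in range(n + 1):
    idx = sector_indices(r, n)
    block = unitary_group.rvs(len(idx)) if 0 < r < n else np.eye(len(idx))
    P[np.ix_(idx, idx)] = block               # sector block placed in H_{r,n}
print(np.allclose(P.conj().T @ P, np.eye(2**n)))   # True
```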
Since that rightmost input is fixed, for our purposes only the map \[P_{k}|0\rangle:\mathcal{H}_{M}\longrightarrow\mathcal{H}_{M+1} \tag{19}\] is relevant. Due to the underlying symmetry, this map decomposes as \(P_{k}|0\rangle=\oplus_{r=0}^{M}P_{k}^{(r)}\), where \(P_{k}^{(r)}\) is the restriction of (19) to input and output configurations with \(r\) qubits in \(|1\rangle\). In components, it reads \[P_{k}|0\rangle=\left(\begin{array}{cccc}1&0&\cdots&0\\ 0&P_{k}^{(1)}&\cdots&0\\ \vdots&\vdots&&\vdots\\ 0&0&\cdots&P_{k}^{(M)}\\ 0&0&\cdots&0\end{array}\right)\,. \tag{20}\] Without loss of generality, we have chosen \(P_{k}^{(0)}=1\). The last row vanishes because there is no input state in the symmetry sector \(r=M+1\). The short ABC gates are \((k+1)\)-qubit unitaries with no restriction on the input configurations. Therefore we need to determine the complete map \[P_{k}:\mathcal{H}_{k+1}\longrightarrow\mathcal{H}_{k+1}. \tag{21}\] Breaking down into components restricted to different symmetry sectors, we have \(P_{k}=\oplus_{r=0}^{k+1}P_{k}^{(r)}\), or equivalently \[P_{k}=\begin{pmatrix}1&0&\cdots&0&0\\ 0&P_{k}^{(1)}&\cdots&0&0\\ \vdots&\vdots&&\vdots&\vdots\\ 0&0&\cdots&P_{k}^{(k)}&0\\ 0&0&\cdots&0&1\end{pmatrix}\,, \tag{22}\] where we have set \(P_{k}^{(0)}=P_{k}^{(k+1)}=1\). Up to now we have been exploiting the \(U(1)\) symmetry of the integrable model. We can also consider the staircase structure of the ABC. As is clear from (12), the leftmost output qubit of each \(P_{k}\) is not affected by any other quantum gate. Hence, it determines how many qubits in the state \(|1\rangle\) will flow through the rest of the circuit. It is thus convenient to further split the symmetry reduced maps \(P_{k}^{(r)}\) into two components \(P_{k}^{(i,r)}\), with \(i=0,1\), which make explicit the final state of this qubit \[P_{k}^{(r)}=\left(\begin{array}{c}P_{k}^{(1,r)}\\ P_{k}^{(0,r)}\end{array}\right). \tag{23}\] Graphically, we have (24). The input and output configurations to the left map satisfy \[\sum_{l=1}^{M}\,i_{l}=r\,,\qquad\sum_{l=1}^{M}\,j_{l}=r-i. \tag{25}\] An analogous condition holds for the right map \[\sum_{l=1}^{k+1}\,i_{l}=r\,,\qquad\sum_{l=1}^{k}\,j_{l}=r-i. \tag{26}\] We have used rounded boxes in (24) to stress that \(P_{k}^{(i,r)}\) need not be unitary. We have also introduced a notation that will serve us throughout the paper. We denote in parentheses the state of those qubits which are already explicitly or implicitly described by \(P_{k}^{(i,r)}\), and therefore do not count as input or output to these matrices. This is the case for the leftmost output qubit, associated with the superscript \(i\), and for the rightmost input qubit of long gates, which is fixed to \(|0\rangle\). The main properties of the matrices \(P_{k}^{(i,r)}\) are collected in the following table. \[\begin{array}{c|c|c|c}P_{k}^{(i,r)}&\text{input}&\text{output}&\text{dimensions}\\ \hline k<M&\mathcal{H}_{r,k+1}&\mathcal{H}_{r-i,k}&d_{r-i,k}\times d_{r,k+1}\\ \hline k\geq M&\mathcal{H}_{r,M}&\mathcal{H}_{r-i,M}&d_{r-i,M}\times d_{r,M}\end{array} \tag{27}\]

## IV Ansatz for the ABC gates

Below we propose analytical expressions for the maps \(P_{k}^{(i,r)}\) building the ABC gates with arbitrary \(M\) and \(N\). In order to motivate our ansatz, we start by reviewing the construction of the ABC in the simple case \(M=1\), which was derived in [22].

### One-magnon states

The Bethe wavefunction (8) for one magnon is \[|\Psi_{N}^{(1)}\rangle=\sum_{n=1}^{N}x^{n-1}|n\rangle\,, \tag{28}\] with \(x=e^{ip}\).
When the magnon momentum is real, we obtain a plane wave. The previous state is unnormalized. By construction, the ABC should prepare its normalised version \[|\Phi_{N}^{(1)}\rangle=\frac{1}{\sqrt{C_{N}}}|\Psi_{N}^{(1)}\rangle\,, \tag{29}\] where \[C_{N}=\langle\Psi_{N}^{(1)}|\Psi_{N}^{(1)}\rangle=\sum_{n=0}^{N-1}|x|^{2n}. \tag{30}\] The circuit (12) in this simple case only contains two-qubit gates. The normalisation constants obey the simple recursion \[C_{k+1}=1+|x|^{2}\,C_{k}\,,\qquad C_{1}=1\,, \tag{34}\] from which the explicit form of the two-qubit gates was obtained in [22]. In the multimagnon generalisation, the role of the normalisation constants is played by sector-wise overlap matrices \(C_{k}^{(r)}\), to be defined in Section VI, with the matrix \(A_{k}^{(r)}\) denoting the square root of \(C_{k}^{(r)}\), and \(B_{k}^{(r)}\) being the inverse of \(A_{k}^{(r)}\). Using these definitions, we propose the following ansatz \[P_{k}^{(i,r)}=A_{k}^{(r-i)}\,\Lambda^{(i,r)}\,B_{k+1}^{(r)}. \tag{40}\] While the matrices \(A\) and \(B\) implement the change to an orthonormal basis, \(\Lambda^{(i,r)}\) should be responsible for building the coefficients of the Bethe wavefunction (8). The ansatz (40) has a clear MPS inspiration. A MPS approximates the multileg tensor in the right hand side (rhs) of (15) by a product of three-leg tensors \(\Lambda\) contracted between boundary vectors \(\ket{\varphi_{in}}\) and \(\ket{\varphi_{out}}\) (41), which is defined with the help of an auxiliary Hilbert space living on the horizontal links. The three-leg tensor \(\Lambda\) can be promoted to a matrix \[\bar{\Lambda}:\mathcal{H}_{\text{aux}}\otimes\mathcal{H}_{\text{phys}}\rightarrow\mathcal{H}_{\text{phys}}\otimes\mathcal{H}_{\text{aux}} \tag{42}\] satisfying \(\bar{\Lambda}_{\alpha 0}^{i\beta}=\Lambda_{\alpha\beta}^{i}\), where \(\alpha\) and \(\beta\) take values in the auxiliary space and \(i\) is a physical index. Using this matrix, (41) admits the circuitlike representation (43). The ABA constructing \(M\)-magnon states can be very directly interpreted as a MPS network [22]. Indeed, the previous figure is equivalent to (11), with (42) given by \(\mathscr{R}_{T}\). The auxiliary space in that case is the Hilbert space of \(M\) qubits, and \(\ket{\varphi_{in}}=\ket{1_{M}}\) and \(\ket{\varphi_{out}}=\ket{0_{M}}\). The MPS has a large gauge freedom, corresponding to a change of basis in the auxiliary space (44), which can vary from link to link. Using this freedom, it is always possible to transform \[\Lambda\ \rightarrow\ X_{k}\,\Lambda\,X_{k+1}^{-1}\,, \tag{45}\] such that the associated matrices (42) are unitary and (43) defines a valid quantum circuit. A MPS with this property is said to be in canonical form [9; 10; 11]. The transformation into canonical form was the procedure used in [22] to bring the ABA into the ABC. The matrices \(X_{k}\) were determined by recursion relations too involved to be analytically solved in the general case.
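The one-magnon case can be checked end to end. The sketch below is ours: it takes \(\Lambda^{(0,1)}=x\) and \(\Lambda^{(1,1)}=1\) (our reading of the \(M=1\) gates, fixed by unitarity; see the next section for \(\Lambda^{(0,1)}\)), so that (40) with \(A_{k}^{(1)}=\sqrt{C_{k}}\) and \(B_{k}^{(1)}=1/\sqrt{C_{k}}\) gives \(P_{k}^{(0,1)}=x\sqrt{C_{k}/C_{k+1}}\) and \(P_{k}^{(1,1)}=1/\sqrt{C_{k+1}}\), and verifies that the staircase circuit outputs the normalised state (29):

```python
# Simulation of the M = 1 staircase circuit in the one-magnon sector:
#   P_k : |10> -> (1/sqrt(C_{k+1})) |10> + x sqrt(C_k/C_{k+1}) |01>.
import numpy as np

def one_magnon_abc(N, p):
    x = np.exp(1j * p)
    C = [None, 1.0]                          # C_1 = 1
    for k in range(1, N):
        C.append(1.0 + abs(x)**2 * C[k])     # recursion C_{k+1} = 1 + |x|^2 C_k
    psi = np.zeros(N, dtype=complex)         # amplitude of the magnon at site n
    psi[0] = 1.0                             # initial state |1 0 ... 0>
    for k in range(N - 1, 0, -1):            # gates P_{N-1}, ..., P_1
        site = N - 1 - k                     # left qubit the gate acts on (0-based)
        amp = psi[site]
        psi[site] = amp / np.sqrt(C[k + 1])                  # magnon stops here
        psi[site + 1] = amp * x * np.sqrt(C[k] / C[k + 1])   # magnon moves right
    return psi

N, p = 8, 0.3 + 0.2j                         # complex momenta are allowed
psi = one_magnon_abc(N, p)
x = np.exp(1j * p)
target = x ** np.arange(N)
target /= np.linalg.norm(target)             # normalised version of (28)
print(np.linalg.norm(psi - target))          # ~1e-16
```

The telescoping products \(\prod_k x\sqrt{C_{k}/C_{k+1}}\) reproduce exactly the amplitudes \(x^{n-1}/\sqrt{C_{N}}\), which is the mechanism the general ansatz (40) is designed to extend.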
In this paper we will pursue an ansatz of the same structure but with different components, in search of a simplification. Indeed, (40) has the form (45) with \(A\) identified with \(X\) and \(B\) with its inverse. This idea will lead us to a MPS tensor naturally linked to the CBA. We will focus first on the long gates, whose graphical representation is (46), where we used the notation explained in (24). All the matrices in the ansatz (40) for long gates act on \(M\) qubits. Their properties are described in the table below \[\begin{array}{c|c|c|c}k\geq M&\text{input}&\text{output}&\text{dimensions}\\ \hline B_{k+1}^{(r)}&\mathcal{H}_{r,M}&\mathcal{H}_{r,M}&d_{r,M}\times d_{r,M}\\ \hline\Lambda^{(i,r)}&\mathcal{H}_{r,M}&\mathcal{H}_{r-i,M}&d_{r-i,M}\times d_{r,M}\\ \hline A_{k}^{(r)}&\mathcal{H}_{r,M}&\mathcal{H}_{r,M}&d_{r,M}\times d_{r,M}\\ \hline\end{array} \tag{47}\] Our ansatz for the short ABC gates, which are crucial for obtaining a deterministic quantum circuit, entails some peculiarities. We postpone their discussion to Section VII.

## V The \(\boldsymbol{\Lambda}\) tensors

Here we present our proposal for the matrices \(\Lambda^{(i,r)}\). The simple \(M=1\) circuit reviewed in Section IV.1 will serve us again as a guide.

### Diagrammatic rules

We start by analysing the case where the leftmost output qubit of a long ABC gate is in the state \(\ket{0}\), denoted with the superscript \(i=0\). It is instructive to revisit the one-magnon result for \(P_{k}^{(0,1)}\) and compare it with the decomposition (40) in (48). We immediately obtain \(\Lambda^{(0,1)}=x\), which has the appealing interpretation of arising from the displacement of the input \(|1\rangle\) one position toward the right. Extrapolating this intuition to the general case, we propose that \(\Lambda^{(0,r)}\) is a diagonal matrix whose entries are the momentum factors arising from displacing its input state one site to the right. This prescription is better understood with a graphical example. Let us consider a long gate of an ABC creating a three-magnon state. The action of \(\Lambda^{(0,2)}\) on the input state \(|110\rangle\) is depicted in (49).
Thus, we can conclude that (55) and (57) provide the MPS tensor naturally associated with the CBA. Although the map between both tensors must be a transformation of the form (44), it is necessarily nontrivial. The ABA tensor for multimagnon states is constructed out of a product of \(R\) matrices, each of them depending on an individual magnon momenta. On the contrary, the \(\Lambda\) defined above only depends on the scattering amplitude of the model, which is a function of pairs of magnon momenta. ## VI The \(\boldsymbol{A}\) and \(\boldsymbol{B}\) matrices The MPS tensor associated to the CBA derived above and that of the ABA do not lead directly to valid quantum gates. They need to be brought into canonical form using the freedom (44). In this and the next section we will determine the change of basis needed to transform (55) and (57) into the quantum gates \(P_{k}\). ### Overlap matrices In the search for the required transformation, let us consider the subcircuit formed by the first \(k-1\) gates of the ABC, highlighted in yellow below (62) We assume that \(k>M\), such that \(P_{k-1}\) is a long gate. While the input configuration to \(P_{N-1}\) is fixed to \(|1_{M}\rangle\otimes|0\rangle\), that of a general long gate can have the first \(M\) qubits in an arbitrary state. A basis for the input states to \(P_{k-1}\) is thus \(|n_{1}\ldots n_{r}\rangle\), where the number of qubits in \(|1\rangle\) can take any value \(r\leq M\) and \(n_{r}\leq M\) assures that the rightmost qubit is in \(|0\rangle\). For each basis element, the subcircuit highlighted in yellow will output a different state supported by \(k\) qubits \[|n_{1}\ldots n_{r}\rangle\rightarrow|\Phi_{k,a}^{(r)}\rangle. \tag{63}\] The integer \(a=1,\ldots,d_{r,M}\) is related to \((n_{1},\ldots,n_{r})\) as explained in Section III.1. Since the circuit is unitary, all these states are orthonormal \[\langle\Phi_{k,a}^{(r)}|\Phi_{k,b}^{(r)}\rangle=\delta_{ab}. \tag{64}\] The MPS based on the tensor \(\Lambda\) derived in the previous section builds the Bethe wavefunctions (8). Equivalently, the probabilistic circuitlike network (43), with \(\bar{\Lambda}\) associated to \(\Lambda\) as explained in (42), outputs Bethe wavefunctions when the \(M\) rightmost auxiliary qubits are projected into the reference state \(|0\rangle\). The ABC is, however, a deterministic circuit which needs no auxiliary qubits. Based on the ansatz (40), we have interpreted the ABC as the unitary version of (43). The mismatch between the number of qubits involved in both realisations of (8) renders their correspondence nontrivial. This issue will be addressed in the next section. Here it is enough to notice that the subcircuit in (62) must be equivalent for consistency to (65) The matrices \(\bar{\Lambda}\) in the green area create \(k\) qubit states \(|\Psi^{(r)}_{k,a}\rangle\), with the same conventions as in (63). Since \(\bar{\Lambda}\) is not unitary, these states are not orthogonal and they are not normalised. We define a matrix of overlaps in each symmetry sector, whose entries are \[C^{(r)}_{k,ab}=\langle\Psi^{(r)}_{k,a}|\Psi^{(r)}_{k,b}\rangle. \tag{66}\] This is known in mathematical terms as a Gram matrix. To ensure that (65) coincides with the highlighted area of (62), we need \(B\) to transform the non orthogonal set into an orthonormal set, that is \[B^{(r)+}_{k}C^{(r)}_{k}B^{(r)}_{k}=\mathbb{1}_{d_{r,M}}. \tag{67}\] This relation can only be satisfied if the overlap matrix is positive definite. By definition it is also Hermitian. 
A matrix with these properties admits the so-called Cholesky decomposition \[C^{(r)}_{k}=A^{(r)+}_{k}A^{(r)}_{k}. \tag{68}\] The matrix \(A\) becomes unique if we choose it to be upper triangular with positive diagonal entries. Relation (67) fixes \(B\) up to a unitary transformation. Substituting the Cholesky formula into (67), we fix that freedom by requiring that the matrices \(A\) and \(B\) are inverses of each other. With this choice, \(B\) will also be upper triangular. Explicit formulae for \(A\) and \(B\) in terms of minors of the overlap matrices can be found in Appendix A. These matrices, together with (55) and (57), complete the ansatz (40) for the long gates.

### Partial Bethe wavefunctions

The previous construction would remain rather formal unless we determine the states \(|\Psi^{(r)}_{k,a}\rangle\). These states turn out to be again Bethe wavefunctions, with momenta drawn from the magnon momenta defining the state prepared by the complete quantum circuit. The subset of momenta associated with \(|\Psi^{(r)}_{k,a}\rangle\) is determined by the input state to the shaded region in (65) as follows, \[|n_{1}\ldots n_{r}\rangle\to(p_{n_{1}},\ldots,p_{n_{r}})\,, \tag{69}\] with \(a\) related to \((n_{1},\ldots,n_{r})\). This assignment is proven in Appendix B. Here we will limit ourselves to giving an example of how it arises. Let us consider an ABC creating a Bethe wavefunction describing three magnons, and input to the shaded region of (65) the state \(|n_{1}n_{2}\rangle=|12\rangle\) from the \(r=2\) symmetry sector. The coefficient multiplying the basis element \(|13\rangle\) of the output is obtained by adding the contributions of the graphs (70). Using the rules of Section V, we obtain \[\langle 13|\Psi^{(2)}_{k,1}\rangle=s_{21}x_{2}^{2}-s_{12}x_{1}^{2}\,. \tag{71}\] This result coincides with (61), which emerges from the ABC designed to build a two-magnon Bethe wavefunction (4) with momenta \(p_{1}\) and \(p_{2}\). If the input state is instead \(|n_{1}n_{2}\rangle=|13\rangle\), the coefficient of the same output element \(|13\rangle\) will be \[\langle 13|\Psi^{(2)}_{k,2}\rangle=s_{31}x_{3}^{2}-s_{13}x_{1}^{2}\,, \tag{72}\] which is now consistent with a two-magnon output state of momenta \(p_{1}\) and \(p_{3}\). As before, (72) arises from the contributions of two graphs. These are (73). Notice that the states \(|\Phi^{(r)}_{k,a}\rangle\) resulting from the subcircuit in (62) are in general not Bethe wavefunctions. This is only the case for \(r=M\), since there is only one state in that symmetry sector. Hence \[|\Phi^{(M)}_{k,1}\rangle=\frac{1}{\sqrt{\langle\Psi^{(M)}_{k,1}|\Psi^{(M)}_{k,1}\rangle}}\,|\Psi^{(M)}_{k,1}\rangle\,, \tag{74}\] which generalises the analogous statement for \(M=1\), obtained in Section IV.1. Namely, when the input to the shaded circuit in (62) is \(|1_{M}\rangle\otimes|0\rangle\), it outputs the same Bethe wavefunction prepared by the complete circuit, but supported by \(k\) instead of \(N\) qubits.

## VII The short gates

Long and short gates differ in the number of qubits on which they act. The latter act upon a number of qubits that grows with their gate index as \(k+1\) until \(k=M\) is reached. This property is essential for the ABC to define a deterministic circuit. A graphical representation of (40) applied to short gates is given in (75). Some adjustments in the construction of \(\Lambda\) and the unitarization matrices \(A\) and \(B\) with respect to the previous sections are clearly necessary.
### A ghost leg for \(\boldsymbol{\Lambda}\)

The dependence of \(\Lambda\) on \(k\) comes into play only through the number of qubits on which it acts. For this reason, we will keep the previous notation and not add a gate subscript to it. For consistency, we want the matrix \(\Lambda\) of the short gates to be based on the same diagrammatic rules presented in Section V. This, however, raises a caveat. The fixed input \(|0\rangle\) of the long gates' rightmost qubit played an essential role. The input state to the short gates is instead unconstrained, making it unclear how to apply the diagrammatic rules. We solve this problem by adding a rightmost \(|0\rangle\) as a ghost input, namely, an input which does not correspond to a real qubit. It will be represented graphically by a \(\bullet\). We consider first the matrices \(\Lambda^{(0,r)}\), resorting again to an example. With the help of the ghost input, the action on the state \(|101\rangle\) of the matrix \(\Lambda^{(0,2)}\) as part of the short gate \(P_{2}\) is given by (76). The same trick applies straightforwardly to the matrices \(\Lambda^{(1,r)}\). For instance, the action of \(\Lambda^{(1,2)}\) belonging to the short gate \(P_{1}\) on \(|11\rangle\) leads to (77). Hence, the new \(\Lambda^{(i,r)}\) provide a map between input and output configurations of \(k+1\) physical qubits. As a result, the matrix \(A\) for short gates must connect \(k+1\) with \(k\) qubit configurations. This means that it extinguishes the ghost leg introduced by \(\Lambda\) (78). We represent this again with a \(\bullet\), which we define to always absorb a \(|0\rangle\). Let us give an example of how these new rules fit together in the decomposition (75), using a graph associated with the short gate \(P_{2}\) (79). The dimensions of the matrices \(A\), \(B\) and \(\Lambda\) building the short gates are summarised in the table below \[\begin{array}{c|c|c|c}k<M&\text{input}&\text{output}&\text{dimensions}\\ \hline B_{k+1}^{(r)}&\mathcal{H}_{r,k+1}&\mathcal{H}_{r,k+1}&d_{r,k+1}\times d_{r,k+1}\\ \hline\Lambda^{(i,r)}&\mathcal{H}_{r,k+1}&\mathcal{H}_{r-i,k+1}&d_{r-i,k+1}\times d_{r,k+1}\\ \hline A_{k}^{(r)}&\mathcal{H}_{r,k+1}&\mathcal{H}_{r,k}&d_{r,k}\times d_{r,k+1}\\ \hline\end{array} \tag{80}\] The mathematical expressions for \(\Lambda\) are still given by (55) and (57), after substituting \(M\) by \(k+1\) [25]. The unifying definition of \(\Lambda\) for long and short gates has, however, the consequence that (75) cannot be interpreted as a transformation into canonical form. Since \(A\) is not a square matrix, it does not correspond to a change of basis. Such an interpretation could be restored with a different choice of the elements in the decomposition (75), but it would come at the expense of obscuring their meaning; therefore this will not be pursued here.

### Small overlap matrices

To determine the matrices \(A\) and \(B\) of the short gates, we follow the logic outlined in Section VI. The matrix \(B\) should describe a change of basis that orthonormalizes the states prepared by a part of the ABC. In analogy with the shaded area of (65), this part is (81). We have labelled the green gate with the three-leg tensor \(\Lambda\) instead of its matrix version \(\bar{\Lambda}\) to emphasise that the ghost leg implies no additional qubit.
Carrying on the parallel with long gates, when a basis state \(|n_{1}\ldots n_{r}\rangle\) enters (81), the output should be a \(k\) qubit Bethe wavefunction with momenta drawn from the subset \(p_{1},\ldots,p_{k}\) according to the rule (69). Unfortunately we were not able to fully prove this statement. In spite of that, we take it as valid and determine \(B\) by the requirement \[B_{k}^{(r)+}\,C_{k}^{(r)}\,B_{k}^{(r)}=\mathbb{1}_{d_{r,k}}. \tag{82}\] The overlap matrix \(C\) has dimension \(d_{r,k}\times d_{r,k}\) and is defined as in (66). Setting \(A_{k}^{(r)}\) to be the inverse of \(B_{k}^{(r)}\) is not compatible with the dimensions described in Table 80. The matrix \(A_{k}^{(r)}\) has been taken to bridge between configurations of \(k+1\) and \(k\) qubits. Guided by this, we build a new map from \(k+1\) qubit configurations into states supported by \(k\) qubits \[|n_{1}\ldots n_{r}\rangle\rightarrow|\widehat{\Psi}_{k,a}^{(r)}\rangle\, \tag{83}\] where \(r\leq k\) by \(U(1)\) conservation. The magnon momenta are selected now from the enlarged set \(p_{1},\ldots,p_{k+1}\) according again to (69). The overlap matrix associated with (83) has dimension \(d_{r,k+1}\times d_{r,k+1}\), and by construction contains the previous one. Since \(\mathcal{H}_{r,k}\) has dimension \(d_{r,k}\), not all states \(|\widehat{\Psi}_{k,a}^{(r)}\rangle\) can be linearly independent. As a result, their overlap matrix \(\widehat{C}_{k}^{(r)}\) will be positive _semidefinite_. The Cholesky decomposition of such a matrix still exists \[\widehat{C}_{k}^{(r)}=A_{k}^{(r)+}\,A_{k}^{(r)}\, \tag{84}\] but, contrary to (68), \(A_{k}^{(r)}\) is now upper _rectangular_. Its number of rows is determined by the rank of the matrix that we want to decompose, which in our case is \(d_{r,k}\). Hence, it has the right dimensions to constitute our guess for the corresponding matrix in (75). ### The product matrix \(L\) Although the product of \(A\) and \(B\) for long gates is the identity, this product for short gates defines a relevant new matrix (85) The matrix \(L\) has dimension \(d_{r,k}\times d_{r,k+1}\) with \(r\leq k\). Given that \(\widehat{C}\) contains \(C\), we can fix \(B\) to be the inverse of the upper _triangular_ block of \(A\). Appendix A then shows that \[L_{k,ab}^{(r)}=\frac{\det\,C_{k,a\to b}^{(r)}}{\det\,C_{k}^{(r)}}\, \tag{86}\] which is easily seen to reduce to the identity when the rightmost leg is in \(|0\rangle\). The notation \(a\to b\) means that the \(a\)th row of \(C\) is substituted by the \(b\)th row of \(\widehat{C}\). Using the matrix \(L\), (81) can be recast as \[\begin{array}{c}\includegraphics[width=14.226378pt]{L1}\end{array} \tag{87}\] In the derivation of the matrices \(A\) and \(B\), we have assumed that this network builds Bethe wavefunctions. In Appendix B on the other hand, these states are shown to be created by the probabilistic circuitlike network \[\begin{array}{c}\includegraphics[width=14.226378pt]{L2}\end{array} \tag{88}\] Hence the correctness of the short gate construction relies upon the equivalence of (87) and (88). In particular, it completes the proof that the \(M\) magnon ABC on \(N\) qubits builds the desired Bethe wavefunction when \(k=M\). The different dimensions of the green gates in (87) and (88) however render a general demonstration of the equivalence difficult. We have checked that it holds up to \(k=5\) using computer algebra software, and tested it numerically up to \(k=10\). 
### An example

In order to better understand the role of the matrices \(L\) in the previous equivalence, we will analyse the simple case \(k=3\). Disregarding the first green gate of each side because they have the same action on physical qubits, it reduces to proving (89). We will need the explicit expressions of \(L_{1,2}^{(r)}\). Using (86), they are given by \[L_{1}^{(1)}=(1,1)\,,\qquad L_{2}^{(1)}=\left(\begin{array}{ccc}1&0&\frac{x_{3}-x_{2}}{x_{1}-x_{2}}\\ 0&1&\frac{x_{1}-x_{3}}{x_{1}-x_{2}}\end{array}\right)\,, \tag{90}\] \[L_{2}^{(2)}=\Big(\,1,\ \frac{s_{31}x_{3}-s_{13}x_{1}}{s_{21}x_{2}-s_{12}x_{1}},\ \frac{s_{32}x_{3}-s_{23}x_{2}}{s_{21}x_{2}-s_{12}x_{1}}\,\Big). \tag{91}\] When the rightmost input qubit to the left hand side (LHS) of (89) is in \(|0\rangle\), the action of \(L_{2}\) becomes trivial. We therefore focus on input configurations where that qubit is in \(|1\rangle\). The input state \(|001\rangle\) leads to two possible outputs, \(|01\rangle\) and \(|10\rangle\). Two different braids connect \(|001\rangle\) with \(|01\rangle\) (92). Substituting the appropriate entries of \(L_{1,2}^{(1)}\), the contribution of these graphs leads to \[x_{1}\frac{x_{3}-x_{2}}{x_{1}-x_{2}}+x_{2}\frac{x_{1}-x_{3}}{x_{1}-x_{2}}=x_{3}. \tag{93}\] This correctly reproduces the result of the single path allowed by the rhs of (89) to connect those input and output states (94). The red line in the single path above visits auxiliary qubits which do not have a counterpart in (92). The rightmost line entering \(L_{2}\) in (92) is a continuation of the ghost leg created by the previous matrix \(\bar{\Lambda}\). When that leg carries a \(|1\rangle\), and before extinguishing it, the role of \(L_{2}\) is to bring the \(|1\rangle\) back into the flow of physical qubits. This is the crucial point that makes the equivalence in (89) possible. An analogous analysis holds for the transition between \(\ket{001}\) and \(\ket{10}\). The graphs (95) now contribute \[\frac{x_{3}-x_{2}}{x_{1}-x_{2}}+\frac{x_{1}-x_{3}}{x_{1}-x_{2}}=1\,, \tag{96}\] in agreement with (97). Finally, we consider input states \(\ket{101}\) and \(\ket{011}\). Both of them connect with the output \(\ket{11}\). We will only analyse the input \(\ket{101}\) because, in this case, the other does not bring any new insight. The LHS of (89) again gives rise to two paths (98). The matrix \(L_{2}\) contributes equally to both, yielding \[\frac{s_{31}x_{3}-s_{13}x_{1}}{s_{21}x_{2}-s_{12}x_{1}}(s_{21}x_{2}-s_{12}x_{1}). \tag{99}\] This time also the rhs of (89) leads to two paths (100), whose contribution agrees with (99). It is tempting to associate the first and second graphs in (98) separately with the first and second graphs in (100). However, their separate contributions do not coincide. Agreement is only found between their sums, providing one more witness to the intricacy of the short gate construction.

## VIII Unitarity

We have completed our proposal for the ABC gates and given strong arguments that they prepare the desired Bethe wavefunctions. The last and fundamental requirement to prove is unitarity. We again address the long gates first. For the purpose of the ABC, it was only necessary to specify how they act when the rightmost input qubit is in \(\ket{0}\).
The condition for \(P_{k}\ket{0}\) to be promoted to a unitary matrix acting on \(M+1\) qubits is \[\bra{0}P_{k}^{+}P_{k}|0\rangle=\mathbb{1}_{2^{M}}\,, \tag{101}\] for \(k\geq M\). There is no unique way to realise this embedding, which can thus be adjusted to the quantum architecture at our disposal. Inside each symmetry sector, the previous condition becomes \[P_{k}^{(0,r)+}P_{k}^{(0,r)}+P_{k}^{(1,r)+}P_{k}^{(1,r)}=\mathbb{1}_{d_{r,M}}. \tag{102}\] We substitute the ansatz (40) for the ABC gates, and eliminate the resulting matrices \(A\) using the Cholesky formula (68). Unitarity then translates into \[B_{k+1}^{(r)+}\left(\sum_{i=0,1}\Lambda^{(i,r)+}C_{k}^{(r-i)}\Lambda^{(i,r)}\right)B_{k+1}^{(r)}=\mathbb{1}_{d_{r,M}}. \tag{103}\] The following figure gives a representation of \(C_{k+1}^{(r)}\), provided the input and output states are \(M\)-qubit configurations \(|n_{1}\dots n_{r}\rangle\) in the \(r\) symmetry sector and \(i\) runs over the two values \(0\) and \(1\). The network in the shaded area then builds \(C_{k}^{(r-i)}\). Hence (104) leads to the recursion relations \[C_{k+1}^{(r)}=\sum_{i=0,1}\Lambda^{(i,r)+}C_{k}^{(r-i)}\Lambda^{(i,r)}\,, \tag{105}\] with \(C_{k}^{(0)}=1\). These relations summarise the structure that made it possible to find an explicit solution for the ABC gates. They turn out to also guarantee their unitarity. Indeed, plugging them into (103), we recover the change-of-basis relation that defines \(B\). When \(M=1\), (105) reduces to the simple recursion relation (34). The treatment of the short gates is analogous. The main difference lies in the Cholesky decomposition, since we should use formula (84) instead of (68), obtaining \[B_{k+1}^{(r)+}\left(\sum_{i=0,1}\Lambda^{(i,r)+}\widehat{C}_{k}^{(r-i)}\Lambda^{(i,r)}\right)B_{k+1}^{(r)}=\mathbb{1}_{d_{r,k+1}}, \tag{106}\] for \(k<M\). As a consequence, the overlap matrix \(C\) in (103) is replaced by \(\widehat{C}\), defined in terms of the enlarged set of states (83). From the definitions of the matrices \(A\) and \(B\) for short gates, it is easy to see that both overlap matrices are related by \[\widehat{C}_{k}^{(r)}=L_{k}^{(r)+}C_{k}^{(r)}L_{k}^{(r)}. \tag{107}\] The circuitlike representation of \(C_{k+1}^{(r)}\) is again given by (104), but since now \(k<M\), all matrices \(\bar{\Lambda}\) are cut away and only the network (108) remains. From (107), the shaded region builds in this case \(\widehat{C}_{k}^{(r-i)}\) and the figure translates into the recursion relations \[C_{k+1}^{(r)}=\sum_{i=0,1}\Lambda^{(i,r)+}\widehat{C}_{k}^{(r-i)}\Lambda^{(i,r)}. \tag{109}\] When substituted into (106), they prove the unitarity of the short gates.

## IX ABA = CBA

In this section we will clarify the connection between the approach followed here to derive the Bethe circuits, based on the CBA, and that of [22], based on the ABA. The ABA has a straightforward interpretation as an MPS, where the associated three-leg tensor is (110). This figure makes explicit the decomposition of the tensor \(\mathscr{R}_{T}\) in (11) in terms of the \(R\) matrices of the model. The XXZ \(R\) matrix for the spin-\(1/2\) chain is \[R=\rho\begin{pmatrix}1&0&0&0\\ 0&y&x&0\\ 0&x&y&0\\ 0&0&0&1\end{pmatrix}. \tag{111}\] Integrability imposes \[y^{2}=1+x^{2}-2\Delta\,x\,, \tag{112}\] where \(\Delta\) is the anisotropy of the Hamiltonian (2). The global factor \(\rho\) is a free parameter, which we will set to 1. Each \(R\) matrix is thus a function of the magnon momentum, \(R_{j}=R(x_{j})\).
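Before comparing with the construction of [22], it is worth noting that the unitarity mechanism of the previous section is insensitive to the specific form of the \(\Lambda\) blocks: any tensor whose overlap matrices obey a recursion of the form (105) yields unitary gates through (40). The sketch below is our own numerical illustration of this fact, with random blocks standing in for \(\Lambda^{(i,r)}\) and toy sector dimensions (e.g. \(d_{r,M}=6\), \(d_{r-1,M}=4\), as for \(M=4\), \(r=2\)):

```python
# Recursion (105) implies the unitarity condition (102) for gates built via (40).
import numpy as np

rng = np.random.default_rng(1)
d0, d1 = 6, 4                                 # d_{r,M} and d_{r-1,M} (toy values)
L0 = rng.normal(size=(d0, d0)) + 1j * rng.normal(size=(d0, d0))  # Lambda^(0,r)
L1 = rng.normal(size=(d1, d0)) + 1j * rng.normal(size=(d1, d0))  # Lambda^(1,r)

def upper_chol(C):
    """Upper-triangular A with C = A^+ A, as in (68)."""
    return np.linalg.cholesky(C).conj().T

V0 = rng.normal(size=(d0, d0)); Ck_r  = V0.T @ V0 + np.eye(d0)   # C_k^(r)
V1 = rng.normal(size=(d1, d1)); Ck_rm = V1.T @ V1 + np.eye(d1)   # C_k^(r-1)
Ck1_r = L0.conj().T @ Ck_r @ L0 + L1.conj().T @ Ck_rm @ L1       # eq. (105)

B = np.linalg.inv(upper_chol(Ck1_r))          # B_{k+1}^(r) = (A_{k+1}^(r))^{-1}
P0 = upper_chol(Ck_r)  @ L0 @ B               # P_k^(0,r), ansatz (40)
P1 = upper_chol(Ck_rm) @ L1 @ B               # P_k^(1,r)

print(np.allclose(P0.conj().T @ P0 + P1.conj().T @ P1, np.eye(d0)))  # eq. (102)
```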
Reference [22] introduced the transformation of \(\Gamma\) into canonical form as the way to derive the quantum gates of the ABC. Let us focus on long gates, since we have seen that short gates involve additional considerations. In analogy with (40), long gates were given by \[P_{k}^{(i,r)}=\mathscr{A}_{k}^{(r-i)}\,\Gamma^{(i,r)}\mathscr{B}_{k+1}^{(r)}\,, \tag{113}\] with \(\mathscr{A}_{k}\) and \(\mathscr{B}_{k}\) inverses of each other [26]. Unlike the comprehensive approach followed here, the starting point in [22] was the recursion relations of the form (105) under the substitution \(\Lambda\to\Gamma\) and \(C_{k}\to\mathscr{C}_{k}=\mathscr{A}_{k}^{+}\mathscr{A}_{k}\). These recursion relations were solved iteratively to find the matrix \(\mathscr{A}_{k}\), a task that in general could only be achieved via numerical techniques. The ABA and the CBA are known to prepare the same wavefunctions, up to a normalisation factor, even before imposing the Bethe equations for the magnon momenta [23]. While it is straightforward to obtain the CBA norms, those of the ABA states pose a notoriously difficult problem [27; 28; 29]. With the insights developed in the present paper, the hardness of finding analytic solutions to the ABA recursion relations can be partly explained by this fact. Since \(\Lambda\) and \(\Gamma\) build the same wavefunctions, they must be related by the MPS gauge freedom (44) \[\Gamma^{\,i}=X^{-1}\Lambda^{i}X. \tag{114}\] Given that both tensors are site independent, the transformation \(X\) must be site independent. It also has to be consistent with the \(U(1)\) symmetry. The value \(\rho=1\) for the global factor of the \(R\) matrix leads straightforwardly to \(\Gamma^{(0,0)}=1\). The diagrammatic rules constructed in Section V imply \(\Lambda^{(0,0)}=1\). This shows that (114) is satisfied in the trivial, zero-charge sector. Choosing a different value for \(\rho\) would require introducing a proportionality factor between both sides of (114). We now define \[\mathscr{A}_{k}=A_{k}X\,,\qquad\mathscr{B}_{k}=X^{-1}B_{k}. \tag{115}\] It is then clear that (114) and (115) solve the ABA recursion relations, because with these definitions they just reduce to (105), and lead to the same quantum gates as (40). The relation between the tensors \(\Gamma\) and \(\Lambda\) offers an interesting alternative way to prove the equivalence between the algebraic and coordinate versions of the Bethe ansatz. The existence of a matrix \(X\) fulfilling (114) is nontrivial. Let us analyse the two-magnon case as an example. When \(M=2\), we have \[\Gamma^{0}=\begin{pmatrix}1&0&0&0\\ 0&x_{1}&y_{1}y_{2}&0\\ 0&0&x_{2}&0\\ 0&0&0&x_{1}x_{2}\end{pmatrix}\,, \tag{116}\] while \(\Lambda^{0}=\text{diag}(1,x_{1},x_{2},x_{1}x_{2})\). Hence \(X\) is a matrix that diagonalizes \(\Gamma^{0}\). This condition only determines \(X\) up to left multiplication by a diagonal matrix \[X=D\,X^{0}\,, \tag{117}\] with \(D=\text{diag}(1,d_{1},d_{2},d_{3})\). Since \(\Gamma^{0}\) is upper triangular, \(X^{0}\) will also be upper triangular. We fix it by requiring the diagonal entries to be unity. With this condition, we obtain \[X^{0}=\begin{pmatrix}1&0&0&0\\ 0&1&\frac{y_{1}y_{2}}{x_{1}-x_{2}}&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}. \tag{118}\] This construction is easily seen to work for arbitrary \(M\).
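The diagonalization just described can be verified directly. The following is a minimal numerical check of ours that \(X^{0}\) of (118) indeed satisfies \(\Gamma^{0}=(X^{0})^{-1}\Lambda^{0}X^{0}\), with \(\Lambda^{0}=\text{diag}(1,x_{1},x_{2},x_{1}x_{2})\) (the diagonal matrix \(D\) drops out of the \(i=0\) relation because diagonal matrices commute; the \(i=1\) case, treated next, is where the integrable amplitudes enter):

```python
# Check Gamma^0 = (X^0)^{-1} Lambda^0 X^0 for random complex x1, x2, Delta.
import numpy as np

rng = np.random.default_rng(2)
x1, x2, Delta = rng.normal(size=3) + 1j * rng.normal(size=3)
y1 = np.sqrt(1 + x1**2 - 2 * Delta * x1)      # integrability condition (112)
y2 = np.sqrt(1 + x2**2 - 2 * Delta * x2)

Gam0 = np.array([[1, 0, 0, 0], [0, x1, y1 * y2, 0],
                 [0, 0, x2, 0], [0, 0, 0, x1 * x2]])   # eq. (116)
Lam0 = np.diag([1, x1, x2, x1 * x2])
X0 = np.eye(4, dtype=complex)
X0[1, 2] = y1 * y2 / (x1 - x2)                # eq. (118)

print(np.allclose(np.linalg.inv(X0) @ Lam0 @ X0, Gam0))   # True
```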
When \(i=1\), we have \[\Lambda^{1}=\begin{pmatrix}0&1&1&0\\ 0&0&0&-s_{12}x_{1}\\ 0&0&0&s_{21}x_{2}\\ 0&0&0&0\end{pmatrix}\,, \tag{119}\] and \[\Gamma^{1}=\begin{pmatrix}0&y_{1}&x_{1}y_{2}&0\\ 0&0&0&y_{2}\\ 0&0&0&x_{2}y_{1}\\ 0&0&0&0\end{pmatrix}. \tag{120}\] Equation (114) applied to these matrices overdetermines the diagonal matrix \(D\) in (117). A solution can only be found if the following relation holds \[\frac{s_{21}}{s_{12}}=\frac{x_{1}y_{1}^{2}-x_{1}^{2}(x_{1}-x_{2})}{x_{2}y_{1}^{2}+x_{1}-x_{2}}. \tag{121}\] Exchanging \(1\leftrightarrow 2\) inverts the LHS. It is simple to see that the consistency of the rhs with this property is equivalent to the integrability condition (112). Substituting the integrability condition above then leads to the XXZ scattering amplitude (7), up to a symmetric function of the magnon momenta. We will see below that a symmetric \(s\) is physically irrelevant and can thus be discarded. Remarkably, (114) encodes the integrability structure of the model, otherwise formulated in terms of the Yang-Baxter equation. For the sake of completeness, we include the solution for the diagonal matrix \[D=\text{diag}\Big(1,y_{1},\frac{s_{21}y_{2}}{x_{2}-x_{1}},\frac{y_{1}y_{2}}{x_{2}-x_{1}}\Big)\,, \tag{122}\] which contains the information about the norms of the ABA states.

## X Free fermion circuits

The implementation of our algorithm on a real quantum processor depends on the possibility of efficiently decomposing \(P_{k}\) into elementary quantum gates. Here we will only address this very important question when the magnon \(S\) matrix (6) reduces to a sign, whereby the system maps to free fermions through a Jordan-Wigner transformation. This corresponds to the symmetric point of the scattering amplitude (7), which is achieved for vanishing \(\Delta\) and leads to the XX chain. At this point, the dependence of the Bethe states on the scattering amplitude factorises, \[|\Psi_{N}^{(M)}\rangle=\Bigl(\prod_{\begin{subarray}{c}p,q=1\\ p>q\end{subarray}}^{M}s_{pq}\Bigr)\sum_{\begin{subarray}{c}n_{m}=1\\ n_{m}<n_{m+1}\end{subarray}}^{N}\sum_{a_{m}=1}^{M}\epsilon_{a_{1}\ldots a_{M}}\,x_{a_{1}}^{n_{1}-1}\cdots x_{a_{M}}^{n_{M}-1}\,|n_{1}\ldots n_{M}\rangle\,, \tag{123}\] so that, up to a global factor, the wavefunction becomes an antisymmetric superposition of plane waves. For the XX model we find that the resulting ABC gates admit an efficient decomposition into a single layer of two-qubit unitaries of matchgate type, summarised in (127). We provide strong numerical evidence for this decomposition, although a complete proof is still lacking. Previously, ABC gates had been reproduced via a variational ansatz with several layers of \(U(1)\) preserving two-qubit gates. The result of the optimization pointed toward the need of an exponentially growing number of layers with an increasing number of magnons. On the other hand, it is known that the gap for adiabatically building the ground state of the XXZ closes only polynomially [5].
Since there is a polynomial equivalence between adiabatic and digital computation [32], this suggests that an efficient decomposition of the ABC could exist. We hope that the structure uncovered in this paper can help to find an improved ansatz. In this sense, the interest of finding a proof of (127) is not only mathematical. Understanding how the general ansatz (40) can be reduced to a single layer of two-qubit unitaries for the XX model, is the natural starting point to address the interacting case. ## XI Conclusions In this paper, we have proposed a deterministic quantum circuit designed to generate the Bethe wavefunction associated with the XXZ spin chain model. Our efforts build upon the approach initiated in reference [22], which we now complete by providing comprehensive analytic expressions for the quantum gates, applicable to systems with any number of sites and magnons. The method employed in [22] relied on a series of recursive equations hinging on the \(R\) matrix of the Algebraic Bethe Ansatz which required numerical solutions, particularly for scenarios involving more than two magnons. In contrast, this paper takes a distinct standpoint that revolves around the discovery of a novel Matrix Product State structure inherent to the Coordinate Bethe Ansatz. This construction offers greater clarity regarding the role of the dynamics, as elucidated by the phase and scattering factors of the model. Furthermore, it possesses an intriguing diagrammatic structure that merits further investigation and exploration. We also highlighted the connection between the method presented here and the approach introduced in [22], based on the Algebraic Bethe Ansatz. As a bonus, we obtain a new understanding of the equivalence between the coordinate and the algebraic version of the Bethe ansatz. Looking at the technical aspects, it is worth noting the significant role played by the Cholesky decomposition in this construction, which differs from the role undertaken by the QR decomposition in [22]. Our construction does not impose the periodicity condition on the Bethe wavefunction, which is the condition yielding the Bethe equations for the momenta. Consequently, to obtain the eigenfunctions of the XXZ Hamiltonian, it becomes necessary to initially solve these equations and then incorporate the solutions into the quantum gates. Alternatively, one could employ the circuit as a tool for a Variational Quantum Eigensolver (VQE) to seek out solutions for the Bethe equations. This presents an interesting avenue for exploration. A central question regards the decomposition of the ABC gates into one-qubit and two-qubit unitaries. We have strengthened the evidence that an efficient decomposition in terms of matchgates exists for the free fermion chain. Addressing the interacting case is crucial for the implementation of our algorithm on a real quantum computer. At any rate, we should stress that the framework presented here treats free and interacting models on an equal footing, offering analytical expressions for both. It thus appears as an optimal playground to analyse the interplay between integrability, interaction and complexity. Our results hold the potential for extension in several directions. These include exploring open boundary conditions, investigating inhomogeneous systems, delving into the nested Bethe ansatz, exploring semi-classical exactly solvable models, and even nonintegrable models. 
In this sense, it is interesting to note that the ABC can prepare plane wave superpositions of the form (8) for an arbitrary scattering amplitude \(s(p,q)\) not related to integrability.

## Acknowledgements

We would like to thank Francisco Alcaraz, Mari Carmen Bañuls, Jean-Sébastien Caux, Ignacio Cirac, Diego García-Martín, Karen Hallberg, Rafael Hernández, Vladimir Korepin, Barbara Kraus, José Ignacio Latorre, Rafael Nepomechie, Juan Miguel Nieto García, Tomaž Prosen, Kareljan Schoutens and Peter Zoller for useful discussions. This work has been financially supported by the Spanish Agencia Estatal de Investigación through the grants "IFT Centro de Excelencia Severo Ochoa CEX2020-001007-S" and PID2021-127726NB-I00 funded by MCIN/AEI/10.13039/501100011033 by ERDF, and the CSIC Research Platform on Quantum Technologies PTI-001. We also acknowledge support by the MINECO through the QUANTUM ENIA project call - QUANTUM SPAIN project, and by the EU through the RTRP-NextGenerationEU within the framework of the Digital Spain 2025 Agenda. R.R. is supported by the UCM, Ministerio de Universidades, and the European Union - NextGenerationEU through contract CT18/22. A.S. is supported by the Spanish Ministry of Science and Innovation under grant number SEV-2016-0597-19-4. M.H.G. was supported by "la Caixa" Foundation (ID 100010434), Grant No. LCF/BQ/DI19/11730056 and by the U.S. DOE, Office of Science, Office of Advanced Scientific Computing Research, under the Quantum Computing Application Teams program.

## Appendix A Formulae for the \(A\) and \(B\) matrices

In this appendix, we write down the expressions of \(A\) and \(B\) in the ansatz (40). We also prove the expression (86) of \(L\) for short gates. We address the alternatives of long gates (46) and short gates (75) separately.

### Long gates

In Section VI, we defined the \(d_{r,M}\times d_{r,M}\) matrix \(A_{k}^{(r)}\) by the Cholesky decomposition (68) of \(C_{k}^{(r)}\), which is the overlap matrix of a set of linearly independent Bethe wavefunctions with \(r\) magnons in a spin chain of \(k\geq M\) sites. The fact that \(A_{k}^{(r)}\) is upper triangular and has positive diagonal entries ensures that the definition is unique. The matrix \(B_{k}^{(r)}\) is the inverse of \(A_{k}^{(r)}\) by definition. This matrix is built upon the Gram-Schmidt process that transforms the nonorthogonal set \(|\Psi_{k,a}^{(r)}\rangle\) into a unitary rotation of the orthonormal set \(|\Phi_{k,a}^{(r)}\rangle\). Therefore, \(B_{k}^{(r)}\) admits the closed form \[\begin{split}B_{k,aa}^{(r)}&=\sqrt{\frac{\det_{a-1}C_{k}^{(r)}}{\det_{a}C_{k}^{(r)}}}\,,\qquad B_{k,ab}^{(r)}=0\ \ \text{if}\ \ a>b\,,\\ B_{k,ab}^{(r)}&=-\frac{\det_{b-1}C_{k,a\to b}^{(r)}}{\sqrt{\det_{b-1}C_{k}^{(r)}\det_{b}C_{k}^{(r)}}}\ \ \text{if}\ \ a<b\,.\end{split} \tag{12}\] Let us recall that \(\det_{a}M\) denotes the upper-left minor of order \(a\) of the matrix \(M\) and \(M_{a\to b}\) denotes the matrix that results from \(M\) under the replacement of the \(a\)th column by the \(b\)th column. We shall demonstrate that the closed form of the inverse matrix to (12) is \[A_{k,ab}^{(r)}=\frac{\det_{a}C_{k,a\to b}^{(r)}}{\sqrt{\det_{a-1}C_{k}^{(r)}\det_{a}C_{k}^{(r)}}}. \tag{13}\] A byproduct of (13) is a determinant formula for the Cholesky decomposition of the overlap matrix (and of any Hermitian, positive definite matrix for that matter). Let us lighten the notation before beginning the proof.
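The closed form (13) can be cross-checked against a standard Cholesky routine. The following small sketch is ours: it evaluates (13) entrywise for a random symmetric positive definite matrix (standing in for an overlap matrix) and compares with the upper-triangular factor returned by numpy:

```python
# Numerical sanity check of the determinant formula (13).
import numpy as np

def det_cholesky(C):
    """Entries A_ab from (13); det_0 := 1 and a -> b swaps column a for column b."""
    n = C.shape[0]
    A = np.zeros_like(C)
    minors = [1.0] + [np.linalg.det(C[:a, :a]) for a in range(1, n + 1)]
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            Cab = C[:a, :a].copy()
            Cab[:, a - 1] = C[:a, b - 1]      # replace a-th column by the b-th one
            A[a - 1, b - 1] = np.linalg.det(Cab) / np.sqrt(minors[a - 1] * minors[a])
    return A

rng = np.random.default_rng(3)
V = rng.normal(size=(5, 5))
C = V.T @ V + np.eye(5)                       # symmetric positive definite
A = det_cholesky(C)
print(np.allclose(A, np.linalg.cholesky(C).T))   # True: matches the upper factor
```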
Given that we will keep both \(r\) and \(k\) fixed in the following, we introduce \[A\equiv A_{k}^{(r)}\,,\quad B\equiv B_{k}^{(r)}\,,\quad C\equiv C_{k}^{(r)}. \tag{14}\] To demonstrate that \(A\) and \(B\) above are inverse matrices, we need to check that one of the two alternative products, say \(BA\), equals the identity. Furthermore, we just need to focus on the diagonal and upper triangular entries of \(BA\), for its lower triangular entries vanish trivially. According to (12) and (13), the nontrivial entries of \(BA\) read \[\delta_{ab}=\frac{\det_{a}C_{a\to b}}{\det_{a}C}\,-\sum_{c=a+1}^{b}\frac{\det_{c-1}C_{a\to c}\det_{c}C_{c\to b}}{\det_{c-1}C\det_{c}C}\,, \tag{15}\] where \(a\leq b\). The identity directly holds if \(a=b\). We assume \(a<b\) hereafter. We demonstrate (15) by proving the lemma \[\frac{\det_{c-1}C_{a\to b}}{\det_{c-1}C}\,-\frac{\det_{c-1}C_{a\to c}\det_{c}C_{c\to b}}{\det_{c-1}C\det_{c}C}\,=\frac{\det_{c}C_{a\to b}}{\det_{c}C}\,. \tag{16}\] Once the lemma is established, we can start from \(c=a+1\) in (16) and apply it iteratively in (15) until we prove the identity. Our next step exploits the fact that \(\det_{c}C_{c\to b}\) and \(\det_{c}C_{a\to b}\) share the nonvanishing minor \(\det_{c-1}C_{a\to b}\), whose associated matrix we denote by \(M\). The matrix \(M\) is nonsingular because it is built upon the inner product of linearly independent Bethe wavefunctions. We can thus employ the formula \[\det\left(\begin{array}{cc}P&S\\ R&Q\end{array}\right)=\det P\det\left(Q-RP^{-1}S\right)\,, \tag{17}\] to write \[\begin{split}\frac{\det_{c}C_{c\to b}}{\det_{c-1}C_{a\to b}}&=-(C_{ca}-w^{+}\,M^{-1}v_{a})\,,\\ \frac{\det_{c}C_{a\to b}}{\det_{c-1}C_{a\to b}}&=C_{cc}-w^{+}\,M^{-1}v_{c}\,,\end{split} \tag{18}\] where we have introduced the \(c-1\) dimensional column vectors \[(v_{a})_{b}=C_{ba}\,,\quad w_{d}=C_{da}\ \ \text{if}\ \ d\neq b\,,\quad w_{b}=C_{ca}. \tag{19}\] If we use these expressions and expand \(\det_{c}C\) by minors, we can rephrase (16) as \[w^{+}\,M^{-1}\left(\det_{c-1}C_{a\to c}\,v_{a}-\det_{c-1}C\,v_{c}\right)+\det_{c-1}C\,C_{cc}-\det_{c-1}C_{a\to c}\,C_{ca}-\det_{c}C=0\,, \tag{20}\] which is a determinant identity in the entries of \(C\); verifying it establishes the lemma (16), and with it (15).

### Short gates

In Section VII, we defined \(A_{k}^{(r)}\) to be the \(d_{r,k}\times d_{r,k+1}\) matrix that applies the Cholesky decomposition (84) to \(\widehat{C}_{k}^{(r)}\). The overlap matrix is now positive semidefinite rather than positive definite. The reason is that it is built upon the inner product of linearly dependent Bethe states. Nonetheless, \(A_{k}^{(r)}\) is still unique, as it is upper rectangular and has positive diagonal entries in the leftmost square block.
This block in fact provides the Cholesky decomposition of \(C_{k}^{(r)}\), the overlap matrix of a complete set of linearly independent Bethe wavefunctions with \(r\) magnons in a spin chain of \(k<M\) sites. Since this overlap matrix is positive definite, we can transfer the conclusions we drew for long gates to this case. Therefore, the formula for the entries of the leftmost \(d_{r,k}\times d_{r,k}\) block of \(A_{k}^{(r)}\) is (13). It also follows that the formula for the \(d_{r,k}\times d_{r,k}\) matrix \(B_{k}^{(r)}\) is again (12). (The ranges of the indices of both formulae of course change.) To determine \(A_{k}^{(r)}\) completely, we also need to fix the entries beyond the leftmost block. To do so, we look at \(\left|\Psi_{k,a}^{(r)}\right\rangle\), the overcomplete set of \(d_{r,k+1}\) Bethe wavefunctions that underpins the positive semidefinite overlap matrix. (We drop the hats from the definitions (83) and (84) to avoid cluttering the notation.) We set the first \(d_{r,k}\) Bethe wavefunctions to be a linear basis of \(\mathcal{H}_{r,k}\). This choice implies that we can unambiguously express \[\left|\Psi_{k,b}^{(r)}\right\rangle=\sum_{a=1}^{d_{r,k}}\,P_{ab}\left|\Psi_{k,a}^{(r)}\right\rangle\,,\quad b=d_{r,k}+1,\ldots,d_{r,k+1}\,, \tag{21}\] where \(P\) is a rectangular matrix. The basis of Bethe wavefunctions is amenable to the same reasoning behind the formulae for long gates (under the replacement \(d_{r,M}\mapsto d_{r,k}\)). Therefore, we can write, up to a unitary rotation, \[\left|\Psi_{k,b}^{(r)}\right\rangle=\sum_{a,c=1}^{d_{r,k}}P_{cb}\,\frac{\det_{a}C_{a\to c}}{\sqrt{\det_{a-1}C\,\det_{a}C}}\left|\Phi_{k,a}^{(r)}\right\rangle\equiv\sum_{a=1}^{d_{r,k}}\,A_{k,ab}^{(r)}\left|\Phi_{k,a}^{(r)}\right\rangle\,, \tag{22}\] where in the last step we have extended the formula (13) to \(a=1,\ldots,d_{r,k}\) and \(b=1,\ldots,d_{r,k+1}\) by multilinearity of the determinant in its columns. The matrix \(A_{k}^{(r)}\) thus constructed straightforwardly yields the Cholesky decomposition (84). We deduce that (13) correctly specifies \(A_{k}^{(r)}\) for short gates as well. (The expression (13) is actually a closed form for the Cholesky decomposition of any positive semidefinite matrix.) Last, we prove formula (86) for \(L\). Consider the rhs of (15). The expression is also valid for \(BA\) in short gates. The difference lies in the ranges of the indices, which now are \(a=1,\ldots,d_{r,k}\) and \(b=1,\ldots,d_{r,k+1}\). If \(b>d_{r,k}\), the iteration of the lemma (16) leaves a remnant because it halts at \(c=d_{r,k}\). The remnant is just (86), which was to be proven.

## Appendix B The tensor network of Bethe wavefunctions

In this appendix, we prove that the circuitlike network (88) constructs the Bethe wavefunction \[\left|\Psi_{k,a}^{(r)}\right\rangle=\sum_{\begin{subarray}{c}m_{b}=1\\ m_{b}<m_{b+1}\end{subarray}}^{k}\sum_{a_{b}=1}^{r}\epsilon_{a_{1}\ldots a_{r}}\prod_{\begin{subarray}{c}p,q=1\\ p>q\end{subarray}}^{r}\sigma_{a_{p}a_{q}}\,y_{a_{1}}^{m_{1}-1}\ldots y_{a_{r}}^{m_{r}-1}\left|m_{1}\ldots m_{r}\right\rangle\,, \tag{23}\] where \(\left|n_{1}\ldots n_{r}\right\rangle\) is the initial state of the tensor network, \(y_{1},\ldots,y_{r}\) denote the subset of the momentum variables \(x_{1},\ldots,x_{M}\) such that \(y_{a}=x_{n_{a}}\), and \(\sigma_{ab}\) denotes the scattering amplitude of \(y_{a}\) and \(y_{b}\). Let us begin with the first (lowest) level of (88).
We use \(\bar{\Lambda}\left|0\right\rangle=\Lambda\), (55), and (57) to obtain \[\begin{split}\bar{\Lambda}\left|n_{1}\ldots n_{r}\right\rangle\left|0\right\rangle=&\prod_{a=1}^{r}y_{a}\,\left|0\right\rangle\left|n_{1}\ldots n_{r}\right\rangle\\ &+\sum_{a=1}^{r}(-1)^{a+1}\prod_{\begin{subarray}{c}b=1\\ b\neq a\end{subarray}}^{r}\sigma_{ba}y_{b}\,\left|1\right\rangle\left|n_{1}\ldots\widehat{n}_{a}\ldots n_{r}\right\rangle\,,\end{split} \tag{24}\] where \(\widehat{n}_{a}\) indicates that \(n_{a}\) is absent from the list. If we introduce the notation \[\bar{\Lambda}_{l}\equiv\mathbb{1}_{2^{l-1}}\otimes\bar{\Lambda}\otimes\mathbb{1}_{2^{k-l}}\,, \tag{25}\] where \(1\leq l\leq k\), at the second level we have \[\begin{split}\bar{\Lambda}_{1}\,\bar{\Lambda}_{2}\left|n_{1}\ldots n_{r}\right\rangle\left|00\right\rangle=&\prod_{a=1}^{r}y_{a}^{2}\,\left|00\right\rangle\left|n_{1}\ldots n_{r}\right\rangle\\ &+\sum_{a=1}^{r}(-1)^{a+1}\prod_{\begin{subarray}{c}b=1\\ b\neq a\end{subarray}}^{r}\sigma_{ba}y_{b}^{2}\left(\left|01\right\rangle+y_{a}\left|10\right\rangle\right)\left|n_{1}\ldots\widehat{n}_{a}\ldots n_{r}\right\rangle\\ &+\sum_{\begin{subarray}{c}a_{1},a_{2}=1\\ a_{2}\neq a_{1}\end{subarray}}^{r}(-1)^{\alpha_{1}+\alpha_{2}}\,\sigma_{a_{2}a_{1}}\prod_{\begin{subarray}{c}b=1\\ b\neq a_{1},a_{2}\end{subarray}}^{r}\sigma_{ba_{1}}\sigma_{ba_{2}}y_{b}^{2}\;y_{a_{2}}\left|11\right\rangle\left|n_{1}\ldots\widehat{n}_{a_{1}}\ldots\widehat{n}_{a_{2}}\ldots n_{r}\right\rangle\,,\end{split} \tag{26}\] where \(\alpha_{1}=a_{1}\) and \(\alpha_{2}\) labels the position of \(a_{2}\) inside the ordered set of \(r-1\) elements \(1,\ldots,\widehat{a}_{1},\ldots,r\). The generalisation of (24) and (26) to the higher levels is apparent. The projection of the last \(M\) ancillae on \(\left\langle 0_{M}\right|\) at the \(k\)th level completes the tensor network. Therefore, we can write (88) as \[\left\langle 0_{M}\right|\bar{\Lambda}_{k}\ldots\bar{\Lambda}_{1}\left|n_{1}\ldots n_{r}\right\rangle\left|0_{k}\right\rangle=\sum_{\begin{subarray}{c}m_{b}=1\\ m_{b}<m_{b+1}\end{subarray}}^{k}\sum_{a_{b}=1}^{r}(-1)^{\alpha_{1}+\ldots+\alpha_{r}+r}\prod_{\begin{subarray}{c}p,q=1\\ p>q\end{subarray}}^{r}\sigma_{a_{p}a_{q}}\,y_{a_{1}}^{m_{1}-1}\ldots y_{a_{r}}^{m_{r}-1}\left|m_{1}m_{2}\ldots m_{r}\right\rangle\,, \tag{27}\] where \(\alpha_{b}\) labels the position of \(a_{b}\) inside the ordered set of \(r-(b-1)\) elements \(1,\ldots,\widehat{a}_{1},\ldots,\widehat{a}_{b-1},\ldots,r\). The equality between (27) and the Bethe wavefunction (8) follows from \(\epsilon_{a_{1}\ldots a_{r}}=(-1)^{\alpha_{1}+\ldots+\alpha_{r}+r}\), which concludes the proof. We close the appendix by stressing that the tensor network in particular prepares (8) for \(r=M\).
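To make the object being constructed concrete, the following short Python sketch (our own illustration, not part of the proof) evaluates the amplitudes of the Bethe wavefunction (23) by brute force; it assumes `sigma` is an \(r\times r\) array with `sigma[a, b]` the scattering amplitude \(\sigma_{ab}\) of \(y_{a}\) and \(y_{b}\).

```python
import itertools
import numpy as np

def perm_sign(perm):
    # Parity of a permutation via an inversion count.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def bethe_amplitudes(y, sigma, k):
    # Amplitudes of (23): for each ordered m_1 < ... < m_r, sum over the
    # permutations a (the epsilon tensor kills repeated indices) of
    # sign(a) * prod_{p>q} sigma[a_p, a_q] * prod_b y[a_b]^(m_b - 1).
    r = len(y)
    amp = {}
    for m in itertools.combinations(range(1, k + 1), r):
        total = 0j
        for a in itertools.permutations(range(r)):
            scat = np.prod([sigma[a[p], a[q]]
                            for p in range(r) for q in range(p)])
            wave = np.prod([y[a[b]] ** (m[b] - 1) for b in range(r)])
            total += perm_sign(a) * scat * wave
        amp[m] = total
    return amp

# Example: two magnons on four sites with plane-wave momenta.
y = np.exp(1j * np.array([0.3, 1.1]))
sigma = np.ones((2, 2), dtype=complex)   # placeholder scattering amplitudes
print(bethe_amplitudes(y, sigma, k=4))
```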
2310.00240
Learning Mask-aware CLIP Representations for Zero-Shot Segmentation
Recently, pre-trained vision-language models have been increasingly used to tackle the challenging zero-shot segmentation task. Typical solutions follow the paradigm of first generating mask proposals and then adopting CLIP to classify them. To maintain CLIP's zero-shot transferability, previous practices favour freezing CLIP during training. However, in this paper, we reveal that CLIP is insensitive to different mask proposals and tends to produce similar predictions for various mask proposals of the same image. This insensitivity results in numerous false positives when classifying mask proposals. This issue mainly relates to the fact that CLIP is trained with image-level supervision. To alleviate this issue, we propose a simple yet effective method, named Mask-aware Fine-tuning (MAFT). Specifically, an Image-Proposals CLIP Encoder (IP-CLIP Encoder) is proposed to handle arbitrary numbers of images and mask proposals simultaneously. Then, a mask-aware loss and a self-distillation loss are designed to fine-tune the IP-CLIP Encoder, ensuring CLIP is responsive to different mask proposals while not sacrificing transferability. In this way, mask-aware representations can be easily learned to make the true positives stand out. Notably, our solution can seamlessly plug into most existing methods without introducing any new parameters during the fine-tuning process. We conduct extensive experiments on the popular zero-shot benchmarks. With MAFT, the performance of the state-of-the-art methods is improved by a large margin: 50.4% (+8.2%) on COCO, 81.8% (+3.2%) on Pascal-VOC, and 8.7% (+4.3%) on ADE20K in terms of mIoU for unseen classes. The code is available at https://github.com/jiaosiyu1999/MAFT.git.
Siyu Jiao, Yunchao Wei, Yaowei Wang, Yao Zhao, Humphrey Shi
2023-09-30T03:27:31Z
http://arxiv.org/abs/2310.00240v1
# Learning Mask-aware CLIP Representations for Zero-Shot Segmentation

###### Abstract

Recently, pre-trained vision-language models have been increasingly used to tackle the challenging zero-shot segmentation task. Typical solutions follow the paradigm of first generating mask proposals and then adopting CLIP to classify them. To maintain CLIP's zero-shot transferability, previous practices favour freezing CLIP during training. However, in this paper, we reveal that CLIP is insensitive to different mask proposals and tends to produce similar predictions for various mask proposals of the same image. This insensitivity results in numerous false positives when classifying mask proposals. This issue mainly relates to the fact that CLIP is trained with image-level supervision. To alleviate this issue, we propose a simple yet effective method, named Mask-aware Fine-tuning (MAFT). Specifically, an Image-Proposals CLIP Encoder (IP-CLIP Encoder) is proposed to handle arbitrary numbers of images and mask proposals simultaneously. Then, a _mask-aware loss_ and a _self-distillation loss_ are designed to fine-tune the IP-CLIP Encoder, ensuring CLIP is responsive to different mask proposals while not sacrificing transferability. In this way, mask-aware representations can be easily learned to make the true positives stand out. Notably, our solution can seamlessly plug into most existing methods without introducing any new parameters during the fine-tuning process. We conduct extensive experiments on the popular zero-shot benchmarks. With MAFT, the performance of the state-of-the-art methods is improved by a large margin: 50.4% (+8.2%) on COCO, 81.8% (+3.2%) on Pascal-VOC, and 8.7% (+4.3%) on ADE20K in terms of mIoU for unseen classes. Code is available at github.com/jiaosiyu1999/MAFT.git.

## 1 Introduction

Semantic segmentation, one of the most widely researched topics in computer vision, has achieved remarkable success [3; 39; 15; 16] with the development of deep learning techniques [14]. However, traditional segmentation models are only capable of segmenting a few predefined categories within a closed vocabulary [7; 2; 22; 21], which is much smaller than the number of categories used by humans to describe the real world. Therefore, zero-shot segmentation [31; 1; 10; 13] is introduced to segment objects using arbitrary categories described by texts. Recently, large-scale visual-language pre-training models (CLIP [28] and ALIGN [17]) have shown impressive transferability in recognizing novel categories, leading to their increased adoption for tackling the challenging zero-shot segmentation task [6; 33; 20; 27]. A mainstream solution follows the "frozen CLIP" paradigm, which executes zero-shot segmentation in two steps: 1) first employing a Proposal Generator to produce class-agnostic mask proposals and 2) then leveraging a frozen pre-trained CLIP to classify each mask proposal via similarity matching in the aligned image-text feature space. While acceptable results are obtained, we reveal that these approaches overlook a crucial issue, _i.e._, the frozen CLIP is insensitive to different mask proposals and tends to produce similar predictions for various proposals of the same image. To better illustrate the above-mentioned issue, we show several examples in Fig. 1. We use MaskFormer [5] to generate a series of mask proposals and select three typical ones. When using the frozen CLIP for classification, we observe that it correctly classifies the high-quality _swan_ proposal \(p_{1}\).
However, for the other two proposals \(p_{2}\) and \(p_{3}\), which respectively contain only the shape information of the _swan_ and both the _swan_ and _river_ regions, the frozen CLIP produces predictions similar to those for \(p_{1}\). This is unsurprising, since CLIP is trained on image-text pairs, making it insensitive to pixel-level information (_e.g._ background noise) and resulting in numerous false positives. Based on the above observations, we consider that an expected CLIP for the zero-shot segmentation task should **1) be sensitive to different mask proposals, 2) not compromise its original transferability on novel classes.** To this end, we introduce a Mask-aware CLIP Fine-tuning method (dubbed MAFT). To make CLIP sensitive to different mask proposals, we devise an Image-Proposals CLIP Encoder (IP-CLIP Encoder), which utilizes mask proposals to perform masked Multihead Attention [5; 4]. This design enables the model to handle arbitrary numbers of images and proposals simultaneously. The _mask-aware loss_ is proposed to minimise the distance between the IoU scores of mask proposals and the classification scores of the IP-CLIP Encoder, prompting the IP-CLIP Encoder to differentiate various proposals. Besides, to preserve CLIP's zero-shot transferability, we utilize a frozen CLIP as a teacher network to facilitate fine-tuning. This is achieved by aligning the outputs of the frozen CLIP and the IP-CLIP Encoder through a _self-distillation loss_. MAFT provides several advantages: 1) Fine-tuning is efficient since only a few mask proposals need to be classified. 2) Compared to pixel-level fine-tuning, mask-aware fine-tuning hardly alters the structure of CLIP itself, preserving its transferability as much as possible. 3) The mask-aware fine-tuning of CLIP is decoupled from the segmentation module, making it plug-and-play and applicable to any "frozen CLIP" approach. As shown in Fig. 1, the mask-aware CLIP can well distinguish different proposals and provide proper classification scores for both seen (_river_) and unseen (_swan_) classes. We evaluate our MAFT on three commonly used zero-shot segmentation benchmarks: COCO-Stuff [2], Pascal-VOC [7], and ADE20K [40]. Extensive experiments show that MAFT works well with various zero-shot segmentation methods. In particular, by plugging in MAFT, the state-of-the-art approach FreeSeg [27] achieves superior performance on COCO-Stuff (42.2% \(\rightarrow\) 50.4%), Pascal-VOC (78.6% \(\rightarrow\) 81.8%) and ADE20K (4.4% \(\rightarrow\) 8.7%) in terms of mIoU on unseen classes. Furthermore, we conduct experiments in an _open-vocabulary_ setting, where MAFT enhances the performance on the A-847 [40], A-150 [40], PC-459 [24], PC-59 [24] and PAS-20 [7] datasets by +3.0%, +11.2%, +6.4%, +19.1% and +4.4%, respectively. Notably, our approach outperforms the frozen CLIP counterpart and establishes new state-of-the-art results on all datasets.

Figure 1: Comparison between the frozen CLIP and our mask-aware CLIP for proposal classification. Regions of proposals are highlighted with green. The frozen CLIP classifies \(p_{1}\), \(p_{2}\), and \(p_{3}\) as the _swan_ class and produces similar predictions, although these proposals contain different regions of the image. After the MAFT, the mask-aware CLIP can produce proper scores for different proposals.

## 2 Related Work

**Zero-Shot Segmentation**[29] is established to break the restriction of categories and perform segmentation on unseen classes.
Among earlier works, SPNet [31] learns a joint pixel and vocabulary concept embedding space, ZS5 [1] utilizes a generative model to produce pixel-level features based on word embeddings of unseen classes, and CaGNet [10] incorporates context information for better feature generation. Recent approaches take advantage of large-scale visual-language models (_e.g._ CLIP [28] and ALIGN [17]) to leverage rich alignment features from image-text pairs. [34] uses CLIP to generate pseudo-labels for single-image segmentation. STRICT [25] obtains pixel-level pseudo-labels from CLIP for unlabeled pixels and proposes a self-training strategy to capture latent information on unseen classes. LSeg [8] trains a CNN model to compute per-pixel image embeddings and uses CLIP text embeddings as a classifier. [32] employs contrastive supervision to learn segmentation masks from text. Concurrently, recent works [6; 33; 27; 20; 9] follow the "frozen CLIP" paradigm for zero-shot segmentation: they first generate a series of mask proposals and then utilize CLIP [28] or ALIGN [17] to classify them. ZSSeg and OVSeg [33; 20] train CLIP adapters to boost performance. FreeSeg [27] simultaneously uses semantic, instance, and panoptic labels and performs fusion training. OpenSeg [9] takes extra images with image-level supervision (_e.g._ captions) to scale up the training data.

**Pre-trained model fine-tuning** is widely used for transferring pre-trained knowledge to downstream tasks, _e.g._ segmentation. However, this strategy may not work well for data-limited tasks like few-shot learning and zero-shot learning due to the daunting _overfitting_ problem. To address this problem and transfer pre-trained knowledge to data-limited tasks, [43; 42; 12; 33; 20; 27] propose to learn text prompts or image prompts using (a few) annotated images from the target dataset. SVF [30] fine-tunes only a few parameters in the pre-trained image encoder to adapt pre-trained knowledge to few-shot segmentation. [38; 37] use contrastive learning to avoid catastrophic forgetting. Alternatively, many outstanding approaches in data-limited tasks [23; 35; 36; 6; 33] choose to freeze the parameters of pre-trained models to maintain their transferability. Specific to the task of zero-shot/open-vocabulary segmentation, mainstream approaches use a frozen CLIP to avoid overfitting. Recently, MaskCLIP [41] conducted extensive experiments on fine-tuning CLIP for open-vocabulary segmentation, but the attempt failed. While this attempt is meaningful and appreciated, the failure is believed to be due to the large domain gap between pixel-level and image-level tasks. This motivates us to further research fine-tuning CLIP to be mask-aware (a region-level task).

## 3 Preliminary

**Problem Setting.** Zero-shot segmentation aims at training a segmentation model capable of segmenting novel objects using text descriptions. It involves two category sets, \(C_{seen}\) and \(C_{unseen}\), which are disjoint in terms of object categories (\(C_{seen}\cap C_{unseen}=\emptyset\)). The model is trained on \(C_{seen}\) and directly tested on both \(C_{seen}\) and \(C_{unseen}\). Typically, \(C_{seen}\) and \(C_{unseen}\) are described with semantic words (_e.g._ sheep, grass).

**Revisiting the "frozen CLIP" paradigm.** The "frozen CLIP" approaches [6; 33; 27; 20] execute zero-shot segmentation in two steps: mask proposal generation and mask proposal classification.
In the first step, these approaches train a Proposal Generator to generate \(N\) class-agnostic mask proposals (denoted \(M\), \(M\in\mathbb{R}^{N\times H\times W}\)) and their corresponding classification scores (denoted \(A^{p}\), \(A^{p}\in\mathbb{R}^{N\times|C_{seen}|}\)). MaskFormer [5] and Mask2Former [4] are generally used as the Proposal Generator, since the Hungarian matching [19] in the training process makes the mask proposals strongly generalizable. In the second step, \(N\) suitable sub-images (\(I_{sub}\)) are obtained by _merging_ the \(N\) mask proposals and the input image. \(I_{sub}\) is then fed into the CLIP Image Encoder to obtain the image embedding (\(E^{I}\)). Meanwhile, the text embedding (\(E^{T}\)) is generated by the CLIP Text Encoder. The classification score (\(A^{c},A^{c}\in\mathbb{R}^{N\times C}\)) predicted by CLIP is calculated as: \[A^{c}_{i}=\frac{\exp(\frac{1}{\tau}s_{c}(E^{T}_{i},E^{I}))}{\sum_{j=1}^{C}\exp(\frac{1}{\tau}s_{c}(E^{T}_{j},E^{I}))}\,,\quad i=1,2,\ldots,C \tag{1}\] where \(\tau\) is the temperature hyper-parameter and \(s_{c}(E^{T}_{i},E^{I})=\frac{E^{T}_{i}\cdot E^{I}}{|E^{T}_{i}||E^{I}|}\) represents the cosine similarity between \(E^{T}_{i}\) and \(E^{I}\). \(C\) is the number of classes, with \(C=|C_{seen}|\) during training and \(C=|C_{seen}\cup C_{unseen}|\) during inference. Note that CLIP is frozen during training to avoid overfitting. To further enhance the reliability of \(A^{c}\), the classification score of the Proposal Generator (\(A^{p}\)) is ensembled with \(A^{c}\), since \(A^{p}\) is more reliable on seen classes. This _ensemble_ operation is widely used in "frozen CLIP" approaches. The pipeline of "frozen CLIP", as well as the _merge_ and _ensemble_ operations, are described in detail in the Appendix. Although "frozen CLIP" approaches have achieved promising results, it is clear that directly adopting an image-level pre-trained CLIP for proposal classification can be suboptimal. A frozen CLIP usually produces numerous false positives, and the _merge_ operation may destroy the context information of an input image. In view of this, we rethink the paradigm of the frozen CLIP and explore a new solution for proposal classification.

## 4 Methodology

We introduce Mask-Aware Fine-tuning (MAFT), a method for learning mask-aware CLIP representations. Within MAFT, we first propose the Image-Proposals CLIP Encoder (IP-CLIP Encoder) to handle images with any number of mask proposals simultaneously (Sec. 4.1). Then, a _mask-aware loss_ and a _self-distillation loss_ are introduced to fine-tune the IP-CLIP Encoder and enable it to distinguish different mask proposals while maintaining transferability (Sec. 4.2). The complete diagram of MAFT is shown in Fig. 2; we use the ViT-B/16 CLIP model for illustration.

### Image-Proposals CLIP Encoder (IP-CLIP Encoder)

The IP-CLIP Encoder aims to process arbitrary numbers of images and mask proposals simultaneously. We draw inspiration from MaskFormer [4; 5], which uses attention-masks in Multihead Attention and provides the flexibility to accept any number of queries and features of different masked regions. Accordingly, we apply mask proposals as attention-masks in Multihead Attention and designate independent classification queries for each mask proposal. In the IP-CLIP Encoder shown in Fig. 2, we denote the features propagated between Transformer layers as \(F^{i}\), where \(i=[1,2...12]\).
We can express \(F^{i}\) as \(F^{i}=[F^{i}_{cls};\ F^{i}_{feat}]\in\mathbb{R}^{(1+hw)\times d}\), where \(1\) accounts for the class-embedding vector (\(F^{i}_{cls}\)) and \(hw\) is the number of flattened image features (\(F^{i}_{feat}\)). To obtain the classifications of all mask proposals simultaneously, we repeat \(F^{i}_{cls}\) \(N\) times at layer \(L\), where \(N\) is the number of mask proposals, denoting the repeated class-embedding vectors as \(F^{i*}_{cls}\). We can express the modified features (\(F^{i*}\)) as \(F^{i*}=[F^{i*}_{cls};\ F^{i}_{feat}]\in\mathbb{R}^{(N+hw)\times d}\).

Figure 2: Overview of the Mask-Aware Fine-tuning (MAFT). In the IP-CLIP Encoder, we modify the CLIP Image Encoder and apply the mask proposals as attention bias in Multihead Attention from the \(L^{th}\) layer. The final projection unit is an MLP module used for reshaping the channels of \(F_{cls}\). _w.o._ \(M\) denotes that the IP-CLIP Encoder processes the image without utilizing the mask proposals (\(M\)). The _mask-aware_ loss is designed to train CLIP to be mask-aware, while the _self-distillation_ loss is designed to maintain transferability. Only the IP-CLIP Encoder is trained (orange part); the Proposal Generator and the CLIP Text Encoder are frozen (blue part).

**Propagation of \(F^{i}\), where \(i=[1,2,...L]\).** We consider that CLIP's classification relies significantly on context information. In the first \(L\) Transformer layers, the propagation of \(F^{i}\) is the same as in standard CLIP. Specifically, \(F^{i}_{cls}\) utilizes cross-attention with all pixels within \(F^{i}_{feat}\), effectively retaining the context information. In the subsequent \(12-L\) Transformer layers, the propagation of \(F^{i*}\) can be partitioned into two parts: the propagation of \(F^{i*}_{cls}\) and the propagation of \(F^{i}_{feat}\).

**Propagation of \(F^{i*}_{cls}\).** We use \(F^{i*}_{cls}[n]\) and \(M[n]\) to represent position \(n\) in \(F^{i*}_{cls}\) and \(M\), where \(n=[1,2...N]\). It is expected that \(F^{i*}_{cls}[n]\) computes Multihead Attention over the positions where \(M[n]=1\) and over itself. To achieve this, we construct an attention bias \(B\in\mathbb{R}^{N\times(N+hw)}\) as follows: \[B_{(i,j)}=\begin{cases}0&\text{if}\ \hat{M}_{(i,j)}=1\\ -\infty&\text{if}\ \hat{M}_{(i,j)}=0\end{cases},\qquad\hat{M}=[\mathrm{I}(N,N);\ \mathrm{Flat}(M)] \tag{2}\] where \(\mathrm{I}(N,N)\) denotes the \(N\)th order identity matrix and \(\mathrm{Flat}(\cdot)\) denotes the _flatten_ operation; \(\hat{M}\) is an intermediate variable for better presentation. Therefore, a masked Multihead Attention is used for propagating \(F^{i*}_{cls}\): \[F^{(i+1)*}_{cls}=\mathrm{Softmax}(\frac{\mathrm{Que}(F^{i*}_{cls})\mathrm{Key}(F^{i*})^{T}}{\sqrt{d}}+B)\mathrm{Val}(F^{i*}) \tag{3}\] where \(\mathrm{Que}(\cdot)\), \(\mathrm{Key}(\cdot)\), and \(\mathrm{Val}(\cdot)\) denote linear projections and \(d\) is the hidden dimension of \(F^{i*}\). Notably, we omit the MLP layers and Layer Normalizations in the Transformer layers to simplify the presentation in Eq. 3 and Eq. 4.
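As a concrete illustration of Eqs. (2)-(3), the following PyTorch-style sketch (our own, simplified to a single attention head and omitting the MLP and LayerNorm blocks, as in the text) builds the attention bias \(B\) from \(N\) flattened mask proposals and propagates the repeated class embeddings; the weight matrices are placeholders for the linear projections.

```python
import torch

def masked_cls_attention(F_cls, F_feat, M_flat, W_q, W_k, W_v):
    # F_cls: (N, d) repeated class embeddings; F_feat: (hw, d) image tokens;
    # M_flat: (N, hw) binary mask proposals flattened over the spatial grid.
    N, d = F_cls.shape
    F_star = torch.cat([F_cls, F_feat], dim=0)          # (N + hw, d)
    # Eq. (2): M_hat = [I(N, N); Flat(M)]; B = 0 where M_hat = 1, -inf elsewhere.
    M_hat = torch.cat([torch.eye(N), M_flat], dim=1)    # (N, N + hw)
    B = torch.where(M_hat > 0,
                    torch.zeros_like(M_hat),
                    torch.full_like(M_hat, float("-inf")))
    # Eq. (3): each class embedding attends to itself and its mask's pixels only.
    q, k, v = F_cls @ W_q, F_star @ W_k, F_star @ W_v
    attn = torch.softmax(q @ k.T / d ** 0.5 + B, dim=-1)
    return attn @ v                                     # (N, d) updated F_cls

# Shape check with random placeholder weights.
d, N, hw = 64, 3, 196
out = masked_cls_attention(torch.randn(N, d), torch.randn(hw, d),
                           (torch.rand(N, hw) > 0.5).float(),
                           *(torch.randn(d, d) for _ in range(3)))
```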
**Propagation of \(F^{i}_{feat}\).** A standard Multihead Attention is used for propagating \(F^{i}_{feat}\): \[F^{i+1}_{feat}=\mathrm{Softmax}(\frac{\mathrm{Que}(F^{i}_{feat})\mathrm{Key}(F^{i}_{feat})^{T}}{\sqrt{d}})\mathrm{Val}(F^{i}_{feat}) \tag{4}\] Therefore, for any given mask proposal \(M[n]\), the corresponding class-embedding \(F^{i*}_{cls}[n]\) only performs Multihead Attention with the entries of \(F^{i}_{feat}\) where \(M[n]=1\) and with \(F^{i*}_{cls}[n]\) itself. The propagation of \(F^{i}_{feat}\) remains undisturbed by attention-masks. Compared with the frozen CLIP, the IP-CLIP Encoder leverages context information effectively and reduces computational costs.

### Objective

An IP-CLIP Encoder with CLIP pre-trained parameters still struggles to distinguish different mask proposals; _e.g._, when the proposals contain more background regions than foreground objects, IP-CLIP may still tend to classify them into the foreground categories. To overcome this limitation, we introduce a _mask-aware loss_ and a _self-distillation loss_ to fine-tune the IP-CLIP Encoder to be mask-aware without sacrificing transferability. We apply the _mask-aware_ loss function (\(\mathcal{L}_{ma}\)) to \(A^{c}\). The goal is to assign high scores to high-quality proposals and low scores to low-quality proposals in \(A^{c}\). Concretely, we use the Intersection over Union (IoU) score obtained from the ground truth and align it with \(A^{c}\) to prompt CLIP to become mask-aware. Assuming there are \(K\) classes in the ground truth, we can generate \(K\) binary ground-truth maps and calculate their IoU scores (\(S_{IoU}\)) with the \(N\) mask proposals. We identify a discrepancy between the maximum values of \(A^{c}\) and \(S_{IoU}\): the maximum value of \(A^{c}\) tends to approach 1, whereas the maximum value of \(S_{IoU}\) ranges from 0.75 to 0.99. This inconsistency can hinder the alignment between the two quantities. Therefore, we introduce a min-max normalization for \(S_{IoU}\) as follows: \[S^{norm}_{IoU}=\frac{S_{IoU}-min(S_{IoU})}{max(S_{IoU})-min(S_{IoU})},\ S_{IoU}\in\mathbb{R}^{K\times N} \tag{5}\] Meanwhile, we select the \(K\) pre-existing classes in \(A^{c}\) (\(A^{c}_{select}\), \(A^{c}_{select}\in\mathbb{R}^{K\times N}\)) and employ the \(SmoothL1\) Loss to align it with \(S^{norm}_{IoU}\). Therefore, \(\mathcal{L}_{ma}\) can be formulated as follows: \[\mathcal{L}_{ma}(A^{c}_{select},S^{norm}_{IoU})=\mathrm{SmoothL1}(A^{c}_{select},S^{norm}_{IoU}) \tag{6}\] \[\mathrm{SmoothL1}(x,y)=\begin{cases}0.5\cdot(x-y)^{2},&\mathrm{if}\ |x-y|<1\\ |x-y|-0.5,&\mathrm{otherwise}\end{cases} \tag{7}\] In addition to \(\mathcal{L}_{ma}\), we also introduce a _self-distillation_ loss \(\mathcal{L}_{dis}\) to maintain CLIP's transferability and alleviate overfitting on \(C_{seen}\). Within \(\mathcal{L}_{dis}\), we use a frozen CLIP as the _teacher_ net and the IP-CLIP as the _student_ net for self-distillation. The predictions of the frozen CLIP and IP-CLIP are expected to be the same when no mask is included. Denote the output of the frozen CLIP as \(A_{T}\) and the output of the fine-tuned IP-CLIP without masks as \(A_{S}\). We use the \(SmoothL1\) Loss to minimize their difference as follows: \[\mathcal{L}_{dis}(A_{S},A_{T})=\mathrm{SmoothL1}(A_{S},A_{T}) \tag{8}\] It is important to note that when processing an image through IP-CLIP without mask proposals, the resulting \(A_{S}\) is a matrix with dimensions \(\mathbb{R}^{C\times 1}\). A schematic implementation of the two losses is sketched below.
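The following sketch is our own illustration of Eqs. (5)-(8); tensor shapes follow the text, with `iou` standing for the ground-truth IoU scores \(S_{IoU}\) and `A_c_sel` for the \(K\times N\) selected class-score slice of \(A^{c}\).

```python
import torch
import torch.nn.functional as F

def mask_aware_loss(A_c_sel, iou, eps=1e-6):
    # Eq. (5): min-max normalize the K x N IoU scores, then Eq. (6):
    # SmoothL1 between the selected CLIP scores and the normalized IoU.
    iou_norm = (iou - iou.min()) / (iou.max() - iou.min() + eps)
    return F.smooth_l1_loss(A_c_sel, iou_norm)

def self_distillation_loss(A_s, A_t):
    # Eq. (8): align the student's mask-free prediction (C x 1) with the
    # frozen-CLIP teacher's prediction.
    return F.smooth_l1_loss(A_s, A_t)

# Dummy shapes only: K ground-truth classes, N proposals, C total classes.
K, N, C = 4, 100, 156
loss = mask_aware_loss(torch.rand(K, N), torch.rand(K, N)) \
     + 1.0 * self_distillation_loss(torch.rand(C, 1), torch.rand(C, 1))
```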
Therefore, the final loss function can be formulated as \(\mathcal{L}=\mathcal{L}_{ma}+\lambda\mathcal{L}_{dis}\), where we set the constant \(\lambda\) to 1 in our experiments. The mask-aware fine-tuning process is efficient, as we only perform a few iterations (less than 1 epoch).

## 5 Experiments

### Setting

**Dataset.** We first follow [1; 11; 26; 6; 33] to conduct experiments on three popular zero-shot segmentation benchmarks, Pascal-VOC, COCO-Stuff and ADE20K, to evaluate our method. Then, we evaluate MAFT in the _open-vocabulary_ setting [20; 33], _i.e._, training on COCO-Stuff and testing on ADE20K (A-847, A-150), Pascal-Context (PC-459, PC-59), and Pascal-VOC (PAS-20). More details of the dataset settings are provided in the Appendix.

**Evaluation Metrics.** To quantitatively evaluate the performance, we follow standard practice [1; 31; 10; 25; 6; 33; 27] and adopt the mean Intersection over Union (mIoU) to evaluate the performance for seen classes (mIoU\({}^{s}\)) and unseen classes (mIoU\({}^{u}\)) separately. We also employ the harmonic mean IoU (hIoU) of the seen and unseen classes to measure the comprehensive performance.

**Methods.** Three representative methods are used to verify the generality of MAFT. We unify the three methods into the same framework, with all methods using ResNet101 as the backbone of the Proposal Generator and the ViT-B/16 CLIP model for a fair comparison.

\begin{table} \begin{tabular}{l|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{COCO-Stuff} & \multicolumn{3}{c|}{Pascal-VOC} & \multicolumn{3}{c}{ADE20K} \\ & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hIoU** & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hIoU** & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hIoU** \\ \hline SPNet[31] & 34.6 & 26.9 & 30.3 & 77.8 & 25.8 & 38.8 & - & - & - \\ ZSS[1] & 34.9 & 10.6 & 16.2 & 78.0 & 21.2 & 33.3 & - & - & - \\ CaGNet[10] & 35.6 & 13.4 & 19.5 & 78.6 & 30.3 & 43.7 & - & - & - \\ STRICT[25] & 35.3 & 30.3 & 32.6 & 82.7 & 35.6 & 73.3 & - & - & - \\ ZegFormer[6] & 36.7 & 36.2 & 36.4 & 90.1 & 70.6 & 79.2 & 17.4 & 5.1 & 7.9 \\ ZegFormer+MAFT & 36.4 & -0.3 & 38.1 & -1.7 & 91.5 & -80.7 & -10.1 & 85.7 & -16.6 & -0.7 & -0.1 & -9.8 \\ ZSSeg[33] & 40.4 & 36.5 & 38.3 & 86.6 & 59.7 & 69.4 & 18.0 & 4.5 & 7.2 \\ ZSSeg+MAFT & 40.6 & -0.3 & 40.1 & -1.5 & -0.3 & -2.0 & 88.4 & -1.5 & -66.2 & -0.5 & -18.9 & -0.9 & -0.7 \\ FreeSeg[27] & 42.4 & 42.2 & 42.3 & 91.9 & 78.6 & 84.7 & 22.3 & 4.4 & 7.3 \\ FreeSeg+MAFT & 43.3 & -0.9 & 50.4 & -8.2 & 46.5 & -4.2 & 91.4 & -0.5 & 81.8 & -3.2 & -0.6 & -1.6 & -0.9 & -1.3 & -12.4 & -0.1 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with state-of-the-art methods in zero-shot segmentation. mIoU\({}^{s}\) and mIoU\({}^{u}\) denote the mIoU (%) of seen classes and unseen classes.
\begin{table} \begin{tabular}{l|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{COCO-Stuff} & \multicolumn{3}{c|}{Pascal-VOC} & \multicolumn{3}{c}{ADE20K} \\ & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hIoU** & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hIoU** & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hIoU** \\ \hline ZegFormer[6] & 18.5 & 23.0 & 20.5 & 81.4 & 76.8 & 79.0 & 5.1 & 2.6 & 3.5 \\ ZegFormer+MAFT & 35.1\({}_{+16.6}\) & 31.6\({}_{+7.6}\) & 33.3\({}_{+12.7}\) & 87.6\({}_{+6.2}\) & 79.9\({}_{+3.1}\) & 83.5\({}_{+4.5}\) & 15.8\({}_{+10.8}\) & 7.0\({}_{+4.4}\) & 9.8\({}_{+6.3}\) \\ ZSSeg[33] & 20.6 & 27.4 & 23.6 & 82.0 & 71.2 & 76.2 & 5.9 & 2.8 & 3.9 \\ ZSSeg+MAFT & 36.1\({}_{+15.5}\) & 35.9\({}_{+3.3}\) & 36.0\({}_{+12.4}\) & 87.1\({}_{+5.1}\) & 76.1\({}_{+4.9}\) & 81.2\({}_{+5.0}\) & 17.2\({}_{+11.3}\) & 7.2\({}_{+4.4}\) & 10.2\({}_{+6.3}\) \\ FreeSeg[27] & 22.3 & 29.3 & 25.3 & 87.4 & 74.7 & 80.5 & 6.5 & 2.8 & 3.9 \\ FreeSeg+MAFT & 40.1\({}_{+17.8}\) & 49.7\({}_{+9.4}\) & 44.4\({}_{+19.3}\) & 90.4\({}_{+3.0}\) & 84.7\({}_{+10.0}\) & 87.5\({}_{+7.0}\) & 21.3\({}_{+14.8}\) & 8.7\({}_{+5.9}\) & 12.2\({}_{+8.3}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Results for the representative methods [6; 33; 27] with/without MAFT. Here we remove the _ensemble_ operation and only keep the CLIP classifier results.

* **ZegFormer** (CVPR 2022) [6] is an early adopter of the "frozen CLIP" paradigm. It uses MaskFormer as the Proposal Generator and employs an _ensemble_ operation to improve the confidence of the results.
* **ZSSeg** (ECCV 2022) [33] uses MaskFormer as the Proposal Generator and introduces learnable prompts to improve classification accuracy, which significantly influenced subsequent methods. ZSSeg also adopts a self-training strategy; this strategy is excluded from all methods for a fair comparison.
* **FreeSeg** (CVPR 2023) [27] represents the state-of-the-art method; it unifies semantic, instance, and panoptic segmentation tasks and uses annotations from all three tasks for fusion training. We retrain FreeSeg with only the semantic annotations to ensure fairness.

**Implementation details.** We employ ResNet101 as the backbone of the Proposal Generator and the ViT-B/16 CLIP model. The training process consists of two stages. In the **first** stage, we follow the official code of ZegFormer, ZSSeg and FreeSeg for model training. In the **second** stage, we fine-tune the IP-CLIP Encoder with MAFT. We use a batch size of 16 and set the CLIP input image size to 480\(\times\)480. The optimizer is AdamW with a learning rate of 0.00001 and a weight decay of 0.00001. The number of training iterations is set to 100 for Pascal-VOC, 1000 for COCO-Stuff and 5000 for ADE20K.

### Comparisons with State-of-the-art Methods

In this section, three representative methods [6; 33; 27] are used to evaluate the effectiveness of MAFT. We compare the three representative methods with MAFT and with the frozen CLIP. Additionally, we compare the results with previous state-of-the-art methods [31; 1; 10; 25].

**Comparisons in the _zero-shot_ setting.** In Tab. 1, MAFT remarkably improves the performance. MAFT improves the state-of-the-art performance by +8.2% on COCO, +3.2% on Pascal, and +4.3% on ADE20K in terms of mIoU for unseen classes. It is important to note that the results for seen classes are mainly based on \(A^{p}\) rather than \(A^{c}\) due to the _ensemble_ operation in [6; 33; 27] (details in Sec. 3). Therefore, the effect of MAFT on the seen classes is relatively insignificant.
**Comparisons without ensemble strategy.** To better showcase the performance gains from MAFT, we remove the _ensemble_ operation in [6; 33; 27] and present the results in Tab. 2. It can be seen that the performance of the different methods is significantly improved after applying MAFT. In particular, the state-of-the-art method FreeSeg achieves hIoU improvements of 19.1%, 7.0%, and 8.3% on the COCO, VOC2012 and ADE20K datasets.

**Comparisons in the _open-vocabulary_ setting.** We further evaluate the transferability of MAFT in the _open-vocabulary_ setting [20; 33], using FreeSeg as a baseline for comparison. Results are shown in Tab. 3. Compared with OVSeg [20] and OpenSeg [9], FreeSeg achieves suboptimal performance. However, the proposed MAFT enhances the performance on A-847, A-150, PC-459, PC-59 and PAS-20 by 3.0%, 11.2%, 6.4%, 19.1% and 4.4%, and outperforms OpenSeg on all five datasets.

\begin{table} \begin{tabular}{l|c c c c c} \hline \hline & **A-847** & **A-150** & **PC-459** & **PC-59** & **PAS-20** \\ \hline SPNet[31] & - & - & - & 24.3 & 18.3 \\ ZS5[1] & - & - & - & 19.4 & 38.3 \\ LSeg+[8] & 2.5 & 13.0 & 5.2 & 36.0 & 59.0 \\ OVSeg[20] & 7.1 & 24.8 & 11.0 & 53.3 & 92.6 \\ OpenSeg* [9] & 8.8 & 28.6 & 12.2 & 48.2 & 72.2 \\ FreeSeg[27] & 7.1 & 17.9 & 6.4 & 34.4 & 85.6 \\ FreeSeg +MAFT & 10.1\({}_{\text{+3.0}}\) & 29.1\({}_{\text{+11.2}}\) & 12.8\({}_{\text{+6.4}}\) & 53.5\({}_{\text{+19.1}}\) & 90.0\({}_{\text{+4.4}}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with state-of-the-art methods in the _open-vocabulary_ setting. mIoU is used to evaluate the performance. * denotes that additional training data is used.

### Ablation Study

We conduct ablation studies on various design choices of our MAFT to show their contributions to the final results in Tab. 4. FreeSeg is used as the baseline model and the _ensemble_ operation is removed.

**Component-wise ablations.** To understand the effect of each component of MAFT, including the IP-CLIP Encoder and the fine-tuning strategy (\(\mathcal{L}_{ma}\), \(\mathcal{L}_{dis}\)), we start with standard FreeSeg and progressively add each design (Tab. 4(a)). FreeSeg uses the frozen CLIP and yields inferior performance due to CLIP's mask-unaware property (\(1^{st}\) row). Then, the IP-CLIP Encoder obtains rich context information and greatly reduces the computational costs, resulting in an improvement of 7.1% on seen classes and 6.9% on unseen classes. However, mask-awareness is not yet accomplished at this point. Using only \(\mathcal{L}_{ma}\) for fine-tuning CLIP produces decent performance (\(3^{rd}\) result). The introduction of \(\mathcal{L}_{dis}\) (the \(4^{th}\) result) maintains transferability while learning mask-aware representations, which further enhances the performance on unseen classes by 2.6%.

**Effect of different \(\mathcal{L}_{ma}\).** The _mask-aware_ loss \(\mathcal{L}_{ma}\) is an essential component of MAFT. In Tab. 4(b), we investigate how different loss functions (\(L1\), \(L2\), \(SmoothL1\) and \(KL\) Loss) impact performance; here we remove \(\mathcal{L}_{dis}\) for the analysis. The results show that the \(SmoothL1\) Loss boosts the performance on \(C_{unseen}\) to 47.1% (+17.8%), while the \(KL\) Loss provides a +12.5% improvement on \(C_{seen}\) but only +11.8% on \(C_{unseen}\), indicating that the \(KL\) Loss compromises the model's transferability compared with the \(SmoothL1\) Loss.

**Training iterations.** Tab. 4(c) examines the impact of the number of training iterations. Increasing the number of iterations leads to a gradual improvement of mIoU\({}^{s}\), but it also results in significant overfitting on unseen classes. Therefore, we choose to fine-tune for 1k iterations to maximize the zero-shot ability.

**Frozen units in CLIP.** We also explore the impact of fine-tuning different units within the IP-CLIP Encoder. As illustrated in Fig. 2, the IP-CLIP Encoder comprises convolution layers (dubbed \(conv.\)), class embedding (\(cls.\)), Transformer layers, final projection (\(proj.\)) and positional embedding (\(pos.\), not shown in Fig. 2). We start with fine-tuning the entire IP-CLIP Encoder and then freeze each unit sequentially, as specified in Tab. 4(d). We only freeze the \(MLP\) in the Transformer layers (dubbed \(mlp\)). Compared with fine-tuning the entire IP-CLIP Encoder, the performance in terms of mIoU\({}^{u}\) is improved by 5.0% when freezing \(conv.\), \(cls.\), \(pos.\) and \(mlp\).

**Start mask attention layer.** Tab. 4(e) presents the results for the start mask attention layer (\(L\)). We observe a significant improvement in the performance on unseen classes of +3.4% when the value of \(L\) increases from 0 to 8. This could be attributed to the fact that starting masked Multihead Attention later enables \(F_{cls}^{i*}\) to gain more context information. However, the performance drops significantly when \(L=10\) (from 49.7% to 45.7%), which may be due to the loss of the mask-aware property.

Table 4: **Ablations on the COCO dataset.** GFLOPs in (a) are used to measure the computation of the CLIP Image Encoder. The best results are highlighted with red, and the default settings are highlighted with a gray background.

### Extending MAFT with SAM

We explore using the Segment Anything Model [18] (SAM) as the proposal generator. We evaluate the performance with SAM-H using an original CLIP (dubbed \(\mathrm{SAM}\)) or a mask-aware fine-tuned CLIP (dubbed \(\mathrm{SAM}+\mathrm{MAFT}\)). In fact, SAM can be seamlessly integrated into our framework as the proposal generator. The results are shown in Tab. 5. Experiments are conducted under both the _zero-shot_ setting and the _open-vocabulary_ setting. It can be observed that \(\mathrm{SAM}+\mathrm{MAFT}\) obtains a significant improvement over \(\mathrm{SAM}\) under both settings. Besides, \(\mathrm{SAM}+\mathrm{MAFT}\) also surpasses \(\mathrm{FreeSeg}+\mathrm{MAFT}\) on all benchmarks. Particularly, in the zero-shot setting (Pascal-VOC), \(\mathrm{SAM}+\mathrm{MAFT}\) outperforms \(\mathrm{FreeSeg}+\mathrm{MAFT}\) by 6.8% in terms of mIoU\({}^{u}\). This enhancement can be attributed to the stronger generalization capabilities of SAM for unseen classes.

\begin{table} \begin{tabular}{l|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{Pascal-VOC} & \multicolumn{4}{c}{COCO-Stuff} \\ & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hIoU** & **mIoU** & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hIoU** & **mIoU** \\ \hline SAM & 85.1 & 86.7 & 85.9 & 85.5 & 43.1 & 43.3 & 43.2 & 42.1 \\ SAM + MAFT & 91.0\({}_{+5.9}\) & 88.6\({}_{+1.9}\) & 89.8\({}_{+3.9}\) & 90.4\({}_{+4.9}\) & 43.4\({}_{+0.3}\) & 51.5\({}_{+8.2}\) & 47.1\({}_{+3.9}\) & 44.1\({}_{+2.0}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison with SAM. We use SAM-H as the proposal generator.

### Extending MAFT with more Vision-Language Models

In order to demonstrate the efficacy and robustness of MAFT, we conduct experiments using a stronger (CLIP-ViT-L) and a ResNet-based (CLIP-Res50) Vision-Language Model. The open-vocabulary results are shown in Tab. 6; we also include the results of OVSeg with CLIP-ViT-L for comparison.

\begin{table} \begin{tabular}{l|c|c c c c c} \hline \hline & **backbone** & **A-847** & **A-150** & **PC-459** & **PC-59** & **PAS-20** \\ \hline OVSeg [20] & & 9.0 & 29.6 & 12.4 & 55.7 & 94.5 \\ FreeSeg [27] & ViT-L & 8.5 & 21.0 & 7.6 & 33.8 & 86.4 \\ FreeSeg + MAFT & & 12.1\({}_{+3.6}\) & 32.0\({}_{+11.0}\) & 15.7\({}_{+8.1}\) & 58.5\({}_{+24.7}\) & 92.1\({}_{+5.7}\) \\ FreeSeg [27] & Res50 & 5.3 & 15.5 & 5.4 & 28.2 & 87.1 \\ FreeSeg + MAFT & & 8.4\({}_{+3.1}\) & 27.0\({}_{+11.5}\) & 9.9\({}_{+4.5}\) & 50.8\({}_{+22.6}\) & 89.0\({}_{+1.9}\) \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison with more Vision-Language Models.

**CLIP-ViT-L.** According to Tab. 6, FreeSeg with a standard CLIP-ViT-L model (dubbed \(\mathrm{FreeSeg}\)) still cannot achieve satisfactory results. However, by integrating our MAFT (dubbed \(\mathrm{FreeSeg}+\mathrm{MAFT}\)), the segmentation results are remarkably enhanced, establishing new state-of-the-art benchmarks.

**CLIP-Res50.** Our MAFT can be easily adapted to ResNet-based models. Specifically, we modify the \(\mathrm{AttentionPool2d}\) unit within the CLIP-R50 Image Encoder. The mask proposals are introduced as attention bias (\(B\)) in Multihead Attention, with \(F_{cls}\) being repeated \(N\) times. Note that in CLIP-R50, \(F_{cls}\) is obtained via \(\mathrm{GlobalAveragePooling}\) performed on \(F_{feat}\). The results are presented in Tab. 6. The performance on all 5 datasets is improved by a large margin. \(\mathrm{FreeSeg}+\mathrm{MAFT}\) with CLIP-R50 achieves results competitive with some CLIP-ViT-B-based methods according to Tab. 3.

### Qualitative Study

**Visualizations of typical proposals.** Fig. 3 shows frozen CLIP and mask-aware CLIP classifications of typical proposals, including high-quality proposals of the foreground (\(p_{1}\), \(p_{4}\)), high-quality proposals of the background (\(p_{3}\), \(p_{6}\)), a proposal with background noise (\(p_{2}\)), and a proposal containing part of the foreground (\(p_{5}\)). The proposal regions are highlighted in green or yellow. Several observations can be made: (1) The frozen CLIP provides good predictions for \(p_{1}\) and \(p_{4}\). (2) The frozen CLIP assigns \(p_{2}\) to \(cat\) and \(p_{5}\) to \(horse\), with scores even higher than for \(p_{1}\) and \(p_{4}\), indicating that the frozen CLIP cannot distinguish proposals containing information on the same objects. (3) The frozen CLIP fails to give correct predictions for \(p_{3}\) and \(p_{6}\), which may be due to the lack of context information. (4) Our mask-aware CLIP gives good predictions for the high-quality proposals (\(p_{1}\), \(p_{3}\), \(p_{4}\), \(p_{6}\)) and provides suitable predictions for \(p_{2}\) and \(p_{5}\).

**Qualitative analysis.** We show some visual examples in Fig. 4. Some segmentation results of FreeSeg contain background noise (_e.g._ the \(1^{st}\) & \(2^{nd}\) rows, \(3^{rd}\) column) or contain only part of the objects (\(3^{rd}\) row, \(3^{rd}\) column). In the ADE20K-847 dataset, having very many classes may lead to unexpected results with the frozen CLIP (last row, \(3^{rd}\) column). Using a mask-aware CLIP to learn mask-aware representations can significantly improve these segmentation results, as is evident from the last column. More visual samples are shown in the Appendix.

## 6 Conclusion

In this paper, we rethink the "frozen CLIP" paradigm in zero-shot segmentation and propose Mask-Aware Fine-tuning (MAFT) for fine-tuning CLIP.
Firstly, the IP-CLIP Encoder is proposed to handle images with any number of mask proposals. Then, \(\mathcal{L}_{ma}\) and \(\mathcal{L}_{dis}\) are designed for fine-tuning CLIP to be mask-aware without sacrificing its transferability. MAFT is plug-and-play and can be applied to any "frozen CLIP" approach. Extensive experiments demonstrate that the performance of various zero-shot segmentation methods is improved by plugging in MAFT.

**Limitations.** Our MAFT introduces a CLIP fine-tuning framework to the research of zero-shot segmentation. However, the classification ability for novel classes is still limited by the pre-trained vision-language models. How to further ease this limitation is the focus of our future research.

Figure 4: Qualitative results. The models are trained with COCO-Stuff and directly tested on VOC2012, COCO, and ADE20K.

Figure 3: Visualizations of typical proposals & top-5 \(A^{c}\) by the frozen CLIP and the mask-aware CLIP.

Here we introduce the technical details of the "frozen CLIP" approaches in Sec. A. The dataset settings are shown in Sec. B. Moreover, we provide additional experiments in Sec. C, and additional qualitative results in Sec. D.

## Appendix A Technical details of the "frozen CLIP" approaches

Fig. 5 presents an overview of the "frozen CLIP" approach. **During training,** a standard MaskFormer or Mask2Former is used as the Proposal Generator to generate \(N\) mask proposals (\(M\), \(M\in\mathbb{R}^{N\times h\times w}\)) and classification scores (\(A^{p}\), \(A^{p}\in\mathbb{R}^{N\times|C_{seen}|}\)). **During testing,** the input image is merged with \(M\) to obtain \(N\) sub-images (\(I_{sub}\), \(I_{sub}\in\mathbb{R}^{N\times\hat{h}\times\hat{w}}\)). These sub-images are fed into a frozen CLIP to get the CLIP classification score (\(A^{c}\), \(A^{c}\in\mathbb{R}^{N\times|C_{seen}\cup C_{unseen}|}\)). Here \(C_{seen}\) and \(C_{unseen}\) represent the sets of seen and unseen classes. An _ensemble_ operation is used to combine \(A^{p}\) and \(A^{c}\) for the final prediction. The _merge_ and the _ensemble_ operations are introduced in detail in the following.

**Merge operation.** To generate appropriate sub-images based on mask proposals, [6] presents three different _merge_ operations: 1) mask, 2) crop, 3) mask \(\&\) crop. Through experimentation, they demonstrate that the mask \(\&\) crop option yields the best results. Figure 6 provides an example of these operations. It is worth noting that all sub-images are resized to \(\hat{h}\times\hat{w}\); here \(\hat{h}\) and \(\hat{w}\) typically take the value 224, which is the default input size of the CLIP Image Encoder. Although acceptable results can be obtained with the _merge_ operation, it involves repeatedly feeding images into CLIP, which leads to significant computational redundancy.

**Ensemble operation.** Comparatively, \(A^{p}\) provides higher-confidence classification scores for the seen classes, and \(A^{c}\) provides higher-confidence classification scores for the unseen classes. Therefore, an ensemble of \(A^{p}\) and \(A^{c}\) achieves better results.
The _ensemble_ operation can be formulated as: \[\hat{A}(c)=\begin{cases}A^{p}(c)^{\lambda}\cdot A^{c}(c)^{(1-\lambda)}\,,&c\in C^{seen}\\ A^{c}(c)^{\lambda}\,,&c\in C^{unseen}\end{cases} \tag{9}\] where \(\lambda\in[0,1]\) is the ensemble weight balancing the two classification scores.

## Appendix B Dataset settings

* **COCO-Stuff**: COCO-Stuff is a large-scale semantic segmentation dataset that includes 171 classes. For the _zero-shot_ setting [6; 33; 27], it is divided into 156 seen classes for training and 15 unseen classes for testing. For the _open-vocabulary_ setting, all 171 classes are used for training.
* **Pascal-VOC**: There are 10,582 images for training and 1,449 images for testing. For the _zero-shot_ setting, Pascal-VOC is split into 15 seen classes and 5 unseen classes. For the _open-vocabulary_ setting, all 20 classes are used for evaluation (dubbed PAS-20).
* **ADE20K**: ADE20K contains 25k images for training and 2k images for validation. For the _zero-shot_ setting, we follow [6] to choose the 847 classes present in both the training and validation sets, and split them into 572 seen and 275 unseen classes. For the _open-vocabulary_ setting, we use two versions of ADE20K: one with 150 classes (dubbed A-150) and one with 847 classes (dubbed A-847).
* **Pascal-Context** is an extension of Pascal-VOC 2010.
Two versions are used for the _open-vocabulary_ setting, one with the 59 most frequently used classes (dubbed PC-59) and another with the whole set of 459 classes (dubbed PC-459).

## Appendix C Additional experiments

### Analysis of the Upper Bound of MAFT

Considering that the _mask-aware_ loss may be limited if the quality of the proposals is too low, we conduct an evaluation of the upper bound when using Mask2Former as the proposal generator. The results are presented in Tab. 7. Specifically, we replace \(A^{c}\) by \(S_{IoU}\) (the IoU score between binary ground-truth masks and proposals) during inference, and multiply the proposals with \(S_{IoU}\) to obtain the segmentation result. This result can be regarded as the upper bound for the given proposals. Notably, the upper bound achieves satisfactory results (77.6% mIoU), indicating that Mask2Former is capable of providing high-quality proposals in most cases. Additionally, there is still a large gap between the current performance and the upper bound (\(\approx\) 30% mIoU), which suggests that our MAFT has enormous potential for improvement, even though we have already achieved state-of-the-art performance.

### Analysis of the Self-Training (ST) strategy

Several previous approaches [25; 33; 41] adopt the Self-Training (ST) strategy to enhance performance. We conduct experiments to investigate the application of ST to our method. Specifically, we use the existing \(\mathrm{FreeSeg}+\mathrm{MAFT}\) model to generate pseudo-labels for unseen classes on the training data, and then re-train \(\mathrm{FreeSeg}\) with the pseudo-labels. Results are shown in Tab. 8.

\begin{table} \begin{tabular}{c|c c c c} \hline \hline & \multicolumn{4}{c}{COCO-Stuff} \\ & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hIoU** & **mIoU** \\ \hline MAFT & 43.3 & 50.4 & 46.5 & 43.9 \\ Upper Bound & 77.2 & 82.1 & 79.6 & 77.6 \\ \hline \hline \end{tabular} \end{table} Table 7: Upper Bound analysis.

Figure 6: Comparison among three _merge_ operations.

The improvement of ST on the unseen categories is significant (Pascal: 81.8% \(\rightarrow\) 86.3%, COCO: 50.4% \(\rightarrow\) 55.2%) in terms of mIoU\({}^{u}\). However, it is essential to highlight that the applicability of ST is limited by a crucial requirement: **unseen classes need to be obtained during training.** This requirement poses significant limitations on generalizing ST to various scenarios, _e.g._, open-vocabulary settings, since images of unseen classes may not be obtainable during training.

## Appendix D Visualization

We provide more qualitative results, including typical proposals and top-5 \(A^{c}\) (Fig. 7), as well as examples of models trained on COCO-Stuff and tested on A-847 (Fig. 8), A-150 (Fig. 9), PC-459 (Fig. 10), PC-59 (Fig. 11), Pascal-VOC (Fig. 12), and COCO-Stuff (Fig. 13).

**Typical Proposals and Top-5 \(A^{c}\).** Fig. 7 shows frozen CLIP and mask-aware CLIP classifications of typical proposals. In the \(2^{nd}\) column, we provide high-quality proposals of _thing_ classes. Both the frozen CLIP and the mask-aware CLIP provide high classification scores for the correct classes. In the \(3^{rd}\) column, we provide proposals that only contain part of the objects (rows 1-3) and proposals containing more than one class (row 4). The mask-aware CLIP provides more appropriate results than the frozen CLIP. In the \(4^{th}\) column, we provide some high-quality background proposals. The frozen CLIP typically gives incorrect predictions, but the mask-aware CLIP assigns high scores to the correct classes.

**Qualitative Analysis.**
Figs. 8-13 show segmentation results on Pascal-VOC, COCO-Stuff, and ADE20K. In the Pascal-VOC dataset (Fig. 12), which contains only 20 _thing_ classes, the \(\mathrm{FreeSeg}+\mathrm{MAFT}\) model tends to assign background regions to similar _thing_ classes, _e.g._, "train" in row 1, "pottedplant" in rows 3-4, and "boat" in row 8. In the A-847, A-150, PC-459, PC-59 and COCO-Stuff datasets, where both seen and unseen classes exist in the input images, the \(\mathrm{FreeSeg}+\mathrm{MAFT}\) model generates better segmentation results than \(\mathrm{FreeSeg}\).
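For concreteness, the _ensemble_ merge from Appendix B can be written in a few lines of NumPy. The seen-class branch follows the formula above; the exact form of the unseen-class branch and the value of `lam` are assumptions here, not taken from the paper:

```python
import numpy as np

def ensemble_scores(a_p, a_c, seen_mask, lam=0.7):
    """Geometrically blend proposal-classifier scores with CLIP scores.

    a_p, a_c  : (num_proposals, num_classes) score arrays, A^p and A^c
    seen_mask : (num_classes,) boolean array, True for seen classes
    lam       : mixing weight; the value 0.7 is an arbitrary placeholder
    """
    seen_branch = a_p ** lam * a_c ** (1.0 - lam)    # seen: trust the trained head more
    unseen_branch = a_c ** lam * a_p ** (1.0 - lam)  # unseen: trust CLIP more (assumed form)
    return np.where(seen_mask, seen_branch, unseen_branch)
```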
2309.12727
In-context Interference in Chat-based Large Language Models
Large language models (LLMs) have had a huge impact on society due to their impressive capabilities and vast knowledge of the world. Various applications and tools have been created that allow users to interact with these models in a black-box scenario. However, one limitation of this scenario is that users cannot modify the internal knowledge of the model, and the only way to add or modify internal knowledge is by explicitly mentioning it to the model during the current interaction. This learning process is called in-context training, and it refers to training that is confined to the user's current session or context. In-context learning has significant applications, but also has limitations that are seldom studied. In this paper, we present a study that shows how the model can suffer from interference between information that continually flows in the context, causing it to forget previously learned knowledge, which can reduce the model's performance. Along with showing the problem, we propose an evaluation benchmark based on the bAbI dataset.
Eric Nuertey Coleman, Julio Hurtado, Vincenzo Lomonaco
2023-09-22T09:18:55Z
http://arxiv.org/abs/2309.12727v1
# In-context Interference in Chat-based Large Language Models ###### Abstract Large language models (LLMs) have had a huge impact on society due to their impressive capabilities and vast knowledge of the world. Various applications and tools have been created that allow users to interact with these models in a black-box scenario. However, one limitation of this scenario is that users cannot modify the internal knowledge of the model, and the only way to add or modify internal knowledge is by explicitly mentioning it to the model during the current interaction. This learning process is called in-context training, and it refers to training that is confined to the user's current session or context. In-context learning has significant applications, but also has limitations that are seldom studied. In this paper, we present a study that shows how the model can suffer from interference between information that continually flows in the context, causing it to forget previously learned knowledge, which can reduce the model's performance. Along with showing the problem, we propose an evaluation benchmark based on the bAbI dataset.

## 1 Introduction

Chat-based Large Language Models (LLMs) (Devlin et al., 2019; OpenAI, 2023) have gained significant attention in the last year due to their impressive capabilities and potential to perform a wide range of tasks (Brown et al., 2020). These models have been used in various contexts, and multiple experts have been surprised by their remarkable ability to maintain a fluid conversation, answering questions with information stored in their weights and with information acquired within each session. Despite this conversational ability, the inability to modify model weights in many applications means that the only way to add relevant information is through prompts in the same context (Brown et al., 2020; Dai et al., 2022). The ability of a model to learn and accumulate knowledge without modifying its weights is a critical capability of current and future LLMs that has not been thoroughly examined or studied (Wu et al., 2023). Therefore, in this paper, we take a step toward understanding the limitations and strengths of in-context learning in chat-based LLMs by studying how the model behaves when we continually add new knowledge. This understanding can provide insights into how we can mitigate possible limitations and further improve the performance of these models, ensuring reliable and efficient interactions with users. The main contributions of this paper can be summarized as follows. First, we propose a benchmark to evaluate the accumulation and retention of information in LLMs, mainly to understand their capabilities and limitations in in-context learning. Second, using the proposed benchmark, we show evidence that these models can suffer from interference inside the same chat session, with performance decreasing as we add new information. This is critical: for some applications it may not matter if the model forgets previous information within a session, but there are cases where it is crucial that the model provides reliable information. Third, we provide insights into the current ability of these models to learn, retain, and reason in in-context scenarios. Addressing the problems presented in this work will facilitate the development of more robust and efficient language models and will pave the way for their successful integration into various applications and domains.
## 2 Related Works

In this section, we make a brief analysis of the different research areas that intersect with our work.

**In-Context Learning** Recent papers explore in-context learning in chat-based LLMs, showing the importance of this training process (Garg et al., 2023). Coda-Forno et al. (2023) presents meta-in-context learning, showing recursive improvement of in-context learning on various tasks while using meta-learning. Others have compared the generalization of few-shot fine-tuning and in-context learning, emphasizing the role of model size and examples (Mosbach et al., 2023).

**Question Answering in LLMs** Question answering is a key application of LLMs. Recent research has advanced this area with novel methods, like LAMOC, which uses language model-guided captioning for knowledge based visual question answering (Du et al., 2023). The McL-KBQA framework leverages in-context learning for knowledge base question answering in low-resource settings (Tan et al., 2023). The TempReason dataset measures and improves LLMs' temporal reasoning capability (Tan et al., 2023). ChatGPT's performance on complex questions reveals the challenges of long-term reasoning for chat-based LLMs (Tan et al., 2023).

**Continual Learning in LLMs** Adding new knowledge continually without forgetting is the focus of Continual Learning (CL). Most works in CL for LLMs focus on training soft prompts (Wang et al., 2022; Razdaibiedina et al., 2023) or adapters (Ke et al., 2023). These works are mainly focused on training the model weights for specific tasks and problems, unlike ours, which seeks to study how in-context learning influences performance, that is, without modifying model weights. Others have studied the behavior of different methods for learning continually (Araujo et al., 2022; Fu et al., 2023).

## 3 In-Session evaluation

Normally, users interact with chat-based LLMs in terms of sessions or contexts. If we assume no access to the internet, each session is a closed environment where the user interacts with the model as a black box, meaning that the user has no access to directly change or condition the weights of the model, and the only means of interaction is through inputs known as prompts via the chat interface. Ideally, when users interact with these models, they expect the model to remember previous interactions inside the same context. This is known as in-context learning (Zhou et al., 2022), where the model learns from the information that the user provides as input, and users expect a perfect memory.

To test the capability and limitations of remembering previous interactions, we propose to submit a sequence of stories or facts and test how the performance of the model evolves as the amount of stored information increases. Starting from an input \(s_{0}\), we ask questions \(q_{0}\) associated with input \(0\) to obtain performance \(per_{0}\). At time step \(i\), we provide the model with an input \(s_{i}\) and ask questions \(q_{\leq i}\), meaning that we ask all questions from input \(0\) to \(i\) to obtain performance \(per_{i}\). If a model can correctly answer a question in a single context, we expect that the performance over time should not decrease as we add more information. However, as we will show in our experiments, the model suffers from interference and the performance decreases as we increase the number of facts.
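This protocol is easy to state in code. The sketch below assumes a hypothetical black-box wrapper `model.ask(context, question)` around the chat interface and uses exact-match scoring on the one-word answers:

```python
def evaluate_incremental(model, stories, questions, answers):
    """Accumulate stories in one context; after each story s_i, re-ask q_{<=i}.

    stories[i]   : text of story s_i
    questions[i] : questions q_i about story s_i
    answers[i]   : one-word gold answers for questions[i]
    Returns the list [per_0, per_1, ...] of accuracies over time.
    """
    context, per = "", []
    for i, story in enumerate(stories):
        context += story + "\n"              # new information enters the context
        correct = total = 0
        for j in range(i + 1):               # all questions from input 0 to i
            for q, a in zip(questions[j], answers[j]):
                pred = model.ask(context, q)
                correct += int(a.lower() in pred.lower())
                total += 1
        per.append(correct / total)
    return per                               # interference shows as a downward trend
```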
### Benchmark

To evaluate in-context learning, we need a sequence of inputs that the model should be able to solve without prior knowledge. For this reason, we need to evaluate only the retention capacity for the information delivered in the context. Starting from a subset of the bAbI dataset (Weston et al., 2016), we create a sequence of stories that give the model new information. The only way the model can correctly answer the questions is by using the information extracted from inside the context.

Given the complexity of the original dataset, we propose two modifications. In the first modification, we replaced all the default names of the entities in the dataset with unique names. By doing this, we ensured that the model could correctly identify the entities in the stories and answer questions about them, even when the entities were not explicitly named in the questions. In the second modification, we simplified the structure by including only the last two statements in each story, along with a single question about the last entity mentioned. This allowed us to assess whether the model could remember the context of the story and answer questions accurately, without the complexity of the original dataset. Figure 1 provides an example of stories created from their original version.

A limitation of these models is the token limit. Two factors must be taken into consideration here: the first is the computational capacity required as we increase the number of tokens, and the second is the context size that still ensures effective performance of the model. To account for this, we chose 50 stories that the model could correctly classify.

### Experimental setup

To evaluate the previous scenario, we use the Vicuna model (Chiang et al., 2023) within the LangChain (Chase, 2022) framework. The main reason for using this model is that it is open source, which helps us to extend the experiments to other types of adaptation, as we describe in future work. Specifically, using the Hugging Face library (Wolf et al., 2020), we load the **Vicuna-13B** model, a LLaMA-based model for text generation. Due to computational limitations, we set the maximum number of tokens to 2048. Because the answers we are looking for are only one word long, we want to reduce the probability of longer answers; we empirically found that a temperature of \(0.7\) provides the best overall performance. Following previous work on in-context learning, we teach the model via a prompt that can be found in the Appendix.

## 4 Experiments

### One Context-One Story

As mentioned in the previous section, to correctly detect whether a model suffers from interference between new and old knowledge, it is important to detect when the performance decreases. To this end, we must first verify that the model can correctly answer all questions in a one-story-per-context scenario, to avoid noisy results that could affect the conclusions. We therefore first perform experiments with only one story and the corresponding questions per context, resetting the prompts after each interaction. The accuracy when using the full story is 58%, as shown in Table 1. This low accuracy and the high number of tokens per story encourage us to propose a simplified version of the dataset. When testing the performance on the simplified version, the model obtained 100%, showing that the complexity of the original dataset could otherwise interfere with the conclusions drawn.
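A rough sketch of this setup with the Hugging Face library; the checkpoint name and the prompt are illustrative, while the 2048-token budget and the temperature of 0.7 follow the text:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "lmsys/vicuna-13b-v1.3"  # illustrative checkpoint; the paper only says Vicuna-13B
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map needs `accelerate`
)
generate = pipeline("text-generation", model=model, tokenizer=tokenizer)

# The 2048-token limit applies to the full prompt (context plus question).
prompt = (
    "Answer each question with a single word.\n"
    "Story: Avelina went to the garden. Avelina grabbed the football.\n"
    "Question: Who has the football? Answer:"
)
out = generate(prompt, max_new_tokens=8, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```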
### Incremental stories

When learning continually, one can expect that the model will accumulate all the information without forgetting previous interactions. This means that if the model correctly answers a question but, once new information is added, forgets the previous answer, we can identify an interference problem. Our hypothesis is that when new information is delivered, an interference problem occurs that confuses or erases some facts. Understanding this limitation would allow us to study the phenomenon and propose methods to accumulate information cleanly, avoiding the need to retrain the model or inflate the number of tokens.

As shown in Figure 2, as we add new knowledge to the prompts of the model, the performance decreases from 100% with only one story to around 75% when we have eight stories in the same context. A similar effect appears when using the original stories, where the model shows a decrease in performance. It is important to note that we cannot significantly increase the number of stories, since there is a limit on the number of tokens the model can receive. Some studies have shown that it is possible to increase the number of tokens (Bulatov et al., 2023); however, this normally increases the computational cost, as shown in Table 2, where we see that as we increase the size of the prompt (\(\#\) of stories and length), the cost of delivering a response increases.

\begin{table} \begin{tabular}{l c c} \hline \hline & **Full Story** & **Short Story** \\ \hline Accuracy & 58\% & 100\% \\ Avg. \# Tokens & 67 & 19 \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy and average number of tokens for both benchmarks. As expected, the complexity of the full benchmark makes the accuracy lower; when we simplify it, we obtain a higher accuracy.

Figure 1: Modifications made to the original stories to simplify the problem. First, we replace all default names with unique names to minimize the risk of confusion. Second, we shorten the story, leaving only the last two sentences, removing the complexity of reasoning.

Figure 2: Evolution of the accuracy as we increase the number of stories in the context. As expected, as we increase the information in the context, we observe a decrease in the performance of the model.

The interference between old and new information is not something new. As we train the weights of deep learning models, the modification of the weights causes a problem known as Catastrophic Forgetting (CF) (McCloskey and Cohen, 1989), which is related to the constant modification of the weights of the model when we train new tasks. However, it is a different process from the one presented here, where the model's weights are not modified. The interference in this case is at the information level: the model is not able to correctly identify the relevant parts of all the information delivered.

#### 4.2.1 Summarizing

Similar to the CF problem, we need to devise ways to minimize interference between the pieces of information the model accumulates. One way is to apply the summarizing capabilities that these models have built in; this can reduce the information while keeping relevant knowledge in the buffer. By keeping what is strictly necessary, the model can compress the information, reducing the interference from unnecessary information and improving performance. However, as we can observe in Figure 3, when summarizing the information the model does not maintain its performance.
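A minimal sketch of the two mitigation strategies compared in Figure 3, a rolling summary versus a fixed-size story buffer; here `summarize` stands in for any model-based summarizer and is a hypothetical callable, not an API from the paper:

```python
from collections import deque

class StoryBuffer:
    """Context manager for long chats: keep at most `k` stories (k = 6 in the paper).

    If a `summarize` callable is given, evicted stories are folded into a
    running summary instead of being discarded outright.
    """
    def __init__(self, k=6, summarize=None):
        self.summarize = summarize
        self.stories = deque(maxlen=k)
        self.summary = ""

    def add(self, story):
        if self.summarize and len(self.stories) == self.stories.maxlen:
            evicted = self.stories[0]  # oldest story is about to drop out
            self.summary = self.summarize(self.summary + "\n" + evicted)
        self.stories.append(story)

    def context(self):
        parts = ([self.summary] if self.summary else []) + list(self.stories)
        return "\n".join(parts)
```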
One explanation for the above is that the model may be suffering interference from the responses it has itself delivered (which are part of the context). For this reason, instead of summarizing the previous information, we decided to delete old stories and keep only \(6\) of them in the buffer. By removing old stories, we remove the option for the model to change its response (for better or worse), but more importantly we reduce the number of tokens that cause interference. Figure 3 shows how the model is able to maintain its performance even with a high number of stories.

\begin{table} \begin{tabular}{l c c c c c} \hline \# Stories & **1** & **2** & **3** & **4** & **5** \\ \hline Short & 34s & 38s & 44s & 58s & 63s \\ Full & 46s & 49s & 71s & 78s & 85s \\ \hline \end{tabular} \end{table} Table 2: Time in seconds to generate an answer as we increase the number of stories. By using the smaller simplified version of the stories, we reduce the number of tokens, which translates into a decrease in response time.

Figure 3: To mitigate the interference, we propose summarizing or removing previous information. By summarizing the previous story, we expect the model to keep only relevant information; however, we observed a constant decrease in performance. On the other hand, by removing old stories and keeping only 6 in the buffer, the model is able to reduce the interference.

## 5 Conclusion

This paper introduces a new method to evaluate how chat-based LLMs are affected by interference in in-context learning. We propose a benchmark based on the bAbI dataset to test how well the models can accumulate information in the context. We find that adding new information causes interference that can harm the performance of the models in some scenarios where the user has no control over them. This is a preliminary study of the limitations of in-context learning, but it can be expanded to other related issues, such as the interference between the context-based knowledge and the pre-trained knowledge of the models.

### Limitations

Despite the efforts put into carrying out the experiments, it is important to note that the results are obtained from a single model (Vicuna-13B). Although these experiments can also be carried out on private models, it is important to highlight that, in order to continue this line of research, it is necessary to have complete availability of the models, for example to study how the accumulation of in-context knowledge can affect previously acquired knowledge stored in the weights or adapters of a model.

## 6 Acknowledgements

This research was supported by Leonardo Labs. We acknowledge and appreciate them for their valuable feedback and guidance throughout this project.
2309.14356
COCO-Counterfactuals: Automatically Constructed Counterfactual Examples for Image-Text Pairs
Counterfactual examples have proven to be valuable in the field of natural language processing (NLP) for both evaluating and improving the robustness of language models to spurious correlations in datasets. Despite their demonstrated utility for NLP, multimodal counterfactual examples have been relatively unexplored due to the difficulty of creating paired image-text data with minimal counterfactual changes. To address this challenge, we introduce a scalable framework for automatic generation of counterfactual examples using text-to-image diffusion models. We use our framework to create COCO-Counterfactuals, a multimodal counterfactual dataset of paired image and text captions based on the MS-COCO dataset. We validate the quality of COCO-Counterfactuals through human evaluations and show that existing multimodal models are challenged by our counterfactual image-text pairs. Additionally, we demonstrate the usefulness of COCO-Counterfactuals for improving out-of-domain generalization of multimodal vision-language models via training data augmentation.
Tiep Le, Vasudev Lal, Phillip Howard
2023-09-23T00:16:47Z
http://arxiv.org/abs/2309.14356v2
# COCO-Counterfactuals: Automatically Constructed Counterfactual Examples for Image-Text Pairs ###### Abstract Counterfactual examples have proven to be valuable in the field of natural language processing (NLP) for both evaluating and improving the robustness of language models to spurious correlations in datasets. Despite their demonstrated utility for NLP, multimodal counterfactual examples have been relatively unexplored due to the difficulty of creating paired image-text data with minimal counterfactual changes. To address this challenge, we introduce a scalable framework for automatic generation of counterfactual examples using text-to-image diffusion models. We use our framework to create COCO-Counterfactuals, a multimodal counterfactual dataset of paired image and text captions based on the MS-COCO dataset. We validate the quality of COCO-Counterfactuals through human evaluations and show that existing multimodal models are challenged by our counterfactual image-text pairs. Additionally, we demonstrate the usefulness of COCO-Counterfactuals for improving out-of-domain generalization of multimodal vision-language models via training data augmentation. We make our code2 and the COCO-Counterfactuals dataset3 publicly available. Footnote 2: [https://github.com/Intelabs/multimodal_cognitive_si/tree/main/COCO-Counterfactuals](https://github.com/Intelabs/multimodal_cognitive_si/tree/main/COCO-Counterfactuals) Footnote 3: [https://huggingface.co/datasets/Intel/COCO-Counterfactuals](https://huggingface.co/datasets/Intel/COCO-Counterfactuals) ## 1 Introduction While vision and language models have achieved remarkable performance improvements in recent years, out-of-domain (OOD) generalization remains a challenge for even the best models, which typically exhibit much lower performance in zero-shot evaluation settings than on withheld in-domain test sets. This has often been attributed to spurious correlations between non-causal features and labels in datasets which can be exploited during training as shortcuts to achieving artificially high in-domain performance (Geirhos et al., 2020). For example, image recognition models often learn to utilize spurious features in the backgrounds of images when trained for classification on datasets such as ImageNet (Singla and Feizi, 2021; Xiao et al., 2020). Augmenting training datasets with counterfactuals, which study the impact on a response variable following a change to a causal feature, has been previously proposed as a strategy for countering this effect in NLP models (Levesque et al., 2012; Kaushik et al., 2019). Motivated by concepts in causal learning (Feder et al., 2022), these methods typically form counterfactual examples by making minimal edits to an input text such that a corresponding label or attribute of the text (e.g., sentiment) is changed. Training models with counterfactual examples therefore provides a strong inductive bias against learning spurious correlations in datasets, leading to greater robustness and improved generalization on OOD data (Eisenstein, 2022; Vig et al., 2020) as well as enabling better domain adaptation in low resource settings (Calderon et al., 2022). Despite its success in the realm of NLP, the application of counterfactual data augmentation to multimodal vision-language models has largely remained unexplored, mainly due to low-resource settings involving multimodal data and challenges associated with creating paired counterfactual examples spanning multiple modalities. 
For example, consider the task of creating counterfactual examples for a multimodal dataset containing images with associated text captions. Creating a counterfactual to a given image-text example requires not only minimally editing a causal feature in the text caption, but also making a corresponding minimal edit to the image which ideally modifies only the changed causal feature while preserving other spurious features from the original image. Collecting such counterfactual examples from existing image datasets is infeasible due to the massive variation in natural images that can accurately depict even identical text captions. While manual creation of counterfactual examples by humans is an option that has been employed previously for NLP (Kaushik et al., 2019; Gardner et al., 2020), this approach suffers from a lack of scalability due to the high cost of human labor, which would be compounded even further for multimodal counterfactuals due to the need for both text and image editing skills. Given these challenges, how can paired image-text counterfactual examples be created at the scale needed for effective model evaluation and data augmentation? We address this problem by introducing a novel data generation pipeline for automatically creating multimodal counterfactual examples using text-to-image diffusion models. Our approach minimally edits captions from an existing image-text dataset and then leverages Stable Diffusion (Rombach et al., 2021) with cross-attention control (Hertz et al., 2022) to generate pairs of images with minimal differences (i.e., isolated to the counterfactual change). We employ our data generation pipeline to create at scale **COCO-Counterfactuals** (Figure 1), a counterfactual variant of the MS-COCO dataset (Lin et al., 2014). We validate the quality of COCO-Counterfactuals using human evaluations and conduct zero-shot experiments showing that state-of-the-art multimodal models are challenged by our generated counterfactual examples. Our additional experiments show that training CLIP (Radford et al., 2021) on COCO-Counterfactuals improves its performance on multiple OOD datasets, including zero-shot tasks not seen during training. We make COCO-Counterfactuals and the code for our counterfactual data generation pipeline publicly available under the CC BY 4.0 License.

Figure 1: Examples of COCO-Counterfactuals, our minimal-edit counterfactuals dataset for images with paired text captions.

## 2 Related Work

### Counterfactual Examples for NLP

Counterfactual data augmentation has been shown to improve the robustness of models across a wide range of problem domains in NLP. Kaushik et al. (2019) demonstrated that human-authored counterfactuals pose a significant challenge for existing models and that augmenting training datasets with counterfactual examples improves sentiment analysis and Natural Language Inference classifiers. Gardner et al. (2020) similarly used human experts to author minimally-edited contrast example sets for 10 NLP datasets and showed that model performance evaluated on them drops substantially. A number of approaches have been proposed to move beyond reliance on human authors towards automated methods for generating counterfactual examples. Wang and Culotta (2021) and Yang et al. (2021) automatically construct counterfactual examples by identifying and removing or replacing potentially causal words. Howard et al.
(2022) introduce a framework for generating looser counterfactuals which allow larger edits of original examples, resulting in more natural and linguistically fluent counterfactual examples. Other semi-automated methods have been proposed to generate counterfactual examples while still relying on human input or labeling (Wu et al., 2021). To the best of our knowledge, none of these existing approaches for automatic counterfactual generation have been extended to multimodal image-text datasets.

### Image Benchmarks for Measuring Spurious Correlations

Several image datasets have been proposed as benchmarks for measuring the degree to which models have learned to rely on spurious correlations during training. _CelebA Hair Color_ (Liu et al., 2015) is a binary image classification dataset that labels whether a person depicted has a blonde hair color, which is spuriously correlated with gender. Sagawa et al. (2019) constructed the _Waterbirds_ dataset by cropping images of landbirds or seabirds onto land and sea backgrounds, resulting in a binary classification task for bird type (i.e., landbird or seabird) in the presence of spurious correlations with the background. _Colored MNIST_ (Arjovsky et al., 2019) artificially imposes colors on the MNIST handwritten digits dataset, where the color is spuriously correlated with the class label. Lynch et al. (2023) use text-to-image models to generate _Spawrious_, an image classification dataset of four dog breeds spuriously correlated with six background locations. Unlike COCO-Counterfactuals, these datasets are limited only to image classification over a small number of labels and are primarily suited for evaluating model robustness as opposed to training data augmentation. Thrush et al. (2022) introduced _Winoground_, an image-text dataset aimed at measuring visio-linguistic compositionality. Given two images and two captions which have the same words but in different order, the task is to correctly match each caption to its corresponding image. While their paired image-text examples can be viewed as counterfactuals, they focus only on edits to word order and rely on humans to create a dataset aimed specifically at evaluating compositionality. In contrast, our method automatically generates counterfactual examples with word content changes while also preserving non-causal spurious features across paired counterfactual images. _FOIL-COCO_ (Shekhar et al., 2017) contains 'foil' captions with a single change to the original MS-COCO caption to invalidate it for the accompanying image. They show that vision and language models struggle to correctly classify captions, detect the edited word, and correct the foiled caption. Our image-text counterfactuals similarly create 'foil' captions to MS-COCO captions, but go further by also creating paired images which differ only according to how the caption was edited.

### Data Augmentation with Synthetic Images Generated from Text-to-image Models

Motivated by recent advances in text-to-image diffusion models (Nichol et al., 2021; Rombach et al., 2021; Saharia et al., 2022; Ramesh et al., 2022), data augmentation with synthetically-generated images has emerged as a growing topic of interest. He et al. (2022) showed that images generated by GLIDE (Nichol et al., 2021) for specific classes in image recognition datasets can be used for training to improve performance on the corresponding image classification tasks. Trabucco et al.
(2023) perform image-to-image transformations for data augmentation using text-to-image diffusion models, observing improvements in few-shot image classification performance. Vendrow et al. (2023) represent class labels from image recognition datasets as custom tokens in the vocabulary of a text-to-image diffusion model, enabling them to generate images of objects from the original dataset under different domain shifts. While our data generation pipeline also leverages text-to-image diffusion models, our approach differs from prior work in our focus on producing minimal changes to paired image-text data in both the vision and language modalities.

## 3 COCO-Counterfactuals

We detail our data generation methodology for creating COCO-Counterfactuals (COCO-CFs), a synthetic multimodal counterfactual dataset of paired image and text captions based on the MS-COCO dataset (Lin et al., 2014). While we showcase our methodology by generating and releasing the COCO-Counterfactuals dataset, our approach can be applied to automatically construct multimodal counterfactuals for any dataset containing image captions.4 Footnote 4: Appendix B.1 details hyper-parameters and pre-trained models used to generate COCO-Counterfactuals.

### Creating Counterfactual Captions

Given an original image caption \(C_{o}\), our first task is to create a corresponding counterfactual caption \(C_{c}\) which alters a subject of \(C_{o}\) while preserving most of its original details. The altered subject represents the changed causal feature in our counterfactual example while the remaining preserved details from the original caption can be viewed as potentially spurious correlated features. To alter a subject of \(C_{o}\), we first identify all nouns using NLTK (Bird et al., 2009) as candidate words for substitution.5 For each of the \(i\in\{1,\ldots,n\}\) identified nouns, we create 10 candidate counterfactual captions by replacing only the \(i\)-th noun in \(C_{o}\) with the [MASK] token and retrieving the top-10 most probable replacements via masked language modeling (MLM)6. This produces a total of \(n\times 10\) candidate counterfactual captions, which we then filter to retain only those in which the substituted word is also a noun. Footnote 5: In this work, we focus on counterfactual captions that are derived from altering a noun from original captions. We leave the investigation of altering words of other types such as verbs and adjectives for future work. Footnote 6: We use RoBERTa-base for MLM. Our aim is to substitute nouns with alternative words that represent different subjects, and yet still maintain ontological similarity to the original noun. Hence, we use a pre-trained sentence similarity model7 to measure the similarity between each candidate counterfactual caption and the original caption \(C_{o}\), keeping only those candidates which have a sentence similarity within the range \((0.8,0.91)\). Finally, we use GPT-2 to score the perplexity of all candidates which remain after filtering and choose the candidate having the lowest perplexity as our counterfactual caption \(C_{c}\). Footnote 7: [https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)

### Generating Counterfactual Images

After creating a counterfactual caption \(C_{c}\), our next task is to generate synthetic images \(I^{s}_{o}\) and \(I^{s}_{c}\) from the original caption \(C_{o}\) and counterfactual caption \(C_{c}\) (respectively).
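Before turning to the image side, the caption pipeline of Section 3.1 can be made concrete with a minimal sketch. The model choices mirror the footnotes, but the function itself, its name, and the simplified one-token substitution handling are ours (NLTK's tokenizer and tagger data must be downloaded beforehand):

```python
import torch
from nltk import pos_tag, word_tokenize  # requires nltk 'punkt' and tagger data
from transformers import pipeline, GPT2LMHeadModel, GPT2TokenizerFast
from sentence_transformers import SentenceTransformer, util

fill_mask = pipeline("fill-mask", model="roberta-base", top_k=10)
sim_model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
gpt2_tok = GPT2TokenizerFast.from_pretrained("gpt2")

def perplexity(text):
    ids = gpt2_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = gpt2(input_ids=ids, labels=ids).loss
    return torch.exp(loss).item()

def counterfactual_caption(caption, lo=0.8, hi=0.91):
    tokens = word_tokenize(caption)
    cands = []
    for i, (word, tag) in enumerate(pos_tag(tokens)):
        if not tag.startswith("NN"):
            continue  # only nouns are candidate words for substitution
        masked = " ".join(tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:])
        for pred in fill_mask(masked):  # top-10 MLM replacements
            new = pred["token_str"].strip()
            cand_tokens = tokens[:i] + [new] + tokens[i + 1:]
            # keep the candidate only if the substituted word is still a noun
            if new.lower() == word.lower() or not pos_tag(cand_tokens)[i][1].startswith("NN"):
                continue
            cand = " ".join(cand_tokens)
            sim = util.cos_sim(sim_model.encode(caption), sim_model.encode(cand)).item()
            if lo < sim < hi:  # sentence-similarity window from the paper
                cands.append(cand)
    return min(cands, key=perplexity) if cands else None
```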
Ideally, we would like \(I^{s}_{o}\) and \(I^{s}_{c}\) to differ only in terms of the noun which was replaced in \(C_{o}\) to produce \(C_{c}\), thereby enabling the changed causal feature to be learned in the presence of other potentially spurious correlated features (i.e., the unchanged details between \(C_{o}\) and \(C_{c}\)). However, this is a challenge for existing text-to-image generation models as minor changes to a text prompt can produce significantly different images. For instance, prompting Stable Diffusion with the captions "A small child lounges with a _remote_ in his hand" and "A small child lounges with a _toy_ in his hand" may produce images that differ not only in the object that the child is holding, but also in other details such as his facial features, the manner in which he is laying, the color of his clothes, and the image background. To address this issue, Hertz et al. (2022) proposed a methodology called Prompt-to-Prompt which injects cross-attention maps during the diffusion process to control the attention between certain pixels and tokens of the prompt during denoising steps. This enables separate generations from text-to-image diffusion models to maintain many of the same image details while isolating their differences according to how the text prompts differ. An example of counterfactual image-text pairs \((C_{o},I_{o}^{s})\) and \((C_{c},I_{c}^{s})\) generated with and without Prompt-to-Prompt is shown in Figure 2, illustrating how Prompt-to-Prompt enables the principle of minimal text edits for NLP counterfactuals to be extended to image generation. Brooks et al. (2023) noted that making different changes to images may require varying the parameter \(p\) in Prompt-to-Prompt, which controls the number of denoising steps with shared attention weights. For example, changes that require more substantial structural modifications to the image may necessitate less overall similarity between the resulting images and thus fewer shared attention weights. We therefore adopt their proposed approach of over-generating 100 image pairs with Prompt-to-Prompt by randomly sampling values of the parameter \(p\sim U(0.1,0.9)\)8. The resulting 100 image pairs are filtered using CLIP (Radford et al., 2021) to ensure a minimum cosine similarity of 0.2 between the encoding of each caption and its corresponding generated image, with the best image pair \((I_{o}^{s},I_{c}^{s})\) chosen from those which remain according to the directional similarity in CLIP space (Gal et al., 2022): Footnote 8: We use the implementation from Instruct-Pix2Pix (Brooks et al., 2023). \[\text{CLIP}_{dir}=\frac{(E_{T}(C_{c})-E_{T}(C_{o}))\cdot(E_{I}(I_{c}^{s})-E_{I}(I_{o}^{s}))}{||E_{T}(C_{c})-E_{T}(C_{o})||\;||E_{I}(I_{c}^{s})-E_{I}(I_{o}^{s})||} \tag{1}\] where \(E_{T}\) and \(E_{I}\) are CLIP's text and image encoders (respectively). The CLIP\({}_{dir}\) metric measures the consistency in changes between the two images \((I_{o}^{s},I_{c}^{s})\) and their corresponding captions \((C_{o},C_{c})\). Thus, selecting images with a higher CLIP\({}_{dir}\) improves the overall quality of our generated counterfactuals via greater consistency between the alterations made in both modalities.

### Generating COCO-Counterfactuals from MS-COCO

We apply our counterfactual caption and image generation pipeline described above to create the COCO-Counterfactuals dataset.
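The selection criterion of Eq. (1) is inexpensive to compute with an off-the-shelf CLIP. A minimal sketch, with an illustrative checkpoint that is not necessarily the one used by the authors:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")  # illustrative choice
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_dir(c_o, c_c, img_o, img_c):
    """Directional similarity (Eq. 1): cosine between the caption-edit
    direction and the image-edit direction in CLIP embedding space."""
    txt = proc(text=[c_o, c_c], return_tensors="pt", padding=True)
    img = proc(images=[img_o, img_c], return_tensors="pt")
    with torch.no_grad():
        e_t = clip.get_text_features(**txt)   # rows: E_T(C_o), E_T(C_c)
        e_i = clip.get_image_features(**img)  # rows: E_I(I_o^s), E_I(I_c^s)
    d_t, d_i = e_t[1] - e_t[0], e_i[1] - e_i[0]
    return torch.nn.functional.cosine_similarity(d_t, d_i, dim=0).item()
```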
Specifically, we generate candidate counterfactual captions for 25,014 original MS-COCO captions910, keeping only the best candidate counterfactual for each original caption that meets our filtering criteria. This produced a total of 24,508 original & counterfactual caption pairs \((C_{o},C_{c})\) after filtering and selection. Our image over-generation pipeline produced 2.45 million candidate image pairs \((I_{o}^{s},I_{e}^{s})\) for these 24.5k caption pairs, of which 17,410 had at least one generated image pair which met our filtering criteria. After selection according to the CLIP\({}_{dir}\) metric, a total of 34,820 image-caption pairs remain, comprising our COCO-Counterfactuals dataset11. Footnote 9: We use the 5K validation split of the 2017 dataset from [https://cocodataset.org/#download](https://cocodataset.org/#download). Footnote 10: While we use MS-COCO in this study as the source of our original captions, one advantage of our counterfactual generation approach is that the input dataset itself does not require paired image and text data. Footnote 11: While the MS-COCO 5K validation split has 25,014 captions, COCO-Counterfactuals includes only 17,410 of them due to our filtering criteria. Thus, for a fair comparison in our experiments, hereafter we refer to this subset of the 5K validation split including only those 17,410 captions and their paired original images as the MS-COCO dataset. Figure 2: Examples of COCO-Counterfactuals generated with Prompt-to-Prompt (left) and without (right). Prompt-to-prompt enables us to extend the principle of minimal-edit text counterfactuals to the visual domain, isolating image differences to only the changed causal feature. ## 4 COCO-Counterfactuals Analysis This section aims to show that, in addressing the challenges associated with low-resource settings involving multimodal data (see Section 1), our proposed novel data generation pipeline can serve as an efficient and scalable framework to automatically create high quality multimodal counterfactual examples in COCO-Counterfactuals (COCO-CFs). Toward this goal, we first employ human evaluation to analyze COCO-CFs. We then show that COCO-CFs can be used as a challenging dataset for model evaluation on zero-shot image-text retrieval and image-text matching tasks. ### Human Evaluation of COCO-Counterfactuals We employ professional data annotators to conduct a human study on the quality of COCO-CFs. For each of the 34,820 images in COCO-CFs, we have at least one annotator choose whether the corresponding original or counterfactual caption best fits the image. Annotators can also choose "both" if both captions describe the image equally well, or "neither" if neither caption accurately describes the image. \(10\%\) of the images are labeled by 3 different individuals to estimate inter-annotator agreement, with the remaining images each labeled by a single annotator (see Appendix B.2 for additional details). Table 1 provides the percentage of images from COCO-CFs which were matched to their correct caption by the human annotators. We also report the percentage of incorrect matches (i.e., the wrong caption was picked as best describing the image) as well as the percentage of "both" and "neither" labels. Overall, 73% of images were correctly matched to their corresponding caption (see Appendix A.3 for an analysis of incorrect matches). Images generated from the counterfactual caption had a 10% greater incidence of incorrect caption selections than those generated from the original caption. 
This could be due to the constraints imposed on the counterfactual image by Prompt-to-Prompt (i.e., shared attention weights with the original image), which increases the likelihood that the generated image lacks some of the details in its corresponding caption. The Fleiss' kappa coefficient for the 10% of images labeled by three annotators was 0.74, indicating strong agreement among the annotators who participated in this study. Among those images which had label disagreement, 47.4% of the labels were correct, 27.3% were incorrect, and 18.6% selected "neither." This suggests that many of the disagreements are associated with images for which the correct caption choice is more ambiguous. While we employed human annotators to validate the quality of COCO-Counterfactuals for this analysis, our automated counterfactual generation approach does not require the use of human annotators to produce a new dataset. Indeed, our experiments described subsequently in Section 5.1 show that COCO-Counterfactuals which were labeled as incorrect by humans have no negative impact on training data augmentation.

### COCO-Counterfactuals for Model Evaluation

Motivated by prior work which has proposed using counterfactuals as challenging test sets in NLP (Kaushik et al., 2019; Gardner et al., 2020), we further investigate whether our COCO-CFs can serve a similar purpose for state-of-the-art multimodal vision-language models such as CLIP, Flava (Singh et al., 2022), BridgeTower (Xu et al., 2022) and ViLT (Kim et al., 2021) for the zero-shot image-text retrieval and image-text matching tasks. We employed the HuggingFace implementations of these models in our experiments (see Appendix A.6 for more detail).

\begin{table} \begin{tabular}{l l l l l} \hline \hline Image set & Correct & Incorrect & Neither & Both \\ \hline Generated from original caption & 79.10\% & 8.16\% & 10.18\% & 2.56\% \\ Generated from counterfactual caption & 67.27\% & 18.43\% & 10.74\% & 3.56\% \\ All images & 73.18\% & 13.30\% & 10.46\% & 3.06\% \\ \hline \hline \end{tabular} \end{table} Table 1: Human evaluation results for COCO-Counterfactuals

#### 4.2.1 Zero-shot Image-text Retrieval

We evaluate the zero-shot image-text retrieval (ITR) performance of pre-trained Flava and BridgeTower models on COCO-CFs as well as _human-evaluated COCO-CFs_, which consists of only image-text pairs that were correctly matched in our human evaluation study (Section 4.1). Since pre-trained CLIP was employed in our counterfactual image generation process (see Section 3.2), it is not suitable for the zero-shot ITR setting. Thus, we only report its evaluation in Appendix A.6 for completeness. For baselines, we evaluate the ITR performance of these models on the MS-COCO dataset. Table 2 reports ITR performance (i.e., Recall at 1, 5, and 10) on COCO-CFs and human-evaluated COCO-CFs for pre-trained BridgeTower and Flava models. The percentages enclosed within parentheses indicate the change in performance of a model on an evaluated dataset versus the performance of that model on MS-COCO (baseline). We observe that the performance of BridgeTower and Flava decreases significantly (up to \(51\%\) and \(57\%\), respectively) compared to the baseline's performance on both COCO-CFs and human-evaluated COCO-CFs. These results demonstrate that COCO-Counterfactuals can serve as a challenging test set for SOTA multimodal vision-language models.
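Recall@k values like those in Table 2 follow from ranking all candidates for each query; a generic sketch for the dual-encoder case (the function and its names are ours):

```python
import torch

def text_to_image_recall(text_emb, image_emb, ks=(1, 5, 10)):
    """Recall@k for text->image retrieval over N aligned pairs.

    text_emb, image_emb: (N, d) L2-normalized embedding tensors,
    where row i of each side encodes the i-th ground-truth pair.
    """
    sims = text_emb @ image_emb.T                 # (N, N) cosine scores
    order = sims.argsort(dim=1, descending=True)  # ranked image indices per caption
    gold = torch.arange(sims.size(0)).unsqueeze(1)
    rank = (order == gold).float().argmax(dim=1)  # rank of the true image
    return {k: (rank < k).float().mean().item() for k in ks}
```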
\begin{table} \begin{tabular}{l l c c c c c c} \hline \hline \multirow{2}{*}{**HuggingFace Pre-trained Models**} & \multirow{2}{*}{**Evaluated Dataset**} & \multicolumn{3}{c}{**Text Retrieval**} & \multicolumn{3}{c}{**Image Retrieval**} \\ \cline{3-8} & & **R@1** & **R@5** & **R@10** & **R@1** & **R@5** & **R@10** \\ \hline \multirow{2}{*}{bridgetower-large-itm-mlm} & COCO-CFs & 21.72 (**↓51\%**) & 46.94 (↓35\%) & 58.65 (↓29\%) & 17.93 (↓47\%) & 38.94 (↓35\%) & 49.95 (↓30\%) \\ & human-evaluated COCO-CFs & 26.36 (**↓41\%**) & 54.1 (↓25\%) & 66.31 (↓20\%) & 21.44 (↓37\%) & 45 (↓25\%) & 56.39 (↓21\%) \\ \hline \multirow{2}{*}{flava-full} & COCO-CFs & 21.28 (**↓47\%**) & 46.64 (↓41\%) & 58.87 (↓34\%) & 37.76 (↓16\%) & 66.15 (↓12\%) & 75.83 (↓10\%) \\ & human-evaluated COCO-CFs & 26.1 (**↓35\%**) & 54.23 (↓31\%) & 66.83 (↓25\%) & 43.44 (↓3\%) & 72.35 (↓4\%) & 81.44 (↓3\%) \\ \hline \hline \end{tabular} \end{table} Table 2: Image-text retrieval performance on COCO-CFs and human-evaluated COCO-CFs for BridgeTower and Flava models. Largest drops of performance against the baseline are in boldface.

#### 4.2.2 Image-text Matching

Typically, during pre-training for image-text matching (ITM), multimodal models learn to differentiate actual image-text pairs from alternative images or captions which are randomly sampled from a dataset. By design, our COCO-CFs have the potential to make this task significantly more challenging by requiring models to also differentiate between minimally-edited image or text candidates. We measure the magnitude of this increased difficulty by comparing the difference in ITM scores between actual image-text pairs and their corresponding counterfactual or randomly sampled alternatives. Let \((C_{o},I_{o})\) denote an original image-text pair from MS-COCO, \(I_{o}^{s}\) denote our synthetically-generated image corresponding to \(C_{o}\), and \((C_{c},I_{c}^{s})\) denote our corresponding synthetically-generated counterfactual image-text pair in COCO-Counterfactuals. We further denote \((C_{r},I_{r})\) as a different original image-text pair randomly sampled from MS-COCO such that \(I_{o}\neq I_{r}\). For a given pre-trained multimodal model, we compute the following metrics using its ITM scoring function \(\mathcal{G}\):
\[\text{IR}_{r}=\mathcal{G}(C_{r},I_{r})-\mathcal{G}(C_{r},I_{o})\qquad\text{IR}_{c}=\mathcal{G}(C_{c},I_{c}^{s})-\mathcal{G}(C_{c},I_{o}^{s})\]
\[\text{TR}_{r}=\mathcal{G}(C_{r},I_{r})-\mathcal{G}(C_{o},I_{r})\qquad\text{TR}_{c}=\mathcal{G}(C_{c},I_{c}^{s})-\mathcal{G}(C_{o},I_{c}^{s})\]
\(\text{IR}_{r}\) and \(\text{TR}_{r}\) scores can be viewed as measuring the confidence of a model's image or text retrieval (respectively) over two real image-text pairs from MS-COCO. Similarly, \(\text{IR}_{c}\) and \(\text{TR}_{c}\) scores measure image or text retrieval confidence, but using matched image-text pairs from the COCO-Counterfactuals dataset.

Figure 3: ITM score differences computed for existing multimodal models on the COCO-Counterfactuals dataset.
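Spelled out, the four quantities are plain differences of ITM scores. Given any scorer \(\mathcal{G}\), e.g. a pre-trained model's ITM head wrapped in a function, they can be computed as follows (function and argument names are ours):

```python
def itm_score_gaps(G, C_o, I_o, C_c, I_c_s, I_o_s, C_r, I_r):
    """The four retrieval-confidence gaps, given any ITM scorer G(caption, image)."""
    IR_r = G(C_r, I_r) - G(C_r, I_o)      # vs. a random real image
    TR_r = G(C_r, I_r) - G(C_o, I_r)      # vs. a random real caption
    IR_c = G(C_c, I_c_s) - G(C_c, I_o_s)  # vs. the minimally-edited image
    TR_c = G(C_c, I_c_s) - G(C_o, I_c_s)  # vs. the minimally-edited caption
    return IR_r, TR_r, IR_c, TR_c
```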
For all metrics, values greater than zero indicate that a model scores the correct image-text pair as more similar than its random or counterfactual alternative. Larger positive values can be viewed as indicating greater confidence in the model's correct discernment between the alternatives. Figure 3 plots the distribution of these four metrics for three pre-trained multimodal models: CLIP, BridgeTower, and ViLT. All three models exhibit a significant negative distribution shift when presented with counterfactual alternatives rather than random alternatives, demonstrating the increased difficulty of COCO-Counterfactuals for existing models. A significant number of COCO-Counterfactuals are also incorrectly scored (i.e., have an ITM score difference less than zero) by all three models. Even in cases where the counterfactual alternatives can be correctly discerned, we posit that the much smaller values of \(\text{IR}_{c}\) and \(\text{TR}_{c}\) may improve the efficiency of training through the increased difficulty of the ITM task.

#### 4.2.3 Discussion

When used as a test set, COCO-Counterfactuals by design evaluate the robustness of models to minimal changes in paired image-text data. Table 2 and Figure 3 show that existing models perform significantly worse when evaluated on COCO-Counterfactuals. Additionally, we find that training these same models on COCO-Counterfactuals produces an average relative improvement of 24.3% in image-text retrieval performance on withheld counterfactual examples (see Table 5 of Appendix A.1). These results point to the usefulness of our dataset for evaluating and improving the robustness of multimodal models to counterfactual changes.

## 5 COCO-Counterfactuals for Training Data Augmentation

This section aims to evaluate whether COCO-CFs can serve as an alternative to real data for training data augmentation in low-resource scenarios. We train a fully unfrozen pre-trained CLIP model with its contrastive loss using various combinations of real data from the MS-COCO and COCO-CFs datasets (see Appendix B.3 for additional training details). In order to investigate the robustness of models trained on COCO-CFs, we evaluate them on OOD datasets for image-text retrieval and image recognition. For baselines, we report the performance of pre-trained CLIP (i.e., without any additional training) as well as a CLIP model which has been additionally trained using only real data from MS-COCO. We repeat each of our training experiments with 25 different random seeds and report both the mean and standard deviation of performance measured across all random seeds. We also validate the statistical significance of performance improvements obtained by models trained on COCO-CFs using one-tailed t-tests.

### Image-text Retrieval

To evaluate OOD performance on the image-text retrieval task that CLIP was trained for, we use the 1K test set of the Flickr30k dataset (Young et al., 2014). Table 3 reports the zero-shot performance of the baselines as well as CLIP trained with varying amounts of the original MS-COCO and COCO-CFs datasets. We observe that all CLIP models trained with COCO-Counterfactuals outperform pre-trained CLIP by an average of 5 points, based on the mean performance across text and image retrieval settings. Additionally, our best model trained with 20,894 COCO-Counterfactuals provides statistically significant improvements relative to training only on the real MS-COCO dataset across all settings.
We also found that COCO-Counterfactuals improve in-domain performance on the MS-COCO test set, which we detail in Appendix A.5. To investigate the potential impact of COCO-Counterfactuals which were labeled incorrectly by human annotators, we repeated our training data augmentation experiments using only image-text pairs which were correctly matched in our human evaluation study (Section 4.1). Overall, we found that excluding these incorrectly-labeled COCO-Counterfactuals from training data augmentation had a negligible impact on performance (see Appendix A.4). This suggests that training data augmentation is robust to noise introduced by synthetic data, and that the 26.82% of incorrectly-labeled COCO-Counterfactuals do not pose an issue for data augmentation applications. While certain use cases which require a high degree of confidence in the accuracy of generated counterfactuals may benefit from the use of human validation, we believe that these results demonstrate how our approach can be used for fully automated training data augmentation without human annotation.

### Image Recognition

Despite being trained for image-text retrieval, CLIP has exhibited impressive performance at zero-shot image recognition. Using the same approach as Radford et al. (2021) for the image recognition task (i.e., forming a sentence "A photo of a \(\{c\}\)" for each class label \(c\) to obtain image-text matching scores), we evaluate whether CLIP models trained on COCO-Counterfactuals exhibit competitive OOD performance improvements relative to the baselines in this zero-shot classification setting. Using the same CLIP models trained on varying amounts of MS-COCO and COCO-CFs, Table 4 reports their zero-shot classification accuracy on six image recognition datasets. We observe that training with an approximately 50-50 split of MS-COCO & COCO-CFs provides the best overall performance, offering improvements over pre-trained CLIP (without any additional training) on all datasets except Food101 and outperforming training with only MS-COCO on most datasets (see Appendix A.2 for additional analysis of performance differences).

### Discussion

Recent work investigating the suitability of synthetic training data for image recognition tasks has found that synthetic image data is much less efficient than real data, requiring 5x more synthetic training samples to achieve similar performance as models trained on real data (He et al., 2022). In contrast, our results show that training data augmentation with COCO-Counterfactuals is at least as efficient (Table 3) and sometimes more efficient (Table 4) than data augmentation with an identical amount of real data (\(|D_{\text{train}}|=13,928\)). This finding suggests that our approach could be particularly valuable in low-resource settings where paired image-text data is scarce.
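The zero-shot classification protocol is the standard CLIP recipe; a minimal sketch, with an illustrative checkpoint that is not necessarily the variant used in the paper:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")  # illustrative choice
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def zero_shot_classify(images, class_names):
    """Match each image against 'A photo of a {c}' prompts and pick the best class."""
    prompts = [f"A photo of a {c}" for c in class_names]
    inputs = proc(text=prompts, images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = clip(**inputs).logits_per_image  # (num_images, num_classes)
    return logits.argmax(dim=-1)                  # predicted class index per image
```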
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline **Training dataset** & \(|D_{\text{train}}|\) & \% CFs & **CIFAR10** & **CIFAR100** & **Food101** & **Caltech101** & **Caltech256** & **ImageNet** & **Mean** \\ \hline None (pre-trained CLIP) & 0 & 0\% & 88.8 & 64.17 & **84.17** & 90.32 & 83.43 & 59.25 & 78.36 \\ \hline MS-COCO & 13,928 & 0\% & 89.12\({}_{0.3}\) & 63.89\({}_{0.4}\) & 82.67\({}_{0.2}\) & 92.77\({}_{0.1}\) & 85.05\({}_{0.5}\) & 59.55\({}_{0.5}\) & 78.85\({}_{0.2}\) \\ MS-COCO + COCO-CFs & 13,928 & 50\% & **89.45\({}_{0.3}\)** & 66.26\({}_{0.4}\) & 83.13\({}_{0.2}\) & 92.63\({}_{0.1}\) & **85.21\({}_{0.1}\)** & **59.66\({}_{0.2}\)** & **79.39\({}_{0.2}\)** \\ \hline MS-COCO + COCO-CFs & 34,820 & 60\% & 89.16\({}_{0.3}\) & **66.88\({}_{0.4}\)** & 82.12\({}_{0.2}\) & **92.87\({}_{0.1}\)** & 84.95\({}_{0.2}\) & 59.22\({}_{0.3}\) & 79.20\({}_{0.2}\) \\ MS-COCO + COCO-CFs & 41,784 & 67\% & 88.51\({}_{0.5}\) & 65.97\({}_{0.5}\) & 82.06\({}_{0.2}\) & 92.77\({}_{0.1}\) & 84.59\({}_{0.2}\) & 58.88\({}_{0.2}\) & 78.80\({}_{0.2}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Zero-shot classification accuracy of pre-trained CLIP and CLIP models trained on varying amounts of data from MS-COCO and COCO-CFs datasets. All other settings are identical to Table 3.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & & \multicolumn{3}{c}{**Text Retrieval**} & \multicolumn{3}{c}{**Image Retrieval**} & \\ \cline{4-9} **Training dataset** & \(|D_{\text{train}}|\) & \% CFs & **R@1** & **R@5** & **R@10** & **R@1** & **R@5** & **R@10** & **Mean** \\ \hline None (pre-trained CLIP) & 0 & 0\% & 67.1 & 89 & 93.8 & 69.4 & 90.6 & 94.9 & 84.13 \\ \hline MS-COCO & 13,928 & 0\% & 77.90\({}_{0.4}\) & 93.79\({}_{0.2}\) & 97.11\({}_{0.1}\) & 75.14\({}_{0.4}\) & 93.72\({}_{0.2}\) & 96.69\({}_{0.2}\) & 89.06\({}_{0.1}\) \\ MS-COCO + COCO-CFs & 13,928 & 50\% & 76.66\({}_{0.5}\) & 94.53\({}_{0.3}\) & 96.84\({}_{0.2}\) & 75.75\({}_{0.4}\) & 93.60\({}_{0.2}\) & **96.96\({}_{0.2}\)** & 89.05\({}_{0.1}\) \\ \hline MS-COCO + COCO-CFs & 34,820 & 60\% & **78.28\({}_{0.4}\)** & **94.72\({}_{0.3}\)** & **97.27\({}_{0.2}\)** & 76.13\({}_{0.5}\) & 93.85\({}_{0.2}\) & 96.91\({}_{0.2}\) & **89.53\({}_{0.2}\)** \\ MS-COCO + COCO-CFs & 41,784 & 67\% & 77.75\({}_{0.5}\) & 94.51\({}_{0.3}\) & 97.03\({}_{0.2}\) & **76.38\({}_{0.3}\)** & **94.01\({}_{0.2}\)** & 96.79\({}_{0.2}\) & 89.41\({}_{0.1}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Image-text retrieval performance on the OOD Flickr30k 1K test set for pre-trained CLIP and CLIP models trained on varying amounts of data from MS-COCO and COCO-CFs datasets. \(|D_{\text{train}}|\) indicates the total number of image-text pairs used for training, while % CFs indicates the percentage of those image-text pairs which were sampled from COCO-CFs. Results report mean over 25 different random seeds, with standard deviation as a subscript. Best results are in boldface. Results which use COCO-CFs are underlined when a one-tailed t-test indicates that their improvement over training only on MS-COCO is statistically significant (\(p\leq 0.05\))

Consistent with prior work on training data augmentation with NLP counterfactuals (Howard et al., 2022; Joshi and He, 2022), Tables 3 and 4 show that improvements in OOD performance with increasing amounts of counterfactual examples reach a saturation point, beyond which additional data augmentation does not lead to further improvements.
For image-text retrieval on Flickr30k (Table 3), this saturation point is reached with a 40/60% mixture of MS-COCO/COCO-Counterfactuals in the training dataset. In contrast, Table 4 shows that the saturation point for the OOD image recognition datasets is reached with a 50/50% split based on the mean of the six datasets. These results suggest that the optimal mixture of real examples and synthetically generated counterfactual examples may differ depending on the evaluation task and dataset. While training data augmentation with COCO-Counterfactuals produces statistically significant performance improvements relative to training with only real data, the overall magnitude of these improvements is limited and varies by evaluation setting. COCO-Counterfactuals produce the largest improvements on zero-shot image recognition tasks, where their overall mean improvement over pre-trained CLIP is twice as large as that achieved by training on an equivalent amount of real data from MS-COCO. However, OOD generalization performance varies by dataset, which, further analysis suggests, is related to domain gaps between altered subjects in COCO-Counterfactuals and the domain of the evaluation dataset (see Appendix A.2 for details). ## 6 Conclusion We proposed an automated data generation methodology for creating counterfactual examples from image-text pairs to address the challenge of low-resource settings involving multimodal data. This approach was used to create COCO-Counterfactuals (COCO-CFs), a high-quality synthetic dataset of paired image-text counterfactuals derived from MS-COCO captions. COCO-CFs are challenging for existing pre-trained multimodal models and significantly increase the difficulty of the zero-shot image-text retrieval and image-text matching tasks. Our experiments demonstrate that augmenting training data with COCO-CFs improves OOD generalization on multiple downstream tasks. In this work, we focused on the creation of task-agnostic counterfactual examples. A promising direction for future research is the adaptation of our approach to produce task-specific counterfactuals. For example, in the case of image recognition, the counterfactual changes could be limited to a targeted label distribution to produce counterfactual examples more tailored to the end task. Alternatively, task-specific model failures or spurious correlations could be diagnosed and used as a basis for determining which counterfactual changes to consider when creating the counterfactual captions. We believe that such approaches have the potential to produce counterfactuals which are more targeted at improving specific model deficiencies. Another opportunity for future work is larger-scale automatic generation of counterfactual examples to enable full counterfactual pre-training of multimodal models. Additionally, we believe that extending our image-text counterfactuals to the video domain could be a promising path towards improving video transformers through counterfactual data augmentation. Limitations & Ethical Concerns: Motivated by the desire to produce minimal-edit counterfactuals, we only considered changes to nouns. This is a common strategy for NLP counterfactuals (see Appendix B.1.1 for discussion), but alternative generation strategies such as controlled text decoding (Howard et al., 2022) could be used to enable a larger range of counterfactual changes, in addition to alterations of adjectives or verbs. We leave investigation of these directions to future studies. 
Due to a limited compute budget, we only explored generating COCO-Counterfactuals using Stable Diffusion. Additionally, our training data augmentation experiments were limited to a single model (CLIP). It is possible that other text-to-image generation models may exhibit better performance for generating counterfactual image-text data. Likewise, the benefits of counterfactual data augmentation may vary for different multimodal vision-language models. Despite the impressive recent improvements in text-to-image generation capabilities, models such as Stable Diffusion have well-known limitations that should be considered when utilizing datasets which are derived from them (see Appendix C.5 for a detailed discussion). We do not foresee significant risks of security threats or human rights violations in our work. However, the automated nature of our image generation process may introduce the possibility of our COCO-Counterfactuals dataset containing images that some individuals may consider inappropriate or offensive.
2309.10130
The effect of gravitational decoupling on constraining the mass and radius for the secondary component of GW190814 and other self-bound strange stars in f(Q)-gravity theory
Inspired by the conundrum of the gravitational event GW190814, which brings to light the coalescence of a 23 $M_{\odot}$ black hole with a yet to be determined secondary component, we look to modelling compact objects within the framework of $f(\mathcal{Q})$ gravity by employing the method of gravitational decoupling. We impose a quadratic equation of state (EOS) for the interior matter distribution which in the appropriate limit reduces to the MIT bag model. The governing field equations arising from gravitational decoupling bifurcate into the $\rho=\theta^0_0$ and $p_r=\theta^1_1$ sectors leading to two distinct classes of solutions. Both families of solutions are subjected to rigorous tests qualifying them to describe a plethora of compact objects including neutron stars, strange stars and the possible progenitor of the secondary component of GW190814. Using observational data of mass-radius relations for compact objects LMC X-4, Cen X-3, PSR J1614-2230 and PSR J0740+6620 we show that it is possible to generate stellar masses and radii beyond 2.0 $M_{\odot}$ for neutron stars. Our findings reveal that the most suitable and versatile model in this framework is the quadratic EOS, which accounts for a range of low mass stars as well as typical stellar candidates describing the secondary component of GW190814.
S. K. Maurya, K. N. Singh, M. Govender, G. Mustafa, S. Ray
2023-09-18T20:11:08Z
http://arxiv.org/abs/2309.10130v1
The effect of gravitational decoupling on constraining the mass and radius for the secondary component of GW190814 and other self-bound strange stars in \(f(\mathcal{Q})\)-gravity theory ###### Abstract Inspired by the conundrum of the gravitational event GW190814, which brings to light the coalescence of a 23 \(M_{\odot}\) black hole with a yet to be determined secondary component, we look to modelling compact objects within the framework of \(f(\mathcal{Q})\) gravity by employing the method of gravitational decoupling. We impose a quadratic equation of state (EOS) for the interior matter distribution which in the appropriate limit reduces to the MIT bag model. The governing field equations arising from gravitational decoupling bifurcate into the \(\rho=\theta_{0}^{0}\) and \(p_{r}=\theta_{1}^{1}\) sectors leading to two distinct classes of solutions. Both families of solutions are subjected to rigorous tests qualifying them to describe a plethora of compact objects including neutron stars, strange stars and the possible progenitor of the secondary component of GW190814. Using observational data of mass-radius relations for compact objects LMC X-4, Cen X-3, PSR J1614-2230 and PSR J0740+6620 we show that it is possible to generate stellar masses and radii beyond 2.0 \(M_{\odot}\) for neutron stars. Our findings reveal that the most suitable and versatile model in this framework is the quadratic EOS, which accounts for a range of low mass stars as well as typical stellar candidates describing the secondary component of GW190814. Neutron stars (1108); Compact objects (288); Theoretical models (2107) ## 1 Introduction Over the years, compact objects such as neutron stars, pulsars and strange stars have served as cosmic laboratories for determining the nature of matter at ultra-high densities. While Einstein's classical theory of general relativity can account for many observed features such as compactness, mass-radius relations, and surface redshifts of these objects, it has fallen short in accounting for peculiar observations of neutron stars with masses exceeding \(M=2M_{\odot}\). The LIGO Scientific and Virgo Collaboration (LVC) observations of gravitational waves such as the GW190814 and GW170817 events have also cast light on the shortcomings of classical general relativity in accounting for supermassive black holes (BH) (Abbott et al., 2016, 2017, 2020). In particular, the gravitational wave event GW190814 suggests that the source of the signals originated in a compact binary coalescence of a 22.2 to 24.3\(M_{\odot}\) black hole and a compact object having a mass in the range of 2.50 to 2.67\(M_{\odot}\). The GW170817 event of August 17, 2017, is thought to be the merger of two neutron stars with masses in the range 0.86 - 2.26 \(M_{\odot}\). There have been various proposals put forward to account for the observed signals of gravitational events, including the nature of matter (equation of state) of the stars making up the binary duo and modified gravity theories. The GW170817 event and its electromagnetic counterparts provided researchers with a new tool to propose more exotic equations of state for the neutron stars (NS) involved in this merger. The electromagnetic signal emanating from GW170817 was composed of two parts: a short gamma-ray burst GRB170817A with a delay of approximately 2 seconds with respect to the GW signal and a kilonova, AT2017gfo, peaking in luminosity a few days after the merger. 
These observed delays have led researchers to speculate on the nature of the binary components. For example, the delay in the gamma-ray burst has prompted some to believe that the remnant arising from the merger was most likely a hypermassive star that collapsed to a black hole within a few milliseconds (Ruiz et al., 2018). Similarly, the kilonova signal points to a not too soft EOS (Radice et al., 2018). In order to account for the possible ranges for the tidal deformability, \(400\leq\Lambda\leq 800\), and the radius of the \(1.5M_{\odot}\) component, which lies within \(11.8\,\mathrm{km}\leq R_{1.5}\leq 13.1\,\mathrm{km}\), the GW170817 event has been modeled as the merger of a hadronic star with a strange star (Burgio et al., 2018). Furthermore, to account for small radii and not too small \(\Lambda\) it has been proposed that the stellar matter undergoes strong phase transitions at supranuclear densities giving rise to quark matter. The observed signal associated with GW190814 has led to speculation about the nature of the secondary component due to several factors, including the mass of \(2.59^{+0.08}_{-0.09}M_{\odot}\), the lack of significant tidal deformations and no accompanying electromagnetic signals. This points to the presence of either a neutron star (NS) or a BH. We are thus faced with a conundrum where the secondary component is either the heaviest NS or lightest BH ever observed in a double compact-object system. On the other hand, the GW190814 event has been modeled as a second-generation merger, i.e., a triple hierarchical system giving rise to a remnant from a primary binary neutron star merger which was then captured by the 23 \(M_{\odot}\) black hole (Lu et al., 2021). Alternatively, the double merger scenario can be the result of a tight NS-NS scattering off a massive BH. These scenarios all call upon the very nature of matter (EOS) of the components making up the binary or triple hierarchical system. Using covariant density functional theory, Fattoyev et al. (2020) explored the possible 2.6 \(M_{\odot}\) stable bounded configuration while ensuring that existing constraints on the composition of neutron stars and the ground-state properties of finite nuclei are preserved. The energy density functionals (EDF) used in their investigation predicted high pressures prevalent in stellar matter. A softening of the EOS accomplished by the addition of more interactions at high densities does not eradicate the problem with the internal pressure of the compact star. Their findings point to the secondary component of the GW190814 event being most likely a BH. In a later study, Tews et al. (2021) employed the NMMA framework to ascertain whether GW190814 was the result of an NS-BH or BH-BH merger. Their starting point was to use a set of 5000 EOSs that extended beyond 1.5 times the nuclear saturation density. Their conclusion was that GW190814 arose from a binary black hole merger with a probability of \(>99.9\%\). In an attempt to provide a theoretical basis for the existence of a 2.6 \(M_{\odot}\) neutron star, Godzieba et al. (2022) employed a Markov Chain Monte Carlo approach to generate about two million phenomenological equations of state with and without first-order phase transitions for the interior matter distribution. The only impositions in their study were the requirements of GR and causality. They observed that if the secondary component of GW190814 was indeed a nonrotating neutron star, then this constrains the EOS to densities of the order of \(3\rho_{nuc}\). 
It is still possible to have a neutron star with a mass greater than 2.5 \(M_{\odot}\) and an \(R_{1.4}\) of 11.75 km. In a more recent study, Mohanty et al. (2023) employed a total of 60 EOSs to investigate the impact of anisotropy in neutron stars. Using a numerical approach they showed that it is possible to generate neutron star masses greater than 2.0 \(M_{\odot}\) within the GR framework by varying the degree of anisotropy within the stellar core. Researchers have also adopted alternative explorations within modified theories of gravity to account for the peculiar masses observed in gravitational events. Using a set of astronomical data and associated constraints on mass-radius relations, Tangphati et al. (2022) modeled the secondary component of the GW190814 event as a quark star (QS) obeying a color-flavor-locked (CFL) EOS within the framework of \(f(R,T)\) gravity. They showed that the curvature coupling constant arising in \(f(R,T)\) theory impacts the maximum allowable masses and radii for stable compact objects to exist. By varying the bag constant and the color superconducting gap energy, masses up to 2.86 \(M_{\odot}\) with a radius \(R=12.43\)km were possible. Using Einstein-Gauss-Bonnet (EGB) gravity, Tangphati et al. modeled anisotropic quark stars satisfying a CFL EOS (Tangphati et al., 2021). They obtained masses up to 2.83 \(M_{\odot}\) which exceeded the observed masses of pulsars. Such unconventional masses were possible through the variation of the EGB coupling constant. Further work in \(R^{2}\)-gravity coupled with a CFL EOS has led to predicted masses of the order of 1.97 \(M_{\odot}\) which falls in the observed range of the static neutron star PSR J1614-2230. In their attempt to model the mass of the secondary component inferred from the GW190814 event, Astashenok et al. imposed a stiff EOS on the stellar fluid making up the compact object within the \(f(R)\) framework (Astashenok et al., 2021). They concluded that the secondary component could be an NS, BH or rapidly rotating NS and ruled out the possibility of it being a strange star. In a separate work, Astashenok et al. claimed that it was possible to have supermassive neutron stars with masses in the region of 3 \(M_{\odot}\) provided that their spin was nonzero (Astashenok et al., 2020). These stars were modeled within the context of \(f(R)\) gravity where \(f(R)=R+\alpha R^{2}\) and the quadratic contribution arises in the strong gravitational regime. In recent work on the modeling of compact objects the so-called \(f(\mathcal{Q})\) gravity was employed with very interesting results, especially with regard to the upper bound on the mass limit of these bounded configurations. The study of anisotropic compact objects in which the non-metricity \(\mathcal{Q}\) drives gravitational interactions in the presence of a quintessence field was carried out by Mandal et al. (2022). By choosing an exponential form for \(f(\mathcal{Q})\), they demonstrated that their models described physically realizable compact stellar objects such as HerX-1, SAXJ1808.4-3658, and 4U1820-30. In more recent work, Maurya et al. assumed that the interior matter distribution of a compact object obeyed an MIT bag EOS, with the stellar fluid being composed of an anisotropic fluid originating from the superposition of two solutions via the Complete Geometric Deformation (CGD) (Maurya et al., 2022). 
They showed that contributions from the nonmetricity and the decoupling parameter tend to stabilise mass configurations beyond 2.5 \(M_{\odot}\). On the observational front, LIGO and advanced Virgo observations of gravitational wave events have given us a glimpse into the possible nature of the sources of these signals. With the Laser Interferometer Space Antenna (LISA) and the Einstein Telescope (ET), researchers hope to probe deeper for new physics during the merger of binary neutron stars. The increased resolution of these probes will enable researchers to gain a better understanding of the physics at play within the core of these massive stars. While the first observations from LISA are expected around 2030, and for ET a little later, researchers are currently generating simulated catalogs of standard siren events for LIGO-Virgo, LISA and ET. Utilising these mock catalogues and a simple parametrization for the nonmetricity function \(f(\mathcal{Q})\), which replicates a \(\Lambda\)CDM cosmological background, Ferreira et al. (2022) were able to calculate redshifts and \(\Omega_{m}\) and compare their results to SNIa data. The idea is to find signatures or tell-tale signs of nonmetricity in observations of gravitational events. Motivated by the work of Godzieba et al. (2022) and Tews et al. (2021), amongst others, in which they utilized a wide spectrum of EOSs to simulate the matter configurations for the secondary components of GW190814 and GW170817, we employ a quadratic EOS within the \(f(\mathcal{Q})\) gravity and MGD framework to model compact objects. We pay particular attention to the contributions of the quadratic term to observed mass-radius limits of neutron stars and pulsars, which help constrain the free parameters in our model. The paper is organized as follows: In Section 2, model equations for \(f(\mathcal{Q})\) gravity with an extra source are provided. In Section 3, the extended gravitationally decoupled solution in \(f(\mathcal{Q})\) gravity is studied, along with mimicking of the density constraint (i.e., \(\rho=\theta_{0}^{0}\) in 3.1) and mimicking of the pressure constraint (i.e., \(p_{r}=\theta_{1}^{1}\) in 3.2). In Section 4, the matching conditions for the astrophysical system in connection to the exterior spacetime have been elaborated. The physical analysis of completely deformed strange star models and the astrophysical implications of the problem are presented in Section 5, where the regular behavior of strange star models is discussed under the cases (i) for solution \(\rho=\theta_{0}^{0}\) (in 5.1.1) and (ii) for solution \(p_{r}=\theta_{1}^{1}\) (in 5.1.2), whereas we have provided the constraining upper limit of maximum mass for strange stars via M-R diagrams under the cases (i) linear EOS with constancy in bag constant, fixed decoupling parameter and varying EOS parameter (in 5.2.1), (ii) linear EOS with constancy in bag constant, fixed EOS parameter and varying decoupling constant (in 5.2.2), (iii) quadratic EOS with fixed bag constant, fixed decoupling constant with varying quadratic EOS parameter (in 5.2.3), and (iv) quadratic EOS with fixed bag constant, fixed quadratic EOS parameter with constant versus varying decoupling constant for mimicking of radial pressure, \(p_{r}=\theta_{1}^{1}\) (in 5.2.4). We have discussed the energy exchange between the fluid distributions for \(\hat{T}_{ij}\) and \(\theta_{ij}\) under the cases (i) for solution \(\rho=\theta_{0}^{0}\) (in 6.0.1) and (ii) for solution \(p_{r}=\theta_{1}^{1}\) (in 6.0.2). 
In Section 7 the comparative study of models arising in GR, GR+CGD, \(f(\mathcal{Q})\) and \(f(\mathcal{Q})+\)CGD gravity has been presented, while a few concluding remarks are discussed in Section 8. ## 2 Model equations for \(f(\mathcal{Q})\) gravity with extra source Let us now introduce \(f(\mathcal{Q})\) gravity and hence gravitationally decoupled systems in terms of the symmetric teleparallel paradigm. The first essential point is that in \(f(\mathcal{Q})\) theory the gravitational interaction is triggered by the non-metric scalar \(\mathcal{Q}\). Therefore, it is always possible that, using a second Lagrangian \(\mathcal{L}_{\theta}\) for a different source \(\theta_{ij}\), one may express the modified action of \(f(\mathcal{Q})\) gravity for gravitationally decoupled systems in the following way: \[\mathcal{S}=\int \left(\frac{1}{2}\,f(\mathcal{Q})+\lambda_{k}^{\theta ij}R_{ \theta ij}^{k}+\lambda_{k}^{ij}T_{ij}^{k}+\mathcal{L}_{m}\right)\sqrt{-g}\ d^{4}x\] \[+\alpha\int\mathcal{L}_{\theta}\,\sqrt{-g}\ d^{4}x. \tag{1}\] In the above Eq. (1), \(\mathcal{L}_{m}\) is the matter Lagrangian density, \(g\) is the determinant of the metric tensor (i.e., \(g=|g_{ij}|\)), \(\alpha\) represents a decoupling constant, and \(\lambda_{k}^{\theta ij}\) defines the Lagrange multipliers. However, within the context of the \(f(\mathcal{Q})\) gravity, the metric tensor \(g_{ij}\) and the connection \(\Gamma^{k}_{\ ij}\) are considered individually, and the nonmetricity tensor associated with the connection can be expressed as follows: \[Q_{kij}=\nabla_{{}_{k}}\,g_{ij}=\partial_{k}\,g_{ij}-\Gamma^{l}_{\ ki}\,g_{lj}- \Gamma^{l}_{\ kj}\,g_{il}, \tag{2}\] where \(\nabla_{k}\) defines the covariant derivative and \(\Gamma^{k}_{\ ij}\) is referred to as the affine connection. The full affine connection is defined as \[\Gamma^{k}_{ij}=\{\,{}^{k}_{i\ j}\}+K^{k}_{\ ij}+L^{k}_{\ ij}, \tag{3}\] where \(\{\,{}^{k}_{i\ j}\}\), \(K^{k}_{\ ij}\) and \(L^{k}_{\ ij}\) define the Levi-Civita connection, contortion tensor and disformation tensor respectively, which are described as: \[\{\,{}^{k}_{i\ j}\}=\frac{1}{2}g^{kl}\left(\partial_{i}g_{lj}+ \partial_{j}g_{li}-\partial_{l}g_{ij}\right),\] \[L^{k}_{\ ij}=\frac{1}{2}Q^{k}_{\ ij}-Q_{(i\ \ j)}^{\ \ \ k},\quad K^{k}_{\ ij}=\frac{1}{2}T^{k}_{\ ij}+T_{(i\ \ j)}^{\ \ k}, \tag{4}\] where \(T^{k}_{\ i\ j}\) defines the anti-symmetric part of the affine connection, i.e., \(T^{k}_{\ i\ j}=2\Gamma^{k}_{\ [\ i\ j]}\). In terms of the nonmetricity tensor, the superpotential is defined as: \[P^{k}_{\ ij}=\frac{1}{4}\left[-Q^{k}_{\ ij}+2Q^{k}_{(i\,j)}+Q^{k}g_{ij}-\bar{ Q}^{k}g_{ij}-\delta^{k}_{(i}Q_{j)}\right]. \tag{5}\] In particular, there are only two unique traces of the nonmetricity tensor \(Q_{kij}\) because of the symmetry of the metric tensor \(g_{ij}\), which can be written as \[Q_{k}\equiv Q_{k}^{\ i\ \ i},\qquad\ \ \ \bar{Q}^{k}\equiv Q_{i}^{\ ki}. \tag{6}\] Let us now define the nonmetricity scalar in the background of this connection, as it will be useful in the current analysis; its calculated form can be given as \[\mathcal{Q}=-\frac{1}{4}Q_{ijk}Q^{ijk}+\frac{1}{2}Q_{ijk}Q^{kji}+ \frac{1}{4}Q_{i}Q^{i}-\frac{1}{2}Q_{i}\bar{Q}^{i}. \tag{7}\] Furthermore, we define \(\hat{T}_{ij}\) and \(\theta_{ij}\) for the current analysis as: \[\hat{T}_{ij}=\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\,\mathcal{L}_{m} \right)}{\delta g^{ij}}\quad\ \&\ \theta_{ij}=\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\,\mathcal{L}_{\theta }\right)}{\delta g^{ij}}. 
\tag{8}\] The modified Einstein-Hilbert action within the purview of symmetric teleparallel gravity (i.e. Eq. (1)) produces the following gravitational field equations by variation of the action with respect to the metric tensor \(g^{ij}\): \[\frac{2}{\sqrt{-g}}\nabla_{k}\left(\sqrt{-g}\,f_{\mathcal{Q}}\,P ^{k}_{\ ij}\right)+\frac{1}{2}g_{ij}f+f_{\mathcal{Q}}\big{(}P_{i\,kl}\,Q_{j}^{\ \ kl}\] \[-2\,Q_{kli}\,P^{kl}_{\ \ j}\big{)}=T_{ij},\quad\text{where}\ \ T_{ij}=\big{(}\hat{T}_{ij}+\alpha\,\theta_{ij}\big{)}, \tag{9}\] where \(f_{\mathcal{Q}}=\frac{\partial f}{\partial\mathcal{Q}}\), \(T_{ij}\) is the total stress energy-momentum tensor while \(\theta_{ij}\) represents the involvement of the extra source term. On the other hand, by varying the action with respect to the connection, one may obtain the following relation: \[\nabla_{n}\,\lambda^{jin}_{k}+\lambda^{ij}_{k}=\sqrt{-g}\,f_{ \mathcal{Q}}P^{k}_{ij}+H_{k}{}^{ij}, \tag{10}\] where \(H_{k}{}^{ij}\) represents the hyper-momentum tensor density and can be defined as: \[H_{k}{}^{ij}=-\frac{1}{2}\frac{\delta \mathcal{L}_{m}}{\delta\Gamma^{k}{}_{ij}}. \tag{11}\] Now, by using the antisymmetric property of \(i\) and \(j\) in the Lagrangian multiplier coefficients, Eq. (10) gives the following relation: \[\nabla_{i}\nabla_{j}\left(\sqrt{-g}\,f_{\mathcal{Q}}\,P^{k}_{\ \ ij}+H^{k}_{\ \ ij}\right)=0. \tag{12}\] Additionally, assuming \(\bigtriangledown_{i}\bigtriangledown_{j}(H^{k}_{\ \ ij})=0\), Eq. (12) yields the constraint on the connection: \[\nabla_{i}\nabla_{j}\left(\sqrt{-g}\,f_{\mathcal{Q}}\,P^{k}_{\ \ ij}\right)=0. \tag{13}\] Since the connection is free of curvature and torsion, it can be parameterized explicitly by a set of functions \(\xi^{l}\), and finally the affine connection takes the following form: \[\Gamma^{k}_{\ ij}=\left(\frac{\partial x^{k}}{\partial\xi^{l}} \right)\partial_{i}\partial_{j}\xi^{l}. \tag{14}\] Here \(\xi^{k}=\xi^{k}(x^{i})\) is an invertible relation. As a consequence, it is always possible to find a coordinate system that eliminates the \(\Gamma^{k}{}_{ij}\) connection, i.e. \(\Gamma^{k}_{\ ij}=0\); this choice is known as the coincident gauge, in which the covariant derivative \(\nabla_{i}\) reduces to the partial derivative \(\partial_{i}\). In any other coordinate system, where this affine connection does not vanish, the metric evolution would be altered, resulting in a completely different theory (Dimakis et al., 2022). Therefore, we adopt the coincident gauge, in which the nonmetricity tensor of Eq. (2) simplifies to \[Q_{kij}=\partial_{k}\,g_{ij}. \tag{15}\] One consequence of the operations stated above is that the computation becomes easier, although, unlike in standard General Relativity, diffeomorphism invariance of the action no longer exists. In principle, a covariant formulation of \(f(\mathcal{Q})\) gravity can be used by determining the affine connection in the absence of gravity before choosing the affine connection in Eq. (14). In this paper, we look into the gravitationally decoupled solutions for compact objects under \(f(\mathcal{Q})\) gravity. For the current analysis, we consider the generic spherically symmetric metric, which is given as: \[ds^{2}=e^{\Phi(r)}dt^{2}-e^{\mu(r)}dr^{2}-r^{2}(d\theta^{2}+\sin^{2}\theta\,d \phi^{2}). 
\tag{16}\] In the present study, we consider that the spacetime is filled with an anisotropic matter distribution, so the total energy-momentum tensor (\(T_{ij}\)) can be described as: \[T_{ij}=\epsilon\,u^{i}\,u_{j}-\mathcal{P}\,K_{j}^{i}+\Pi_{j}^{i}, \tag{17}\] where \[\mathcal{P}=\frac{P_{r}+2P_{\perp}}{3};\quad\Pi_{j}^{i}=\Pi\big{(} \xi^{i}\xi_{j}+\frac{1}{3}K_{j}^{i}\big{)};\] \[\text{with}\ \ \Pi=P_{r}-P_{\perp};\quad K_{j}^{i}=\delta_{j}^{i}-u^{i }u_{j}, \tag{18}\] and the fluid's four-velocity vector \(u^{i}\) and unit space-like vector \(\xi^{i}\) are given by \(\{i=0,1,2,3\}\), \[u^{i}=(e^{-\Phi/2},\ 0,\ 0,\ 0)\ \ \text{and}\ \ \xi^{i}=(0,\ e^{-\mu/2},\ 0,\ 0), \tag{19}\] such that \(\xi^{i}u_{i}=0\) and \(\xi^{i}\xi_{i}=-1\). Moreover, \(\epsilon\) denotes the total energy density while \(P_{r}\) and \(P_{\perp}\) represent the total radial and tangential pressure, respectively, for the gravitationally decoupled system. In this regard, the components of the energy-momentum tensor for the gravitationally decoupled system under the spherically symmetric line element (16) are \[T_{0}^{0}=\epsilon,\quad T_{1}^{1}=-P_{r},\quad\ T_{2}^{2}=T_{3}^{3}=-P_{\perp}. \tag{20}\] For the metric (16), we can calculate the nonmetricity scalar as follows: \[\mathcal{Q}=-\frac{2e^{-\mu(r)}\,[1+r\,\Phi^{\prime}(r)]}{r^{2}}, \tag{21}\] where the notation \({}^{\prime}\) denotes the derivative with respect to the radial coordinate \(r\) only. In the above expression, \(\mathcal{Q}\) is based on the vanishing affine connection, and it can be absorbed via the equations of motion (9) for the anisotropic fluid (17) as follows: \[\epsilon=-\frac{f(\mathcal{Q})}{2}+f_{\mathcal{Q}}(\mathcal{Q}) \Big{[}\mathcal{Q}+\frac{1}{r^{2}}+\frac{e^{-\mu}}{r}(\Phi^{\prime}+\mu^{ \prime})\Big{]}, \tag{22}\] \[P_{r}=\frac{f(\mathcal{Q})}{2}-f_{\mathcal{Q}}(\mathcal{Q}) \Big{[}\mathcal{Q}+\frac{1}{r^{2}}\Big{]},\] (23) \[P_{\perp}=\frac{f(\mathcal{Q})}{2}-f_{\mathcal{Q}}(\mathcal{Q}) \Big{[}\frac{\mathcal{Q}}{2}-e^{-\mu}\Big{\{}\frac{\Phi^{\prime\prime}}{2}+ \Big{(}\frac{\Phi^{\prime}}{4}+\frac{1}{2r}\Big{)}\] \[\times(\Phi^{\prime}-\mu^{\prime})\Big{\}}\Big{]},\] (24) \[0=\frac{\cot\theta}{2}\,\mathcal{Q}^{\prime}\,f_{\mathcal{Q} \mathcal{Q}}(\mathcal{Q}), \tag{25}\] where \(f_{\mathcal{Q}}(\mathcal{Q})=\frac{\partial f(\mathcal{Q})}{\partial\mathcal{ Q}}\) and \(f_{\mathcal{Q}\mathcal{Q}}(\mathcal{Q})=\frac{\partial^{2}f(\mathcal{Q})}{\partial\mathcal{ Q}^{2}}\). It is to be noted that the nonzero off-diagonal components derived from the specific gauge choice for the field equations in the context of \(f(T)\) theory put some constraints on the functional form of \(f(T)\) (Ferraro & Fiorini, 2011). As a consequence, this imposes restrictions on the functional form of \(f(\mathcal{Q})\) theory. In this connection, Wang et al. (2022) derived the possible functional forms for \(f(\mathcal{Q})\) gravity in the framework of the static and spherically symmetric spacetime by taking an anisotropic matter distribution. More specifically, they have shown that the exact Schwarzschild solution can exist only when \(f_{\mathcal{Q}\mathcal{Q}}(\mathcal{Q})=0\), while the solution obtained by taking the nonmetricity scalar \(\mathcal{Q}^{\prime}=0\) or \(\mathcal{Q}=\mathcal{Q}_{0}\), where \(\mathcal{Q}_{0}\) is constant, shows a deviation from the exact Schwarzschild solution (the detailed analysis of the above derivation is given in Section 4). 
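The nonmetricity scalar (21) can be checked symbolically from the definitions (7) and (15). The following sympy sketch, which is not part of the paper, builds \(Q_{kij}=\partial_{k}g_{ij}\) for the metric (16) in the coincident gauge, contracts it according to Eq. (7), and compares the result with Eq. (21); the cotangent terms from the angular sector cancel in the sum.

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi', positive=True)
Phi, mu = sp.Function('Phi')(r), sp.Function('mu')(r)
x = [t, r, th, ph]
R4 = range(4)

# Metric (16): ds^2 = e^Phi dt^2 - e^mu dr^2 - r^2 dtheta^2 - r^2 sin^2(theta) dphi^2
g = sp.diag(sp.exp(Phi), -sp.exp(mu), -r**2, -r**2*sp.sin(th)**2)
gi = g.inv()

# Coincident gauge, Eq. (15): Q_kij = partial_k g_ij
Q = [[[sp.diff(g[i, j], x[k]) for j in R4] for i in R4] for k in R4]
# Fully raised nonmetricity Q^kij
Qu = [[[sum(gi[k, a]*gi[i, b]*gi[j, c]*Q[a][b][c]
            for a in R4 for b in R4 for c in R4) for j in R4] for i in R4] for k in R4]
# Traces, Eq. (6): Q_k = Q_k^i_i and Qbar^k = Q_i^{ki}
Qt = [sum(gi[i, j]*Q[k][i][j] for i in R4 for j in R4) for k in R4]
Qb = [sum(gi[k, b]*gi[a, c]*Q[a][b][c]
          for a in R4 for b in R4 for c in R4) for k in R4]

# Nonmetricity scalar, Eq. (7)
Qs = (-sp.Rational(1, 4)*sum(Q[i][j][k]*Qu[i][j][k] for i in R4 for j in R4 for k in R4)
      + sp.Rational(1, 2)*sum(Q[i][j][k]*Qu[k][j][i] for i in R4 for j in R4 for k in R4)
      + sp.Rational(1, 4)*sum(Qt[i]*gi[i, j]*Qt[j] for i in R4 for j in R4)
      - sp.Rational(1, 2)*sum(Qt[i]*Qb[i] for i in R4))

target = -2*sp.exp(-mu)*(1 + r*sp.diff(Phi, r))/r**2   # Eq. (21)
print(sp.simplify(Qs - target))                        # expect 0
```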
Therefore, in order to solve the system of field equations in \(f(\mathcal{Q})\)-gravity theory for obtaining self-gravitating compact objects, we derive the functional form of \(f(\mathcal{Q})\) by taking only \(f_{\mathcal{Q}\mathcal{Q}}\) to be zero as \[f_{\mathcal{Q}\mathcal{Q}}(\mathcal{Q})=0\ \Rightarrow\ f_{\mathcal{Q}}( \mathcal{Q})=\beta_{1}\ \Rightarrow\ f(\mathcal{Q})=\beta_{1}\,\mathcal{Q}+\beta_{2}, \tag{26}\] where \(\beta_{1}\) and \(\beta_{2}\) are constants. At this juncture, we would like to mention that, regarding the compatibility of the static spherically symmetric spacetime with the coincident gauge, if one assumes the affine connection to be zero and that \(f(\mathcal{Q})\)-gravity theory has vacuum solutions (i.e. \(T_{ij}=0\)), then the off-diagonal component is given by \[\frac{\cot\theta}{2}\,\mathcal{Q}^{\prime}\,f_{\mathcal{Q}\mathcal{Q}}=0, \tag{27}\] where \(\mathcal{Q}\) has been provided by Eq. (21). As a consequence of the above Eq. (26), if \(f(\mathcal{Q})\) is linear then \(f_{\mathcal{Q}\mathcal{Q}}=0\) holds automatically and the equations of motion are consistent (Zhao, 2022). This aspect is essential, as a non-linear function of \(\mathcal{Q}\) would give rise to inconsistent equations of motion; a non-linear choice would require a more generalized form of the spherically symmetric metric related to a fixed coincident gauge (Zhao, 2022). Therefore, in the present study, to make the spherically symmetric coordinate system compatible with the affine connection \(\Gamma_{ij}^{k}=0\), we have opted for the linear functional form with \(f_{\mathcal{Q}\mathcal{Q}}=0\) to derive the equations of motion. Now, inserting Eqs. (21) and (26) into Eqs. (22)-(24), the equations of motion can be obtained as follows: \[\epsilon=\frac{1}{2\,r^{2}}\Big{[}2\,\beta_{1}+2\,e^{-\mu}\,\beta _{1}\,(r\,\mu^{\prime}-1)-r^{2}\,\beta_{2}\Big{]}, \tag{28}\] \[P_{r}=\frac{1}{2\,r^{2}}\Big{[}-2\,\beta_{1}+2\,e^{-\mu}\,\beta _{1}\,(r\,\Phi^{\prime}+1)+r^{2}\,\beta_{2}\Big{]}, \tag{29}\] \[P_{\perp} = \frac{e^{-\mu}}{4\,r}\Big{[}2\,e^{\mu}\,r\,\beta_{2}+\beta_{1}\,\left( 2+r\Phi^{\prime}\right)\left(\Phi^{\prime}-\mu^{\prime}\right)+2\,r\,\beta_{1}\,\Phi^{\prime\prime}\Big{]}. \tag{30}\] The vanishing of the covariant derivative of the effective energy-momentum tensor, \(\bigtriangledown_{i}T^{i}_{j}=0\), provides \[-\frac{\Phi^{\prime}}{2}(\epsilon+P_{r})-(P_{r})^{\prime}+\frac{2}{r}(P_{\perp}-P _{r})=0. \tag{31}\] It is to be noted that the above Eq. (31) is nothing but the usual Tolman-Oppenheimer-Volkoff (TOV) equation (Tolman, 1939; Oppenheimer & Volkoff, 1939), with \(f(\mathcal{Q})=\beta_{1}\mathcal{Q}+\beta_{2}\). Therefore, in connection to the proposed compact stellar model we would like to employ gravitational decoupling under the CGD approach to get a solution to the system of Eqs. (28)-(30). For this specific purpose, the gravitational potentials \(\Phi(r)\) and \(\mu(r)\) are modified as follows: \[\Phi(r) \longrightarrow H(r)+\alpha\,\eta(r) \tag{32}\] \[e^{-\mu(r)} \longrightarrow W(r)+\alpha\,\Psi(r), \tag{33}\] where \(\eta(r)\) and \(\Psi(r)\) are the geometric deformation functions for the temporal and radial metric components, respectively, introduced via the decoupling constant \(\alpha\). In the above Eqs. (32) and (33), for \(\alpha=0\), the standard \(f(\mathcal{Q})\) gravity theory can be easily recovered. 
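Since the linear choice (26) makes Eqs. (28)-(30) equivalent to the GR field equations with a cosmological-constant-like shift, the TOV identity (31) holds automatically. The sketch below (not from the paper) verifies this symbolically by substituting Eqs. (28)-(30) into Eq. (31):

```python
import sympy as sp

r, b1, b2 = sp.symbols('r beta1 beta2', positive=True)
Phi = sp.Function('Phi')(r)
mu = sp.Function('mu')(r)

# Eqs. (28)-(30) with f(Q) = beta1*Q + beta2
eps = (2*b1 + 2*sp.exp(-mu)*b1*(r*sp.diff(mu, r) - 1) - r**2*b2)/(2*r**2)
Pr  = (-2*b1 + 2*sp.exp(-mu)*b1*(r*sp.diff(Phi, r) + 1) + r**2*b2)/(2*r**2)
Pt  = sp.exp(-mu)/(4*r)*(2*sp.exp(mu)*r*b2
      + b1*(2 + r*sp.diff(Phi, r))*(sp.diff(Phi, r) - sp.diff(mu, r))
      + 2*r*b1*sp.diff(Phi, r, 2))

# TOV identity, Eq. (31): should vanish identically for any Phi(r), mu(r)
tov = -sp.diff(Phi, r)/2*(eps + Pr) - sp.diff(Pr, r) + 2/r*(Pt - Pr)
print(sp.simplify(tov))  # expect 0
```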
Essentially, to continue in the present work, we must consider non-zero values for both deformation functions, i.e., \(\eta(r)\neq 0\) and \(\Psi(r)\neq 0\). The above transformations (32) and (33) can easily divide the decoupled system, viz. (28)-(30), into two subsystems which are as follows: (i) the first system reflects the field equations in \(f(\mathcal{Q})\) gravity under \(\hat{T}_{ij}\) and (ii) the second system represents the additional source \(\theta_{i\,j}\). To set up these systems, we need to specify the energy-momentum tensor \(\hat{T}^{i}_{j}\) in the following form: \[\hat{T}^{i}_{j}=\rho\,\chi^{i}\,\chi_{j}-\mathcal{P}_{\mathcal{Q}}\,h^{i}_{j} +\hat{\Pi}^{i}_{j}\, \tag{34}\] where \[\mathcal{P}_{\mathcal{Q}}=\frac{p_{r}+2p_{t}}{3}\ ;\quad\hat{ \Pi}^{i}_{j}=\Pi_{\mathcal{Q}}\big{(}\zeta^{i}\zeta_{j}+\frac{1}{3}h^{i}_{j} \big{)}\ ;\] \[\text{with}\quad\Pi_{\mathcal{Q}}=p_{r}-p_{t}\ ;\ \ h^{i}_{j}= \delta^{i}_{j}-\chi^{i}\chi_{j}\, \tag{35}\] and \(\chi^{i}\) (fluid's four-velocity vector) and \(\zeta^{i}\) (unit space-like vector) are given by \[\chi^{i}=(e^{-H/2},\ 0,\ 0,\ 0)\ \ \text{and}\ \ \zeta^{i}=(0,\ \sqrt{W},\ 0,\ 0), \tag{36}\] such that \(\chi^{i}\zeta_{i}=0\) and \(\zeta^{i}\zeta_{i}=-1\). The \(\epsilon\), \(P_{r}\) and \(P_{\perp}\) can be written as \[\epsilon=\rho+\alpha\,\theta^{0}_{0},\ \ P_{r}=p_{{}_{r}}-\alpha\,\theta^{1}_{1}, \ \ P_{\perp}=p_{{}_{t}}-\alpha\,\theta^{2}_{2}, \tag{37}\] and the corresponding total anisotropy is \[\Delta=P_{\perp}-P_{r}=\Delta_{\mathcal{Q}}+\Delta_{\theta}, \tag{38}\] where \(\ \Delta_{\mathcal{Q}}=p_{{}_{t}}-p_{{}_{r}}\\) and \(\ \Delta_{\theta}=\alpha(\theta^{1}_{1}-\theta^{2}_{2})\). One may note that in the present anisotropic compact stellar system there are two types of anisotropies, generated by \(\hat{T}_{i\,j}\) and \(\theta_{i\,j}\). The anisotropy \(\Delta_{\theta}\) comes into the picture due to gravitational decoupling, which has a definite role in the transformation process. By putting Eqs. (32) and (33) into the system (28)-(30), the set of equations of motion dependent on the seed gravitational potentials \(H(r)\) and \(W(r)\) (i.e., the system for \(\alpha=0\)) is produced: \[\rho=\frac{\beta_{1}(1-W)}{r^{2}}-\frac{W^{\prime}\beta_{1}}{r}- \frac{\beta_{2}}{2}, \tag{39}\] \[p_{{}_{r}}=\frac{\beta_{1}(W-1)}{r^{2}}+\frac{H^{\prime}W\beta_{ 1}}{r}+\frac{\beta_{2}}{2},\] (40) \[p_{{}_{t}}=\frac{\beta_{1}(W^{\prime}H^{\prime}+2H^{\prime\prime }W+H^{\prime 2}W)}{4}+\frac{\beta_{1}\,(W^{\prime}+H^{\prime}W)}{2r}\] \[\quad+\frac{\beta_{2}}{2}, \tag{41}\] and according to the TOV Eq. (31), \[-\frac{H^{\prime}}{2}(\rho+p_{{}_{r}})-(p_{{}_{r}})^{\prime}+\frac{2}{r}(p_{{} _{t}}-p_{{}_{r}})=0. \tag{42}\] Consequently, the corresponding seed solution is described by the spacetime: \[ds^{2}=-e^{H(r)}dt^{2}+\frac{dr^{2}}{W(r)}+r^{2}d\theta^{2}+r^{2}\text{sin}^{2} \theta d\phi^{2}. 
\tag{43}\] Moreover, the system of field equations for the \(\theta\)-sector is derived by turning on \(\alpha\) as \[\theta^{0}_{0}=-\beta_{1}\Big{(}\frac{\Psi}{r^{2}}+\frac{\Psi^{ \prime}}{r}\Big{)}, \tag{44}\] \[\theta^{1}_{1}=-\beta_{1}\Big{[}\frac{\Psi}{r^{2}}+\frac{(\Psi\,\Phi^{\prime}+W\,\eta^{\prime})}{r}\Big{]},\] (45) \[\theta^{2}_{2}=-\beta_{1}\Big{[}\frac{(\Psi^{\prime}\Phi^{\prime }+2\Phi^{\prime\prime}\Psi+\Phi^{\prime 2}\Psi+W^{\prime}\,\eta^{\prime})}{4}+\frac{(\Psi^{ \prime}+\Phi^{\prime}\Psi)}{2r}\Big{]}\] \[\quad-\beta_{1}\Big{[}\frac{W}{4}\left(2\,\eta^{\prime\prime}+ \alpha\,\eta^{\prime\,2}+\frac{2\,\eta^{\prime}}{r}+2\,H^{\prime}\,\eta^{ \prime}\right)\Big{]}, \tag{46}\] and the associated conservation equation is \[-\frac{\Phi^{\prime}}{2}(\theta^{0}_{0}-\theta^{1}_{1})+(\theta^{1}_{1})^{\prime }+\frac{2}{r}(\theta^{1}_{1}-\theta^{2}_{2})=\frac{\eta^{\prime}}{2}\left(p_{{} _{r}}+\rho\right). \tag{47}\] However, the mass functions for both systems are given by \[m_{\mathcal{Q}}=\frac{1}{2}\int_{0}^{r}\rho(x)\,x^{2}dx\ \ \text{and}\ \ m_{\theta}=\frac{1}{2}\,\int_{0}^{r}\theta^{0}_{0}(x)\,x^{2}dx, \tag{48}\] where the relevant mass functions for the sources \(\hat{T}_{ij}\) and \(\theta_{ij}\) are \(m_{\mathcal{Q}}(r)\) and \(m_{\theta}(r)\), respectively. Then, in the context of \(f(\mathcal{Q})\) gravity, the interior mass function of the minimally deformed space-time (16) may be expressed as \[\tilde{m}(r)=m_{\mathcal{Q}}(r)-\frac{\beta_{1}\,\alpha}{2}\,r\,\Psi(r). \tag{49}\] Therefore, from all the previous steps towards the generation of the mass functions, the need for as well as the advantage of CGD decoupling becomes very clear: one can extend any known solution \(\{T_{ij},W,H\}\) of the system (39)-(41), associated with the action \(\mathcal{S}_{\mathcal{Q}}\), into the domain beyond \(f(\mathcal{Q})\) gravity theory associated with the action \(\mathcal{S}\). It is to be noted that in this process the equations of motion are displayed in Eqs. (28)-(30), and the unconventional gravitational system of equations (44)-(47) determines \(\{\theta_{ij},\,\Psi,\,\eta\}\). Let us now generate the \(\theta\)_-version_ of any \(\{T_{ij},W,H\}\)-solution as \[\{T_{ij},\;H(r),\;W(r)\}\Longrightarrow\{T_{ij}^{\rm tot},\;\Phi(r),\;\mu(r)\}, \tag{50}\] which describes a definite way to investigate results beyond symmetric teleparallel gravity. ## 3 Extended Gravitationally Decoupled Solution in \(f(\mathcal{Q})\) Gravity In this Section, we will solve both systems of equations, (39)-(41) and (44)-(46), related to the sources \(\hat{T}_{ij}\) and \(\theta_{ij}\). It is mentioned that the energy-momentum tensor \(\hat{T}_{ij}\) describes an anisotropic fluid matter distribution; therefore, the \(\theta_{ij}\) may enhance the total anisotropy of the system, which helps in preventing the gravitational collapse of the system. Furthermore, if we look at the second system, it shows clearly that the solution of the second system depends on the first system. It is therefore mandatory to solve the first system initially. For solving the first system (39)-(41) in \(f(\mathcal{Q})\)-gravity, we use a generalized polytropic equation of state (EOS) of the form \[p_{r} = a\,\rho^{1+1/n}+b\rho+c, \tag{51}\] where \(a,\ b\) and \(c\) are constant parameters with proper dimensions and \(n\) denotes a polytropic index. 
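A minimal numerical sketch of the EOS (51) follows; it is purely illustrative and not from the paper. The MIT bag limit \(a=0\), \(b=1/3\), \(c=-4\mathcal{B}_{g}/3\) is discussed in the next paragraph, and the bag-constant and density values below are placeholder numbers.

```python
def p_r(rho, a, b, c, n=1):
    """Generalized polytropic EOS, Eq. (51): p_r = a*rho^(1+1/n) + b*rho + c."""
    return a * rho**(1.0 + 1.0/n) + b * rho + c

# MIT bag limit: a = 0, b = 1/3, c = -4*Bg/3 gives p_r = (rho - 4*Bg)/3.
Bg = 60.0    # illustrative bag constant in MeV/fm^3
rho = 400.0  # illustrative energy density in MeV/fm^3
print(p_r(rho, a=0.0, b=1.0/3.0, c=-4.0*Bg/3.0))        # linear (MIT bag) case
print(p_r(rho, a=1e-4, b=1.0/3.0, c=-4.0*Bg/3.0, n=1))  # quadratic case (n = 1)
```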
Let us trace back the generic history of the polytropic equation of state \(p_{r}=a\rho^{1+\frac{1}{n}}\), which has been extensively used to analyse the physical attributes of compact stellar objects in the context of various requirements (Abramowicz, 1983; Cosenza et al., 1981; Herrera and Barreto, 2004; Herrera et al., 2004; Herrera and Barreto, 2013; Herrera et al., 2014, 2016; Bekenstein, 1960; Takisa and Maharaj, 2013; Azam and Mardan, 2017). In connection to the cosmological scenario, Chavanis (2014a) first tried to generalize the polytropic EOS in the form \(p_{r}=\gamma\rho^{1+\frac{1}{n}}+\beta\rho\). In the context of the late universe, Chavanis (2014b) further studied the issues by considering negative indices, which immediately received attention in the study of quantum fluctuations and constant-density cosmology (Freitas and Goncalves, 2014). But this EOS is mostly applicable for a stellar system with vanishing pressure when the density goes to zero. Therefore, for self-bound compact objects, a further modified EOS \(p_{r}=\gamma\rho^{1+\frac{1}{n}}+\beta\rho+\chi\) was extensively considered by several investigators (Azam et al., 2015, 2016; Azam and Mardan, 2017; Naecim et al., 2021). It is interesting to note that the polytropic EOS (51) may represent an MIT bag EOS for the specifications \(a=0,\ b=\frac{1}{3}\) and \(c=-\frac{4}{3}\mathcal{B}_{g}\), where \(\mathcal{B}_{g}\) is a bag constant. Due to the high non-linearity of the field equations, as a simple case we assume the polytropic index \(n=1\). In this context one may note that the contribution of the quadratic term (i.e. \(a\rho^{2}\)) appears in the EOS to express the neutron liquid in Bose-Einstein condensate form, whereas the linear terms (i.e. \(b\rho+c\)) come from the free-quark MIT bag model, with \(b=1/3\) and \(c=-4\mathcal{B}_{g}/3\). In addition to the above studies, there are various works employing more realistic EOSs. Analytical representations of more realistic EOSs based on parameters arising in nuclear physics, such as the FPS EOS due to Pandharipande and Ravenhall (Pandharipande and Ravenhall, 1989), the SLy EOS of Douchin and Haensel (Douchin and Haensel, 2001) and the unified EOSs BSk19, BSk20 and BSk21 (Potekhin et al., 2013), amongst others, have been previously employed using numerical techniques to generate neutron star models. The quadratic EOS can be viewed as a truncation of these more realistic EOSs. The models derived using the quadratic EOS can be used as a first approximation to test more complicated numerical codes. Furthermore, Haensel and Potekhin (Haensel and Potekhin, 2004) pointed out that unified EOSs (presented in the form of tables) introduce ambiguities in the calculations of various parameters in neutron star modeling. These pathologies arise in the interpolation between the tabulated points as well as in the calculation of derivatives of thermodynamic quantities. They further highlight the point that analytical EOSs ensure that these shortcomings are circumvented and allow for high-precision neutron star modeling. Now, by using Eqs. (39) and (40) in Eq. 
(51), one can find the differential equation \[4\beta_{1}r^{2}\left(a\beta_{2}-a\beta_{1}W^{\prime 2}-a\beta_{2}W+bW-b+W-1 \right)+4\beta_{1}r^{3}\times(Nr^{2}+1)^{4}+r^{2}\beta_{2}(-2b+a\beta_{2}-2)\Big{(}Nr^{2}+1 \Big{)}^{4}+4\beta_{1}\Big{(}N^{2}(b-a\beta_{2}+1)r^{4}+3N(b-a\beta_{2}+1)r^{2 }+L\Big{(}N\times(n\beta_{2}+1)r^{2}-b\Big{(}Nr^{2}+3\Big{)}3a\beta_{2}+1 \Big{)}r^{2}+2\Big{)}\Big{(}Nr^{2}+1\Big{)}^{2}+4a(L-N)^{2}r^{2}\Big{(}Nr^{2}+3 \Big{)}^{2}\beta_{1}^{2}\Big{)}\Bigg{]}+\frac{1}{Nr^{2}+1}\Bigg{(}\frac{8f_{2}(r)\Big{(}Lr^{2}+1\Big{)} \beta_{1}}{\Big{(}Lr^{2}+1\Big{)}^{2}\Big{(}Nr^{2}+1\Big{)}^{4}\beta_{1}}+\frac{8f_{3}(r)\Big{(}Lr^{2}+1\Big{)}\beta_{1}}{\Big{(}Nr^{2}+1 \Big{)}^{3}\Big{(}L\beta_{1}r^{2}+1\Big{)}}\Bigg{)}\Bigg{]}, \tag{58}\] where the coefficients used in the above expressions are given in the Appendix. In the above set of Eqs. (56)-(58), we have the complete spacetime geometry for the seed solution. However, corresponding to the \(\theta\)-sector, we need to find the solution of the second system of Eqs. (44)-(46). Note, however, that there are three independent equations with five unknowns. This situation therefore demands two additional pieces of information to close the \(\theta\)-system, e.g. \(\Psi(r)\) and \(\eta(r)\). It is known that, for physical viability, \(\Phi(r)\) should be monotonically increasing towards the boundary, and so \(H(r)+\alpha\eta(r)\) must be an increasing function of \(r\). Hence for simplicity, we assume \(\eta(r)=H(r)\), which eventually provides \(\Phi(r)=(1+\alpha)H(r)\). Related to the system of equations (44) and (45) for the source \(\theta_{ij}\), after imposing the constraint \(\beta_{2}\neq 0\), we opt for the following preferences: \[\Psi(r) = W(r)-1+\frac{\beta_{2}r^{2}}{6\beta_{1}} \tag{59}\] \[\Psi(r) = \frac{1}{1+r\,\Phi^{\prime}(r)}-\frac{W(r)\left[1+r\{H^{\prime}( r)+\eta^{\prime}(r)\}\right]}{1+r\,\Phi^{\prime}(r)}\] (60) \[-\frac{\beta_{2}r^{2}}{2\beta_{1}}.\] One can note that \(\Psi(r)\) is free from any singularity and also \(\Psi(0)=0\). As a result, these allow us to mimic (i) \(\theta_{0}^{0}\) with the energy density \(\rho\), i.e. \(\rho=\theta_{0}^{0}\) from Eq. (59) and (ii) \(\theta_{1}^{1}\) with the radial pressure \(p_{r}\), i.e. \(p_{r}=\theta_{1}^{1}\) from Eq. (60). Several authors (Sharif et al., 2020a,b, , 2021; Maurya et al., 2020, 2021, 2022, 2023) have successfully applied this technique in modeling compact objects in GR as well as in modified gravity theories, together with the gravitational cracking concept under gravitational decoupling (Contreras and Fuenmayor, 2021). Motivated by these works, we use the mimic approach in the present study for the system of equations (39)-(41). Therefore, we adopt the following approaches: (i) mimicking of the density constraint (i.e. \(\rho=\theta_{0}^{0}\)) and (ii) mimicking of the radial pressure constraint (i.e. \(p_{r}=\theta_{1}^{1}\)) [vide for details Ref. (Ovalle, 2017)]. ### Mimicking of the density constraint (i.e., \(\rho=\theta_{0}^{0}\)) To solve the \(\theta\)-sector, here we mimic the seed energy density \(\rho\) to \(\theta_{0}^{0}\) from Eqs. 
(39) and (44), and we find a first-order linear differential equation in \(\Psi(r)\): \[\Psi^{\prime}+\frac{\Psi}{r}+\frac{1}{2\beta_{1}r}\Big{[}2 \beta_{1}(1-W-r\,W^{\prime})-r^{2}\,\beta_{2}\Big{]}=0. \tag{61}\] Now we obtain the expression of the deformation function \(\Psi(r)\) after integrating the above equation by using the known potential \(W(r)\) as \[\Psi(r)=\frac{r^{2}\left(\beta_{2}+6\beta_{1}L-6\beta_{1}N+\beta _{2}Nr^{2}\right)}{6\beta_{1}\left(1+Nr^{2}\right)}. \tag{62}\] The arbitrary constant of integration has been taken to be zero to ensure the non-singular nature of \(\Psi(r)\) at the center. Furthermore, we take the deformation function \(\eta(r)=H(r)\) as mentioned above in order to find the expressions for the \(\theta\)-sector. Hence, the \(\theta\)-sector components are obtained as \[\theta_{0}^{0}(r) = \frac{1}{2\left(Nr^{2}+1\right)^{2}}\bigg{[}N^{2}\left(2\beta_{ 1}r^{2}-\beta_{2}r^{4}\right)-\beta_{2}-2\beta_{1}L \tag{63}\] \[\times\left(Nr^{2}+3\right)+N\left(6\beta_{1}-2\beta_{2}r^{2} \right)\bigg{]},\] \[\theta_{1}^{1}(r) = \frac{-1}{6(1+Nr^{2})}\bigg{[}\beta_{2}+6\beta_{1}\theta_{11}(r) \Big{(}(\alpha+2)Lr^{2}-(\alpha+1)\] (64) \[\times Nr^{2}+1\Big{)}+\beta_{2}(\alpha+1)\theta_{11}(r)r^{2} \left(Nr^{2}+1\right)+6\beta_{1}L\] \[-6\beta_{1}N+\beta_{2}Nr^{2}\bigg{]},\] \[\theta_{2}^{2}(r) = \frac{-1}{24\left(Nr^{2}+1\right)^{2}}\bigg{[}2\theta_{22}(r) \left(Nr^{2}+1\right)\Big{[}6\beta_{1}\Big{(}1+(\alpha+2)Lr^{2}\] (65) \[-(\alpha+1)Nr^{2}\Big{)}+(\alpha+1)\beta_{2}r^{2}\left(Nr^{2}+1 \right)\Big{]}+\theta_{23}(r)\bigg{]},\] where the explicit expressions for \(\theta_{22}(r)\) and \(\theta_{23}(r)\) are given in the Appendix. ### Mimicking of the pressure constraint (i.e., \(p_{r}=\theta_{1}^{1}\)) In this mimic-constraint approach, mimicking the seed radial pressure \(p_{r}\) with \(\theta_{1}^{1}(r)\) from Eqs. (40) and (45), we obtain the expression for the deformation function \(\Psi(r)\) as \[\Psi(r)=-\frac{1}{\left(Nr^{2}+1\right)\Psi_{21}(r)}\bigg{[}2r^{2 }\Big{(}Lr^{2}+1\Big{)}\bigg{\{}a\beta_{2}^{2}-2\beta_{1}L\Big{[}-6a\beta_{2 }+N^{3}\Big{(}r^{6}(1-2a\beta_{2})+4a\beta_{1}r^{4}\Big{)}+N^{2} \Big{(}r^{4}(3-10a\beta_{2})+24a\,\beta_{1}r^{2}\Big{)}\Big{]}+\Psi_{22}(r)\bigg{\}}\bigg{]}. \tag{66}\] Finally, using the same deformation function \(\eta(r)=H(r)\), we find the \(\theta\)-components for this solution as \[\theta_{0}^{0}(r)=\frac{2\beta_{1}\Big{[}4Nr^{2}\Big{(}Lr^{2}+ 1\Big{)}\Big{(}Nr^{2}+1\Big{)}\Psi_{00}(r)+f_{4}(r)\Big{]}}{3f_{3}(r)\left(Nr ^{2}+1\right)^{2}}, \tag{67}\] \[\theta_{1}^{1}(r)=\frac{1}{4\left(Nr^{2}+1\right)^{4}}\bigg{[} \bigg{(}\beta_{2}+2\beta_{1}L\left(Nr^{2}+3\right)+N^{2}\Big{(}\beta_{2}r^{4}\] (68) \[-2\beta_{1}r^{2}\Big{)}+N\left(2\beta_{2}r^{2}-6\beta_{1}\right) \bigg{)}\bigg{\{}a\bigg{(}\beta_{2}+2\beta_{1}L\left(Nr^{2}+3\right)\] \[+\beta_{2}N^{2}r^{4}-2\beta_{1}N^{2}r^{2}-6\beta_{1}N+2\beta_{2 }Nr^{2}\bigg{)}\] \[-2b(Nr^{2}+1)^{2}\bigg{\}}\bigg{]}+c,\] where the explicit expressions for \(\Psi_{21}(r)\), \(\Psi_{22}(r)\) and \(\Psi_{00}(r)\) are given in the Appendix. ## 4 Exterior spacetime and matching conditions At this juncture, let us apply the boundary conditions to obtain the expressions for the constants as well as the physical parameters needed to study the features of the compact star. For this purpose, we need to match the interior spacetime smoothly with a suitable exterior vacuum solution at the pressureless bounding surface (i.e. 
at \(r=R\)) for the functional form of \(f(\mathcal{Q})=\beta_{1}\mathcal{Q}+\beta_{2}\). Following the analysis provided by Wang et al. (2022) for the off-diagonal component as it appears in Eq. (25), the solutions of \(f(\mathcal{Q})\) gravity are restricted to the following two cases: \[f_{\mathcal{QQ}}=0 \Rightarrow f(\mathcal{Q})=\beta_{1}\,\mathcal{Q}+\beta_{2}, \tag{69}\] \[\mathcal{Q}^{\prime}=0 \Rightarrow \mathcal{Q}=\mathcal{Q}_{0}, \tag{70}\] where \(\beta_{1}\), \(\beta_{2}\) and \(\mathcal{Q}_{0}\) are constants. One can note that this result is similar to the one in the \(f(T)\) gravity-related work of Ref. (Boehmer et al., 2011), where the constraint case \(f_{TT}=0\) or \(T^{\prime}=0\) was considered for spherically symmetric static distributions with a diagonal tetrad as the background formalism. One can also note that, for a cosmological constant proportional to \(\beta_{2}/\beta_{1}\), the first solution Eq. (69) is equivalent to GR as it reduces to STGR. We shall discuss this point in more detail later on. In the case of vacuum we have \(\epsilon=P_{r}=P_{\perp}=0\). Therefore, the equations of motion (22)-(24), due to Eq. (69), take the forms \[\Phi^{\prime}+\mu^{\prime}=0, \tag{71}\] \[\frac{\beta_{2}}{\beta_{1}}-\frac{2}{r^{2}}=\mathcal{Q},\] (72) \[\frac{\beta_{2}}{2}+\beta_{1}e^{-\mu}\Bigg{[}\frac{\Phi^{\prime \prime}}{2}+\Big{(}\frac{\Phi^{\prime}}{4}+\frac{1}{2r}\Big{)}(\Phi^{\prime}- \mu^{\prime})\Bigg{]}=0. \tag{73}\] Now, from Eq. (71), one can get in a straightforward way the following relation: \[\Phi(r)=-\mu(r)+\mu_{0}, \tag{74}\] where \(\mu_{0}\) is a constant of integration. It is to be noted that, for the sake of convenience, \(\mu_{0}\) can be absorbed via the rescaling of the time coordinate \(t\to e^{-\mu_{0}/2}\,t\). As a result, the \(rr\)-component in the metric (16) becomes the inverse of the \(tt\) component, so that one may obtain \[\Phi(r)=-\mu(r). \tag{75}\] In Eq. (72), we consider the term \(\beta_{2}/\beta_{1}\) as the cosmological constant of Einstein \(\Lambda\) with a reversed sign due to the convention of nonmetricity of Eq. (7). Now, from the relation (16) along with Eqs. (71) and (72), we get the expression for \(\mu\) as \[e^{\mu}=\Big{(}1+\frac{\mu_{1}}{r}-\frac{\beta_{2}}{6\beta_{1}}r^{2}\Big{)}^{ -1}, \tag{76}\] where \(\mu_{1}\) is an integration constant. In a similar way \(\Phi(r)\) can be found from Eqs. (75) and (76) as follows: \[e^{\Phi}=\Big{(}1+\frac{\mu_{1}}{r}-\frac{\beta_{2}}{6\beta_{1}}r^{2}\Big{)}. \tag{77}\] Hence the explicit line element can be provided as \[ds^{2}=-\Big{(}1+\frac{\mu_{1}}{r}-\frac{\beta_{2}}{6\beta_{1}}r ^{2}\Big{)}dt^{2}+\Big{(}1+\frac{\mu_{1}}{r}-\frac{\beta_{2}}{6\beta_{1}}r^{ 2}\Big{)}^{-1}\] \[dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\!\theta\,d\phi^{2}). \tag{78}\] A few interesting observations from the above metric (78) are as follows: straightforwardly it represents the Schwarzschild (anti-)de Sitter solution with (i) the cosmological constant \(\Lambda=\beta_{2}/(2\beta_{1})\), (ii) the mass of the stellar object identified via \(\mu_{1}=-2\mathcal{M}\). This comparison and resemblance indicate that the Schwarzschild (anti-)de Sitter solution exists only for the linear \(f(\mathcal{Q})\) gravity, which is equivalent to GR. Let us now treat the following situations: (i) solution for \(\mathcal{Q}=\mathcal{Q}_{0}\) as obtained in Eq. 
(70), which acts as a constraint on the functional form of \(f(\mathcal{Q})\), and (ii) the constant nonmetricity scalar \(\mathcal{Q}_{0}\), which can be considered as equivalent to the cosmological constant \(\Lambda\), as shown in Eq. (72). Hence, in the present case the non-metricity scalar \(\mathcal{Q}\) can be expressed as \[\mathcal{Q}_{0}=-\frac{2e^{-\mu}}{r}\Big{(}\Phi^{\prime}+\frac{1}{r}\Big{)}. \tag{79}\] Now, under the constant non-metricity scalar (i.e. \(\mathcal{Q}=\mathcal{Q}_{0}\)) and the vacuum case (i.e. \(\epsilon=P_{r}=P_{\perp}=0\)), the equations of motion (22)-(24) become \[f_{\mathcal{Q}}(\mathcal{Q}_{0})\frac{e^{-\mu}}{r}(\Phi^{\prime }+\mu^{\prime})=0, \tag{80}\] \[-\frac{f_{\mathcal{Q}}(\mathcal{Q}_{0})}{2}+f_{\mathcal{Q}}( \mathcal{Q}_{0})\Big{(}\mathcal{Q}_{0}+\frac{1}{r^{2}}\Big{)}=0,\] (81) \[f_{\mathcal{Q}}(\mathcal{Q}_{0})\Bigg{[}\frac{\mathcal{Q}_{0}}{ 2}+\frac{1}{r^{2}}+e^{-\mu}\Bigg{\{}\frac{\Phi^{\prime\prime}}{2}+\Big{(}\frac {\Phi^{\prime}}{4}+\frac{1}{2r}\Big{)}\] \[\times(\Phi^{\prime}-\mu^{\prime})\Bigg{\}}\Bigg{]}=0. \tag{82}\] The above set of equations (80) and (81) readily provides \[f(\mathcal{Q}_{0})=0,\quad\text{or}\quad f_{\mathcal{Q}}(\mathcal{Q}_{0})=0. \tag{83}\] These two restrictions immediately suggest that the functional form of \(f(\mathcal{Q})\) can be expressed as a power series expansion around \(\mathcal{Q}=\mathcal{Q}_{0}\), which may be written as follows: \[f(\mathcal{Q})=\mathcal{Q}_{1}(\mathcal{Q}-\mathcal{Q}_{0})+\mathcal{Q}_{2}(\mathcal{Q}-\mathcal{Q}_{0})^{2}+\mathcal{Q}_{3}(\mathcal{Q} -\mathcal{Q}_{0})^{3}+\mathcal{Q}_{4}(\mathcal{Q}-\mathcal{Q}_{0})^{4}+\cdots \tag{84}\] and which can be written in the general form as \[f(\mathcal{Q})=\sum_{n=1}\mathcal{Q}_{n}(\mathcal{Q}-\mathcal{Q}_{0})^{n}, \tag{85}\] where \(\mathcal{Q}_{1},\ \mathcal{Q}_{2},\ \mathcal{Q}_{3},\ldots\) are constant coefficients. In the present situation of an \(f(\mathcal{Q})\) gravity-related non-trivial solution, one should keep in mind that Eq. (83) should be satisfied by the functional form of \(f(\mathcal{Q})\). This is essential to obtain new solutions which are distinctly different from GR. Now, from Eq. (80), one may get another situation, which is \[\Phi^{\prime}+\mu^{\prime}=0. \tag{86}\] Then, substituting Eq. (86) in Eq. (79) we get \[e^{\Phi(r)}=\frac{\mu_{1}}{r}-\frac{Q_{0}}{6}r^{2}\ \ \text{and}\ \ e^{-\mu(r)}=\frac{\mu_{1}}{r}-\frac{Q_{0}}{6}r^{2}. \tag{87}\] Therefore, the metric (16) eventually takes the following form: \[ds^{2}=-\Big{(}\frac{\mu_{1}}{r}-\frac{Q_{0}}{6}r^{2}\Big{)}dt^{2}+ \Big{(}\frac{\mu_{1}}{r}-\frac{Q_{0}}{6}r^{2}\Big{)}^{-1}dr^{2}\\ +r^{2}d\theta^{2}+r^{2}\mathrm{sin}^{2}\theta\,d\phi^{2}. \tag{88}\] It is noticeable that the above line element is not the same as the Schwarzschild solution, which implies that the exact Schwarzschild solution does not exist for a non-trivial functional form of \(f(\mathcal{Q})\). 
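As a quick consistency check, a sketch (not the authors' code) can verify symbolically that the potentials (76)-(77) satisfy the vacuum equation (73) and reproduce the nonmetricity relation (72) via Eq. (21):

```python
import sympy as sp

r, mu1, b1, b2 = sp.symbols('r mu_1 beta1 beta2')

# Eqs. (76)-(77): e^{-mu} = e^{Phi} = 1 + mu1/r - beta2 r^2 / (6 beta1)
A = 1 + mu1/r - b2*r**2/(6*b1)
Phi, mu = sp.log(A), -sp.log(A)

# Eq. (73): beta2/2 + beta1 e^{-mu} [Phi''/2 + (Phi'/4 + 1/(2r))(Phi' - mu')] = 0
lhs = b2/2 + b1*sp.exp(-mu)*(sp.diff(Phi, r, 2)/2
      + (sp.diff(Phi, r)/4 + 1/(2*r))*(sp.diff(Phi, r) - sp.diff(mu, r)))
print(sp.simplify(lhs))                                # expect 0

# Eq. (21) evaluated on this solution should reproduce Eq. (72)
Qscal = -2*sp.exp(-mu)*(1 + r*sp.diff(Phi, r))/r**2
print(sp.simplify(Qscal - (b2/b1 - 2/r**2)))           # expect 0
```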
By taking into account the above discussion, we opt for the Schwarzschild anti-de Sitter spacetime in \(f(\mathcal{Q})\) gravity under the functional form (69), which can be written as \[ds^{2}_{+}=-\bigg{(}1-\frac{2\mathcal{M}}{r}-\frac{\Lambda}{3}\ r ^{2}\bigg{)}\,dt^{2}+\frac{dr^{2}}{\bigg{(}1-\frac{2\mathcal{M}}{r}-\frac{ \Lambda}{3}\ r^{2}\bigg{)}}\\ +r^{2}\Big{(}d\theta^{2}+\mathrm{sin}^{2}\theta\,d\phi^{2}\Big{)}, \tag{89}\] where \(\mathcal{M}=\hat{M}_{\mathcal{Q}}/\beta_{1}\) and \(\Lambda=\beta_{2}/2\beta_{1}\), with \(\hat{m}_{\mathcal{Q}}(R)=\hat{M}_{\mathcal{Q}}\). Therefore, it is clearly observed that when \(\beta_{1}=1\) and \(\beta_{2}=0\), the Schwarzschild anti-de Sitter spacetime (89) reduces to the Schwarzschild exterior solution. On the other hand, the minimally deformed interior spacetime for the region (\(0\leq r\leq R\)) is given by \[ds^{2}_{-}=-e^{H(r)+\alpha\,\eta(r)}\,dt^{2}+\big{[}W(r)+\alpha \,\Psi(r)\big{]}^{-1}dr^{2}\\ +r^{2}(d\theta^{2}+\mathrm{sin}^{2}\theta\,d\phi^{2}). \tag{90}\] As usual, here we employ the Israel-Darmois matching conditions (Israel, 1966; Darmois, 1927), i.e. the continuity of the first and second fundamental forms at the interface; mathematically, this can be given as \[e^{\Phi^{-}(r)}|_{r=R}=e^{\Phi^{+}(r)}|_{r=R}\ \text{and}\ e^{ \mu^{-}(r)}|_{r=R}=e^{\mu^{+}(r)}|_{r=R}, \tag{91}\] \[\Big{[}G_{ij}\,r^{j}\Big{]}_{\Sigma}\equiv \lim_{r\to R^{+}}(G_{ij}\,r^{j})-\lim_{r\to R^{-}}(G_{ij}\,r^{j})=0\] \[\implies\ \ \Big{[}T^{\mathrm{eff}}_{ij}\,r^{j}\Big{]}_{\Sigma}=\Big{[}(\hat{T}_{ij}+\alpha\,\theta_{ij})\,r^{j}\Big{]}_{\Sigma}=0. \tag{92}\] The conditions (91) and (92) yield \[e^{H(R)+\alpha\eta(R)}=\Bigg{(}1-\frac{2\mathcal{M}}{R}- \frac{\Lambda}{3}\ R^{2}\Bigg{)}\] \[\text{and}\ \ W(R)+\alpha\,\Psi(R)=\Bigg{(}1-\frac{2\mathcal{M}}{R}- \frac{\Lambda}{3}\ R^{2}\Bigg{)}, \tag{93}\] \[P_{r}(R)=p_{r}(R)+\alpha\,\beta_{1}\,\left[\Psi_{\Sigma}\bigg{(} \frac{1}{R^{2}}+\frac{H^{\prime}_{\Sigma}}{R}\bigg{)}+\frac{W_{\Sigma}\,\eta^ {\prime}_{\Sigma}}{R}\right]=0. \tag{94}\] In order to calculate the numerical values we employ Eqs. (93) and (94) and thus determine the unknown parameters such as the constant (\(F\)), mass (\(\mathcal{M}\)) and arbitrary constant (\(c\)). ## 5 Physical analysis of completely deformed SS models and astrophysical implications ### Regular behavior of strange star (SS) models #### 5.1.1 For solution 3.1 (\(\rho=\theta_{0}^{0}\)) We now turn our attention to the physical analysis of the models obtained for the \(\rho=\theta_{0}^{0}\) sector. The energy density is plotted in Fig. 1. Keep in mind that for \(\alpha=0\) we recover the standard \(f(\mathcal{Q})\) gravity. Starting from the left of Fig. 1, the first and second plots reveal the behavior of the energy density as a function of the radial coordinate for a fluid obeying the MIT bag model EOS. We observe that the energy density is regular at all interior points of the fluid configuration, attaining a maximum at the center. It is clear that contributions from the decoupling constant \(\alpha\) lead to higher core densities in the linear regime as compared to the standard \(f(\mathcal{Q})\) gravity models. The third and fourth panels reveal the trend in the energy density for a fluid obeying a quadratic EOS. We note that for the quadratic EOS, contributions from \(\alpha\) lead to a significant increase in the energy density, particularly in the central regions of the star. 
Moreover, the closeness of the contours reveals that these higher densities lead to more compact configurations. If we compare the linear EOS to the quadratic EOS for a nonzero decoupling constant, we observe that stars with the quadratic EOS have higher core densities and are more compact than their linear EOS counterparts. Panels one and three reveal that in the absence of the decoupling constant (\(\alpha=0\)), the energy density in the linear regime dominates its quadratic counterpart. The linear model predicts higher core densities, but as one moves to the surface layers of the star, the densities are of similar magnitude. A comparison of panels 2 and 4 (MIT versus quadratic EOS for a nonvanishing decoupling parameter) reveals that the density for the quadratic model is higher than that of the linear model at each interior point of the bounded configuration. We now consider the trends in the behaviour of the radial pressure (plotted in Fig. 2) throughout the interior of the compact object for the \(\rho=\theta_{0}^{0}\) sector. A comparison of panels one and three shows that the pressure in the quadratic models dominates over that of the linear models when the decoupling parameter vanishes. This increased pressure in the quadratic models leads to greater stability against the inwardly driven gravitational force, with this effect being enhanced in the central regions of the star. The surface pressure in the quadratic models dominates that of their linear counterparts, thus leading to stable surface layers. It is clear from the second and fourth panels that the pressure is enhanced in the presence of the decoupling parameter. In addition, the radial pressure in the quadratic models dominates that of their linear counterparts at each interior point from the center through to the stellar surface, with the highest pressures achieved in models with the quadratic EOS and nonzero \(\alpha\). Before we embark on a discussion of the trends in anisotropy in our models, we highlight some work which motivates the inclusion of pressure anisotropy in stellar configurations. In our models the radial and transverse stresses at each interior point of the compact object are unequal. Obviously, equality of the two transverse components of the pressure ensures the spherical symmetry of the model (Dev & Gleiser, 2002). However, there are various reasons to consider the origin of anisotropy inside compact stars (Ruderman, 1972; Sawyer, 1972; Jones, 1975; Sokolov, 1980; Kippenhahn & Weigert, 1990; Weber, 1999; Liebling & Palenzuela, 2012), and the review of local anisotropy by Herrera & Santos (1997) may be informative in this regard. Very recently, Herrera (2020) has investigated the conditions for the (in)stability of the isotropic pressure condition in collapsing spherically symmetric, dissipative fluid distributions. Herrera demonstrated that an initially isotropic configuration, upon leaving hydrostatic equilibrium, evolves into an anisotropic regime. The anisotropy parameter is displayed in Fig. 3. We observe that the anisotropy changes sign within the compact object. We recall that a positive anisotropy factor signifies a repulsive anisotropic force which is directed outwards (Hansraj et al., 2022). This helps stabilise the stellar configuration against gravity. In Fig. 3, the extreme left panel shows that the anisotropy parameter is minimum at the centre of the fluid and is negative up to a finite radius, \(r_{0}\).
This negative anisotropy factor is accompanied by an inwardly driven force which adds to the gravitational force, leading to an unstable interior. As one moves beyond \(r=r_{0}\), the anisotropy factor becomes positive. The repulsive force associated with \(\Delta>0\) stabilises the surface layers of the star. In direct comparison, panel 3 reveals a peculiar behaviour of the anisotropy factor. It starts off negative in the central regions and remains negative up to a larger radius, \(r_{1}\), where \(r_{1}>r_{0}\). It appears that the quadratic EOS model is more unstable than its linear counterpart in the central regions of the compact object. As one moves further out, the anisotropy becomes positive. The first and third panels indicate that the quadratic EOS model has a narrower band of stable surface layers as compared to the linear EOS model. A comparison of the linear model with its quadratic counterpart for a vanishing decoupling parameter (third panel) shows a marked difference between the two models. In the quadratic model, we see an interesting variation in the anisotropy parameter: starting from the centre of the star, the anisotropy is negative for some central region, \(0<r<r_{0}\), and then becomes more negative as one moves outwards towards the boundary. The anisotropy remains negative for \(r_{0}<r<r_{1}\), where \(r_{1}>r_{0}\). Beyond \(r_{1}\), the anisotropy becomes positive, rendering the surface layers stable due to the repulsive anisotropic force. The change in sign of \(\Delta\) within the quadratic EOS model can be attributed to phase transitions in different regions within the stellar fluid. In the second and fourth panels, we observe the effect of a nonvanishing decoupling constant in the linear and quadratic EOS models, respectively. In the second panel, we observe that \(\Delta<0\) from the centre to some finite radius. Thereafter, \(\Delta\) remains positive up to the boundary of the star. The fourth panel shows that the anisotropy is negative within the central regions of the configuration. The anisotropy factor starts off with a finite negative value from \(r=0\) up to some radius \(r=r_{0}\). Thereafter \(\Delta\) becomes more negative as one moves outwards. After some radius \(r_{1}\), the anisotropy factor changes sign and becomes positive. A comparison of the second and fourth panels clearly shows that the surface layers of the linear model are more stable than those of its quadratic counterpart. It is interesting to observe the behavior of the anisotropy in the quadratic models, particularly the change of sign of \(\Delta\) in different regions within the stellar fluid. The change in the nature of the anisotropic force (from attraction to repulsion) leads to an unstable core but stable surface layers. This interesting behavior of the anisotropy was also observed by Maurya et al. (2023), who modeled compact objects in \(f(\mathcal{Q})\) gravity in which the stellar fluid obeyed the MIT bag model EOS.

#### 5.1.2 For solution 3.2 (\(p_{r}=\theta_{1}^{1}\))

In this subsection, we turn our attention to the physical analysis of the models discovered for the solution \(p_{r}=\theta_{1}^{1}\). Figure 4 exhibits the energy density plot. As usual, \(\alpha=0\) leads to the standard \(f(\mathcal{Q})\) gravity. Let us first look at the two left panels of Fig. 4: the first and second plots show the behavior of the energy density against the radial coordinate \(r\) for a fluid distribution following the MIT bag model EOS.
It is evident that the energy density obtained here is regular at all internal points of the stellar structure and attains a maximum at the core of the object. From the first and second panels, it is clear that the contribution of the decoupling constant (\(\alpha\)) allows higher core densities in the MIT bag model EOS as compared to the pure \(f(\mathcal{Q})\) gravity stellar models. Furthermore, the third and fourth panels indicate the trend in the energy density for the fluid distribution in the quadratic EOS. We notice that the contributions from \(\alpha\) in the context of the quadratic EOS give rise to a substantial growth in the energy density, particularly in the central regions of the star, which leads to more compact configurations. On the other hand, we also detect that stars obeying the quadratic EOS have higher central densities and are more compact than their linear EOS counterparts, as happened for the first solution. Now we move to Fig. 5 to check the behavior of the radial pressure within the stellar object for the \(p_{r}=\theta_{1}^{1}\) sector. On comparing the first and third panels in the absence of gravitational decoupling, we observe that the radial pressure of models under the quadratic EOS dominates over that of models in the linear regime. Furthermore, this increment in the pressure for quadratic models leads to greater stability in the central regions of the star against the inwardly directed gravitational force. From the second and fourth panels, it is clear that the pressure decreases in the presence of the gravitational decoupling parameter. Apart from this, the radial pressure of the model in the context of the quadratic EOS dominates its linear counterpart everywhere inside the star, from the center to the surface, and the highest pressure is achieved in models obeying the quadratic EOS with vanishing \(\alpha\). A scrutiny of Fig. 6 reveals that the anisotropy is regular at all interior points of each of the models displayed in the four panels. In addition, the anisotropy is positive and increases from the center of the star toward the boundary. This is in direct contrast to our models in the \(\rho=\theta_{0}^{0}\) sector. A comparison of panels 1 and 3 shows the trend in anisotropy in the absence of the decoupling constant for the linear and quadratic EOS, respectively. The central anisotropy is lower in the linear model compared to its quadratic counterpart. As one approaches the surface layers, we observe that the anisotropy increases, with the increase being more significant over a larger portion of the surface layers in the linear model. Since \(\Delta>0\), the repulsive anisotropic force renders the surface layers more stable than the central core regions. Comparatively, the relative magnitudes of the anisotropy show that the linear model is more stable than the quadratic model for the vanishing decoupling parameter. We now turn our attention to the second and fourth panels of Fig. 6. In these models, the anisotropy increases from the center of the configuration toward the boundary. The increase in \(\Delta\) is more profound at each interior point in the linear model. This indicates that the contribution from the repulsive nature of the anisotropic force renders the linear model more stable. A peculiar feature of the anisotropy is observed in the linear model: while the contributions from the anisotropy increase steadily from the centre, \(r=0\), to some finite radius \(r_{1}\), we observe a decrease in \(\Delta\) for \(r_{1}<r\leq b\).
This trend is not observed in the quadratic model. If we now compare panels 1 and 2, i.e., the linear models for \(\alpha=0\) and nonvanishing \(\alpha\), respectively, we observe that the decoupling parameter stabilizes the central region by enhancing the anisotropy. A comparison of panels 3 and 4 clearly shows the effect of the decoupling parameter on \(\Delta\) in the quadratic models. We note that the contributions from \(\alpha\) lead to enhanced anisotropy throughout the stellar configuration, thus leading to greater stability of concentric matter shells centered about the origin.

### Constraining upper limit of maximum mass for SS via M-R diagrams

In this section we investigate the effect of the decoupling parameter (\(\alpha\)), bag constant (\(\mathcal{B}_{g}\)) and EOS parameters on the mass-radius relations of various compact objects and compare our theoretical values to observational constraints. Based on the EOS \[p_{r}=a\,\rho^{1+1/n}+b\rho+c, \tag{95}\] we classify the stellar fluid as

1. \(a\neq 0,b\neq 0\) \(\rightarrow\) quadratic EOS
2. \(a\neq 0,b=0\) \(\rightarrow\) pure quadratic EOS
3. \(a=0,b\neq 0\) \(\rightarrow\) linear EOS (MIT bag model)

A minimal numerical sketch of these three regimes is given at the end of this subsection.

#### 5.2.1 Linear EOS with constancy in bag constant, fixed decoupling parameter and varying EOS parameter

We have up to this point tested the physical viability of our solutions in describing self-gravitating stellar objects. We have shown, through tests based on regularity, causality, and stability, that these solutions describe physically realizable stellar configurations. In this section, we constrain the free parameters in our solutions by using observational data of the compact stars LMC X-4, Cen X-3, PSR J1614-2230, PSR J0740+6620, and the putative secondary component of GW190814. The robustness of our solutions allows us to predict the observed mass-radius relations of these stars. Starting off with solution A emanating from the \(\rho=\theta_{0}^{0}\) sector, we refer to Table 1 and Fig. 7 (left panel). Here we focus on the MIT bag models, where we have fixed the bag constant and decoupling parameter while varying the linear EOS parameter, \(b\). At the outset we must point out that a fixed linear EOS parameter \(b\) leads to the prediction of a category of stars with similar trends in mass and radii, and not individual stars. From Table 1, we observe that for small values of \(b\), the linear contributions from the energy density do account for low-mass stars such as LMC X-4 and Cen X-3. Our model shows that by fixing the parameter \(b\), we can obtain masses of stars beyond 2 \(M_{\odot}\). For example, for \(b=0.38\), our model predicts the existence of a class of stars befitting the secondary component of the GW190814 event, with a mass range of 2.5-2.67 \(M_{\odot}\) and a radius of \(15.74^{+0.40}_{-0.21}\) km. On the right panel, we see the trend in the M-R curves for the \(p_{r}=\theta_{1}^{1}\) sector in the linear regime. A similar trend of the effect of increasing the linear EOS parameter on the radii of compact stars is observed in this case. However, there is one notable difference: while the predicted radii increase as \(b\) increases, the \(p_{r}=\theta_{1}^{1}\) sector predicts smaller radii, i.e., more compact stellar models. For example, in the case of the GW190814 event, the secondary component has a radius of \(14.02^{+0.09}_{-0.08}\) km for the upper value of \(b\), which is 10% less than its \(\rho=\theta_{0}^{0}\) counterpart.
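As referenced after Eq. (95), the following minimal Python sketch evaluates the hybrid EOS in its three limiting regimes. The parameter values are illustrative placeholders (not the fitted values of this work), the units are geometrized, and for the MIT bag limit we use the standard relation \(p_{r}=(\rho-4\mathcal{B}_{g})/3\), i.e. \(b=1/3\) and \(c=-4\mathcal{B}_{g}/3\):

```python
import numpy as np

# Sketch of the hybrid EOS family of Eq. (95), p_r = a rho^(1+1/n) + b rho + c,
# with n = 1 so the leading term is quadratic in the density. All parameter
# values are illustrative placeholders, not the fitted values of the paper.

def p_r(rho, a, b, c, n=1.0):
    """Radial pressure from the EOS of Eq. (95)."""
    return a * rho ** (1.0 + 1.0 / n) + b * rho + c

B_g = 1e-4                                # illustrative bag-type constant (km^-2)
rho = np.linspace(5e-4, 1.5e-3, 4)        # sample densities (km^-2)

cases = {
    "quadratic (a!=0, b!=0)":     p_r(rho, a=2.0, b=1.0 / 3.0, c=-4.0 * B_g / 3.0),
    "pure quadratic (a!=0, b=0)": p_r(rho, a=2.0, b=0.0,       c=-4.0 * B_g / 3.0),
    "MIT bag (a=0, b=1/3)":       p_r(rho, a=0.0, b=1.0 / 3.0, c=-4.0 * B_g / 3.0),
}
for name, p in cases.items():
    print(f"{name}: p_r = {np.array2string(p, precision=6)}")
```

Switching the coefficients \(a\) and \(b\) on or off in this way is exactly the classification used in the M-R analysis that follows.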
#### 5.2.2 Linear EOS with constancy in bag constant, fixed EOS parameter and varying decoupling constant

Figure 8: The above \(M-R\) curves are plotted for describing the upper limit of the mass-radius relationship in the MIT bag model for different \(\alpha\) with fixed \(a=0,~{}b=1/3\) and \(\mathcal{B}_{g}=60\,MeV/fm^{3}\), for solution 3.1 (\(\theta_{0}^{0}=\rho\)) (left panel) and solution 3.2 (\(\theta_{1}^{1}=p_{r}\)) (right panel), respectively.

We now look at Table 2 in conjunction with Figure 8, in which we have fixed the bag constant and the EOS parameter \(b\) while varying \(\alpha\). From Table 2 and the left panel of Fig. 8, we observe that an increase in the decoupling parameter results in a decrease in radii, leading to more compact configurations. A similar trend is observed in the \(p_{r}=\theta_{1}^{1}\) sector. We also observe that for the chosen values of the bag constant and EOS parameter, our solutions are able to predict a wider range of radii for all compact objects under investigation. A comparison of the left and right panels reveals that the allowable radii for the compact objects under investigation are more tightly bounded for the \(\rho=\theta_{0}^{0}\) sector. The left panel shows that low-mass stars such as LMC X-4 (\(1.29\pm 0.05\)\(M_{\odot}\)) and Cen X-3 (\(1.49\pm 0.08\)\(M_{\odot}\)) exist for radii greater than 10 km. For the \(p_{r}=\theta_{1}^{1}\) sector, there is a wider range of radii for low-mass stars, closer to 9 km. More noticeable is the prediction of the existence of higher-mass compact objects beyond 2 \(M_{\odot}\). For small \(\alpha\), there exist stars with masses closer to 3 \(M_{\odot}\). For a vanishing decoupling parameter (\(\alpha=0\)), stellar masses exceed 3 \(M_{\odot}\). For the right panel, we note that the maximum mass is of the order of 3.5 \(M_{\odot}\). In their study, Burgio et al. (2018) questioned the existence of compact stars with small radii giving rise to the \(GW170817/AT2017gfo\) signals. In one of their proposals, the so-called 'twin-star' scenario, they concluded that the merger involved the coalescence of a hadronic star and a quark matter star with radii in the range \(10.7\)km \(<R_{1.5}<12\)km, with \(R_{1.5}\) (\(14.28-15.41\) km for \(\theta_{0}^{0}=\rho\) and \(9.70-14.18\) km for \(\theta_{1}^{1}=p_{r}\) in our models) being the radius of a 1.5 \(M_{\odot}\) compact star. Figure 8 accounts for the existence of a myriad of observed compact objects, including those falling into the 'twin star' category, with the latter being more suitable for the \(p_{r}=\theta_{1}^{1}\) sector.

Figure 10: The above \(M-R\) curve is plotted for describing the upper limit of the mass-radius relationship in the quadratic+MIT bag model for different \(b\) with fixed \(\alpha=0.2\,\mathrm{km}^{2}\), \(a=2\,\mathrm{km}^{2}\) and \(\mathcal{B}_{g}=60\,MeV/fm^{3}\) (left panel), while the right panel shows the quadratic+MIT bag model for different \(\alpha\) with fixed \(a=2\,\mathrm{km}^{2}\), \(b=1/3\) and \(\mathcal{B}_{g}=60\,MeV/fm^{3}\), for solution 3.2 (\(\theta_{1}^{1}=p_{r}\)).

#### 5.2.3 Quadratic EOS with fixed bag constant, fixed decoupling constant with varying quadratic EOS parameter

Here we focus on Table 3 and Figure 9. It is clear that in this scenario both of our solutions fail to predict the existence of compact objects beyond 2 \(M_{\odot}\). Even PSR J1614-2230, with a mass of \(1.97\pm 0.04\)\(M_{\odot}\) (below 2.0 \(M_{\odot}\)), fails to be reproduced.
The lower-mass stars have larger radii, thus indicating that these candidates have low densities compared to their linear EOS counterparts displayed in Figure 8. In a study by Astashenok et al. (2020), it was shown that the observations of GW190814 can be accounted for within the framework of \(f(R)\) gravity and the inclusion of rotation. They utilized three frameworks, i.e., classical GR, \(f(R)\) gravity with \(f(R)=R+\alpha R^{2}\), where \(\alpha\) is the quadratic curvature correction, and finally \(f(R)=R^{1+\epsilon}\), where \(\epsilon\) is a measure of a small deviation from GR. Utilizing different EOSs, they showed that in the absence of rotation, the models of compact objects for both GR and \(f(R)\) gravity with varying \(\alpha\) lead to masses less than 2.5 \(M_{\odot}\) and radii between 10 and 14 km. With the inclusion of rotation, the \(f(R)\) models can easily account for masses greater than 2.5 \(M_{\odot}\), thus accounting for the observed mass of the secondary component of GW190814. For observed rotational frequencies of neutron stars, \(f(R)\) gravity predicts the existence of a 2.0 \(M_{\odot}\) compact object. In the case of the \(f(R)=R^{1+\epsilon}\) framework, they obtained masses in the range of 2.5-2.67 \(M_{\odot}\) for \(\epsilon\) in the range 0.005-0.008. Interestingly, the radii of these compact objects in \(f(R)\) gravity with rotation can vary between 12 and 18 km. We have also obtained models of compact objects with large radii in this category of stars.

#### 5.2.4 Quadratic EOS with fixed bag constant, fixed quadratic EOS parameter with constant versus varying decoupling constant for mimicking of radial pressure, \(p_{r}=\theta_{1}^{1}\)

In this subsection, we pay particular attention to the M-R curves arising in the composite linear-quadratic EOS and the effect of the decoupling constant. Table 4 shows that these models predict the existence of low-mass stars, with an increase in the decoupling constant leading to higher-mass stars with larger radii, very similar to the trends obtained by Astashenok et al. (2021) in describing rotating neutron stars in \(f(R)=R+\alpha R^{2}\) gravity. It is clear from Table 4 and Figure 10 that the mixed EOS model fails to predict masses which can account for the LIGO observation of approximately 2.6 \(M_{\odot}\) for the secondary component of the binary coalescence GW190814. It appears that when the linear and quadratic EOS parameters are switched on simultaneously, the decoupling constant quenches any increase in the mass of the compact object. While these models fail to predict masses above 2.6 \(M_{\odot}\), the quadratic EOS predicts the existence of well-known pulsars and neutron stars to a very good approximation. In comparing the models derived from the \(\rho=\theta_{0}^{0}\) and \(p_{r}=\theta_{1}^{1}\) sectors, we contrasted their predictive power for low-mass stars as well as for extreme masses bordering on the mass of the lightest black hole. We have shown that the predicted stellar masses are sensitive to the decoupling constant \(\alpha\), the linear EOS parameter \(b\), the quadratic EOS parameter \(a\), and the bag constant. By varying these constants in particular sets, our models describe a family of compact objects with varying EOSs, ranging from the simplest linear EOS (MIT bag model), through the pure quadratic EOS, to the more complex quadratic EOS.
The tabulated data and plots reveal that the quadratic EOS accounts for a wide spectrum of observed compact objects in the low-mass limit, as well as stars which qualify as the secondary component of the GW190814 event.

## 6 Energy Exchange Between the Fluid Distributions for \(\hat{T}_{ij}\) and \(\theta_{ij}\)

Let us now discuss the energy exchange requirement in connection with the extended gravitational decoupling. Ovalle (2019) showed that the two sources \(\hat{T}_{ij}\) and \(\theta_{ij}\) can be successfully decoupled as long as there is an exchange of energy between them. Denoting by \(\mathcal{G}_{ij}\) the left-hand side of the equations of motion in \(f(\mathcal{Q})\) gravity, i.e. Eq. (9), we can write \[\mathcal{G}_{ij}=\frac{2}{\sqrt{-g}}\nabla_{k}\left(\sqrt{-g}\,f_{\mathcal{Q}}\,P^{k}_{\ ij}\right)+\frac{1}{2}g_{ij}f+f_{\mathcal{Q}}\big{(}P_{ikl}\,Q_{j}^{\ kl}-2\,Q_{kli}\,P^{kl}_{\ \ j}\big{)}=T_{ij}=\hat{T}_{ij}+\alpha\,\theta_{ij}. \tag{96}\] The conservation equation follows from the Bianchi identity \(\bigtriangledown^{i}\mathcal{G}_{ij}=0\) and is given by \[\frac{H^{\prime}}{2}\,(T_{1}^{1}-T_{0}^{0})+(T_{1}^{1})^{\prime}-\frac{2}{r}\,(T_{2}^{2}-T_{1}^{1})=\frac{\alpha\eta^{\prime}}{2}(T_{0}^{0}-T_{1}^{1}). \tag{97}\] Eq. (97) can be recast in a more suitable form, namely \[-\frac{H^{\prime}}{2}\,(\hat{T}_{0}^{0}-\hat{T}_{1}^{1})+(\hat{T}_{1}^{1})^{\prime}-\frac{2}{r}\,(\hat{T}_{2}^{2}-\hat{T}_{1}^{1})-\frac{\alpha\eta^{\prime}}{2}(\hat{T}_{0}^{0}-\hat{T}_{1}^{1})+\frac{\alpha\Phi^{\prime}}{2}\Big{(}[T^{\theta}]^{1}_{1}-[T^{\theta}]^{0}_{0}\Big{)}+\alpha\Big{(}[T^{\theta}]^{1}_{1}\Big{)}^{\prime}-\frac{2\alpha}{r}\left([T^{\theta}]^{2}_{2}-[T^{\theta}]^{1}_{1}\right)=0. \tag{98}\] There is a notable point in the context of the TOV equation for \(f(\mathcal{Q})\): under the linear functional form it corresponds to that of the static and spherically symmetric line element of general relativity. This ensures that \(\mathcal{G}^{\{H,W\}}_{ij}\) for the metric (43) should satisfy its corresponding Bianchi identity. Again, this also suggests that the energy-momentum tensor \(\hat{T}_{ij}\) should be conserved with respect to the spacetime geometry \(\{H,W\}\) of Eq. (42). Hence, one can write \[\bigtriangledown_{i}^{\{H,W\}}\hat{T}^{i}_{j}=0. \tag{99}\] It is also instructive, in connection with Eq. (13), that \[\bigtriangledown_{i}\hat{T}^{i}_{j}=\bigtriangledown_{i}^{\{H,W\}}\hat{T}^{i}_{j}-\frac{\alpha\,\eta^{\prime}}{2}(\hat{T}^{0}_{0}-\hat{T}^{1}_{1})\delta^{1}_{j}. \tag{100}\] As a linear combination of the Einstein field equations (40)-(42), we obtain from Eq. (99) the following explicit form: \[-\frac{H^{\prime}}{2}\,(\hat{T}^{0}_{0}-\hat{T}^{1}_{1})+(\hat{T}^{1}_{1})^{\prime}-\frac{2}{r}\,(\hat{T}^{2}_{2}-\hat{T}^{1}_{1})=0. \tag{101}\] This at once indicates that the source \(\hat{T}_{ij}\) can be decoupled from the system of equations (39)-(41) in a well-defined manner and eventually, based on the condition (98), one may get from Eq. (99) the following forms: \[\bigtriangledown_{i}\hat{T}^{i}_{j}=-\frac{\alpha\,\eta^{\prime}}{2}(\hat{T}^{0}_{0}-\hat{T}^{1}_{1})\delta^{1}_{j}, \tag{102}\] and \[\bigtriangledown_{i}\theta^{i}_{j}=\frac{\alpha\,\eta^{\prime}}{2}(\hat{T}^{0}_{0}-\hat{T}^{1}_{1})\delta^{1}_{j}. \tag{103}\] At this juncture, it is to be noted that (i) in Eqs. (39)-(41), the divergence has been calculated in connection with the deformed spacetime (16), (ii) Eq.
(103) is nothing but a linear combination of the "quasi-Einstein" field equations (45)-(47) within the framework of \(f(\mathcal{Q})\) gravity, and (iii) as long as there is an exchange of energy between the sources \(\hat{T}_{ij}\) and \(\theta_{ij}\), the decoupling can be successfully performed. Following the works (Ovalle et al., 2022; Contreras and Stuchlik, 2022), the energy exchange between the sources can be expressed as follows: \[\Delta E=\frac{\eta^{\prime}}{2}\big{(}p_{r}+\rho\big{)}. \tag{104}\] Therefore, \(p_{r}\) and \(\rho\) being two positive physical quantities, the above Eq. (104) helps explore the following situations: (i) if \(\eta^{\prime}>0\) then \(\Delta E>0\), which implies \(\bigtriangledown_{i}\theta^{i}_{j}>0\), i.e. the new source \(\theta_{ij}\) supplies energy to the environment, and (ii) if \(\eta^{\prime}<0\) then \(\Delta E<0\), which implies \(\bigtriangledown_{i}\theta^{i}_{j}<0\), i.e. the new source \(\theta_{ij}\) extracts energy from the environment. It is noted that the temporal deformation is the same for both solutions; therefore the expressions for the energy exchange are the same in both cases, although the amount of energy exchanged differs. Now, inserting the expressions for the seed pressure and density along with the temporal deformation function \(\eta\), we find \[\Delta E=\frac{r}{8}\Bigg{[}\frac{1}{4\left(Nr^{2}+1\right)^{4}}\bigg{\{}\bigg{(}2\beta_{1}(L-N)\left(Nr^{2}+3\right)+\beta_{2}\left(Nr^{2}+1\right)^{2}\bigg{)}\bigg{(}2a\beta_{1}(L-N)\left(Nr^{2}+3\right)+(a\beta_{2}-2b)\left(Nr^{2}+1\right)^{2}\bigg{)}\bigg{\}}+c-\frac{\beta_{2}}{2}-\frac{\beta_{1}(L-N)\left(Nr^{2}+3\right)}{\left(Nr^{2}+1\right)^{2}}\Bigg{]}\times\Bigg{[}\bigg{\{}4\beta_{1}L^{3}(3a\beta_{2}-6a\beta_{1}N-3b-1)+L^{2}\Big{(}\beta_{2}(a\beta_{2}-2b-2)+8\beta_{1}N(1-2a\beta_{2}+2b)+4a\beta_{1}^{2}N^{2}\Big{)}-2LN\big{[}\beta_{2}(a\beta_{2}-2b-2)+2\beta_{1}N(1-a\beta_{2}+b)\big{]}+\beta_{2}N^{2}(a\beta_{2}-2b-2)+36a\beta_{1}^{2}L^{4}+4c(L-N)^{2}\bigg{\}}\frac{1}{\beta_{1}L(L-N)\left(Lr^{2}+1\right)}+\frac{N(\beta_{2}(a\beta_{2}-2b-2)+4c)}{\beta_{1}L}+\frac{1}{\left(N-L\right)\left(Nr^{2}+1\right)}\left[4N\bigg{(}a\Big{(}9\beta_{1}L^{2}+2\beta_{2}L-6\beta_{1}LN+\beta_{1}N^{2}-2\beta_{2}N\Big{)}-2b(L-N)\bigg{)}\right]-\frac{16a\beta_{1}N(2L-N)}{\left(Nr^{2}+1\right)^{2}}-\frac{16a\beta_{1}N(L-N)}{\left(Nr^{2}+1\right)^{3}}\Bigg{]}. \tag{105}\]

#### 6.0.1 For solution 3.1 (\(\rho=\theta_{0}^{0}\))

In this section, we discuss the amount of energy exchange (\(\Delta E\)) between the generic fluid \(\theta_{ij}\) and the anisotropic fluid \(\hat{T}_{ij}\) for solution 3.1. To visualize this distribution, we plot Fig. 11, which shows the energy exchange between the relativistic fluids via density plots. The first left figure is plotted in the context of the pure MIT bag model, taking the bag constant value \(\mathcal{B}_{g}=60\ MeV/fm^{3}\) and decoupling constant \(\alpha=0.1\). We observe that \(\Delta E\) is positive and minimum at the center, starts increasing towards the boundary, and attains its maximum value within the star rather than at the boundary. The maximum value of \(\Delta E\) is 0.00054 km\({}^{-3}\). Now we move to the second figure from the left, which is plotted for the pure MIT bag model by taking \(\mathcal{B}_{g}=60\ MeV/fm^{3}\) with decoupling constant \(\alpha=0.2\).
We observe that the same situation occurs as before, but the amount of energy exchange increases, reaching \(\Delta E_{max}\approx 0.00058\) km\({}^{-3}\). The third and fourth panels show the distribution of energy for the quadratic model with decoupling constant values \(\alpha=0.1\) and \(\alpha=0.2\), respectively, for the same value of the bag constant \(\mathcal{B}_{g}=60\ MeV/fm^{3}\). The pattern of energy change within the star for the quadratic model is similar to that of the MIT bag model, but \(|\Delta E|_{qua}>|\Delta E|_{MIT}\). The maximum value of \(\Delta E\) is 0.00063 km\({}^{-3}\) and 0.00067 km\({}^{-3}\) at \(\alpha=0.1\) and \(\alpha=0.2\), respectively. Finally, we conclude that the interaction between both fluids increases significantly when moving towards the boundary and reaches its maximum value within the star, not at the boundary, in all cases for solution 3.1 (\(\rho=\theta_{0}^{0}\)). Also, \(\Delta E\) is positive throughout the radial direction, and the magnitude of its maximum value increases when the decoupling constant \(\alpha\) increases. Furthermore, the generic source \(\theta_{ij}\) gives more energy to the environment in the presence of the quadratic EOS as compared to the MIT bag model EOS.

#### 6.0.2 For solution 3.2 (\(p_{r}=\theta_{1}^{1}\))

This section contains the analysis of the energy exchange (\(\Delta E\)) distributions between the generic fluid \(\theta_{ij}\) and the anisotropic fluid \(\hat{T}_{ij}\) for solution 3.2. For this purpose, the density plots of Fig. 12 reveal the flow of energy between the relativistic fluids. The first two left figures are plotted in the context of the pure MIT bag model for decoupling constants \(\alpha=0.1\) and \(\alpha=0.2\), respectively, using the bag constant value \(\mathcal{B}_{g}=60\ MeV/fm^{3}\). We observe that \(\Delta E\) is positive and minimum at the center, while it starts increasing when moving towards the boundary, but the maximum value of \(\Delta E\) is achieved within the star, not at the boundary. On the other hand, we observe an interesting point here: no impact of the gravitational decoupling on the energy-exchange distributions is noticed under the mimic-to-pressure constraint approach. Therefore, the same maximum value of \(\Delta E\approx 0.0005\) km\({}^{-3}\) is observed for both values \(\alpha=0.1\) and \(0.2\). In the third and fourth panels, we show the distribution of energy for the quadratic model with decoupling constant values \(\alpha=0.1\) and \(\alpha=0.2\), respectively, for the same value of the bag constant \(\mathcal{B}_{g}=60\ MeV/fm^{3}\). We find that the behavior of the energy exchange within the star for the quadratic model is similar to that of the MIT bag model, but \(|\Delta E|_{qua}>|\Delta E|_{MIT}\), as happened for solution 3.1. The maximum value of \(\Delta E\) is 0.00054 km\({}^{-3}\) for both \(\alpha=0.1\) and \(\alpha=0.2\). For the above solution, we can finally conclude that the interaction between both fluids also increases significantly when moving towards the boundary and reaches its maximum value within the star, not at the boundary, in all scenarios for solution 3.2 (\(p_{r}=\theta_{1}^{1}\)); moreover, \(\Delta E\) is positive throughout the radial direction. Furthermore, the magnitude of the maximum value of \(\Delta E\) is independent of the decoupling constant \(\alpha\), i.e. \(\Delta E\) remains the same for each \(\alpha\).
In fact, here also the generic source \(\theta_{ij}\) gives more energy to the environment in the presence of the quadratic EOS as compared to the MIT bag model EOS.

## 7 Comparative study of models arising in GR, GR+CGD, \(f(\mathcal{Q})\) and \(f(\mathcal{Q})\)+CGD gravity

In this section we highlight the key differences arising in general relativity (GR) and \(f(\mathcal{Q})\) gravity with and without CGD via our modeling framework. To this end, we draw the reader's attention to Fig. 13. For the first case (\(\rho=\theta_{0}^{0}\)), we have plotted the M-R curves for pure GR \([\beta_{1}=1,\ \alpha=0]\) and pure \(f(\mathcal{Q})\) \([\beta_{1}=1.1,\ \alpha=0]\) in Fig. 13 (top). We note that for the same EOS, \(f(\mathcal{Q})\) gravity can generate a higher \(M_{max}\) than GR iff \(\beta_{1}>1\); otherwise the reverse will happen. However, when CGD turns on, i.e. \(\alpha\neq 0\), the corresponding EOS gets softer, resulting in a lower \(M_{max}\) for both GR and \(f(\mathcal{Q})\) gravity. Therefore, when the \(tt\)-component of the CGD-induced stress tensor \(\theta_{i}^{j}\) mimics the density, \(M_{max}\) is lowered due to the softening EOS. This can be ascribed to the effective density, i.e. \(\epsilon=(1+\alpha)\rho\), increasing with the CGD strength, leading to a denser interior, which will eventually trigger many exotic processes such as hyperon and kaon production. On the other hand, the second solution, where \(\theta_{1}^{1}\) mimics the pressure, leads to a lowering of the effective pressure \(P_{r}=(1-\alpha)p_{r}\). In view of the EOS within the core, a lower radial pressure supports a lower interior density, which may suppress any exotic phase transitions. This makes the EOS stiffer when the CGD strength increases, leading to a higher \(M_{max}\) (see Fig. 13, bottom) and compactness parameter. Furthermore, it can also be observed that both \(f(\mathcal{Q})\) models yield roughly the same masses and radii when CGD is turned off. However, when CGD turns on, the first case immediately decreases \(M_{max}\) and the corresponding radius, while the second solution yields a slightly higher maximum mass with a smaller radius. In the GR limit, both solutions yield the same \(M-R\) curves, as the seed solution is unaffected by \(\beta\).

## 8 Concluding remarks and astrophysical implications

To make some concluding remarks on the present work and its outcomes, let us arrange point-wise what our motivation was and how we have carried it out: (i) Inspired by the recent gravitational event GW190814, which brings to light the coalescence of a 23 \(M_{\odot}\) black hole with a yet-to-be-determined secondary component, we try to model compact objects within the framework of \(f(\mathcal{Q})\) gravity theory along with the method of gravitational decoupling.

Figure 11: The flow of energy exchange between the fluid distributions for the MIT bag model, taking \(a=0\) and \(\mathcal{B}_{g}=60\,MeV/fm^{3}\) (first two panels), and for the quadratic model, taking \(a=5\,\mathrm{km}^{2},\ b=1/3\) and \(\mathcal{B}_{g}=60\,MeV/fm^{3}\) (right two panels), with two different values of \(\alpha\) for solution 3.1 (\(\rho=\theta_{0}^{0}\)). The same values of the constants are employed here as used in Fig. 1.
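As a quick numerical illustration of the energy-exchange diagnostic of Eq. (104) underlying Figs. 11 and 12, the following Python sketch evaluates \(\Delta E=(\eta^{\prime}/2)(\rho+p_{r})\) on a radial grid. The profiles \(\eta(r)\), \(\rho(r)\) and \(p_{r}(r)\) are smooth toy models of our own choosing (in geometrized units), meant only to mimic the qualitative behaviour described above; they are not the solutions derived in this work:

```python
import numpy as np

# Toy illustration of Eq. (104): Delta E = (eta'/2)(rho + p_r).
# All profiles below are illustrative assumptions, not the paper's solutions.

R = 10.0                               # stellar radius in km (illustrative)
r = np.linspace(1e-3, R, 500)

rho = 1e-3 * (1.0 - 0.5 * (r / R)**2)  # decreasing toy density profile (km^-2)
p_r = (rho - 4.0e-4) / 3.0             # MIT-bag-like toy pressure
eta = 0.1 * (r / R)**2                 # toy temporal deformation with eta' > 0

d_eta = np.gradient(eta, r)            # numerical derivative eta'(r)
delta_E = 0.5 * d_eta * (rho + p_r)    # Eq. (104)

i_max = int(np.argmax(delta_E))
print(f"Delta E >= 0 everywhere: {bool(np.all(delta_E >= 0))}")
print(f"maximum Delta E = {delta_E[i_max]:.2e} km^-3 at r = {r[i_max]:.2f} km "
      f"(an interior point, not the boundary)")
```

With these toy profiles the script reports a positive \(\Delta E\) that vanishes at the center and peaks at \(r\approx 7.7\) km, i.e. inside the star, reproducing the qualitative trend seen in the density plots.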
Figure 12: The flow of energy exchange between the fluid distributions for the MIT bag model, taking \(a=0\) and \(\mathcal{B}_{g}=60\,MeV/fm^{3}\) (first two panels), and for the quadratic model, taking \(a=2\,\mathrm{km}^{2},\ b=1/3\) and \(\mathcal{B}_{g}=60\,MeV/fm^{3}\) (right two panels), with two different values of \(\alpha\) for solution 3.2 (\(p_{r}=\theta_{1}^{1}\)). The same values of the constants are employed here as used in Fig. 4.

(ii) We assume a suitable quadratic equation of state (EOS) for the interior matter distribution of a compact star, especially a neutron/quark star, which in the appropriate limit reduces to the MIT bag model. (iii) The concerned field equations under \(f(\mathcal{Q})\) gravity are subjected to gravitational decoupling, which bifurcates them into the \(\rho=\theta_{0}^{0}\) and \(p_{r}=\theta_{1}^{1}\) sectors, leading to two distinct classes of solutions. (iv) Both families of solutions are subjected to rigorous tests, qualifying them to describe a class of compact stellar objects which include neutron stars, strange stars, and the possible progenitor of the secondary component of GW190814. (v) Using observational data of mass-radius relations for compact objects, e.g. LMC X-4, Cen X-3, PSR J1614-2230, and PSR J0740+6620, we show that it is possible to generate stellar masses and radii beyond 2.5 \(M_{\odot}\). (vi) The outcomes of the present work reveal that the quadratic EOS is versatile enough to account for a range of low-mass stars as well as typical stellar candidates describing the secondary component of GW190814. Based on the above-mentioned steps, the obtained results have already been classified with detailed discussions in several tables and figures. Let us now point out some salient features of the outcomes as follows: 1. **Graphical plots for the basic physical parameters**: Distributions of the energy density (Figs. 1 and 4), the radial pressure (Figs. 2 and 5), and the anisotropy (Figs. 3 and 6) are exhibited for different values of the model parameters. Detailed discussions have already been provided in Sect. 5.1.1 in connection with solution 3.1 (\(\rho=\theta_{0}^{0}\)) and in Sect. 5.1.2 in connection with solution 3.2 (\(p_{r}=\theta_{1}^{1}\)), respectively. We note that all the features are satisfactory as far as physical attributes are concerned. 2. **Constraining upper limit of maximum mass via the graphical plots for M-R diagrams**: For different choices of the linear EOS with constancy in bag constant, fixed decoupling parameter, and varying EOS parameter, we have shown results in Tables 1-4 and Figs. 7-10, which are as follows. In Table 1 and Fig. 7, we observe linear contributions from the energy density which do account for low-mass stars such as LMC X-4 and Cen X-3. From Table 2 and Fig. 8 (left panel), we observe that an increase in the decoupling parameter results in a decrease in radii, leading to more compact configurations. The left panel also shows that low-mass stars such as LMC X-4 (1.29 \(\pm\) 0.05 \(M_{\odot}\)) and Cen X-3 (1.49 \(\pm\) 0.08 \(M_{\odot}\)) do exist for radii greater than 10 km. Figure 8 predicts a wide range of masses and radii applicable to observed compact objects, including the 'twin star' scenario, the latter of which is favoured in the \(p_{r}=\theta_{1}^{1}\) sector. Interestingly, the radii of these compact objects in \(f(R)\) gravity with rotation can vary between 12 and 18 km. We ascertain from Table 4 and Fig.
10 that the quadratic EOS model fails to predict masses which can account for the LIGO observation of approximately 2.6 \(M_{\odot}\) for the secondary component of the binary coalescence GW190814. It appears that when the linear and quadratic EOS parameters are switched on simultaneously, the decoupling constant quenches any increase in the mass of the compact object. While these models fail to predict masses above 2.6 \(M_{\odot}\), the quadratic EOS predicts the existence of well-known pulsars and neutron stars to a very good approximation. The tabulated data and plots reveal that the quadratic EOS, which has the linear limit, successfully predicts the existence of low-mass stars as well as neutron stars with masses beyond 2 \(M_{\odot}\).

Figure 13: Comparison of GR, GR+CGD, \(f(Q)\) and \(f(Q)\)+CGD for both solutions, with the same values of the constants employed as in Fig. 4.

3. **Graphical plots for the energy exchange between the fluid distributions**: We now discuss the most important aspect of our findings, i.e., the energy exchange arising from the extended gravitational decoupling. It is noted that the temporal deformation is the same for both solutions; hence the expressions for the energy exchange are the same in both cases, although the amount of energy exchanged differs. (i) **For solution 3.1 (\(\rho=\theta_{0}^{0}\)):** In Fig. 11 we have exhibited the energy exchange between the relativistic fluids via density plots. The first left figure is plotted in the context of the pure MIT bag model, where we observe that \(\Delta E\) is positive and minimum at the center, starts increasing towards the boundary, and attains its maximum value within the star rather than at the boundary. Moving to the second figure from the left, which is plotted for the pure MIT bag model, one can note that the same situation occurs as before, but the amount of energy exchange increases to \(\Delta E_{max}\approx 0.00058\) km\({}^{-3}\). The third and fourth panels show the distribution of energy for the quadratic model, where the pattern of energy change within the star is similar to that of the MIT bag model but \(|\Delta E|_{qua}>|\Delta E|_{MIT}\). The maximum value of \(\Delta E\) is \(0.00063\) km\({}^{-3}\) and \(0.00067\) km\({}^{-3}\) at \(\alpha=0.1\) and \(\alpha=0.2\), respectively. Based on the above observations, one may conclude that the interaction between both fluids increases significantly when moving towards the boundary and reaches its maximum value within the star, not at the boundary, in all cases for solution 3.1 (\(\rho=\theta_{0}^{0}\)). Also, \(\Delta E\) is positive throughout the radial direction, and the magnitude of its maximum value increases when the decoupling constant \(\alpha\) increases. Furthermore, the generic source \(\theta_{ij}\) gives more energy to the environment in the presence of the quadratic EOS as compared to the MIT bag model EOS. (ii) **For solution 3.2 (\(p_{r}=\theta_{1}^{1}\))**: The density plots of Fig. 12 show the flow of energy exchange between the relativistic fluids. The first two left figures are plotted in the context of the pure MIT bag model, where we observe that \(\Delta E\) is positive and minimum at the center, while it starts increasing when moving towards the boundary, but the maximum value of \(\Delta E\) is achieved within the star, not at the boundary.
On the other hand, we observe an interesting point here: no impact of the gravitational decoupling on the energy-exchange distributions is noticed under the mimic-to-pressure constraint approach. In the third and fourth panels, we show the distribution of energy for the quadratic model and find that the behavior of the energy exchange within the star is similar to that of the MIT bag model, but \(|\Delta E|_{qua}>|\Delta E|_{MIT}\), as happened for solution 3.1. Therefore, we can conclude that the interaction between both fluids also increases significantly when moving towards the boundary and reaches its maximum value within the star, not at the boundary, in all situations for solution 3.2 (\(p_{r}=\theta_{1}^{1}\)); moreover, \(\Delta E\) is positive throughout the radial direction. Furthermore, the magnitude of the maximum value of \(\Delta E\) is independent of the decoupling constant \(\alpha\), i.e. \(\Delta E\) remains the same for each \(\alpha\). In fact, here also the generic source \(\theta_{ij}\) gives more energy to the environment in the presence of the quadratic EOS as compared to the MIT bag model EOS. The overall findings of the models presented reveal that it is possible to put suitable constraints on the upper limit of the mass-radius relation of the secondary component of GW190814 and other self-bound strange star configurations under gravitational decoupling in \(f(\mathcal{Q})\) gravity theory, which may provide the observational signature of these objects in a significant way.

## Acknowledgement

The author SKM is thankful for continuous support and encouragement from the administration of the University of Nizwa to carry out the research work. G. Mustafa is very thankful to Prof. Gao Xianlong from the Department of Physics, Zhejiang Normal University, for his kind support and help during this research. Further, G. Mustafa acknowledges Grant No. ZC304022919 to support his Postdoctoral Fellowship at Zhejiang Normal University. KNS and SR are also thankful to the authorities of the Inter-University Centre for Astronomy and Astrophysics, Pune, India for providing the research facilities, where SR specifically expresses thanks to ICARD of IUCAA at GLA University. The authors are also thankful to Prof. Jorge Ovalle for his help in the coding of the contour diagrams.

## Data Availability

No new data were generated or analyzed in support of this research.
## Appendix \[\theta_{11}(r)=\frac{1}{4}\Bigg{[}\frac{1}{\beta_{1}L(L-N)\left(Lr^{2}+1\right) }\bigg{(}-4\beta_{1}L^{3}(1-3a\beta_{2}\] \[+6a\beta_{1}N+3b)+L^{2}\Big{(}\beta_{2}(a\beta_{2}-2b-2)-8\beta_{1}N(2a \beta_{2}\] \[-2b-1)+4a\beta_{1}^{2}N^{2}\Big{)}-2LN(\beta_{2}(a\beta_{2}-2b-2)+2 \beta_{1}\] \[\times N(-a\beta_{2}+b+1))+\beta_{2}N^{2}(a\beta_{2}-2b-2)+36a \beta_{1}^{2}\] \[\times L^{4}+4c(L-N)^{2}\Big{)}+\frac{N(\beta_{2}(a\beta_{2}-2b-2)+ 4c)}{\beta_{1}L}\] \[+\frac{4N\Big{(}a\Big{(}9\beta_{1}L^{2}+2\beta_{2}L-6\beta_{1}LN+ \beta_{1}N^{2}-2\beta_{2}N\Big{)}\Big{)}}{(N-L)\Big{(}Nr^{2}+1\Big{)}}\] \[-\frac{16a\beta_{1}N(2L-N)-2b(Nr^{2}+1)}{\Big{(}Nr^{2}+1\Big{)}^{ 2}}-\frac{16a\beta_{1}N(L-N)}{\Big{(}Nr^{2}+1\Big{)}^{3}}\Bigg{]},\] \[\Psi_{21}(r)=4\beta_{1}\Big{(}Nr^{2}+1\Big{)}^{2}\Big{[}Nr^{2} \Big{(}\alpha-3\alpha\beta_{2}+\alpha Lr^{2}(a\beta_{2}-1)\] \[+a\beta_{2}\Big{(}Lr^{2}-3\Big{)}+2\Big{)}+3\alpha a\beta_{2}Lr^{ 2}+3a\beta_{2}Lr^{2}-(\alpha+1)N^{2}r^{4}\] \[\times(a\beta_{2}-1)-(\alpha+1)br^{2}(L-N)\Big{(}Nr^{2}+3\Big{)}- \alpha Lr^{2}+1\Big{]}\] \[+(\alpha+1)\beta_{2}r^{2}\Big{(}Nr^{2}+1\Big{)}^{4}(a\beta_{2}-2 b-2)+4(\alpha+1)a\beta_{1}^{2}r^{2}\] \[\times(L-N)^{2}\Big{(}Nr^{2}+3\Big{)}^{2}+4(\alpha+1)cr^{2}\Big{(} Nr^{2}+1\Big{)}^{4},\] \[\times N^{2}r^{4}+28\beta_{1}bN^{2}r^{2}+12\beta_{1}bN-8b\beta_{2} Nr^{2}-2\beta_{2}+4b\] \[\times\left(Nr^{2}+1\right)^{4}-2\beta_{2}N^{4}r^{8}+4\beta_{1}N^{4 }r^{6}-8\beta_{2}N^{3}r^{6}+12\beta_{1}N^{3}r^{4}\] \[-12\beta_{2}N^{2}r^{4}+12\beta_{1}N^{2}r^{2}\] \[+4\beta_{1}N-8\beta_{2}Nr^{2}\bigg{]},\] \[f_{2}(r)=2\bigg{[}\Big{\{}(N-L)\bigg{(}aN^{4}\beta_{2}^{2}r^{8}-2 bN^{4}\beta_{2}r^{8}-2N^{4}\beta_{2}r^{8}+4aN^{3}\] \[\times\beta_{2}r^{6}-8bN^{3}\beta_{2}r^{6}-8N^{3}\beta_{2}r^{6}-4 aN^{4}\beta_{1}^{2}r^{4}+6aN^{2}\beta_{2}^{2}r^{4}\] \[-8bN\beta_{1}r^{4}-12bN^{2}\beta_{2}r^{4}-12N^{2}\beta_{2}r^{4}+8 aN^{3}\beta_{1}\beta_{2}r^{4}-40a\] \[+N^{3}\beta_{1}^{2}r^{2}+4aN\beta_{2}^{2}r^{2}-16bN^{2}\beta_{1}r ^{2}-8bN\beta_{2}r^{2}-8N\beta_{2}r^{2}\] \[+16aN^{2}\beta_{1}\beta_{2}r^{2}+4b\Big{(}Nr^{2}+1\Big{)}^{4}-84aN ^{3}\beta_{1}^{2}+4aL^{2}\Big{(}2N^{3}r^{6}\] \[+17N^{2}r^{4}+36Nr^{2}+9\Big{)}\beta_{1}^{2}+a\beta_{2}^{2}-8bN \beta_{1}-2b\beta_{2}+8aN\beta_{1}\] \[\times\beta_{2}-2\beta_{2}-4L\beta_{1}\Big{[}\Big{(}(1-a\beta_{2 })r^{8}+2a\beta_{1}r^{6}\Big{)}N^{4}+4r^{4}\] \[\times\Big{(}(1-2a\beta_{2})r^{2}+4a\beta_{1}\Big{)}N^{3}+\Big{(} (6-16a\beta_{2})r^{4}+26a\beta_{1}r^{2}\Big{)}N^{2}\] \[-4\Big{(}(3a\beta_{2}-1)r^{2}+3a\beta_{1}\Big{)}N+b\Big{(}Nr^{2}+ 1\Big{)}^{2}\Big{(}N^{2}r^{4}+6Nr^{2}\] \[+3\Big{)}-3a\beta_{2}+1\Big{)}\bigg{)}\bigg{]}r^{2},\] \[f_{3}(r)=aN^{4}\beta_{2}^{2}r^{8}-2bN^{4}\beta_{2}r^{8}-2N^{4} \beta_{2}r^{8}+4aN^{3}\beta_{2}^{2}r^{6}\] \[+4bN^{4}\beta_{1}r^{6}+4N^{4}\beta_{1}r^{6}-8bN^{3}\beta_{2}r^{6}-8 N^{3}\beta_{2}r^{6}-4aN^{4}\beta_{1}\] \[\times\beta_{2}r^{6}+4aN^{4}\beta_{1}^{2}r^{4}+6aN^{2}\beta_{2}^{ 2}r^{4}+20bN^{3}\beta_{1}r^{4}+12N^{3}\beta_{1}r^{4}\] \[-12bN^{2}\beta_{2}r^{4}-12N^{2}\beta_{2}r^{4}-20aN^{3}\beta_{1} \beta_{2}r^{4}+24aN^{3}\beta_{1}^{2}r^{2}\] \[+4aN\beta_{2}^{2}r^{2}+28bN^{3}\beta_{1}r^{2}+12N^{3}\beta_{1}r^{ 2}-8bN\beta_{2}r^{2}-8N\beta_{2}r^{2}\] \[-28aN^{2}\beta_{1}\beta_{2}r^{2}+4b\Big{(}Nr^{2}+1\Big{)}^{4}+36aN ^{2}\beta_{1}^{2}+4aL^{2}\] \[\times\Big{(}Nr^{2}+3\Big{)}^{2}\beta_{1}^{2}+a\beta_{2}^{2}+12bN \beta_{1}+4N\beta_{1}-2b\beta_{2}-12aN\beta_{1}\beta_{2}\] \[-2\beta_{2}-4L\beta_{1}\Big{[}\Big{(}1-a\beta_{2})r^{6}+2a\beta_ 
{1}r^{4}\Big{\}}N^{3}+\Big{[}(3-5a\beta_{2})r^{4}\] \[+12a\beta_{1}r^{2}\Big{]}N^{2}+\Big{(}(3-7a\beta_{2})r^{2}+18a\beta _{1}\Big{)}N+b\Big{(}Nr^{2}+1\Big{)}^{2}\] \[\times\Big{(}Nr^{2}+3\Big{)}-3a\beta_{2}+1\Big{]},\] \[f_{4}(r)=-2N\Big{(}Lr^{2}+1\Big{)}\Big{[}4cr^{2}(\alpha+1)(Nr^{2}+ 1)^{4}+r^{2}(\alpha+1)\beta_{2}\] \[\times(a\beta_{2}-2b-2)(Nr^{2}+1)^{4}+4\beta_{1}\Big{(}N^{2}( \alpha+1)(1-a\beta_{2}-1)r^{4}\] \[-L\alpha r^{2}-b(L-N)(Nr^{2}+3)(\alpha+1)r^{2}+3aL\beta_{2}r^{2}+3 aL\alpha\beta_{2}r^{2}\] \[+N[L\alpha(a\beta_{2}-1)r^{2}+\alpha+a(Lr^{2}-3)\beta_{2}-3a \alpha\beta_{2}+2]r^{2}+1\Big{)}\] \[\times(Nr^{2}+1)^{2}+4a(L-N)^{2}r^{2}(Nr^{2}+3)^{2}(\alpha+1)\beta _{1}^{2}\Big{]}f_{5}(r)\] \[f_{6}(r)=-2\Big{(}Lr^{2}+1\Big{)}\Big{(}Nr^{2}+1\Big{)}\Big{[} aN^{4}\beta_{2}^{2}r^{8}-2bN^{4}\beta_{2}r^{8}-N^{4}\] \[\times\beta_{2}r^{8}+4aN^{3}\beta_{2}^{2}r^{6}+4bN^{4}\beta_{1}r ^{6}+2N^{4}\beta_{1}r^{6}-8bN^{3}\beta_{2}r^{6}\] \[-4N^{3}\beta_{2}r^{6}-4aN^{4}\beta_{1}\beta_{2}r^{6}+4aN^{4}\beta_ {1}^{2}r^{4}+6aN^{2}\beta_{2}^{2}r^{4}+20b\] \[\times N^{3}\beta_{1}r^{4}+6N^{3}\beta_{1}r^{4}-12bN^{2}\beta_{2}r ^{4}-6N^{2}\beta_{2}r^{4}-20aN^{3}\] \[\times\beta_{1}\beta_{2}r^{4}+24aN^{3}\beta_{1}^{2}r^{2}+4aN\beta _{2}^{2}r^{2}+28bN^{3}\beta_{1}r^{2}+6N^{2}\beta_{1}r^{2}\] \[-8bN\beta_{2}r^{2}-4N\beta_{2}r^{2}-28aN^{2}\beta_{1}\beta_{2}r^{2}+ 4c(Nr^{2}+1)^{4}+36a\] \[+4aN^{4}\beta_{1}^{2}r^{4}+6aN^{2}\beta_{2}^{2}r^{4}+20bN^{3}\beta_ {1}r^{4}+6N^{3}\beta_{1}r^{4}-12bN^{2}\] \[-12aN\beta_{1}\beta_{2}-\beta_{2}-2L\beta_{1}\Big{(}(1-2a\beta_{2}) r^{6}+4a\beta_{1}r^{4}\Big{)}N^{3}\] \[+[(3-10a\beta_{2})r^{4}+24a\beta_{1}r^{2}]N^{2}+\Big{(}(3-14a\beta _{2})r^{2}+36a\beta_{1}\Big{)}N\] \[+2b(Nr^{2}+1)^{2}(Nr^{2}+3)-6a\beta_{2}+1\Big{)}\Big{]}f_{7}(r)+3 \Big{(}Lr^{2}+1\Big{)}\] \[\times\Big{(}Nr^{2}+1\Big{)}\bigg{[}4cr^{2}(\alpha+1)\Big{(}Nr^{2}+ 1\Big{)}^{4}+r^{2}(\alpha+1)\beta_{2}(-2b\] \[+a\beta_{2}-2)\Big{(}Nr^{2}+1\Big{)}^{4}+4\beta_{1}\Big{(}N^{2}( \alpha+1)(1-a\beta_{2})r^{4}-L\alpha r^{2}\] \[-b(L-N)\Big{(}Nr^{2}+3\Big{)}(\alpha+1)r^{2}+3aL\ \[+a\beta_{2}^{2}+12bN\beta_{1}+2N\beta_{1}-2b\beta_{2}-12aN\beta_{1} \beta_{2}-\beta_{2}-2L\beta_{1}\] \[\times\Big{(}\Big{(}(1-2a\beta_{2})r^{6}+4a\beta_{1}r^{4}\Big{)}N^{ 3}+\Big{(}(3-10a\beta_{2})r^{4}+24a\beta_{1}r^{2}\Big{)}\] \[\times N^{2}+\Big{(}(3-14a\beta_{2})r^{2}+36a\beta_{1}\Big{)}N+2b \Big{(}Nr^{2}+1\Big{)}^{2}\Big{(}Nr^{2}+3\Big{)}\] \[-6a\beta_{2}+1\Big{)}\Big{]}r^{2},\] \[+4a(L-N)^{2}r\Big{(}Nr^{2}+3\Big{)}^{2}(\alpha+1)\beta_{1}^{2}+8a (L-N)^{2}Nr^{3}\] \[\times\Big{(}Nr^{2}+3\Big{)}(\alpha+1)\beta_{1}^{2}+16cN\Big{(}Nr^ {3}+r\Big{)}^{3}(\alpha+1)+4N\] \[\times\Big{(}Nr^{3}+r\Big{)}^{3}(\alpha+1)\beta_{2}(-2b+a\beta_{ 2}-2)\Big{]}r,\] \[f_{8}(r)=\Big{[}aN^{4}\beta_{2}^{2}r^{8}-2bN^{4}\beta_{2}r^{8}-N ^{4}\beta_{2}r^{8}+4aN^{3}\beta_{2}^{2}r^{6}+4bN^{4}\] \[\times\beta_{1}r^{6}+2N^{4}\beta_{1}r^{6}-8bN^{3}\beta_{2}r^{6}-4 N^{3}\beta_{2}r^{6}-4aN^{4}\beta_{1}\beta_{2}r^{6}\] \[+4aN^{4}\beta_{1}^{2}r^{4}+6aN^{2}\beta_{2}^{2}r^{4}+20bN^{3} \beta_{1}r^{4}+6N^{3}\beta_{1}r^{4}-12bN^{2}\] \[\times\beta_{2}r^{4}-6N^{3}\beta_{2}r^{4}-20aN^{3}\beta_{1}\beta_ {2}r^{4}+24aN^{3}\beta_{1}^{2}r^{4}+4aN\beta_{2}^{2}r^{2}\] \[+28bN^{2}\beta_{1}r^{2}+6N^{3}\beta_{1}r^{2}-8bN\beta_{2}r^{2}-4N \beta_{2}r^{2}-28aN^{2}\beta_{1}\beta_{2}r^{2}\] \[+4c\Big{(}Nr^{2}+1\Big{)}^{4}+36aN^{2}\beta_{1}^{2}+4aL^{2}\Big{(} Nr^{2}+3\Big{)}^{2}\beta_{1}^{2}+a\beta_{2}^{2}\] \[+12bN\beta_{1}+2N\beta_{1}-2b\beta_{2}-12aN\beta_{1}\beta_{2}- \beta_{2}-2L\beta_{1}\] 
\[\times\Big{\{}\Big{(}(1-2a\beta_{2})r^{6}+4a\beta_{1}r^{4}\Big{)}N^{3}+\Big{(}(3-10a\beta_{2})r^{4}+24a\beta_{1}r^{2}\Big{)}N^{2}+\Big{(}(3-14a\beta_{2})r^{2}+36a\beta_{1}\Big{)}N+2b\Big{(}Nr^{2}+1\Big{)}^{2}\Big{(}Nr^{2}+3\Big{)}-6a\beta_{2}+1\Big{\}}\Big{]}.\]
2309.16541
Cosmological constraints from density-split clustering in the BOSS CMASS galaxy sample
We present a clustering analysis of the BOSS DR12 CMASS galaxy sample, combining measurements of the galaxy two-point correlation function and density-split clustering down to a scale of $1\,h^{-1}{\rm Mpc}$. Our theoretical framework is based on emulators trained on high-fidelity mock galaxy catalogues that forward model the cosmological dependence of the clustering statistics within an extended-$\Lambda$CDM framework, including redshift-space and Alcock-Paczynski distortions. Our base-$\Lambda$CDM analysis finds $\omega_{\rm cdm} = 0.1201\pm 0.0022$, $\sigma_8 = 0.792\pm 0.034$, and $n_s = 0.970\pm 0.018$, corresponding to $f\sigma_8 = 0.462\pm 0.020$ at $z \approx 0.525$, which is in agreement with Planck 2018 predictions and various clustering studies in the literature. We test single-parameter extensions to base-$\Lambda$CDM, varying the running of the spectral index, the dark energy equation of state, and the density of massless relic neutrinos, finding no compelling evidence for deviations from the base model. We model the galaxy-halo connection using a halo occupation distribution framework, finding signatures of environment-based assembly bias in the data. We validate our pipeline against mock catalogues that match the clustering and selection properties of CMASS, showing that we can recover unbiased cosmological constraints even with a volume 84 times larger than the one used in this study.
Enrique Paillas, Carolina Cuesta-Lazaro, Will J. Percival, Seshadri Nadathur, Yan-Chuan Cai, Sihan Yuan, Florian Beutler, Arnaud de Mattia, Daniel Eisenstein, Daniel Forero-Sanchez, Nelson Padilla, Mathilde Pinon, Vanina Ruhlmann-Kleider, Ariel G. Sánchez, Georgios Valogiannis, Pauline Zarrouk
2023-09-28T15:53:45Z
http://arxiv.org/abs/2309.16541v2
# Cosmological constraints from density-split clustering in the BOSS CMASS galaxy sample ###### Abstract We present a clustering analysis of the BOSS DR12 CMASS galaxy sample, combining measurements of the galaxy two-point correlation function and density-split clustering down to a scale of 1 \(h^{-1}\)Mpc. Our theoretical framework is based on emulators trained on high-fidelity mock galaxy catalogues that forward model the cosmological dependence of the clustering statistics within an extended-\(\Lambda\)CDM framework, including redshift-space and Alcock-Paczynski distortions. Our base-\(\Lambda\)CDM analysis finds \(\omega_{\rm cdm}=0.1201\pm 0.0022\), \(\sigma_{8}=0.792\pm 0.034\), and \(n_{s}=0.970\pm 0.018\), corresponding to \(f\sigma_{8}=0.462\pm 0.020\) at \(z\approx 0.525\), which is in agreement with Planck 2018 predictions and various clustering studies in the literature. We test single-parameter extensions to base-\(\Lambda\)CDM, varying the running of the spectral index, the dark energy equation of state, and the density of massless relic neutrinos, finding no compelling evidence for deviations from the base model. We model the galaxy-halo connection using a halo occupation distribution framework, finding signatures of environment-based assembly bias in the data. We validate our pipeline against mock catalogues that match the clustering and selection properties of CMASS, showing that we can recover unbiased cosmological constraints even with a volume 84 times larger than the one used in this study. keywords: cosmological parameters, large-scale structure of Universe ## 1 Introduction Within the vast cosmic structures we observe today, signatures of primordial features are intermixed with non-linear processes that shape the evolution of galaxies, constituting a ground that is challenging to model but rich in information. In our standard cosmological model, the \(\Lambda\)CDM paradigm, the present-day Universe is the result of a hierarchical structure formation scenario that started from primordial density perturbations, which were amplified during inflation and continued to grow through gravitational collapse until today (Guth and Pi, 1982; Hawking, 1982). Galaxy clustering has been pivotal in testing this hypothesis from late-time Universe data by characterizing the way in which matter is distributed in space, using galaxies as biased tracers of the underlying dark matter distribution. The most common way to approach galaxy clustering is through the two-point correlation function, or its Fourier pair, the power spectrum. These statistics provide a nearly-complete description of the galaxy density field on large scales and encode information about physics of the early Universe (Peebles, 1980). Baryon acoustic oscillations (BAO), which originate from sound waves in the photon-baryon plasma before recombination (Peebles and Yu, 1970; Sunyaev and Zeldovich, 1970), leave an imprint on the matter distribution, which is detected as a bump in the correlation function or as wiggles in the power spectrum (Eisenstein and Hu, 1998). This acoustic feature works as a standard ruler, allowing us to measure the expansion rate of the Universe at different epochs (Percival et al., 2001; Eisenstein et al., 2005; Cole et al., 2005; eBOSS Collaboration et al., 2020; Moon et al., 2023). 
In addition to BAO, the full shape of the galaxy correlation function and power spectrum contains an enormous wealth of information due to its sensitivity to the growth rate of cosmic structure (Blake et al., 2011; Reid et al., 2012; Alam et al., 2017; Brieden et al., 2021), dark energy, the physics of neutrinos (Zhang and Cai, 2022), primordial non-Gaussianity (Moradinezhad Dizgah et al., 2021), and the galaxy-halo connection (Yuan et al., 2022). Models based on perturbation theory provide an accurate description of galaxy clustering data on linear and mildly non-linear scales, allowing the extraction of information from the full shape of the power spectrum (e.g., Sanchez et al., 2017; Grieb et al., 2017; d'Amico et al., 2020; Troster et al., 2020; Bautista et al., 2021; Philcox and Ivanov, 2022; Semenaite et al., 2022, 2023). The assumptions behind these models tend to break down in the highly non-linear regime (\(\lesssim 20\,h^{-1}\)Mpc), which has motivated the development of models calibrated on N-body simulations that can accurately describe the galaxy field on scales where non-linear physical processes become relevant. This has allowed information to be recovered from a portion of the survey data that is usually discarded from standard clustering analyses, providing clues not only about cosmology but also about galaxy evolution and its connection to the dark matter halo field (Kobayashi et al., 2020; Zhai et al., 2023; Chapman et al., 2022; Lange et al., 2022, 2023; Yuan et al., 2022).

Two-point functions provide a complete description of Gaussian density fields. This is a good approximation on large scales, where non-linear evolution is mild and the Gaussianity of the primordial fluctuations is still largely preserved. The late-time Universe, however, is highly non-Gaussian at small scales due to non-linear evolution, and higher-order summary statistics are required to capture all the information. Thanks to substantial theoretical and algorithmic development over recent years, measurements of N-point correlation functions (Slepian et al., 2017; Philcox et al., 2021; Sugiyama et al., 2023) and polyspectra (Gil-Marin et al., 2017; Gualdi et al., 2021; Philcox and Ivanov, 2022) are now being performed on data, which not only tightens the constraining power on \(\Lambda\)CDM, but also opens an avenue for testing potential signatures of physics beyond our fiducial model, such as parity violation (Philcox, 2022; Hou et al., 2023). However, the measurement of these N-point statistics remains challenging due to their high computational demands and the large volumes that are needed to detect them with enough statistical significance. This has motivated the development of robust, informative, and efficient clustering methods that can access the information that leaks into higher orders, which can then be complemented and cross-validated with the standard N-point clustering analysis.

Several alternative summary statistics that meet these criteria have been proposed in the literature, including k-th nearest neighbour statistics (Banerjee and Abel, 2021), wavelet scattering transforms (Valogiannis and Dvorkin, 2022, 2023), void statistics (Lavaux and Wandelt, 2012), marked correlations (White, 2016; Massara et al., 2022), skew spectra (Hou et al., 2023), and Minkowski functionals (Lippich and Sanchez, 2021). Among these novel methods, Paillas et al. (2021) proposed to perform a clustering analysis split by local density, combining the information content of different environments of the cosmic web.
Paillas et al. (2021) showed that density-split clustering can tighten the constraints on geometry and growth by modelling redshift-space distortions around different environments, compared to the standard two-point clustering analysis. Paillas et al. (2023) expanded on this by quantifying the information content of the full shape of the density-split correlation functions, forecasting that the method can deliver precise constraints on \(\Lambda\)CDM parameters, and can potentially be used to put upper limits on the sum of neutrino masses. Until now, however, a model that can capture the cosmological dependence of the full shape of the density-split correlation functions was not available. In Cuesta-Lazaro et al. (2023), we have presented sunbird, a simulation-based model for density-split and two-point clustering that can operate down to intra-halo scales, well into the non-linear regime, which has been validated against high-fidelity mock galaxy catalogues based on the AbacusSummit suite of simulations.

In this work, we use sunbird to carry out the first application of density-split clustering to observational data, applying it to the final data release of the Baryon Oscillation Spectroscopic Survey (Dawson et al., 2013). We fit a full-shape model of the galaxy two-point correlation function and density-split multipoles down to \(1\,h^{-1}\)Mpc, putting constraints on the base-\(\Lambda\)CDM model, as well as on extensions to the base model that vary the dark energy equation of state, the density of relic neutrinos, and the running of the spectral index of the primordial power spectrum.

The paper is organized as follows. We define our observables in Sect. 2. The clustering modelling, as well as the simulations used to validate our pipeline, are presented in Sect. 3. We present our main cosmological constraints and the validation tests in Sect. 4. Finally, we summarize and conclude in Sect. 5.

## 2 Observations

### 2.1 BOSS CMASS galaxy sample

We use data from the final data release (DR12) of the Baryon Oscillation Spectroscopic Survey (BOSS, Dawson et al., 2013). BOSS was conducted as part of the third stage of the larger Sloan Digital Sky Survey (SDSS, York et al., 2000), and collected optical spectra from more than 1.5 million targets using the 2.5-m Sloan Telescope (Gunn et al., 2006) at Apache Point, New Mexico. BOSS covered roughly 10,000 deg\({}^{2}\) of the sky in two hemispheres, referred to as the North and the South Galactic caps (NGC and SGC, respectively).

Our analysis is focused on the CMASS galaxy sample, which is dominated by luminous red galaxies (LRGs) that were selected based on SDSS multicolour photometric observations (Gunn et al., 1998, 2006). CMASS is nearly complete down to a stellar mass of \(M_{*}\approx 10^{11.3}\,\mathrm{M_{\odot}}\) for \(z>0.45\) (Maraston et al., 2013), and covers a redshift range \(0.4\lesssim z\lesssim 0.7\). For this paper, we impose a more stringent redshift cut, \(0.45\leq z\leq 0.6\), to avoid regions where the galaxy number density drops abruptly, which could bias our model predictions. Additionally, we restrict the analysis to the NGC for simplicity. These restrictions result in a sample with a total volume of \(\approx 1.4\,(h^{-1}{\rm Gpc})^{3}\), an _effective_ volume of \(\approx 1.1\,(h^{-1}{\rm Gpc})^{3}\), and an average number density of \(\approx 3.5\times 10^{-4}\,(h\,{\rm Mpc}^{-1})^{3}\). We use the DR12 large-scale structure catalogues provided by the BOSS collaboration (Reid et al., 2016).
These catalogues include angles and redshifts for each galaxy, which we convert to comoving Cartesian coordinates by adopting a flat-\(\Lambda\)CDM fiducial cosmology characterized by a matter density parameter \(\Omega_{\rm m}=0.315\), which closely matches the Planck 2018 best-fit cosmology, assuming base-\(\Lambda\)CDM (Planck Collaboration et al., 2020). The BOSS collaboration also provides a set of random catalogues that follow the footprint and radial selection of CMASS galaxies, but with no intrinsic clustering, which are used to estimate the overdensity field as described in the following sections.

### 2.2 Clustering measurements

#### 2.2.1 Two-point clustering

Galaxy clustering is usually characterized in terms of the two-point correlation function (2PCF), \(\xi^{\rm gg}(r)\), which, in its simplest form, quantifies the excess probability \({\rm d}P\) of finding a galaxy in a volume \({\rm d}V\), separated by a distance \(r\) from another galaxy, with respect to an unclustered Poisson distribution:
\[{\rm d}P=n_{g}\,\left[1+\xi^{\rm gg}(r)\right]{\rm d}V\,, \tag{1}\]
where \(n_{g}\) is the mean number density of galaxies in the sample. In the presence of redshift-space distortions (RSD, Jackson, 1972; Kaiser, 1987) or Alcock-Paczynski distortions (AP, Alcock & Paczynski, 1979), the galaxy distribution appears anisotropic to the observer. To capture this anisotropy, a convenient choice is to bin the correlation function in terms of \(s\) and \(\mu\), where \(s\) is the redshift-space pair separation and \(\mu\) is the cosine of the angle between the vector connecting the two galaxies and the observer's line of sight.

A number of estimators have been proposed to measure the 2PCF from observational data. A robust and commonly used estimator is the one introduced by Landy & Szalay (1993),
\[\xi^{\rm gg}(s,\mu)=\frac{{\rm GG-2GR+RR}}{{\rm RR}}\,, \tag{2}\]
where GG is the normalized number of galaxy pairs in the \((s,\mu)\) bin, while GR and RR are the normalized galaxy-random and random-random pairs, which make use of the unclustered random catalogues. It is useful to separate out the different angular components of the \((s,\mu)\) correlation function by decomposing it into multipole moments, defined by
\[\xi_{\ell}(s)=\frac{2\ell+1}{2}\int_{-1}^{1}{\rm d}\mu\,\xi(s,\mu)\,{\rm P}_{\ell}(\mu)\,, \tag{3}\]
with \({\rm P}_{\ell}\) the Legendre polynomials.

We measure \(\xi^{\rm gg}(s,\mu)\) in CMASS using pycorr2, which is a wrapper around a modified version of the Corrfunc pair-counting code (Sinha & Garrison, 2020). We focus the analysis on the monopole (\(\xi_{0}\)) and quadrupole (\(\xi_{2}\)) moments. The correlation functions are measured in \(24\,\mu\) bins from \(-1\) to \(1\), and radial bins with scale-dependent widths: \(1\,h^{-1}\,{\rm Mpc}\) bins for \(s\in[0,4]\,h^{-1}{\rm Mpc}\), \(3\,h^{-1}\,{\rm Mpc}\) bins for \(s\in(4,30]\,h^{-1}{\rm Mpc}\), and \(5\,h^{-1}\,{\rm Mpc}\) bins for \(s\in(30,151]\,h^{-1}{\rm Mpc}\).

Footnote 2: [https://github.com/cosmodesi/pycorr](https://github.com/cosmodesi/pycorr).
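For illustration, the sketch below evaluates the Landy-Szalay estimator of Eq. (2) and the multipole projection of Eq. (3) with plain numpy. In the actual analysis the pair counting is done by pycorr/Corrfunc; the array shapes and the random stand-in pair counts here are our own assumptions, used only to make the snippet self-contained.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def landy_szalay(GG, GR, RR):
    """Landy-Szalay estimator of Eq. (2); inputs are normalized pair counts
    with shape (n_s_bins, n_mu_bins)."""
    return (GG - 2.0 * GR + RR) / RR

def multipoles(xi_smu, mu_edges, ells=(0, 2)):
    """Project xi(s, mu) onto Legendre multipoles, Eq. (3), via a midpoint rule in mu."""
    mu = 0.5 * (mu_edges[:-1] + mu_edges[1:])
    dmu = np.diff(mu_edges)
    return {ell: (2 * ell + 1) / 2.0
            * np.sum(xi_smu * Legendre.basis(ell)(mu) * dmu, axis=1)
            for ell in ells}

# Toy usage with stand-in pair counts (24 mu bins from -1 to 1, as in the text).
rng = np.random.default_rng(0)
mu_edges = np.linspace(-1.0, 1.0, 25)
RR = rng.uniform(0.5, 1.0, size=(40, 24))
GG = RR * (1.0 + rng.normal(0.0, 0.05, size=RR.shape))
mults = multipoles(landy_szalay(GG, 0.5 * (GG + RR), RR), mu_edges)
xi_0, xi_2 = mults[0], mults[2]
```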
Galaxies are appropriately weighted during the pair counting to account for various observational systematics that could otherwise bias the clustering measurements. The total systematic weight for each galaxy is given by
\[w_{\rm sys,tot}=w_{\rm sys}(w_{\rm fc}+w_{\rm zf}-1)\,, \tag{4}\]
where \(w_{\rm sys}\), \(w_{\rm fc}\), and \(w_{\rm zf}\) account for imaging systematics, fiber collisions, and redshift failures (Ross et al., 2016). This is additionally multiplied by a weight \(w_{\rm FKP}\), which optimally weights the contribution of galaxies based on their redshift-dependent number density (Feldman et al., 1994),
\[w_{\rm FKP}=1/(1+n(z)P_{0})\,, \tag{5}\]
where \(P_{0}=10^{4}\,h^{-3}\,{\rm Mpc}^{3}\). The total weight for each galaxy is then given by
\[w_{\rm tot}=w_{\rm FKP}\,w_{\rm sys,tot}\,, \tag{6}\]
while points from the random catalogue are only weighted by \(w_{\rm FKP}\).

#### 2.2.2 Density-split clustering

The density-split clustering (DSC) method (Paillas et al., 2021) characterizes galaxy clustering in bins of environmental density, with the aim of extracting and combining the cosmological information coming from distinct environments of the cosmic web. We apply the density-split algorithm to the CMASS galaxy sample using our publicly available code3, with slight modifications to the algorithm presented in Paillas et al. (2023) to account for the non-uniform survey geometry.

Footnote 3: [https://github.com/epaillas/densitysplit](https://github.com/epaillas/densitysplit).

We start by painting the CMASS galaxies and randoms to a rectangular grid that fully encompasses the survey volume, and we estimate the overdensity field as
\[\delta=\frac{{\rm G}}{{\rm R}}-1\,, \tag{7}\]
where G and R are the normalized galaxy and random counts in each cell, weighted as given in Eq. (4). We smooth the overdensity field with a Gaussian filter of radius \(R_{s}=10\,h^{-1}{\rm Mpc}\), and then we sample it using cloud-in-cell interpolation at \(N_{\rm query}\) query positions, which are taken from the CMASS random catalogue. Here we set \(N_{\rm query}\) to 5 times the number of galaxies in the catalogue, and split those query points into 5 quintiles, according to the overdensity at each location.

Figure 1: Probability distribution function (PDF) of the galaxy overdensity measured around random query points. The overdensity field has been smoothed with a Gaussian filter of width \(R_{s}=10\,h^{-1}{\rm Mpc}\). The colours represent the split of the PDF into quintiles.

Fig. 1 shows the probability distribution function (PDF) of the galaxy overdensity measured at the query positions. The overdensity field shows a non-Gaussian PDF with significant skewness and kurtosis. This shape is a consequence of the growth of structure being bounded from below at \(\delta=-1\) (regions completely devoid of galaxies), while no such constraint is present at the positive \(\delta\) end. The distribution peaks at negative overdensities, reflecting that the average region in the Universe is underdense due to the larger volume occupied by voids. The division into quintiles is demarcated by the different colours in the figure. We label the quintiles as \(\mathrm{Q}_{i}\), where \(i\) goes from 0 to 4 from lower to higher densities. In what follows, we discard \(\mathrm{Q}_{2}\) from the clustering analysis, since the five quintiles are not independent of each other.4

Footnote 4: As the query points are random, the sum of the density-split cross-correlation functions over quintiles vanishes up to shot noise, and all the information in \(\mathrm{Q}_{2}\) is already contained in the remaining four quintiles, as shown in previous work (Paillas et al., 2023).
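The quintile assignment itself reduces to ranking the smoothed overdensity at the query points and cutting the ranked list into five equal-count bins. The following is a minimal sketch of that step; the lognormal stand-in for the smoothed field is our own assumption, used only to make the snippet runnable (the real pipeline obtains it from Eq. (7) after Gaussian smoothing and cloud-in-cell interpolation).

```python
import numpy as np

def split_quantiles(delta_query, n_quantiles=5):
    """Assign each query point an equal-count quantile label 0..n_quantiles-1
    (lowest to highest smoothed overdensity)."""
    ranks = np.argsort(np.argsort(delta_query))
    return (ranks * n_quantiles) // delta_query.size

# Stand-in for the smoothed overdensity interpolated at the query positions.
rng = np.random.default_rng(42)
delta_query = rng.lognormal(mean=0.0, sigma=0.6, size=100_000) - 1.0

labels = split_quantiles(delta_query)
quintiles = [np.flatnonzero(labels == i) for i in range(5)]  # indices of Q_0..Q_4
```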
Figure 2 shows the comoving number density of quintile positions as a function of redshift, which closely follows that of the galaxies. This follows by construction, as the query positions are sampled from the clustering random catalogues, which in turn are constructed to match the footprint and radial selection of the galaxies.

Once the density quintiles are defined, we estimate the quintile-galaxy cross-correlation function using the estimator (Landy and Szalay, 1993)
\[\xi^{\rm qg}(s,\mu)=\frac{\mathrm{QG-QR-RG+RR}}{\mathrm{RR}}\,, \tag{8}\]
where \(\mathrm{QG}\), \(\mathrm{QR}\), \(\mathrm{RG}\), and \(\mathrm{RR}\) are the normalized quintile-galaxy, quintile-random, galaxy-random, and random-random pair counts. We note that this assumes that the random catalogue is the same for galaxies and quintiles, which is justified by the good match between the \(n(z)\) distributions in Fig. 2. The autocorrelation functions of the DS quintiles are estimated as
\[\xi^{\rm qq}(s,\mu)=\frac{\mathrm{QQ-2QR+RR}}{\mathrm{RR}}\,, \tag{9}\]
with \(\mathrm{QQ}\) being the normalized quintile-quintile pair counts. We adopt the same binning scheme as for the galaxy 2PCF, as described in Sect. 2.2.1, and we weight the galaxy pairs according to Eq. (6).

Figure 3 shows the measured multipoles from DSC and the galaxy 2PCF. The DSC auto- and cross-correlation functions cover a broad range of amplitudes, as the quintiles trace the underlying matter density field in different ways. \(\mathrm{Q}_{0}\) and \(\mathrm{Q}_{1}\) are underdense regions with negative linear bias parameters that usually range from \(-3\) to \(-1\), whereas \(\mathrm{Q}_{3}\) and \(\mathrm{Q}_{4}\) trace overdensities and have positive bias parameters that can range from 1 to 3 (Paillas et al., 2021). Two signature features are seen in all profiles: a transition regime around \(25\,h^{-1}\,\mathrm{Mpc}\), which is due to the scale that was used to smooth the overdensity field and define the quintiles, and the peak/valley around \(100\,h^{-1}\,\mathrm{Mpc}\), which is an imprint of the BAO that originated from sound waves in the photon-baryon plasma prior to recombination.

The quadrupoles show a large degree of anisotropy in DSC. Two main factors contribute to this anisotropy. Firstly, there is an RSD effect caused by the dynamics of galaxies around different density environments. Kaiser-like motions on large scales and random motions on small scales (Fingers of God) cause distinct RSD patterns in the clustering of each quintile, similar to the well-known effects seen in the galaxy 2PCF. Secondly, there is an RSD effect imprinted on the quintile positions themselves, which is the product of identifying the density quintiles in redshift space. Paillas et al. (2023) showed that splitting densities in redshift space causes selection effects that induce distortions in the clustering of the quintiles, which manifest themselves as a quadrupole moment in the DSC autocorrelation functions. This is similar in nature to the selection effect that is produced when cosmic voids are identified in redshift space (Chuang et al., 2017; Nadathur et al., 2019; Correa et al., 2020). We reserve the discussion of the model fits (solid lines) for Sect. 4.1.

## 3 Modelling

### 3.1 Mock galaxy catalogues

Throughout this work, we use different types of mock galaxy catalogues to study various aspects of the analysis. The MD-Patchy mocks are used to estimate the sample variance associated with our observed clustering measurements. The Nseries mocks are used to validate the theory model we apply to CMASS.
The AbacusSummit mocks are used to train our simulation-based model for galaxy clustering, as presented in our companion paper (Cuesta-Lazaro et al., 2023). Each of these mocks is described in more detail below.

#### 3.1.1 MD-Patchy mocks

To estimate the covariance of the data vector, we use the MultiDark-Patchy mocks (MD-Patchy, Kitaura et al., 2016), a suite of 2048 mock galaxy catalogues that were designed to match the footprint, redshift distribution, and halo occupation distribution of the BOSS DR12 galaxy samples. The mocks are based on approximate lightcones generated with augmented Lagrangian perturbation theory (ALPT) using the patchy code (Kitaura et al., 2014, 2015). ALPT is based on a combination of second-order Lagrangian perturbation theory on large scales and the spherical collapse model on smaller scales. patchy populates dark matter haloes using a subhalo abundance matching prescription that is calibrated from N-body simulations from the Big MultiDark suite (Klypin et al., 2016), and uses a \(\Lambda\)CDM model matched to the best fit to the Planck 2013 CMB measurements (Planck Collaboration et al., 2014), characterized by a matter density parameter \(\Omega_{\mathrm{m}}=0.307115\), a baryon density parameter \(\Omega_{\mathrm{b}}=0.048\), an amplitude of matter fluctuations in \(8\,h^{-1}\,\mathrm{Mpc}\) spheres \(\sigma_{8}=0.8288\), a tilt of the primordial power spectrum \(n_{s}=0.9611\), and a dimensionless Hubble parameter \(h=0.6777\).

Figure 2: Comoving number density as a function of redshift for the density-split quintiles and galaxies from the BOSS DR12 CMASS sample.

#### 3.1.2 Nseries mocks

To study systematic errors in our theory model, we use the Nseries cutsky mocks, a collection of 84 mock catalogues that were designed to match the clustering, footprint, and radial selection of the CMASS galaxy sample. The cutsky mocks are constructed from a base set of seven independent full N-body simulations, with a box side length of \(2.6\,h^{-1}\)Gpc and a mass resolution of \(1.5\times 10^{11}\,h^{-1}\mathrm{M}_{\odot}\). Dark matter haloes are populated with a halo occupation distribution prescription, with parameters chosen to best model the clustering of the BOSS DR12 CMASS sample. Each box is trimmed and rotated in different ways to produce 12 cutsky mocks that match the geometry of the CMASS sample, which results in a total of 84 pseudo-independent cutsky catalogues. The mocks are then passed through the same fiber assignment code that was used for BOSS, so that they faithfully reproduce the angular variations of fiber collisions in the data (Hahn et al., 2017). Nseries is characterized by a cosmology with \(\Omega_{\mathrm{m}}=0.286\), \(\Omega_{\mathrm{b}}=0.0470\), \(h=0.70\), \(\sigma_{8}=0.82\), and \(n_{s}=0.96\).

#### 3.1.3 AbacusSummit mocks

To train and validate our clustering emulators, we use AbacusSummit, a suite of cosmological N-body simulations (Maksimova et al., 2021) designed to meet the simulation requirements of the Dark Energy Spectroscopic Instrument (Levi et al., 2019). The simulations were run with the Abacus N-body code (Garrison et al., 2019, 2021), comprising different volumes, resolutions, and cosmologies. The _base_ simulations follow the evolution of \(6912^{3}\) dark matter particles in a \((2\,h^{-1}\mathrm{Gpc})^{3}\) volume, with a mass resolution of \(2\times 10^{9}\,h^{-1}\mathrm{M}_{\odot}\).
There are 97 cosmology variations in total, exploring an eight-dimensional parameter space around the Planck 2018 primary \(\Lambda\)CDM cosmology (PL18, Planck Collaboration et al., 2020):
\[\boldsymbol{\theta}_{\mathrm{AbacusSummit}}=\left\{\omega_{\mathrm{cdm}},\,\omega_{\mathrm{b}},\,\sigma_{8},\,n_{s},\,\mathrm{d}n_{s}/\mathrm{d}\ln k,\,N_{\mathrm{eff}},\,w_{0},\,w_{a}\right\}\,. \tag{10}\]
Here, \(\omega_{\mathrm{cdm}}=\Omega_{\mathrm{c}}h^{2}\) and \(\omega_{\mathrm{b}}=\Omega_{\mathrm{b}}h^{2}\) are the physical cold dark matter and baryon densities, \(\mathrm{d}n_{s}/\mathrm{d}\ln k\) is the running of the spectral tilt, \(N_{\mathrm{eff}}\) is the effective number of ultra-relativistic species, \(w_{0}\) is the present-day dark energy equation of state, and \(w_{a}\) captures the time evolution of the dark energy equation of state. The simulations assume flat spatial curvature, and the dimensionless Hubble parameter \(h\) is calibrated to match the Cosmic Microwave Background (CMB) acoustic scale \(\theta_{*}\) to the PL18 measurement. For the training and validation of the emulators, we restrict to the following subset of simulations, which were all run with the same initial conditions:

**c000** PL18 \(\Lambda\)CDM base cosmology, matching the mean of the base_plikHM_TTTEEE_lowl_lowE_lensing likelihood.

**c001-004** Secondary cosmologies, including a low \(\omega_{\rm cdm}\) choice (WMAP, Komatsu et al., 2011), a \(w\)CDM choice, a high \(N_{\rm eff}\) choice, and a low \(\sigma_{8}\) choice.

**c013** Cosmology matching the Euclid Flagship2 \(\Lambda\)CDM simulation (Castander et al., in preparation).

**c100-126** Linear derivative grid providing paired simulations with small negative and positive steps in the eight-dimensional cosmological parameter space from Eq. (10).

**c130-181** An emulator grid around the c000 cosmology that provides a wider coverage of the cosmological parameter space.

Figure 3: DSC and galaxy 2PCF multipoles measured from the BOSS DR12 CMASS catalogue (circles with error bars), along with the best-fit model from our emulator (solid lines with shaded bands). The columns show the multipoles of the quintile-galaxy cross-correlation function (left), the quintile autocorrelation function (middle), and the galaxy two-point correlation function (right). The lower sub-panels display the difference between the model and the data, in units of the standard deviation of the total error budget. Each colour corresponds to a different density quintile, as illustrated in Fig. 1. The 68 per cent errors of the data are estimated from 2048 realizations of the MD-Patchy mocks and represent the expected level of sample variance for the CMASS NGC volume. The emulator uncertainty is estimated by validating the predictions against a test set of simulations with a known cosmology.

In addition to the base simulations, there are multiple realizations of smaller boxes with a side length of \(500\,h^{-1}\)Mpc at the c000 cosmology, which can be used for covariance estimation. Throughout the rest of the paper, we will refer to these simulations as AbacusSmall, and to the _base_ simulations simply as AbacusSummit. Group finding is done on the fly, using a hybrid Friends-of-Friends/Spherical Overdensity algorithm dubbed CompaSO (Hadzhiyska et al., 2021). As described in Sect. 3.3, we populate these halo catalogues with galaxies using a halo occupation distribution prescription, and these galaxy catalogues are then used to obtain the clustering measurements for our training data.
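For reference, the sampled parameter space of Eq. (10) can be summarized as a plain configuration mapping. The fixed baseline values below are the ones quoted in Table 1; entries marked `None` are always sampled in the base-\(\Lambda\)CDM analysis. This is only a bookkeeping sketch, not code from the actual pipeline.

```python
# Eight-dimensional AbacusSummit parameter space of Eq. (10).
# Fixed baseline values follow Table 1; None marks parameters that are
# sampled in the base-LambdaCDM analysis.
ABACUS_PARAMS = {
    "omega_cdm": None,      # physical cold dark matter density
    "omega_b": None,        # physical baryon density (BBN-informed prior)
    "sigma8": None,         # amplitude of matter fluctuations
    "n_s": None,            # spectral index of the primordial power spectrum
    "dns_dlnk": 0.0,        # running of the spectral index
    "N_eff": 3.0146,        # effective number of ultra-relativistic species
    "w0": -1.0,             # present-day dark energy equation of state
    "wa": 0.0,              # time evolution of the dark energy equation of state
}
```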
### 3.2 Galaxy-halo connection model

In the current paradigm of cosmology, galaxies are thought to form and evolve within dark matter haloes, which are large structures that form as a result of the gravitational collapse of overdensities in the Universe. The halo occupation distribution (HOD) is a statistical model that describes how galaxies are distributed within dark matter haloes. A well-suited model for LRGs is the base halo model from Zheng et al. (2007), where the average number of central galaxies in a halo of mass \(M\) is given by
\[\langle N_{\rm c}\rangle(M)=\frac{1}{2}\left(1+{\rm erf}\left(\frac{\log M-\log M_{\rm cut}}{\sqrt{2}\sigma}\right)\right)\,, \tag{11}\]
where \({\rm erf}(x)\) denotes the error function, \(M_{\rm cut}\) is the minimum mass required to host a central, and \(\sigma\) is the slope of the transition between hosting zero and one central galaxies. The average number of satellite galaxies is in turn given by
\[\langle N_{\rm s}\rangle(M)=\langle N_{\rm c}\rangle(M)\left(\frac{M-\kappa M_{\rm cut}}{M_{1}}\right)^{\alpha}\,, \tag{12}\]
where \(\kappa M_{\rm cut}\) gives the minimum mass required to host a satellite, \(M_{1}\) is the typical mass that hosts one satellite, and \(\alpha\) is the power-law index for the number of galaxies.

We use the AbacusHOD package, which is highly efficient and contains a wide range of HOD extensions (Yuan et al., 2022). For this work, we extend the base model to modulate galaxy peculiar velocities via the parameters \(\alpha_{\rm vel,c}\), which parametrizes the velocity bias between the central galaxy and the halo centre, and \(\alpha_{\rm vel,s}\), which parametrizes the velocity bias between the satellite galaxies and the local dark matter particles. When no velocity bias is present, \(\alpha_{\rm vel,c}=0\) and \(\alpha_{\rm vel,s}=1\), in which case centrals perfectly follow the velocity of halo centres, and satellites perfectly match the velocity of the underlying dark matter particles.

We introduce two additional parameters to account for galaxy assembly bias: \(B_{\rm cen}\) and \(B_{\rm sat}\), which add an environment-based secondary bias for centrals and satellites, respectively. Here, the environment is defined as the smoothed matter density around the halo centres, using a top-hat filter of radius \(R_{S}=5\,h^{-1}\)Mpc. When no secondary bias is present, \(B_{\rm cen}=B_{\rm sat}=0\). Positive/negative values of these parameters indicate a preference for galaxies to form in haloes in less/more dense environments, respectively.
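The mean occupations of Eqs. (11)-(12) are straightforward to evaluate; the sketch below does so with numpy/scipy. The parameter values in the usage example are arbitrary choices within the Table 1 prior ranges, not fitted values, and the actual galaxy assignment (including the velocity and assembly bias extensions) is handled by AbacusHOD.

```python
import numpy as np
from scipy.special import erf

def n_central(logM, logM_cut, sigma):
    """Mean central occupation, Eq. (11); logM = log10 of the halo mass."""
    return 0.5 * (1.0 + erf((logM - logM_cut) / (np.sqrt(2.0) * sigma)))

def n_satellite(logM, logM_cut, sigma, logM1, kappa, alpha):
    """Mean satellite occupation, Eq. (12); zero below the kappa*M_cut threshold."""
    M, M_cut, M1 = 10.0 ** logM, 10.0 ** logM_cut, 10.0 ** logM1
    frac = np.clip((M - kappa * M_cut) / M1, 0.0, None)
    return n_central(logM, logM_cut, sigma) * frac ** alpha

# Illustrative parameter values within the Table 1 prior ranges.
logM = np.linspace(12.0, 15.0, 61)
Nc = n_central(logM, logM_cut=12.8, sigma=0.3)
Ns = n_satellite(logM, logM_cut=12.8, sigma=0.3, logM1=13.8, kappa=0.5, alpha=1.0)
```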
\begin{table}
\begin{tabular}{|l|l|l|l|}
\hline Parameter & Prior distribution & Baseline & Interpretation \\
\hline \(\omega_{\rm b}\) & \(N(0.02268,0.00038)\) & — & Physical baryon density \\
\hline \(\omega_{\rm cdm}\) & \(\mathcal{U}[0.1032,0.14]\) & — & Physical cold dark matter density \\
\hline \(\sigma_{8}\) & \(\mathcal{U}[0.687,0.938]\) & — & Amplitude of matter fluctuations in \(8\,h^{-1}\)Mpc spheres \\
\hline \(n_{s}\) & \(\mathcal{U}[0.901,1.025]\) & — & Spectral index of the primordial power spectrum \\
\hline \({\rm d}n_{s}/{\rm d}\ln k\) & \(\mathcal{U}[-0.038,0.038]\) & 0.0 & Running of the spectral index \\
\hline \(N_{\rm eff}\) & \(\mathcal{U}[2.1902,3.902]\) & 3.0146 & Number of ultra-relativistic species \\
\hline \(w_{0}\) & \(\mathcal{U}[-1.27,-0.70]\) & \(-1.0\) & Present-day dark energy equation of state \\
\hline \(w_{a}\) & \(\mathcal{U}[-0.628,0.621]\) & 0.0 & Time evolution of the dark energy equation of state \\
\hline \(M_{\rm cut}\) & \(\mathcal{U}[12.4,13.3]\) & — & Minimum halo mass to host a central \\
\hline \(M_{1}\) & \(\mathcal{U}[13.2,14.4]\) & — & Typical halo mass to host one satellite \\
\hline \(\log\sigma\) & \(\mathcal{U}[-3.0,0.0]\) & — & Slope of the transition from hosting zero to one central \\
\hline \(\alpha\) & \(\mathcal{U}[0.7,1.5]\) & — & Power-law index for the mass dependence of the number of satellites \\
\hline \(\kappa\) & \(\mathcal{U}[0.0,1.5]\) & — & Parameter that modulates the minimum halo mass to host a satellite \\
\hline \(\alpha_{c}\) & \(\mathcal{U}[0.0,0.5]\) & — & Velocity bias for centrals \\
\hline \(\alpha_{s}\) & \(\mathcal{U}[0.7,1.3]\) & — & Velocity bias for satellites \\
\hline \(B_{\rm cen}\) & \(\mathcal{U}[-0.5,0.5]\) & — & Environment-based assembly bias for centrals \\
\hline \(B_{\rm sat}\) & \(\mathcal{U}[-1.0,1.0]\) & — & Environment-based assembly bias for satellites \\
\hline
\end{tabular}
\end{table}
Table 1: List of cosmological and HOD parameters used in our analysis. For each, we quote the parameter symbol, the prior distribution, the fixed value in the baseline model (where appropriate), and the physical interpretation. The prior distribution for all parameters is uniform, with the exception of the baryon density, for which we adopt a normal distribution with the mean and dispersion specified above.

### 3.3 Clustering emulators

In the context of galaxy clustering and cosmology, an emulator is a computational model or algorithm that approximates the predictions of expensive or time-consuming simulations or calculations. Emulators are used to efficiently and accurately generate predictions from cosmological models without resorting to additional simulations. They are trained on a set of pre-computed simulations, often referred to as a training or calibration set, which cover a wide range of model parameter values and capture the desired properties of an observable. As such, the emulator can learn how this observable responds to changes in cosmology or galaxy-halo connection parameters. Once the emulator is trained, it can rapidly generate model predictions for any given set of input parameters.

In Cuesta-Lazaro et al. (2023), we present our galaxy 2PCF and DSC emulators, which are based on HOD catalogues constructed from the AbacusSummit simulations. Here, we give a brief description of how the emulators are constructed.
We start from the dark matter halo catalogues of the AbacusSummit snapshots at \(z=0.5\), spanning 85 different cosmologies within the eight-dimensional \(w_{0}w_{a}\Lambda\)CDM parameter space defined in Eq. (10). Using the AbacusHOD code (Yuan et al., 2022), we populate dark matter haloes with a nine-parameter extended HOD framework (Sect. 3.2),
\[\boldsymbol{\theta}_{\rm HOD}=\left\{M_{\rm cut},M_{1},\sigma,\alpha,\kappa,\alpha_{\rm vel,c},\alpha_{\rm vel,s},B_{\rm cen},B_{\rm sat}\right\}\,, \tag{13}\]
generating 100 unique HOD variations per cosmology, where the HOD parameters are sampled from a Latin hypercube to ensure an optimal sampling of the parameter space. When the number density of an HOD catalogue is above the average number density of the CMASS sample (\(n_{\rm gal}\approx 3.5\times 10^{-4}\,(h/{\rm Mpc})^{3}\)), we invoke an incompleteness parameter \(f_{\rm ic}\) to downsample the catalogue to the target number density.

Under the distant-observer approximation, we map the positions of galaxies to redshift space by perturbing their positions with their peculiar velocities along one Cartesian axis of the simulation box, chosen to represent the line of sight. We repeat this procedure for each of the three axes of the simulation boxes, effectively generating three pseudo-independent redshift-space catalogues for each HOD variation, from which we can average the clustering measurements to reduce cosmic variance later on. For each of these mock catalogues, we run the DSC pipeline, using a mesh resolution of \(R_{\rm cell}=5\,h^{-1}\)Mpc, a number of random query points \(N_{\rm query}\) equal to five times the number of galaxies in each catalogue, a smoothing radius for the Gaussian filter of \(R_{s}=10\,h^{-1}\)Mpc, and five density quintiles. We compute the galaxy 2PCF and the density-split correlation functions in bins of \(s\) and \(\mu\), and we decompose them into their multipole moments.

Figure 4: Posterior probability distributions of the base-\(\Lambda\)CDM parameters from the fits to the BOSS DR12 CMASS clustering data. The pink contours show results from the combination of density-split clustering and the galaxy 2PCF, while the aquamarine contours show results using only the latter. We also overplot constraints from the Planck_TTTEEE_lowl_lowE likelihood (Planck Collaboration et al., 2020) in blue. All 2D contours show 68 and 95 per cent confidence intervals.

We split the HOD catalogues into training, validation, and test sets, and for each clustering statistic we construct separate fully-connected neural networks, which take the cosmological and HOD parameters as input and return the monopole and quadrupole moments of the correlation functions. For training and validation, we use cosmologies c100-c126 and c130-c181, whereas the rest are reserved for the test set. The hyperparameters of the neural networks are calibrated to minimize the validation loss. Overall, we observe that the emulators produce model predictions with percent-level accuracy over the full range of scales when tested against the test simulation boxes.
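The tuned architectures and hyperparameters of the sunbird emulators are described in Cuesta-Lazaro et al. (2023); purely as an illustration of the idea, a fully-connected network mapping the cosmological and HOD parameters to the concatenated monopole and quadrupole bins could look as follows. The layer widths, activation, and number of output bins here are our own assumptions.

```python
import torch
import torch.nn as nn

class MultipoleEmulator(nn.Module):
    """Toy fully-connected emulator: parameter vector -> multipole bins."""

    def __init__(self, n_params=13, n_bins=72, width=256):
        # n_params: 4 cosmological + 9 HOD parameters; n_bins is illustrative.
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, n_bins),  # concatenated xi_0 and xi_2 bins
        )

    def forward(self, theta):
        return self.net(theta)

model = MultipoleEmulator()
prediction = model(torch.zeros(1, 13))  # one parameter vector in, multipoles out
loss_fn = nn.MSELoss()                  # validation loss guides hyperparameter tuning
```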
### 3.4 Likelihood

When fitting the emulator to the CMASS data, we define the log-likelihood as
\[\log\mathcal{L}=-\frac{1}{2}\left(\mathbf{d}^{\rm data}-\mathbf{d}^{\rm model}\right)\mathbf{C}^{-1}\left(\mathbf{d}^{\rm data}-\mathbf{d}^{\rm model}\right)^{\top}\,, \tag{14}\]
where \(\mathbf{d}^{\rm data}\) is the observed data vector, \(\mathbf{d}^{\rm model}\) is the emulator prediction, and \(\mathbf{C}\) is the covariance matrix, which includes three contributions to the error budget:
\[\mathbf{C}=\mathbf{C}^{\rm data}+\mathbf{C}^{\rm emu}+\mathbf{C}^{\rm sim}\,. \tag{15}\]
Here, \(\mathbf{C}^{\rm data}\) is the term associated with the sample variance of the data vector, which is estimated from multiple realizations of the MD-Patchy mocks:
\[\mathbf{C}^{\rm data}=\frac{1}{N_{\rm patchy}-1}\sum_{k=1}^{N_{\rm patchy}}\left(\mathbf{d}_{k}-\overline{\mathbf{d}}\right)\left(\mathbf{d}_{k}-\overline{\mathbf{d}}\right)^{\top}\,, \tag{16}\]
where \(N_{\rm patchy}=2048\).

The simulations used for training are at a fixed phase, which could be different from the true underlying phase of the Universe. In other words, the cosmic variance is frozen in our emulator predictions. To account for this, we add an extra sample variance contribution to the error budget, associated with the finite size of the training simulations. We estimate this covariance using multiple mock realizations of the AbacusSummit fiducial cosmology with a fixed set of HOD parameters with high likelihood:
\[\mathbf{C}^{\rm sim}=\frac{1}{N_{\rm sim}-1}\sum_{k=1}^{N_{\rm sim}}\left(\mathbf{d}_{k}-\overline{\mathbf{d}}\right)\left(\mathbf{d}_{k}-\overline{\mathbf{d}}\right)^{\top}\,, \tag{17}\]
where \(N_{\rm sim}=1800\).

\(\mathbf{C}^{\rm emu}\) accounts for the intrinsic error in the model predictions due to an imperfect emulation. This term is calculated by computing a covariance matrix from the difference between the emulator predictions and measurements from a set of test simulations with known cosmologies and HOD parameters, \(\Delta\mathbf{d}\):
\[\mathbf{C}^{\rm emu}=\frac{1}{N_{\rm test}-1}\sum_{k=1}^{N_{\rm test}}\left(\Delta\mathbf{d}_{k}-\overline{\Delta\mathbf{d}}\right)\left(\Delta\mathbf{d}_{k}-\overline{\Delta\mathbf{d}}\right)^{\top}\,, \tag{18}\]
where the overline denotes the mean across all test simulations, and \(N_{\rm test}=600\).

When calculating the likelihood, both \(\mathbf{C}^{\rm data}\) and \(\mathbf{C}^{\rm sim}\) are multiplied by a factor \(\mathcal{P}\) before inversion (Percival et al., 2022):
\[\mathcal{P}=\frac{(N_{s}-1)[1+B(N_{d}-N_{\theta})]}{N_{s}-N_{d}+N_{\theta}-1}\,, \tag{19}\]
where
\[B=\frac{N_{s}-N_{d}-2}{(N_{s}-N_{d}-1)(N_{s}-N_{d}-4)}\,. \tag{20}\]
Here, \(N_{s}\) is the number of simulations used to estimate the covariance, \(N_{d}\) is the length of the data vector, and \(N_{\theta}\) is the number of parameters that are being fitted. This corrects for the fact that, even though Eqs. (16) and (17) are unbiased estimates of the covariance matrices, the inversion leads to biased parameter constraints.

We sample the posterior distribution of parameters using the dynesty nested sampler (Speagle, 2020), which also provides an estimate of the Bayesian evidence. As listed in Table 1, we assume flat priors for all parameters except for the baryon density, for which we adopt a BBN-informed Gaussian prior (Aver et al., 2015; Cooke et al., 2018; Schoneberg et al., 2019) for our baseline analysis:
\[\omega_{\rm b}=0.02268\pm 0.00038\,. \tag{21}\]
In later sections, we also explore using a flat prior range for \(\omega_{\rm b}\), finding that it has little impact on our conclusions and the reported constraints for other parameters.
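The ingredients of Eqs. (14)-(21) translate directly into code. The sketch below shows one possible implementation of the covariance estimates, the Percival et al. (2022) rescaling, the Gaussian log-likelihood, and a prior transform of the kind dynesty expects; the four-parameter cosmological prior set is a truncated illustration of Table 1 (the HOD priors would be appended in the same way).

```python
import numpy as np
from scipy import stats

# Baseline cosmological priors from Table 1 (HOD priors omitted for brevity).
priors = {
    "omega_b":   stats.norm(loc=0.02268, scale=0.00038),         # BBN-informed
    "omega_cdm": stats.uniform(loc=0.1032, scale=0.14 - 0.1032),
    "sigma8":    stats.uniform(loc=0.687, scale=0.938 - 0.687),
    "n_s":       stats.uniform(loc=0.901, scale=1.025 - 0.901),
}

def prior_transform(u):
    """Map the unit cube to parameter space, as required by dynesty."""
    return np.array([dist.ppf(ui) for dist, ui in zip(priors.values(), u)])

def sample_covariance(d):
    """Unbiased sample covariance; rows of d are mock realizations (Eqs. 16-18)."""
    return np.cov(d, rowvar=False, ddof=1)

def percival_factor(n_mocks, n_data, n_par):
    """Covariance rescaling of Percival et al. (2022), Eqs. (19)-(20)."""
    B = (n_mocks - n_data - 2.0) / ((n_mocks - n_data - 1.0) * (n_mocks - n_data - 4.0))
    return (n_mocks - 1.0) * (1.0 + B * (n_data - n_par)) / (n_mocks - n_data + n_par - 1.0)

def log_likelihood(d_data, d_model, C):
    """Gaussian log-likelihood of Eq. (14)."""
    r = d_data - d_model
    return -0.5 * r @ np.linalg.solve(C, r)
```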
### 3.5 \(\theta_{*}\) prior

In the simulations used to train our emulator, the value of the dimensionless Hubble parameter \(h\) was chosen to fix the CMB acoustic scale \(\theta_{*}\) (\(100\theta_{*}=1.041533\)), matching the best-fit measurement from PL18 (Planck Collaboration et al., 2020; Maksimova et al., 2021). This means that we effectively apply a fixed-\(\theta_{*}\) constraint as a prior on our emulator results, which ensures that we only consider models within the part of parameter space where the emulator has been trained. As a result, \(h\) is not a free parameter in our model, but is determined by the sampled parameters and the \(\theta_{*}\) constraint. We use class (Blas et al., 2011) to derive \(h\) at each point in our chains, and we then use this to also obtain \(\Omega_{\rm m}\), calculated as
\[\Omega_{\rm m}=\left(\omega_{\rm b}+\omega_{\rm cdm}+\omega_{\nu}\right)/h^{2}\,, \tag{22}\]
where \(\omega_{\nu}=0.00064420\) accounts for 60 meV neutrinos (also the choice of the base PL18 cosmology), which is always fixed in our model.

A consequence of this prior on \(\theta_{*}\) is that the parameter constraints we obtain below do not come exclusively from late-Universe clustering measurements, but rather from a combination of galaxy clustering and information on the CMB acoustic scale, although they do not use other CMB information. The simple geometrical interpretation of \(\theta_{*}\) means that it is one of the best-measured quantities in all of cosmology (the PL18 measurement corresponds to a 0.03 per cent precision level), and this measurement is also extremely robust to changes in the cosmological model (Planck Collaboration et al., 2020). We therefore consider this to be a very well-justified prior.
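Conceptually, deriving \(h\) from the \(\theta_{*}\) constraint is a one-dimensional root-finding problem at each point in the chain. The sketch below illustrates this; in the actual pipeline \(\theta_{*}\) is computed with class, whereas the toy stand-in function and the bracketing interval here are our own assumptions, included only so the snippet runs.

```python
from scipy.optimize import brentq

OMEGA_NU = 0.00064420       # fixed 60 meV neutrino contribution
THETA_STAR = 1.041533e-2    # 100*theta_star = 1.041533 (PL18)

def solve_h(omega_b, omega_cdm, theta_star_of, h_lo=0.55, h_hi=0.85):
    """Find h such that the model acoustic scale matches the PL18 value.

    theta_star_of(h, omega_b, omega_cdm) stands in for a Boltzmann-code call."""
    return brentq(lambda h: theta_star_of(h, omega_b, omega_cdm) - THETA_STAR,
                  h_lo, h_hi)

def omega_m(omega_b, omega_cdm, h):
    """Derived total matter density parameter, Eq. (22)."""
    return (omega_b + omega_cdm + OMEGA_NU) / h ** 2

# Toy stand-in with a plausible monotonic behaviour, for demonstration only.
toy_theta_star = lambda h, wb, wcdm: 1.0416e-2 * (0.6777 / h) ** 0.3
h = solve_h(0.02268, 0.1201, toy_theta_star)
Om = omega_m(0.02268, 0.1201, h)
```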
## 4 Results

In this section, we apply our emulator framework to the CMASS clustering data and present our main cosmological constraints, along with tests for model systematics. We focus on the constraints for cosmological parameters and reserve the discussion about HOD constraints for Appendix A.

### 4.1 Model fits

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline Statistic & \(\chi^{2}\) & dof & \(\chi^{2}/{\rm dof}\) & \(\log\mathcal{L}\) & \(\log\mathcal{Z}\) \\
\hline 2PCF + DSC & 1052 & 635 & 1.66 & \(-257.58\) & \(-276.38\) \\
DSC & 942 & 563 & 1.67 & \(-253.15\) & \(-270.85\) \\
\hline 2PCF & 63 & 59 & 1.06 & \(-29.63\) & \(-38.03\) \\
\hline
\end{tabular}
\end{table}
Table 2: The \(\chi^{2}\), degrees of freedom, log-likelihood, and log-evidence for the best fits to the BOSS DR12 multipoles from the galaxy two-point correlation function (2PCF), density-split clustering (DSC), and the combination of the two. We simultaneously fit the monopole and quadrupole, using scales \(1\,h^{-1}\)Mpc \(<s<150\,h^{-1}\)Mpc.

The solid lines in Fig. 3 show the best-fit base-\(\Lambda\)CDM model to the measured multipoles, with the \(\chi^{2}\), likelihood, and Bayesian evidence values reported in Table 2. The \(\chi^{2}\) per degree of freedom for the galaxy 2PCF + DSC combination is 1.66, while for the 2PCF-only fit it is 1.06, which is comparable to the values reported for the small-scale 2PCF fit of Yuan et al. (2022) and the configuration-space BAO fit of Ross et al. (2016).

We observe some hints of the models under-predicting (or over-predicting, in the case of negative density contrasts) the observed clustering at scales \(s>90\,h^{-1}\)Mpc. Such excess large-scale clustering has been observed before in galaxy 2PCF analyses of BOSS data. In Ross et al. (2016), the authors find that the observed monopole shows an apparent excess with respect to the mean of the MD-Patchy mock catalogues, although it is argued that the mismatch is of low statistical significance. In Satpathy et al. (2017), the authors fit the correlation function using a model based on Convolutional Lagrangian Perturbation Theory, also finding that the model under-predicts the monopole data at large scales. Here, we observe a similar trend when comparing the measured multipoles to our emulator predictions, although the level of discrepancy is never greater than two standard deviations at any of the scales considered, irrespective of the summary statistic. As informed by the \(\chi^{2}\) values, the quality of the fit is better than might be guessed by eye, since the separation bins are correlated. In Sect. 4.4, we show that the mean monopole of the Nseries mocks is also lower than the CMASS data, although still consistent within \(2\sigma\). We also observe that the emulator is able to fit the mean of the mocks with much higher accuracy than the data, which we deem reasonable given that our model was trained to make predictions for the ensemble average of the clustering statistics.

Although some of the observed offsets between our best-fit model and CMASS could be partially attributed to statistical fluctuations, we should bear in mind the possibility that there are residual systematic effects in the clustering data that are not fully corrected by the weighting procedure. Lavaux et al. (2019) performed a field-level inference of BOSS data, finding evidence of residual systematics in the spectroscopic data that so far remain unexplained, which can induce correlated modulations of the order of 30 per cent on the sky for CMASS. In addition, it is worth noting that the systematic weights for the large-scale structure catalogues provided by the BOSS collaboration were mainly validated for two-point statistics, and it is possible that they are sub-optimal for alternative clustering methods such as DSC. We plan to explore this topic further in future work.

### 4.2 Base-\(\Lambda\)CDM constraints

In this section, we explore the constraining power of DSC and the galaxy 2PCF on the base-\(\Lambda\)CDM parameters, letting \(\omega_{\rm b}\), \(\omega_{\rm cdm}\), \(\sigma_{8}\), \(n_{s}\), and the HOD parameters vary during the fit. To generate the model predictions, we query the emulator fixing the remaining cosmological parameters to their baseline values \(w_{0}=-1\), \(w_{a}=0\), \({\rm d}n_{s}/{\rm d}\ln k=0\), and \(N_{\rm eff}=3.0146\).

Figure 4 shows the 2D posterior distributions of the \(\Lambda\)CDM parameters, marginalized over the HOD parameters. We also overplot the base-\(\Lambda\)CDM constraints from the Planck_TTTEEE_lowl_lowE likelihood of PL18 to facilitate comparison. The reported best-fit values, along with the means of the marginalized posteriors and their dispersions, are listed in Table 3. It is worth noting that the best-fit values can be shifted with respect to the mean of the posterior due to its non-Gaussian shape. The galaxy 2PCF by itself constrains \(\omega_{\rm cdm}\), \(\sigma_{8}\), and \(n_{s}\) with a 2.7, 6.2, and 3.2 per cent precision, respectively.
Adding the DSC multipoles tightens these constraints to a precision of 1.7, 3.8, and 1.8 per cent, respectively. The constraints on the baryon density are entirely dominated by the BBN prior. Overall, both posterior distributions are largely consistent with the PL18 results, with a \(0.04\sigma\) and \(0.6\sigma\) difference in the mean values of \(\omega_{\rm cdm}\) and \(\sigma_{8}\) between our baseline fit and PL18.

As described in Sect. 3.5, we use the \(\theta_{*}\) constraint to obtain \(h\) and \(\Omega_{\rm m}\) as derived parameters from our chains, and also show the results for them in Fig. 4. We obtain \(h=0.6793\pm 0.0070\) and \(\Omega_{\rm m}=0.3122^{+0.0094}_{-0.011}\) when fixing to the base-\(\Lambda\)CDM model. As Fig. 4 shows, we do not observe any significant degeneracy between \(\Omega_{\rm m}\) and the baryon density (and our constraints on \(\omega_{\rm b}\) are also prior-dominated). This means that, to a good approximation, \(\theta_{*}\propto\Omega_{\rm m}h^{3.4}\), the expected behaviour for fixed \(\Omega_{\rm b}h^{2}\) within flat-\(\Lambda\)CDM models (Percival et al., 2002), and thus the DSC and galaxy 2PCF results both describe this narrow degeneracy direction in the \(\Omega_{\rm m}\)-\(h\) plane. This is close to, but not the same as, the similarly tight degeneracy obtained from the CMB by PL18, which corresponds to constant \(\Omega_{\rm m}h^{3}\). The degeneracy direction for models with the same \(\theta_{*}\) depends weakly on \(\Omega_{\rm b}h^{2}\) through the sound horizon. PL18 uses the CMB data to also constrain \(\Omega_{\rm b}h^{2}\), leading to an anti-correlation between the constraints on \(\Omega_{\rm b}\) and \(\Omega_{\rm m}\), which then results in a different degeneracy in the \(\Omega_{\rm m}\)-\(h\) plane.

\begin{table}
\begin{tabular}{|c|c c|c c|c c|}
\hline & \multicolumn{2}{c|}{density-split + galaxy 2PCF} & \multicolumn{2}{c|}{density-split} & \multicolumn{2}{c|}{galaxy 2PCF} \\
\hline Parameter & best-fit & mean \(\pm\sigma\) & best-fit & mean \(\pm\sigma\) & best-fit & mean \(\pm\sigma\) \\
\hline \(\omega_{\rm b}\) & 0.0234 & 0.02279 \(\pm\) 0.00035 & 0.0233 & 0.02283 \(\pm\) 0.00035 & 0.0226 & 0.02272 \(\pm\) 0.00036 \\
\(\omega_{\rm cdm}\) & 0.1209 & 0.1201 \(\pm\) 0.0022 & 0.1187 & 0.1191\({}^{+0.0026}_{-0.002}\) & 0.1190 & 0.1200 \(\pm\) 0.0034 \\
\(\sigma_{8}\) & 0.8116 & 0.792 \(\pm\) 0.034 & 0.7715 & 0.768\({}^{+0.037}_{-0.043}\) & 0.7591 & 0.807\({}^{+0.055}_{-0.049}\) \\
\(n_{s}\) & 0.9837 & 0.970 \(\pm\) 0.018 & 0.9952 & 0.968 \(\pm\) 0.023 & 0.9515 & 0.969\({}^{+0.043}_{-0.05}\) \\
\hline \(h\) & 0.6774 & 0.6793 \(\pm\) 0.0070 & 0.6846 & 0.6828\({}^{+0.0070}_{-0.0083}\) & 0.6832 & 0.679 \(\pm\) 0.011 \\
\(\Omega_{\rm m}\) & 0.3157 & 0.311 \(\pm\) 0.011 & 0.3044 & 0.306 \(\pm\) 0.012 & 0.3047 & 0.311 \(\pm\) 0.018 \\
\(f\sigma_{8}\) & 0.4747 & 0.462 \(\pm\) 0.020 & 0.4482 & 0.447\({}^{+0.021}_{-0.025}\) & 0.4411 & 0.470\({}^{+0.033}_{-0.028}\) \\
\hline
\end{tabular}
\end{table}
Table 3: Parameter constraints from the base-\(\Lambda\)CDM fits to the BOSS DR12 CMASS galaxy catalogue, using the combination of density-split clustering and the galaxy 2PCF, or each of them separately.

Within \(\Lambda\)CDM, the linear growth rate of structure can be approximated as
\[f(z)\approx\Omega_{\rm m}(z)^{0.55}\,. \tag{23}\]
We use this expression to derive a value for \(f\sigma_{8}\) at the effective redshift of CMASS from our chains5.
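As a sanity check of Eq. (23) and the numbers in Table 3, the snippet below evaluates \(f\sigma_{8}(z)\) for a flat \(\Lambda\)CDM background with matter and a cosmological constant only, computing the linear growth factor from the standard integral expression; this simplified background is our own assumption for the sketch.

```python
import numpy as np
from scipy.integrate import quad

def E(z, Om):
    """Dimensionless Hubble rate for flat LCDM (matter + Lambda only)."""
    return np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

def Om_of_z(z, Om):
    """Matter density parameter at redshift z."""
    return Om * (1 + z) ** 3 / E(z, Om) ** 2

def growth_D(z, Om):
    """Unnormalized linear growth factor D(z) for flat LCDM."""
    a = 1.0 / (1 + z)
    integral = quad(lambda x: 1.0 / (x * E(1 / x - 1, Om)) ** 3, 1e-8, a)[0]
    return 2.5 * Om * E(z, Om) * integral

def f_sigma8(z, Om, sigma8_0):
    """f(z) ~ Om(z)^0.55 (Eq. 23) times sigma8(z) = sigma8 * D(z)/D(0)."""
    return Om_of_z(z, Om) ** 0.55 * sigma8_0 * growth_D(z, Om) / growth_D(0.0, Om)

print(f_sigma8(0.525, Om=0.311, sigma8_0=0.792))   # ~0.46, cf. Table 3
```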
We obtain \(f\sigma_{8}(z=0.525)=0.462\pm 0.020\), which is a 4.3 per cent constraint from the combination of DSC and the galaxy 2PCF. We contrast this constraint with the PL18 prediction and other clustering studies in Fig. 5. Our constraint is slightly lower than the mean of PL18, but consistent at the \(1\sigma\) level. We also find good agreement with the results of the main BOSS clustering analysis (Alam et al., 2017), but we obtain a factor of 1.9 better precision using the DSC + galaxy 2PCF combination, compared to their middle redshift bin covering \(0.4<z<0.6\).

Footnote 5: As we are doing a direct fit of \(\Lambda\)CDM parameters and marginalizing over \(h\), our results are not affected by the problems of the standard template-based clustering analyses pointed out in Sanchez (2020).

We find a higher mean \(f\sigma_{8}\) than Yuan et al. (2022b) (hereafter Y22), although there is still consistency at the \(0.7\sigma\) level. Interestingly, Y22 constrain \(f\sigma_{8}\) to a precision similar to our baseline analysis, even though they only use the galaxy 2PCF at scales \(<30\,h^{-1}\)Mpc. Our 2PCF-only fit is a factor of 1.8 worse than theirs, which seems puzzling given that their model is based on the same set of simulations used in this study. We attribute this difference mainly to the fact that, while we estimate the intrinsic emulator error using a test set of simulations that covers the whole prior range in the cosmology and HOD parameter space, Y22 estimate the error using only HOD catalogues that have a high likelihood with respect to the measured data vector. Our emulator error will generally tend to be more conservative, as it also considers the error around regions near the edge of the priors, which are usually harder to emulate. It also dominates the error budget at small scales, precisely where Y22 extract most of their information. Effectively, this means we are discarding some small-scale information which they keep. We have verified that, by artificially lowering or removing the emulator error, the precision of our galaxy 2PCF constraints closely matches that of Y22. Other factors that could potentially add to this are that Y22 use Gaussian priors on the HOD parameters while we use flat priors, and that Y22 estimate the data covariance through jackknife resampling, while our covariance is estimated from mock catalogues.

Our results are \(2.2\sigma\) higher than those reported by Zhai et al. (2023), who find \(f\sigma_{8}(z=0.55)=0.396\pm 0.022\). Zhai et al. (2023) train an emulator using the Aemulus suite of simulations to fit the small-scale clustering of BOSS galaxies using the galaxy 2PCF between \(0.1\,h^{-1}\)Mpc and \(60\,h^{-1}\)Mpc. Although we have not tested our emulator against Aemulus directly, our companion paper (Cuesta-Lazaro et al., 2023) shows that our model can recover unbiased cosmological constraints when fitted to mock galaxy catalogues generated with a different N-body code and galaxy-halo connection model, specifically galaxies generated with the subhalo abundance matching technique of Zhai et al. (2023) from the Uchuu suite of simulations (Ishiyama et al., 2021; Dong-Paez et al., 2022; Oogi et al., 2023; Aung et al., 2023; Prada et al., 2023). Applying our emulator to Aemulus could be a useful step to explore in future work to better understand the source of this discrepancy.
Yu et al. (2023) use Eulerian perturbation theory in combination with a halo model calibrated on N-body simulations to model the power spectrum multipoles from BOSS DR12 up to \(k=0.2\,h\)Mpc\({}^{-1}\), finding \(f\sigma_{8}(z=0.61)=0.455\pm 0.026\), which is in excellent agreement with our constraint, albeit at a slightly higher effective redshift than our sample. Comparing with a full-shape analysis based on the Effective Field Theory of Large-Scale Structure (EFTofLSS, Carrasco et al., 2012), we find an \(f\sigma_{8}\) value that is \(1.7\sigma\) higher than the one derived by d'Amico et al. (2020), who fit the power spectrum multipoles of BOSS DR12 galaxies up to \(k=0.2\,h\)Mpc\({}^{-1}\) and report \(f\sigma_{8}(z=0.55)=0.399\pm 0.031\), which is closer to the Zhai et al. (2023) estimate. Although our \(\Omega_{\rm m}\) constraints agree to within \(0.1\sigma\) with d'Amico et al. (2020), their predicted amplitude of the primordial power spectrum, \(A_{s}\), which is \(2.3\sigma\) lower than PL18, drives their derived \(f\sigma_{8}\) to lower mean values.

Finally, Semenaite et al. (2022) presented a clustering analysis of BOSS and eBOSS data, using information from the full shape of the BOSS clustering wedges presented by Sanchez et al. (2017) and the multipoles of eBOSS quasars from Hou et al. (2021). They derive \(\sigma_{8}=0.815\pm 0.044\) and \(\Omega_{\rm m}=0.290^{+0.012}_{-0.014}\), which agree with our constraints at the \(0.4\sigma\) and \(1.2\sigma\) level, respectively.

### 4.3 Extended-\(\Lambda\)CDM constraints

To explore potential deviations from \(\Lambda\)CDM, we have analyzed a grid of well-motivated extensions to the base model. Figure 6 and Table 4 summarize the results of single-parameter extensions to the base-\(\Lambda\)CDM model. All these fits have been run using the baseline configuration of data vectors, scale cuts, and model prescription, except for the addition of a single parameter to the cosmological model, incorporated one at a time. We do not find compelling evidence supporting any of these extensions, as the marginalized posteriors for the additional parameters generally overlap with the base model within one standard deviation.

\begin{table}
\begin{tabular}{|c|c|c|}
\hline Parameter & best-fit & mean \(\pm\sigma\) \\
\hline \(N_{\rm eff}\) & 3.0759 & \(3.02^{+0.24}_{-0.27}\) \\
\({\rm d}n_{s}/{\rm d}\ln k\) & \(-0.0022\) & \(-0.005\pm 0.013\) \\
\hline \(w_{0}\) & \(-0.9444\) & \(-0.956^{+0.046}_{-0.041}\) \\
\hline
\end{tabular}
\end{table}
Table 4: Constraints on single-parameter extensions to the base-\(\Lambda\)CDM model from the BOSS DR12 CMASS data, using the combination of density-split clustering and the galaxy 2PCF.

#### 4.3.1 Running of the spectral index

The first extension we consider concerns the scale dependence of the primordial density fluctuations. Here we characterize the primordial power spectrum as a power law with a normalization amplitude \(A_{s}\), a spectral index \(n_{s}\), and its first derivative with respect to \(\ln k\) (also known as _the running_):
\[P(k)=A_{s}\left(\frac{k}{k_{0}}\right)^{n(k)}\,, \tag{24}\]
\[n(k)=n_{s}-1+\frac{1}{2}\,\frac{{\rm d}n_{s}}{{\rm d}\ln k}\,\ln(k/k_{0})\,, \tag{25}\]
where \(k_{0}\) is a pivot wavenumber used to specify the point at which the power spectrum is normalized. Our base-\(\Lambda\)CDM constraints from Sect. 4.2 assumed zero running of the spectral index, finding \(n_{s}=0.970\pm 0.018\).
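Eqs. (24)-(25) are simple to evaluate numerically. In the sketch below, the pivot scale \(k_{0}=0.05\,{\rm Mpc}^{-1}\) and the amplitude value are our own (PL18-like) assumptions for illustration, while the running value matches the mean reported in Table 4.

```python
import numpy as np

def primordial_power(k, A_s, n_s, running, k0=0.05):
    """Power-law spectrum with running, Eqs. (24)-(25); k and k0 in the same units."""
    n_k = n_s - 1.0 + 0.5 * running * np.log(k / k0)
    return A_s * (k / k0) ** n_k

k = np.logspace(-4, 0, 200)                          # wavenumbers around the pivot
P_base = primordial_power(k, A_s=2.1e-9, n_s=0.970, running=0.0)
P_run = primordial_power(k, A_s=2.1e-9, n_s=0.970, running=-0.005)  # Table 4 mean
```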
CMB experiments also favour \(n_{s}<1\), which is predicted by common single-field slow-roll inflationary models (Mukhanov, 2007). Such models also predict a very small running, since it is only second order in the inflationary slow-roll parameters (Kosowsky and Turner, 1995), but it is possible to construct valid models that predict larger values. We find \({\rm d}n_{s}/{\rm d}\ln k=-0.005\pm 0.013\), which is consistent with zero running. The 1D marginalized posterior is shown in the left-hand panel of Fig. 6, where we also overplot the constraint from the Planck_TTTEEE_lowl_lowE likelihood of PL18, who find \(-0.0055\pm 0.0067\). We warn the reader that the parameter range for the running shown in Fig. 6 corresponds to our full prior range, beyond which we do not have simulations to sample the parameter space with our model. The posterior distribution quickly approaches zero near the prior walls, giving us confidence that our \(1\sigma\) limits are not prior-dominated. However, we cannot confidently provide \(3\sigma\) limits given this restriction.

#### 4.3.2 Effective number of relativistic species

A relic neutrino background is a generic prediction of the standard hot Big Bang model (Lesgourgues and Pastor, 2014). Constraints on the properties of relic neutrinos and other relativistic species beyond the Standard Model of particle physics are of special interest for large-scale structure analyses. The combination of CMB experiments with galaxy and supernova surveys has put the tightest upper limits on the sum of neutrino masses (Emas and Wulandari, 2019), whereas Planck has constrained the density of light relics with high precision (Planck Collaboration et al., 2020). In the instantaneous neutrino decoupling limit, the density of radiation in the Universe (besides photons) can be written as (Lesgourgues and Pastor, 2014)
\[\frac{\rho_{\nu}}{\rho_{\gamma}}=\frac{7}{8}N_{\rm eff}\left(\frac{4}{11}\right)^{4/3}\,, \tag{26}\]
where \(N_{\rm eff}\), usually called the effective number of relativistic species, is a convenient parametrization of the relativistic energy density of the Universe beyond photons, in units of a single neutrino species; \(N_{\rm eff}=3\) for three neutrino species in the instantaneous decoupling limit. Detailed calculations that go beyond this limit, including neutrino oscillations, predict \(N_{\rm eff}\approx 3.046\) (de Salas and Pastor, 2016).

For the base-\(\Lambda\)CDM constraints from Sect. 4.2, we fixed \(N_{\rm eff}\) to its baseline value. In this section, we vary this parameter during the inference analysis, finding \(N_{\rm eff}=3.02^{+0.24}_{-0.27}\), which is an 8.4 per cent precision constraint, in excellent agreement with the PL18 measurement.

#### 4.3.3 Dark energy equation of state

One of the main goals of modern observational cosmology is elucidating the nature of the accelerated expansion of the Universe. In the base-\(\Lambda\)CDM model, dark energy takes the form of a cosmological constant with an equation of state \(w_{0}\equiv p/\rho=-1\), where \(p\) and \(\rho\) represent the pressure and density of the fluid. In this section, we explore models with a constant \(w\) by letting it vary as a free parameter in the fit. The right-hand panel of Fig. 6 shows the marginalized posterior of \(w\) for the combination of the galaxy 2PCF and DSC.
Figure 5: Growth rate of structure derived from our analysis of the BOSS DR12 CMASS sample, combining density-split clustering and galaxy 2PCF measurements. The base-\(\Lambda\)CDM prediction, based on the best-fit Planck 2018 cosmology, is shown in blue, where the darker and lighter shades represent 68 and 95 per cent confidence intervals. We also compare with other measurements from the literature at different redshifts. The data include 6dFGS (Beutler et al., 2012), eBOSS (de Mattia et al., 2021; Bautista et al., 2021; Chapman et al., 2022), as well as other studies performed on BOSS: the consensus DR12 analysis (Alam et al., 2017), a LOWZ simulation-based analysis (Lange et al., 2022), CMASS simulation-based analyses (Yuan et al., 2022; Kobayashi et al., 2022; Zhai et al., 2023), an EFTofLSS analysis (d'Amico et al., 2020), and a halo perturbation theory analysis (Yu et al., 2023). In some cases, the redshifts of the measurements have been slightly shifted horizontally for visual clarity.

We find \(w_{0}=-0.956^{+0.046}_{-0.041}\), which is consistent with a fiducial cosmological constant at the \(1.1\sigma\) level. CMB data alone (blue contour) does not put a very tight constraint on \(w_{0}\), as it is a \(z\approx 1100\) measurement. The grey contours show the posterior resulting from the combination of the Planck_TTTEEE_lowl_lowE PL18 likelihood with late-time probes of the expansion rate, including BAO measurements from BOSS DR12 (Alam et al., 2017), SDSS-MGS (Ross et al., 2015), and 6dFGS (Beutler et al., 2011), as well as type Ia supernova distance measurements from the Pantheon sample (Scolnic et al., 2015), and local estimates of the Hubble parameter from Milky Way Cepheid variables from Riess et al. (2018). This combination tightens the marginalized posterior to \(w_{0}=-1.041^{+0.060}_{-0.053}\), which agrees with our constraint at the \(1.1\sigma\) level. We emphasize that for our results from galaxy clustering, we have adopted the prior constraint for the acoustic scale \(\theta_{*}\) from PL18, so our constraints on \(w_{0}\) are not fully independent from the Planck_TTTEEE_lowl_lowE likelihood.

### Tests for systematics

In this section, we carry out tests on mock galaxy catalogues to look for systematics in the theoretical modelling, and we assess the robustness of our cosmological constraints to different choices in our inference pipeline.

#### 4.4.1 Recovery tests on the Nseries mocks

We begin by testing our pipeline on the Nseries mock galaxy catalogues, which were calibrated onto the clustering of the CMASS NGC galaxy sample, matching its footprint and radial selection (Sect. 3.1.2). We measure the galaxy 2PCF and density-split multipoles from each of the 84 mock realizations, and analyze each mock using the baseline configuration of our pipeline, as we did in Sect. 3. We estimate the covariance matrix of the data vectors from the MD-Patchy mocks, following the same procedure as described in Eq. (16). The left panel of Fig. 8 shows the distribution of recovered best-fit values from the 84 fits. The true cosmology of the simulations, which is shown by the vertical red lines, is well within the 68 per cent confidence region of the distribution, showing that our clustering pipeline is able to recover unbiased cosmological constraints even in the presence of complex survey masks, fiber collisions and non-uniform radial selection functions. 
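The covariance estimation step can be summarized by the following schematic sketch; the estimator shown is the standard sample covariance over mock realizations, and any correction factors for the finite number of mocks that enter Eq. (16) are omitted here.

```python
import numpy as np

def mock_covariance(data_vectors):
    """Sample covariance of the data vector estimated from mocks.
    data_vectors has shape (n_mocks, n_bins), one row per mock
    realization of the concatenated multipoles. Corrections for the
    finite number of mocks (e.g. the Hartlap factor) are omitted."""
    n_mocks = data_vectors.shape[0]
    diff = data_vectors - data_vectors.mean(axis=0)
    return diff.T @ diff / (n_mocks - 1)

# synthetic stand-in: 2048 mock realizations of a 100-bin data vector
rng = np.random.default_rng(0)
cov = mock_covariance(rng.normal(size=(2048, 100)))
print(cov.shape)  # (100, 100)
```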
As a complementary test, we proceed to average the data vectors across the 84 mock realizations, and perform cosmological fits on the mean data vectors. The right panel of Fig. 8 shows the recovered cosmological parameters in this setup, using two different covariance matrices. The grey contours show results with the usual covariance, matched to the volume covered by the CMASS sample used throughout the paper. The black contours show results where the covariance is rescaled to match the comoving volume covered by the 84 realizations\({}^{6}\) of the Nseries suite, which amounts to roughly \(120\,h^{-3}\,{\rm Gpc}^{3}\) after applying the redshift cuts that match our data sample, \(0.45<z<0.6\). Even using this large volume, the true cosmology of the simulations falls within one standard deviation of the marginalized parameter posterior distributions, highlighting the robustness of our pipeline even for datasets far larger than the one analysed in this paper.

Footnote 6: It should be noted, however, that the 84 realizations from Nseries are not fully independent from each other, since they were generated from only seven large cubic boxes that were rotated and trimmed in different ways to construct the cutsky mocks.

Figure 10 shows the best fit to the galaxy 2PCF measured from the mean of the Nseries samples (using the covariance associated with a single CMASS volume), where we also overplot the measurement from the CMASS sample for comparison. Overall, the fit to the Nseries mocks is accurate at all scales, with the best-fit model always falling within one standard deviation of the measurements. This can be contrasted with the fit to the CMASS data seen earlier in Sect. 4.1, where the data show a lower and higher clustering amplitude at intermediate and large scales compared to the model, respectively. CMASS also shows a similar difference in clustering with respect to the mean of the Nseries mocks. The \(\chi^{2}\) between the mocks and the data monopole is 30 for 23 degrees of freedom, which corresponds roughly to a \(2\sigma\) shift. Although we do not show the other statistics for brevity, we have observed the same trends for DSC. However, it is worth keeping in mind that the cosmology of the Nseries simulations differs substantially from the fiducial cosmology we adopted to convert redshifts to distances, so the Nseries multipoles could be affected by more severe AP distortions than the CMASS data, if the true cosmology of the Universe is closer to our fiducial cosmology than to that of the mocks.

Figure 7 shows the distribution of log-evidence from the fits to the 84 Nseries mocks and CMASS. We include the results obtained using two different scale cuts: \(s_{\rm min}=1\,h^{-1}\)Mpc and \(s_{\rm min}=50\,h^{-1}\)Mpc. We see that regardless of the scale cut that is used, the evidence of the CMASS fit is significantly lower than for any of the Nseries mocks. This means that, overall, the model is a much better description of the mocks than it is of the data, even when marginalizing over all the parameter space of the model.

Figure 6: Constraints on single-parameter extensions to the base-\(\Lambda\)CDM model from the galaxy 2PCF + density-split clustering fits on BOSS DR12 CMASS (pink), Planck TT,TE,EE+lowl+lowE+BAO+SNe (grey). Extensions include variations in the running of the spectral index of the primordial power spectrum, \({\rm d}n_{s}/{\rm d}\ln k\), the effective number of relativistic species, \(N_{\rm eff}\), and the dark energy equation of state parameter, \(w_{0}\). 
This supports the hypothesis that there might be residual systematic effects in the CMASS clustering catalogues that are currently unaccounted for by the weighting procedure (Lavaux et al., 2019). While removing all the information below \(s=50\,h^{-1}\)Mpc reduces the gap between Nseries and CMASS, it is still statistically unlikely that the large offsets between the baseline model and the data can be fully explained by a random fluctuation due to sample variance in the observations, if we assume that the mocks are a faithful reproduction of the CMASS data. This could be a combination of the above-mentioned systematic effects with insufficient modelling of the halo–galaxy connection on small scales. Given the small number of Nseries mocks, we cannot translate these findings into more quantitative statements.

#### 4.4.2 Scale cuts

Another important aspect we study with the Nseries mocks is the choice of scales that are used in the main data analysis. Figure 9 shows the marginalized constraints on \(\omega_{\rm cdm}\), \(\sigma_{8}\), \(n_{s}\), and \(f\sigma_{8}\), adopting eight different minimum scale cuts, ranging from \(1\,h^{-1}\)Mpc to \(60\,h^{-1}\)Mpc. For these measurements, we use the mean of the 84 Nseries mocks as the data vector, and a covariance associated with a single CMASS volume. We observe that for Nseries, the mean values of the marginalized posteriors are very stable when changing the scale cuts for all parameters that are considered. It is interesting to note that the size of the error bars does not shrink beyond \(\approx 20\,h^{-1}\)Mpc. There are two main reasons for this. The first one is the inclusion of the model uncertainty in the covariance matrix that is used for the calculation of the likelihood [Eq. (15)]. At these scales, although the emulator error has percent-level accuracy, it starts to dominate the total error budget for the monopole of the 2PCF and the density-split CCF. This ensures that the cosmological inference is always robust, even when including scales where the emulator is less accurate than the precision of the data, which comes at the expense of not being able to extract all the information that is available. The second reason is that even though we are imposing a minimum scale cut in the multipoles, the DSC quintiles are defined using small-scale information from the density field, which also propagates into the multipoles at larger scales, as shown in Paillas et al. (2023).

Figure 9 also shows how the constraints from CMASS data change depending on the minimum scale cut. Although the results we get using \(s_{\rm min}=1\,h^{-1}\)Mpc are consistent with those obtained with more conservative cuts to within \(1\sigma\), we observe some interesting trends with scale. \(\omega_{\rm cdm}\) shows some tendency towards larger mean values the more small-scale information we include. Furthermore, \(\sigma_{8}\) transitions to larger mean values going from \(s_{\rm min}=20\,h^{-1}\)Mpc to \(s_{\rm min}=5\,h^{-1}\)Mpc. The fact that these two effects are not observed in the Nseries mocks could be partially attributed to noise in the CMASS data vector, residual observational systematic effects that the mocks do not capture, or deficiencies in the galaxy–halo connection modelling, which is mostly constrained by small scales. One of the most common systematics that can affect small-scale clustering is fiber assignment, which artificially lowers the clustering amplitude on scales smaller than the fiber collision angular scale. 
The Nseries mocks are already infused with fiber collisions that should closely match the BOSS data, so any important effect coming from this should also be imprinted in the multipoles measured from the mocks. Based on the fact that the constraints from Nseries are robust against variations in \(s_{\rm min}\), and that the CMASS constraints with different scale cuts are consistent to within \(1\sigma\), we adopt \(s_{\rm min}=1\,h^{-1}\)Mpc as the baseline for our analysis.

#### 4.4.3 Systematic error budget

Based on the tests described in the previous section, we use the Nseries mocks to determine the contribution to the systematic error budget coming from our emulator. The emulator was trained on periodic boxes at a fixed redshift, but we are applying it to fit a survey that has a non-uniform footprint and radial selection, along with fiber collision effects that artificially decrease the clustering on small scales. Thus, there may be errors in addition to the emulator error term determined from the test sample when the emulator was constructed. To look for such error terms, we calculate the offset between the expected value of each cosmological parameter and the mean of the marginalized posteriors from the fit to the mean of the 84 mocks, using the covariance of a \(120\,h^{-3}\,{\rm Gpc}^{3}\) volume (solid contours in the right-hand side panel of Fig. 8). For the combination of DSC and the galaxy 2PCF, the offsets and their associated \(2\sigma\) uncertainties are:

\[\Delta\omega_{\rm cdm}=0.00015^{+0.0018}_{-0.0018}\]
\[\Delta\sigma_{8}=0.01288^{+0.018}_{-0.018}\]
\[\Delta n_{s}=0.00801^{+0.020}_{-0.019}\]
\[\Delta f\sigma_{8}=0.00657^{+0.011}_{-0.011}\quad\text{(DSC + galaxy 2PCF)}\,,\]

while the fits that only include the galaxy 2PCF give

\[\Delta\omega_{\rm cdm}=0.00002^{+0.0040}_{-0.0039}\]
\[\Delta\sigma_{8}=0.01151^{+0.046}_{-0.045}\]
\[\Delta n_{s}=0.0021^{+0.039}_{-0.038}\]
\[\Delta f\sigma_{8}=0.00607^{+0.028}_{-0.027}\quad\text{(galaxy 2PCF)}\,.\]

We do not find statistically significant offsets from the expected values, with the shifts for all parameters lying well within their \(2\sigma\) limits. However, the level of precision to which we can carry out this test is set by the number of mocks that are available, which limits the total effective volume used in the test. We take the \(2\sigma\) limit of the marginalized distributions of best-fit values (left-hand side of Fig. 8), divided by \(\sqrt{84}\), as a conservative (maximum) estimate of the systematic error for each parameter, which is then added in quadrature to the associated statistical errors. We estimate this from the distribution of individual fits divided by \(\sqrt{84}\) rather than from the fit to the mean, because the \(2\sigma\) limits of the latter are dominated by the emulator error on the multipoles, which is already included in the covariance when we calculate the likelihood [Eq. (15)]. In this way, we avoid double counting the emulator uncertainty, only including the limiting precision that comes from the finite number of mocks used for the test.

Figure 7: Logarithm of the Bayesian evidence obtained from individual fits to the Nseries mocks (histograms) and to the BOSS DR12 CMASS data (dashed lines). We show results for two different minimum scale cuts: \(s_{\rm min}=1\,h^{-1}\)Mpc (green) and \(s_{\rm min}=50\,h^{-1}\)Mpc (pink). The evidence of the CMASS fit is significantly lower than any of the Nseries mocks in both cases, but this discrepancy is reduced when excluding small scales. 
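A minimal sketch of the procedure just described, assuming Gaussian statistics for the distribution of best-fit values; the illustrative numbers correspond to \(\sigma_{8}\).

```python
import numpy as np

def systematic_error(best_fits):
    """Conservative per-parameter systematic error: the 2-sigma width
    of the distribution of best-fit values over the N = 84 individual
    Nseries fits, divided by sqrt(N)."""
    n = len(best_fits)
    return 2.0 * np.std(best_fits, ddof=1) / np.sqrt(n)

def total_error(stat, sys):
    """Statistical and systematic errors added in quadrature."""
    return np.hypot(stat, sys)

# illustrative check for sigma_8: a systematic error of the size quoted
# below barely changes the statistical error from Table 3
print(total_error(0.034, 0.0037))  # ~0.0342
```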
With this setup, the systematic error added to each parameter for the combination of DSC + galaxy 2PCF is:

\[\sigma^{\rm sys}_{\omega_{\rm b}}=0.0000156\]
\[\sigma^{\rm sys}_{\omega_{\rm cdm}}=0.00037\]
\[\sigma^{\rm sys}_{\sigma_{8}}=0.00370\]
\[\sigma^{\rm sys}_{n_{s}}=0.00354\quad\text{(DSC + galaxy 2PCF)}\,,\]

while for the 2PCF-only fits we get

\[\sigma^{\rm sys}_{\omega_{\rm b}}=0.0000074\]
\[\sigma^{\rm sys}_{\omega_{\rm cdm}}=0.00046\]
\[\sigma^{\rm sys}_{\sigma_{8}}=0.00519\]
\[\sigma^{\rm sys}_{n_{s}}=0.00348\quad\text{(galaxy 2PCF)}\,.\]

These systematic errors are added in quadrature to the statistical error budget. They are negligible compared to the statistical errors on the parameter constraints from CMASS, so the values reported in Table 3 are unaffected, up to the significant figures that are shown.

#### 4.4.4 Robustness against pipeline settings

Here we explore the robustness of our clustering analysis against different choices of settings in the inference pipeline. The tests in this section are performed on the real data, no longer using the Nseries mocks. As a reminder, our baseline configuration consists of:

- Data vector: Concatenation of the monopole and quadrupole moments of the DSC auto-correlation and cross-correlation functions, using four quintiles (Q\({}_{0}\), Q\({}_{1}\), Q\({}_{3}\), and Q\({}_{4}\)) and the galaxy 2PCF.
- Parameter space: Base-\(\Lambda\)CDM model with an extended HOD framework, including velocity bias and environment-based assembly bias: \[\mathbf{\theta}_{\rm cosmo}=\{\omega_{\rm cdm},\omega_{\rm b},\sigma_{8},n_{s}\}\] \[\mathbf{\theta}_{\rm HOD}=\{M_{\rm cut},M_{1},\sigma,\alpha,\kappa,\alpha_{\rm vel,c},\alpha_{\rm vel,s},B_{\rm cen},B_{\rm sat}\}\]
- Priors: Uniform priors for all parameters except the baryon density, for which we adopt a BBN-like Gaussian prior, as detailed in Table 1.
- Error budget: The covariance used in the likelihood calculation includes contributions from sample variance of the data vector, and model uncertainty associated with the emulator training (a schematic implementation of this likelihood is sketched below).

Figure 8: Marginalized constraints on \(\omega_{\rm cdm}\), \(\sigma_{8}\), and \(n_{s}\) from fits to the Nseries mock catalogues, which were calibrated onto the clustering of the CMASS NGC sample, matching its number density and geometry. The fits were performed using the baseline configuration of our analysis, consisting of the combination of the density-split and the galaxy correlation functions at scales \(1.0\,h^{-1}\)Mpc \(<s<150\,h^{-1}\)Mpc. Left: the dots represent the best-fit values of individual fits to 84 realizations of the Nseries mocks. The contours show the 68 and 95 per cent confidence regions of the distribution of individual fits. Right: fits to data vectors that are averaged over the 84 Nseries mocks, using a covariance rescaled to match the volume of the CMASS sample used in our main analysis (dashed grey) or the total volume of the Nseries suite (solid black).

Figure 9: Impact of the minimum scale cut on the constraints on base-\(\Lambda\)CDM parameters and \(f\sigma_{8}\) when fitting the mean of the Nseries mocks (red circles) or the BOSS DR12 CMASS data (blue squares). The horizontal dashed lines show the true cosmology of the Nseries simulations. Note that the horizontal axes do not use a linear scale.

Figure 11 shows the constraints that result from varying various aspects of these settings. 
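Before walking through these variations, the likelihood evaluation implied by the error budget above can be sketched as follows; the names `cov_data`, `cov_emu` and `cov_sim` are illustrative labels for the sample-variance and emulator-related contributions, and the implementation details of our actual pipeline may differ.

```python
import numpy as np

def log_likelihood(data, model, cov_data, cov_emu, cov_sim):
    """Gaussian log-likelihood with the combined error budget.
    The total covariance adds the sample variance of the data vector
    to the emulator-related terms; dropping cov_emu and cov_sim
    reproduces the 'no model uncertainty' variation discussed below."""
    cov = cov_data + cov_emu + cov_sim
    diff = data - model
    chi2 = diff @ np.linalg.solve(cov, diff)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (chi2 + logdet + diff.size * np.log(2.0 * np.pi))
```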
Starting from top to bottom, we try a uniform prior for \(\omega_{\rm b}\), finding that it shifts the means of the marginalized posteriors by less than \(0.2\sigma\), but degrades the constraining power on \(\omega_{\rm cdm}\) by 15 per cent. \(\omega_{\rm b}\) itself is basically unconstrained with our analysis under a uniform prior, which is the main motivation for adopting a BBN prior, following other clustering studies in the literature (e.g., Alam et al., 2017; Ivanov et al., 2020; Philcox and Ivanov, 2022).

Using only the monopole of the correlation functions weakens the precision of the constraints on \(\omega_{\rm cdm}\) and \(\sigma_{8}\) by 18 and 13 per cent, respectively, resulting in an 18 per cent degradation of the precision on \(f\sigma_{8}\).

Using only the most underdense and overdense quintiles (Q\({}_{0}\) and Q\({}_{4}\), respectively) degrades the precision on \(\omega_{\rm cdm}\) by 9 per cent. The precision on \(\sigma_{8}\) increases by 15 per cent and its mean is shifted to slightly lower values. The increase in precision for \(\sigma_{8}\) might seem counterintuitive, but can be explained by the fact that individual quintiles can sometimes lead to marginalized posteriors with slightly different mean values, which is exemplified by the rows where we fit Q\({}_{0}\) and Q\({}_{4}\) separately. This can produce wider contours when all quintiles are fitted simultaneously. Overall, the fact that the constraining power on most parameters is not severely degraded when we fit only the extreme quintiles (which effectively discards half of the DSC data set) agrees with the picture that most of the information from DSC comes from the very underdense and overdense regions, as suggested by Paillas et al. (2021).

DSC by itself predicts \(\sigma_{8}=0.768^{+0.037}_{-0.043}\), which is slightly lower than the predicted value from the 2PCF-only fit. However, both the DSC-only and 2PCF-only fits are consistent within one standard deviation with the baseline fit.

We observe that the precision on \(\omega_{\rm cdm}\) degrades by a factor of 2.4 when we allow \(N_{\rm eff}\) to vary, which is due to the strong correlation between these parameters. However, the precision for the other parameters remains relatively stable. Also allowing \({\rm d}n_{s}/{\rm d}\ln k\) and \(w_{0}\) to vary does not significantly affect the constraining power on the base-\(\Lambda\)CDM parameters.

We find an interesting shift of the estimated \(\sigma_{8}\) to lower values when we fix the assembly bias or velocity bias parameters during the fit. As shown in Appendix A, our best-fit model constrains the environment-based assembly bias parameters to be negative, meaning that galaxies preferentially form in haloes in denser environments. We observe a mild correlation of \(\sigma_{8}\) with \(\alpha\), \(\alpha_{\rm vel,s}\) and \(B_{\rm sat}\) that is likely to explain the shifts in \(\sigma_{8}\) towards lower values when removing the assembly or velocity bias parameters.

Finally, we run a fit neglecting the contribution of the model uncertainty to the error budget, which can be achieved by removing \(\mathbf{C}^{\rm emu}\) and \(\mathbf{C}^{\rm sim}\) from the covariance used to calculate the likelihood. This has a drastic impact on the derived constraints, resulting in a precision that can be more than two times better than the baseline analysis. 
Although for this particular test we observe that the mean values of the recovered parameters do not shift significantly with respect to the baseline fit, we have explicitly verified that removing the emulator error results in cosmological constraints that can be biased at more than the \(3\sigma\) level when fitting the mean of the Nseries mocks down to \(1\,h^{-1}\)Mpc.

## 5 Discussion and Conclusions

We have presented a clustering analysis of the DR12 BOSS CMASS galaxy sample at \(0.45<z<0.6\), using simulation-based models of the galaxy two-point correlation function (2PCF) and density-split clustering (DSC). Our theory framework, which is presented in detail in our companion paper (Cuesta-Lazaro et al., 2023), is based on emulators trained on high-fidelity mock galaxy catalogues, which forward model the cosmological dependence of the full shape of the galaxy 2PCF and DSC multipoles, including redshift-space and Alcock-Paczynski distortions. It should be noted that due to the limitations of the simulation data available for training the emulator, our model fits impose a fixed prior on the acoustic scale \(\theta_{*}\) measured from the CMB (Planck Collaboration et al., 2020). This is, however, a very precise and model-independent measurement, so this prior does not significantly restrict our conclusions about the models analysed.

Figure 10: Monopole of the galaxy two-point correlation function, averaged over 84 realizations of the Nseries mocks (violet circles with error bars), along with the best-fit model and its associated uncertainty (violet solid line and bands). Also shown is the monopole from the BOSS CMASS galaxy sample (grey squares with error bars). The lower sub-panel shows the difference between the Nseries mocks and the best-fit model, in units of the error bars. The dark-grey and grey shaded regions demarcate \(1\sigma\) and \(2\sigma\) offsets, respectively.

We have validated our theory model against the Nseries mock galaxy catalogues, which were calibrated onto the clustering and selection properties of the CMASS galaxy sample, finding that we can recover unbiased cosmological constraints even using a volume of \(120\,h^{-3}\,{\rm Gpc}^{3}\), which is 84 times larger than the volume examined in our data analysis.

For our base-\(\Lambda\)CDM analysis, we find that the galaxy 2PCF constrains \(\omega_{\rm cdm}\), \(\sigma_{8}\), and \(n_{s}\) with a precision of 2.8, 6.1, and 3.2 per cent, respectively, using a scale range \(1\,h^{-1}{\rm Mpc}<s<150\,h^{-1}{\rm Mpc}\). Adding the DSC multipoles over the same scale range tightens the constraining power to a precision of 1.8, 4.3 and 1.8 per cent, respectively, obtaining \(\omega_{\rm cdm}=0.1201\pm 0.0022\), \(\sigma_{8}=0.792\pm 0.034\), and \(n_{s}=0.970\pm 0.018\). This is an improvement in precision by factors of 1.6, 1.4, and 1.8 with respect to the 2PCF-only constraints, respectively. Combining the galaxy 2PCF and DSC multipoles, we derive \(f\sigma_{8}=0.462\pm 0.020\) at \(z\approx 0.525\), which is a 4.3 per cent constraint. In comparison, the main BOSS clustering analysis presented in Alam et al. (2017) derived an 8.3 per cent constraint on \(f\sigma_{8}\) using both Galactic caps for their \(0.4<z<0.6\) redshift bin. We obtain 1.9 times better precision using only the Northern Galactic cap and a narrower redshift bin. 
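As a simple consistency check (our own arithmetic, not an additional analysis step), the quoted percentage precisions follow directly from the ratio of the \(1\sigma\) uncertainty to the mean of each constraint:

```python
# Fractional precision (sigma / mean, in per cent) of the combined
# DSC + 2PCF constraints quoted above:
constraints = {
    "omega_cdm": (0.1201, 0.0022),
    "sigma_8": (0.792, 0.034),
    "n_s": (0.970, 0.018),
    "f_sigma_8": (0.462, 0.020),
}
for name, (mean, err) in constraints.items():
    print(f"{name}: {100.0 * err / mean:.1f} per cent")
# roughly 1.8, 4.3, 1.9 and 4.3 per cent, matching the quoted
# precisions up to rounding
```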
This improvement mainly comes from the inclusion of higher-order clustering information that is captured by DSC, and from the addition of non-linear scales in the fits. Our \(f\sigma_{8}\) constraint is largely consistent with the Planck 2018 base-\(\Lambda\)CDM prediction (Planck Collaboration et al., 2020), and also agrees well with other clustering studies in the literature that use the same galaxy sample (Yuan et al., 2022; Yu et al., 2023). Our base-\(\Lambda\)CDM cosmological constraints are summarized in Table 3.

Figure 11: A comparison of the constraints on the base-\(\Lambda\)CDM parameters and the derived \(f\sigma_{8}\) value from fits run with different analysis settings. Our baseline analysis, shown at the top, was run by simultaneously fitting the galaxy 2PCF and density-split multipoles with a base-\(\Lambda\)CDM + extended-HOD model, using scales \(1\,h^{-1}{\rm Mpc}<s<150\,h^{-1}{\rm Mpc}\). The other points represent variations to that baseline configuration, either by changing the data vector or the model prescription.

We have also performed fits with single-parameter extensions to base-\(\Lambda\)CDM, where we vary the running of the spectral index of the primordial power spectrum (\({\rm d}n_{s}/{\rm d}\ln k\)), the density of massless relic neutrinos (\(N_{\rm eff}\)), and the dark energy equation of state parameter \(w_{0}\). Overall, we do not find compelling evidence for deviations from the base \(\Lambda\)CDM model, obtaining \(N_{\rm eff}=3.02^{+0.24}_{-0.27}\), \({\rm d}n_{s}/{\rm d}\ln k=-0.005\pm 0.013\), and \(w_{0}=-0.956^{+0.046}_{-0.041}\).

We have used an extended halo occupation distribution (HOD) framework to model the LRG galaxy-halo connection, using halo catalogues from the AbacusSummit suite of simulations. Our HOD constraints are largely consistent with the findings of Yuan et al. (2022). We constrain the minimum halo mass for hosting centrals to be \(\log M_{\rm cut}=12.65^{+0.08}_{-0.11}\), and the typical halo mass for hosting one satellite to be \(\log M_{1}=13.69^{+0.10}_{-0.15}\). We find signs of environment-based assembly bias, suggesting a preference for galaxies to form in haloes in denser environments.

Overall, we find that our model is able to fit the Nseries mock galaxy catalogues much more accurately than the CMASS data itself. We believe this is not a problem specific to our emulation framework, since the mean clustering signal of the Nseries mocks, which agrees with the clustering of the AbacusSummit simulations for similar cosmologies, shows a similar offset with respect to the CMASS data. This suggests the possibility that there might be residual systematic effects in the data that are currently not captured by the mocks and are not taken into account by the weighting procedure that we adopt when calculating the clustering. It is also worth noting that our emulator has been trained on HOD catalogues at a fixed redshift, \(z=0.5\), and we do not include any possible effects of redshift dependence in the modelling. Furthermore, the effective redshift of our CMASS sample, \(z_{\rm eff}=0.525\), differs slightly from the redshift of the HOD catalogues. Although the relatively narrow redshift range we are imposing in the CMASS catalogue (\(0.45<z<0.6\)) could alleviate some concerns regarding 
the redshift dependence of the theoretical model, a careful study of the impact of this simplification on the parameter constraints will become even more relevant when extending our framework to other datasets such as eBOSS (Dawson et al., 2016). We plan to study this in future work using the AbacusSummit lightcone simulations that have recently become available (Hadzhiyska et al., 2022).

On-going and upcoming galaxy redshift surveys, such as DESI (DESI Collaboration et al., 2016), Euclid (Laureijs et al., 2011) and the Nancy Grace Roman Space Telescope (Green et al., 2012), will open exciting avenues to use beyond-two-point statistics for cosmology, not only in terms of improved constraints on cosmological parameters, but also in regards to our understanding of the galaxy-halo connection, the treatment of observational systematics, and the possibility of finding surprises in the data that can challenge our preconceptions about the Cosmos.

## Acknowledgements

The authors thank Etienne Burtin for helpful discussions throughout the development of this project. This research was enabled in part by the support provided by Compute Ontario (computeontario.ca) and the Digital Research Alliance of Canada (alliancecan.ca). WP acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference number RGPIN-2019-03908] and from the Canadian Space Agency. SN acknowledges support from an STFC Ernest Rutherford Fellowship, grant reference ST/T005009/2. FB is a University Research Fellow and has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 853291). YC acknowledges the support of the Royal Society through a University Research Fellowship. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. This work was supported by the U.S. Department of Energy through grant DE-SC0013718 and under DE-AC02-76SF00515 to SLAC National Accelerator Laboratory, and by the Kavli Institute for Particle Astrophysics and Cosmology. This project used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231. The AbacusSummit simulations were run at the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. For the purpose of open access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising.

## Data Availability Statement

The source code to generate the figures in this manuscript is available at [https://github.com/florpi/sunbird](https://github.com/florpi/sunbird). The data from the AbacusSummit suite of simulations can be found at [https://abacusnbody.org](https://abacusnbody.org). Any other data necessary to reproduce the results of this work will be shared upon reasonable request to the corresponding author.
2308.16668
$2$-term averaging $L_\infty$-algebras and non-abelian extensions of averaging Lie algebras
In recent years, averaging operators on Lie algebras (also called embedding tensors in the physics literature) and associated tensor hierarchies form an efficient tool for constructing supergravity and higher gauge theories. A Lie algebra with an averaging operator is called an averaging Lie algebra. In the present paper, we introduce $2$-term averaging $L_\infty$-algebras and give characterizations of some particular classes of such homotopy algebras. Next, we study non-abelian extensions of an averaging Lie algebra by another averaging Lie algebra. We define the second non-abelian cohomology group to classify the equivalence classes of such non-abelian extensions. Next, given a non-abelian extension of averaging Lie algebras, we show that the obstruction for a pair of averaging Lie algebra automorphisms to be inducible can be seen as the image of a suitable Wells map. Finally, we discuss the Wells short exact sequence in the above context.
Apurba Das, Sourav Sen
2023-08-31T12:20:51Z
http://arxiv.org/abs/2308.16668v1
# \(2\)-term averaging \(L_{\infty}\)-algebras and non-abelian extensions of averaging Lie algebras

###### Abstract.

In recent years, averaging operators on Lie algebras (also called embedding tensors in the physics literature) and associated tensor hierarchies form an efficient tool for constructing supergravity and higher gauge theories. A Lie algebra with an averaging operator is called an averaging Lie algebra. In the present paper, we introduce \(2\)-term averaging \(L_{\infty}\)-algebras and give characterizations of some particular classes of such homotopy algebras. Next, we study non-abelian extensions of an averaging Lie algebra by another averaging Lie algebra. We define the second non-abelian cohomology group to classify the equivalence classes of such non-abelian extensions. Next, given a non-abelian extension of averaging Lie algebras, we show that the obstruction for a pair of averaging Lie algebra automorphisms to be inducible can be seen as the image of a suitable Wells map. Finally, we discuss the Wells short exact sequence in the above context.

2020 MSC classification: 17B40, 17B55, 17B56, 18G45.

Keywords: Averaging Lie algebras, Homotopy averaging Lie algebras, Non-abelian extensions, Wells exact sequence.

###### Contents

* 1 Introduction
* 2 Averaging Lie algebras
* 3 \(2\)-term homotopy averaging Lie algebras
* 4 Non-abelian extensions of averaging Lie algebras
* 5 Automorphisms of averaging Lie algebras and the Wells map
* 6 Abelian extensions of averaging Lie algebras: a particular case

## 1. Introduction

Studying algebras equipped with additional structures has been of central importance because of their rich mathematical properties and relevance in various disciplines of mathematics and mathematical physics. In the past couple of years, algebras equipped with derivations, Rota-Baxter operators and involutions were studied extensively because of their appearance in many different fields [3, 11, 12, 23]. Averaging operators on associative algebras, another interesting and classical object of study, drew the attention of the mathematical community, especially in the last couple of decades. Let \(A\) be an associative algebra. A linear map \(P:A\to A\) is said to be an averaging operator on \(A\) if

\[P(a)P(b)=P(P(a)b)=P(aP(b)),\ \mbox{ for }a,b\in A.\]

Kampé de Fériet [17] first explicitly defined the notion of averaging operators during the 1930s, although some applications of averaging operators can be traced back to 1895, when O. Reynolds [30] studied them in the context of turbulence theory in the guise of Reynolds operators. In the last century, averaging operators were mostly studied on various function spaces and Banach algebras. In 2000, W. Cao [4] studied averaging operators from an algebraic point of view while constructing free (commutative) averaging algebras. Subsequently, various mathematical studies of averaging algebras were carried out [6, 10, 28, 29, 33] in connection with combinatorics, number theory, replicators of binary operads, cohomology and deformation theory.

Averaging operators can also be defined on Lie algebras (see Definition 2.1). Such operators appeared in the work of Kotov and Strobl [18] under the name of embedding tensors. More precisely, they observed that averaging operators and associated tensor hierarchies form an effective tool in the construction of supergravity and higher gauge theories (see also [21]). It has been observed that an averaging operator induces a Leibniz algebra structure. 
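For readers who prefer a computational illustration, the following minimal sketch (ours, not taken from the references) verifies the averaging identities numerically for a classical example: block averaging, i.e. a conditional-expectation-type map, on the commutative algebra \(\mathbb{R}^{16}\) with pointwise multiplication.

```python
import numpy as np

def P(x, block=4):
    """Replace each entry of x by the mean over its block of indices.
    On the commutative algebra R^n with pointwise product, this
    conditional-expectation-type map is a classical averaging operator,
    in the spirit of Reynolds' original examples."""
    means = x.reshape(-1, block).mean(axis=1)
    return np.repeat(means, block)

rng = np.random.default_rng(1)
a, b = rng.normal(size=16), rng.normal(size=16)
print(np.allclose(P(a) * P(b), P(P(a) * b)))  # True
print(np.allclose(P(a) * P(b), P(a * P(b))))  # True
```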
A Lie algebra \(\mathfrak{g}\) equipped with an averaging operator \(P\) is called an averaging Lie algebra, denoted by \(\mathfrak{g}_{P}\). Recently, in [25], the authors considered the cohomology and deformation theory of averaging Lie algebras using the derived bracket approach. In the present paper, we consider various questions regarding averaging Lie algebras, which can be summarized as follows.

### 2-term homotopy averaging Lie algebras

The concept of \(L_{\infty}\)-algebras (also called strongly homotopy Lie algebras) plays a prominent role in various contexts of mathematics and mathematical physics [19, 20]. In their fundamental paper [1], Baez and Crans first considered 2-term \(L_{\infty}\)-algebras and their close relationship with categorified Lie algebras. Among other things, they considered 'skeletal' and 'strict' 2-term \(L_{\infty}\)-algebras and gave characterizations of them. Keeping in mind that 2-term \(L_{\infty}\)-algebras are a homotopy analogue of Lie algebras, it is natural to enquire about homotopy averaging operators. An implicit description of a homotopy averaging operator is given in [32]. In this paper, we first give an explicit description of a homotopy averaging operator on a 2-term \(L_{\infty}\)-algebra. However, a complete description of homotopy averaging operators on arbitrary \(L_{\infty}\)-algebras is yet to be found. Following the classical case, we call a 2-term \(L_{\infty}\)-algebra equipped with a homotopy averaging operator a 2-term averaging \(L_{\infty}\)-algebra. We show that 'skeletal' 2-term averaging \(L_{\infty}\)-algebras can be characterized by third cocycles of averaging Lie algebras (cf. Proposition 3.4 and Theorem 3.5). Next, we introduce crossed modules of averaging Lie algebras and show that 'strict' 2-term averaging \(L_{\infty}\)-algebras are characterized by crossed modules of averaging Lie algebras (cf. Theorem 3.8).

### Non-abelian extensions of averaging Lie algebras

Extensions (e.g. central extensions, abelian extensions, non-abelian extensions, etc.) of a mathematical object are useful for understanding the underlying structure [15, 24, 31]. Non-abelian extensions, being the most general among all kinds of extensions, deserve special mention. The theory of non-abelian extensions was first considered by Eilenberg and Mac Lane [7] for abstract groups. Subsequently, such extension theory was generalized to Lie algebras by Hochschild [14]. See also [5, 8, 9, 22, 26] for recent advances on non-abelian extensions of Lie groups, Lie algebras and Leibniz algebras. Recently, the non-abelian extension theory of Rota-Baxter Lie algebras and Rota-Baxter Leibniz algebras was developed in [13, 25]. In the present paper, we define and study non-abelian extensions of averaging Lie algebras. Among other things, we define the second non-abelian cohomology group \(H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\) of an averaging Lie algebra \(\mathfrak{g}_{P}\) with values in another averaging Lie algebra \(\mathfrak{h}_{Q}\). In Theorem 4.5, we show that the set of all equivalence classes of non-abelian extensions of \(\mathfrak{g}_{P}\) by \(\mathfrak{h}_{Q}\) is classified by the second non-abelian cohomology group \(H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\).

### Inducibility of automorphisms and the Wells map

The problem of inducibility of a pair of automorphisms of algebraic structures is another active direction of research. 
This problem was first considered by Wells [34] for abstract groups and further studied in [16, 27]. In the context of Lie algebras, the problem can be stated as follows. Let \(0\to\mathfrak{h}\xrightarrow{i}\mathfrak{c}\xrightarrow{p}\mathfrak{g}\to 0\) be a given (non-)abelian extension of Lie algebras. Then for any \(\gamma\in\mathrm{Aut}(\mathfrak{c})\) with \(\gamma(\mathfrak{h})\subset\mathfrak{h}\), there is a pair of Lie algebra automorphisms \((\gamma|_{\mathfrak{h}},\overline{\gamma}=p\gamma s)\in\mathrm{Aut}(\mathfrak{h})\times\mathrm{Aut}(\mathfrak{g})\), where \(s\) is a section of the map \(p\). This pair of Lie algebra automorphisms is said to be induced by \(\gamma\). A pair of automorphisms \((\beta,\alpha)\in\mathrm{Aut}(\mathfrak{h})\times\mathrm{Aut}(\mathfrak{g})\) is called inducible if there exists \(\gamma\in\mathrm{Aut}(\mathfrak{c})\) with \(\gamma(\mathfrak{h})\subset\mathfrak{h}\) that induces \((\beta,\alpha)\). The inducibility problem asks to find the obstruction for the inducibility of a pair of Lie algebra automorphisms. When the given extension is abelian, the inducibility problem was addressed in [2]. More precisely, they defined the Wells map (a generalization of a map considered in [34] for group extensions) in the context of Lie algebras and showed that the obstruction for the inducibility of a pair of Lie algebra automorphisms can be described by the image of the Wells map. In this paper, we have undertaken the inducibility problem in the context of averaging Lie algebras. Given a non-abelian extension \(0\to\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\to 0\) of averaging Lie algebras, we first construct an analogue of the Wells map \(\mathcal{W}:\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\to H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\) and show that the obstruction for a pair of averaging Lie algebra automorphisms to be inducible can be seen as the image of this Wells map.

[...]

for \((x_{1},\ldots,x_{n}),(y_{1},\ldots,y_{n})\in\mathfrak{g}\oplus\cdots\oplus\mathfrak{g}\). With the above Lie bracket, all the maps \(P,Q_{2},\ldots,Q_{n}:\mathfrak{g}\oplus\cdots\oplus\mathfrak{g}\to\mathfrak{g}\oplus\cdots\oplus\mathfrak{g}\) given by

\[P(x_{1},\ldots,x_{n})=(x_{2}+\cdots+x_{n},0,\ldots,0)\ \text{ and }\ Q_{i}(x_{1},\ldots,x_{n})=(x_{i},0,\ldots,0)\ (i\geq 2)\]

are averaging operators on \(\mathfrak{g}\oplus\cdots\oplus\mathfrak{g}\).

4. Let \(A\) be an associative algebra and \(P:A\to A\) be an averaging operator on \(A\). Then the map \(P\) can be regarded as an averaging operator on the associated Lie algebra \((A,[\cdot,\cdot])\). To see this, we observe that

\[[P(a),P(b)]=P(a)P(b)-P(b)P(a)=P(P(a)b)-P(bP(a))=P([P(a),b]),\ \text{ for }a,b\in A.\]

5. Let \(\mathfrak{g}\) be a Lie algebra and \((V,\psi)\) be a representation of it, i.e., \(\psi:\mathfrak{g}\to\mathrm{End}(V)\) is a Lie algebra homomorphism. An _embedding tensor_ on \(\mathfrak{g}\) with respect to the representation \((V,\psi)\) is a linear map \(T:V\to\mathfrak{g}\) that satisfies

\[[T(u),T(v)]_{\mathfrak{g}}=T(\psi_{T(u)}v),\ \text{ for all }u,v\in V.\]

See [18] for more details. 
Note that given a Lie algebra \(\mathfrak{g}\) and a representation \((V,\psi)\), one can construct the semi-direct product Lie algebra on the direct sum \(\mathfrak{g}\oplus V\) with the bracket

\[[(x,u),(y,v)]_{\ltimes}:=([x,y]_{\mathfrak{g}},\psi_{x}v-\psi_{y}u),\ \text{ for }(x,u),(y,v)\in\mathfrak{g}\oplus V.\]

Then it is easy to see that a map \(T:V\to\mathfrak{g}\) is an embedding tensor if and only if the map \(P_{T}:\mathfrak{g}\oplus V\to\mathfrak{g}\oplus V\) defined by \(P_{T}(x,u)=(T(u),0)\) is an averaging operator on the semi-direct product Lie algebra. Thus, averaging operators on Lie algebras are closely related to embedding tensors.

**2.3 Definition**.: An **averaging Lie algebra** is a Lie algebra \(\mathfrak{g}\) equipped with an averaging operator \(P:\mathfrak{g}\to\mathfrak{g}\).

Throughout this paper, we denote an averaging Lie algebra as above simply by the notation \(\mathfrak{g}_{P}\). Let \(\mathfrak{g}_{P}\) and \(\mathfrak{g}^{\prime}_{P^{\prime}}\) be two averaging Lie algebras. A morphism of averaging Lie algebras from \(\mathfrak{g}_{P}\) to \(\mathfrak{g}^{\prime}_{P^{\prime}}\) is given by a Lie algebra homomorphism \(\tau:\mathfrak{g}\to\mathfrak{g}^{\prime}\) that satisfies \(P^{\prime}\circ\tau=\tau\circ P\). Further, it is said to be an isomorphism if \(\tau\) is a linear isomorphism. Given an averaging Lie algebra \(\mathfrak{g}_{P}\), we denote the group of all averaging Lie algebra automorphisms of \(\mathfrak{g}_{P}\) by the notation \(\mathrm{Aut}(\mathfrak{g}_{P})\).

**2.4 Remark**.: Averaging Lie algebras are closely related to Leibniz algebras. Recall that a (left) Leibniz algebra is a vector space \(\ell\) equipped with a bilinear bracket \(\{\cdot,\cdot\}:\ell\times\ell\to\ell\) satisfying

\[\{x,\{y,z\}\}=\{\{x,y\},z\}+\{y,\{x,z\}\},\ \text{ for all }x,y,z\in\ell.\]

Let \(\mathfrak{g}_{P}\) be an averaging Lie algebra. Then the vector space \(\mathfrak{g}\) carries a Leibniz algebra structure (induced by the averaging operator \(P\)) with the bracket \(\{x,y\}:=[P(x),y]_{\mathfrak{g}}\), for \(x,y\in\mathfrak{g}\). Conversely, let \((\ell,\{\cdot,\cdot\})\) be a Leibniz algebra. Let \(\ell_{\mathrm{Lie}}=\ell/\{\ell,\ell\}\) be the induced Lie algebra. Then the Lie algebra \(\ell_{\mathrm{Lie}}\) has a representation \((\ell,\psi)\) on the vector space \(\ell\), where the Lie algebra homomorphism \(\psi:\ell_{\mathrm{Lie}}\to\mathrm{End}(\ell)\) is given by \(\psi_{[x]}y:=\{x,y\}\), for \([x]\in\ell_{\mathrm{Lie}}\) and \(y\in\ell\). Then the quotient map \(q:\ell\to\ell_{\mathrm{Lie}}\) given by \(q(x)=[x]\) is an embedding tensor. Therefore, the map

\[P:\ell_{\mathrm{Lie}}\oplus\ell\to\ell_{\mathrm{Lie}}\oplus\ell\ \text{ given by }\ P([x],y)=([y],0)\]

is an averaging operator on the semi-direct product Lie algebra \(\ell_{\mathrm{Lie}}\oplus\ell\).

**2.5 Definition**.: Let \(\mathfrak{g}_{P}\) be an averaging Lie algebra. A **representation** of \(\mathfrak{g}_{P}\) consists of a Lie algebra representation \((V,\psi)\) together with a linear map \(Q:V\to V\) satisfying

\[\psi_{P(x)}Q(v)=Q(\psi_{P(x)}v)=Q(\psi_{x}Q(v)),\ \text{ for all }x\in\mathfrak{g},v\in V.\]

We denote a representation as above simply by \(V_{Q}\) when the action map \(\psi\) is clear from the context. Note that any averaging Lie algebra \(\mathfrak{g}_{P}\) can be regarded as a representation of itself, where \(\mathfrak{g}\) is equipped with the adjoint Lie algebra representation. 
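As a concrete numerical illustration (our own example, not from [32]), the projection of \(\mathfrak{gl}(n)\) onto traceless matrices along the centre is an averaging operator; the sketch below checks the averaging identity from item (4) above, the representation axioms of Definition 2.5 with \(\psi=\mathrm{ad}\) and \(Q=P\), and the Leibniz identity of Remark 2.4.

```python
import numpy as np

n = 2
rng = np.random.default_rng(2)

def bracket(x, y):
    """Commutator bracket on gl(n)."""
    return x @ y - y @ x

def P(x):
    """Projection onto traceless matrices (along the central line R*I);
    the identity matrix commutes with everything and commutators are
    traceless, which makes P satisfy the averaging identity."""
    return x - (np.trace(x) / n) * np.eye(n)

x, y, v = rng.normal(size=(3, n, n))

# Averaging identity [P(x), P(y)] = P([P(x), y]), cf. item (4) above:
print(np.allclose(bracket(P(x), P(y)), P(bracket(P(x), y))))  # True

# Representation axioms of Definition 2.5 with V = gl(n), psi = ad, Q = P:
print(np.allclose(bracket(P(x), P(v)), P(bracket(P(x), v))))  # True
print(np.allclose(bracket(P(x), P(v)), P(bracket(x, P(v)))))  # True

# Leibniz identity for the induced bracket {x, y} = [P(x), y] (Remark 2.4):
def leib(x, y):
    return bracket(P(x), y)

lhs = leib(x, leib(y, v))
rhs = leib(leib(x, y), v) + leib(y, leib(x, v))
print(np.allclose(lhs, rhs))  # True
```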
Next, we recall the cohomology of an averaging Lie algebra with coefficients in a representation [32]. Let \(\mathfrak{g}_{P}\) be an averaging Lie algebra and \(V_{Q}\) be a representation of it. For each \(n\geq 0\), the \(n\)-th cochain group \(C^{n}_{\mathrm{ALie}}(\mathfrak{g}_{P},V_{Q})\) is given by

\[C^{n}_{\mathrm{ALie}}(\mathfrak{g}_{P},V_{Q})=\begin{cases}0&\text{if }n=0,\\ \mathrm{Hom}(\mathfrak{g},V)&\text{if }n=1,\\ \mathrm{Hom}(\wedge^{n}\mathfrak{g},V)\oplus\mathrm{Hom}(\mathfrak{g}^{\otimes n-1},V)&\text{if }n\geq 2.\end{cases}\]

There is a map \(\delta^{n}_{\mathrm{ALie}}:C^{n}_{\mathrm{ALie}}(\mathfrak{g}_{P},V_{Q})\to C^{n+1}_{\mathrm{ALie}}(\mathfrak{g}_{P},V_{Q})\) given by

\[\delta^{n}_{\mathrm{ALie}}((f,\theta))=(\delta^{n}_{\mathrm{Lie}}(f),\ \partial^{n-1}_{\mathrm{Leib}}(\theta)+(-1)^{n}f\circ P^{\otimes n}-(-1)^{n}\,Qf\circ(P^{\otimes n-1}\otimes\mathrm{Id})),\ \text{ for }(f,\theta)\in C^{n}_{\mathrm{ALie}}(\mathfrak{g}_{P},V_{Q}).\]

Here \(\delta^{n}_{\mathrm{Lie}}:\mathrm{Hom}(\wedge^{n}\mathfrak{g},V)\to\mathrm{Hom}(\wedge^{n+1}\mathfrak{g},V)\) is the standard Chevalley-Eilenberg coboundary operator of the Lie algebra \(\mathfrak{g}\) with coefficients in the representation \((V,\psi)\), and \(\partial^{n-1}_{\mathrm{Leib}}:\mathrm{Hom}(\mathfrak{g}^{\otimes n-1},V)\to\mathrm{Hom}(\mathfrak{g}^{\otimes n},V)\) is the Loday–Pirashvili coboundary operator of the induced Leibniz algebra \((\mathfrak{g},\{\cdot,\cdot\})\) with coefficients in a suitable representation on \(V\) [32]. Explicitly, the map \(\partial^{n-1}_{\mathrm{Leib}}\) is given by

\[\partial^{n-1}_{\mathrm{Leib}}(\theta)(x_{1},\dots,x_{n}):=\sum_{i=1}^{n-1}(-1)^{i+1}\,\psi_{P(x_{i})}\theta(x_{1},\dots,\widehat{x_{i}},\dots,x_{n})+(-1)^{n+1}\,\psi_{P(x_{n})}\theta(x_{1},\dots,x_{n-1})\]
\[+(-1)^{n}\,Q(\psi_{x_{n}}\theta(x_{1},\dots,x_{n-1}))+\sum_{1\leq i<j\leq n}(-1)^{i}\,\theta(x_{1},\dots,\widehat{x_{i}},\dots,x_{j-1},[P(x_{i}),x_{j}]_{\mathfrak{g}},\dots,x_{n}),\]

for \(\theta\in\mathrm{Hom}(\mathfrak{g}^{\otimes n-1},V)\) and \(x_{1},\dots,x_{n}\in\mathfrak{g}\). Then it turns out that \(\{C^{\bullet}_{\mathrm{ALie}}(\mathfrak{g}_{P},V_{Q}),\delta^{\bullet}_{\mathrm{ALie}}\}\) is a cochain complex. The corresponding cohomology is called the cohomology of the averaging Lie algebra \(\mathfrak{g}_{P}\) with coefficients in the representation \(V_{Q}\). We denote the \(n\)-th cohomology group by \(H^{n}_{\mathrm{ALie}}(\mathfrak{g}_{P},V_{Q})\).

**2.6 Remark**.: In [32] the authors also considered central extensions of an averaging Lie algebra and showed that the set of all equivalence classes of central extensions is classified by the second cohomology group with trivial representation.

## 3. 2-term homotopy averaging Lie algebras

The concept of \(L_{\infty}\)-algebras was introduced in [19, 20] as the homotopy version of Lie algebras. In this section, we first introduce homotopy averaging operators on 2-term \(L_{\infty}\)-algebras. A 2-term \(L_{\infty}\)-algebra equipped with a homotopy averaging operator is called a 2-term averaging \(L_{\infty}\)-algebra. We focus on 'skeletal' and 'strict' 2-term averaging \(L_{\infty}\)-algebras. In particular, we show that skeletal 2-term averaging \(L_{\infty}\)-algebras correspond to third cocycles of averaging Lie algebras. Next, we introduce crossed modules of averaging Lie algebras and show that crossed modules of averaging Lie algebras correspond to strict 2-term averaging \(L_{\infty}\)-algebras. 
**3.1 Definition**.: [1] A **2-term \(L_{\infty}\)-algebra** is a triple \(\mathcal{G}=(\mathfrak{g}_{1}\xrightarrow{d}\mathfrak{g}_{0},[\![\cdot,\cdot]\!],l_{3})\) in which \(\mathfrak{g}_{1}\xrightarrow{d}\mathfrak{g}_{0}\) is a 2-term chain complex, \([\![\cdot,\cdot]\!]:\mathfrak{g}_{i}\times\mathfrak{g}_{j}\to\mathfrak{g}_{i+j}\) (for \(0\leq i,j\leq 1\)) is a bilinear map and \(l_{3}:\mathfrak{g}_{0}\times\mathfrak{g}_{0}\times\mathfrak{g}_{0}\to\mathfrak{g}_{1}\) is a skew-symmetric trilinear map satisfying, for all \(w,x,y,z\in\mathfrak{g}_{0}\) and \(h,k\in\mathfrak{g}_{1}\),

* (L1) \([\![x,y]\!]=-[\![y,x]\!]\),
* (L2) \([\![x,h]\!]=-[\![h,x]\!]\),
* (L3) \([\![h,k]\!]=0\),
* (L4) \(d[\![x,h]\!]=[\![x,dh]\!]\),
* (L5) \([\![dh,k]\!]=[\![h,dk]\!]\),
* (L6) \(d(l_{3}(x,y,z))=[\![x,[\![y,z]\!]]\!]+[\![y,[\![z,x]\!]]\!]+[\![z,[\![x,y]\!]]\!]\),
* (L7) \(l_{3}(x,y,dh)=[\![x,[\![y,h]\!]]\!]+[\![y,[\![h,x]\!]]\!]+[\![h,[\![x,y]\!]]\!]\),
* (L8) \([\![w,l_{3}(x,y,z)]\!]-[\![x,l_{3}(w,y,z)]\!]+[\![y,l_{3}(w,x,z)]\!]-[\![z,l_{3}(w,x,y)]\!]=l_{3}([\![w,x]\!],y,z)-l_{3}([\![w,y]\!],x,z)+l_{3}([\![w,z]\!],x,y)+l_{3}([\![x,y]\!],w,z)-l_{3}([\![x,z]\!],w,y)+l_{3}([\![y,z]\!],w,x)\).

We now introduce the notion of a homotopy averaging operator on a 2-term \(L_{\infty}\)-algebra. This can be realized as the homotopy version of averaging operators on Lie algebras.

**3.2 Definition**.: Let \(\mathcal{G}=(\mathfrak{g}_{1}\xrightarrow{d}\mathfrak{g}_{0},[\![\cdot,\cdot]\!],l_{3})\) be a 2-term \(L_{\infty}\)-algebra. A **homotopy averaging operator** on \(\mathcal{G}\) is a triple \(\mathcal{P}=(P_{0},P_{1},P_{2})\) consisting of linear maps \(P_{0}:\mathfrak{g}_{0}\to\mathfrak{g}_{0}\) and \(P_{1}:\mathfrak{g}_{1}\to\mathfrak{g}_{1}\), and a skew-symmetric bilinear map \(P_{2}:\mathfrak{g}_{0}\times\mathfrak{g}_{0}\to\mathfrak{g}_{1}\) such that for all \(x,y,z\in\mathfrak{g}_{0}\) and \(h\in\mathfrak{g}_{1}\),

* (A1) \(P_{0}\circ d=d\circ P_{1}\),
* (A2) \(d(P_{2}(x,y))=P_{0}([\![P_{0}(x),y]\!])-[\![P_{0}(x),P_{0}(y)]\!]\),
* (A3) \(P_{2}(x,dh)=P_{1}([\![P_{0}(x),h]\!])-[\![P_{0}(x),P_{1}(h)]\!]=P_{1}([\![x,P_{1}(h)]\!])-[\![P_{0}(x),P_{1}(h)]\!]\),
* (A4) \([\![P_{0}(x),P_{2}(y,z)]\!]-[\![P_{0}(y),P_{2}(x,z)]\!]+[\![P_{0}(z),P_{2}(x,y)]\!]-P_{1}[\![z,P_{2}(x,y)]\!]-P_{2}([\![P_{0}(x),y]\!],z)-P_{2}(y,[\![P_{0}(x),z]\!])+P_{2}(x,[\![P_{0}(y),z]\!])=l_{3}(P_{0}(x),P_{0}(y),P_{0}(z))-P_{1}l_{3}(P_{0}(x),P_{0}(y),z)\).

A **2-term averaging \(L_{\infty}\)-algebra** is a 2-term \(L_{\infty}\)-algebra \(\mathcal{G}=(\mathfrak{g}_{1}\xrightarrow{d}\mathfrak{g}_{0},[\![\cdot,\cdot]\!],l_{3})\) equipped with a homotopy averaging operator \(\mathcal{P}=(P_{0},P_{1},P_{2})\) on it. We denote a 2-term averaging \(L_{\infty}\)-algebra as above by \((\mathfrak{g}_{1}\xrightarrow{d}\mathfrak{g}_{0},[\![\cdot,\cdot]\!],l_{3},P_{0},P_{1},P_{2})\) or simply by \(\mathcal{G}_{\mathcal{P}}\).

**3.3 Definition**.: Let \(\mathcal{G}_{\mathcal{P}}=(\mathfrak{g}_{1}\xrightarrow{d}\mathfrak{g}_{0},[\![\cdot,\cdot]\!],l_{3},P_{0},P_{1},P_{2})\) be a 2-term averaging \(L_{\infty}\)-algebra. It is said to be

* **skeletal** if \(d=0\),
* **strict** if \(l_{3}=0\) and \(P_{2}=0\).

The following result gives a characterization of skeletal 2-term averaging \(L_{\infty}\)-algebras in terms of 3-cocycles of averaging Lie algebras. 
**3.4 Proposition**.: _There is a \(1-1\) correspondence between skeletal 2-term averaging \(L_{\infty}\)-algebras and triples of the form \(\big{(}\mathfrak{g}_{P},V_{Q},(f,\theta)\big{)}\), where \(\mathfrak{g}_{P}\) is an averaging Lie algebra, \(V_{Q}\) is a representation and \((f,\theta)\in C^{3}_{\mathrm{ALie}}(\mathfrak{g}_{P},V_{Q})\) is a 3-cocycle._

Proof.: Let \(\mathcal{G}_{\mathcal{P}}=(\mathfrak{g}_{1}\xrightarrow{0}\mathfrak{g}_{0},[\![\cdot,\cdot]\!],l_{3},P_{0},P_{1},P_{2})\) be a skeletal 2-term averaging \(L_{\infty}\)-algebra. Then it follows from conditions (L1), (L6) and (A2) that \((\mathfrak{g}_{0},[\![\cdot,\cdot]\!])\) is a Lie algebra and \(P_{0}:\mathfrak{g}_{0}\to\mathfrak{g}_{0}\) is an averaging operator on it. Thus, \((\mathfrak{g}_{0})_{P_{0}}\) is an averaging Lie algebra. On the other hand, by conditions (L7) and (A3), we get that \((\mathfrak{g}_{1})_{P_{1}}\) is a representation of the averaging Lie algebra \((\mathfrak{g}_{0})_{P_{0}}\) with the Lie algebra action map \(\psi:\mathfrak{g}_{0}\to\mathrm{End}(\mathfrak{g}_{1})\), \(\psi_{x}h=[\![x,h]\!]\), for \(x\in\mathfrak{g}_{0}\), \(h\in\mathfrak{g}_{1}\). With the above averaging Lie algebra and its representation, the conditions (L8) and (A4) are respectively equivalent to

\[\big{(}\delta^{3}_{\mathrm{Lie}}(l_{3})\big{)}(w,x,y,z)=0\ \ \text{and}\ \ \big{(}\partial^{2}_{\mathrm{Leib}}(P_{2})\big{)}(x,y,z)-l_{3}\big{(}P_{0}(x),P_{0}(y),P_{0}(z)\big{)}+P_{1}l_{3}\big{(}P_{0}(x),P_{0}(y),z\big{)}=0.\]

Therefore, \(\delta^{3}_{\mathrm{ALie}}(l_{3},P_{2})=\big{(}\delta^{3}_{\mathrm{Lie}}(l_{3}),\ \partial^{2}_{\mathrm{Leib}}(P_{2})-l_{3}\circ P^{\otimes 3}_{0}+P_{1}l_{3}(P^{\otimes 2}_{0}\otimes\mathrm{Id})\big{)}=0\). Hence we obtain the required triple \(\big{(}(\mathfrak{g}_{0})_{P_{0}},(\mathfrak{g}_{1})_{P_{1}},(l_{3},P_{2})\big{)}\).

Conversely, let \((\mathfrak{g}_{P},V_{Q},(f,\theta))\) be a triple in which \(\mathfrak{g}_{P}\) is an averaging Lie algebra, \(V_{Q}\) is a representation (with the action map \(\psi:\mathfrak{g}\to\mathrm{End}(V)\)) and \((f,\theta)\in C^{3}_{\mathrm{ALie}}(\mathfrak{g}_{P},V_{Q})\) is a 3-cocycle. Then it is easy to verify that \((V\xrightarrow{0}\mathfrak{g},[\![\cdot,\cdot]\!],f,P,Q,\theta)\) is a skeletal 2-term averaging \(L_{\infty}\)-algebra, where the bilinear map \([\![\cdot,\cdot]\!]\) is given by

\[[\![x,y]\!]:=[x,y]_{\mathfrak{g}},\ \ [\![x,v]\!]=-[\![v,x]\!]:=\psi_{x}v\ \ \text{and}\ \ [\![u,v]\!]:=0,\ \text{for }x,y\in\mathfrak{g},\ u,v\in V.\]

This completes the proof.

The above result motivates us to consider the following notion. Let \(\mathcal{G}_{\mathcal{P}}=(\mathfrak{g}_{1}\xrightarrow{0}\mathfrak{g}_{0},[\![\cdot,\cdot]\!],l_{3},P_{0},P_{1},P_{2})\) and \(\mathcal{G}^{\prime}_{\mathcal{P}^{\prime}}=(\mathfrak{g}_{1}\xrightarrow{0}\mathfrak{g}_{0},[\![\cdot,\cdot]\!]^{\prime},l^{\prime}_{3},P^{\prime}_{0},P^{\prime}_{1},P^{\prime}_{2})\) be two skeletal 2-term averaging \(L_{\infty}\)-algebras on the same chain complex. They are said to be equivalent if there exist a skew-symmetric bilinear map \(g:\mathfrak{g}_{0}\times\mathfrak{g}_{0}\to\mathfrak{g}_{1}\) and a linear map \(\vartheta:\mathfrak{g}_{0}\to\mathfrak{g}_{1}\) such that

\[(l^{\prime}_{3},P^{\prime}_{2})=(l_{3},P_{2})+\delta^{2}_{\mathrm{ALie}}((g,\vartheta)),\]

where \(\delta^{2}_{\mathrm{ALie}}\) is the coboundary operator of the averaging Lie algebra \((\mathfrak{g}_{0})_{P_{0}}\) with coefficients in the representation \((\mathfrak{g}_{1})_{P_{1}}\). 
With the above notion of equivalence, Proposition 3.4 can be strengthened into the following result.

**3.5 Theorem**.: _There is a \(1-1\) correspondence between equivalence classes of skeletal 2-term averaging \(L_{\infty}\)-algebras and triples of the form \(\big{(}\mathfrak{g}_{P},V_{Q},[(f,\theta)]\big{)}\), where \(\mathfrak{g}_{P}\) is an averaging Lie algebra, \(V_{Q}\) is a representation and \([(f,\theta)]\in H^{3}_{\mathrm{ALie}}(\mathfrak{g}_{P},V_{Q})\) is a third cohomology class._

Next, we introduce crossed modules of averaging Lie algebras and characterize strict 2-term averaging \(L_{\infty}\)-algebras.

**3.6 Definition**.: A **crossed module** of averaging Lie algebras is a quadruple \(\big{(}(\mathfrak{g}_{1})_{P_{1}},(\mathfrak{g}_{0})_{P_{0}},d,\rho\big{)}\), where \((\mathfrak{g}_{1})_{P_{1}}\) and \((\mathfrak{g}_{0})_{P_{0}}\) are both averaging Lie algebras, \(d:(\mathfrak{g}_{1})_{P_{1}}\to(\mathfrak{g}_{0})_{P_{0}}\) is an averaging Lie algebra morphism and \(\rho:\mathfrak{g}_{0}\to\mathrm{Der}(\mathfrak{g}_{1})\) is a Lie algebra homomorphism that makes \((\mathfrak{g}_{1})_{P_{1}}\) into a representation of the averaging Lie algebra \((\mathfrak{g}_{0})_{P_{0}}\) satisfying additionally

\[d(\rho_{x}h)=[x,dh]_{\mathfrak{g}_{0}}\ \ \text{and}\ \ \rho_{dh}k=[h,k]_{\mathfrak{g}_{1}},\ \text{for all }x\in\mathfrak{g}_{0}\ \text{and}\ h,k\in\mathfrak{g}_{1}.\]

**3.7 Proposition**.: _Let \(\big{(}(\mathfrak{g}_{1})_{P_{1}},(\mathfrak{g}_{0})_{P_{0}},d,\rho\big{)}\) be a crossed module of averaging Lie algebras. Then \((\mathfrak{g}_{0}\oplus\mathfrak{g}_{1})_{P_{0}\oplus P_{1}}\) is an averaging Lie algebra, where \(\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}\) is equipped with the bracket_

\[[(x,h),(y,k)]:=([x,y]_{\mathfrak{g}_{0}},\rho_{x}k-\rho_{y}h+[h,k]_{\mathfrak{g}_{1}}),\ \text{for }(x,h),(y,k)\in\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}. \tag{2}\]

Proof.: Since \(\mathfrak{g}_{0}\), \(\mathfrak{g}_{1}\) are both Lie algebras and \(\rho:\mathfrak{g}_{0}\to\mathrm{Der}(\mathfrak{g}_{1})\) is a Lie algebra homomorphism, it follows that \(\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}\) is a Lie algebra with the bracket (2). Moreover, for any \((x,h),(y,k)\in\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}\), we have

\[[(P_{0}\oplus P_{1})(x,h),(P_{0}\oplus P_{1})(y,k)]\]
\[=[(P_{0}(x),P_{1}(h)),(P_{0}(y),P_{1}(k))]\]
\[=\big{(}[P_{0}(x),P_{0}(y)]_{\mathfrak{g}_{0}},\ \rho_{P_{0}(x)}P_{1}(k)-\rho_{P_{0}(y)}P_{1}(h)+[P_{1}(h),P_{1}(k)]_{\mathfrak{g}_{1}}\big{)}\]
\[=\big{(}P_{0}[P_{0}(x),y]_{\mathfrak{g}_{0}},\ P_{1}(\rho_{P_{0}(x)}k)-P_{1}(\rho_{y}P_{1}(h))+P_{1}[P_{1}(h),k]_{\mathfrak{g}_{1}}\big{)}\]
\[=(P_{0}\oplus P_{1})[(P_{0}(x),P_{1}(h)),(y,k)]\]
\[=(P_{0}\oplus P_{1})[(P_{0}\oplus P_{1})(x,h),(y,k)].\]

This shows that the map \(P_{0}\oplus P_{1}:\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}\to\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}\) is an averaging operator, which proves the result.

**3.8 Theorem**.: _There is a \(1-1\) correspondence between strict 2-term averaging \(L_{\infty}\)-algebras and crossed modules of averaging Lie algebras._

Proof.: Let \(\mathcal{G}_{\mathcal{P}}=(\mathfrak{g}_{1}\xrightarrow{d}\mathfrak{g}_{0},[\![\cdot,\cdot]\!],l_{3}=0,P_{0},P_{1},P_{2}=0)\) be a strict 2-term averaging \(L_{\infty}\)-algebra. 
Then it follows from (L1), (L6) and (A2) that \((\mathfrak{g}_{0},[\![\cdot,\cdot]\!])\) is a Lie algebra and \(P_{0}:\mathfrak{g}_{0}\rightarrow\mathfrak{g}_{0}\) is an averaging operator on it, i.e., \((\mathfrak{g}_{0})_{P_{0}}\) is an averaging Lie algebra. Next, we define a bilinear bracket \([\cdot,\cdot]_{\mathfrak{g}_{1}}:\mathfrak{g}_{1}\times\mathfrak{g}_{1}\rightarrow\mathfrak{g}_{1}\) by \([h,k]_{\mathfrak{g}_{1}}:=[\![dh,k]\!]\), for \(h,k\in\mathfrak{g}_{1}\). Then it follows from conditions (L2), (L5) and (L7) that \((\mathfrak{g}_{1},[\cdot,\cdot]_{\mathfrak{g}_{1}})\) is a Lie algebra. Moreover, condition (A3) yields that \(P_{1}:\mathfrak{g}_{1}\rightarrow\mathfrak{g}_{1}\) is an averaging operator. Hence \((\mathfrak{g}_{1})_{P_{1}}\) is also an averaging Lie algebra. On the other hand, conditions (L4) and (A1) imply that \(d:(\mathfrak{g}_{1})_{P_{1}}\rightarrow(\mathfrak{g}_{0})_{P_{0}}\) is an averaging Lie algebra morphism. Finally, we define a map \(\rho:\mathfrak{g}_{0}\rightarrow\mathrm{Der}(\mathfrak{g}_{1})\) by \(\rho_{x}h:=[\![x,h]\!]\), for \(x\in\mathfrak{g}_{0}\), \(h\in\mathfrak{g}_{1}\). Then it follows from (L7) and (A3) that \(\rho\) makes \((\mathfrak{g}_{1})_{P_{1}}\) into a representation of the averaging Lie algebra \((\mathfrak{g}_{0})_{P_{0}}\). We also have

\[d(\rho_{x}h)=d[\![x,h]\!]=[\![x,dh]\!]\ \ \text{and}\ \ \rho_{dh}k=[\![dh,k]\!]=[h,k]_{\mathfrak{g}_{1}},\ \text{for }x\in\mathfrak{g}_{0},\ h,k\in\mathfrak{g}_{1}.\]

Hence \(\big((\mathfrak{g}_{1})_{P_{1}},(\mathfrak{g}_{0})_{P_{0}},d,\rho\big)\) is a crossed module of averaging Lie algebras.

Conversely, let \(\big((\mathfrak{g}_{1})_{P_{1}},(\mathfrak{g}_{0})_{P_{0}},d,\rho\big)\) be a crossed module of averaging Lie algebras. Then it is easy to verify that \((\mathfrak{g}_{1}\xrightarrow{d}\mathfrak{g}_{0},[\![\cdot,\cdot]\!],l_{3}=0,P_{0},P_{1},P_{2}=0)\) is a strict \(2\)-term averaging \(L_{\infty}\)-algebra, where the bracket \([\![\cdot,\cdot]\!]:\mathfrak{g}_{i}\times\mathfrak{g}_{j}\rightarrow\mathfrak{g}_{i+j}\) (for \(0\leq i,j\leq 1\)) is given by

\[[\![x,y]\!]:=[x,y]_{\mathfrak{g}_{0}},\ \ [\![x,h]\!]=-[\![h,x]\!]:=\rho_{x}h\ \ \text{and}\ \ [\![h,k]\!]:=0,\ \text{for }x,y\in\mathfrak{g}_{0},\ h,k\in\mathfrak{g}_{1}.\]

The above two correspondences are inverse to each other. This completes the proof.

Combining Proposition 3.7 and Theorem 3.8, we obtain the following result.

**3.9 Proposition**.: _Let \(\mathcal{G}_{\mathcal{P}}=(\mathfrak{g}_{1}\xrightarrow{d}\mathfrak{g}_{0},[\![\cdot,\cdot]\!],l_{3}=0,P_{0},P_{1},P_{2}=0)\) be a strict \(2\)-term averaging \(L_{\infty}\)-algebra. Then \((\mathfrak{g}_{0}\oplus\mathfrak{g}_{1})_{P_{0}\oplus P_{1}}\) is an averaging Lie algebra with the Lie bracket_

\[[(x,h),(y,k)]:=([\![x,y]\!],[\![x,k]\!]-[\![y,h]\!]+[\![dh,k]\!]),\ \text{for }(x,h),(y,k)\in\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}.\]

**3.10 Example**.: Let \(\mathfrak{g}_{P}\) be an averaging Lie algebra. Then \((\mathfrak{g}_{P},\mathfrak{g}_{P},\mathrm{Id},\mathrm{ad})\) is a crossed module of averaging Lie algebras, where \(\mathrm{ad}\) denotes the adjoint representation. Therefore, it follows that

\[(\mathfrak{g}\xrightarrow{\mathrm{Id}}\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},l_{3}=0,P_{0}=P,P_{1}=P,P_{2}=0)\]

is a strict \(2\)-term averaging \(L_{\infty}\)-algebra. More generally, let \(\mathfrak{g}_{P}\) be an averaging Lie algebra and \(\mathfrak{h}\subset\mathfrak{g}\) be a Lie ideal that satisfies \(P(\mathfrak{h})\subset\mathfrak{h}\).
Then \((\mathfrak{h}_{P},\mathfrak{g}_{P},i,\mathrm{ad})\) is a crossed module of averaging Lie algebras, where \(i:\mathfrak{h}\rightarrow\mathfrak{g}\) is the inclusion map. Hence \((\mathfrak{h}\xrightarrow{i}\mathfrak{g},[\cdot,\cdot]_{\mathfrak{g}},l_{3}=0,P_{0}=P,P_{1}=P,P_{2}=0)\) is a strict \(2\)-term averaging \(L_{\infty}\)-algebra. As a particular case, we also get the following.

**3.11 Example**.: Let \(\mathfrak{g}_{P}\), \(\mathfrak{b}_{Q}\) be two averaging Lie algebras and \(f:\mathfrak{g}_{P}\rightarrow\mathfrak{b}_{Q}\) be an averaging Lie algebra morphism. Then \((\mathrm{Ker}\,f,\mathfrak{g},i,\mathrm{ad})\) is a crossed module of averaging Lie algebras.

## 4. Non-abelian extensions of averaging Lie algebras

In this section, we consider non-abelian extensions of an averaging Lie algebra \(\mathfrak{g}_{P}\) by another averaging Lie algebra \(\mathfrak{h}_{Q}\). We also introduce the second non-abelian cohomology space \(H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\) that classifies the set of all equivalence classes of non-abelian extensions of \(\mathfrak{g}_{P}\) by \(\mathfrak{h}_{Q}\).

**4.1 Definition**.: 1. Let \(\mathfrak{g}_{P}\) and \(\mathfrak{h}_{Q}\) be two averaging Lie algebras. A **non-abelian extension** of \(\mathfrak{g}_{P}\) by \(\mathfrak{h}_{Q}\) is an averaging Lie algebra \(\mathfrak{e}_{U}\) equipped with a short exact sequence of averaging Lie algebras

\[0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0. \tag{3}\]

2. Two non-abelian extensions \(\mathfrak{e}_{U}\) and \(\mathfrak{e}^{\prime}_{U^{\prime}}\) of \(\mathfrak{g}_{P}\) by \(\mathfrak{h}_{Q}\) are said to be **equivalent** if there exists an averaging Lie algebra morphism \(\tau:\mathfrak{e}_{U}\rightarrow\mathfrak{e}^{\prime}_{U^{\prime}}\) making the corresponding diagram commutative, that is,

\[\tau\circ i=i^{\prime}\ \ \text{and}\ \ p^{\prime}\circ\tau=p. \tag{4}\]

In particular, such a \(\tau\) satisfies \(\tau|_{\mathfrak{h}}=\mathrm{Id}_{\mathfrak{h}}\). We denote by \(\mathrm{Ext}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\) the set of all equivalence classes of non-abelian extensions of \(\mathfrak{g}_{P}\) by \(\mathfrak{h}_{Q}\).
In Remark 2.4, we have seen that an averaging Lie algebra induces a Leibniz algebra structure. This construction is also functorial. Hence a non-abelian extension of an averaging Lie algebra naturally gives rise to a non-abelian extension of the (induced) Leibniz algebra in the sense of [22]. Similarly, an equivalence between two non-abelian extensions of averaging Lie algebras naturally gives rise to an equivalence between the corresponding non-abelian extensions of the (induced) Leibniz algebras.

Let (3) be a non-abelian extension of an averaging Lie algebra \(\mathfrak{g}_{P}\) by another averaging Lie algebra \(\mathfrak{h}_{Q}\). A **section** of (3) is a linear map \(s:\mathfrak{g}\rightarrow\mathfrak{e}\) that satisfies \(p\circ s=\mathrm{Id}_{\mathfrak{g}}\). Note that a section always exists. Let \(s:\mathfrak{g}\rightarrow\mathfrak{e}\) be any section. We define maps \(\chi:\wedge^{2}\mathfrak{g}\rightarrow\mathfrak{h}\), \(\psi:\mathfrak{g}\rightarrow\mathrm{Der}(\mathfrak{h})\) and \(\Phi:\mathfrak{g}\rightarrow\mathfrak{h}\) by

\[\chi(x,y):=[s(x),s(y)]_{\mathfrak{e}}-s[x,y]_{\mathfrak{g}}, \tag{5}\]
\[\psi_{x}(h):=[s(x),h]_{\mathfrak{e}}, \tag{6}\]
\[\Phi(x):=U(s(x))-s(P(x)), \tag{7}\]

for \(x,y\in\mathfrak{g}\) and \(h\in\mathfrak{h}\). Since (3) defines a non-abelian extension of the Lie algebra \(\mathfrak{g}\) by the Lie algebra \(\mathfrak{h}\) (by forgetting the averaging operators), it follows from [9] that

\[\psi_{x}\psi_{y}(h)-\psi_{y}\psi_{x}(h)-\psi_{[x,y]_{\mathfrak{g}}}(h)=[\chi(x,y),h]_{\mathfrak{h}}, \tag{8}\]
\[\psi_{x}\chi(y,z)+\psi_{y}\chi(z,x)+\psi_{z}\chi(x,y)-\chi([x,y]_{\mathfrak{g}},z)-\chi([y,z]_{\mathfrak{g}},x)-\chi([z,x]_{\mathfrak{g}},y)=0, \tag{9}\]

for all \(x,y,z\in\mathfrak{g}\) and \(h\in\mathfrak{h}\). Moreover, we have the following.

**4.2 Lemma**.: _The maps \(\chi,\psi\) and \(\Phi\) defined above satisfy the following compatibility conditions: for all \(x,y\in\mathfrak{g}\) and \(h\in\mathfrak{h}\),_

\[\psi_{P(x)}Q(h)=Q(\psi_{P(x)}h)+Q[\Phi(x),h]_{\mathfrak{h}}-[\Phi(x),Q(h)]_{\mathfrak{h}}=Q(\psi_{x}Q(h))-[\Phi(x),Q(h)]_{\mathfrak{h}}, \tag{10}\]
\[\chi(P(x),P(y))-Q(\chi(P(x),y))-\Phi[P(x),y]_{\mathfrak{g}}+\psi_{P(x)}\Phi(y)-\psi_{P(y)}\Phi(x)+Q(\psi_{y}\Phi(x))+[\Phi(x),\Phi(y)]_{\mathfrak{h}}=0. \tag{11}\]

Proof.: For any \(x\in\mathfrak{g}\) and \(h\in\mathfrak{h}\), we have

\[\begin{split}&\psi_{P(x)}Q(h)-Q(\psi_{P(x)}h)-Q[\Phi(x),h]_{\mathfrak{h}}+[\Phi(x),Q(h)]_{\mathfrak{h}}\\&=[sP(x),Q(h)]_{\mathfrak{e}}-Q[sP(x),h]_{\mathfrak{e}}-Q[Us(x),h]_{\mathfrak{e}}+Q[sP(x),h]_{\mathfrak{e}}+[Us(x),Q(h)]_{\mathfrak{e}}-[sP(x),Q(h)]_{\mathfrak{e}}\\&=-Q[Us(x),h]_{\mathfrak{e}}+[Us(x),U(h)]_{\mathfrak{e}}\quad(\text{as }Q=U|_{\mathfrak{h}})\\&=0\quad(\text{as }U\text{ is an averaging operator}).\end{split}\]

We also have

\[\begin{split}&\psi_{P(x)}Q(h)-Q(\psi_{x}Q(h))+[\Phi(x),Q(h)]_{\mathfrak{h}}\\&=[sP(x),Q(h)]_{\mathfrak{e}}-Q[s(x),Q(h)]_{\mathfrak{e}}+[Us(x),Q(h)]_{\mathfrak{e}}-[sP(x),Q(h)]_{\mathfrak{e}}\\&=-Q[s(x),Q(h)]_{\mathfrak{e}}+[Us(x),U(h)]_{\mathfrak{e}}\quad(\text{as }Q=U|_{\mathfrak{h}})\\&=0\quad(\text{as }U\text{ is an averaging operator}).\end{split}\]

This proves the identities in (10).
To prove the identity (11), we observe that

\[\begin{split}&\chi(P(x),P(y))-Q(\chi(P(x),y))-\Phi[P(x),y]_{\mathfrak{g}}+\psi_{P(x)}\Phi(y)-\psi_{P(y)}\Phi(x)+Q(\psi_{y}\Phi(x))+[\Phi(x),\Phi(y)]_{\mathfrak{h}}\\&=[sP(x),sP(y)]_{\mathfrak{e}}-s[P(x),P(y)]_{\mathfrak{g}}-Q([sP(x),s(y)]_{\mathfrak{e}}-s[P(x),y]_{\mathfrak{g}})-Us[P(x),y]_{\mathfrak{g}}+sP[P(x),y]_{\mathfrak{g}}\\&\quad+[sP(x),Us(y)]_{\mathfrak{e}}-[sP(x),sP(y)]_{\mathfrak{e}}-[sP(y),Us(x)]_{\mathfrak{e}}+[sP(y),sP(x)]_{\mathfrak{e}}+Q[s(y),Us(x)]_{\mathfrak{e}}\\&\quad-Q[s(y),sP(x)]_{\mathfrak{e}}+[Us(x),Us(y)]_{\mathfrak{e}}-[Us(x),sP(y)]_{\mathfrak{e}}-[sP(x),Us(y)]_{\mathfrak{e}}+[sP(x),sP(y)]_{\mathfrak{e}}\\&=-s[P(x),P(y)]_{\mathfrak{g}}+Qs[P(x),y]_{\mathfrak{g}}-Us[P(x),y]_{\mathfrak{g}}+sP[P(x),y]_{\mathfrak{g}}+Q[s(y),Us(x)]_{\mathfrak{e}}+[Us(x),Us(y)]_{\mathfrak{e}}.\end{split}\]

The above expression vanishes as both \(P\) and \(U\) are averaging operators and \(Q=U|_{\mathfrak{h}}\). This completes the proof.

**4.3 Remark**.: Note that the identity (11) can be equivalently described by

\[\chi(P(x),P(y))-Q(\chi(x,P(y)))-\Phi[x,P(y)]_{\mathfrak{g}}+\psi_{P(x)}\Phi(y)-\psi_{P(y)}\Phi(x)-Q(\psi_{x}\Phi(y))+[\Phi(x),\Phi(y)]_{\mathfrak{h}}=0. \tag{12}\]

Let \(s^{\prime}:\mathfrak{g}\rightarrow\mathfrak{e}\) be any other section of (3). Let \(\phi:\mathfrak{g}\rightarrow\mathfrak{h}\) be defined by \(\phi(x):=s(x)-s^{\prime}(x)\), for all \(x\in\mathfrak{g}\). If \(\chi^{\prime}\), \(\psi^{\prime}\) and \(\Phi^{\prime}\) are the maps induced by the section \(s^{\prime}\), then for all \(x,y\in\mathfrak{g}\) and \(h\in\mathfrak{h}\), we have

\[\begin{split}\psi_{x}h-\psi^{\prime}_{x}h&=[s(x),h]_{\mathfrak{e}}-[s^{\prime}(x),h]_{\mathfrak{e}}=[\phi(x),h]_{\mathfrak{h}},\\ \chi(x,y)-\chi^{\prime}(x,y)&=[s(x),s(y)]_{\mathfrak{e}}-s[x,y]_{\mathfrak{g}}-[s^{\prime}(x),s^{\prime}(y)]_{\mathfrak{e}}+s^{\prime}[x,y]_{\mathfrak{g}}\\&=[s^{\prime}(x),(s-s^{\prime})(y)]_{\mathfrak{e}}-[s^{\prime}(y),(s-s^{\prime})(x)]_{\mathfrak{e}}-(s-s^{\prime})[x,y]_{\mathfrak{g}}+[(s-s^{\prime})(x),(s-s^{\prime})(y)]_{\mathfrak{h}}\\&=\psi^{\prime}_{x}\phi(y)-\psi^{\prime}_{y}\phi(x)-\phi[x,y]_{\mathfrak{g}}+[\phi(x),\phi(y)]_{\mathfrak{h}},\\ \Phi(x)-\Phi^{\prime}(x)&=(Us-sP)(x)-(Us^{\prime}-s^{\prime}P)(x)=U(s-s^{\prime})(x)-(s-s^{\prime})P(x)=Q\phi(x)-\phi P(x).\end{split}\]

The above discussion leads us to the following definition.

**4.4 Definition**.: 1. Let \(\mathfrak{g}_{P}\) and \(\mathfrak{h}_{Q}\) be two averaging Lie algebras. A **non-abelian 2-cocycle** of \(\mathfrak{g}_{P}\) with values in \(\mathfrak{h}_{Q}\) is a triple \((\chi,\psi,\Phi)\) of linear maps \(\chi:\wedge^{2}\mathfrak{g}\rightarrow\mathfrak{h}\), \(\psi:\mathfrak{g}\rightarrow\mathrm{Der}(\mathfrak{h})\) and \(\Phi:\mathfrak{g}\rightarrow\mathfrak{h}\) satisfying the conditions (8), (9), (10) and (11).

2. Let \((\chi,\psi,\Phi)\) and \((\chi^{\prime},\psi^{\prime},\Phi^{\prime})\) be two non-abelian 2-cocycles of \(\mathfrak{g}_{P}\) with values in \(\mathfrak{h}_{Q}\).
They are said to be **equivalent** if there exists a linear map \(\phi:\mathfrak{g}\rightarrow\mathfrak{h}\) that satisfies

\[\psi_{x}h-\psi^{\prime}_{x}h=[\phi(x),h]_{\mathfrak{h}}, \tag{13}\]
\[\chi(x,y)-\chi^{\prime}(x,y)=\psi^{\prime}_{x}\phi(y)-\psi^{\prime}_{y}\phi(x)-\phi[x,y]_{\mathfrak{g}}+[\phi(x),\phi(y)]_{\mathfrak{h}}, \tag{14}\]
\[\Phi(x)-\Phi^{\prime}(x)=Q\phi(x)-\phi P(x),\ \text{for all }x,y\in\mathfrak{g},\ h\in\mathfrak{h}. \tag{15}\]

Let \(H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\) be the set of all equivalence classes of non-abelian 2-cocycles of \(\mathfrak{g}_{P}\) with values in \(\mathfrak{h}_{Q}\). This is called the **second non-abelian cohomology group** of the averaging Lie algebra \(\mathfrak{g}_{P}\) with values in \(\mathfrak{h}_{Q}\).

The following result shows that the set of all equivalence classes of non-abelian extensions of \(\mathfrak{g}_{P}\) by \(\mathfrak{h}_{Q}\) is classified by the cohomology group \(H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\).

**4.5 Theorem**.: _Let \(\mathfrak{g}_{P}\) and \(\mathfrak{h}_{Q}\) be two averaging Lie algebras. Then \(\mathrm{Ext}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\cong H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\)._

Proof.: Let \(\mathfrak{e}_{U}\) and \(\mathfrak{e}^{\prime}_{U^{\prime}}\) be two equivalent non-abelian extensions of \(\mathfrak{g}_{P}\) by \(\mathfrak{h}_{Q}\) (see Definition 4.1(ii)). If \(s:\mathfrak{g}\rightarrow\mathfrak{e}\) is a section of the map \(p\), then it is easy to observe that the map \(s^{\prime}:=\tau\circ s\) is a section of the map \(p^{\prime}\). Let \((\chi^{\prime},\psi^{\prime},\Phi^{\prime})\) be the non-abelian 2-cocycle corresponding to the non-abelian extension \(\mathfrak{e}^{\prime}_{U^{\prime}}\) and the section \(s^{\prime}\). Then we have

\[\begin{split}\chi^{\prime}(x,y)&=[s^{\prime}(x),s^{\prime}(y)]_{\mathfrak{e}^{\prime}}-s^{\prime}[x,y]_{\mathfrak{g}}\\&=[\tau\circ s(x),\tau\circ s(y)]_{\mathfrak{e}^{\prime}}-(\tau\circ s)[x,y]_{\mathfrak{g}}\\&=\tau([s(x),s(y)]_{\mathfrak{e}}-s[x,y]_{\mathfrak{g}})=\chi(x,y)\quad(\text{as }\tau|_{\mathfrak{h}}=\mathrm{Id}_{\mathfrak{h}}),\end{split}\]
\[\psi^{\prime}_{x}(h)=[s^{\prime}(x),h]_{\mathfrak{e}^{\prime}}=[\tau\circ s(x),h]_{\mathfrak{e}^{\prime}}=\tau([s(x),h]_{\mathfrak{e}})=\psi_{x}(h)\quad(\text{as }\tau|_{\mathfrak{h}}=\mathrm{Id}_{\mathfrak{h}}),\]

and

\[\Phi^{\prime}(x)=U^{\prime}s^{\prime}(x)-s^{\prime}P(x)=U^{\prime}(\tau\circ s(x))-(\tau\circ s)P(x)=\tau(Us(x)-sP(x))=\Phi(x)\quad(\text{as }\tau|_{\mathfrak{h}}=\mathrm{Id}_{\mathfrak{h}}).\]

Thus, we have \((\chi,\psi,\Phi)=(\chi^{\prime},\psi^{\prime},\Phi^{\prime})\). Hence they give rise to the same element in \(H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\). Therefore, there is a well-defined map \(\Upsilon:\mathrm{Ext}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\rightarrow H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\).

Conversely, let \((\chi,\psi,\Phi)\) be a non-abelian 2-cocycle of \(\mathfrak{g}_{P}\) with values in \(\mathfrak{h}_{Q}\). Define \(\mathfrak{e}:=\mathfrak{g}\oplus\mathfrak{h}\) with the bilinear skew-symmetric bracket

\[[(x,h),(y,k)]_{\mathfrak{e}}:=([x,y]_{\mathfrak{g}},\psi_{x}k-\psi_{y}h+\chi(x,y)+[h,k]_{\mathfrak{h}}),\ \text{for }(x,h),(y,k)\in\mathfrak{e}.\]

It has been observed in [9] (using the conditions (8) and (9)) that the bracket \([\cdot,\cdot]_{\mathfrak{e}}\) satisfies the Jacobi identity.
In other words, \((\mathfrak{e},[\cdot,\cdot]_{\mathfrak{e}})\) is a Lie algebra. Further, we define a map \(U:\mathfrak{e}\rightarrow\mathfrak{e}\) by

\[U(x,h):=(P(x),Q(h)+\Phi(x)),\ \text{for all }(x,h)\in\mathfrak{e}.\]

Then we have

\[\begin{split}[U(x,h),U(y,k)]_{\mathfrak{e}}&=[(P(x),Q(h)+\Phi(x)),(P(y),Q(k)+\Phi(y))]_{\mathfrak{e}}\\&=\big([P(x),P(y)]_{\mathfrak{g}},\ \psi_{P(x)}Q(k)+\psi_{P(x)}\Phi(y)-\psi_{P(y)}Q(h)-\psi_{P(y)}\Phi(x)+\chi(P(x),P(y))\\&\qquad\quad+[Q(h),Q(k)]_{\mathfrak{h}}+[Q(h),\Phi(y)]_{\mathfrak{h}}+[\Phi(x),Q(k)]_{\mathfrak{h}}+[\Phi(x),\Phi(y)]_{\mathfrak{h}}\big)\\&=\big(P[P(x),y]_{\mathfrak{g}},\ Q\big(\psi_{P(x)}k-\psi_{y}Q(h)-\psi_{y}\Phi(x)+\chi(P(x),y)+[Q(h),k]_{\mathfrak{h}}+[\Phi(x),k]_{\mathfrak{h}}\big)\\&\qquad\quad+\Phi[P(x),y]_{\mathfrak{g}}\big)\quad(\text{by (10) and (11)})\\&=U\big([P(x),y]_{\mathfrak{g}},\ \psi_{P(x)}k-\psi_{y}(Q(h)+\Phi(x))+\chi(P(x),y)+[Q(h)+\Phi(x),k]_{\mathfrak{h}}\big)\\&=U[(P(x),Q(h)+\Phi(x)),(y,k)]_{\mathfrak{e}}\\&=U[U(x,h),(y,k)]_{\mathfrak{e}}.\end{split}\]

This shows that \(U:\mathfrak{e}\rightarrow\mathfrak{e}\) is an averaging operator, i.e., \(\mathfrak{e}_{U}\) is an averaging Lie algebra. Moreover, with \(i(h):=(0,h)\) and \(p(x,h):=x\), we have \(U\circ i=i\circ Q\) and \(p\circ U=P\circ p\), so that \(0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0\) is a non-abelian extension of \(\mathfrak{g}_{P}\) by \(\mathfrak{h}_{Q}\). Note that the section \(s(x):=(x,0)\) reproduces the given 2-cocycle, since \([s(x),s(y)]_{\mathfrak{e}}-s[x,y]_{\mathfrak{g}}=(0,\chi(x,y))\), \([s(x),(0,h)]_{\mathfrak{e}}=(0,\psi_{x}h)\) and \(Us(x)-sP(x)=(0,\Phi(x))\). One can further check that equivalent non-abelian 2-cocycles give rise to equivalent extensions, so this construction induces a well-defined map \(H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\rightarrow\mathrm{Ext}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\), which is inverse to the map \(\Upsilon\) constructed above. This completes the proof.
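For instance, specializing the above construction to a 2-cocycle with \(\chi=0\) and \(\Phi=0\) (so that conditions (8)-(11) reduce to the requirement that \(\psi\) is a compatible action of \(\mathfrak{g}_{P}\) on \(\mathfrak{h}_{Q}\) by derivations) gives the semidirect-product extension, anticipating the split extensions of Section 6:

```latex
% Specializing the bracket and averaging operator of Theorem 4.5
% to chi = 0 and Phi = 0 on e = g \oplus h:
\[
[(x,h),(y,k)]_{\mathfrak{e}}=\big([x,y]_{\mathfrak{g}},\ \psi_{x}k-\psi_{y}h+[h,k]_{\mathfrak{h}}\big),
\qquad
U(x,h)=\big(P(x),\,Q(h)\big).
\]
% The section s(x) = (x,0) then satisfies
\[
[s(x),s(y)]_{\mathfrak{e}}-s[x,y]_{\mathfrak{g}}=(0,0)
\quad\text{and}\quad
(U\circ s)(x)=(P(x),0)=(s\circ P)(x),
\]
% i.e. s is a morphism of averaging Lie algebras, so the extension splits.
```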
## 5. Inducibility of automorphisms and the Wells map

In this section, we study the inducibility of pairs of averaging Lie algebra automorphisms associated with a non-abelian extension. To this end, we introduce the Wells map in the present context and construct the Wells short exact sequence that connects various automorphism groups and the second non-abelian cohomology group.

Let \(\mathfrak{g}_{P}\) and \(\mathfrak{h}_{Q}\) be two averaging Lie algebras and \(0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0\) be a non-abelian extension of \(\mathfrak{g}_{P}\) by \(\mathfrak{h}_{Q}\). Let \(\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\) be the set of all averaging Lie algebra automorphisms \(\gamma\in\mathrm{Aut}(\mathfrak{e}_{U})\) that satisfy \(\gamma(\mathfrak{h})\subset\mathfrak{h}\). For such a \(\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\), we have \(\gamma|_{\mathfrak{h}}\in\mathrm{Aut}(\mathfrak{h}_{Q})\). For any section \(s:\mathfrak{g}\rightarrow\mathfrak{e}\) of the map \(p\), we define a map \(\overline{\gamma}:\mathfrak{g}\rightarrow\mathfrak{g}\) by \(\overline{\gamma}(x):=p\gamma s(x)\), for \(x\in\mathfrak{g}\). It is easy to verify that the map \(\overline{\gamma}\) is independent of the choice of the section \(s\) and that \(\overline{\gamma}\) is a bijection on \(\mathfrak{g}\). Further, for any \(x,y\in\mathfrak{g}\), we have

\[\begin{split}\overline{\gamma}([x,y]_{\mathfrak{g}})=p\gamma(s[x,y]_{\mathfrak{g}})&=p\gamma([s(x),s(y)]_{\mathfrak{e}}-\chi(x,y))\\&=p\gamma[s(x),s(y)]_{\mathfrak{e}}\quad(\text{as }\gamma(\mathfrak{h})\subset\mathfrak{h}\text{ and }p|_{\mathfrak{h}}=0)\\&=[p\gamma s(x),p\gamma s(y)]_{\mathfrak{g}}=[\overline{\gamma}(x),\overline{\gamma}(y)]_{\mathfrak{g}}\end{split}\]

and

\[\begin{split}(P\overline{\gamma}-\overline{\gamma}P)(x)&=(Pp\gamma s-p\gamma sP)(x)\\&=(pU\gamma s-p\gamma sP)(x)\\&=p\gamma(Us-sP)(x)=0\quad(\text{as }\gamma U=U\gamma,\ \gamma(\mathfrak{h})\subset\mathfrak{h}\text{ and }p|_{\mathfrak{h}}=0).\end{split}\]

This shows that the map \(\overline{\gamma}:\mathfrak{g}\rightarrow\mathfrak{g}\) is an automorphism of the averaging Lie algebra \(\mathfrak{g}_{P}\). In other words, \(\overline{\gamma}\in\mathrm{Aut}(\mathfrak{g}_{P})\).
Therefore, we obtain a group homomorphism

\[\Pi:\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\rightarrow\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\ \text{ defined by }\ \Pi(\gamma):=(\gamma|_{\mathfrak{h}},\overline{\gamma}),\ \text{for all }\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\]

(a short verification that \(\Pi\) respects composition is recorded after Lemma 5.2 below).

**5.1 Definition**.: A pair \((\beta,\alpha)\in\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\) of averaging Lie algebra automorphisms is said to be **inducible** if the pair \((\beta,\alpha)\) lies in the image of \(\Pi\), i.e., \((\beta,\alpha)\) is inducible if there exists \(\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\) such that \(\gamma|_{\mathfrak{h}}=\beta\) and \(\overline{\gamma}=\alpha\).

Our main aim in this section is to find a necessary and sufficient condition for a pair of averaging Lie algebra automorphisms \((\beta,\alpha)\in\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\) to be inducible. The main theorem of this section will be stated in terms of the Wells map in the context of averaging Lie algebras.

Let \(0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0\) be a non-abelian extension of averaging Lie algebras. For any fixed section \(s:\mathfrak{g}\rightarrow\mathfrak{e}\) of the map \(p\), let \((\chi,\psi,\Phi)\) be the corresponding non-abelian \(2\)-cocycle. Given any pair \((\beta,\alpha)\in\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\) of averaging Lie algebra automorphisms, we define a new triple \((\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})\) of linear maps \(\chi_{(\beta,\alpha)}:\wedge^{2}\mathfrak{g}\rightarrow\mathfrak{h}\), \(\psi_{(\beta,\alpha)}:\mathfrak{g}\rightarrow\mathrm{Der}(\mathfrak{h})\) and \(\Phi_{(\beta,\alpha)}:\mathfrak{g}\rightarrow\mathfrak{h}\) by

\[\chi_{(\beta,\alpha)}(x,y):=\beta\circ\chi(\alpha^{-1}(x),\alpha^{-1}(y)),\ \ (\psi_{(\beta,\alpha)})_{x}h:=\beta(\psi_{\alpha^{-1}(x)}\beta^{-1}(h))\ \ \text{and}\ \ \Phi_{(\beta,\alpha)}(x):=\beta\Phi(\alpha^{-1}(x)), \tag{16}\]

for \(x,y\in\mathfrak{g}\) and \(h\in\mathfrak{h}\). Then we have the following result.

**5.2 Lemma**.: _The triple \((\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})\) is a non-abelian \(2\)-cocycle._

Proof.: Since the triple \((\chi,\psi,\Phi)\) is a non-abelian \(2\)-cocycle, the identities (8), (9), (10) and (11) hold. In these identities, if we replace \(x,y,h\) by \(\alpha^{-1}(x),\alpha^{-1}(y),\beta^{-1}(h)\) respectively, we simply get the non-abelian \(2\)-cocycle conditions for the triple \((\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})\). For example, it follows from (10) that

\[\begin{split}\beta(\psi_{P\alpha^{-1}(x)}Q\beta^{-1}(h))&=\beta Q(\psi_{P\alpha^{-1}(x)}\beta^{-1}(h))+\beta Q[\Phi\alpha^{-1}(x),\beta^{-1}(h)]_{\mathfrak{h}}-\beta[\Phi\alpha^{-1}(x),Q\beta^{-1}(h)]_{\mathfrak{h}}\\&=\beta Q(\psi_{\alpha^{-1}(x)}Q\beta^{-1}(h))-\beta[\Phi\alpha^{-1}(x),Q\beta^{-1}(h)]_{\mathfrak{h}}.\end{split}\]

This can be written as (using (16))

\[\begin{split}(\psi_{(\beta,\alpha)})_{P(x)}Q(h)&=Q((\psi_{(\beta,\alpha)})_{P(x)}h)+Q[\Phi_{(\beta,\alpha)}(x),h]_{\mathfrak{h}}-[\Phi_{(\beta,\alpha)}(x),Q(h)]_{\mathfrak{h}}\\&=Q((\psi_{(\beta,\alpha)})_{x}Q(h))-[\Phi_{(\beta,\alpha)}(x),Q(h)]_{\mathfrak{h}}.\end{split}\]

This shows that the identity (10) also holds for the triple \((\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})\).
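For completeness, here is the verification, not spelled out above, that \(\Pi\) respects composition; it only uses that \(\overline{\gamma}\) is independent of the chosen section and that \(\mathrm{Ker}(p)=\mathfrak{h}\).

```latex
% For gamma_1, gamma_2 in Aut_h(e_U), a section s and x in g:
% p( gamma_2 s(x) - s(\overline{gamma_2}(x)) ) = 0, so
% gamma_2 s(x) = s(\overline{gamma_2}(x)) + h for some h in Ker(p) = h.
\[
\overline{\gamma_{1}\gamma_{2}}(x)
= p\gamma_{1}\gamma_{2}s(x)
= p\gamma_{1}\big(s(\overline{\gamma_{2}}(x))+h\big)
= p\gamma_{1}s(\overline{\gamma_{2}}(x))
= \overline{\gamma_{1}}\,\overline{\gamma_{2}}(x),
\]
% using gamma_1(h) \subset h and p|_h = 0. Consequently,
\[
\Pi(\gamma_{1}\gamma_{2})
= \big((\gamma_{1}\gamma_{2})|_{\mathfrak{h}},\ \overline{\gamma_{1}\gamma_{2}}\big)
= \Pi(\gamma_{1})\,\Pi(\gamma_{2}).
\]
```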
Note that the non-abelian \(2\)-cocycle \((\chi,\psi,\Phi)\), and therefore \((\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})\), depends on the chosen section \(s\). We now define a map \(\mathcal{W}:\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\rightarrow H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\) by

\[\mathcal{W}((\beta,\alpha))=[(\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})-(\chi,\psi,\Phi)], \tag{17}\]

the equivalence class of \((\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})-(\chi,\psi,\Phi)\). The map \(\mathcal{W}\) is called the **Wells map**. Note that the Wells map may not be a group homomorphism.

**5.3 Proposition**.: _The Wells map \(\mathcal{W}\) does not depend on the chosen section._

Proof.: Let \(s^{\prime}\) be any other section of the map \(p\) and let \((\chi^{\prime},\psi^{\prime},\Phi^{\prime})\) be the corresponding non-abelian \(2\)-cocycle. We have seen that the non-abelian \(2\)-cocycles \((\chi,\psi,\Phi)\) and \((\chi^{\prime},\psi^{\prime},\Phi^{\prime})\) are equivalent by the map \(\phi:=s-s^{\prime}\). Using this, it is easy to verify that the non-abelian \(2\)-cocycles \((\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})\) and \((\chi^{\prime}_{(\beta,\alpha)},\psi^{\prime}_{(\beta,\alpha)},\Phi^{\prime}_{(\beta,\alpha)})\) are equivalent by the map \(\beta\phi\alpha^{-1}\). Combining these results, we observe that the \(2\)-cocycles \((\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})-(\chi,\psi,\Phi)\) and \((\chi^{\prime}_{(\beta,\alpha)},\psi^{\prime}_{(\beta,\alpha)},\Phi^{\prime}_{(\beta,\alpha)})-(\chi^{\prime},\psi^{\prime},\Phi^{\prime})\) are equivalent by the map \(\beta\phi\alpha^{-1}-\phi\). Therefore, their corresponding equivalence classes in \(H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\) are the same. In other words, the map \(\mathcal{W}\) does not depend on the chosen section.

We are now in a position to prove the main result of this section.

**5.4 Theorem**.: _Let \(0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0\) be a non-abelian extension of averaging Lie algebras. A pair \((\beta,\alpha)\in\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\) of averaging Lie algebra automorphisms is inducible if and only if \(\mathcal{W}((\beta,\alpha))=0\)._

Proof.: Let \((\beta,\alpha)\in\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\) be inducible. Therefore, there exists an automorphism \(\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\) such that \(\gamma|_{\mathfrak{h}}=\beta\) and \(p\gamma s=\alpha\) (for any section \(s\)). For any \(x\in\mathfrak{g}\), we observe that

\[p(\gamma s-s\alpha)(x)=\alpha(x)-\alpha(x)=0.\]

Therefore, \((\gamma s-s\alpha)(x)\in\mathrm{Ker}(p)=\mathrm{Im}(i)\cong\mathfrak{h}\). We define a map \(\phi:\mathfrak{g}\rightarrow\mathfrak{h}\) by \(\phi(x):=(\gamma s-s\alpha)\alpha^{-1}(x)=\gamma s\alpha^{-1}(x)-s(x)\), for \(x\in\mathfrak{g}\).
Then we have

\[\begin{split}(\psi_{(\beta,\alpha)})_{x}h-\psi_{x}h&=\beta(\psi_{\alpha^{-1}(x)}\beta^{-1}(h))-\psi_{x}h\\&=\beta[s\alpha^{-1}(x),\beta^{-1}(h)]_{\mathfrak{e}}-[s(x),h]_{\mathfrak{e}}\\&=[\gamma s\alpha^{-1}(x),h]_{\mathfrak{e}}-[s(x),h]_{\mathfrak{e}}\quad(\text{as }\beta=\gamma|_{\mathfrak{h}})\\&=[\phi(x),h]_{\mathfrak{h}},\ \text{for any }x\in\mathfrak{g}\ \text{and}\ h\in\mathfrak{h}.\end{split}\]

Similarly, by direct calculations, we observe that

\[\begin{split}\chi_{(\beta,\alpha)}(x,y)-\chi(x,y)&=\psi_{x}\phi(y)-\psi_{y}\phi(x)-\phi([x,y]_{\mathfrak{g}})+[\phi(x),\phi(y)]_{\mathfrak{h}},\\ \Phi_{(\beta,\alpha)}(x)-\Phi(x)&=Q\phi(x)-\phi P(x),\ \text{for any }x,y\in\mathfrak{g}.\end{split}\]

Here \((\chi,\psi,\Phi)\) is the non-abelian \(2\)-cocycle induced by any fixed section \(s\). It follows from the above observation that the non-abelian \(2\)-cocycles \((\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})\) and \((\chi,\psi,\Phi)\) are equivalent by the map \(\phi=\gamma s\alpha^{-1}-s\). Hence we have

\[\mathcal{W}((\beta,\alpha)):=[(\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})-(\chi,\psi,\Phi)]=0.\]

Conversely, let \((\beta,\alpha)\in\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\) be a pair of averaging Lie algebra automorphisms such that \(\mathcal{W}((\beta,\alpha))=0\). Let \(s:\mathfrak{g}\rightarrow\mathfrak{e}\) be any section of the map \(p\) and \((\chi,\psi,\Phi)\) be the non-abelian \(2\)-cocycle induced by the section \(s\). Since \(\mathcal{W}((\beta,\alpha))=0\), it follows that the non-abelian \(2\)-cocycles \((\chi_{(\beta,\alpha)},\psi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})\) and \((\chi,\psi,\Phi)\) are equivalent, say by the map \(\phi:\mathfrak{g}\rightarrow\mathfrak{h}\). Note that, since \(s\) is a section of the map \(p\), any element \(e\in\mathfrak{e}\) can be written as \(e=h+s(x)\), for some \(h\in\mathfrak{h}\) and \(x\in\mathfrak{g}\). We now define a map \(\gamma:\mathfrak{e}\rightarrow\mathfrak{e}\) by

\[\gamma(e)=\gamma(h+s(x)):=(\beta(h)+\phi\alpha(x))+s(\alpha(x)),\ \text{for }e=h+s(x)\in\mathfrak{e}.\]

**Step I.** (\(\gamma\) is bijective) Suppose \(\gamma(h+s(x))=0\). Then it follows that \(s(\alpha(x))=0\). As both \(s\) and \(\alpha\) are injective maps, we have \(x=0\). Using this in the definition of \(\gamma\), we have \(\beta(h)=0\), which implies \(h=0\). This proves that \(\gamma\) is injective. Finally, let \(e=h+s(x)\in\mathfrak{e}\) be any arbitrary element. We consider the element \(e^{\prime}=(\beta^{-1}(h)-\beta^{-1}\phi(x))+s(\alpha^{-1}(x))\in\mathfrak{e}\). Then we have

\[\gamma(e^{\prime})=(h-\phi(x)+\phi(x))+s(x)=h+s(x)=e.\]

This shows that \(\gamma\) is also surjective. Hence \(\gamma\) is bijective.

**Step II.** (\(\gamma:\mathfrak{e}_{U}\rightarrow\mathfrak{e}_{U}\) is an automorphism of averaging Lie algebras) Take any two elements \(e_{1}=h_{1}+s(x_{1})\) and \(e_{2}=h_{2}+s(x_{2})\) of the vector space \(\mathfrak{e}\).
We have

\[\begin{split}[\gamma(e_{1}),\gamma(e_{2})]_{\mathfrak{e}}&=[\beta(h_{1})+\phi\alpha(x_{1})+s(\alpha(x_{1})),\ \beta(h_{2})+\phi\alpha(x_{2})+s(\alpha(x_{2}))]_{\mathfrak{e}}\\&=[\beta(h_{1}),\beta(h_{2})]_{\mathfrak{e}}+\underbrace{[\beta(h_{1}),\phi\alpha(x_{2})]_{\mathfrak{e}}}_{A}+\underbrace{[\beta(h_{1}),s(\alpha(x_{2}))]_{\mathfrak{e}}}_{B}+\underbrace{[\phi\alpha(x_{1}),\beta(h_{2})]_{\mathfrak{e}}}_{C}\\&\quad+\underbrace{[\phi\alpha(x_{1}),\phi\alpha(x_{2})]_{\mathfrak{e}}}_{D}+\underbrace{[\phi\alpha(x_{1}),s(\alpha(x_{2}))]_{\mathfrak{e}}}_{E}+\underbrace{[s(\alpha(x_{1})),\beta(h_{2})]_{\mathfrak{e}}}_{F}+\underbrace{[s(\alpha(x_{1})),\phi\alpha(x_{2})]_{\mathfrak{e}}}_{G}\\&\quad+\underbrace{[s(\alpha(x_{1})),s(\alpha(x_{2}))]_{\mathfrak{e}}}_{H}\\&=[\beta(h_{1}),\beta(h_{2})]_{\mathfrak{e}}\underbrace{-\beta(\psi_{x_{2}}h_{1})+\psi_{\alpha(x_{2})}\beta(h_{1})}_{A}\underbrace{-\psi_{\alpha(x_{2})}\beta(h_{1})}_{B}+\underbrace{\beta(\psi_{x_{1}}h_{2})-\psi_{\alpha(x_{1})}\beta(h_{2})}_{C}\\&\quad+\underbrace{\beta(\chi(x_{1},x_{2}))-\chi(\alpha(x_{1}),\alpha(x_{2}))-\psi_{\alpha(x_{1})}\phi\alpha(x_{2})+\psi_{\alpha(x_{2})}\phi\alpha(x_{1})+\phi\alpha([x_{1},x_{2}]_{\mathfrak{g}})}_{D}\\&\quad\underbrace{-\psi_{\alpha(x_{2})}\phi\alpha(x_{1})}_{E}+\underbrace{\psi_{\alpha(x_{1})}\beta(h_{2})}_{F}+\underbrace{\psi_{\alpha(x_{1})}\phi\alpha(x_{2})}_{G}+\underbrace{\chi(\alpha(x_{1}),\alpha(x_{2}))+s[\alpha(x_{1}),\alpha(x_{2})]_{\mathfrak{g}}}_{H}\\&=\beta\big([h_{1},h_{2}]_{\mathfrak{h}}-\psi_{x_{2}}h_{1}+\psi_{x_{1}}h_{2}+\chi(x_{1},x_{2})\big)+\phi\alpha([x_{1},x_{2}]_{\mathfrak{g}})+s[\alpha(x_{1}),\alpha(x_{2})]_{\mathfrak{g}}\\&=\beta\big([h_{1},h_{2}]_{\mathfrak{h}}+[h_{1},s(x_{2})]_{\mathfrak{e}}+[s(x_{1}),h_{2}]_{\mathfrak{e}}+\chi(x_{1},x_{2})\big)+\phi\alpha([x_{1},x_{2}]_{\mathfrak{g}})+s\alpha([x_{1},x_{2}]_{\mathfrak{g}})\\&=\gamma\big([h_{1},h_{2}]_{\mathfrak{h}}+[h_{1},s(x_{2})]_{\mathfrak{e}}+[s(x_{1}),h_{2}]_{\mathfrak{e}}+\chi(x_{1},x_{2})+s[x_{1},x_{2}]_{\mathfrak{g}}\big)\\&=\gamma\big([h_{1},h_{2}]_{\mathfrak{h}}+[h_{1},s(x_{2})]_{\mathfrak{e}}+[s(x_{1}),h_{2}]_{\mathfrak{e}}+[s(x_{1}),s(x_{2})]_{\mathfrak{e}}\big)\\&=\gamma([h_{1}+s(x_{1}),h_{2}+s(x_{2})]_{\mathfrak{e}})\\&=\gamma([e_{1},e_{2}]_{\mathfrak{e}}).\end{split}\]

This shows that \(\gamma\) preserves the Lie bracket. It is straightforward to verify that \(\gamma\circ U=U\circ\gamma\). Hence \(\gamma:\mathfrak{e}_{U}\rightarrow\mathfrak{e}_{U}\) is an automorphism of averaging Lie algebras.

**Step III.** For any \(h\in\mathfrak{h}\) and \(x\in\mathfrak{g}\), we observe that

\[\gamma(h)=\gamma(h+s(0))=\beta(h)\ \ \text{and}\ \ (p\gamma s)(x)=p\gamma(0+s(x))=p(\phi\alpha(x)+s(\alpha(x)))=ps(\alpha(x))=\alpha(x).\]

This shows that \(\Pi(\gamma)=(\gamma|_{\mathfrak{h}},p\gamma s)=(\beta,\alpha)\). Hence the pair \((\beta,\alpha)\) is inducible.

**5.5 Remark**.: It follows from the previous theorem that \(\mathcal{W}((\beta,\alpha))\) is an obstruction to the inducibility of the pair \((\beta,\alpha)\). Therefore, the images of the Wells map are the obstructions to the inducibility of pairs of automorphisms in \(\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\). Further, it follows that if the Wells map \(\mathcal{W}\) is the trivial map, then any pair of averaging Lie algebra automorphisms in \(\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\) is inducible. Note that the Wells map defined in (17) generalizes the well-known Wells map from the Lie algebra context.
In the Lie algebra case, the Wells map fits into an exact sequence (known as the Wells exact sequence). We will now generalize the Wells exact sequence in the context of averaging Lie algebras. Let \(0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0\) be a non-abelian extension of averaging Lie algebras. We define a subgroup \(\mathrm{Aut}^{\mathfrak{h},\mathfrak{g}}_{\mathfrak{h}}(\mathfrak{e}_{U})\subset\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\) by

\[\mathrm{Aut}^{\mathfrak{h},\mathfrak{g}}_{\mathfrak{h}}(\mathfrak{e}_{U}):=\{\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\mid\Pi(\gamma)=(\mathrm{Id}_{\mathfrak{h}},\mathrm{Id}_{\mathfrak{g}})\}.\]

**5.6 Theorem**.: _With the above notation, there is an exact sequence_

\[1\rightarrow\mathrm{Aut}^{\mathfrak{h},\mathfrak{g}}_{\mathfrak{h}}(\mathfrak{e}_{U})\xrightarrow{\iota}\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\xrightarrow{\Pi}\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\xrightarrow{\mathcal{W}}H^{2}_{\mathrm{nab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q}). \tag{18}\]

Proof.: The sequence (18) is exact at the first term as the inclusion map \(\iota:\mathrm{Aut}^{\mathfrak{h},\mathfrak{g}}_{\mathfrak{h}}(\mathfrak{e}_{U})\rightarrow\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\) is injective. Next, we shall show that the sequence is exact at the second term. Let \(\gamma\in\mathrm{Ker}(\Pi)\). Then we have \(\gamma|_{\mathfrak{h}}=\mathrm{Id}_{\mathfrak{h}}\) and \(p\gamma s=\mathrm{Id}_{\mathfrak{g}}\) (for any given section \(s\)). Therefore, \(\gamma\in\mathrm{Aut}^{\mathfrak{h},\mathfrak{g}}_{\mathfrak{h}}(\mathfrak{e}_{U})\). Conversely, if \(\gamma\in\mathrm{Aut}^{\mathfrak{h},\mathfrak{g}}_{\mathfrak{h}}(\mathfrak{e}_{U})\), then \(\Pi(\gamma)=(\gamma|_{\mathfrak{h}},p\gamma s)=(\mathrm{Id}_{\mathfrak{h}},\mathrm{Id}_{\mathfrak{g}})\), i.e., \(\gamma\in\mathrm{Ker}(\Pi)\). This proves that \(\mathrm{Ker}(\Pi)=\mathrm{Aut}^{\mathfrak{h},\mathfrak{g}}_{\mathfrak{h}}(\mathfrak{e}_{U})=\mathrm{Im}(\iota)\).

Finally, we take an element \((\beta,\alpha)\in\mathrm{Ker}(\mathcal{W})\). Then it follows from Theorem 5.4 that the pair \((\beta,\alpha)\) is inducible. Hence there exists an automorphism \(\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\) such that \(\Pi(\gamma)=(\beta,\alpha)\). This shows that \((\beta,\alpha)\in\mathrm{Im}(\Pi)\). Conversely, let \((\beta,\alpha)\in\mathrm{Im}(\Pi)\), i.e., \((\beta,\alpha)\) be inducible. Then again by Theorem 5.4, we have \(\mathcal{W}((\beta,\alpha))=0\). Hence \((\beta,\alpha)\in\mathrm{Ker}(\mathcal{W})\) and therefore, we obtain \(\mathrm{Ker}(\mathcal{W})=\mathrm{Im}(\Pi)\). This shows that the sequence (18) is also exact at the third term, which completes the proof.

## 6. Abelian extensions of averaging Lie algebras: a particular case

In this final section, we consider abelian extensions of averaging Lie algebras as a particular case of non-abelian extensions. In this way, we obtain new results for abelian extensions of averaging Lie algebras that have not been studied in the literature. Let \(\mathfrak{g}_{P}\) be an averaging Lie algebra and \(\mathfrak{h}_{Q}\) be a representation of it (see Definition 2.5).
Consider \(\mathfrak{h}_{Q}\) as an averaging Lie algebra, where \(\mathfrak{h}\) is equipped with the abelian Lie bracket. Let

\[0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0 \tag{19}\]

be a short exact sequence of averaging Lie algebras. Note that, for any section \(s:\mathfrak{g}\rightarrow\mathfrak{e}\) of the map \(p\), there is an induced \(\mathfrak{g}\)-representation on \(\mathfrak{h}\) with the action map \(\psi:\mathfrak{g}\rightarrow\mathrm{End}(\mathfrak{h})\) given by \(\psi_{x}h:=[s(x),h]_{\mathfrak{e}}\), for \(x\in\mathfrak{g},h\in\mathfrak{h}\). It is easy to verify that this representation does not depend on the choice of section. Moreover, with this induced \(\mathfrak{g}\)-representation, \(\mathfrak{h}_{Q}\) becomes a representation of the averaging Lie algebra \(\mathfrak{g}_{P}\). An extension (19) is said to be an **abelian extension** if this new representation on \(\mathfrak{h}_{Q}\) coincides with the given one. Equivalence between two abelian extensions can be defined similarly as in the case of non-abelian extensions. Let \(\mathrm{Ext}_{\mathrm{ab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\) be the set of all equivalence classes of abelian extensions of \(\mathfrak{g}_{P}\) by the given representation \(\mathfrak{h}_{Q}\). Then Theorem 4.5 can be rephrased as follows.

**6.1 Theorem**.: _Let \(\mathfrak{g}_{P}\) be an averaging Lie algebra and \(\mathfrak{h}_{Q}\) be a representation of it. Then there is a bijection \(\mathrm{Ext}_{\mathrm{ab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\cong H^{2}_{\mathrm{ALie}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\)._

Proof.: Note that an abelian extension is a (non-abelian) extension in which \(\mathfrak{h}\) carries the abelian Lie algebra structure and the induced \(\mathfrak{g}\)-representation on \(\mathfrak{h}\) coincides with the prescribed one. On the other hand, if we impose these conditions on a non-abelian \(2\)-cocycle \((\chi,\psi,\Phi)\) (i.e., on conditions (8), (9), (10), (11)), we simply get that \(\psi\) is the prescribed \(\mathfrak{g}\)-representation on \(\mathfrak{h}\) and \((\chi,\Phi)\) is a \(2\)-cocycle in the cochain complex \(\{C^{\bullet}_{\mathrm{ALie}}(\mathfrak{g}_{P},\mathfrak{h}_{Q}),\delta_{\mathrm{ALie}}\}\) defined in Section 2. Thus, abelian extensions give rise to \(2\)-cocycles in \(\{C^{\bullet}_{\mathrm{ALie}}(\mathfrak{g}_{P},\mathfrak{h}_{Q}),\delta_{\mathrm{ALie}}\}\). It is similarly verified that equivalent abelian extensions correspond to cohomologous \(2\)-cocycles. Hence we get the bijection \(\mathrm{Ext}_{\mathrm{ab}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\cong H^{2}_{\mathrm{ALie}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\).

Let \(\mathfrak{g}_{P}\) be an averaging Lie algebra, \(\mathfrak{h}_{Q}\) be a representation and \(0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0\) be an abelian extension of \(\mathfrak{g}_{P}\) by \(\mathfrak{h}_{Q}\). Let \(\psi:\mathfrak{g}\rightarrow\mathrm{End}(\mathfrak{h})\) denote the \(\mathfrak{g}\)-representation on \(\mathfrak{h}\).
We define a subgroup \(C_{\psi}\subset\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\) by

\[C_{\psi}:=\{(\beta,\alpha)\in\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\mid\beta(\psi_{x}h)=\psi_{\alpha(x)}\beta(h),\ \text{for all }x\in\mathfrak{g},h\in\mathfrak{h}\}.\]

The space \(C_{\psi}\) is called the space of compatible pairs of automorphisms. Note that, if \(s:\mathfrak{g}\rightarrow\mathfrak{e}\) is a section of \(p\) and \((\chi,\Phi)\) is the \(2\)-cocycle induced by \(s\), then for an arbitrary \((\beta,\alpha)\in\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\), the pair \((\chi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})\) may not be a \(2\)-cocycle. However, if \((\beta,\alpha)\in C_{\psi}\), then \((\chi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})\) turns out to be a \(2\)-cocycle. Therefore, we can define a map \(\mathcal{W}:C_{\psi}\rightarrow H^{2}_{\mathrm{ALie}}(\mathfrak{g}_{P},\mathfrak{h}_{Q})\) by

\[\mathcal{W}((\beta,\alpha)):=[(\chi_{(\beta,\alpha)},\Phi_{(\beta,\alpha)})-(\chi,\Phi)],\ \text{for }(\beta,\alpha)\in C_{\psi}. \tag{20}\]

As in the non-abelian case, the Wells map here does not depend on the chosen section. Thus, Theorem 5.4 in the case of abelian extensions can be rephrased as follows.

**6.2 Theorem**.: _Let \(0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0\) be an abelian extension of an averaging Lie algebra \(\mathfrak{g}_{P}\) by a representation \(\mathfrak{h}_{Q}\). Then a pair \((\beta,\alpha)\in\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\) is inducible if and only if \((\beta,\alpha)\in C_{\psi}\) and \(\mathcal{W}((\beta,\alpha))=0\)._

Since the image of the map \(\Pi:\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\rightarrow\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\) lies in \(C_{\psi}\subset\mathrm{Aut}(\mathfrak{h}_{Q})\times\mathrm{Aut}(\mathfrak{g}_{P})\), we have the following Wells short exact sequence for abelian extensions.

**6.3 Theorem**.: _Let \(0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0\) be an abelian extension of an averaging Lie algebra \(\mathfrak{g}_{P}\) by a representation \(\mathfrak{h}_{Q}\). Then there is an exact sequence_

\[1\rightarrow\mathrm{Aut}^{\mathfrak{h},\mathfrak{g}}_{\mathfrak{h}}(\mathfrak{e}_{U})\xrightarrow{\iota}\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\xrightarrow{\Pi}C_{\psi}\xrightarrow{\mathcal{W}}H^{2}_{\mathrm{ALie}}(\mathfrak{g}_{P},\mathfrak{h}_{Q}). \tag{21}\]

An abelian extension \(0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0\) of an averaging Lie algebra \(\mathfrak{g}_{P}\) by a representation \(\mathfrak{h}_{Q}\) is said to be **split** if there exists a section \(s:\mathfrak{g}\rightarrow\mathfrak{e}\) which is a morphism of averaging Lie algebras.
Then the averaging Lie algebra \(\mathfrak{e}_{U}\) is isomorphic to the averaging Lie algebra \((\mathfrak{g}\oplus\mathfrak{h})_{P\oplus Q}\), where the Lie bracket on \(\mathfrak{g}\oplus\mathfrak{h}\) is given by the semi-direct product

\[[(x,h),(y,k)]_{\ltimes}:=([x,y]_{\mathfrak{g}},\psi_{x}k-\psi_{y}h),\ \text{for }(x,h),(y,k)\in\mathfrak{g}\oplus\mathfrak{h}.\]

Thus, if \((\chi,\Phi)\) is the \(2\)-cocycle corresponding to the above split abelian extension induced by the section \(s\), then

\[\chi(x,y)=[s(x),s(y)]_{\mathfrak{e}}-s[x,y]_{\mathfrak{g}}=0\ \ \text{and}\ \ \Phi(x)=(Us-sP)(x)=0,\ \text{for }x,y\in\mathfrak{g}.\]

Therefore, the Wells map defined in (20) vanishes identically. Hence, for split abelian extensions, the exact sequence (21) takes the following form:

\[1\rightarrow\mathrm{Aut}^{\mathfrak{h},\mathfrak{g}}_{\mathfrak{h}}(\mathfrak{e}_{U})\xrightarrow{\iota}\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\xrightarrow{\Pi}C_{\psi}\rightarrow 1. \tag{22}\]

Note that we can define a group homomorphism \(\rho:C_{\psi}\rightarrow\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\) by \(\rho((\beta,\alpha))(x,h)=\big(\alpha(x),\beta(h)\big)\), for \((\beta,\alpha)\in C_{\psi}\) and \((x,h)\in\mathfrak{g}\oplus\mathfrak{h}\cong\mathfrak{e}\). Further, we have \((\Pi\circ\rho)(\beta,\alpha)=\big(\rho(\beta,\alpha)|_{\mathfrak{h}},\overline{\rho(\beta,\alpha)}\big)=(\beta,\alpha)\), as \(\overline{\rho(\beta,\alpha)}=p\circ\rho(\beta,\alpha)\circ s=\alpha\). This shows that (22) is a split exact sequence in the category of groups. As a consequence, we obtain the following result.

**6.4 Proposition**.: _Let \(0\rightarrow\mathfrak{h}_{Q}\xrightarrow{i}\mathfrak{e}_{U}\xrightarrow{p}\mathfrak{g}_{P}\rightarrow 0\) be a split abelian extension of an averaging Lie algebra \(\mathfrak{g}_{P}\) by a representation \(\mathfrak{h}_{Q}\). Then, as groups,_

\[\mathrm{Aut}_{\mathfrak{h}}(\mathfrak{e}_{U})\cong C_{\psi}\ltimes\mathrm{Aut}^{\mathfrak{h},\mathfrak{g}}_{\mathfrak{h}}(\mathfrak{e}_{U}),\]

_where the right-hand side is the semi-direct product of groups._

**Acknowledgements**.: The first named author would like to thank Indian Institute of Technology (IIT) Kharagpur for providing the beautiful academic atmosphere where his part of the research was carried out. The second named author acknowledges the Tata Institute of Fundamental Research (TIFR) Mumbai for the postdoctoral fellowship.

**Data Availability Statement**.: Data sharing does not apply to this article as no new data were created or analyzed in this study.
2309.15060
Constrained Deep Reinforcement Learning for Fronthaul Compression Optimization
In the Centralized-Radio Access Network (C-RAN) architecture, functions can be placed in the central or distributed locations. This architecture can offer higher capacity and cost savings but also puts strict requirements on the fronthaul (FH). Adaptive FH compression schemes that adapt the compression amount to varying FH traffic are promising approaches to deal with stringent FH requirements. In this work, we design such a compression scheme using a model-free off policy deep reinforcement learning algorithm which accounts for FH latency and packet loss constraints. Furthermore, this algorithm is designed for model transparency and interpretability which is crucial for AI trustworthiness in performance critical domains. We show that our algorithm can successfully choose an appropriate compression scheme while satisfying the constraints and exhibits a roughly 70\% increase in FH utilization compared to a reference scheme.
Axel Grönland, Alessio Russo, Yassir Jedra, Bleron Klaiqi, Xavier Gelabert
2023-09-26T16:40:47Z
http://arxiv.org/abs/2309.15060v2
# Constrained Deep Reinforcement Learning for Fronthaul Compression Optimization

###### Abstract

In the Centralized-Radio Access Network (C-RAN) architecture, functions can be placed in the central or distributed locations. This architecture can offer higher capacity and cost savings but also puts strict requirements on the fronthaul (FH). Adaptive FH compression schemes that adapt the compression amount to varying FH traffic are promising approaches to deal with stringent FH requirements. In this work, we design such a compression scheme using a model-free off policy deep reinforcement learning algorithm which accounts for FH latency and packet loss constraints. Furthermore, this algorithm is designed for model transparency and interpretability which is crucial for AI trustworthiness in performance critical domains. We show that our algorithm can successfully choose an appropriate compression scheme while satisfying the constraints and exhibits a roughly 70% increase in FH utilization compared to a reference scheme.

C-RAN, fronthaul, machine learning, reinforcement learning, performance evaluation.

## I Introduction

Centralized Radio Access Network (C-RAN) allows the splitting of RAN functionalities between remote radio units (RRU) near antenna sites and the baseband unit (BBU) at a centralized location. In this setup, a centralized pool of BBUs can jointly process RAN functions from multiple RRUs, allowing better resource utilization and dimensioning than non-centralized RAN options [1]. C-RAN also offers increased maintainability, flexibility, upgradability, and improved coordination features such as coordinated multi-point (CoMP) and inter-cell interference coordination (ICIC), among others [2]. Conversely, C-RAN deployments may cause high data rate requirements on the fronthaul (FH) and increased latency in the signal processing chain [2].

A major challenge in C-RAN deployments is the huge demand on bandwidth aggregation required for the FH, especially for specific split options (see e.g. [3]). Fortunately, one can resort to more favourable splits [4] and data compression methods [5, 6, 7] that can help diminish the FH link data rate demands. Despite the use of these methods, further development is necessary and additional measures are required, as explained hereafter.

Various approaches have been employed in recent studies to address the challenges mentioned above. For instance, [8] introduces a graph-based framework that effectively reduces the FH cost by appropriately splitting and placing baseband processing functions within the network. A lossless FH compression technique is introduced in [5], which relies on the proportion of utilized resources. In [9], the authors offer insights into FH compression using modulation compression, with a reported reduction in required FH capacity of up to 82%. Modulation compression and scheduling strategies are combined in [7] to further optimize the use of FH-limited deployments. In [10], joint FH compression and precoding design is proposed, where two different splits are investigated to determine the location of the precoder. Notably, the works mentioned above rely on conventional mathematical optimization approaches, which are generally complex and require the availability of underlying models. However, obtaining optimal solutions can be challenging due to their high complexity and the difficulty of acquiring underlying models for realistic scenarios. To help with this, Machine Learning (ML) approaches can be employed.
The literature extensively covers the use of ML techniques for complex optimization problems in wireless networks, see e.g. [11] and sources cited therein. In the C-RAN domain, supervised learning has been used to investigate UE slicing and functional split optimization [12]. However, obtaining high-quality labels for supervised learning can be challenging and costly in practice. Another proposed framework, in [13], uses model-free reinforcement learning (RL) to jointly select the most suitable split and computing resource allocation.

In this work, we discuss the configuration and optimization of the FH in the downlink (DL) direction assuming a C-RAN deployment. We propose an off-policy constrained reinforcement learning framework. We show that we can partition value iteration, which is useful for distribution and explainability. We implement two classical Deep RL algorithms which dynamically adjust various parameters associated with FH compression schemes, such as modulation order, precoder granularity and precoder weight quantization. The primary objective is to maximize FH utilization, and thus the air interface throughput, while ensuring that FH latency and the packet loss rate remain below predefined thresholds. Our contribution is unique as it addresses the problem of FH compression optimization using learning-based techniques under FH latency and packet loss constraints.

The remainder of this paper is organized as follows. In Section II we introduce the framework of Markov Decision Processes (MDPs) and Reinforcement Learning (RL). In Section III we describe the assumed C-RAN scenario, outline our simulation setup, and provide a description of our system models. The formulation of the optimization problem is given in Section IV. We derive a RL problem formulation and show its equivalence to the original problem. Next, we derive a reward function that maps to the problem we are solving, along with some additional techniques to stabilize and speed up learning. In Section V, we present the results of our experiments, focusing on the expected throughput gain compared to other less dynamic policies. Lastly, Section VI provides concluding remarks.

## II Background

In this section we describe the framework of Markov Decision Processes and Reinforcement Learning.

### _Markov Decision Process_

A Markov Decision Process (MDP) is described by a tuple \((S,A,P,r,p_{0})\) [14]: \(S\) is the state-space, representing all possible states that the system can be in; \(A\) is the action-space, representing the set of all actions that an agent can take; \(P:S\times A\times S\rightarrow[0,1]\) is the transition kernel, where \(P(s^{\prime}|s,a)\) corresponds to the probability of transitioning from state \(s\) to state \(s^{\prime}\) upon selecting action \(a\); \(r:S\times A\rightarrow\mathbb{R}\) is the reward function, providing a scalar reward after taking an action in a given state; lastly, \(p_{0}:S\rightarrow[0,1]\) denotes the initial state distribution of the MDP.

The system evolves over discrete time steps. At each step \(t\), the MDP is in a state \(s_{t}\), which is observed by an agent. The agent selects an action \(a_{t}\in A\), according to a stationary Markov policy \(\pi:S\times A\rightarrow[0,1]\), influencing the state transition governed by \(P\). The agent's objective is to find a policy \(\pi\) that maximizes the total expected discounted reward over an infinite horizon.
### _Reinforcement Learning_ Reinforcement Learning (RL) deals with the problem of learning an optimal policy \(\pi\) in an MDP with unknown dynamics or large-dimensional state-action spaces [15]. Off-policy methods like \(Q\)-learning [16] can be used for MDPs with finite state-action spaces to learn the greedy policy \(\pi^{*}\). However, in the context of high-dimensional state and action spaces, traditional tabular methods fall short. Deep RL algorithms, such as Deep \(Q\)-Networks (DQN) and Soft Actor-Critic (SAC), leverage neural networks to approximate the value function \(V^{\pi}\) and/or the policy \(\pi\). DQN [17], an extension of the \(Q\)-learning algorithm, uses a deep neural network to approximate the \(Q\)-function \(Q^{\pi}\), enabling RL in environments with continuous state-spaces and finite action-spaces. SAC [18] is an actor-critic algorithm, based on \(Q\)-learning, designed for environments with continuous action spaces, incorporating an entropy term to achieve a balance between exploration and exploitation. We refer the reader to [17, 18] for more information regarding these algorithms. ## III Problem Formulation and Model In this section, we first present a formal description of the FH compression optimization problem in C-RAN under latency constraints. Then, we formulate this problem as a constrained MDP. ### _C-RAN scenario description_ We assume a 3GPP NR Time Division Duplex (TDD) system, with data flowing in the DL direction. In the frequency domain, Orthogonal Frequency Division Multiplexing (OFDM) subcarriers are separated by a given subcarrier spacing (SCS), defined as \(\Delta f_{\text{SCS}}=15\cdot 2^{\mu}\) (kHz) with SCS index \(\mu\in\{0,1,2,3,4\}\). The total available bandwidth \(B\) (Hz) is divided into \(N_{\text{B},\mu}^{\text{PRB}}\) Physical Resource Blocks (PRBs), each PRB containing 12 consecutive subcarriers. In the time domain, transmission of UL and DL data is cyclically carried out over a predefined time duration measured in UL and DL _slots_ respectively, following a pattern given by the configured TDD frame-structure. Each slot has a duration of 14 symbols, that is \(T_{slot}^{\mu}=14\cdot T_{symb}^{\mu}\), with \(T_{symb}^{\mu}\) the duration of an OFDM symbol. It is also common to define the number of subcarrier-symbol pairs contained in a single PRB during the duration of a slot, namely Resource Elements (REs), hence \(N_{\text{RE}}=12\cdot 14=168\). Massive MIMO with \(N_{ant}\) antennas at the transmitter is considered, where digital precoding is applied with pre-calculated weights based on channel estimations from UL pilot measurements [19]. As a result, up to \(v_{lay}\leq N_{ant}\) spatially multiplexed users (or layers) can be scheduled at the same time, over the same PRB. In this study, we focus on the C-RAN architecture [1], which allows the centralization of the baseband processing for multiple (\(K\)) geographically distributed RRUs, each serving a cell. 
The processing can be split between a centralized location housing a pool of BBUs and said individual RRUs. Fig. 1(a) illustrates the physical architecture for a simple \(K=3\) cell scenario, where the RRUs and the BBU pool are interconnected via the FH. The FH provides a link of capacity \(C_{\text{FH}}\) (Gb/s) and transports the aggregate DL generated traffic towards the RRUs. From a logical architecture perspective, see Fig. 1(b), and for each cell served by an RRU, we can consider the Baseband Low (BBL) entity, residing at each RRU, and the Baseband High (BBH) entity, residing at the BBU pool. The term _split_ will be used hereafter to describe the amount of baseband processing functions residing in the BBL and the BBH. Similar to [5, 19], the adopted DL split considers the encoded user data and the precoding weights to be sent separately over the FH, see Fig. 1(b). The encoded user data (in bits) can be mapped to the corresponding modulated symbols at the BBL, thus experiencing no quantization errors over the FH. The precoding weights, on the other hand, being complex-valued samples, need to be quantized at a bit resolution \(b^{w}\) (bits/sample), thus being prone to quantization errors. ### _FH compression configuration_ The FH capacity dimensioning usually takes into account the statistical multiplexing gain resulting from the spatial traffic distribution at different cells in a given area [20]. In practice, this means that the FH capacity is usually dimensioned below the maximum aggregated peak data rates across all cells, since the probability of such an event is considered low. Nonetheless, in the event of high load across all cells, keeping the FH throughput below the (under-)dimensioned capacity will require compression methods. Specifically, three methods will be considered in this paper. Firstly, the modulation order, which in NR is given by the set \(\mathcal{Q}^{\text{NR}}=\{2,4,6,8\}\) bit/symbol [21], can be restricted to some maximum value \(q\in\mathcal{Q}^{\text{NR}}\) so that symbols are represented with fewer bits, thus lowering the FH throughput. This will, of course, have an impact on the air interface throughput, but its use can be limited to specific time instants where the FH cannot support the offered traffic. Second, we can modify the bitwidth \(b^{w}\in\mathbb{N}\) (expressed in bits) used to quantize and represent the complex-valued precoding weight samples for their transmission over the FH. Finally, one can tune the sub-band precoding granularity, \(r^{w}\), which reflects the number of subcarriers that can be assumed to share similar channel conditions and therefore admit the same precoding weight value [5]. We define the sub-band precoding granularity as the number of consecutive PRBs to which the same precoding weight is applied. For example, \(r^{w}=1\) indicates that each PRB in the entire bandwidth will be precoded with its own distinctive weight, whereas \(r^{w}=2\) means that two consecutive PRBs will be precoded with the same weight (effectively halving the weight payload to be transmitted over the FH), etc. To ensure optimal air interface performance, it is crucial to employ FH compression only when necessary. This means using it when the FH utilization (the ratio between used and available FH capacity) is high and there is a risk of not delivering packets on time over the FH. At medium and low FH loads, compression should be reduced, which increases FH utilization while keeping latency within its limit. 
With this in mind, the main optimization criterion going forward is to maximize FH utilization while keeping FH latency and packet loss within acceptable limits. The latency for cell \(k\), \(L_{k}\), accounts for the queuing time in the BBH and in the switch, as well as the transmission time over the FH link. We further denote the maximum latency as \(L_{\max}\). In the event of high traffic, the switch may experience buffer overflow, resulting in packet loss. We denote by \(LP_{k}\) the packet loss for cell \(k\) and aim to maintain zero total packet loss over all cells, i.e., \(\sum_{k}LP_{k}=0\). ### _FH utilization_ Considering the functional split given in Sec. III-A, the FH transports the aggregated traffic towards the different cells, comprising the data payload and precoding weights. The data payload (in bits) to be delivered at a given time slot \(t\) intended for cell \(k\) is given by: \[N_{t,k}^{d}=N_{\text{RE}}\cdot v_{lay}\cdot N_{t,k}^{\text{PRB}}\cdot q_{t,k}, \tag{1}\] where \(v_{lay}\), \(N_{t,k}^{\text{PRB}}\), and \(q_{t,k}\) denote the number of layers, the number of allocated PRBs, and the modulation order, respectively. In a similar way, the number of precoding weight bits transmitted at slot \(t\) towards cell \(k\) can be obtained as follows: \[N_{t,k}^{w}=\left\lceil\frac{N_{t,k}^{\text{PRB}}}{r_{t,k}^{w}}\right\rceil\cdot v_{lay}\cdot N_{ant}\cdot b_{t,k}^{w}, \tag{2}\] with \(r_{t,k}^{w}\), \(N_{ant}\), and \(b_{t,k}^{w}\) the precoder granularity, the number of antennas, and the weight bit quantization, respectively. The FH data rate at slot \(t\) for cell \(k\) is given by: \[R_{t,k}^{\text{FH}}=\frac{1}{T_{slot}^{\mu}}\left(N_{t,k}^{d}+N_{t,k}^{w}\right), \tag{3}\] where \(T_{slot}^{\mu}\) is the slot duration. The FH utilization is the ratio between the FH data rate and the FH capacity, \(\rho_{t,k}=\frac{R_{t,k}^{\text{FH}}}{C_{\text{FH}}}\). We slightly abuse notation and also write \(\rho_{t}=\sum_{k=1}^{K}\rho_{t,k}\). ### _Constrained MDP Formulation_ Recalling that the compression configuration is controlled via three different parameters \((q_{k},b_{k}^{w},r_{k}^{w})\), we define the system state at time \(t\) to be: \[s_{t}=\{(\rho_{t,k},L_{t,k},LP_{t,k},q_{t,k},b_{t,k}^{w},r_{t,k}^{w}),k\in[K]\}, \tag{4}\] which includes the FH utilization \(\rho_{t,k}\), the FH latency \(L_{t,k}\), the lost packets \(LP_{t,k}\), and the configuration parameters \((q_{t,k},b_{t,k}^{w},r_{t,k}^{w})\) at time \(t\) and cell \(k\). The action at time \(t\) is defined as an incremental change in configuration as follows: \[a_{t}=\{(\Delta q_{t,k},\Delta b_{t,k}^{w},\Delta r_{t,k}^{w}),k\in[K]\}. \tag{5}\] Here, \(\Delta q_{t,k},\Delta b_{t,k}^{w},\Delta r_{t,k}^{w}\in\{-1,0,1\}\) denote, respectively, changes in the parameters \(q_{t,k}\), \(b_{t,k}^{w}\), and \(r_{t,k}^{w}\). More specifically, \(\Delta q_{t,k}=-1\) leads to a lower modulation order, \(\Delta q_{t,k}=1\) moves to a higher modulation order, and \(\Delta q_{t,k}=0\) causes no change. The same action encoding applies to \(\Delta b_{t,k}^{w}\) and \(\Delta r_{t,k}^{w}\). Note that the size of the joint action-space is exponential in \(K\). Hence, for DQN we assume _homogeneous load_, so that the action space is of constant size, i.e., \(O(1)\). Alternatively, using actor-critic algorithms (such as SAC) it is possible to alleviate this issue by using an auto-regressive policy (see also the appendix1). Footnote 1: Technical report with appendix: [https://arxiv.org/abs/2309.15060](https://arxiv.org/abs/2309.15060) Fig. 1: Considered scenario with (a) physical and (b) logical architectures. 
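As an illustrative aside, the load model in eqs. (1)-(3) can be sketched in a few lines; the constants below mirror the simulation parameters of Table I, while the routine itself is only a sketch and not part of the simulator.

```python
import math

N_RE, V_LAY, N_ANT = 168, 12, 64  # REs per PRB-slot, layers, antennas (Table I)
T_SLOT, C_FH = 0.5e-3, 25e9       # slot duration (s), FH capacity (bit/s)

def fh_utilization(n_prb: int, q: int, b_w: int, r_w: int) -> float:
    """Per-cell FH utilization rho_{t,k} for one slot, following eqs. (1)-(3)."""
    n_data = N_RE * V_LAY * n_prb * q                        # eq. (1), data payload
    n_weight = math.ceil(n_prb / r_w) * V_LAY * N_ANT * b_w  # eq. (2), weight bits
    return ((n_data + n_weight) / T_SLOT) / C_FH             # eq. (3) over C_FH

# Stronger compression (lower q and b_w, higher r_w) reduces utilization:
print(fh_utilization(273, q=8, b_w=22, r_w=1))  # ~0.72
print(fh_utilization(273, q=6, b_w=16, r_w=4))  # ~0.33
```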
Lastly, we define the reward and the constraints. The reward at time \(t\) is defined as \[r(s_{t},a_{t})=\rho_{t}. \tag{6}\] For cell \(k\), the latency \(L_{k}\) and packet loss \(LP_{k}\) constraints are introduced in the problem formulation as follows: \[\mathbb{P}_{s\sim d^{\pi}}\left(\max_{k}L_{k}>L_{max}\right)<\xi, \tag{7}\] \[\mathbb{P}_{s\sim d^{\pi}}\left(\sum_{k}LP_{k}\neq 0\right)<\xi, \tag{8}\] where \(\xi\) is a prescribed confidence level and \(d^{\pi}\) denotes the ergodic stationary measure under policy \(\pi\). ## IV Method In this section, we present RL methods to solve constrained MDPs. For clarity, we will consider a generic constrained RL problem modeled as an infinite horizon MDP with discount factor \(\gamma\): \[\max_{\pi} \mathbb{E}_{s\sim p_{0}}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})|s_{0}=s\right],\] (9) s.t. \[\mathbb{P}_{s\sim d^{\pi}}\left(s\notin\mathcal{S}_{i}\right)<\xi_{i},\qquad i=1,\ldots,N,\] \[s_{t+1}\sim P(\cdot|s_{t},a_{t}),\ a_{t}\sim\pi(\cdot|s_{t}),\] where \(\mathcal{S}_{i}\) and \(\xi_{i}\) are the safety set and confidence level parameter for the \(i\)-th constraint, respectively. We remark that our constrained MDP formulation presented in Section III-D is an instance of the optimization problem defined in (9). Indeed, the states are given by (4), the actions are given by (5), the rewards are given by (6), and the constraints are defined in (7) and (8). ### _Constrained Reinforcement Learning Approach_ _Formulating the dual problem._ Just as in previous works [22, 23], we relax the problem by considering the constraints over the discounted stationary distribution of the states \(d_{\gamma}^{\pi}(s)=(1-\gamma)\sum_{s^{\prime}}p_{0}(s^{\prime})\sum_{t\geq 0}\gamma^{t}\mathbb{P}(s_{t}=s|\pi,s_{0}=s^{\prime})\), which is the discounted probability of visiting state \(s\) when following policy \(\pi\). In particular, when \(\gamma\to 1\) we recover the original optimization problem [14, 22]. Then, notice that \(\mathbb{E}_{s\sim d_{\gamma}^{\pi}}[\mathbf{1}_{\{s\notin S_{i}\}}]=1-(1-\gamma)\mathbb{E}_{s_{0}\sim p_{0}}^{\pi}[\sum_{t\geq 0}\gamma^{t}\mathbf{1}_{\{s_{t}\in S_{i}\}}]\). Let \(\lambda=\begin{bmatrix}1&\lambda_{1}&\ldots&\lambda_{N}\end{bmatrix}^{\top}\) and \[R(s,a)\coloneqq\begin{bmatrix}r_{0}(s,a)&r_{1}(s,a)&\ldots&r_{N}(s,a)\end{bmatrix}^{\top},\] where \(r_{0}(s,a)=r(s,a)\) and \(r_{i}(s,a)=(1-\gamma)\mathbf{1}_{\{s\in S_{i}\}}\) for \(i=1,\ldots,N\). Thus, we can approximate the previous problem in (9) as an unconstrained optimization problem \[\min_{\lambda_{1},\ldots,\lambda_{N}>0}\max_{\pi}\mathbb{E}_{s\sim p_{0}}^{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}\lambda^{\top}R(s_{t},a_{t})\Big{|}s_{0}=s\right]+\lambda^{\top}\xi, \tag{10}\] where \(\xi=\begin{bmatrix}0&\xi_{1}-1&\ldots&\xi_{N}-1\end{bmatrix}^{\top}\). We observe that for a fixed vector \(\lambda\) the inner maximization amounts to solving an RL problem. Consequently, we can address the problem using a descent-ascent approach, where we use classical RL techniques to find an optimal policy that maximizes \(\mathbb{E}_{s\sim p_{0}}^{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}\lambda^{\top}R(s_{t},a_{t})\Big{|}s_{0}=s\right]\), while updating the dual variables \(\lambda_{i}\). _Value function decomposition._ Notice that the inner problem is linear in the Lagrangian variables. 
Then, we can rewrite the objective function as \[\min_{\lambda_{1},\ldots,\lambda_{N}>0}\max_{\pi}\lambda^{\top}V^{\pi}+\lambda^{\top}\xi, \tag{11}\] with \(V^{\pi}=\begin{bmatrix}\mathbb{E}_{s\sim p_{0}}[V_{0}^{\pi}(s)]&\ldots&\mathbb{E}_{s\sim p_{0}}[V_{N}^{\pi}(s)]\end{bmatrix}^{\top}\) and \(V_{i}^{\pi}(s)=\mathbb{E}^{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{i}(s_{t},a_{t})\Big{|}s_{0}=s\right]\). We propose to use RL to learn each term \(V_{i}^{\pi}(s)\) separately (as well as the greedy policy of \(\lambda^{\top}V^{\pi}\)), instead of learning \(\lambda^{\top}V^{\pi}\) and its greedy policy directly. Empirically, this seems to improve convergence. Practically, one may be interested in knowing the value for each separate reward function \(r_{i}\) (and thus derive some insight into the value of the constraints). To that end, we need a theorem that guarantees that updating each value function separately still leads to the same optimal policy \(\pi^{\star}\) that maximizes \(\mathbb{E}_{s\sim p_{0}}^{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}\lambda^{\top}R(s_{t},a_{t})\Big{|}s_{0}=s\right]\). We also let \(Q_{\lambda}^{\star}\) be the \(Q\)-value function of the policy \(\pi^{\star}=\arg\max_{\pi}\sum_{i}\lambda_{i}V_{i}^{\pi}\). Then, let \(K=\mathbb{R}_{+}^{(N+1)\times|S|\cdot|A|}\) be the set of nonnegative real matrices of size \((N+1)\times|S|\cdot|A|\), and let \(Q=\begin{bmatrix}Q_{1}&\ldots&Q_{N+1}\end{bmatrix}^{\top}\in K\), where each \(Q_{i}\) is of size \(|S|\cdot|A|\). Define \(\mathcal{T}_{\lambda}:K\to K\) to be the following Bellman operator \[\mathcal{T}_{\lambda}Q(s,a)\coloneqq R(s,a)+\gamma\mathbb{E}_{s^{\prime}}\left[Q(s^{\prime},\arg\max_{a^{\prime}}Q(s^{\prime},a^{\prime})^{\top}\lambda)\right], \tag{12}\] where \(Q(s,a)\) in this case is a row vector of size \(N+1\) and \(s^{\prime}\sim P(\cdot|s,a)\). The following theorem guarantees that by using value iteration we can indeed find the same optimal policy. **Theorem 1**.: _There exists \(Q^{\star}\in K\) satisfying the fixed point_ \[Q^{\star}(s,a)=R(s,a)+\gamma\mathbb{E}_{s^{\prime}}\left[Q^{\star}(s^{\prime},\arg\max_{a^{\prime}}Q^{\star}(s^{\prime},a^{\prime})^{\top}\lambda)\right].\] _Then, value iteration using \(\mathcal{T}_{\lambda}\) converges to \(Q^{\star}\), i.e., \(\lim_{k\to\infty}Q_{k}=Q^{\star}\), where \(Q_{k}=\mathcal{T}_{\lambda}^{k-1}Q_{0}\) for some \(Q_{0}\in K\). Moreover, \(Q^{\star}=\begin{bmatrix}Q_{1}^{\star}&\ldots&Q_{N+1}^{\star}\end{bmatrix}\) satisfies \(Q_{\lambda}^{\star}=\sum_{i}\lambda_{i}Q_{i}^{\star}\)._ We refer the reader to the appendix for all the proofs. _Q-learning extension._ By extension, the previous result also applies to standard \(Q\)-learning, as shown next. **Corollary 1**.: _Consider a policy that visits all state-action pairs infinitely often, and assume we learn the \(Q\)-values according to the following update at time \(t\) for each \(i\)_ \[Q_{t+1,i}(s_{t},a_{t})=(1-\alpha_{t})Q_{t,i}(s_{t},a_{t})+\alpha_{t}(r_{i}(s_{t},a_{t})+\gamma Q_{t,i}(s_{t+1},a_{t}^{\prime})),\] _for some learning rate \(\alpha_{t}\) satisfying the Robbins-Monro conditions [24] and \(a_{t}^{\prime}=\arg\max_{a^{\prime}}\sum_{i}\lambda_{i}Q_{t,i}(s_{t+1},a^{\prime})\). Let \(Q_{t}=\begin{bmatrix}Q_{t,0}&\ldots&Q_{t,N}\end{bmatrix}^{\top}\). Then, w.p.1, \(\lambda^{\top}Q_{t}\to Q_{\lambda}^{\star}\)._ ### _Algorithms_ In this section we outline how to adapt DQN and SAC to solve the optimization problem in (9). _DQN Extension._ Based on the argument of Corollary 1, we propose a straightforward adaptation for DQN. 
We initialize \((N+1)\)\(Q\)-networks parameterized by \(\theta_{i},i=0,\ldots,N\) (with the corresponding target networks \(\bar{\theta}_{i}\)). Then, the \(i\)-th parameter is updated according to \(\theta_{i}\leftarrow\theta_{i}+\alpha_{t}\nabla_{\theta_{i}}\mathcal{L}_{\theta_{i}}\), where \[\mathcal{L}_{\theta_{i}}=\mathbb{E}_{(s,a,r,s^{\prime})\sim\mathcal{B}}\left[(r_{i}+\gamma Q_{\bar{\theta}_{i}}(s^{\prime},\pi_{\theta}(s^{\prime}))-Q_{\theta_{i}}(s,a))^{2}\right],\] with \(\mathcal{B}\) being the replay buffer. Then, we define the greedy policy according to \(\pi_{\theta}(s)=\arg\max_{a}\sum_{i}\lambda_{i}Q_{\theta_{i}}(s,a)\), and the overall value as \(\sum_{i}\lambda_{i}Q_{\theta_{i}}\). _SAC Extension._ Also for SAC we propose a simple modification. The policy evaluation step follows directly from the DQN extension, while for the policy improvement step we minimize the following KL divergence: \[\pi_{\phi}\leftarrow\arg\min_{\phi}\mathbb{E}_{s\sim\mathcal{B}}\left[\mathrm{KL}\left(\pi_{\theta}(\cdot|s),\frac{\exp(\sum_{i}\lambda_{i}Q_{\theta_{i}}(s,\cdot))}{Z_{\theta}}\right)\right],\] for some policy parametrized by \(\phi\). _Optimization of the dual variables._ As previously argued, we use a descent-ascent approach. This method entails solving an inner RL optimization problem, and then updating the Lagrangian variables accordingly. Therefore, once we have learnt the value functions, we can perform a gradient update on our dual variables. Then our Lagrange update will be similar to [25]: \[\lambda_{t+1}=\max(0,\lambda_{t}-\beta_{t}(V_{\theta}+\xi)), \tag{13}\] for some learning rate \(\beta_{t}\) and vector \(V_{\theta}=\begin{bmatrix}V_{\theta_{0}}&\ldots&V_{\theta_{N}}\end{bmatrix}^{\top}\) with \(V_{\theta_{i}}(s)=Q_{\theta_{i}}(s,\pi_{\theta}(s))\) (note that the first component of \(\lambda\) is set equal to \(1\) and is not changed). In practice, the descent-ascent approach can be done simultaneously by making sure that the learning rate \(\beta_{t}\) is sufficiently small compared to the learning rate of the inner maximization problem. ## V Numerical Evaluation System simulations are performed via the in-house built Base-bAnd System Simulator (BASS), based on ns-3. BASS is used to simulate the transmission of DL data and precoding weights over the FH and to measure latency and packet loss. We use \(K=3\) cells that share a FH with a capacity of \(25\) Gbps. The system parameters and values used for the simulations are shown in Table I. For DQN we assumed _homogeneous load_, to simplify the action space, while for SAC we used an auto-regressive policy (see the appendix for details) to address the exponential size of the action-space. Fig. 2(a) shows the average FH utilization over the average number of PRBs for SAC, DQN, and the reference scheme designed to support the worst-case scenario, i.e., 273 PRBs. The FH utilization increases with an increasing number of scheduled PRBs until it reaches the maximum number of PRBs. DQN and SAC improve average FH utilization by 70.3% on average compared to the reference scheme. The average FH utilization of DQN and SAC converges to that of the reference scheme when approaching high load. SAC [18] performs just as well as DQN. Fig. 2(b) shows that the average latency stays within the limit. In this experiment our safety parameter \(\xi\) was set to \(0.025\); three standard deviations correspond to a violation probability of roughly \(0.15\%\), well within our designated safety parameter. In the Appendix, in Figs. 3 and 4, we depict how the constraint values \(V_{C_{1}}\) and \(V_{C_{2}}\) converge, showing that all the value functions eventually learn to be above the designated constraints. Furthermore, it can be seen that the Lagrange multipliers converge as well. 
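To make the interplay between the per-reward \(Q\)-updates of Corollary 1 and the dual update (13) concrete, the following tabular sketch runs both simultaneously. It is illustrative only: `env_step` is a hypothetical stand-in for a real simulator such as BASS, the value estimate used in the dual step is deliberately crude, and all sizes are ours.

```python
import numpy as np

nS, nA, N = 20, 9, 2           # toy sizes; N = number of constraints
gamma, alpha, beta = 0.95, 0.1, 1e-3
Q = np.zeros((N + 1, nS, nA))  # Q_0 for the reward, Q_1..Q_N for constraints
lam = np.ones(N + 1)           # lambda_0 is fixed to 1 and never updated
xi = np.array([0.0, 0.025 - 1.0, 0.025 - 1.0])  # [0, xi_1 - 1, ..., xi_N - 1]

def env_step(state, action):   # hypothetical environment; a real simulator goes here
    next_state = np.random.randint(nS)
    rewards = np.random.random(N + 1)  # stacked [r, (1-gamma)*1{s in S_1}, ...]
    return next_state, rewards

s = 0
for t in range(100_000):
    greedy = int(np.argmax(lam @ Q[:, s, :]))        # argmax_a lambda^T Q(s, a)
    a = np.random.randint(nA) if np.random.rand() < 0.1 else greedy
    s2, R = env_step(s, a)
    a2 = int(np.argmax(lam @ Q[:, s2, :]))
    Q[:, s, a] += alpha * (R + gamma * Q[:, s2, a2] - Q[:, s, a])  # Corollary 1
    # Crude per-component value estimate plugged into the dual step of eq. (13):
    lam[1:] = np.maximum(0.0, lam[1:] - beta * (Q[:, s2, a2] + xi)[1:])
    s = s2
```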
## VI Conclusions In this paper, we have investigated an adaptive FH compression scheme which can operate under latency and packet loss constraints. We have formulated this problem as a constrained optimization problem and designed an RL algorithm to solve it. Finding the exact solution is NP-hard and requires accurate modelling, which is hard to obtain in real scenarios. Therefore, we proposed a Deep-RL approach to solve the constrained optimization problem, entailing a novel update for the \(Q\)-values of the policy. Simulation results have shown that our method successfully learns a FH compression policy that maximizes FH utilization, while satisfying FH latency and packet loss constraints. On average, the FH utilization is improved by 70.2% for both the simplistic DQN and the more advanced SAC approach. \begin{table} \begin{tabular}{|c|c|c|} \hline **Parameter** & **Symbol** & **Value** \\ \hline Bandwidth & \(B\) & 100 MHz \\ \hline Number of available PRBs & \(N_{\text{B},\mu}^{\text{PRB}}\) & 273 \\ \hline Number of scheduled PRBs & \(N_{t,k}^{\text{PRB}}\) & 1...273 \\ \hline Variance of scheduled PRBs & \(\sigma_{N^{\text{PRB}}}\) & 1 \\ \hline Number of REs per PRB & \(N_{\text{RE}}\) & \(12\times 14=168\) \\ \hline Subcarrier spacing index & \(\mu\) & 1 (\(\Delta f_{\text{SCS}}\) = 30 kHz) \\ \hline Symbol duration & \(T_{symb}^{\mu}\) & 33.33 \(\mu\)s \\ \hline Slot duration & \(T_{slot}^{\mu}\) & 0.5 ms \\ \hline Number of cells & \(K\) & 3 \\ \hline Number of antennas & \(N_{ant}\) & 64 \\ \hline Number of layers & \(v_{lay}\) & 12 \\ \hline Modulation order & \(q_{t,k}\) & \(\mathcal{Q}=\{6,8\}\) \\ \hline Number of weight bits & \(b_{t,k}^{w}\) & \(\mathcal{B}^{w}=\{16,\ldots,22\}\) \\ \hline Precoder granularity & \(r_{t,k}^{w}\) & \(\mathcal{R}^{w}=\{1,2,4\}\) \\ \hline Max. latency safety parameter & \(\xi\) & \(0.025\) \\ \hline Max. allowed latency & \(L_{max}\) & 260 \(\mu\)s \\ \hline FH capacity & \(C_{\text{FH}}\) & 25 Gb/s \\ \hline \end{tabular} \end{table} TABLE I: Simulation parameters.
2310.00268
Unravel Anomalies: An End-to-end Seasonal-Trend Decomposition Approach for Time Series Anomaly Detection
Traditional Time-series Anomaly Detection (TAD) methods often struggle with the composite nature of complex time-series data and a diverse array of anomalies. We introduce TADNet, an end-to-end TAD model that leverages Seasonal-Trend Decomposition to link various types of anomalies to specific decomposition components, thereby simplifying the analysis of complex time-series and enhancing detection performance. Our training methodology, which includes pre-training on a synthetic dataset followed by fine-tuning, strikes a balance between effective decomposition and precise anomaly detection. Experimental validation on real-world datasets confirms TADNet's state-of-the-art performance across a diverse range of anomalies.
Zhenwei Zhang, Ruiqi Wang, Ran Ding, Yuantao Gu
2023-09-30T06:08:37Z
http://arxiv.org/abs/2310.00268v2
# Unravel Anomalies: An End-to-End Seasonal-Trend Decomposition Approach for Time Series Anomaly Detection ###### Abstract Traditional Time-series Anomaly Detection (TAD) methods often struggle with the composite nature of complex time-series data and a diverse array of anomalies. We introduce TADNet, an end-to-end TAD model that leverages Seasonal-Trend Decomposition to link various types of anomalies to specific decomposition components, thereby simplifying the analysis of complex time-series and enhancing detection performance. Our training methodology, which includes pre-training on a synthetic dataset followed by fine-tuning, strikes a balance between effective decomposition and precise anomaly detection. Experimental validation on real-world datasets confirms TADNet's state-of-the-art performance across a diverse range of anomalies. Zhenwei Zhang\({}^{1}\), Ruiqi Wang\({}^{1}\), Ran Ding\({}^{2}\), Yuantao Gu\({}^{1}\) \({}^{1}\)Department of Electronic Engineering, Tsinghua University, Beijing, China \({}^{2}\)Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China time-series anomaly detection, seasonal-trend decomposition, time-series analysis, end-to-end ## 1 Introduction In the realm of signal processing, time-series analysis has emerged as a pivotal area of focus across diverse application domains [1]. The ability to understand underlying temporal patterns and identify anomalies in these signals is paramount. Time-series Anomaly Detection (TAD) takes on the intricate task of pinpointing such deviations from expected behavior in time-series. Despite extensive efforts to accurately label data, real-world datasets are still prone to errors [2, 3]. Recognizing these challenges, our work aligns with a growing trend toward semi-supervised anomaly detection for time-series data [4, 5], which operates on the assumption of a purely normal training dataset. **Challenges.** TAD presents two primary challenges. First, real-world data, with its diverse origins, displays a composition of complex patterns. This requires models to effectively learn representations from such intricate temporal dynamics without labeled data. Second, temporal anomalies span various categories, including point-wise (global and contextual) and pattern-wise (shapelet, seasonal, and trend) anomalies [3, 4]. Specifically, pattern-wise anomalies can arise from varied causes, such as unusual shapes, reduced seasonality, or a declining trend. Such diversity introduces ambiguity, making accurate anomaly detection more challenging for models. **Existing solutions.** In recent years, deep learning models [6, 4] have surpassed classical techniques [7] in TAD tasks. These models generally fall into two categories: autoregression-based and reconstruction-based approaches. Autoregression-based methods identify anomalies through prediction errors [8], while reconstruction-based methods, including Autoencoder-based techniques [9, 10] and generative adversarial networks [11], use reconstruction errors. 
Despite their overall accuracy, many of these models fail to account for the complex compositional nature of patterns in time-series data or to distinguish between different types of anomalies. Consequently, these approaches often overlook nuanced deviations and lack interpretability. **New insights.** Time series are inherently composed of multiple overlapping patterns: seasonality, trend, and remainder. This overlapping nature can obscure different types of anomalies. The advantage of using Seasonal-Trend Decomposition (STD) [12] is vividly demonstrated in Fig.1, which shows how anomalies can be effectively unraveled by separating them into their respective components. By leveraging the power of STD, our approach can uniquely break down these complex composite patterns. Furthermore, following the taxonomy proposed in [3], we find that different types of anomalies can be systematically associated with their respective components: seasonal anomalies with the seasonal component, trend anomalies with the trend component, and point anomalies with the remainder component. Although existing research has incorporated time-series decomposition into the TAD task [13, 14, 15], these approaches do not follow an end-to-end training manner. Specifically, they either depend on pre-defined decomposition algorithms, necessitating elaborate parameter tuning [14, 15], or employ decomposition only for data preprocessing [13]. To overcome the lack of supervised signals for end-to-end training, we introduce a novel two-step training approach. First, we generate a synthetic dataset that mimics the decomposed components of real-world data and pre-train our model on it for the decomposition task. The model is subsequently fine-tuned on real-world anomalous data, yielding enhanced time-series decomposition and anomaly detection. Figure 1: Schematic of the STD and TAD workflow, which begins by applying STD to the anomalous time-series, yielding its decomposed components. Subsequently, the original series is compared with the reconstructed series to compute the reconstruction error. Finally, the error is transformed into anomaly scores and labels. **Contributions.** Our contributions are threefold. First, we present TADNet, a novel end-to-end TAD model that leverages STD, as illustrated in Fig.2. Inspired by TasNet [16], TADNet is designed to handle time-series data similarly to audio signals, and offers detailed decomposition components, enhancing both interpretability and accuracy. Second, we adopt a targeted training strategy with pre-training and fine-tuning to enable end-to-end decomposition and detection. Third, our model achieves state-of-the-art performance and validates its efficacy through decomposition visualizations. 
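For intuition on what an STD yields, the following toy sketch applies a classical STL decomposition (via `statsmodels`, which we assume available) to a series with an injected anomaly; TADNet replaces this hand-tuned classical step with the learned, end-to-end decomposition described below.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

# Toy mixture: linear trend + sinusoidal seasonality + noise + anomaly.
t = np.arange(1000)
x = 0.002 * t + np.sin(2 * np.pi * t / 50) + 0.1 * np.random.randn(1000)
x[600:610] += 2.0  # injected anomalous segment

res = STL(x, period=50).fit()
trend, seasonal, remainder = res.trend, res.seasonal, res.resid
# The injected anomaly stands out far more clearly in `remainder`
# than in the raw mixture x.
```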
## 2 Preliminaries **Time-series Anomaly Detection.** Consider a time-series \(\mathcal{T}\in\mathbb{R}^{T\times D}\) of length \(T\). The series is designated as univariate when \(D=1\) and multivariate when \(D>1\). The primary objective of the TAD task is to identify anomalies within \(\mathcal{T}\), consequently generating an output series \(\mathcal{Y}\). Each element in \(\mathcal{Y}\) corresponds to the anomalous status of the respective data point in \(\mathcal{T}\), with \(1\) indicating an anomaly. To facilitate this, point-wise scoring methods [6, 17] are employed to produce an anomaly score series \(\mathcal{S}=\{s_{1},s_{2},...,s_{m}\}\), each \(s_{i}\in\mathbb{R}\). These scores are then converted into binary anomaly labels \(\mathcal{Y}\) through an independent thresholding process. **Seasonal-Trend Decomposition.** For a univariate time-series \(x\in\mathbb{R}^{T}\), its structural composition capturing trend and seasonality is represented as \(x_{t}=\tau_{t}+s_{t}+r_{t}\), where \(\tau_{t}\), \(s_{t}\), and \(r_{t}\) denote the trend, seasonal, and remainder components at the \(t\)-th timestamp, respectively. The primary focus of our method for the TAD task lies in the seasonal-trend decomposition of univariate time-series. The current literature underscores the benefits of evaluating each variable individually for increased predictive accuracy [18, 19]. Thus, each variable in a multivariate series undergoes independent decomposition, while the overall anomaly detection strategy accounts for its multivariate nature. **Time-domain Audio Separation.** The task is formulated in terms of estimating \(C\) sources \(s_{t}^{(1)},\ldots,s_{t}^{(C)}\in\mathbb{R}^{T}\), given the discrete waveform of the mixture \(x_{t}\in\mathbb{R}^{T}\). Mathematically, this is expressed as \(x_{t}=\sum_{i=1}^{C}s_{t}^{(i)}\). The field of monaural audio source separation has seen advancements through various deep learning models. TasNet [16] introduced the concept of end-to-end learning in this area. Conv-TasNet [20] further developed this approach by incorporating convolutional layers. DPRNN [21] focused on improving long-term modeling through recurrent neural networks. More recently, architectures like SepFormer [22] have integrated attention mechanisms. The theoretical framework of time-domain audio separation exhibits striking resemblances to the seasonal-trend decomposition task, suggesting that methodologies can be transferred between these domains for improved time-series decomposition and anomaly detection. ## 3 Methodology ### Overall Framework The overall flowchart of TADNet is shown in Fig.2. The preprocessing consists of data normalization and segmentation. The input is first normalized to the range \([0,1)\). Segmentation involves a sliding window approach of length \(P\), converting the normalized \(\mathcal{T}\) into non-overlapping blocks of length \(P\) denoted as \(\mathcal{D}=\{\mathcal{X}_{1},\mathcal{X}_{2},\ldots,\mathcal{X}_{N}\}\). Notably, while segmentation offers a more flexible approach to managing longer sequences, it does not influence results. Within the TADNet backbone, we leverage the TasNet architecture and its variants [16, 20, 21] from speech separation. Viewing the seasonal and trend components as distinct audio signals, TasNet facilitates effective STD (see Fig.2). Since the training utilizes only normal samples, anomalies typically disrupt the reconstruction process. To detect these anomalies, we compute the reconstruction error, denoted by \(Score(t)=\|\mathcal{T}_{t,:}-\hat{\mathcal{T}}_{t,:}\|_{2}\), where \(\|\cdot\|_{2}\) represents the L2 norm. ### TADNet Backbone The encoder accepts a univariate time series \(x_{d}\in\mathbb{R}^{P}\), where \(d=1,2,\ldots,D\), sourced from multivariate \(\mathcal{X}_{t}\). It segments this series into multiple overlapping frames. Each frame has a length of \(L\) and overlaps with adjacent frames by a stride of \(S\). Sequentially, these frames are collated to constitute \(\mathbf{X}_{d}\in\mathbb{R}^{L\times K}\). Through a subsequent linear transformation, the encoder projects \(\mathbf{X}_{d}\) into a latent space, given by: \(\mathbf{E}=\mathbf{U}\mathbf{X}_{d}\). 
The matrix \(\mathbf{U}\in\mathbb{R}^{N\times L}\) contains trainable transformation bases in its rows, while \(\mathbf{E}\in\mathbb{R}^{N\times K}\) represents the feature representation of the input time series in the latent space. The separator receives the encoded representation and is tasked with generating masks for each of the decomposed components. Formally, \(\{\mathbf{M}_{\tau},\mathbf{M}_{s},\mathbf{M}_{r}\}=\mathcal{F}_{\text{sep}}(\mathbf{E};\theta)\), where \(\mathbf{M}_{\tau}\), \(\mathbf{M}_{s}\), and \(\mathbf{M}_{r}\) represent the masks for the trend, seasonal, and remainder components, respectively. Here, \(\mathcal{F}_{\text{sep}}\) denotes the separator subnetwork, which can be implemented using various architectures such as CNN [20], RNN [21], or Transformer [22]. Utilizing these masks, the embeddings for each target are obtained from the global feature \(\mathbf{E}\) as: \[\mathbf{E}_{\tau}=\mathbf{M}_{\tau}\odot\mathbf{E},\quad\mathbf{E}_{s}=\mathbf{M}_{s}\odot\mathbf{E},\quad\mathbf{E}_{r}=\mathbf{M}_{r}\odot\mathbf{E}, \tag{1}\] achieved via point-wise products with the corresponding masks. The decoder architecture mirrors the encoder, taking in the masked embeddings generated by the separator. These embeddings are mapped back to the time domain via a linear transformation \(\mathbf{V}\): \[\hat{\mathbf{S}}_{\tau}=\mathbf{E}_{\tau}^{T}\mathbf{V},\quad\hat{\mathbf{S}}_{s}=\mathbf{E}_{s}^{T}\mathbf{V},\quad\hat{\mathbf{S}}_{r}=\mathbf{E}_{r}^{T}\mathbf{V}. \tag{2}\] Here, \(\mathbf{V}\in\mathbb{R}^{N\times L}\) has \(N\) decoder bases. The reconstructed trend, seasonal, and remainder, denoted as \(\hat{\mathbf{S}}_{\tau}\), \(\hat{\mathbf{S}}_{s}\), and \(\hat{\mathbf{S}}_{r}\), are derived from their respective embeddings. The output time-domain signals, \(\hat{\tau}_{d}\), \(\hat{s}_{d}\), and \(\hat{r}_{d}\), are obtained through an overlap-and-add operation. Figure 2: Overview of TADNet. The workflow initiates with data augmentation, wherein trend, seasonal, anomaly, and remainder components are integrated to formulate the synthetic dataset. The training of TADNet unfolds in two phases: first, it masters STD on the synthetic data guided by \(L_{\text{dec}}\), followed by fine-tuning on real-world data, leveraging \(L_{\text{rec}}\) to capture typical patterns. The backbone of TADNet consists of an Encoder-Decoder pair, facilitating mapping between the time-domain and latent space, and a separator tasked with generating masks tailored for STD targets. The parameters are shared between the three decoders. 
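To fix the dimensions in eqs. (1)-(2), here is a shape-level sketch with random placeholders standing in for all learned quantities (sizes are illustrative only):

```python
import numpy as np

N, K, L = 256, 40, 16                  # latent dim, number of frames, frame length
E = np.random.randn(N, K)              # encoder output (stand-in for U @ X_d)
M_tau, M_s, M_r = (np.random.rand(N, K) for _ in range(3))  # separator masks

E_tau, E_s, E_r = M_tau * E, M_s * E, M_r * E               # eq. (1), point-wise
V = np.random.randn(N, L)              # shared decoder bases
S_tau, S_s, S_r = E_tau.T @ V, E_s.T @ V, E_r.T @ V         # eq. (2): K x L frames
# Overlap-and-add across the K frames then yields the time-domain
# trend, seasonal, and remainder estimates.
```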
### Synthetic Dataset In anomaly detection, real-world data rarely come with the ground-truth trend and seasonal components needed to supervise STD. To equip TADNet with robust STD capabilities, we constructed a composite dataset. This dataset is meticulously designed with intricate seasonal and trend shifts, anomalies, and noise to emulate real-world contexts, as illustrated in Fig.1. Both deterministic and stochastic processes are leveraged to craft the trend and seasonal components, which are subsequently normalized to maintain a zero mean and unit variance. **Trend.** The deterministic trend is generated using a linear trend function with fixed coefficients: \(\tau_{t}^{(d)}=\beta_{0}+\beta_{1}\cdot t\), where \(\beta_{0}\) and \(\beta_{1}\) are tunable parameters. The stochastic trend component is modeled using an ARIMA(0,2,0) process, integrated into the trend model as follows: \(\tau_{t}^{(s)}=\sum_{n=1}^{t}nX_{n}\), where \(X_{t}\) is a normally-distributed white noise term, satisfying \(\Delta^{2}\tau_{t}^{(s)}=X_{t}\). **Seasonal.** The deterministic seasonal component combines various types of periodic signals. It includes sinusoidal waves with varying amplitudes, frequencies, and phases, as well as square waves with different amplitudes, periods, and phases. For the slow-changing stochastic sequence, the seasonal component is composed of repeating cycles of a slow-changing trend series \(\tau_{t}^{(s)}\). This series is generated using the trend generation algorithm to ensure a smooth transition between cycles. Each cycle is uniquely characterized by a period \(T_{0}\) and a phase \(\phi\). The stochastic seasonal component is thus formulated as \(s_{t}=\tau_{\text{mod}(t+\phi,T_{0})}^{(s)}\). To enrich the dataset, minor adjustments are made to both the cycle's length and amplitude, including resampling individual cycles and scaling values within a cycle, aiming for more diverse and generalizable signal decomposition. **Remainder.** The remainder component is generated using a white noise process with adjustable variances. To enhance the robustness of the decomposition model against anomalies and ensure stable decomposition performance, we injected a portion of anomalous data into the synthetic dataset, following the method outlined in [3]. ### Two-Phase Training Strategy We propose a two-phase training strategy for TADNet to ensure its efficacy in both time-series decomposition and anomaly detection tasks. In the first phase, TADNet is pre-trained on a synthetic dataset, with a focus on time-series decomposition. The corresponding loss function, which aggregates the Mean Squared Errors of the decomposed components, is formulated as: \[L_{\text{dec}}=\sum\nolimits_{d=1}^{D}\left(\|\tau_{d}-\hat{\tau}_{d}\|_{2}^{2}+\|s_{d}-\hat{s}_{d}\|_{2}^{2}+\|r_{d}-\hat{r}_{d}\|_{2}^{2}\right) \tag{3}\] Here, \(\tau_{d}\), \(s_{d}\), and \(r_{d}\) denote the actual trend, seasonal, and remainder components for the \(d\)-th dimension, respectively, while \(\hat{\tau}_{d}\), \(\hat{s}_{d}\), and \(\hat{r}_{d}\) represent their predicted counterparts. In the second phase, TADNet is fine-tuned using a real-world TAD dataset. This stage emphasizes the accurate reconstruction of the original time series following its decomposition, a key requirement for effective anomaly detection. The loss function for this stage, which focuses on overall reconstruction accuracy, is given by: \[L_{\text{rec}}=\sum\nolimits_{d=1}^{D}\|x_{d}-(\hat{\tau}_{d}+\hat{s}_{d})\|_{2}^{2} \tag{4}\] Here, \(x_{d}\) represents the original time series in the \(d\)-th dimension, and \(\hat{\tau}_{d}+\hat{s}_{d}\) is its predicted reconstruction. 
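A simplified generator in the spirit of Sec. 3.3 can be sketched as follows; per-cycle resampling, square waves, and anomaly injection are omitted, and the seasonal phase is set to zero for brevity, so this is a sketch rather than the actual generator.

```python
import numpy as np

rng = np.random.default_rng(0)
T, T0 = 2000, 100

trend_det = 0.5 + 0.003 * np.arange(T)   # tau^(d)_t = beta0 + beta1 * t
X = rng.normal(size=T)
trend_sto = np.cumsum(np.cumsum(X))      # ARIMA(0,2,0): second difference equals X

cycle = np.cumsum(rng.normal(size=T0))   # slow-changing cycle template tau^(s)
seasonal = cycle[np.arange(T) % T0]      # s_t = tau^(s)_{mod(t + phi, T0)}, phi = 0
remainder = 0.1 * rng.normal(size=T)

def z(v):                                # normalize to zero mean, unit variance
    return (v - v.mean()) / v.std()

x = z(trend_det + trend_sto) + z(seasonal) + remainder
```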
## 4 Experiments In our experiments, we employ five real-world datasets encompassing both univariate and multivariate time-series, as outlined in Table 2. These include UCR [23], a collection of univariate series featured in the KDD 2021 Cup; SMD [9], which provides five weeks of data from a leading Internet company; and SWaT [24], offering sensor data from a water treatment plant. \begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c|c c c} \hline Dataset & \multicolumn{3}{c|}{**UCR (\(u\))**} & \multicolumn{3}{c|}{**SMD (\(m\))**} & \multicolumn{3}{c|}{**SWaT (\(m\))**} & \multicolumn{3}{c|}{**PSM (\(m\))**} & \multicolumn{3}{c}{**WADI (\(m\))**} \\ Metric & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline OCSVM & 41.14 & 94.00 & 57.23 & 44.34 & 76.72 & 56.19 & 45.39 & 49.22 & 47.23 & 62.75 & 80.89 & 70.67 & 61.89 & 62.31 & 62.10 \\ BeatGAN & 45.20 & 88.42 & 59.82 & 72.90 & 84.09 & 78.10 & 64.01 & 87.46 & 73.92 & 90.30 & 93.84 & 92.04 & 65.13 & 38.32 & 48.25 \\ OmniAnomaly & 64.21 & 86.93 & 73.86 & 83.34 & 94.49 & 88.57 & 86.33 & 76.94 & 81.36 & 91.61 & 71.36 & 80.23 & 31.58 & 65.41 & 42.60 \\ InterFusion & 60.74 & 95.20 & 74.16 & 87.02 & 85.43 & 86.22 & 80.59 & 85.58 & 83.01 & 83.61 & 83.45 & 83.52 & 80.26 & 30.38 & 44.08 \\ AnomalyTran & 72.80 & 99.60 & 84.12 & 89.40 & 95.45 & 92.33 & 91.55 & 96.73 & **94.07** & 96.91 & 98.90 & 97.89 & 80.30 & 79.23 & 79.76 \\ TranAD & 94.07 & 100.00 & 96.94 & 88.03 & 89.42 & 88.72 & 97.60 & 69.97 & 81.51 & 96.44 & 87.37 & 91.68 & 35.29 & 82.96 & 49.51 \\ DecompTran & 71.58 & 96.83 & 82.31 & 89.32 & 93.94 & 91.57 & 95.17 & 80.30 & 87.10 & 97.65 & 87.21 & 92.14 & 79.40 & 81.01 & 80.20 \\ \hline **TADNet (Ours)** & 97.51 & 100.00 & **98.74** & 94.81 & 91.93 & **93.35** & 92.15 & 88.35 & 90.21 & 98.12 & 99.21 & **98.66** & 94.03 & 82.96 & **88.15** \\ \hline \end{tabular} \end{table} Table 1: Quantitative results for TADNet across five real-world datasets, using metrics \(P\), \(R\), and \(F1\) for precision, recall, and F1-score (%). Higher values indicate better performance. Best and second-best results are in bold and underlined, respectively. Dataset names are followed by brackets, where \(u\) indicates univariate and \(m\) multivariate. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Dataset & \#Entities & \#Dim & \#Train & \#Test (labeled) & Anomaly\% \\ \hline UCR & 4 & 1 & 1,200-3,000 & 4,500-6,301 & 1.9 \\ SMD & 28 & 38 & 23.6K-28.7K & 23.6K-28.7K & 4.2 \\ WADI & 1 & 127 & 789,371 & 172,801 & 5.9 \\ SWaT & 1 & 51 & 496,800 & 449,919 & 12.1 \\ PSM & 1 & 25 & 132,481 & 87,841 & 27.8 \\ \hline \end{tabular} \end{table} Table 2: Details of the datasets. 
The backbone utilizes TasNet with DPRNN [21], featuring kernel size \(W=2\), encoding dimension \(E=256\), feature dimension \(F=64\), hidden dimension \(H=128\), and six layers. Training employs ADAM with an initial learning rate of \(1\times 10^{-3}\) for 200 epochs, followed by fine-tuning at \(5\times 10^{-4}\) for 10-20 epochs. Experiments run on an NVIDIA GeForce RTX 3090. To ensure fair comparisons with previous work, the Peak Over Threshold method [26] is employed to determine the anomaly threshold. A timestamp is labeled as anomalous if its score exceeds this threshold. ### Results and Analysis The results of our quantitative evaluation are presented in Table 1. We rigorously assess TADNet against seven competitive baselines across five real-world datasets. TADNet manifests superior performance across all metrics, notably achieving the highest F1-score in four out of the five datasets. This consistently elevated performance attests to the efficacy of our approach, which leverages seasonal trend decomposition to disentangle intricate and overlapping temporal patterns. Such decomposition unravels a broader spectrum of anomalies, thereby elevating the detection capabilities of our model. Importantly, the results provide empirical support for the benefit of incorporating decomposition techniques into time-series anomaly detection tasks. **Visualization.** To demonstrate the effectiveness of STD in unraveling complex anomalies, we showcase decomposition and reconstruction error results on multiple real-world datasets in Fig.1 (NeurIPS-TS) and Fig.3 (UCR and SMD). It's noteworthy that anomalies become more clearly evident when comparing their respective decomposed components to the original series. As the reconstruction error and anomaly scores are positively correlated, our reconstruction error closely aligns with the anomalous regions, as seen in the last row. This validates that our method adeptly identifies anomalies, thereby improving detection accuracy while reducing false positives. **Ablation Study.** As shown in Table 3, we further investigate the effect of each part in TADNet. The removal of key components such as the Separator (_w/o Sep_), Decomposition (_w/o Decomp_), or Augmentation (_w/o Augment_) leads to substantial drops in F1-scores, highlighting their essential roles in TADNet's performance. While the iterative training approach (_Iterative_) shows some improvement in specific datasets (notably WADI, increased to 92.06%), its computational overhead makes it less practical for broader applications. We therefore opt for the pretrain-finetune paradigm and leave the study of iterative training for future work. ## 5 Conclusion In summary, we introduced TADNet, an end-to-end TAD model that employs STD to tackle the challenges of complex patterns and diverse anomalies. Our model distinguishes itself by providing interpretable and accurate decomposition components. Through a two-phase training strategy involving pre-training on synthetic data and fine-tuning on real-world data, TADNet not only achieves state-of-the-art performance but also validates its effectiveness via decomposition visualizations. This work represents a substantial advance in time-series anomaly detection, effectively bridging the gap between accuracy and interpretability. 
### Results and Analysis The results of our quantitative evaluation are presented in Table 1. We rigorously assess TADNet against seven competitive baselines across five real-world datasets. TADNet demonstrates superior performance across all metrics, notably achieving the highest F1-score in four out of the five datasets. This consistently elevated performance attests to the efficacy of our approach, which leverages seasonal-trend decomposition to disentangle intricate and overlapping temporal patterns. Such decomposition unravels a broader spectrum of anomalies, thereby elevating the detection capabilities of our model. Importantly, the results provide empirical support for the benefit of incorporating decomposition techniques into time-series anomaly detection tasks. **Visualization.** To demonstrate the effectiveness of STD in unraveling complex anomalies, we showcase decomposition and reconstruction error results on multiple real-world datasets in Fig.1 (NeurIPS-TS) and Fig.3 (UCR and SMD). Notably, anomalies become more clearly evident when comparing their respective decomposed components to the original series. As the reconstruction error and anomaly scores are positively correlated, our reconstruction error closely aligns with the anomalous regions, as seen in the last row. This validates that our method adeptly identifies anomalies, thereby improving detection accuracy while reducing false positives. **Ablation Study.** As shown in Table 3, we further investigate the effect of each part of TADNet. The removal of key components such as the Separator (_w/o Sep_), Decomposition (_w/o Decomp_), or Augmentation (_w/o Augment_) leads to substantial drops in F1-scores, highlighting their essential roles in TADNet's performance. While the iterative training approach (_Iterative_) shows some improvement on specific datasets (notably WADI, where the F1-score increases to 92.06%), its computational overhead makes it less practical for broader applications. We therefore opt for the pretrain-finetune paradigm and leave the study of iterative training for future work. ## 5 Conclusion In summary, we introduced TADNet, an end-to-end TAD model that employs STD to tackle the challenges of complex patterns and diverse anomalies. Our model distinguishes itself by providing interpretable and accurate decomposition components. Through a two-phase training strategy involving pre-training on synthetic data and fine-tuning on real-world data, TADNet not only achieves state-of-the-art performance but also validates its effectiveness via decomposition visualizations. This work represents a substantial advance in time-series anomaly detection, effectively bridging the gap between accuracy and interpretability. \begin{table} \begin{tabular}{c|c c c c c} \hline \hline Ablation & UCR & SMD & SWaT & PSM & WADI \\ \hline TADNet & 98.74 & **93.35** & **90.21** & **98.66** & 88.15 \\ \hline _w/o Sep_ & 32.68 & 66.24 & 76.89 & 83.28 & 47.66 \\ _w/o Decomp_ & 48.69 & 84.12 & 88.41 & 95.57 & 65.72 \\ _w/o Augment_ & 40.12 & 74.17 & 83.26 & 98.01 & 62.15 \\ _Iterative_ & **99.12** & 92.14 & 86.55 & 96.58 & **92.06** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation results, evaluated using F1-score (%). '_w/o Sep_' excludes the separator from the backbone architecture; '_w/o Decomp_' replaces \(L_{\text{dec}}\) with \(L_{\text{rec}}\); '_w/o Augment_' omits pretraining on the synthetic dataset; and '_Iterative_' involves iterative training between the decomposition and anomaly detection tasks. Figure 3: Visualization of decomposition and detection results on UCR and SMD. The first row shows the raw time series with anomalies, the second and third rows display the seasonal and trend components, respectively, and the final row depicts the reconstruction error. Anomalies are marked with a red background.
2309.05273
Formalizing Multimedia Recommendation through Multimodal Deep Learning
Recommender systems (RSs) offer personalized navigation experiences on online platforms, but recommendation remains a challenging task, particularly in specific scenarios and domains. Multimodality can help tap into richer information sources and construct more refined user/item profiles for recommendations. However, existing literature lacks a shared and universal schema for modeling and solving the recommendation problem through the lens of multimodality. This work aims to formalize a general multimodal schema for multimedia recommendation. It provides a comprehensive literature review of multimodal approaches for multimedia recommendation from the last eight years, outlines the theoretical foundations of a multimodal pipeline, and demonstrates its rationale by applying it to selected state-of-the-art approaches. The work also conducts a benchmarking analysis of recent algorithms for multimedia recommendation within Elliot, a rigorous framework for evaluating recommender systems. The main aim is to provide guidelines for designing and implementing the next generation of multimodal approaches in multimedia recommendation.
Daniele Malitesta, Giandomenico Cornacchia, Claudio Pomo, Felice Antonio Merra, Tommaso Di Noia, Eugenio Di Sciascio
2023-09-11T07:12:17Z
http://arxiv.org/abs/2309.05273v2
# Formalizing Multimedia Recommendation through Multimodal Deep Learning ###### Abstract Recommender systems (RSs) provide customers with a personalized navigation experience within the vast catalogs of products and services offered on popular online platforms. Despite the substantial success of traditional RSs, recommendation remains a highly challenging task, especially in specific scenarios and domains. For example, human affinity for items described through multimedia content (e.g., images, audio, and text), such as fashion products, movies, and music, is multi-faceted and primarily driven by their diverse characteristics. Therefore, by leveraging all available signals in such scenarios, multimodality enables us to tap into richer information sources and construct more refined user/item profiles for recommendations. Despite the growing number of multimodal techniques proposed for multimedia recommendation, the existing literature lacks a shared and universal schema for modeling and solving the recommendation problem through the lens of multimodality. Given the recent advances in multimodal deep learning for other tasks and scenarios where precise theoretical and applicative procedures exist, we also consider it imperative to formalize a general multimodal schema for multimedia recommendation. In this work, we first provide a comprehensive literature review of multimodal approaches for multimedia recommendation from the last eight years. Second, we outline the theoretical foundations of a multimodal pipeline for multimedia recommendation by identifying and formally organizing recurring solutions/patterns. Third, we demonstrate its rationale by conceptually applying it to selected state-of-the-art approaches in multimedia recommendation. Subsequently, we conduct a benchmarking analysis of recent algorithms for multimedia recommendation within Elliot, a rigorous framework for evaluating recommender systems, where we re-implement such multimedia recommendation approaches. Finally, we highlight the significant unresolved challenges in multimodal deep learning for multimedia recommendation and suggest possible avenues for addressing them. The primary aim of this work is to provide guidelines for designing and implementing the next generation of multimodal approaches in multimedia recommendation. Multimodal Deep Learning, Multimedia Recommender Systems ## 1. Introduction Over the last few decades, companies have increasingly been developing online platforms to reach their customers and offer them a more comprehensive selection of personalized products and services in various domains, from food and fashion to e-commerce and tourism. Recommender systems (RSs) are among the prominent technologies that work behind the scenes of such platforms to unveil the implicit preference patterns within the intricate set of users and items and curate the presentation of a list of products that customers may enjoy. Such technologies are an essential element of all major Internet businesses, driving up to 35% of Amazon's sales (Zhou et al., 2017) and more than 80% of the content watched on Netflix (Navarro et al., 2018). There exist recommendation scenarios, such as multimedia recommendation, where items naturally come with additional side information that may complement the knowledge conveyed by the historical user/item interaction matrix. 
Multimedia recommendation (Malitesta et al., 2017; Wang et al., 2018) is the task of recommending products or services that are either described through multimedia content (e.g., a fashion item with a product image and description) or are multimedia content themselves (e.g., a movie with its visuals, soundtrack, and subtitles). In such a context, any recorded user/item interaction may hide multiple possible reasons why that interaction occurred. A user could be interested in buying a fashion item due to the description on the item page and could enjoy a movie because of its soundtrack. Understanding these patterns means modeling users' and items' profiles through the _multi-faceted_ aspects of their interactions. Our experience of daily life is intrinsically _multimodal_. We interact with objects surrounding us through our five senses. For instance, watching a movie can involve three senses (i.e., modalities): we watch it (_visual_ modality) while listening to the dialogues (_audio_ modality) and possibly reading its subtitles (_textual_ modality). Multimodal learning has been one of the hot topics in deep learning for some years now, addressing applicative domains such as medical imaging (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), autonomous driving (Gan et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), speech/emotion recognition (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), multimedia retrieval (Gan et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), and, only recently, multimodal large language modelling (Gan et al., 2017). Given the success and popularity it has encountered, some works have tried to outline, categorize, and formalize the core concepts behind multimodality in deep learning (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018). Remarkably, the literature recognizes five steps and challenges when designing a multimodal deep learning pipeline (Beng et al., 2017): _representation, translation, alignment, fusion_, and _co-learning_. Similarly to the cited domains and applications, approaches in multimedia recommendation have been shown to effectively apply multimodal deep learning techniques to the recommendation task. The idea is to model users' and items' profiles through the different modalities and suitably capture the multi-faceted nature of their interconnections. Recent works in the literature have brought multimodality to multimedia recommendation (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), tackling (just to mention a few) micro-video recommendation (Gan et al., 2017; Wang et al., 2018; Wang et al., 2018), food recommendation (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), outfit fashion compatibility (Gan et al., 2017; Wang et al., 2018; Wang et al., 2018), and artist/song recommendation (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). However, and differently from the other outlined domains and scenarios, recommendation lacks a _shared_ theoretical and applicative formalization to align the multimedia recommendation problem with the same formal pipeline proposed in multimodal deep learning (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018). For these reasons, in this work, we first review the most popular and recent state-of-the-art approaches in multimedia recommendation. 
Indeed, it emerges that three main design choices are involved when proposing novel multimedia recommender systems leveraging multimodality: (i) _Which_ modalities suitably describe the user/item input data; (ii) _How_ to extract and process meaningful multimodal representations; (iii) _When_ to integrate and inject multimodal data into the training/inference steps. Observing that multimedia recommendation approaches are rarely aligned on the techniques adopted for (i), (ii), and (iii), we maintain that this could limit the future development of novel solutions in the field: each work claims to advance the state of the art, but it becomes cumbersome to distinguish which conceptual and implementation _strategies_ contribute the most (Wang et al., 2018). Thus, inspired by the multimodal pipeline formalized in multimodal deep learning [6, 7, 77], we align the same schema with the three design choices recognized above. Our objective is to define a conceptual and theoretical schema that uses multimodality to encompass and summarize the most widespread solutions/patterns in the multimedia recommendation literature. To the best of our knowledge, this represents the first attempt that, differently from similar works in the literature [62, 133], _formalizes_ multimedia recommendation through the core concepts theorized in multimodal deep learning [6, 7, 77]. To sum up, we aim to answer the following research questions (RQs):

1. _Which are the main solutions in the related literature?_ We review existing works in multimedia recommendation adopting multimodal learning techniques, highlighting common and different architectural choices; in this respect, we categorize the reviewed papers according to the type of multimodal input (i.e., _Which_), the technique for feature processing (i.e., _How_), and the moment to integrate modalities (i.e., _When_).
2. _How can we summarize the observed strategies into a shared formal schema?_ On such basis, and following the related literature on multimodal deep learning, we revisit the multimedia recommendation task under the lens of multimodal deep learning; by mapping the multimodal pipeline outlined in [6, 7, 77] to the threefold categorization from RQ1, we provide the general formulations for a formal schema involving three steps: multimodal input data, multimodal feature processing, and multimodal feature fusion.
3. _How is our formal schema conceptually applicable to existing recommendation models?_ First, we select four multimodal approaches from the recent literature spanning various domains and scenarios in multimedia recommendation; then, we show how the proposed multimodal schema conceptually applies to the selected models in all the steps of the pipeline, thus proving its effectiveness.
4. _Can we integrate the multimodal schema into existing recommendation frameworks?_ We use Elliot [2], a rigorous framework for the reproducibility of recommender systems, and integrate our multimodal schema into it to benchmark six state-of-the-art multimodal techniques for multimedia recommendation (i.e., VBPR [39], MMGCN [109], GRCN [108], LATTICE [124], BM3 [135], and FREEDOM [134]); the measured recommendation metrics, also accounting for beyond-accuracy evaluation, open up future research directions.
5. _Which are the next challenges in multimodal learning for recommendation?_ Driven by the previous findings, we outline technical and conceptual challenges aimed at providing guidelines for future research in the field.

The rest of this paper is organized as follows. Section 2 provides a comprehensive overview of the most recent works exploiting multimodality for multimedia recommendation and highlights the main differences between this work and similar works in the literature. Then, in Section 3, we present our formal schema, which tries to embody and generalize the different solutions reviewed in the literature. After that, in Section 4, we show how to apply the schema to a valuable sample of state-of-the-art approaches. In the same light, in Section 5, we present the implementation of our formal schema, which we adopt to benchmark selected models from the literature. Furthermore, in Section 6, we take advantage of the lessons learned from the literature and the benchmarking study to outline existing technical challenges that may be addressed in future directions (i.e., Section 7). Finally, in Section 8, we sum up the main contributions of this paper. To foster the reproducibility of the current work, we release a GitHub repository with all the reviewed papers, along with the benchmarking framework and results, at the following link: [https://github.com/sisinflab/Formal-MultiMod-Rec](https://github.com/sisinflab/Formal-MultiMod-Rec).

## 2. Literature review (RQ1)

In this section, we present a literature review of recent multimodal applications for the task of multimedia recommendation. Table 1 reports 43 papers collected from the proceedings of top-tier conferences and journals over the last eight years. A careful review and analysis aimed at outlining recurrent patterns suggests categorizing the retrieved papers according to three key questions:

* _Which_ modalities to choose for the input data?
* _How_ to process multimodal features in terms of feature extraction and representation?
* _When_ to fuse the different modalities to integrate them into the final recommendation framework?

To collect all reviewed papers, we also include a public GitHub repository1 with access to their direct DOIs. We intend to update this repository with the most recent works leveraging multimodality for multimedia recommendation.

Footnote 1: [https://github.com/sisinflab/Formal-MultiMod-Rec](https://github.com/sisinflab/Formal-MultiMod-Rec).

### _Which_ modalities?

In multimedia recommendation scenarios, input data generally comes in at least two of the three most common modalities in the literature, namely the _Visual_, _Textual_, and _Audio_ modalities. As evident from the collected papers, the vast majority of works consider the visual and textual modalities, which mainly refer to product images and descriptions (e.g., [25; 30; 36; 59; 122; 124]), respectively, while fewer examples leverage such modalities to describe video frames and captions (e.g., [45; 85; 104]) or users' social media interactions through uploaded photographs together with texts (e.g., [17; 113; 126]). Another emerging trend from the literature is that audio is by far the most underrepresented modality, and it is usually coupled with the textual one to describe music in the form of audio signals and songs' descriptions (e.g., [79; 101]).
Conversely, the related literature shows that the audio modality is frequently exploited for video input data (e.g., [96; 103; 108; 109; 116]), which is also the only scenario involving all modalities. The observed disparity in data modalities is not only linked to the specific task the various approaches address (e.g., product, song, or micro-video recommendation) but also stems from each modality's different availability. In this respect, for example, datasets collecting user-item interactions on e-commerce platforms (e.g., the Amazon reviews dataset or IQON3000) are more easily accessible than the ones involving social media videos. For instance, one may consider that a version of the TikTok dataset (introduced in [109]) has been made available with pre-trained multimodal features involving visual, audio, and textual modalities only recently [106]. This modality _misalignment_ is among the most discussed challenges in the community, so we dedicate a section to it later (refer to Section 6.1).

### _How_ to process modalities?

Once modalities have been selected for data inputs, two primary operations are usually involved in processing the multimodal data to be fed into the recommender system. First, high-level features are extracted from each of the available modalities. Interestingly, early approaches adopt handcrafted feature extraction (HFE) strategies (e.g., color histograms), as described in [17; 45; 56; 78]. However, with the advent and increasing popularity of deep learning and deep neural models for image and text classification, object detection, and speech recognition, trainable feature extractors (TFE) soon became the de facto standard for learning latent features from the input data. In this respect, the literature [26] indicates that the common approach is to use the activation of one of the final hidden layers of deep neural networks. For instance, the authors of [63; 64; 75; 89; 102; 106; 120] exploit features extracted from deep networks. Furthermore, we categorize TFE strategies based on the use of _Pretrained_ deep networks and _End-to-End_ learned models. The former refers to the possibility of transferring the learned knowledge of already-trained deep networks to different domains, tasks, and datasets (e.g., see [94]), whereas the latter usually leverages custom deep neural networks trained on the downstream recommendation task. As evident from the collected papers, the pre-trained solution (e.g., [15; 19; 30; 31; 85; 87; 108; 118]) widely surpasses the end-to-end one (e.g., [36; 79; 126]) in terms of popularity, as the adoption of ready-to-use embedded features obtained from state-of-the-art deep learning models represents a more efficient and convenient approach than performing computationally-expensive and data-hungry trained feature extractions. Nevertheless, an argument might be made that using features extracted through models already trained on different datasets and tasks could limit their expressiveness with respect to the actual multimedia recommendation task. For this reason, we delve into the issue in Section 6.2 and try to propose viable solutions in Section 7.1. The second operation involved in the feature processing phase regards the implementation of a multimodal representation (MMR) solution to establish relations among the extracted modalities.
We recognize two main approaches: either combining all modalities so that they belong to a unique representation (_Joint_) or keeping them separated to leverage the different influence they may have on recommendation (_Coordinate_). From the collected papers, it follows that the former (e.g., [14; 18; 31; 76; 89; 118; 122]) and the latter (e.g., [124; 109; 78; 114; 115]) are almost equally preferred; however, the coordinate multimodal representation is slightly more popular, as learning different representations for each involved modality may help unveil the specific contribution it brings to the final personalized recommendation. Indeed, this could support _explainability_, which is among the hottest topics in the community [127; 128], especially in multimedia recommendation scenarios, where user-item interactions may, by nature, be driven by non-evident and sometimes contrasting users' preferences and tastes [108; 16; 109]. Finally, the authors of [19; 21] do not integrate any multimodal representation approach, since they exploit multimodality only for the optimization of the loss and not to predict user-item preferences.

### _When_ to fuse modalities?

The last stage in the multimodal pipeline deals with the fusion of the different processed modalities so that they can be eventually integrated into the recommendation outcome as a single representation of multiple _coordinated_ modalities. This process may take place _before_ or _after_ the prediction of the user-item preference score. On this basis, the former and the latter approaches are usually known as _Early_ (e.g., [101; 109]) and _Late_ (e.g., [25; 115]) fusion, respectively. It is worth pointing out that some solutions recognize a third strategy (i.e., _Hybrid_ fusion) that combines the two versions mentioned above, but for the sake of simplicity, we decide to categorize the works performing this kind of multimodal fusion as a particular case of late fusion. Additionally, we recognize that several approaches from the literature do not provide a precise differentiation between joint multimodal representation and early fusion. To better clarify this technical aspect, we propose to consider _fusion_ as an optional operation that takes place after the feature processing phase only in the case of _coordinate_ multimodal representation (see Section 6.3). Indeed, as evident from the table, _Joint_ multimodal representation and _Early_/_Late_ fusion never occur in the same approach. What is more, we observe that early fusion, employed, for instance, in [14; 126; 124; 126; 64; 134; 14; 30; 64; 76; 14; 106], is more popular than late fusion, used in [134; 135; 106; 114; 136; 25; 106]. Explaining this tendency is an open research question that may impact the design of recommender systems leveraging multimodality. In this respect, see Section 6.4 for our discussion of the current challenges about modality fusion, and Section 7.3, where we sketch possible future research directions.

### Similar works to this paper

For the sake of completeness, we review the current literature works that provide similar contributions to ours to outline the main differences. As already mentioned, pioneering works such as [7; 6; 77] introduce and formalize (for the first time) the core concepts and ideas behind the field of multimodal deep learning.
After that, recent years have seen a growing interest in systematically reviewing and schematizing techniques for multimodal fusion [32], spanning different application domains such as medicine (Xiong et al., 2017), conversational artificial intelligence (Xiong et al., 2018), and visual content synthesis (Xiong et al., 2018), up to addressing complex and novel machine learning strategies including meta-learning (Xiong et al., 2019). Although the cited works share similar rationales to ours, their focus is more general (e.g., deep learning) or heterogeneous (e.g., medicine) with respect to the multimedia recommendation task. In the recommendation domain, the study presented in (Xiong et al., 2019) is among the closest and most influential works to our proposal in its intention to introduce a unified framework for food recommendation that leverages the concept of multimodality; however, the work differs from ours in that: (i) it only addresses the task of food recommendation, and (ii) it does not provide either mathematical formalizations or benchmarking analyses of the proposed multimodal pipeline. Furthermore, it is worth recalling two surveys regarding the topic of multimodal recommender systems (Xiong et al., 2019; Chen et al., 2020), available on arXiv at the moment of this submission. Among the two, the work presented in (Xiong et al., 2019) shows the closest similarities to our paper, especially in recognizing a multimodal pipeline for multimedia recommendation and providing an extensive benchmarking study on numerous multimodal approaches in the literature. Nevertheless, our work stands out for the following novel contributions: (i) we systematically follow the multimodal pipeline outlined in (Xiong et al., 2019; Chen et al., 2020; Chen et al., 2020) in the attempt to adapt it to the three main questions arising in the multimedia recommendation literature, namely, _Which?_, _How?_, and _When?_; (ii) we provide mathematical formalizations for each step of the proposed multimodal pipeline to sketch a formal schema for the next generation of multimodal approaches addressing multimedia recommendation; (iii) we identify a wider set of challenges regarding each step in the multimodal pipeline and try to provide solutions to tackle them all; (iv) given the recent push to evaluate the performance of recommender systems under objectives other than accuracy (Beng et al., 2019; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020), and specifically when investigating the impact of multimodality on novelty and diversity (Xiong et al., 2019) or popularity bias (Xiong et al., 2019), our benchmarking analysis (exploiting the evaluation pipeline from Elliot (Elliot, 2019)) represents the first effort to rigorously assess such large-scale performance measures in multimedia recommendation, opening up further research questions.

## 3. A formal multimodal schema for multimedia recommendation (RQ2)

As previously outlined, the literature shows recurrent schematic patterns in adopting multimodal techniques for the task of multimedia recommendation. However, when considering the latest solutions in the field (Section 2), it appears evident that, differently from what happens in other applicative domains of machine learning, such approaches do not seem to follow any shared and officially recognized formal schema aligned with the principles of multimodal deep learning (Xiong et al., 2019; Chen et al., 2020; Chen et al., 2020).
To sort things out, in this section, we propose to formally revisit multimedia recommendation under the lens of multimodal deep learning (Figure 1). First, we formalize the standard recommendation task. Then, we give theoretical answers to the core questions previously outlined, namely: _Which_ multimodal input data to adopt, _How_ to extract multimodal features and set relationships among them, and _When_ to fuse modalities. Finally, we specify the multimedia recommendation task through multimodality. In the following, we use the **bold** notation only when we explicitly define a vector for which we indicate its elements (i.e., scalars or other vectors).

### Classical recommendation task

We consider users, items, and user-item interactions as the inputs to the recommender system. We denote with \(u\in\mathcal{U}\), \(i\in\mathcal{I}\), and \(r\in\mathcal{R}\) a user, an item, and a user-item interaction, respectively. To ease the notation, we say \(x\in\mathcal{X}\) is a general input to the system, with \(\mathcal{X}=\mathcal{U}\cup\mathcal{I}\cup\mathcal{R}\). Given a set of input data \(\mathcal{X}\), and defined \(\rho(\cdot)\) as the preference score prediction function, a recommender system aims to build a top\(@k\) list of items maximizing the following posterior probability (_prob_):

\[\hat{\Theta}_{\rho}=\operatorname*{arg\,max}_{\Theta_{\rho}}prob(\Theta_{\rho}\mid\mathcal{X}), \tag{1}\]

where \(\Theta_{\rho}=[\theta_{\rho}^{(0)},\theta_{\rho}^{(1)},\ldots,\theta_{\rho}^{(|\mathcal{W}_{\rho}|-1)}]\) is the vector collecting all weights for the inference function \(\rho(\cdot)\), \(\mathcal{W}_{\rho}=\{\theta_{\rho}^{(0)},\theta_{\rho}^{(1)},\ldots,\theta_{\rho}^{(|\mathcal{W}_{\rho}|-1)}\}\) is the set of such weights, and \(|\mathcal{W}_{\rho}|\) its cardinality. For instance, in the case of latent factor models (e.g., matrix factorization [50]), the set of trainable weights \(\mathcal{W}_{\rho}\) involves the user and item embedding matrices.

### Multimodal input data

As shown in Figure 1, the first step of our multimodal schema is to identify the input modalities. A common list of modalities for each input data point (i.e., user, item, user-item interaction) in multimedia scenarios may be defined as follows:

* visual (**v**), e.g., images, video frames;
* textual (**t**), e.g., image captions, video subtitles, song lyrics, reviews;
* audio (**a**), e.g., songs, podcasts, movie soundtracks.

Formally, we define \(m\in\mathcal{M}\) as an admissible modality for the system (i.e., \(\mathcal{M}=\{\textbf{v},\textbf{t},\textbf{a}\}\)). We should mention that data may come with all such modalities or just a subset. For instance, videos from video streaming platforms (such as Netflix or Amazon Prime Video) have frames (**v**), subtitles and/or descriptions (**t**), and an audio track and/or soundtrack (**a**). Similarly, e-commerce platforms (such as Amazon or Zalando) sell products that may come with photographs (**v**) and reviews, which stand for the textual feedback users express towards those products (**t**). Let \(x\in\mathcal{X}\) be an input to the recommender system, whose set of available modalities is indicated as \(\mathcal{M}_{x}\subseteq\mathcal{M}\). We represent the _content_ data of input \(x\) in modality \(m\) as \(c_{x}^{(m)}\), with \(m\in\mathcal{M}_{x}\), and the vector of content data for input \(x\) in all modalities as \(\textbf{c}_{x}\).
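To make this notation concrete, the following minimal Python sketch (ours and purely illustrative; the class and field names are assumptions, not part of the schema) shows one way to organize an input \(x\) as a mapping from its available modalities \(\mathcal{M}_{x}\) to the corresponding content data \(c_{x}^{(m)}\):

```python
# Illustrative sketch of the multimodal input data of Section 3.2 (names are
# ours, not from a specific library): each input x carries content only for
# its available modalities M_x, a subset of {"v", "t", "a"}.
from dataclasses import dataclass, field
from typing import Any, Dict

MODALITIES = {"v", "t", "a"}  # visual, textual, audio

@dataclass
class MultimodalInput:
    """Content data c_x organized per modality."""
    input_id: str
    content: Dict[str, Any] = field(default_factory=dict)  # m -> c_x^{(m)}

    @property
    def available_modalities(self) -> set:
        """M_x: the subset of modalities for which content is present."""
        return set(self.content) & MODALITIES

# A movie item carrying all three modalities (M_x = {v, t, a}):
movie = MultimodalInput("item_42", {"v": "frames.npy", "t": "subs.txt", "a": "track.wav"})
assert movie.available_modalities == {"v", "t", "a"}
```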
Concerning the examples from above, a video item \(x\) may be described through three modalities (i.e., \(\mathcal{M}_{x}=\{\textbf{v},\textbf{a},\textbf{t}\}\)) and, for example, its visual content data (a frame) is an RGB image indicated as \(c_{x}^{(\textbf{v})}\). Similarly, a fashion item \(x\) may be described through two modalities (i.e., \(\mathcal{M}_{x}=\{\textbf{v},\textbf{t}\}\)) and, for example, its textual content data (the description) is a set of words indicated as \(c_{x}^{(\textbf{t})}\).

Figure 1. Our multimodal schema for multimedia recommendation. After (1) a modality-aware feature extraction, the extracted features may be either directly represented into a unique latent space (2a) or projected into a different latent space for each modality (2b). While in the former case the multimodal representation is used to produce a prediction (4), in the latter case all modalities must undergo a fusion phase (3). In the early fusion (3a), we produce a final representation that is used for prediction (4). Otherwise, we first produce a different prediction for each modality (4), and then we fuse them (late fusion) into a single predicted value (3b).

### Multimodal feature processing

As in Figure 1, multimodal inputs are processed to be transferred into a low-dimensional representation. This step runs through a multimodal feature extractor and a component that constructs a multimodal feature representation.

#### 3.3.1. Feature extraction

Content input data is generally not exploitable as-is in a recommender model (e.g., the matrix of pixels from an image is not meant to be directly integrated into a recommender). Hence, our schema introduces a _Feature Extractor_ (FE) component to extract features, which should follow two principles: being (i) _high-level_ (i.e., meaningful for the recommender system) and (ii) _functional_ to the final recommendation task. Indeed, choosing the most suitable feature extractor for each modality may affect the performance. Let \(c_{x}^{(m)}\) be the content data for input \(x\) in modality \(m\in\mathcal{M}_{x}\). Then, let \(\varphi_{m}(\cdot)\) be the feature extractor function for the modality \(m\). We define the feature extraction process in the modality \(m\) as:

\[\overline{c}_{x}^{(m)}=\varphi_{m}(c_{x}^{(m)})\quad\forall m\in\mathcal{M}_{x}, \tag{2}\]

where \(\overline{c}_{x}^{(m)}\) is the extracted feature for input \(x\) in modality \(m\). We use the notation \(\overline{\textbf{c}}_{x}=[\overline{c}_{x}^{(0)},\overline{c}_{x}^{(1)},\dots,\overline{c}_{x}^{(|\mathcal{M}_{x}|-1)}]\) to refer to the vector of extracted features for input \(x\) in all modalities. Generally speaking, \(\varphi_{m}(\cdot)\) may refer either to a handcrafted extractor, HFE (e.g., SIFT and color histograms for visual, and MFCCs for audio), or to a trainable extractor, TFE (e.g., deep learning-based models such as CNNs for visual, audio, and textual). In the latter case, \(\varphi_{m}(\cdot)\) can be either pre-trained or trained end-to-end along with the recommender system.

#### 3.3.2. Multimodal representation

Once high-level features have been extracted from each modality of the input data, the next step is to design a _Representation_ strategy to handle the relationships among modalities and eventually inject such data into the recommender system. As shown in Section 2, the literature follows two main approaches: _Joint_ and _Coordinate_ (Figure 2).
The former relies on projecting multimodal features into a shared latent space to produce a unique final representation (e.g., concatenation is usually the simplest approach). Conversely, the latter involves adopting a different latent space for each modality, with the possibility of setting specific constraints among modalities that are expressed, for instance, through similarity metrics. In the following, we mathematically formalize the two strategies. **Joint representation.** Let \(\overline{c}_{x}\) be the vector of extracted features for input \(x\) in all modalities. In the case of _Joint_ representation, we assume \(\mu(\cdot)\) is the function to produce the multimodal representation of the extracted features. Thus: \[\tilde{c}_{x}=\mu(\overline{c}_{x}), \tag{3}\] where \(\tilde{c}_{x}\) is the multimodal representation for input \(x\). **Coordinate representation.** Let \(\overline{c}_{x}^{(m)}\) be the extracted feature for input \(x\) in modality \(m\in\mathcal{M}_{x}\). In the case of _Coordinate_ representation, we assume \(\mu_{m}(\cdot)\) is the multimodal representation function for modality \(m\), and let \(\mathcal{K}_{x}\) be a set of constraints on multimodal representations of input \(x\). Thus, we say: \[\tilde{c}_{x}^{(m)}=\mu_{m}(\overline{c}_{x}^{(m)})\textbf{ subject to }\mathcal{K}_{x},\textbf{ with }|\mathcal{K}_{x}|\geq 0, \tag{4}\] where \(\tilde{c}_{x}^{(m)}\) is the coordinate multimodal representation for input \(x\) in modality \(m\). Note that in Equation (4) we denote with \(\tilde{c}_{x}=[\tilde{c}_{x}^{(0)},\tilde{c}_{x}^{(1)},\dots,\tilde{c}_{x}^{(| \mathcal{M}_{x}|-1)}]\) the vector of coordinate multimodal representations for input \(x\) in all modalities. ### Multimodal feature fusion As an optional third step, when _Coordinate_ representation is used, our multimodal schema allows an additional _Fusion_ step to combine all produced multimodal representations. In the following, we describe the inference step in the two cases of _Early_ and _Late_ fusion. **Early fusion.** Let \(\tilde{\mathbf{c}}_{x}\) be the vector of coordinate multimodal representations for input \(x\) in all modalities. Then, let \(\gamma_{\mathbf{e}}(\cdot)\) be the function for _Early_ fusion. We generate the multimodal representation for input \(x\) as: \[\tilde{c}_{x}=\gamma_{\mathbf{e}}(\tilde{\mathbf{c}}_{x}). \tag{5}\] Note that after applying Equation (5), everything we describe in the following also applies to _Joint_ representation. We obtain the predicted output \(\hat{y}\) for input \(x\) as: \[\hat{y}=\rho(\tilde{c}_{x}) \tag{6}\] **Late fusion.** Let \(\tilde{c}_{x}^{(m)}\) be the coordinate multimodal representation for input \(x\) in modality \(m\in\mathcal{M}_{x}\). We first predict the different output values for each modality as: \[\hat{y}^{(m)}=\rho(\tilde{c}_{x}^{(m)})\quad\forall m\in\mathcal{M}_{x}. \tag{7}\] Let \(\hat{\mathbf{y}}\) be the vector of multimodal predicted outputs in all modalities. If we denote \(\gamma_{\mathbf{l}}(\cdot)\) as the function for _Late_ fusion, we finally aggregate (fuse) all modality-aware predictions: \[\hat{y}=\gamma_{\mathbf{l}}(\hat{\mathbf{y}}). \tag{8}\] Whatever the type of _Fusion_, the literature shows that various works perform this operation differently, from more straightforward solutions such as concatenation and element-wise addition, multiplication, or average to more refined techniques (i.e., neural-based ones, like attention mechanisms). 
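As a concrete reading of Equations (3)–(8), the following PyTorch-style sketch (our own illustration; the dimensionalities and the concrete choices of concatenation, element-wise addition, inner product, and mean aggregation are assumptions rather than prescriptions) contrasts joint and coordinate representation as well as early and late fusion:

```python
# Illustrative PyTorch sketch of Equations (3)-(8); module names, sizes,
# and the concrete operators (concatenation, addition, mean) are ours.
import torch
import torch.nn as nn

class CoordinateRepresentation(nn.Module):
    """Eq. (4): one projection mu_m per modality, without explicit constraints K_x."""
    def __init__(self, dims: dict, d: int = 64):
        super().__init__()
        self.mu = nn.ModuleDict({m: nn.Linear(dim, d) for m, dim in dims.items()})

    def forward(self, feats: dict) -> dict:  # feats[m] = extracted feature (Eq. 2)
        return {m: self.mu[m](f) for m, f in feats.items()}

def joint_representation(feats: dict) -> torch.Tensor:
    """Eq. (3): a single shared representation; concatenation is the simplest mu."""
    return torch.cat([feats[m] for m in sorted(feats)], dim=-1)

def early_fusion(coord: dict) -> torch.Tensor:
    """Eq. (5): fuse coordinate representations before scoring (element-wise sum)."""
    return torch.stack([coord[m] for m in sorted(coord)]).sum(dim=0)

def late_fusion_scores(user: torch.Tensor, coord: dict) -> torch.Tensor:
    """Eqs. (7)-(8): one inner-product score per modality, aggregated by mean."""
    per_modality = torch.stack([(user * coord[m]).sum(-1) for m in sorted(coord)])
    return per_modality.mean(dim=0)

# Example: a batch of 8 items with visual (2048-d) and textual (768-d) features.
feats = {"v": torch.randn(8, 2048), "t": torch.randn(8, 768)}
item_joint = joint_representation(feats)           # (8, 2816), Eq. (3)
coord = CoordinateRepresentation({"v": 2048, "t": 768})(feats)
item_early = early_fusion(coord)                   # (8, 64), used in Eq. (6)
scores_late = late_fusion_scores(torch.randn(8, 64), coord)  # (8,)
```

Mirroring the schema, the joint path yields a single representation that goes directly to prediction, whereas the coordinate path either fuses representations before scoring (early) or aggregates per-modality scores (late).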
Note that, in this work, we consider _Late_ fusion also when multimodal representations are exploited only for some specific components of the loss function; indeed, in such settings, multimodal fusion is deferred to the very last stage of the recommendation pipeline (i.e., the calculation and optimization of the loss function).

Figure 2. A visual representation of _Joint_ and _Coordinate_ multimodal representation (above and below, respectively).

### Multimodal recommendation task

Let \(\mathcal{W}_{\varphi}\), \(\mathcal{W}_{\mu}\), and \(\mathcal{W}_{\gamma}\) be the sets of the additional model trainable weights from (i) feature extraction, (ii) multimodal representation, and (iii) multimodal fusion, respectively. Note that they could be empty, as the corresponding functions may be non-trainable. Then, given the set of multimodal input data \(\mathcal{X}\), we extend Equation (1):

\[\hat{\Theta}=\operatorname*{arg\,max}_{\Theta}prob(\Theta\mid\mathcal{X}), \tag{9}\]

where \(\Theta=[\Theta_{\rho},\Theta_{\varphi},\Theta_{\mu},\Theta_{\gamma}]\), with

\[\Theta_{\varphi}=[\theta_{\varphi}^{(0)},\theta_{\varphi}^{(1)},\ldots,\theta_{\varphi}^{(|\mathcal{W}_{\varphi}|-1)}],\quad\Theta_{\mu}=[\theta_{\mu}^{(0)},\theta_{\mu}^{(1)},\ldots,\theta_{\mu}^{(|\mathcal{W}_{\mu}|-1)}],\quad\Theta_{\gamma}=[\theta_{\gamma}^{(0)},\theta_{\gamma}^{(1)},\ldots,\theta_{\gamma}^{(|\mathcal{W}_{\gamma}|-1)}], \tag{10}\]

as the vectors of the model's feature extractor weights, multimodal representation weights, and multimodal fusion weights, respectively. We solve Equation (9) by optimizing the loss \(L\):

\[L=L_{rec}(\Theta,\hat{y},y)+\alpha L_{reg}(\Theta), \tag{11}\]

where \(y\) is the ground-truth value corresponding to the predicted output \(\hat{y}\), and \(\alpha\) is a model hyper-parameter to weight the _regularization_ component of the loss function (i.e., \(L_{reg}\)). Algorithm 1 provides a general overview of the overall multimodal schema we presented.
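As an illustration of Equation (11) (a sketch under our own assumptions, not a prescribed choice), one can instantiate \(L_{rec}\) with the BPR pairwise objective that recurs throughout the reviewed literature and \(L_{reg}\) with a squared \(\ell_2\) penalty:

```python
# Illustrative sketch of Equation (11) with a BPR-style pairwise loss as L_rec
# and an L2 penalty as L_reg; picking BPR and L2 as concrete choices is our
# assumption, motivated by their prevalence in the reviewed literature.
import torch

def bpr_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """L_rec: push scores of interacted (positive) items above negative ones."""
    return -torch.log(torch.sigmoid(pos_scores - neg_scores) + 1e-10).mean()

def l2_reg(params) -> torch.Tensor:
    """L_reg: squared L2 norm over all trainable weights in Theta."""
    return sum((p ** 2).sum() for p in params)

def total_loss(pos_scores, neg_scores, params, alpha: float = 1e-4):
    """Eq. (11): L = L_rec + alpha * L_reg."""
    return bpr_loss(pos_scores, neg_scores) + alpha * l2_reg(params)
```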
## 4. Conceptual application and validation of the multimodal schema (RQ3)

After having described the formalism behind our proposed multimodal schema for multimedia recommendation, we devote the current section to the validation of the schema by conceptually applying it to four state-of-the-art approaches from the literature (Table 2). To demonstrate that our formal solution is designed to work with multiple recommendation scenarios involving multimedia user-item interactions, we choose examples spanning a wide range of tasks, namely micro-video recommendation (Wang et al., 2018), food recommendation (Wang et al., 2018), outfit fashion compatibility (Wang et al., 2018), and artist/song recommendation (Wang et al., 2018).

### Case 1: Micro-video recommendation

Wei et al. (Wei et al., 2018) build a bipartite user-item graph for micro-video personalized recommendation. The idea behind the approach is to exploit high-order user-item relations by leveraging the multimodal nature of the recommended items (i.e., micro-videos), toward which users may hold different attitudes. The authors adopt a graph convolutional network (Wang et al., 2018) to refine user and item embeddings (conditioned on the graph topology).

#### 4.1.1. Multimodal input data

Micro-videos (the items) are described via three modalities, namely: _visual_ (i.e., frames), _textual_ (i.e., user-generated captions and descriptions), and _audio_ (i.e., the audio track, which is not always available). It is worth pointing out that users, too, are described through three embeddings representing how each item modality might influence them differently. Nevertheless, they cannot be formally considered as multimodal input data (we do not report any information in the multimodal input data and feature extraction columns of the table).

#### 4.1.2. Feature extraction

Visual features are extracted through a pretrained ResNet50 (Wang et al., 2018), only from key video frames. Textual features are derived from Sentence2Vector (Chen et al., 2019). Audio features are extracted using a pre-trained VGGish (Wang et al., 2018).

#### 4.1.3. Multimodal representation

The framework leverages three versions of the same bipartite user-item graph (i.e., one for each micro-video modality). The graph convolutional layer first aggregates the neighborhood features of the ego node and then combines the result of such aggregation with the collaborative embedding and the multimodal representation from the previous iteration. Given the formalism introduced above, this approach falls under the definition of _Coordinate_ representation. The model adopts a linear projection for each modality to map the input into a modality-specific latent space, both in the aggregation and combination steps. No explicit constraints are introduced.

#### 4.1.4. Multimodal fusion

The adoption of a multimodal coordinate representation requires a modality fusion phase. This is performed through element-wise addition among modalities for users and items. As this occurs before feeding them into the inference function, we categorize it as _Early_ fusion.

#### 4.1.5. Inference and loss function

As in several collaborative filtering approaches, the inference is performed through the inner product between the multimodal representations of users and items. Regarding the loss function, the authors use the broadly-adopted BPR (Wang et al., 2017) optimization framework, maximizing the distance between the predicted ratings for positive items (i.e., the ones users have interacted with) and negative items (i.e., the ones users have not yet interacted with).

### Case 2: Food recommendation

Wang et al. (Wang et al., 2018) introduce a tripartite framework for food recommendation whose pipeline involves the retrieval of recipes according to user-generated videos, the profiling of users based upon their social media interactions, and the final health-aware recommendation of recipes.

#### 4.2.1. Multimodal input data

On the user side, it should be noticed that the input data does not strictly follow the definition of multimodality we provide above, as users are profiled only according to the textual description of their generated tweets. However, we maintain the importance of this example, since it represents one of the few approaches in the literature that proposes to model users through a multimodality-like solution. On the other side, the items' description is multimodal because it integrates frames of user-generated videos from which to retrieve recipes (i.e., _visual_ modality) and descriptions of recipe ingredients (i.e., _textual_ modality).

#### 4.2.2. Feature extraction

Users' tweets are the input to what the authors define as a word-class interaction-based recurrent convolutional network (WIRCNN), which involves a recurrent neural network (RNN) and a convolutional neural network (CNN) to classify user tags.
As for items, sampled video frames are encoded through a pre-trained VGGNet-19 (Wang et al., 2018), while textual recipe ingredients are processed via TextCNN (Wang et al., 2018).

#### 4.2.3. Multimodal representation

Given that the user profile is not multimodal _per se_, we do not recognize any multimodal representation stage. On the item side, the multimodal representation is _Coordinate_, but no particular operation is performed on the extracted multimodal features.

#### 4.2.4. Multimodal fusion

No modality fusion is run over user profiles. Regarding items, the authors adopt an _Early_ modality fusion by concatenating the visual and textual features.

#### 4.2.5. Inference and loss function

User embeddings are learned for tag prediction, with a one-layer MLP used to predict scores and cross-entropy as the loss function. The final user embeddings are eventually exploited for the main task of food recommendation. Contrarily, item embeddings are directly adopted for the score prediction, run with a one-layer MLP trained on a binary cross-entropy loss.

### Case 3: Outfit fashion compatibility

Han et al. (Han et al., 2017) propose a framework to recommend the next fashion item that matches a set of already chosen ones to produce a visually appealing outfit. The authors address the task by considering the items composing a fashionable outfit as a temporal sequence, so they leverage a bidirectional LSTM (Han et al., 2017).

#### 4.3.1. Multimodal input data

Recommendation is multimodal because the authors adopt both product images (i.e., _visual_ modality) and text descriptions of the fashion items extracted from the product details (i.e., _textual_ modality).

#### 4.3.2. Feature extraction

The visual features of fashion items are extracted from a GoogleNet InceptionV3 (Zhu et al., 2017) pretrained network (the TFE), whose dimensionality is 2048. As for the textual description, each word is a one-hot-encoded vector.

#### 4.3.3. Multimodal representation

Visual and textual extracted features are projected into a unique latent space, whose dimensionality is 512. According to the earlier formalism, this approach follows a _Joint_ multimodal representation. On the one hand, 2048-dimensional visual features are compressed into a 512-dimensional embedding through a fully connected neural network layer, which is trained end-to-end with the recommendation model. On the other hand, the textual features for each description are first projected into the 512-dimensional latent space through a linear mapping (i.e., adopting a projection matrix, which is also trained end-to-end). Then, the authors adopt a bag-of-words approach to obtain a unique representation for the description of each fashion item.

#### 4.3.4. Inference and loss function

Only the visual modality is adopted as input for the recommendation inference. However, the textual modality is exploited jointly with the visual input to minimize the contrastive component of the loss function, where cosine similarity measures the distance between the visual and textual modalities in the shared latent space.
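The following is our simplified sketch of such a contrastive visual–textual term (not the authors' implementation; the hinge form and the margin value are assumptions): matching image–description pairs should be closer, in cosine similarity, than mismatched pairs within the batch.

```python
# Our simplified sketch of a contrastive visual-textual alignment loss in a
# shared latent space (not the authors' code); the hinge margin is assumed.
import torch
import torch.nn.functional as F

def contrastive_alignment(v: torch.Tensor, t: torch.Tensor, margin: float = 0.2):
    """v, t: (batch, 512) visual/textual embeddings of the same fashion items."""
    v, t = F.normalize(v, dim=-1), F.normalize(t, dim=-1)
    sim = v @ t.T                  # pairwise cosine similarities; diagonal = matches
    pos = sim.diag().unsqueeze(1)  # similarity of each item to its own description
    mask = 1.0 - torch.eye(sim.size(0), device=sim.device)
    # Hinge: mismatched pairs should stay at least `margin` below the matching pair.
    return (F.relu(margin + sim - pos) * mask).mean()
```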
### Case 4: Artist and song recommendation

The approach introduced by Oramas et al. (Oramas et al., 2017) deals with the task of music recommendation. Specifically, the authors propose to divide the problem into artist and song recommendations by learning their separate embeddings and leveraging the textual artist biography and audio spectrogram as inputs.

#### 4.4.1. Multimodal input data

Multimodality is to be found in the item's description, which is based upon the artist biography (i.e., _textual_ modality) and the audio spectrogram derived from songs (i.e., _audio_ modality).

#### 4.4.2. Feature extraction

Artist biographies are processed through a state-of-the-art CNN, which is re-trained using word2vec word embeddings pre-trained on the Google News dataset (Wang et al., 2019). As for the song latent factors, a custom CNN with 256, 512, and 1024 convolutional filters is trained along the time axis, with a 4096-dimensional dense layer as output.

#### 4.4.3. Multimodal representation

Before further processing the extracted multimodal features, both textual and audio features are normalized. Afterward, they optionally go through two separate MLPs (i.e., _Coordinate_ representation).

#### 4.4.4. Multimodal fusion

The authors explore two possibilities: if no MLP processing occurred in the multimodal representation stage, then the normalized features are concatenated and fed into a one-layer MLP; otherwise, the multimodal representations are connected to the one-layer MLP. In both cases, they adopt an _Early_ multimodal fusion.

#### 4.4.5. Inference and loss function

Inference is run through the inner product between the user and item final embeddings, while cosine distance is the chosen loss function, as the learned latent embeddings are l2-normalized.

## 5. Implementation and Benchmarking (RQ4)

In this section, we show how we integrate the proposed multimodal schema for multimedia recommendation into Elliot (Elliot, 2017), a framework for the rigorous reproducibility and evaluation of recommender systems. Specifically, we use this implementation to benchmark six state-of-the-art multimedia recommendation approaches (i.e., VBPR, MMGCN, GRCN, LATTICE, BM3, and FREEDOM).

#### 5.2.2. MMGCN

Multimodal graph convolution network (Kang et al., 2018), referred to as MMGCN, introduces the concept of multimodality in graph-based recommendation systems. In this approach, the authors suggest training distinct graph convolutional networks for each modality under consideration. This results in three sets of user and item representations, accommodating the various perspectives that users may have toward each modality. Ultimately, all the embeddings related to different modalities, both for users and items, are merged through element-wise addition, and the preference prediction score is computed using an inner product.

#### 5.2.3. GRCN

Graph-refined convolutional network (Kang et al., 2018), denoted as GRCN, leverages the information from various modalities to refine the values of the adjacency matrix. This is particularly important due to the implicit nature of user-item interactions in the bipartite graph. The primary objective is to identify and potentially remove edges that do not accurately reflect each user's actual preferences. The learned multimodal representations of users and items are then fused through concatenation to produce a comprehensive representation for predicting preference scores.

#### 5.2.4. LATTICE

Latent structure mining method for multimodal recommendation (Latt et al., 2015), dubbed LATTICE, creates a similarity graph between items for each modality and enhances this structure using graph structure learning techniques. These improved adjacency matrices are then merged through a weighted sum, assigning varying importance weights to each modality. The resulting adjacency matrix is employed to refine item embeddings using a graph convolutional network.
Ultimately, the obtained item embeddings can serve as the building blocks for various user and item latent factor-based models, such as BPR-MF.

#### 5.2.5. BM3

Bootstrapped multimodal model (Latt et al., 2015), indicated as BM3, proposes a self-supervised multimodal technique for recommendation. Unlike previous approaches using computationally expensive augmentations, BM3 leverages dropout as a simple operation for generating contrastive views of the same embeddings. In detail, the loss function consists of three components, where a reconstruction loss maximizes the similarity between the contrastive views of user and item embeddings, while inter- and intra-modality alignment losses work to minimize the distance between the contrastive views generated for the same or different modalities.

#### 5.2.6. FREEDOM

The authors of (Latt et al., 2015) demonstrate that freezing the item-item multimodal similarity graph (derived from LATTICE) and denoising the user-item graph can lead to improved recommendation performance (the proposed model is named FREEDOM). As for the denoising operation on the user-item graph, the authors propose a degree-sensitive edge pruning to remove noisy edges from the user-item adjacency matrix. Moreover, and differently from LATTICE, the model optimizes a double BPR-like loss function, where the first component of the loss integrates a multimodal-enhanced representation of the item embedding, while the second component explicitly leverages the item's projected multimodal features.

| **Datasets** | \(|\mathcal{U}|\) | \(|\mathcal{I}|\) | \(|\mathcal{R}|\) | **Sparsity (%)** |
| --- | --- | --- | --- | --- |
| Office | 4,905 | 2,420 | 53,258 | 99.55% |
| Toys | 19,412 | 11,924 | 167,597 | 99.93% |
| Beauty | 22,363 | 12,101 | 198,502 | 99.93% |
| Sports | 35,598 | 18,357 | 296,337 | 99.95% |
| Clothing | 39,387 | 23,033 | 278,677 | 99.97% |

Table 3. Statistics of the tested datasets.

### Evaluation metrics

To conduct the benchmarking analysis, we measure the recommendation performance through accuracy and beyond-accuracy metrics. While the former are commonly adopted measures in the related literature on multimedia recommendation, the latter have been brought to the attention of the community only in recent works, especially to assess the novelty and diversity of the recommended items (Kang et al., 2018), or how multimedia recommender systems are biased towards suggesting popular items (Kang et al., 2018). For recommendation accuracy, we consider the Recall@\(k\) and the nDCG@\(k\); for novelty (Kang et al., 2018) and diversity (Kang et al., 2018), we measure the EFD@\(k\) and the Gini@\(k\), respectively; for popularity bias (Beng et al., 2017), we calculate the APLT@\(k\); finally, as a general index of how recommendations cover the entire catalog of products, we adopt the iCov@\(k\). In the following, we present each of these metrics along with their mathematical formulations.

#### 5.3.1. Recall

The recall (Recall) evaluates to what extent the recommender system can retrieve relevant items from the recommendation list:

\[\text{Recall@}k=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{|\text{Rel}_{u}\,@k|}{|\text{Rel}_{u}|}, \tag{12}\]

with \(\text{Rel}_{u}\) as the set of relevant items for user \(u\), and \(\text{Rel}_{u}\,@k\) as the set of relevant recommended items in the top@\(k\).
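For illustration, Recall@\(k\) can be computed directly from the recommendation lists; the following sketch (ours, with assumed data structures, averaging over users with at least one relevant item) mirrors Equation (12):

```python
# Our illustrative computation of Recall@k (Eq. 12) from recommendation lists;
# the dict-of-lists/sets data structures are assumptions of this sketch.
def recall_at_k(recs: dict, relevant: dict, k: int) -> float:
    """recs[u]: ranked item list for user u; relevant[u]: set of relevant items."""
    users = [u for u in relevant if relevant[u]]
    return sum(
        len(set(recs[u][:k]) & relevant[u]) / len(relevant[u]) for u in users
    ) / len(users)

# Example: one user, 2 of 4 relevant items retrieved in the top-4.
print(recall_at_k({"u1": ["a", "b", "c", "d"]}, {"u1": {"a", "d", "x", "y"}}, k=4))  # 0.5
```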
#### 5.3.2. Normalized discounted cumulative gain

The normalized discounted cumulative gain (nDCG) measures both the relevance and the ranking position of items in the recommendation lists, considering their various relevance degrees:

\[\text{nDCG@}k=\frac{1}{|\mathcal{U}|}\sum_{u}\frac{\text{DCG}_{u}\,@k}{\text{IDCG}_{u}\,@k}, \tag{13}\]

where \(\text{DCG@}k=\sum_{i=1}^{k}\frac{2^{rel_{u,i}}-1}{\log_{2}(i+1)}\) is the cumulative gain of relevance scores in the recommendation list, with \(rel_{u,i}\) as the relevance of the item at position \(i\) for user \(u\), and IDCG is the cumulative gain of relevance scores for an ideal recommender system.

#### 5.3.3. Expected free discovery

The expected free discovery (EFD), as introduced in (Kang et al., 2018), is a metric accounting for novelty in recommendation that utilizes the _inverse collection frequency_. Specifically, it quantifies how well a recommender system can suggest relevant _long-tail_ items (i.e., niche products):

\[\text{EFD@}k=C\sum_{i_{k}\in R}\text{disc}(k)P\left(\text{Rel}_{u}\,@k\mid i_{k},u\right)\cdot\left(-\log_{2}p(i_{k}\mid\text{seen},\theta)\right). \tag{14}\]

| **Models** | **Year** | **Venue** | **Baseline in** |
| --- | --- | --- | --- |
| VBPR (Kang et al., 2018) | 2016 | AAAI | (Kang et al., 2018; Kang et al., 2018; Kang et al., 2018; Kang et al., 2018; Kang et al., 2018) |
| MMGCN (Kang et al., 2018) | 2019 | MM | (Kang et al., 2018; Kang et al., 2018; Kang et al., 2018; Kang et al., 2018; Kang et al., 2018) |
| GRCN (Kang et al., 2018) | 2020 | MM | (Kang et al., 2018; Kang et al., 2018; Kang et al., 2018; Kang et al., 2018) |
| LATTICE (Kang et al., 2018) | 2021 | MM | (Kang et al., 2018; Kang et al., 2018; Kang et al., 2018; Kang et al., 2018) |
| BM3 (Kang et al., 2018) | 2023 | WWW | (Kang et al., 2018; Kang et al., 2018; Kang et al., 2018) |
| FREEDOM (Kang et al., 2018) | 2023 | MM | (Kang et al., 2018) |

Table 4. An overview of the selected multimedia recommender systems, along with their publication venue and year, and a non-exhaustive set of papers where they are used as baselines.

#### 5.3.4. Gini index

The Gini index (Gini) indicates the disparity in items' popularity when considering the recommendation lists for each user. In detail, it provides a measurement of how certain items are consistently favored by a large portion of users with respect to other items. For the sake of this analysis, we use a normalized version of Gini, according to which high values stand for a wide set of items being suggested to users, and low values indicate that a few popular items are generally recommended, leading to a less diverse recommendation experience (Wang et al., 2017). Its (normalized) formulation is:

\[\text{Gini@}k=1-\left(\frac{\sum_{i=1}^{|\mathcal{I}|}(2i-|\mathcal{I}|-1)P|_{@k}(i)}{|\mathcal{I}|\sum_{i=1}^{|\mathcal{I}|}P|_{@k}(i)}\right), \tag{15}\]

where \(P|_{@k}(i)\) represents the popularity of item \(i\) in the top@\(k\) recommendation lists, sorted in non-decreasing order (i.e., \(P|_{@k}(i)\leq P|_{@k}(i+1)\)).

#### 5.3.5. Average percentage of long-tail items

The average percentage of long-tail items (APLT) assesses the presence of popularity bias in recommendation (Beng et al., 2017).
When referring to popularity bias, we indicate the tendency of recommender systems to boost the recommendation of popular/mainstream items (i.e., the _short-head_) to the detriment of unpopular/niche products (i.e., the _long-tail_), thus limiting the exposure of certain item categories to users. The APLT calculates the percentage of _long-tail_ items belonging to the recommendation lists (averaged over all users):

\[\text{APLT@}k=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{|\{i\mid i\in(\hat{\mathcal{I}}_{u}\text{@}k\cap\sim\Phi)\}|}{k}, \tag{16}\]

where \(\hat{\mathcal{I}}_{u}\text{@}k\) is the list of top@\(k\) recommended items for user \(u\), and \(\Phi\) is the set of items belonging to the _short-head_ distribution, while \(\sim\Phi\) stands for the remaining _long-tail_ items.

#### 5.3.6. Item coverage

The item coverage (iCov) is a percentage estimating how well the recommended items span the entire catalog of products in the recommendation system:

\[\text{iCov@}k=\frac{|\bigcup_{u}\hat{\mathcal{I}}_{u}\text{@}k|}{|\mathcal{I}_{train}|}\%, \tag{17}\]

where \(\mathcal{I}_{train}\) is the set of items in the training set.

### Reproducibility

First, we pre-process the datasets following the 5-core filtering on users and items to remove cold-start users and items, as done in (Kang et al., 2017). Second, we split them according to the 80:20 hold-out strategy for the training and test sets, where the former and the latter contain 80% and 20% of the interactions recorded for each user, respectively. Then, we train the recommendation models so that the number of epochs (i.e., 200) and the batch size (i.e., 1024) are the same for all of them to ensure a fair comparison. As for the other models' hyper-parameters, we follow a grid search strategy with 10 explorations comprising both the learning rate and the regularization coefficients, and fix the remaining (model-specific) hyper-parameters to the best values according to the original papers and/or codes. Finally, to select the best configuration for each model and dataset, we hold out 50% of the test set as the validation set (following again (Kang et al., 2017)), and select the hyper-parameter setting providing the highest Recall@20 value on the validation data measured at a specific epoch (maximum 200 epochs). To foster the reproducibility of the proposed benchmarks, we provide the codes, datasets, and configuration files to replicate our results at: [https://github.com/sisinflab/Formal-MultiMod-Rec](https://github.com/sisinflab/Formal-MultiMod-Rec), where we integrated the selected multimedia recommender systems into Elliot (Beng et al., 2017).
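For concreteness, the following pandas sketch (ours; the column names, and the per-user ordering used for the split, are assumptions of the sketch rather than details taken from the protocol above) mirrors the described pre-processing:

```python
# Our sketch of the pre-processing protocol described above: 5-core filtering
# on users and items, followed by a per-user 80:20 hold-out split.
import pandas as pd

def five_core(df: pd.DataFrame) -> pd.DataFrame:
    """Iteratively drop users and items with fewer than 5 interactions."""
    while True:
        before = len(df)
        df = df[df.groupby("user")["item"].transform("size") >= 5]
        df = df[df.groupby("item")["user"].transform("size") >= 5]
        if len(df) == before:
            return df

def holdout_split(df: pd.DataFrame, ratio: float = 0.8):
    """Per-user split: we take the first 80% of each user's rows for training
    (whether ordering is temporal or random is an assumption of this sketch)."""
    train = df.groupby("user", group_keys=False).apply(
        lambda g: g.head(int(len(g) * ratio)))
    test = df.drop(train.index)
    return train, test
```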
### Benchmarking results

Table 5 reports the results of the extensive benchmarking analysis we conduct on the selected datasets and state-of-the-art multimedia recommendation systems. The calculated metrics involve both accuracy (i.e., Recall and nDCG) and beyond-accuracy (i.e., EFD, Gini, APLT, and iCov) measures when considering top@10 and top@20 recommendation lists. Based on how we defined all the recommendation metrics, higher values indicate better performance. In terms of **accuracy** performance, we observe that one of LATTICE, BM3, and FREEDOM is steadily among the two best recommendation models, which is something that also emerges from the related literature. This holds across all datasets and top@\(k\) under analysis. Nevertheless, we notice an interesting trend when considering VBPR. Indeed, we observe that this model is almost always among the top-3 recommendation techniques despite being one of the shallowest approaches compared to other more recent and complex models. As already stated in recent works (Yuan et al., 2019; Zhang et al., 2020), this finding demonstrates how even a not-so-deep, but still careful, hyper-parameter exploration (such as the one we performed) may help uncover unexpected results with respect to what is described in the literature. However, the most surprising behavior involves the analysis of the **beyond-accuracy** performance. While several works depict the most recent approaches in multimedia recommendation (i.e., LATTICE, BM3, and FREEDOM) as dominating on the accuracy level, the same does not hold for the other metrics accounting for novelty, diversity, and popularity bias. Indeed, the only observable trend in this setting is that GRCN and VBPR steadily settle as the best-performing algorithms. In particular, it is worth pointing out that both approaches can strike a satisfactory trade-off between accuracy and beyond-accuracy measures, where VBPR can even reach quite high beyond-accuracy performance without giving up much accuracy. Once again, such observations corroborate what has recently been pointed out in similar works (Yuan et al., 2019; Zhang et al., 2020), extending the analysis to additional datasets and multimedia recommendation systems. To conclude, the proposed benchmarking study indicates that training and evaluating multimedia recommender systems remains an open challenge in the literature, paving the way for more rigorous and careful analyses to be conducted in future work (see Section 7.4).

## 6. Technical challenges (RQ5)

This section aims to overview the main technical challenges we recognize in multimodal approaches for multimedia recommendation. Starting from the schema we presented in the previous sections, we outline the evident (or even less evident) issues emerging from the literature.

### Missing modalities in the input data

Describing data under the lens of multimodality may be a double-edged sword. From one perspective, multimodality helps enrich the informative content carried by the input, thus exploiting data's multi-faceted nature to learn better-tailored user-item preference patterns (Yuan et al., 2019). On the other side, the need to provide descriptive content for every input modality may clash with the reality of missing modalities (e.g., a video dataset could include videos with no textual content, as subtitles or descriptions may only sometimes be available). Tackling modality misalignment in the data is a recent and widely discussed challenge in other domains (Yuan et al., 2019; Zhang et al., 2020; Zhang et al., 2020; Zhang et al., 2020), and requires ad-hoc techniques to provide equal representation of all involved modalities to fully exploit their informative richness (Yuan et al., 2019; Zhang et al., 2020). Nevertheless, to the best of our knowledge, the issue remains open in recommendation.

### Pre-trained feature extractors

Deep learning models processing images, texts, or audio have been shown to enrich the informative content of items' profiles in several recommendation algorithms. In most solutions, such architectures are used as pre-trained blocks to extract high-level features from the input data, thus exploiting the capability of deep neural networks to transfer knowledge among different datasets and/or tasks.
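As an illustration of this common pattern (our sketch, not tied to any specific paper in the review; the torchvision weights enum requires torchvision ≥ 0.13, and the layer choice is one possible configuration), ImageNet-trained ResNet50 activations can be taken from the penultimate layer as item features:

```python
# Our illustrative sketch of the "pre-trained block" pattern discussed above:
# penultimate-layer activations of an ImageNet-trained ResNet50 serve as
# high-level visual item features (preprocessing and layer choice vary per paper).
import torch
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop classifier
extractor.eval()

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)   # a batch of preprocessed product images
    feats = extractor(images).flatten(1)   # (4, 2048) high-level visual features
```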
Despite the ease of adopting ready-to-use feature extraction networks, we seek to underline a conceptual limitation that, to the best of our knowledge, is only partially investigated in the literature. Indeed, pre-trained representations extracted through state-of-the-art deep learning models are not necessarily supposed to capture those semantic features, which will likely captivate users for their final decision-making process. As an example, the embedding feature extracted from a product image (e.g., a bag) through a pre-trained deep convolutional network (e.g., ResNet50) is carrying high-level informative content driven by the task of _image classification_, but this does not mean the same knowledge will be helpful to predict whether the product could be _recommended_ to a user. \begin{table} \begin{tabular}{c l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Datasets**} & \multirow{2}{*}{**Models**} & \multicolumn{6}{c}{top@10} & \multicolumn{6}{c}{top@20} \\ \cline{3-13} & & \multicolumn{2}{c}{**Accuracy**} & \multicolumn{2}{c}{**Beyond-accuracy**} & \multicolumn{2}{c}{**Accuracy**} & \multicolumn{2}{c}{**Beyond-accuracy**} \\ \cline{3-13} & & **Recall** & **nDCG** & **EFD** & **Gini** & **APLT** & **iCov** & **Recall** & **nDCG** & **EFD** & **Gini** & **APLT** & **iCov** \\ \hline \multirow{4}{*}{**Office**} & VBPR & 0.0652 & 0.0419 & 0.1753 & 0.3634 & 0.2321 & 93.833 & 0.1025 & 0.0533 & 0.1479 & 0.3960 & 0.2375 & 97.51\% \\ & MMGCN & 0.0455 & 0.0300 & 0.1140 & 0.0128 & 0.0016 & 3.07\% & 0.0798 & 0.0405 & 0.1027 & 0.0231 & 0.0078 & 4.64\% \\ & GRCN & 0.0393 & 0.0253 & 0.1215 & **0.4587** & **0.3438** & **99.01** & 0.0667 & 0.0339 & 0.1051 & **0.4892** & **0.3469** & **99.79\%** \\ & LATTICE & 0.0664 & 0.0449 & 0.1827 & 0.2128 & 0.1752 & 87.86\% & 0.1029 & 0.0566 & 0.1513 & 0.2652 & 0.2039 & 95.90\% \\ & BM3 & **0.0701** & **0.0460** & **0.1837** & 0.1407 & 0.1427 & 77.13\% & **0.1081** & **0.0583** & **0.1550** & 0.1900 & 0.1715 & 91.55\% \\ & FREEDOM & 0.0560 & 0.0365 & 0.1493 & 0.1922 & 0.1875 & 79.12\% & 0.0884 & 0.0469 & 0.1282 & 0.2439 & 0.2080 & 90.64\% \\ \hline \multirow{4}{*}{**Toys**} & VBPR & 0.0710 & 0.0458 & 0.1948 & 0.2645 & 0.1064 & 84.90\% & 0.1006 & 0.0545 & 0.1527 & 0.3011 & 0.1180 & 92.82\% \\ & MMGCN & 0.0256 & 0.0150 & 0.0648 & 0.0989 & 0.0961 & 37.87\% & 0.0426 & 0.0200 & 0.0570 & 0.1450 & 0.1058 & 52.51\% \\ & GRCN & 0.0554 & 0.0354 & 0.1604 & **0.3954** & **0.2368** & **92.665** & 0.0831 & 0.0436 & 0.1298 & **0.4329** & **0.2482** & **97.73\%** \\ & LATTICE & 0.0805 & 0.0512 & 0.2090 & 0.1656 & 0.0546 & 73.80\% & 0.1165 & 0.0617 & 0.1665 & 0.2026 & 0.0684 & 86.58\% \\ & BM3 & 0.0613 & 0.0393 & 0.1582 & 0.0776 & 0.0486 & 56.23\% & 0.0901 & 0.0478 & 0.1270 & 0.1154 & 0.0658 & 73.50\% \\ & FREEDOM & **0.0870** & **0.0548** & **0.2284** & 0.1474 & 0.0756 & 62.09\% & **0.1249** & **0.0660** & **0.1820** & 0.2007 & 0.0951 & 78.42\% \\ \hline \multirow{4}{*}{**Beauty**} & VBPR & 0.0760 & 0.0483 & 0.2119 & 0.2076 & 0.0833 & **53.065** & 0.1102 & 0.0586 & 0.1700 & 0.2376 & 0.0915 & 91.41\% \\ & MMGCN & 0.0496 & 0.0294 & 0.1300 & 0.0252 & 0.0282 & 13.75\% & 0.0772 & 0.0379 & 0.1105 & 0.0423 & 0.0345 & 21.37\% \\ & GRCN & 0.0575 & 0.0370 & 0.1817 & **0.3823** & **0.2497** & **94.59** & 0.0892 & 0.0466 & 0.1498 & **0.4178** & **0.2608** & **98.56\%** \\ & LATTICE & **0.0867** & **0.0544** & 0.2272 & 0.1153 & 0.0386 & 65.82\% & 0.1259 & 0.0661 & 0.1830 & 0.1558 & 0.0511 & 81.60\% \\ & BM3 & 0.0713 & 0.0443 & 0.1831 & 0.0245 & 0.0179 & 32.31\% & 0.1051 & 0.0545 & 
0.1490 & 0.0414 & 0.0228 & 48.75\% \\ & FREEDOM & 0.0864 & 0.0539 & **0.2279** & 0.0921 & 0.0486 & 55.89\% & **0.1286** & **0.0666** & **0.1868** & 0.1359 & 0.0653 & 72.96\% \\ \hline \multirow{6}{*}{**Sports**} & VBPR & 0.0450 & 0.0281 & 0.1167 & 0.1501 & 0.0497 & 75.77\% & 0.0677 & 0.0349 & 0.0949 & 0.1722 & 0.0552 & 86.54\% \\ & MMGCN & 0.0342 & 0.0207 & 0.0791 & 0.0095 & 0.0046 & 5.10\% & 0.0551 & 0.0269 & 0.0678 & 0.0168 & 0.0065 & 8.39\% \\ & GRCN & 0.0330 & 0.0202 & 0.0885 & **0.3087** & **0.2190** & **91.28\%** & 0.0523 & 0.0259 & 0.0746 & **0.3386** & **0.2273** & **97.09\%** \\ & LATTICE & **0.0610** & 0.0372 & 0.1465 & 0.0573 & 0.0129 & 48.44\% & 0.0898 & 0.0456 & 0.1185 & 0.0802 & 0.0185 & 64.90\% \\ & BM3 & 0.0548 & 0.0349 & 0.1372 & 0.0776 & 0.0283 & 59.13\% & 0.0825 & 0.0430 & 0.1118 & 0.1120 & 0.0385 & 76.75\% \\ & FREEDOM & 0.0003 & **0.0375** & **0.1494** & 0.0621 & 0.0319 & 48.37\% & **0.0911** & **0.0465** & **0.1219** & 0.0926 & 0.0442 & 65.81\% \\ \hline \multirow{6}{*}{**Clothing**} & VBPR & 0.0339 & 0.0181 & 0.0502 & 0.2437 & 0.0809 & \\ \hline \hline \end{tabular} \end{table}

### Modalities representation

The multimodal representation of the extracted input data is among the main stages in the multimodal schema we described, since it establishes the relationships for the selected input modalities. Nonetheless, the literature is not generally aligned on its definition, since most works refer to _Joint_ representation and _Early_ fusion interchangeably. We recognize this as a conceptual issue because the two stages (i.e., representation and fusion) should be considered separately. We maintain that the former stands for the initial step of setting interconnections among early-extracted multimodal features, while the latter, despite dealing again with modality relationships, involves features that have been further processed towards the task optimization (i.e., recommendation in our case), thus embodying different rationales and techniques. Furthermore, the related literature suggests two possible solutions to multimodal representation, either _Joint_ or _Coordinate_, where the latter additionally requires the subsequent fusion step. However, the advantages of each paradigm, and whether they depend on the task, remain under investigation.

### Multimodal-aware fusion and optimization

While multimodal representation builds on input modalities in the early stages of the schema, multimodal fusion accounts for multimodal features that have already been processed, with a specific focus on the last steps (i.e., inference and model optimization). As observed above, multimodal fusion may come in the form of _Early_ or _Late_ fusion. The significant difference between the two approaches lies in whether modality separation is preserved during inference (i.e., _Late_ and _Early_, respectively). The literature demonstrates the vast predominance of _Early_ solutions, whereas several works refer to _Late_ fusion while actually describing _Early_ fusion. Indeed, providing a precise definition for the two is of paramount importance because the two approaches may serve different purposes. The rationale of _Late_ fusion is to keep the modality separation explicit during the inference phase so that the contribution of each modality is observable up to the last operation. Moreover, the literature is not aligned on the operation used to fuse modalities.
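To make the distinction concrete, below is a minimal sketch contrasting a non-trainable fusion operator with a trainable one; the embedding dimensionality and tensor names are illustrative assumptions, not taken from any specific model discussed here:

```python
import torch
import torch.nn as nn

# Hypothetical per-item unimodal embeddings (visual and textual), already
# projected to a common 64-dimensional space; batch of 128 items.
visual = torch.randn(128, 64)
textual = torch.randn(128, 64)

# Non-trainable fusion: a fixed element-wise operator (here, addition).
fused_fixed = visual + textual

# Trainable fusion: a small network learns how to combine the modalities,
# potentially tailoring the fused representation to preference prediction.
fusion_mlp = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))
fused_learned = fusion_mlp(torch.cat([visual, textual], dim=-1))
```

The first variant adds no parameters and no training cost; the second introduces parameters that are optimized jointly with the recommendation objective.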
Non-trainable fusion functions (e.g., element-wise addition) are usually the preferred direction, as they are more lightweight and easier to apply than trainable approaches (e.g., neural networks), which in turn may allow the model to better tailor user-item preference prediction.

## 7. Future directions (RQ5 - Extension)

The scope of this section is to outline possible future directions for the application of multimodality to multimedia recommendation. While some of the presented solutions may apply to the above-raised challenges, we also discuss different research paths we suggest following in future work.

### Domain-specific multimodal features

Given the limitations imposed by the adoption of pre-trained multimodal features (see again Section 6.2), we wish to underline the benefits of _domain-specific_ features in the multimodal schema we have outlined. Extracting such high-level features from input data would entail injecting meaningful and task-aware informative content into the recommendation system, thereby better profiling items and users on the platform to generate more tailored recommendations. Domain-specific features would necessitate domain-specific extraction models, previously trained and optimized on tasks similar to the one being pursued. Regarding _fashion_ recommendation, for instance, we recall the work by Ge et al. (Ge et al., 2017), who propose a pre-trained architecture for the comprehensive visual analysis of clothing photos. Another example is the _food_ recommendation system proposed by Yang et al. (Yang et al., 2017), which analyzes food-related photos. Furthermore, in the field of _audio_ and _text_ understanding and classification, Choi et al. (2018) construct a deep model based on convolutional recurrent neural networks for music tagging by taking into account songs' local features and temporal characteristics, whereas Barbieri et al. (2018) tackle the task of sentiment analysis in user-generated tweets.

### Multimodality on user-item interactions

Multimodality is the most intuitive approach to describe the multifaceted nature of items in multimedia recommendation (Zhou et al., 2017; Zhou et al., 2017), but this does not hold for users' profiles. First, from a technical point of view, profiling each user through multimodal features (e.g., her voice, her visual appearance) would require sophisticated technologies that users' digital devices (e.g., smartphones) could not necessarily support. Second, from a practical point of view, it is likely that users would not be willing to share such personal data on online platforms, primarily due to privacy concerns. Despite these critical aspects, a few examples from the literature (Zhou et al., 2017; Zhou et al., 2017) propose to model the user profile in such a way that her preferences toward each multimodal aspect of items are made explicit and learned during model training. However, these systems rely solely on the multimodal profiles of the items, disregarding alternative information sources. _Product reviews_, which express opinions and comments about items that have been clicked, watched, or purchased, could be a valuable tool for revealing users' nuanced preferences toward each item in the recommendation system. Existing review-based approaches (Zhou et al., 2017; Zhou et al., 2017) work by integrating reviews as the _textual modality_ to represent _items_.
However, we believe that a more logical and effective way to integrate reviews would be to view them as a medium to represent _user_ preference _over items_, thereby providing additional and complementary preference signals beyond the numeric ratings or implicit feedback typically used to compute recommendations. Such reasoning may be easily generalized to include user-generated data regarding interacted items (such as images or videos of delivered products), which we can characterize as _multimodal feedback_ (see Figure 3). When compared to numerical feedback, which tends to be _atomic_ (single-faceted), multimodal feedback could be considered _composite_, revealing nuances and the user's multi-faceted opinion of the products (Barbieri et al., 2018).

Figure 3. An example of how users generate and upload _multimodal feedback_ about interacted items (e.g., textual reviews, product photos, or even video reviews) on online platforms. Such _user-item_ sources of information may be suitably exploited to better profile users' preferences.

### Fine-grained multimodal features

Multimodality is a way to effectively profile the multi-faceted aspects of items and users' preferences (e.g., I bought this smartphone because its technical _description_ is quite exhaustive and its _display_ amazes me; I like this song since I love the _music_ and the _lyrics_). Nevertheless, analyzing and learning users' tastes at the granularity of whole modalities might not be enough to uncover all aspects underlying every user-item interaction. In contexts where modalities bring a great source of heterogeneous information, finer-grained feature processing could help better unveil hidden facets. For instance, when it comes to the recommendation of fashion items (e.g., dresses, shoes, jewelry), user attention may be captivated by specific item visual characteristics, such as colors, shapes, and particular patterns and motifs (Santos et al., 2017). Similarly, a song involves several features (Santos et al., 2017) (i.e., pitch, rhythm, and dynamics), which could differently influence users' attitudes towards it. Uncovering and understanding details at this finer granularity should be one of the main directions toward novel recommendation approaches for multimedia products and services.

### An extensive and fair evaluation of multimedia recommender systems

To date, very limited effort has been put into the extensive evaluation of multimedia recommender systems. The principal reason is that, apart from some recent frameworks (Santos et al., 2017; Wang et al., 2018) which integrate multimedia recommender systems into their pipelines, each novel multimedia recommender system introduces its own implementation of the proposed approach with different dataset pre-processing solutions, sampling strategies, and evaluation protocols. Indeed, this may undermine the fair comparison of multimedia recommender systems, which cannot benefit from shared and unified training and evaluation frameworks to run rigorous and reproducible experiments as in other recommendation domains and scenarios (Beng et al., 2016; Wang et al., 2018). To this end, we plan to start from the initial benchmarking analysis we proposed in this work to further assess the reproducibility of the tested baselines.
On this basis, the next steps would be to evaluate the recommendation performance under more comprehensive experimental settings involving, for instance, (i) a broader set of pre-trained deep learning models for the extraction of multimodal features; (ii) other multimodal datasets involving all modalities (as our framework offers the possibility to inject visual, textual, and audio features); (iii) a more careful evaluation of such models under beyond-accuracy recommendation metrics (Santos et al., 2017; Wang et al., 2018).

## 8. Conclusion

In this paper, we highlighted the importance of formalizing the multimedia recommendation task under the lens of multimodal deep learning. By recognizing how the main recommendation approaches in the related literature fall into some recurrent strategy patterns, we outlined a unified multimodal schema that, following the established multimodal deep learning pipeline, formalizes the core stages of multimedia recommendation as: (i) multimodal input data, (ii) multimodal feature processing, (iii) multimodal feature fusion, and (iv) the multimodal recommendation task. After that, we applied the proposed schema to four multimedia recommendation scenarios to conceptually validate its rationale, and integrated the same schema into Elliot to benchmark the results of six state-of-the-art multimedia recommender systems. The obtained results, which assess the recommendation performance in terms of accuracy and beyond-accuracy measures, along with the proposed formal schema, gave us the opportunity to highlight technical challenges as well as possible avenues to address them in future work.
2309.10118
Untangling The Relationship Between Power Outage and Population Activity Recovery in Disasters
Despite recognition of the relationship between infrastructure resilience and community recovery, very limited empirical evidence exists regarding the extent to which the disruptions in and restoration of infrastructure services contribute to the speed of community recovery. To address this gap, this study investigates the relationship between community and infrastructure systems in the context of hurricane impacts, focusing on the recovery dynamics of population activity and power infrastructure restoration. Empirical observational data were utilized to analyze the extent of impact, recovery duration, and recovery types of both systems in the aftermath of Hurricane Ida. The study reveals three key findings. First, power outage duration positively correlates with outage extent until a certain impact threshold is reached. Beyond this threshold, restoration time remains relatively stable regardless of outage magnitude. This finding underscores the need to strengthen power infrastructure, particularly in extreme weather conditions, to minimize outage restoration time. Second, power was fully restored in 70\% of affected areas before population activity levels normalized. This finding suggests the role infrastructure functionality plays in post-disaster community recovery. Interestingly, quicker power restoration did not equate to rapid population activity recovery due to other possible factors such as transportation, housing damage, and business interruptions. Finally, if power outages last beyond two weeks, community activity resumes before complete power restoration, indicating adaptability in prolonged outage scenarios. This implies the capacity of communities to adapt to ongoing power outages and continue daily life activities...
Chia-Wei Hsu, Ali Mostafavi
2023-09-18T19:46:36Z
http://arxiv.org/abs/2309.10118v1
# Untangling The Relationship Between Power Outage and Population Activity Recovery in Disasters ###### Abstract Despite recognition of the relationship between infrastructure resilience and community recovery, very limited empirical evidence exists regarding the extent to which the disruptions in and restoration of infrastructure services contribute to the speed of community recovery. To address this gap, this study investigates the relationship between community and infrastructure systems in the context of hurricane impacts, focusing on the recovery dynamics of population activity and power infrastructure restoration. Empirical observational data were utilized to analyze the extent of impact, recovery duration, and recovery types of both systems in the aftermath of Hurricane Ida. The study reveals three key findings. First, power outage duration positively correlates with outage extent until a certain impact threshold is reached. Beyond this threshold, restoration time remains relatively stable regardless of outage magnitude. This finding underscores the need to strengthen power infrastructure, particularly in extreme weather conditions, to minimize outage restoration time. Second, power was fully restored in 70% of affected areas before population activity levels normalized. This finding suggests the role infrastructure functionality plays in post-disaster community recovery. Interestingly, quicker power restoration did not equate to rapid population activity recovery due to other possible factors such as transportation, housing damage, and business interruptions. Finally, if power outages last beyond two weeks, community activity resumes before complete power restoration, indicating adaptability in prolonged outage scenarios. This implies the capacity of communities to adapt to ongoing power outages and continue daily life activities. These findings offer valuable empirical insights into the interaction between human activities and infrastructure systems, such as power outages, during extreme weather events. They also enhance our empirical understanding of how infrastructure resilience influences community recovery. By identifying the critical thresholds for power outage functionality and duration that affect population activity recovery, this study furthers our understanding of how infrastructure performance intertwines with community functioning in extreme weather conditions. Hence, the findings can inform infrastructure operators, emergency managers, and public officials about the significance of resilient infrastructure in life activity recovery of communities when facing extreme weather hazards.

Keywords: infrastructure resilience, community recovery, power outage, human mobility, location-based data, weather hazard

## 1 Introduction

The resilience of infrastructure services is a key aspect of reducing impacts and accelerating community recovery within a disaster response plan (Miles, 2012). Critical infrastructure systems are the bedrock of a functioning society. Resilience refers to the ability of infrastructure services to withstand and quickly recover from disruptions or failures. Among critical infrastructures, power infrastructure may be one of the most important, as it can have a significant impact on demographic, socioeconomic, transportation, and community dimensions (Esmalian, Dong, & Mostafavi, 2020; Ulak, Kocatepe, Konila Sriram, Ozguven, & Arghandeh, 2018).
Power outages caused by natural disasters can disrupt essential services, such as healthcare (Adhikari et al., 2017; Hiete, Merz, & Schultmann, 2011), transportation (Ghorbanzadeh, Koloushani, Ozguven, Vanli, & Arghandeh, 2022), and communication (Pourebrahim, Sultana, Edwards, Gochanour, & Mohanty, 2019). Vulnerable populations such as the elderly, children, and low-income households may be disproportionately affected by power outages (Coleman et al., 2022; Esmalian et al., 2021; Lee, Maron, & Mostafavi, 2022). Additionally, power outages can exacerbate the effects of natural disasters by hindering emergency response and recovery efforts. Despite the recognition of the relationship between infrastructure resilience and community recovery, very limited empirical evidence exists regarding the extent to which the disruptions in and restoration of infrastructure services contribute to the speed of community recovery. To address this gap, in this study, we evaluate the extent of impact and speed of restoration of power infrastructure and its contribution to the speed of population activity recovery (as an important aspect of community recovery) through the use of data collected from the 2021 Hurricane Ida in the United States. Figure 1 shows the overview of the study workflow.

## 2 Literature review

In this section, we examine the existing literature in three related areas to highlight the points of departure for the current study.

### Use of human mobility or population activity analysis during disasters

Understanding human mobility and population activity during disasters can inform effective disaster management and response. Recent studies have utilized social media data, mobile phone data, and other location-based data sources to analyze human mobility during disasters (Hsu, Ho, & Mostafavi, 2022; Yabe, Jones, Rao, Gonzalez, & Ukkusuri, 2022). These studies have shown that population activity can be perturbed at different levels during hazard events (Jiang et al., 2023; Wang, Wang, & Taylor, 2017). Furthermore, human mobility data has revealed disparities in disaster evacuation patterns, with race and wealth playing a significant role (Deng et al., 2021). Roy, Cebrian, and Hasan (2019) quantified human mobility resilience to extreme events using geo-located social media data, emphasizing the importance of understanding mobility patterns during disasters for effective disaster response and recovery. Hsu et al. (2022) also found that human mobility networks manifest dissimilar resilience characteristics at different scales, highlighting the complexity of human mobility during disasters. The review of the literature shows that the extent of impact and recovery of population activities during hazard events can be effectively quantified and analyzed as an important dimension of community recovery analysis (Lee, Chou, & Mostafavi, 2022; Yuan et al., 2022). Hence, in the current study, we utilized human mobility data to evaluate the extent and speed of population activity recovery.

Figure 1: Overview of research framework: Location-based data and power-outage data are analyzed to estimate power outage and population activity as proxies for the performance of infrastructure systems and communities under impact (of Hurricane Ida, in this study).
Features, including the extent of impact, recovery/restoration duration, and recovery rate, are extracted from the resilience curves constructed for power outage and population activity; the recovery lag and recovery type (i.e., whether the infrastructure system or the community recovers first) can be obtained by comparing the recovery durations between the resilience curves. The distribution of the features collected, the association between features, and the possible threshold that governs the recovery types are examined in the analysis.

### Importance and impact of power outage on communities during hazard events

Power outages during hazard events have significant consequences on communities, affecting a wide range of services and sectors, including healthcare, transportation, and communication (Munzberg, Wiens, & Schultmann, 2017). A study by Coleman et al. (2023) highlights the presence of energy inequality in climate hazards based upon the social and spatial disparities in managed and hazard-induced power outages. This energy inequality can disproportionately affect vulnerable communities, exacerbating the difficulties faced during hazard events (Lee, Maron, & Mostafavi, 2022; Moreno & Shaw, 2019). Similarly, Li, Ma, and Cao (2020) leveraged social media data to study community resilience in New York City during the 2019 power outage. These studies highlight how communities would be affected by power outages, regional and population group differences, and the need for preparedness to minimize negative impacts. Despite recognition of the importance of resilient power infrastructure, very little empirical evidence exists regarding the relationship between the extent of power outages, the speed of power restoration, and community recovery.

### Relationship between infrastructure resilience and community recovery

Infrastructure resilience plays a crucial role in community recovery following disasters (Miles, 2012). Comes and Walle (2014) showcased the importance of resilient infrastructure for minimizing the negative effects of disasters on communities by investigating the impact of Hurricane Sandy on critical infrastructure systems. Yu and Baroud (2019) demonstrated the link between infrastructure resilience and community recovery using hierarchical Bayesian kernel methods in a case study on recovery from power outages. A comprehensive review by Koliou et al. (2020) on the state of research in community resilience highlighted the progress and challenges in understanding the relationship between infrastructure resilience and community recovery. The authors emphasized the need for further research to better understand how to enhance community resilience through improvements in infrastructure systems. Yabe (2021) used large-scale mobility data to model recovery dynamics of interdependent urban socio-physical systems, providing valuable insights into the resilience of communities in the face of disasters. While these studies emphasize the importance of infrastructure resilience for community recovery and present analytical methods for analyzing the interplay between infrastructure systems and community functionality, limited empirical understanding exists regarding the relationship between the extent and duration of power outages and the trajectory of population activity recovery in extreme events.

## 3 Study Context

On August 29, 2021, Hurricane Ida made landfall near Port Fourchon, Louisiana, as a Category 4 storm with sustained winds of 150 mph.
Hurricane Ida was one of the deadliest U.S. hurricanes, causing an estimated $75 billion in damage, making it the fourth costliest Atlantic hurricane to make landfall in the United States. With storm surges and intense rainfall and wind damage, power outages persisted for weeks after the hurricane, and 91 human fatalities were recorded in Louisiana (Hanchey et al., 2021). The coastal region of Louisiana was particularly affected, with more than 22,000 power poles, 26,000 wire spans, and 5,000 transformers knocked over, broken, or destroyed. Hospitals struggled to provide services with backup generators, and residents suffered from the summer heat. We collected data related to power outages and population activities in eight parishes in the New Orleans metropolitan area most affected by the hurricane: Jefferson, Orleans, Plaquemines, St. Bernard, St. Charles, St. James, St. John the Baptist, and St. Tammany. In Louisiana, parishes are the jurisdictional entity corresponding to counties in other states. Within this area, we collected power outage data and location-based mobility data for further analyses.

## 4 Data Description

### Power outage data

Real-time power outage data from Hurricane Ida was gathered from PowerOutage.US and the Entergy power company website, with a focus on outages caused by Entergy, the primary energy provider for the affected areas in coastal Louisiana, serving 1,275,873 residents. Data collection included the total number of customers impacted and the last updated time, from August 29 through November 23, 2021. The frequency of data collection varied, with hourly recordings from 8:00 a.m. to 12:00 a.m. from August 29 to September 3; recordings every three hours from September 4 to October 25; and recordings at only 10:00 a.m., 4:00 p.m., and 9:00 p.m. from October 26 through November 23, due to the decrease in the number of noticeable outages. The percentage of outages at each Zip code was calculated by dividing the number of impacted customers, obtained from the Entergy website, by the total population at each Zip code. It is worth noting that the Entergy website does not differentiate between commercial and residential customers. As the researchers could not obtain the total number of customers per Zip code, the outage data was normalized by the total population to prevent inaccurately inferring a higher number of outages in areas with larger populations. To further validate the power data from the Entergy website, we compared it to the percentage of affected customers at the county level using data from PowerOutage.US. The county-level data from PowerOutage.US matched the Entergy data, with both showing similar patterns for changes in affected customers on the same day. We merged the Entergy data, which only reported Zip codes with at least one affected customer, with all Zip codes of the relevant parishes. We considered only parishes where Entergy supplied at least 90% of the total accounted customers according to PowerOutage.US.

### Human mobility data

This study utilized a GPS location dataset obtained by Spectus from smartphones. Spectus gathers large-scale anonymous location information from almost 70 million mobile devices in the United States through a compliant framework when users opt in to location services of partner apps. Users have the option to opt out at any time. Spectus works with more than 220 mobile apps that use its proprietary software development kits.
Spectus collects data from about one in four smartphones in the United States, representing nearly 20% of the population. The data is collected through partner apps and utilizes the internal GPS hardware of devices. GPS sensor log data has been used in the past for studying human mobility and travel mode detection due to its high spatiotemporal resolution. Spectus de-identifies data and applies additional privacy measures, such as obscuring home locations at the census-block-group level and removing sensitive points of interest. The device-level data includes anonymized individual device ID, location, and time. Spectus provides access to its own data, tools, and location-based datasets from other providers through a data cleanroom environment for privacy-preserving analyses of multiple geospatial data assets. The data collected by Spectus is transformed into daily trip counts between origins and destinations. The initial data included only anonymized device ID, coordinates, and visit times. The locations visited by each device are determined by identifying the census tract polygon each visit falls into. The device's movement, or trajectory, is then determined based on the visit times. Daily trip counts between census tracts are determined by aggregating on a census-tract level. To match the spatial resolution of the power outage data, daily trip counts are further aggregated to the Zip code level based on the HUD-USPS ZIP Code Crosswalk data provided by HUD's Office of Policy Development and Research (PD&R). This file allocates census tracts to Zip codes. Using daily trip counts, we created a human mobility network for further analysis. In this study, we are interested in the fluctuation of total mobility flow, so we aggregated inflows and outflows. Sometimes the inflow and outflow of an origin and destination have different meanings; however, we found that separating inflow and outflow within our period of interest did not yield additional information, because they are proportional to the total flow and they have the same flow pattern. The baseline period was August 19 through August 28, 2021. This period is considered a normal period without any perturbations caused by the hurricane. The mobility flow within this period is viewed as baseline performance. Comparing the mobility flow during the hurricane-perturbed period with the baseline performance gives us the resilience curve.

## 5 Methodology

The aim of this study is to examine the association between power outage extent and restoration duration and population activity recovery, to address the following research questions: (1) To what extent does the magnitude of impact and speed of restoration in power infrastructure contribute to the extent and speed of population activity recovery? (2) What characteristics/features could explain variations in the relationship between the extent of impact and restoration speed of power infrastructure and population activity recovery? In the analysis, we calculated metrics of the extent of impact, recovery (restoration) duration, and recovery (restoration) rate for both power infrastructure and population activity in each spatial area. Another metric examined is the recovery lag, defined as the difference between the recovery (restoration) durations of the two systems (i.e., the time between full power restoration and full population activity recovery). The metrics examined in this study are summarized in Table 1.
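To illustrate how these metrics can be read off a resilience curve, the following is a minimal sketch; it is not the authors' code, and the pandas-based representation and the simple first-return-to-baseline rule are assumptions:

```python
import pandas as pd

def resilience_metrics(perf: pd.Series) -> dict:
    """Extract impact and recovery metrics from one resilience curve.

    `perf` holds daily performance as a fraction of baseline (1.0 = 100%),
    indexed by a pd.DatetimeIndex, for a single Zip code (either population
    activity or power infrastructure).
    """
    impact_day = perf.idxmin()               # day of maximum performance loss
    extent_of_impact = 1.0 - perf.min()      # extent of impact
    # first day at or after the minimum on which performance is back at baseline
    back = perf.loc[impact_day:] >= 1.0
    recovery_day = back.idxmax() if back.any() else perf.index[-1]
    duration = (recovery_day - impact_day).days   # recovery (restoration) duration
    rate = extent_of_impact / max(duration, 1)    # recovery (restoration) rate per day
    return {"extent_of_impact": extent_of_impact,
            "recovery_duration": duration,
            "recovery_rate": rate}

# Recovery lag for an area = power restoration duration minus population
# activity recovery duration; its sign shows which system fully recovered first.
```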
To extract the metrics, we obtain from the raw data the resilience curves that show the relationship between a system's performance and the magnitude of the disturbance or stress it is subjected to (Hillebrand et al., 2018; Poulin and Kane, 2021). The extent of impact on population activity can be inferred from mobility flow on a certain day compared to the baseline value of mobility flow during the unperturbed period before Hurricane Ida. We consider the baseline value of mobility flow as full performance (100%). For example, if the value of mobility flow observed on a certain day during Hurricane Ida is 80% of the baseline value of mobility flow, then the performance loss, or impact, is 20%. The impact on the power infrastructure, on the other hand, is determined by the percentage of households experiencing power outages. For example, when 70% of households experienced power outages, the performance loss, or impact, is 70%. By applying the aforementioned definitions and performing the calculations, we transform the raw data into resilience curves for each area at the Zip code level for both power infrastructure and population activity. From the resilience curves, we can find the turning points, as well as global maxima and minima, and thus further calculate the extent of impact, recovery duration, and recovery rate. Combining these features with Zip code geodata provides insight into distributions and some spatial inferences. In the next step, we conduct decision-tree analysis to evaluate the relationship between the extent of power outages and power restoration speed and population activity recovery. The decision-tree analysis includes a grid search to tune hyperparameters, including the maximum depth of the tree, the minimum number of samples required at a leaf node, and the minimum number of samples required to split an internal node. We use GridSearchCV, a powerful tool in the scikit-learn library that allows searching for the best set of hyperparameters for a machine-learning model. It works by training the model multiple times with different combinations of the specified hyperparameters and then evaluating the model's performance using cross-validation. The best set of hyperparameters is then chosen based on the highest performance score. GridSearchCV saves time and effort compared to manually tuning the hyperparameters, and it can improve the performance of our model (Myles, Feudale, Liu, Woody, & Brown, 2004; Song & Lu, 2015). Based on the decision tree and the thresholds that split its branches, we examine whether different extents of power outage and restoration speeds explain variations in the population activity recovery duration across spatial areas.

## 6 Results

### Extent of impact, recovery duration and the recovery types

First, we examine spatial patterns of the extent of impact in population activity and power infrastructure. The indicator here is the maximum performance loss compared to baseline performance. Figures 2A and 2C show the spatial distribution regarding the extent of impact on population activity and power infrastructure. Larger negative values and darker colors represent areas suffering from larger impact, which are mostly coastal areas. Figure 2B compares the difference between the extent of impact on both systems, population activity and power infrastructure. The mean performance losses for power infrastructure and population activity are 48% and 83%, respectively.
\begin{table} \begin{tabular}{l l l} \hline Systems & **Community** & **Infrastructure** \\ \cline{2-3} & _Population activity_ & _Power infrastructure_ \\ \hline Aspects & Extent of impact & Extent of impact \\ & Recovery duration & Restoration duration \\ & Recovery rate & Restoration rate \\ & Recovery lag & Recovery lag \\ \hline \end{tabular} \end{table} Table 1: Metrics calculated for community and infrastructure systems: extent of impact, recovery (restoration) duration, recovery (restoration) rate, and recovery lag.

Figure 2: (a) The spatial distribution regarding the extent of impact on population activity; (b) Comparison between the extent of impact on population activity (orange) and power infrastructure (blue); (c) The spatial distribution regarding the extent of impact on power infrastructure. The average extent of impact on power infrastructure and population activity is 48% and 83%, respectively.

Next, we examine the recovery (restoration) duration for both systems. Figures 3A and 3C show the spatial distribution of the recovery duration for population activity and power infrastructure. Larger values and darker colors represent areas experiencing longer recovery durations, which are mostly coastal areas. Figure 3B compares the distribution of the recovery (restoration) duration for both systems. The distribution for population activity is left-skewed with a peak at around 20 days, while the distribution for power infrastructure recovery has a peak at 10 days. The average recovery duration of population activity and the restoration duration of power infrastructure are 17 days and 12 days, respectively. There is an observable offset of 5 to 10 days between the two distributions. Generally, population activity takes longer than the power infrastructure to fully recover. Anticipating power outages, though, residents might evacuate ahead of the hurricane landfall and return later, after full power restoration. Comparing the recovery durations of the two systems, the results show the precedence relationship between the full recovery of the two systems as well as the recovery lag. For example, if the duration of full power restoration and full population activity recovery for an area is 15 days and 22 days, respectively, we know that full power restoration comes before full population activity recovery for this area and the recovery lag is 7 days (22 minus 15 days). Figure 4A shows the distribution regarding which system reaches full recovery first. Figure 4B shows, for each area, which system reaches full recovery first. Around 70% of the areas reach full power restoration before full population activity recovery. These are the coastal and city center areas.

### Association between the recovery of infrastructure and human systems

A decision (regression) tree is applied to predict the recovery rate of population activity based on features such as the extent of impact, restoration duration, and restoration rate of power infrastructure. Through grid search across the sets of hyperparameters, we obtain the best-performing hyperparameters: the maximum depth of the tree set to 2, the minimum number of samples required at a leaf node set to 2, and the minimum number of samples required to split an internal node set to 2. Figure 5 shows the resulting decision tree.

Figure 4: The distribution regarding which of the systems (population activity and power infrastructure) first reaches full recovery.
Figure 3: (a) The spatial distribution regarding the recovery duration of population activity; (b) Comparison between the recovery duration of population activity (orange) and the restoration duration of power infrastructure (blue); (c) The spatial distribution regarding the restoration duration of power infrastructure. The average recovery duration of population activity and the restoration duration of power infrastructure are 17 days and 12 days, respectively.

At the root node, the decision tree splits into two branches based on the feature power restoration duration. If the power restoration duration is shorter than 14.5 days, the tree follows the left branch. If the power restoration duration is longer than 14.5 days, the tree follows the right branch. The left branch next splits based on the feature power restoration rate. If the power restoration rate is smaller than 18.3% per day, the tree follows the left branch of the split. If the power restoration rate is larger than 18.3% per day, the tree follows the right branch. The pathway results show that if the power restoration duration is less than 14.5 days and the power restoration rate is less than 18.3% per day, the population activity recovery rate would be 4.49% per day. If the power restoration duration is less than 14.5 days and the power restoration rate is greater than 18.3% per day, the population activity recovery rate is 6.88% per day. The right branch next splits based on power restoration duration. If the power restoration duration is between 14.5 and 18.5 days, the population activity recovery rate is 13.87% per day. The results show that, in areas with less than 14 days of power outage duration, in other words, areas with a faster rate of power outage restoration, the speed of population activity recovery is higher. In areas with power outage durations greater than about 14 days, population activity recovery happens prior to full power restoration. We evaluate our decision tree by assessing the importance of each feature and the model's performance in terms of its ability to accurately predict the target variable. The variable power infrastructure restoration duration alone accounts for 98.273% of the feature importance, making it the main feature of the model, while the power infrastructure restoration rate accounts for the remaining 1.727%. The coefficient of determination (R-squared) measures the proportion of the variance in the target variable that is explained by the model, with a value of 1 indicating a perfect fit. The mean squared error (MSE), mean absolute error (MAE), and root mean squared error (RMSE) are measures of the average difference between the predicted and actual values of the target variable, with lower values indicating better performance. Power restoration duration is the most significant feature in explaining the variability in the population activity recovery rate. The R-squared is 0.53, MSE is 7.40, MAE is 1.71, and RMSE is 2.72. Figure 6 visualizes the decision tree (predicting the population activity recovery rate based on the extent of impact, restoration duration, and restoration rate of the power infrastructure) on a two-dimensional plane. The solid line represents the first split, and the dotted lines represent the second split, creating subspaces that group the samples. The colors of the sample points indicate which system first reaches full recovery, and their size represents our target value (population activity recovery rate).
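As a concrete illustration of this tuning and evaluation procedure, here is a minimal sketch using scikit-learn; the per-area feature table, its file name, and its column names are assumptions, not the authors' actual data layout:

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

# Hypothetical per-Zip-code feature table; column names are assumptions.
df = pd.read_csv("zipcode_features.csv")
X = df[["power_impact_extent", "power_restoration_duration",
        "power_restoration_rate"]]
y = df["activity_recovery_rate"]

# Grid over the three hyperparameters named in the methodology.
param_grid = {
    "max_depth": [2, 3, 4, 5],
    "min_samples_leaf": [2, 4, 8],
    "min_samples_split": [2, 4, 8],
}
search = GridSearchCV(DecisionTreeRegressor(random_state=0), param_grid,
                      cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)

best_tree = search.best_estimator_   # e.g., max_depth=2 as reported above
print(search.best_params_)
print(dict(zip(X.columns, best_tree.feature_importances_)))
```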
By interpreting the decision tree, we can conclude that there is an association between infrastructure resilience, measured by the extent of impact, restoration duration, and restoration rate of the power infrastructure, and community recovery, measured by population activity. In the first split, where the restoration duration of power infrastructure is 14.5 days, the two major recovery types (population activity first reaches full recovery versus power infrastructure first reaches full restoration) are separated. When power infrastructure is restored fast enough (duration within 14.5 days), a greater power restoration rate may further contribute to a slightly faster population activity recovery rate. When the power infrastructure restoration duration exceeds the threshold, the association between the power infrastructure restoration rate and the population activity recovery rate becomes insignificant. The power restoration features and population activity recovery features are significantly associated within this threshold; beyond it, they become untangled.

Figure 5: Decision (regression) tree predicting the recovery rate of population activity based on features such as the extent of impact, restoration duration, and restoration rate of the power infrastructure. The bold black font represents the features used to split the branches of the decision tree. The bold blue font represents the thresholds that split the samples into groups. The bold red font represents the predicted population activity recovery rate for each group.

### Thresholds and notable resilience characteristics

In the previous section, the decision tree provided several split rules, or thresholds, that separate the data into groups that might share some characteristics regarding the recovery (restoration) speed. We also know that power infrastructure restoration duration and power infrastructure restoration rate are the more significant features for purposes of separating the data. Therefore, in the analysis presented in this section, we discuss the interplay of these two features with the others. There is a significant positive correlation between the extent of impact on power infrastructure and the restoration duration of power infrastructure (Figure 7). Generally, power infrastructure restoration duration increases when the extent of impact increases. The results also show that when the impact on power infrastructure exceeds a certain level, power restoration duration does not significantly increase. This level of impact can be characterized as a critical function loss level. If the extent of impact reaches this critical function loss level, the duration of restoration does not increase much beyond this point. Figure 8 shows the relationship between the restoration duration of power infrastructure and the recovery lag between the two systems. There is a significant negative relationship between power infrastructure restoration duration and the lag between the full recovery of power infrastructure and population activity. We can clearly see that the recovery lag decreases as the power restoration duration increases. The recovery lag between the two systems is shorter when the restoration duration of power infrastructure is longer (>14.5 days). Population activity fully recovers slightly before the full restoration of power infrastructure.
The recovery lag between the two systems is longer when the restoration duration of power infrastructure is shorter (<=14.5 days), and in those cases power infrastructure sometimes fully restores much earlier than population activity fully recovers. These findings reveal the significance of power system restoration in population activity recovery in the aftermath of extreme weather events.

Figure 6: The two-dimensional visualization of the decision tree (predicting the recovery rate of population activity based on the extent of impact, restoration duration, and restoration rate of power infrastructure). The x-axis represents the power infrastructure restoration duration and the y-axis represents the power infrastructure restoration rate. The solid line represents the first split (threshold: power infrastructure restoration duration = 14.5 days) and the dotted lines represent the second split. The color of the sample points indicates which of the systems first reaches full recovery, and the size of the sample points represents our target value (population activity recovery rate).

## 7 Discussion and Concluding Remarks

This study used empirical observational data to examine the relationship between infrastructure resilience and community recovery. Using metrics related to the extent of impact, recovery duration, and recovery rate of population activity and power infrastructure in the aftermath of Hurricane Ida, we evaluated the extent to which the magnitude of power outages
These findings provide novel empirical evidence regarding the interplay between human systems (i.e., population activities) and infrastructure systems (i.e., power outages) in extreme weather events. Also, the findings provide a better empirical understanding of the relationship between infrastructure resilience and community recovery. Through Figure 8: Relationship between the extent of impact on power infrastructure and the restoration duration of power infrastructure. (a) The result shows that the two features are positively correlated: the greater the extent of outage, longer the restoration duration; (b) The result shows the data separated by threshold: power infrastructure restoration duration = 14.5 days. Full restoration of power infrastructure generally takes longer in areas suffering from greater impacts on power infrastructure. Figure 7: Relationship between the extent of impact on power infrastructure and the restoration duration of power infrastructure. (a) The result shows that the two features are positively correlated: the greater the extent of outage, longer the restoration duration; (b) The result shows the data separated by threshold: power infrastructure restoration duration = 14.5 days. Full restoration of power infrastructure generally takes longer in areas suffering from greater impacts on power infrastructure. characterization of critical functionality threshold for power outages and critical threshold for the duration of outages in explaining variations in population activity recovery, the findings of this study move us closer to better understanding of ways through which infrastructure performance interact with community functioning in extreme weather events. The findings can inform infrastructure owners and operators, emergency managers, and public officials regarding the ways through which resilient infrastructure could shape community recovery and the population's life activities in disasters. Accordingly, in making resilience investments and infrastructure prioritization decision making, the loss of functionality of infrastructure can be better associated with disruptions and recovery of population activities to evaluate more objectively the benefits of such decisions. In this study, the impact of hurricanes on population activity and power infrastructure were analyzed. The extent of impact, recovery duration, and recovery types found that the mean performance loss for power infrastructures and population activity was around 50% and 90%, respectively. Areas suffering from larger impact were mostly coastal areas. The recovery duration for both systems was also analyzed, with the power infrastructure showing a significant peak at 10 days and population activity showing a peak at around 20 days. There is an observable offset of 5 to 10 days between the two distributions. This results in population activity taking longer than power infrastructure to fully recover. To investigate the association between the recovery of the systems, we applied a decision tree. A threshold of 14.5 days of restoration duration of power infrastructure was found to be the most significant. The variable restoration duration of power infrastructure alone accounts for 98.273% of the importance for making predictions being the main feature of the model while restoration rate of power infrastructure account for the balance, 1.727%. 
The decision tree provides good performance in terms of its ability to accurately predict the target variable, with power restoration duration being the most significant feature. When the extent of impact exceeds a certain threshold and restoration is not fast enough, population activity recovery starts regardless. In those areas, power and community recover at the same time, or the community may even recover before power restoration. When the power infrastructure takes more than 14 days to fully restore, people do not continue to wait, but rather resume life activities. In conclusion, this study provides a comprehensive analysis of the impact of hurricanes on population activity and power infrastructure, and of the association between the recovery of the two systems. The results show that power restoration is an important factor contributing to population activity recovery. Faster power restoration helps residents return to normal daily activity more quickly. Population activity is more affected by hurricanes than power infrastructure. The results of this study can be used to improve the resilience of power infrastructure and population activity in the face of hurricanes.

## Acknowledgments

This material is based in part upon work supported by the National Science Foundation under CRISP 2.0 Type 2 No. 1832662 and the Texas A&M University X-Grant 699. The authors also would like to acknowledge the data support from Spectus. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, Texas A&M University, or Spectus.

## Author Contributions

All authors critically revised the manuscript, gave final approval for publication, and agree to be held accountable for the work performed therein. C.H. was the lead Ph.D. student researcher and first author, responsible for supervising data collection, performing the final analysis, and writing the majority of the manuscript. A.M. was the faculty advisor for the project and provided critical feedback on the project development and manuscript.

## Data availability

All data were collected through a CCPA- and GDPR-compliant framework and utilized for research purposes. The data that support the findings of this study are available from Spectus, but restrictions apply to the availability of these data, which were used under license for the current study. The data can be accessed upon request submitted to the providers. The data was shared under a strict contract through Spectus' academic collaborative program, in which they provide access to de-identified and privacy-enhanced mobility data for academic research. All researchers processed and analyzed the data under a non-disclosure agreement and were obligated not to share data further or to attempt to re-identify data.

## Code availability

The code that supports the findings of this study is available from the corresponding author upon request.
2309.16569
Audio-Visual Speaker Verification via Joint Cross-Attention
Speaker verification has been widely explored using speech signals, which has shown significant improvement using deep models. Recently, there has been a surge in exploring faces and voices as they can offer more complementary and comprehensive information than relying only on a single modality of speech signals. Though current methods in the literature on the fusion of faces and voices have shown improvement over that of individual face or voice modalities, the potential of audio-visual fusion is not fully explored for speaker verification. Most of the existing methods based on audio-visual fusion either rely on score-level fusion or simple feature concatenation. In this work, we have explored cross-modal joint attention to fully leverage the inter-modal complementary information and the intra-modal information for speaker verification. Specifically, we estimate the cross-attention weights based on the correlation between the joint feature representation and that of the individual feature representations in order to effectively capture both intra-modal as well as inter-modal relationships among the faces and voices. We have shown that efficiently leveraging the intra- and inter-modal relationships significantly improves the performance of audio-visual fusion for speaker verification. The performance of the proposed approach has been evaluated on the Voxceleb1 dataset. Results show that the proposed approach can significantly outperform the state-of-the-art methods of audio-visual fusion for speaker verification.
R. Gnana Praveen, Jahangir Alam
2023-09-28T16:25:29Z
http://arxiv.org/abs/2309.16569v1
# Audio-Visual Speaker Verification via Joint Cross-Attention ###### Abstract Speaker verification has been widely explored using speech signals, which has shown significant improvement using deep models. Recently, there has been a surge in exploring faces and voices as they can offer more complementary and comprehensive information than relying only on a single modality of speech signals. Though current methods in the literature on the fusion of faces and voices have shown improvement over that of individual face or voice modalities, the potential of audio-visual fusion is not fully explored for speaker verification. Most of the existing methods based on audio-visual fusion either rely on score-level fusion or simple feature concatenation. In this work, we have explored cross-modal joint attention to fully leverage the inter-modal complementary information and the intra-modal information for speaker verification. Specifically, we estimate the cross-attention weights based on the correlation between the joint feature representation and that of the individual feature representations in order to effectively capture both intra-modal as well as inter-modal relationships among the faces and voices. We have shown that efficiently leveraging the intra- and inter-modal relationships significantly improves the performance of audio-visual fusion for speaker verification. The performance of the proposed approach has been evaluated on the Voxceleb1 dataset. Results show that the proposed approach can significantly outperform the state-of-the-art methods of audio-visual fusion for speaker verification.

Keywords: Cross-Attention, Audio-Visual Fusion, Speaker Verification, Joint-Attention.

## 1 Introduction

Speaker verification is the task of verifying the identity of a person, which is primarily carried out using acoustic samples. It has become a key technology for person authentication in various real-world applications such as customer authentication, security applications, etc. [23, 18]. In recent years, the performance of speaker verification has been significantly boosted using deep learning models based on acoustic samples such as x-vector [44], xi-vector [24], and ECAPA-TDNN [13]. However, in a noisy acoustic environment, it would be difficult to distinguish different speakers based only on speech signals. Therefore, other modalities such as face, iris, and fingerprints are also explored for verifying a person's identity. Out of all the modalities, face and voice share a very close association with each other in identifying a person's identity [20]. Authenticating the identity of a person from videos has been widely explored in the literature by relying either on faces [19, 35, 46] or voices [6, 43, 60]. Inspired by the close association between faces and voices, audio-visual (A-V) systems [56, 10, 59, 53] have been proposed for speaker verification. However, effectively leveraging the fusion of voices and faces for speaker verification has not been fully explored in the literature [26, 48]. Face and voice provide diverse and complementary information, which plays a key role in outperforming the individual modalities. Conventionally, A-V fusion can be achieved by three major fusion strategies: feature-level fusion, model-level fusion, and decision-level fusion [55]. Feature-level fusion (or early fusion) is performed by naively concatenating the features of the individual audio and visual modalities, which are then used to predict the final outputs.
Model-level fusion deals with specialized architectures for fusion based on models such as deep networks [57], Hidden Markov Models (HMMs) [58], and kernel methods [8]. In decision-level fusion, the audio and visual modalities are trained independently end-to-end, and then the scores obtained from the individual modalities are fused to obtain the final scores. It requires little training and is easy to implement; however, it neglects the interactions across the modalities and thereby shows limited improvement over the individual performances of faces and voices. Though feature-level (or early) fusion allows the audio and visual modalities to interact with each other at the feature level, it fails to effectively capture the complementary inter-modal and intra-modal relationships among the modalities. Most of the existing approaches for speaker verification based on A-V fusion either fall into the category of decision-level fusion, where fusion is performed at the score level, or early feature-level fusion, which relies on early feature concatenation of the audio and visual modalities. Even though naive feature concatenation or score-level fusion shows improvement in the performance of speaker verification, it does not fully leverage the intra-modal and inter-modal relationships among the audio and visual modalities. In some of the videos, the voices might be corrupted due to background clutter. On the other hand, face images can also be corrupted due to several factors such as occlusion, pose, poor resolution, etc. Intuitively, an ideal strategy of A-V fusion should give more importance to the modality exhibiting more discriminative features, fully exploiting the complementary relationships between the modalities. Recently, attention mechanisms have been explored to focus on the more relevant modalities of the video clips by assigning higher attention weights to the modality exhibiting higher discrimination among the speakers [41]. In this work, we have investigated the prospect of leveraging the complementary relationships among the faces and voices, while still leveraging the intra-modal temporal dynamics within the same modality, to improve the performance of the system over that of the individual audio and visual modalities. Specifically, a joint feature representation is introduced to the joint cross-attentional fusion model along with the feature representations of the individual modalities to simultaneously capture both the intra-modal relationships and the complementary inter-modal relationships. The major contributions of this paper are as follows: * A joint cross-attentional model is explored for an effective fusion of faces (visual) and voices (audio) by leveraging both the intra-modal and inter-modal relationships for speaker verification. * Deploying the joint feature representation also helps to reduce the heterogeneity among the audio and visual features, thereby resulting in better A-V feature representations. * A detailed set of experiments is conducted to show that the proposed approach is able to outperform the state-of-the-art A-V fusion models for speaker verification. ## 2 Related Work ### Audio-Visual Fusion for Speaker Verification Nagrani et al. [31] is one of the early works to investigate the close association of voices and faces and proposed a cross-modal biometric matching system. They have attempted to match a given static face or dynamic video with the corresponding voice and vice-versa.
They have further explored joint embeddings for the task of person verification, where the idea is to detect whether the faces and voices come from the same video or not [30]. Wen et al. [54] also explored a shared representation space for voices and faces and presented a disjoint mapping network for cross-modal biometric matching by mapping the modalities individually to their common covariates. Tao et al. [47] proposed a cross-modal discriminative network based on the faces and voices of a given video. They have also investigated the association of faces and voices, i.e., whether the faces and voices come from the same person or not, and its application to speaker recognition. Another interesting work on cross-modal speaker verification was done by Nawaz et al. [3], where they analyzed the impact of languages on cross-modal biometric matching tasks in the wild. They have shown that both face and speaker verification systems rely on spoken languages, which is caused by the domain shift across different languages. Leda et al. [40] attempted to leverage the complementary information of audio and visual modalities for speaker verification using a multi-view model, which uses a shared classifier to map audio and visual features into the same space. Wang [51] explored various fusion strategies at the feature level and decision level, and showed that high-level features of audio and visual modalities share more semantic information than low-level features, which helps in improving the performance of the system. Chen et al. [7] proposed a co-meta-learning paradigm for learning A-V feature representations in a self-supervised learning framework. In particular, they have leveraged the complementary information among the audio and visual modalities as a supervisory signal to obtain robust A-V feature representations. Meng et al. [26] also proposed a co-learning cross-modal framework, where the features of each modality are obtained by exploiting the knowledge from the other modality using cross-modal boosters in a pseudo-siamese structure. Tao et al. [48] proposed a two-step A-V deep cleansing framework to deal with noisy samples. They have used the audio modality to discriminate the easy and complex samples as a coarse-grained cleansing, which is further refined as a fine-grained cleansing using the visual modality. Unlike prior approaches, we have investigated the prospect of leveraging attention mechanisms to fully exploit the complementary inter-modal and intra-modal relationships among the audio and visual modalities for speaker verification. ### Attention models for Audio-Visual Fusion Attention mechanisms are widely used in the context of multimodal fusion with various modalities such as audio and text [25, 29], visual and text [27, 52], etc. Stefan et al. [17] proposed a multi-scale feature fusion approach to obtain robust A-V feature representations. They have fused the features at intermediate layers of the audio and visual backbones, which are finally combined with the feature vectors of the individual modalities in a shared common space to obtain the final A-V feature representations. Peiwen et al. [45] proposed a novel fusion strategy that involves weight-enhanced attentive statistics pooling for both modalities, which exhibit a strong correlation with each other. They further obtain keyframes in both modalities using a cycle consistency loss along with a gated attention mechanism to obtain robust A-V embeddings for speaker verification.
Shon et al. [41] explored an attention mechanism to conditionally select the relevant modality in order to deal with noisy modalities. They have leveraged the complementary information among the audio and visual modalities by assigning higher attention weights to the modality exhibiting higher discrimination for speaker verification. Chen et al. [9] investigated various fusion strategies and loss functions to obtain robust A-V feature representations for speaker verification. They have further evaluated the impact of the fusion strategies on extremely missing or corrupted modalities by leveraging a data augmentation strategy to discriminate noisy and clean embeddings. Cross-modal attention among the audio and visual modalities has been successfully explored in several applications such as weakly-supervised action localization [22], A-V event localization [14], and emotion recognition [37, 39]. Bogdan et al. [28] explored a cross-attention mechanism for A-V fusion based on the cross-correlation across the audio and visual modalities. The features of each modality are learned under the constraints of the other modality. However, they focus only on inter-modal relationships and fail to exploit the intra-modal relationships. Praveen et al. [36] explored a joint cross-attentional (JCA) framework for dimensional emotion recognition, which is closely related to our work. However, we have further adapted the JCA model for speaker verification by introducing the attentive statistics pooling module. ## 3 Problem Formulation For an input video sub-sequence \(S\), \(L\) non-overlapping video segments are uniformly sampled, and the corresponding deep feature vectors are obtained from the pre-trained models of the audio and visual modalities. Let \(\mathbf{Z_{\mathrm{a}}}\) and \(\mathbf{Z_{\mathrm{v}}}\) denote the deep feature vectors of the audio and visual modalities, respectively, for the given input video sub-sequence \(S\) of fixed size, which are expressed as: \[\mathbf{Z_{\mathrm{a}}}=\{\mathbf{z_{\mathrm{a}}^{1}},\mathbf{z_{\mathrm{a}}^{2}},...,\mathbf{z_{\mathrm{a}}^{L}}\}\in\mathbb{R}^{d_{a}\times L} \tag{1}\] \[\mathbf{Z_{\mathrm{v}}}=\{\mathbf{z_{\mathrm{v}}^{1}},\mathbf{z_{\mathrm{v}}^{2}},...,\mathbf{z_{\mathrm{v}}^{L}}\}\in\mathbb{R}^{d_{v}\times L} \tag{2}\] where \(d_{a}\) and \(d_{v}\) represent the dimensions of the audio and visual feature vectors, respectively, and \(\mathbf{z_{\mathrm{a}}^{l}}\) and \(\mathbf{z_{\mathrm{v}}^{l}}\) denote the audio and visual feature vectors of the video segments, respectively, for \(l=1,2,...,L\) segments. The objective of the problem is to estimate the speaker verification model \(f:\mathbf{Z}\rightarrow\mathbf{Y}\) from the training data \(\mathbf{Z}\), where \(\mathbf{Z}\) denotes the set of audio and visual feature vectors of the input video segments and \(\mathbf{Y}\) represents the speaker identity of the corresponding video sub-sequence \(S\). ## 4 Proposed Approach ### Visual Network Faces from videos involve both the appearance and the temporal dynamics of video sequences, which can provide information pertaining to a wide range of intra-variations of the visual modality. Effectively capturing the spatiotemporal dynamics of facial videos plays a key role in obtaining robust feature representations. Long Short-Term Memory networks (LSTMs) have been found to be promising in modeling the long-term temporal cues in sequence representations for various applications [38, 49].
In this work, we have used ResNet18 [16] trained on the VoxCeleb1 dataset [32] to obtain the spatial feature representations of the video frames. Conventionally, the size of the visual feature maps of the last convolutional layer is \(512\times 7\times 7\), which is fed to a pooling layer to reduce the spatial dimension from \(7\times 7\) to \(1\times 1\). However, this spatial reduction may leave out some useful information, which may deteriorate the performance of the system. Therefore, as suggested by [14], we have deployed a scaled dot-product of the audio and visual feature vectors for each segment in order to leverage the audio feature vectors to smoothly reduce the spatial dimensions of the video feature vectors. Then, we encode the temporal dynamics of the segments of the sequence of visual feature vectors using a Bi-directional LSTM with residual embedding. Finally, the obtained feature vectors of the visual modality are stacked to form a matrix of visual feature vectors, as shown by \[\mathbf{X_{\mathrm{v}}}=(\mathbf{x_{\mathrm{v}}^{1}},\mathbf{x_{\mathrm{v}}^{2}},...,\mathbf{x_{\mathrm{v}}^{L}})\in\mathbb{R}^{d_{v}\times L} \tag{3}\] ### Audio Network With the advent of deep neural networks, speaker verification based on deep feature vectors has shown significant improvement over the conventional i-vector [11] based methods. One of the most widely used deep feature vector embeddings is the x-vector paradigm [44], which uses a time-delay neural network (TDNN) and statistics pooling. Several variants of the TDNN such as the Extended TDNN (ETDNN) [1] and the Factored TDNN (FTDNN) [2] have been introduced to boost the performance of the system. Recently, ECAPA-TDNN [13] has been introduced for speaker verification, which has shown significant improvement by leveraging residual and squeeze-and-excitation (SE) components. So we have also explored ECAPA-TDNN to obtain the deep feature vectors of the audio segments. In order to exploit the temporal dynamics in the speech sequence, LSTMs have also been explored for speaker embedding extraction [5, 4]. Similar to the visual modality, we have also used a Bi-directional LSTM with residual embedding to encode the obtained audio feature vectors. Finally, the audio feature vectors of the \(L\) video clips are stacked to obtain a matrix, shown as \[\mathbf{X_{\mathrm{a}}}=(\mathbf{x_{\mathrm{a}}^{1}},\mathbf{x_{\mathrm{a}}^{2}},...,\mathbf{x_{\mathrm{a}}^{L}})\in\mathbb{R}^{d_{a}\times L} \tag{4}\] ### Joint Cross-Attentional A-V Fusion Though audio-visual fusion can be achieved through unified multimodal training, it was found that multimodal performance often declines relative to that of the individual modalities [50]. This has been attributed to a number of factors, such as differences in the learning dynamics of the audio and visual modalities [50], different noise topologies, with some modality streams containing more or less information for the task at hand, as well as specialized input representations [33]. Therefore, we have obtained the deep feature vectors of the individual audio and visual modalities independently, which are then fed to the joint cross-attentional module for audio-visual fusion. Since multiple modalities convey more diverse information than a single modality, effectively leveraging the intra-modal and inter-modal complementary relationships among the audio and visual modalities plays a key role in efficient audio-visual fusion.
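To make the two unimodal pipelines concrete, the following is a minimal PyTorch-style sketch of the residual BLSTM encoding that produces the segment-level feature matrices of Eqs. (3)-(4); the class name, the use of `nn.LSTM`, and the stand-in random inputs are our illustrative assumptions, with the feature dimensions \(d_a=192\) and \(d_v=512\) taken from Section 5.3.

```python
import torch
import torch.nn as nn

class SegmentEncoder(nn.Module):
    """Residual BLSTM encoder: backbone features -> X of Eqs. (3)-(4) (a sketch)."""
    def __init__(self, dim):
        super().__init__()
        # Bi-directional LSTM whose concatenated output matches the input width
        self.blstm = nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True)

    def forward(self, z):            # z: (batch, L, dim) per-segment backbone features
        h, _ = self.blstm(z)         # (batch, L, dim)
        return h + z                 # residual embedding, as described in Secs. 4.1-4.2

# Illustrative dimensions from Sec. 5.3: d_a = 192 (ECAPA-TDNN), d_v = 512 (ResNet18)
L = 16
z_a = torch.randn(1, L, 192)         # stand-in for ECAPA-TDNN segment embeddings
z_v = torch.randn(1, L, 512)         # stand-in for ResNet18 segment embeddings
X_a = SegmentEncoder(192)(z_a)       # (1, L, 192)
X_v = SegmentEncoder(512)(z_v)       # (1, L, 512)
```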
In this work, we have explored joint cross-attentional fusion to encode the intra-modal and inter-modal relationships simultaneously in a joint framework. Specifically, the joint A-V feature representation, obtained by concatenating the audio and visual features, is also fed to the fusion module along with the feature representations of the individual modalities. By deploying the joint representation, the features of each modality attend to themselves as well as to the other modality, thereby simultaneously capturing the semantic inter-modal and intra-modal relationships among the audio and visual modalities. Leveraging the joint representation also helps in reducing the heterogeneity among the audio and visual modalities, which further improves the performance of speaker verification. A block diagram of the proposed model is shown in Figure 1. Figure 1: **Block diagram of the joint cross-attention model for A-V fusion.** The joint representation of the audio-visual features, \(\mathbf{J}\), is obtained by concatenating the audio and visual feature vectors: \[\mathbf{J}=[\mathbf{X_{\mathrm{a}}};\mathbf{X_{\mathrm{v}}}]\in\mathbb{R}^{d\times L} \tag{5}\] where \(d=d_{a}+d_{v}\) denotes the feature dimension of the concatenated features. The concatenated audio-visual feature representation (\(\mathbf{J}\)) of the given video sub-sequence (\(S\)) is now used to attend to the feature representations of the individual modalities \(\mathbf{X_{\mathrm{a}}}\) and \(\mathbf{X_{\mathrm{v}}}\). The joint correlation matrix \(\mathbf{C_{\mathrm{a}}}\) across the audio features \(\mathbf{X_{\mathrm{a}}}\) and the combined audio-visual features \(\mathbf{J}\) is given by: \[\mathbf{C_{\mathrm{a}}}=\tanh\left(\frac{\mathbf{X_{\mathrm{a}}^{T}}\mathbf{W_{\mathrm{ja}}}\mathbf{J}}{\sqrt{d}}\right) \tag{6}\] where \(\mathbf{W_{\mathrm{ja}}}\in\mathbb{R}^{d_{a}\times d}\) represents a learnable weight matrix across the audio and combined audio-visual features, and \(T\) denotes the transpose operation. Similarly, the joint correlation matrix for the visual features is given by: \[\mathbf{C_{\mathrm{v}}}=\tanh\left(\frac{\mathbf{X_{\mathrm{v}}^{T}}\mathbf{W_{\mathrm{jv}}}\mathbf{J}}{\sqrt{d}}\right) \tag{7}\] where \(\mathbf{W_{\mathrm{jv}}}\in\mathbb{R}^{d_{v}\times d}\) is a learnable weight matrix. The joint correlation matrices \(\mathbf{C_{\mathrm{a}}}\) and \(\mathbf{C_{\mathrm{v}}}\) for the audio and visual modalities provide a semantic measure of relevance not only across the modalities but also within the same modality. A higher coefficient of the joint correlation matrices \(\mathbf{C_{\mathrm{a}}}\) and \(\mathbf{C_{\mathrm{v}}}\) shows that the corresponding samples are strongly correlated within the same modality as well as with the other modality. Therefore, the proposed approach is able to efficiently leverage the complementary nature of the audio and visual modalities (i.e., the inter-modal relationships) as well as the intra-modal relationships, thereby improving the performance of the system. After computing the joint correlation matrices, the attention weights of the audio and visual modalities are estimated.
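Before turning to the attention weights, a short sketch of how Eqs. (5)-(7) could be written in PyTorch may be useful; the module and variable names are ours, and the features are kept in the \((d,L)\) column layout of the equations.

```python
import torch
import torch.nn as nn

class JointCorrelation(nn.Module):
    """Joint correlation matrices C_a, C_v of Eqs. (6)-(7) (an illustrative sketch)."""
    def __init__(self, d_a, d_v):
        super().__init__()
        d = d_a + d_v
        # W_ja in R^{d_a x d}, W_jv in R^{d_v x d}, Xavier-initialized as in Sec. 5.3
        self.W_ja = nn.Parameter(nn.init.xavier_uniform_(torch.empty(d_a, d)))
        self.W_jv = nn.Parameter(nn.init.xavier_uniform_(torch.empty(d_v, d)))
        self.scale = d ** 0.5

    def forward(self, X_a, X_v):             # X_a: (d_a, L), X_v: (d_v, L)
        J = torch.cat([X_a, X_v], dim=0)     # Eq. (5): joint representation, (d, L)
        C_a = torch.tanh(X_a.T @ self.W_ja @ J / self.scale)   # Eq. (6): (L, L)
        C_v = torch.tanh(X_v.T @ self.W_jv @ J / self.scale)   # Eq. (7): (L, L)
        return J, C_a, C_v

# Illustrative call with the feature dimensions of Sec. 5.3 and L = 16 segments
X_a, X_v = torch.randn(192, 16), torch.randn(512, 16)
J, C_a, C_v = JointCorrelation(192, 512)(X_a, X_v)
```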
For the audio modality, the joint correlation matrix \(\mathbf{C_{\mathrm{a}}}\) and the corresponding audio features \(\mathbf{X_{\mathrm{a}}}\) are combined using the learnable weight matrix \(\mathbf{W_{\mathrm{ca}}}\) to compute the attention weights of the audio modality, which is given by \[\mathbf{H_{\mathrm{a}}}=ReLU(\mathbf{X_{\mathrm{a}}}\mathbf{W_{\mathrm{ca}}}\mathbf{C_{\mathrm{a}}}) \tag{8}\] where \(\mathbf{W_{\mathrm{ca}}}\in\mathbb{R}^{d_{a}\times d_{a}}\) and \(\mathbf{H_{\mathrm{a}}}\) represents the attention maps of the audio modality. Similarly, the attention maps (\(\mathbf{H_{\mathrm{v}}}\)) of the visual modality are obtained as \[\mathbf{H_{\mathrm{v}}}=ReLU(\mathbf{X_{\mathrm{v}}}\mathbf{W_{\mathrm{cv}}}\mathbf{C_{\mathrm{v}}}) \tag{9}\] where \(\mathbf{W_{\mathrm{cv}}}\in\mathbb{R}^{d_{v}\times d_{v}}\) denotes the learnable weight matrix. Then, the attention maps are used to compute the attended features of the audio and visual modalities as: \[\mathbf{X_{\mathrm{att,a}}}=\mathbf{H_{\mathrm{a}}}\mathbf{W_{\mathrm{ha}}}+\mathbf{X_{\mathrm{a}}} \tag{10}\] \[\mathbf{X_{\mathrm{att,v}}}=\mathbf{H_{\mathrm{v}}}\mathbf{W_{\mathrm{hv}}}+\mathbf{X_{\mathrm{v}}} \tag{11}\] where \(\mathbf{W_{\mathrm{ha}}}\in\mathbb{R}^{d\times d_{a}}\) and \(\mathbf{W_{\mathrm{hv}}}\in\mathbb{R}^{d\times d_{v}}\) denote the learnable weight matrices for the audio and visual modalities, respectively. The attended audio and visual features, \(\mathbf{X_{\mathrm{att,a}}}\) and \(\mathbf{X_{\mathrm{att,v}}}\), are further concatenated to obtain the A-V feature representation, which is given by: \[\widehat{\mathbf{X}}=[\mathbf{X_{\mathrm{att,v}}};\mathbf{X_{\mathrm{att,a}}}] \tag{12}\] The attended audio-visual feature vectors are fed to a Bi-directional LSTM in order to capture the temporal dynamics of the attended joint audio-visual feature representations. The segment-level audio-visual feature representations are in turn fed to attentive statistics pooling (ASP) [34] in order to obtain the sub-sequence- or utterance-level representation of the audio-visual feature vectors. Finally, the embeddings of the final audio-visual feature representations are used to obtain the scores, where the additive angular margin softmax (AAMSoftmax) [12] loss function is used to optimize the parameters of the fusion model and the ASP module. ## 5 Experimental Methodology ### Datasets The proposed approach has been evaluated on the VoxCeleb1 dataset [32], obtained from videos of YouTube interviews captured in a large number of challenging multi-speaker acoustic environments. The dataset contains 148,642 video clips from 1,251 speakers and is roughly gender-balanced, with 55% of the speakers being male. The speakers are selected from a wide range of different ethnicities, accents, professions, and ages. The duration of the video clips ranges from 4 to 145 seconds. In our experimental framework, we split the VoxCeleb1 development set (comprised of videos from 1,211 speakers) into training and validation sets. We have randomly selected 1,150 speakers for training and 61 speakers for validation. We have also reported our results on the Vox1-O (VoxCeleb1 Original) test set for performance evaluation. This test set consists of 37,720 trials from 40 speakers. ### Evaluation Metric In order to evaluate the performance of our proposed approach, we use the equal error rate (EER) as the evaluation metric, which has been widely used for speaker verification in the literature [28, 11].
It depicts the error rate at which the False Accept Rate (FAR) is equal to the False Reject Rate (FRR). So the lower the EER, the higher the reliability of the system. ### Implementation Details For the visual modality, the facial images are taken from the images provided by the organizers of the dataset. For regularizing the network, dropout is used with \(p=0.8\) on the linear layers. An initial learning rate of \(1e-2\) is used for the Adam optimizer. Also, a weight decay of \(5e-4\) is used. The batch size of the network is set to \(400\). Data augmentation is performed on the training data by random cropping, which produces a scale-invariant model. The number of epochs is set to \(50\), and early stopping is used to obtain the best weights of the network. For training the audio network, \(80\)-dimensional Mel-FilterBank (MFB) features are extracted using an analysis window size of \(25\) ms over a frame shift of \(10\) ms. The acoustic features are randomly augmented on-the-fly with either MUSAN noise, speed perturbation with a rate between \(0.95\) and \(1.05\), or reverberation [42]. In addition, we use SpecAugment [21] for applying frequency and time masking on the MFB features. The initial weights of the audio network are initialized with values from the normal distribution, the network is trained for a maximum of \(100\) epochs, and early stopping is used. The network is optimized using the Adam optimizer with an initial learning rate of \(0.001\), and the batch size is fixed to \(400\). In order to prevent the network from over-fitting, dropout is used with \(p=0.5\) after the last linear layer. Also, a weight decay of \(5e-4\) is used for all the experiments. For the fusion network, we use hyperbolic tangent functions for the activation of the cross-attention modules. The dimension of the extracted audio features is set to \(192\) and that of the visual features to \(512\). In the joint cross-attention module, the initial weights of the joint cross-attention matrices are initialized with the Xavier method [15] and the weights are updated using the Adam optimizer. The initial learning rate is set to \(0.001\) and the batch size is fixed to \(100\). Also, a dropout of \(0.5\) is applied on the attended A-V features and a weight decay of \(5e-4\) is used for all the experiments. ## 6 Results and Discussion ### Ablation Study In order to analyze the performance of the proposed fusion model, we compare it with some of the widely-used fusion strategies for speaker verification. One of the widely used fusion strategies is score-level fusion, where the scores of the individual modalities are obtained and fused together to estimate the identity of a person. Another common approach for A-V fusion is based on early fusion, where the deep features of the audio and visual modalities are concatenated immediately after being extracted, and the concatenated version of the individual modalities is used to obtain the final scores. As we can observe in Table 1, the proposed fusion model consistently outperforms both early fusion and score-level (decision-level) fusion by leveraging the semantic intra-modal and inter-modal relationships among the audio and visual modalities for speaker verification.
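Since all results below are reported as EER values, a small self-contained sketch of how the EER can be computed from raw trial scores may be helpful; the threshold sweep is one standard way to locate the FAR = FRR crossing, and the variable names are ours.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER (%): the operating point where the FAR equals the FRR."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)       # 1 = same speaker, 0 = impostor
    thresholds = np.sort(np.unique(scores))
    far = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    frr = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))             # closest FAR/FRR crossing
    return 100.0 * (far[i] + frr[i]) / 2.0

# Toy example with synthetic trial scores
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = labels + 0.8 * rng.standard_normal(1000)
print(f"EER = {equal_error_rate(scores, labels):.2f}%")
```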
In order to analyze the contribution of the LSTMs in improving the modeling of intra-modal relationships, for both the individual feature representations and the final attended A-V feature representations, we have carried out a series of experiments with and without Bi-directional LSTMs (BLSTMs). The experimental results analyzing the impact of the BLSTMs are shown in Table 1. Initially, we conducted an experiment without using BLSTMs with the proposed fusion model. Then, we introduced BLSTMs only for modeling the temporal dynamics of the individual feature representations (U-BLSTMs). We can observe that the performance of the proposed fusion model with the U-BLSTMs for the individual feature representations is improved. Next, we introduce BLSTMs for modeling the temporal dynamics of the final attended A-V feature representations (J-BLSTMs). As observed in Table 1, the performance of the proposed fusion model is further improved by introducing J-BLSTMs for modeling the temporal dynamics of the final A-V feature representations. \begin{table} \begin{tabular}{|l|c|} \hline Fusion Method & EER \\ \hline Feature Concatenation (Early Fusion) & 2.489 \\ Score-level Fusion (Decision-level) & 2.521 \\ Proposed Fusion (JCA) without BLSTMs & 2.315 \\ Proposed Fusion (JCA) with U-BLSTMs & 2.209 \\ Proposed Fusion (JCA) with U-BLSTMs and J-BLSTMs & 2.173 \\ \hline \end{tabular} \end{table} Table 1: Performance of various fusion strategies on the validation set ### Comparison to state-of-the-art In order to compare with the state-of-the-art, we have used the recently proposed A-V fusion model based on two-step multimodal deep cleansing [48]. We have used their deep cleansing approach as a baseline and extended their approach by introducing our proposed fusion model to obtain robust A-V feature representations. The experimental results of the proposed approach in comparison to that of [48] are shown in Table 2. We have reported the results for both the validation set and the Vox1-O test partition of the VoxCeleb1 dataset. In order to analyze the fusion performance of the proposed model, we have also reported the results for the individual audio and visual modalities. We can observe that the proposed fusion model clearly outperforms the individual modalities. We can also observe that, by introducing the proposed fusion model, the performance of the system improves over that of [48]. ## 7 Conclusion In this paper, we present a joint cross-attentional A-V fusion model for speaker verification in videos. Unlike prior approaches, we effectively leverage the intra-modal and complementary inter-modal relationships among the audio and visual modalities. In particular, we obtain the deep features of the audio and visual modalities from pre-trained networks, which are fed to the fusion model along with the joint representation. Then, the semantic relationships among the audio and visual modalities are obtained based on the cross-correlation between the individual feature representations and the joint A-V feature representation (the concatenated version of the audio and visual features). The attention weights obtained from the cross-correlation matrices are used to estimate the attended feature vectors of the audio and visual modalities. The modeling of intra-modal relationships in the proposed system has been further improved by leveraging Bi-directional LSTMs to model the temporal dynamics of both the individual feature representations and the final attended A-V feature representations.
Experiments have shown that the proposed approach outperforms the state-of-the-art approaches for speaker verification. ## 8 Acknowledgment The authors wish to acknowledge the funding from the Government of Canada's New Frontiers in Research Fund (NFRF) through grant NFRFR-2021-00338.
2309.11560
A Robust Large-Period Discrete Time Crystal and its Signature in a Digital Quantum Computer
Discrete time crystals (DTCs) are novel out-of-equilibrium quantum states of matter which break time translational symmetry. So far, only the simplest form of DTCs that exhibit period-doubling dynamics has been unambiguously realized in experiments. We develop an intuitive interacting spin-$1/2$ system that supports the more non-trivial period-quadrupling DTCs ($4T$-DTCs) and demonstrate its digital simulation on a noisy quantum processor. Remarkably, we found a strong signature of the predicted $4T$-DTC that is robust against and, in some cases, amplified by different types of disorders. Our findings thus shed light on the interplay between disorder and quantum interactions in the formation of time crystallinity beyond period-doubling, as well as demonstrate the potential of existing noisy intermediate-scale quantum devices for simulating exotic non-equilibrium quantum states of matter.
Tianqi Chen, Ruizhe Shen, Ching Hua Lee, Bo Yang, Raditya Weda Bomantara
2023-09-20T18:01:01Z
http://arxiv.org/abs/2309.11560v2
# A Robust Large-Period Discrete Time Crystal and its Signature in a Digital Quantum Computer ###### Abstract Discrete time crystals (DTCs) are novel out-of-equilibrium quantum states of matter which break time translational symmetry. So far, only the simplest form of DTCs that exhibit period-doubling dynamics has been unambiguously realized in experiments. We develop an intuitive interacting spin-1/2 system that supports the more non-trivial period-quadrupling DTCs (4\(T\)-DTCs) and demonstrate its digital simulation on a noisy quantum processor. Remarkably, we found a strong signature of the predicted 4\(T\)-DTC that is robust against and, in some cases, amplified by different types of disorders. Our findings thus shed light on the interplay between disorder and quantum interactions in the formation of time crystallinity beyond period-doubling, as well as demonstrate the potential of existing noisy intermediate-scale quantum devices for simulating exotic non-equilibrium quantum states of matter. _Introduction.-_ The concept of non-ergodicity [1] in quantum phenomena is ubiquitous and important in quantum many-body physics [2]. It underlies a variety of exotic physical phenomena such as the eigenstate thermalization hypothesis [3; 4; 5], many-body localization [6; 7], quantum scars [8; 9], quantum chaos [10], and time crystals [11; 12; 13; 14; 15]. In particular, discrete time crystals (DTCs) [16; 17] are a type of non-ergodic phase of matter that gained prominence in recent years [18; 19] as the most experimentally realistic form of time crystals. They emerge in periodically driven systems and are characterized by the presence of an order parameter evolving at a period that is robustly locked at an integer multiple of the driving period, persisting indefinitely in the thermodynamic limit [14; 15; 18; 19]. Experimentally realizing DTCs whose order parameter exhibits a much larger period than the corresponding driving period is highly desirable, as it paves the way for observing passive quantum error correction [20], as well as novel dynamical physics such as Anderson localization and Mott insulator transitions in the time domain [21; 22; 23; 24; 25; 26; 27]. Unfortunately, despite a considerable number of theoretical proposals for such large-period DTCs [28; 29; 30; 31; 32], existing experiments were only able to realize period-doubling [33; 34; 35; 36; 37; 38] and period-tripling [39] DTCs. Indeed, as these experiments utilize (pseudo)spin-1/2 particles, they are incompatible with Refs. [28; 29; 31], which utilize bosonic particles. Moreover, with current technology, accessing the very large number of particles required in Refs. [30; 32] is infeasible. It is worth noting that a particular example of a period-quadrupling DTC was recently realized in an acoustic system [40]. However, the signature of such a DTC is only observable at the boundaries of the system rather than in its bulk. Moreover, as acoustic systems are inherently classical, the obtained large-period DTC may not be directly useful for the aforementioned quantum technological applications. In this work, we develop an interacting spin-1/2 system that supports period-quadrupling DTCs (which we shall refer to as 4\(T\)-DTCs) observable even at moderate system sizes. The time-evolution with matrix product states (tMPS) [41; 42] method enables us to numerically investigate, in a controlled manner, whether disorder can harbor nontrivial effects beyond simply degrading the desired signal.
Remarkably, we found that the signatures of 4\(T\)-DTCs are not only robust against various types of disorders, but can even be amplified in some cases. Also, at this stage, since most NISQ-era quantum devices suffer from various kinds of noise, ranging from limited gate fidelity and large circuit depth to thermal environment noise arising from the execution of the quantum circuit [43; 44; 45], it is vital to digitally implement such 4\(T\)-DTCs on a NISQ-era device to investigate how faithfully their signatures can be captured. Motivated by recent tremendous progress on simulating condensed matter systems on superconducting quantum processors [46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57], we then verify our claim by explicitly realizing the proposed system with the IBM Q quantum processor _ibmq_cairo_. Despite the inevitable noise occurring in our NISQ-era device, a robust period-quadrupling order parameter could still be captured. _Model.-_ We propose a periodically driven spin-1/2 ladder which is schematically depicted in Fig. 1(a) and described by the following periodically quenched Hamiltonian, \[\hat{\mathcal{H}}\left(t\right)=\begin{cases}-\frac{h}{2}\sum_{i=1}^{N_{0}}\left(H_{i}^{xx}-H_{i}^{yy}(t)\right)-J\tilde{H}^{zz}&0<t<\frac{T}{2},\\ M\sum_{i=1}^{N_{0}}\sigma_{i,b}^{x}&\frac{T}{2}<t<T,\end{cases} \tag{1}\] where \(H_{i}^{xx}=\sigma_{i,a}^{x}\sigma_{i,b}^{x}\), \(H_{i}^{yy}(t)=\left(1+\cos\omega t\right)\sigma_{i,a}^{y}\sigma_{i,b}^{y}\), \(\tilde{H}^{zz}=\sum_{i=1}^{N_{0}-1}\sigma_{i,a}^{z}\sigma_{i+1,a}^{z}\), \(\sigma_{i,a}^{x/y/z}\) are a set of Pauli matrices describing the spin-1/2 particle at the \(i\)-th site of ladder \(a/b\), \(N_{0}\) is the length of the ladder, \(\omega=2\pi/T\), and \(T\) is the driving period. The parameters \(J\) and \(h\) represent the intra- and inter-ladder interaction strengths, respectively, whilst \(M\) describes the magnetic field strength in a spin-\(1/2\) magnet analogy. Throughout this work, we work in units such that \(\hbar=1\), and set the driving period \(T\) to be \(1\), for easy comparison with the \(4T\) time-crystal oscillation period demonstrated later. Note that the Floquet driving appears not just in the two-step quench, but also in the continuous time dependence of \(H_{i}^{yy}(t)\). In this case, the term \(\cos\omega t\) in \(H_{i}^{yy}(t)\) serves to increase the non-integrability of our system, i.e., the evolution operator over one period cannot be written as a mere product of two exponentials. To understand how Eq. (1) has the propensity to support the sought-after \(4T\)-DTC, we first consider the special limit of \(hT=MT=\pi\) and \(JT=0\) (to be referred to as the solvable limit hereafter), so that the system reduces to a variation of the model introduced in Ref. [58]. By taking an initial state in which all spins are aligned in the \(+z\)-direction, which we denote as \(|\uparrow\cdots\rangle_{a}\otimes|\uparrow\cdots\rangle_{b}\), it is easily shown (using Eq. (1)) to evolve as [see also Fig. 1(b)]
\[|\uparrow\cdots\rangle_{a}\otimes|\uparrow\cdots\rangle_{b} \xrightarrow{(T)} \mathrm{i}|\downarrow\cdots\rangle_{a}\otimes|\uparrow\cdots \rangle_{b} \tag{2}\] \[\xrightarrow{(2T)} \mathrm{i}|\downarrow\cdots\rangle_{a}\otimes|\downarrow\cdots \rangle_{b}\] \[\xrightarrow{(3T)} -|\uparrow\cdots\rangle_{a}\otimes|\downarrow\cdots\rangle_{b}\] \[\xrightarrow{(4T)} -|\uparrow\cdots\rangle_{a}\otimes|\uparrow\cdots\rangle_{b}.\] That is, up to a global phase factor, the state returns to itself only after four periods. Note that if we strictly remain in the non-interacting limit \(J=0\), such \(4T\)-periodicity will no longer hold even if the parameters \(h\) and \(M\) are tuned away from their special values above by the slightest amount. Interestingly, by turning on the inter-site interaction \(J\), our results below show that the above \(4T\)-periodicity becomes more robust against such parameter variations. The induced robustness from the interaction of the form \(\sigma_{i,a}^{z}\sigma_{i+1,a}^{z}\) could be understood from its connection to the physics of quantum repetition codes [20]. Moreover, as such an interaction renders our system truly many-body in nature, the observed robust \(4T\)-periodicity in the vicinity of the parameters \(hT=MT=\pi\) and \(JT=0\) thus represents a signature of a genuine \(4T\)-DTC phase. In the following, we shall demonstrate the stability of this \(4T\)-periodic behavior in more detail, for generic parameter values, first through a tMPS numerical simulation, and then alternatively by physically simulating the system on the IBM _ibmq_cairo_ quantum processor. _Evidence of robust \(4T\)-DTC via tMPS.-_ Here, we employ an efficient method of time evolution with matrix product states (tMPS), where the quantum state is represented as an MPS, and the unitary time evolution operator as a matrix product operator (MPO) [59]. To perform a tMPS study of the system, all the sites are realigned on a linear chain of length \(N=2N_{0}\) with next-to-nearest neighbor couplings. We then implement a first-order Suzuki-Trotter algorithm with swap gates [60; 61] to carry out the time evolution, the mathematical details of which can be found in the supplementary materials [62]. The illustration of the tMPS calculation of the model, numerical details, as well as the transformed Hamiltonian, is also shown in the supplementary materials [62]. To capture the signatures of the \(4T\)-DTC in our system, we calculate the stroboscopic averaged magnetization dynamics for spins residing on one of the ladders (which we choose as \(a\)) \[\left\langle S_{z}\right\rangle(t)=\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\langle\sigma_{i,a}^{z}\rangle(t)\,, \tag{3}\] and the associated power spectrum as \[\langle\tilde{S}_{z}\rangle(\Omega)=\left|\frac{1}{\mathcal{N}_{\mathrm{tot}}}\sum_{k=1}^{\mathcal{N}_{\mathrm{tot}}}\left\langle S_{z}\right\rangle(kT)\exp\left[-\mathrm{i}\,k\Omega T\right]\right| \tag{4}\] where \(\mathcal{N}_{\mathrm{tot}}\) is the total number of stroboscopic steps evolved, and \(t=kT\) (\(k=1,2,\cdots,\mathcal{N}_{\mathrm{tot}}\)) is the stroboscopic time at step \(k\). Our results are summarized in Fig. 2 for two different sets of parameter values that correspond to the \(4T\)-DTC phase and the thermal phase, respectively.
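Before turning to the detailed results, readers who wish to reproduce the qualitative behaviour without a tensor-network code may find the following small exact sketch of Eqs. (1) and (3)-(4) useful, written for a minimal ladder; the site ordering, the step size used to time-order the continuously modulated half-period, and the parameter values (taken from Fig. 2) are our own choices.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices; helper placing given operators on given sites of an n-spin chain
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def op(mats, sites, n):
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, mats[sites.index(k)] if k in sites else id2)
    return out

# Ladder with N0 rungs; site 2*i hosts spin (i, a), site 2*i + 1 hosts spin (i, b)
N0, T, steps = 2, 1.0, 200
n = 2 * N0
h, J, M = 0.9 * np.pi, 0.16 * np.pi, 0.98 * np.pi   # hT, JT, MT as in Fig. 2(a)
w = 2 * np.pi / T

def H_first_half(t):                 # driven half-period of Eq. (1)
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(N0):
        H -= h / 2 * (op([sx, sx], [2*i, 2*i + 1], n)
                      - (1 + np.cos(w * t)) * op([sy, sy], [2*i, 2*i + 1], n))
    for i in range(N0 - 1):          # intra-ladder zz coupling on leg a
        H -= J * op([sz, sz], [2*i, 2*(i + 1)], n)
    return H

# Time-ordered evolution over (0, T/2), then the transverse kick on ladder b
U = np.eye(2**n, dtype=complex)
dt = (T / 2) / steps
for k in range(steps):
    U = expm(-1j * dt * H_first_half((k + 0.5) * dt)) @ U
H_kick = M * sum(op([sx], [2*i + 1], n) for i in range(N0))
U = expm(-1j * (T / 2) * H_kick) @ U

# Stroboscopic magnetization of ladder a, Eq. (3), and its power spectrum, Eq. (4)
psi = np.zeros(2**n, dtype=complex); psi[0] = 1.0   # all spins up
Sz_a = sum(op([sz], [2*i], n) for i in range(N0)) / N0
mags = []
for _ in range(100):
    psi = U @ psi
    mags.append((psi.conj() @ Sz_a @ psi).real)
spectrum = np.abs(np.fft.fft(mags)) / len(mags)     # 4T-DTC: peak at Omega = pi/2T
```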
Specifically, as the parameters \(h\) and \(M\) are chosen close to but not equal to the solvable limit values, at a finite value of the inter-site interaction \(J\), the period-quadrupling feature of \(\left\langle S_{z}\right\rangle\) is clearly observed (triangle markers in Fig. 2(a)). This is further demonstrated by a sharp peak at the subharmonic frequency components \(\Omega=\pi/2,3\pi/2\) in the power spectrum of Fig. 2(b). Figure 1: (a) Schematics of our periodically driven spin-\(1/2\) ladder for \(N_{0}=4\). During the first half of the period (\(0\to T/2\), solid box), the evolution is governed by externally driven Heisenberg spin exchange interactions that are continuously modulated at frequency \(\omega\). In the second half of the period (\(T/2\to T\), dashed box), the interactions are switched off and instead a magnetic field \(M\) is applied in the \(x\) direction. (b) The \(4T\)-periodic oscillations can be intuitively understood in the solvable limit of \(JT=0\) and \(hT=MT=\pi\). With all spins initialized pointing up, the system undergoes spatially uniform \(4T\)-periodic oscillations; remarkably, these oscillations become stabilized if a nonzero \(JT\) is introduced. We remark that additional phase factors of \(\mathrm{i}\) or \(-1\) are omitted in the illustration. That such a \(4T\)-periodicity is observed over a window of parameter values and not only at a specific set of parameter values suggests that the system indeed supports a \(4T\)-DTC phase. If a parameter \(h\) or \(M\) deviates significantly from its corresponding ideal value, or if the inter-site interaction \(J\) is absent, \(\langle S_{z}\rangle\) quickly decays to zero, and the system is in the thermal phase (empty square and triangle markers in Fig. 2(a)). In Fig. 3(a), we obtain the phase diagram of the system by plotting the subharmonic \(\Omega T=\pi/2\) peak (\(\langle\tilde{S}_{z}\rangle\left(\Omega\right)\) at \(\Omega=\pi/2T\)) in the power spectrum against the two system parameters \(h\) and \(J\) for the system size of \(N=16\). There, a finite (zero) \(\langle\tilde{S}_{z}\rangle\left(\Omega\right)\) at \(\Omega=\pi/2T\) is associated with the \(4T\)-DTC (thermal) phase. It is observed that the \(4T\)-DTC phase spans a considerable window of \(h\) values, symmetrically about \(hT=\pi\), at moderate values of \(J\). At \(J=0\), the period-quadrupling feature is observed only at \(hT=\pi\), further confirming the role of the inter-site interaction in stabilizing the DTC phase. On the other hand, at very large values of \(J\), the \(4T\)-DTC behavior is absent altogether, which could be attributed to the presence of quantum chaos [63]. In Fig. 3(c) and (d), we investigate the effect of spatial disorder on our \(4T\)-DTC system; panel (d) is chosen deep in the DTC regime, whereas panel (c) is taken near the border between the DTC and thermal phases. Here, each of the disordered parameters \(h\), \(J\), and \(M\) for each spin in the system is drawn randomly from a uniform distribution on \([P-dP,P+dP]\), where \(P=h,J,M\) and \(dP=dh,dJ,dM\). Figs. 3(c) and (d) present the striking observation that the presence of disorder actually enhances the signature of the \(4T\)-DTC phase near its border with the thermal phase.
Figure 2: Numerical evidence of robust \(4T\)-DTC for \(N=16\) sites using tMPS. (a) Magnetization \(\langle S_{z}\rangle\) as a function of time at \(MT=0.98\pi\), which is slightly perturbed away from the “ideal” solvable limit value \(MT=\pi\). For \(hT=0.9\pi\), which is also perturbed from \(\pi\), a nonzero interaction strength \(JT=0.16\pi\) (4T-DTC) gives distinctly 4T-periodic oscillations, while \(JT=0\) (blue squares) gives random-looking oscillations. The thermal phase at \(JT=0.1\pi\), \(hT=0.52\pi\) exhibits subdued oscillations. (b) The associated stroboscopic power spectrum \(\langle\tilde{S}_{z}\rangle\), which shows distinct frequency peaks at \(\Omega=\pm\pi/2T\) only for the 4T-DTC phase. Figure 3: (a) Presence of \(4T\)-DTC behavior over a wide range of \(JT\) and \(hT\): the phase diagram representing the value of the subharmonic peak at \(\Omega T=\pi/2\) and \(MT=0.98\pi\). (b) Illustration of how partial degeneracy in the clean system (overlapping blue and red lines) leads to the breakdown of the \(\pi/2\) quasienergy spacing. The left (right)-side quasienergy spacings correspond to the ones before (after) perturbations. (c-d) The full power spectrum associated with the magnetization dynamics up to \(t=100T\) under the influence of various disorders at (c) \(JT=0.13\pi\), \(hT=0.8\pi\), and \(MT=0.98\pi\), i.e., the green dot in panel (a), and (d) \(JT=0.16\pi\), \(hT=0.9\pi\), and \(MT=0.98\pi\), i.e., the blue dot in panel (a). The enhanced subharmonic peaks due to disorders are clearly observed near the DTC-thermal phase boundaries (panel (c)). All data points involving disorders are averaged over 220 realizations. To understand this result, we should first recall that in a genuine \(4T\)-DTC, a macroscopic number of quasienergies (the eigenphases of the one-period evolution operator) form quadruplets with \(\pi/2\) spacing among them, i.e., they can be written as \(\varepsilon+n\pi/2\) for some \(\varepsilon\) and \(n=0,1,2,3\) [58; 20]. Ideally, such quadruplets of quasienergies should be either non-degenerate or fully degenerate (\(\varepsilon+n\pi/2\) for all \(n\)). In the case of partial degeneracy, i.e., when \(\varepsilon+n\pi/2\) are only degenerate for some \(n\), certain perturbations may nonuniformly shift those degenerate quasienergies [see the upper part of Fig. 3(b)], which then breaks their \(\pi/2\) quasienergy spacing and consequently leads to a less robust period-quadrupling signal. In the clean system, such partial degeneracy tends to occur very often; perturbing the system parameters near the DTC-thermal phase transitions then causes the many quadruplets of \(\pi/2\)-separated quasienergies above to break down due to the aforementioned mechanism. In the presence of spatial disorder, the system parameters for each spin or pair of spins take on slightly different values. As a result, the probability for a system's quasienergy to be degenerate is significantly reduced, thereby resulting in more robust \(\pi/2\)-separated quadruplets of quasienergies [see the lower part of Fig. 3(b)]. In the Supplemental Material [62], we further demonstrate the above argument by explicitly evaluating the quasienergy levels with and without disorder. Away from the phase transition boundaries, the presence of disorders does not seem to yield a signal improvement. In some cases, disorders instead slightly reduce the subharmonic peak.
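The quasienergy picture above can also be checked directly on the small exact example: the quasienergies are the eigenphases of the one-period operator `U` constructed in the earlier sketch, and in the \(4T\)-DTC phase each level should have a partner shifted by \(\pi/2\). A minimal diagnostic, with our own choice of mismatch measure, is:

```python
import numpy as np

# Quasienergies = eigenphases of the one-period evolution operator U
# (U as constructed in the previous sketch)
quasi = np.sort(np.angle(np.linalg.eigvals(U)))      # values in (-pi, pi]

# For quadruplets eps + n*pi/2 (n = 0,1,2,3), every level should have a partner
# at a pi/2 shift (mod 2*pi); the worst-case mismatch diagnoses the phase.
shifted = (quasi + np.pi / 2 + np.pi) % (2 * np.pi) - np.pi
mismatch = np.array([np.min(np.abs(quasi - s)) for s in shifted])
print(mismatch.max())    # small in the 4T-DTC phase, O(1) deep in the thermal phase
```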
Indeed, away from the phase transition boundaries (close to the solvable limit), the detrimental partial degeneracy among different quadruplets of \(\pi/2\)-separated quasienergies is already rare to begin with. In this case, disorders instead serve as perturbations of the system parameters with respect to the solvable limit values. Nevertheless, as demonstrated in Fig. 3(c) and (d), our DTC is remarkably robust against moderate disorders (\(\sim 8\%\)). Our numerical findings indicate that these spatial disorders enhance the distinguishing features of the \(4T\)-DTC near its borders with the thermal phase. When deep within the DTC phase, spatial disorders have a tendency to marginally reduce the strength of its distinguishing signatures, although they still remain clearly discernible, which demonstrates the robustness of our \(4T\)-DTC. _Realization of robust \(4T\)-DTC on a quantum processor.-_ In the numerical simulations of our model using tMPS, we have uncovered that the signature of the \(4T\)-DTC is extremely robust, even in the presence of spatial disorders. This suggests that our model is ideal for physical realization on a quantum computer, which inevitably exhibits various types of device noise. Particularly, given that other simulations of DTCs realized in quantum computers exhibit poor results due to the effect of noise [64; 65], our \(4T\)-DTC model not only possesses Heisenberg spin exchange interactions beyond Ising-type couplings [51], but also holds the promise of much more robust signatures. Here, we proceed to realize our system, and capture its DTC signatures, on the IBMQ quantum processor. A naive implementation of the time dynamics of our model in a quantum circuit follows from a Trotterization procedure similar to that in our tMPS simulations; more details are shown in the Supplementary Materials [62]. For any quantum circuit implementation, the coupling between qubits needs to be implemented via basic quantum gates such as the controlled-NOT (CNOT) and single-qubit rotation gates; one significant advantage of a quantum circuit implementation over other quantum platforms, e.g., ultracold atoms, is that a time-dependent model such as a DTC model can be implemented without any additional difficulty, just by concatenating different Trotter steps at different stages. Instead of transpiling local couplings within each Trotter step, we adopt a more efficient approach by leveraging a variational circuit optimization technique; the details of this approach are presented in the Supplementary Materials [62]. This strategy of directly implementing the whole circuit requires fewer \(CX\) gates and compresses the circuit depth, thereby suppressing the effect of gate errors. For other simulations using the Trotterization approach [64; 65], the signature of the DTC degrades as the dynamics evolve, since the circuit depth itself grows linearly with time. In our unique scheme of trained circuits with a fixed circuit depth, as demonstrated below, we successfully achieve robust results for our proposed \(4T\)-DTC model; in particular, it exhibits remarkably strong resilience to device noise even over many Floquet cycles. In Fig. 4, we present the measured stroboscopic magnetization \(\langle S_{z}\rangle\) (solid circles) on the IBM quantum computer over the time evolution and compare it with the numerical results (unfilled circles, squares, and triangles) obtained by the tMPS method.
Figure 4: Physical signature of \(4T\)-DTC behavior on the IBM quantum processor and its comparison with tMPS results at several system sizes (green). The parameters used are: \(JT=0.5\pi\), \(MT=0.98\pi\), \(hT=0.9\pi\). For the details of the device _ibmq_cairo_ and its error information, see Fig. S5 in the supplementary materials [62]. Remarkably, thanks to our variational method, we realize the quantum simulation over long periods of time (20 Floquet steps). Within such long-time dynamics, our numerical and quantum results demonstrate excellent agreement, indicating that our IBM Q simulation gives a faithful characterization of our \(4T\)-DTC model. Here, we execute our IBM Q simulation on an 8-qubit instance, which enables the realization of highly compressed trained circuits for overcoming device noise [66]. This is already a sufficiently long chain for demonstrating the \(4T\)-DTC, given that finite-size effects are insignificant: as shown in Fig. 4, our tMPS results at the different sizes \(N=8,16\), and \(32\) all show qualitatively similar profiles to the IBM Q results. _Conclusion and outlook.-_ We proposed an intuitive and realistic new spin-1/2 model that supports a nontrivial type of DTC, characterized by a robust period-quadrupling observable rather than the more common period-doubling type. Remarkably, we were able to explicitly capture the signatures of such a \(4T\)-DTC both numerically and via a NISQ-era IBM quantum processor, even at a relatively small number of qubits and in the presence of considerable hardware noise. In particular, excellent agreement was obtained between the two approaches. More surprisingly, we found that spatial disorders could actually improve the signatures of the \(4T\)-DTC in some cases, thus shedding more light on the role of disorders in the formation of DTCs. The experimental realization of our \(4T\)-DTC system at larger system sizes is expected to be a significant future research direction, both in the area of quantum computing and in condensed matter platforms such as ultracold atoms [67, 68, 69, 70, 71, 72, 73, 74, 75, 76]. On the one hand, the fact that our \(4T\)-DTC phase exists within a spin-1/2 system makes it a realistic and appropriate phenomenon for benchmarking the performance of various existing noisy intermediate-scale quantum (NISQ) devices. On the other hand, the ability to achieve a large-size \(4T\)-DTC may also open up opportunities to harness its technological applications beyond observing its subharmonic signatures, e.g., as a quantum memory or a passive quantum error correcting device. Finally, a realistic generalization of our spin-1/2 system construction that supports DTCs beyond period-quadrupling makes for a good avenue for future theoretical and experimental studies that can uncover rich phenomenology lying at the intersection of Floquet, many-body, and even non-Hermitian physics. _Acknowledgements.-_ We are grateful to Jiangbin Gong for fruitful discussions. T. C. thanks E. Miles Stoudenmire for fruitful discussion via the ITensor discourse group [77]. T. C. and R. S. thank Truman Ng and Russell Yang for discussions on the quantum simulation implementation on IBM Quantum services. C. H. L. and T. C. acknowledge support by Singapore's NRF Quantum Engineering grant NRF2021-QEP2-02-P09 and Singapore's MOE Tier-II grant Proposal ID: T2EP50222-0008. T. C. and B. Y. acknowledge the support from the Singapore National Research Foundation (NRF) under NRF Fellowship award NRF-NRFF12-2020-0005. R. W.
B. acknowledges the support provided by the Deanship of Research Oversight and Coordination (DROC) at King Fahd University of Petroleum & Minerals (KFUPM) through project No. EC221010. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. The MPS calculations in this work were performed using the ITensor library [78]. The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore ([https://www.nscc.sg/](https://www.nscc.sg/)), and on the National University of Singapore (NUS)'s high-performance computing facilities.
2301.13555
Random matrices associated to Young diagrams
We consider the singular values of certain Young diagram shaped random matrices. For block-shaped random matrices, the empirical distribution of the squares of the singular values converges almost surely to a distribution whose moments are a generalisation of the Catalan numbers. The limiting distribution is the density of a product of rescaled independent Beta random variables and its Stieltjes-Cauchy transform has a hypergeometric representation. In special cases we recover the Marchenko-Pastur and Dykema-Haagerup measures of square and triangular random matrices, respectively. We find a further factorisation of the moments in terms of two complex-valued random variables that generalises the factorisation of the Marchenko-Pastur law as a product of independent uniform and arcsine random variables.
Fabio Deelan Cunden, Marilena Ligabò, Tommaso Monni
2023-01-31T11:11:32Z
http://arxiv.org/abs/2301.13555v3
# Random matrices associated to Young diagrams ###### Abstract We consider the singular values of certain Young diagram shaped random matrices. For block-shaped random matrices, the empirical distribution of the squares of the singular values converges almost surely to a distribution whose moments are a generalisation of the Catalan numbers. The limiting distribution is the density of a product of rescaled independent Beta random variables and its Stieltjes-Cauchy transform has a hypergeometric representation. In special cases we recover the Marchenko-Pastur and Dykema-Haagerup measures of square and triangular random matrices, respectively. We find a further factorisation of the moments in terms of two complex-valued random variables that generalises the factorisation of the Marchenko-Pastur law as a product of independent uniform and arcsine random variables. * February 2023 ## 1 Introduction Let \(X\) be a matrix whose entries are i.i.d. complex random variables. The nonnegative definite matrix \(XX^{*}\) is known as a _random covariance matrix_, and it is arguably one of the most studied models in random matrix theory, with varied applications in physics, statistics and other areas [14, 20, 26]. In this paper we consider a class of random matrices, that we dub _\(\lambda\)-shaped random matrices_. They can be thought of as a generalisation of random covariance matrices where \(X\) has the 'shape' of a Young diagram. Instances of these matrix models in the special case of Gaussian entries were studied by Dykema and Haagerup [11] as a tool to construct certain non-commutative random variables called _DT-elements_, and in the work of Féray and Śniady [12] in relation to _Stanley's character formula_ [31] of the symmetric group. Under various names and in disguised form, Gaussian \(\lambda\)-shaped matrices recently resurfaced in connection to biorthogonal ensembles, last passage percolation and free probability [1, 6, 7, 15, 27]. ## 2 Partitions and Young diagrams We begin by reviewing some of the basic terminology of integer partitions and Young diagrams. A standard reference is [21, Ch. I]. A _partition_ is any (finite or infinite) sequence \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{m},\ldots)\) of weakly decreasing nonnegative integers, \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{m}\geq\cdots\). It is convenient not to distinguish between two such sequences which differ only by a string of zeros at the end. Thus, for example, we regard \((5,4,4,1)\), \((5,4,4,1,0)\), \((5,4,4,1,0,0,\ldots)\) as the same partition. We denote the set of partitions by \(\mathcal{P}\). The number of nonzero elements \(\lambda_{i}\) of a partition \(\lambda\) is called the _number of parts_ or _length_ of \(\lambda\), denoted by \(\ell(\lambda)\); the sum of the parts is the _weight_ of \(\lambda\), denoted by \(|\lambda|=\lambda_{1}+\lambda_{2}+\cdots\). If \(|\lambda|=n\), we say that \(\lambda\) is a partition of \(n\) and we write \(\lambda\vdash n\). The _Young diagram_ of a partition \(\lambda\) is the set of points \(\{(i,j)\in\mathbb{N}^{2}\colon 1\leq j\leq\lambda_{i}\}\). A Young diagram is usually drawn as a set of boxes, not of points. In drawing such diagrams we shall adopt the _English convention_, as with matrices, that the first coordinate \(i\) (the row index) increases as one goes downwards, and the second coordinate \(j\) (the column index) increases from left to right.
With this convention the diagram can be visualised as a set of left-justified rows of boxes, where the \(i\)-th row contains \(\lambda_{i}\) boxes (hence no row is longer than the row above it). For instance, the diagram of \(\lambda=(5,4,4,1)\vdash 14\) consists of four left-justified rows of 5, 4, 4 and 1 boxes. For a positive integer \(N\), the Young diagram of \(N\lambda\) is a _dilation_ of the Young diagram of \(\lambda\), obtained by replacing each box in \(\lambda\) by a grid of \(N\times N\) boxes; equivalently, each part \(\lambda_{i}\) of \(\lambda\) is replaced by \(N\) parts equal to \(N\lambda_{i}\). Hence, if \(\lambda\vdash n\), then \(N\lambda\vdash N^{2}n\). For instance, if \(\lambda=(5,4,4,1)\vdash 14\), then \(3\lambda=(15,15,15,12,12,12,12,12,12,3,3,3)\vdash 3^{2}\cdot 14\).
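The dilation operation is easy to implement. The following is a minimal sketch (ours, not from the paper; the helper name `dilate` is hypothetical) verifying the weight identity \(|N\lambda|=N^{2}|\lambda|\) on the example above.

```python
# Minimal sketch (not from the paper): a partition as a tuple of weakly
# decreasing positive integers; dilation replaces each part p by N parts
# equal to N*p, i.e. each box of the diagram by an N x N grid of boxes.

def dilate(partition, N):
    """Dilation N*lambda of a partition."""
    return tuple(N * p for p in partition for _ in range(N))

lam = (5, 4, 4, 1)
assert sum(lam) == 14                                  # lambda |- 14
assert dilate(lam, 3) == (15,) * 3 + (12,) * 6 + (3,) * 3
assert sum(dilate(lam, 3)) == 9 * 14                   # N*lambda |- N^2 n
```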
Given an increasing sequence \(\left(\lambda^{(N)}\right)_{N\geq 1}\) of Young diagrams, the \(\lambda\)-shaped random matrix \(X_{N}\) has independent entries \(X_{ij}\), centred and with unit variance \(\mathbb{E}|X_{ij}|^{2}=1\), at the positions \((i,j)\in\lambda^{(N)}\), and zero entries elsewhere. Writing \(\ell_{N}=\ell(\lambda^{(N)})\), we set \[W_{N}=\frac{1}{N}X_{N}X_{N}^{*}, \tag{2}\] and we denote by \[F_{N}(x)=\frac{1}{\ell_{N}}\#\left\{i\colon x_{i}^{(N)}\leq x\right\} \tag{3}\] the empirical distribution of the eigenvalues \(x_{i}^{(N)}\) of \(W_{N}\). It is natural to ask whether the sequence \((F_{N})_{N\geq 1}\) converges in distribution. The limit, when it exists, will be referred to as the _limiting spectral distribution_ of \(W_{N}\). We begin by recalling two known cases. ### Full matrices Classical Wishart matrices fit naturally in the setting of \(\lambda\)-shaped random matrices. Let \(\lambda^{(N)}=(N,N,\ldots,N)\) be the square diagram with \(N\) rows of length \(N\). Then \(X_{N}\) is a full \(N\times N\) matrix with i.i.d. entries, \(W_{N}\) is a classical random covariance matrix, and \(F_{N}\) converges almost surely to the Marchenko-Pastur distribution (6). The second known case concerns triangular (staircase) shapes, for which the limiting spectral distribution is instead the Dykema-Haagerup measure, whose moments (7) are \(\frac{k^{k}}{(k+1)!}\). ### Balanced shapes We would like to study spectral properties of \(\lambda\)-shaped random matrices for more general increasing sequences \(\left(\lambda^{(N)}\right)_{N\geq 1}\) of Young diagrams. This amounts to understanding the large-\(N\) limit of the _moments_ \[\lim_{N\to\infty}\frac{1}{\ell_{N}}\mathbb{E}\,\mathrm{Tr}\,W_{N}^{k},\quad k=0,1,2,\ldots, \tag{9}\] where \(W_{N}\) is defined in (2). We expect to find a nontrivial limiting spectral distribution if the sequence \(\lambda^{(N)}\) 'converges' to a limit shape (the 'macroscopic shape' of \(\lambda^{(N)}\)). The first moment (\(k=1\)) calculation makes this a bit more precise, \[\frac{1}{\ell_{N}}\mathbb{E}\,\mathrm{Tr}\,W_{N}=\frac{1}{\ell_{N}}\frac{1}{N}\sum_{(i,j)\in\lambda^{(N)}}\mathbb{E}|X_{ij}|^{2}=\frac{\left|\lambda^{(N)}\right|}{N\ell(\lambda^{(N)})}. \tag{10}\] Therefore, in order to have a nontrivial limit distribution one needs (at least) the length of the partitions to scale like the square root of the weight, \[N\ell_{N}\sim|\lambda^{(N)}|,\quad\text{as}\quad N\to\infty. \tag{11}\] Young diagrams satisfying such a _growth condition_ are called _balanced Young diagrams_ [9]. If we view a Young diagram as a geometric object, Eq. (11) suggests considering sequences that, after rescaling as \(\frac{1}{N\ell_{N}}\lambda^{(N)}\), tend to a limit shape \(\lambda\). The easiest example of balanced Young diagrams is the sequence of dilations of a fixed partition. Let \(\lambda\in\mathcal{P}\) and consider the sequence \(\lambda^{(N)}=N\lambda\).
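As an illustration (a minimal sketch of ours, not from the paper), one can sample a \(\lambda\)-shaped complex Gaussian matrix for a dilated shape and observe the first-moment identity (10) on a single realisation:

```python
# Minimal sketch (ours): build a lambda-shaped Gaussian matrix X_N for the
# dilated shape lambda^(N) = N*lambda, form W_N = X_N X_N^*/N as in (2),
# and compare Tr(W_N)/l_N with |lambda^(N)|/(N*l_N) from Eq. (10).
import numpy as np

rng = np.random.default_rng(0)
lam, N = (5, 4, 4, 1), 40
lam_N = tuple(N * p for p in lam for _ in range(N))    # dilation N*lambda

X = np.zeros((len(lam_N), lam_N[0]), dtype=complex)
for i, p in enumerate(lam_N):                          # fill the diagram
    X[i, :p] = (rng.standard_normal(p) + 1j * rng.standard_normal(p)) / np.sqrt(2)

W = X @ X.conj().T / N
print(np.trace(W).real / len(lam_N))   # fluctuates around the next value
print(sum(lam_N) / (N * len(lam_N)))   # = |lambda|/l(lambda) = 14/4 = 3.5
```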
For such a sequence the ratio \(\frac{\left|\lambda^{(N)}\right|}{N\ell_{N}}=\frac{\left|\lambda\right|}{\ell(\lambda)}\) is constant. ## 4 Block-shaped random matrices Fix a positive integer \(r\) and consider the staircase partition \(\mathsf{P}_{r}=(r,r-1,\ldots,1)\) of length \(r\). Define the sequence \(\lambda^{(N)}=N\,\mathsf{P}_{r}\). In this case \(\ell_{N}=Nr\), and \(\lambda^{(N)}\subset\lambda^{(N+1)}\) for all \(N\). The matrix \(X_{N}\) is a _block-shaped random matrix_ with \(\binom{r+1}{2}\) nonzero blocks of size \(N\times N\). **Example 1**.: _For \(r=3\), we have_ \[X_{1}=\begin{pmatrix}X_{11}&X_{12}&X_{13}\\ X_{21}&X_{22}&0\\ X_{31}&0&0\end{pmatrix},\quad X_{2}=\begin{pmatrix}X_{11}&X_{12}&X_{13}&X_{14}&X_{15}&X_{16}\\ X_{21}&X_{22}&X_{23}&X_{24}&X_{25}&X_{26}\\ X_{31}&X_{32}&X_{33}&X_{34}&0&0\\ X_{41}&X_{42}&X_{43}&X_{44}&0&0\\ X_{51}&X_{52}&0&0&0&0\\ X_{61}&X_{62}&0&0&0&0\end{pmatrix},\] \[X_{3}=\begin{pmatrix}X_{11}&X_{12}&X_{13}&X_{14}&X_{15}&X_{16}&X_{17}&X_{18}&X_{19}\\ X_{21}&X_{22}&X_{23}&X_{24}&X_{25}&X_{26}&X_{27}&X_{28}&X_{29}\\ X_{31}&X_{32}&X_{33}&X_{34}&X_{35}&X_{36}&X_{37}&X_{38}&X_{39}\\ X_{41}&X_{42}&X_{43}&X_{44}&X_{45}&X_{46}&0&0&0\\ X_{51}&X_{52}&X_{53}&X_{54}&X_{55}&X_{56}&0&0&0\\ X_{61}&X_{62}&X_{63}&X_{64}&X_{65}&X_{66}&0&0&0\\ X_{71}&X_{72}&X_{73}&0&0&0&0&0&0\\ X_{81}&X_{82}&X_{83}&0&0&0&0&0&0\\ X_{91}&X_{92}&X_{93}&0&0&0&0&0&0\end{pmatrix},\quad\mbox{etc.}\] Let \(F_{N}\) be the empirical distribution (3) of the eigenvalues of \(W_{N}\). **Theorem 1**.: _Let \(\lambda^{(N)}=N\,\mathsf{P}_{r}\). Then, the sequence \(\left(F_{N}\right)_{N\geq 1}\) converges, with probability \(1\), to the deterministic distribution \(F_{\langle r\rangle}\) with moments_ \[\int_{\mathbb{R}}x^{k}dF_{\langle r\rangle}(x)=\frac{1}{k+1}\binom{(r+1)k}{k}. \tag{12}\] The limiting moments \(m_{k}\) (multiplied by \(r\)) have a combinatorial interpretation. They enumerate plane trees whose vertices are given labels from the set \(\{1,\ldots,r\}\) in such a way that the sum of the labels along any edge is at most \(r+1\). These combinatorial objects were introduced by Gu, Prodinger and Wagner [17], extending a previous definition by Gu and Prodinger [16]. **Definition 1**.: _An \(r\)-plane tree is a pair \((T,c)\), where \(T=(V,E)\) is a plane tree, and \(c\colon V\to\{1,\ldots,r\}\) is a colouring such that \(c(u)+c(v)\leq r+1\) whenever \(\{u,v\}\in E\)._ **Proposition 1** (Gu, Prodinger and Wagner [17]).: _The number of \(r\)-plane trees on \(k+1\) vertices is_ \[C_{k}^{\langle r\rangle}=\frac{r}{k+1}\binom{(r+1)k}{k}. \tag{13}\] For recent refined formulae see [29]. The integers \(\left(C_{k}^{\langle r\rangle}\right)_{k\geq 0}\) are _generalised Catalan numbers_. Here are a few of their values. \[\begin{array}{c|rrrrrr}k&C_{k}^{\langle 1\rangle}&C_{k}^{\langle 2\rangle}&C_{k}^{\langle 3\rangle}&C_{k}^{\langle 4\rangle}&C_{k}^{\langle 5\rangle}&C_{k}^{\langle 6\rangle}\\ 0&1&2&3&4&5&6\\ 1&1&3&6&10&15&21\\ 2&2&10&28&60&110&182\\ 3&5&42&165&455&1020&1995\\ 4&14&198&1092&3876&10626&24570\\ 5&42&1001&7752&35420&118755&324632\\ 6&132&5304&57684&339300&1391280&4496388\\ 7&429&29070&444015&3362260&16861455&64425438\\ 8&1430&163438&3506100&34179860&209638330&946996050\\ 9&4862&937365&28242984&354465254&2658968130&14200613889\\ 10&16796&5462730&231180144&3735373880&34270012530&216384285936\end{array}\] The sequences \(\left(C_{k}^{\langle r\rangle}\right)_{k\geq 0}\) for \(r=1,2,3,4\) are the entries A000108, A007226, A007228, and A124724 in The On-Line Encyclopedia of Integer Sequences [28].
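Theorem 1 is straightforward to probe numerically. Below is a minimal Monte Carlo sketch (ours, not from the paper) for \(r=2\): a single large block-shaped sample already reproduces the first few limiting moments \(m_{k}=\frac{1}{k+1}\binom{(r+1)k}{k}\) to good accuracy.

```python
# Minimal sanity check (ours) of Theorem 1 for r = 2: sample a block-shaped
# Gaussian matrix, form W_N = X_N X_N^*/N, and compare the empirical
# eigenvalue moments with m_k = binom((r+1)k, k)/(k+1).
import numpy as np
from math import comb

rng = np.random.default_rng(1)
r, N = 2, 400
staircase = tuple(r - i for i in range(r))                 # P_r = (2, 1)
lam_N = tuple(N * p for p in staircase for _ in range(N))  # N * P_r

X = np.zeros((r * N, r * N), dtype=complex)
for i, p in enumerate(lam_N):
    X[i, :p] = (rng.standard_normal(p) + 1j * rng.standard_normal(p)) / np.sqrt(2)

eigs = np.linalg.eigvalsh(X @ X.conj().T / N)
for k in range(1, 5):
    m_k = comb((r + 1) * k, k) / (k + 1)
    print(k, np.mean(eigs**k), m_k)   # empirical vs limiting moment
```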
_Remark 1_.: For \(r=1\), the moments coincide with the Catalan sequence \[C_{k}^{\langle 1\rangle}=\frac{1}{k+1}\binom{2k}{k}. \tag{14}\] For \(r=2\), \(X_{N}\) is a 'three-blocks' matrix model studied by Flynn-Connolly [13], who proved that the moments are \(\frac{1}{k+1}\binom{3k}{k}\). This sequence is entry A005132 in The On-Line Encyclopedia of Integer Sequences. For large \(r\), we recover the moments (7) of the Dykema-Haagerup measure, \[\lim_{r\to\infty}\frac{1}{r^{k+1}}C_{k}^{\langle r\rangle}=\frac{1}{k+1}\frac{k^{k}}{k!}. \tag{15}\] We now present a few results on the limiting measure. They are motivated by the observation (by Ledoux [18]) that a semicircular variable is equal in distribution to the product of the square root of a uniform random variable and an independent arcsine random variable. We denote by \(\mathrm{U}(0,\ell)\) a random variable uniformly distributed in the interval \([0,\ell]\), and simply by \(\mathrm{U}\) a random variable uniformly distributed in the unit interval \([0,1]\). By \(\mathrm{B}(a,b)\) we denote a beta random variable with parameters \(a,b>0\); it has support in the interval \([0,1]\), with density \[\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}x^{a-1}(1-x)^{b-1}, \tag{16}\] where \(\Gamma(\cdot)\) is the Euler Gamma function. Let \(Y_{\mathrm{MP}}\) be a random variable with Marchenko-Pastur distribution (6). Then, \(Y_{\mathrm{MP}}\) is equal in distribution to the product of a uniform random variable on the interval \([0,4]\) and an independent arcsine variable in the interval \([0,1]\). In formulae: \[Y_{\mathrm{MP}}\stackrel{\mathrm{d}}{=}\mathrm{U}(0,4)\,\mathrm{B}(1/2,1/2). \tag{17}\] This is equivalent to the above mentioned factorisation of semicircular variables [18]. Indeed, if \(\mathrm{U},\mathrm{U}^{\prime}\) are independent and uniformly distributed on \([0,1]\), we can write \[\mathbb{E}Y_{\mathrm{MP}}^{k}=\mathbb{E}(\mathrm{U}A)^{k}, \tag{18}\] where \(A\stackrel{\mathrm{d}}{=}\left(2\cos\pi\mathrm{U}^{\prime}\right)^{2}\). The random variable \(A\) is the rescaled squared projection of a uniform point on the unit semicircle, hence an arcsine random variable. See [8] for a 'semiclassical' interpretation for Gaussian random matrices.
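The factorisation (17) can be verified exactly with rational arithmetic; the following is a minimal sketch of ours, using the moment formulas stated below in Eq. (50).

```python
# Exact check (ours) of the factorisation (17): the k-th moment of
# U(0,4)*B(1/2,1/2), computed from E U(0,l)^k = l^k/(k+1) and
# E B(a,b)^k = prod_{j<k} (a+j)/(a+b+j), equals the Catalan number
# binom(2k,k)/(k+1), i.e. the k-th moment of the Marchenko-Pastur law.
from fractions import Fraction
from math import comb

for k in range(10):
    mom_U = Fraction(4**k, k + 1)                   # E U(0,4)^k
    mom_B = Fraction(1)
    for j in range(k):                              # E B(1/2,1/2)^k
        mom_B *= Fraction(1 + 2 * j, 2 * (1 + j))   # (1/2+j)/(1+j)
    assert mom_U * mom_B == Fraction(comb(2 * k, k), k + 1)
print("moments of U(0,4)*B(1/2,1/2) match the Marchenko-Pastur moments")
```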
**Proposition 2**.: _Let \(r\geq 1\). Set \(L(r)=\frac{(r+1)^{r+1}}{r^{r}}>0\)._ 1. _The Stieltjes-Cauchy transform of_ \(F_{\langle r\rangle}\)_,_ \[G_{\langle r\rangle}(z):=\int_{\mathbb{R}}\frac{1}{z-x}\,\mathrm{d}F_{\langle r\rangle}(x), \tag{19}\] _has the hypergeometric representation_ \[G_{\langle r\rangle}(z)=\frac{1}{r+1}\left(1-{}_{r}F_{r-1}\left[\begin{matrix}-\frac{1}{r+1},-\frac{2}{r+1},\ldots,-\frac{r}{r+1}\\ -\frac{1}{r},-\frac{2}{r},\ldots,-\frac{r-1}{r}\end{matrix};\frac{L(r)}{z}\right]\right). \tag{20}\] 2. _If_ \(Y_{\langle r\rangle}\) _is a real random variable with distribution_ \(F_{\langle r\rangle}\)_, then the following identity in distribution holds_ \[Y_{\langle r\rangle}\stackrel{\mathrm{d}}{=}\mathrm{U}\left(0,L(r)\right)\prod_{j=1}^{r}\mathrm{B}\left(\frac{j}{r+1},\frac{j}{r}-\frac{j}{r+1}\right), \tag{21}\] _where the variables on the right are jointly independent._ 3. _Let_ \(\mathrm{U},\mathrm{U}^{\prime}\) _be independent and uniformly distributed on_ \([0,1]\)_. Then,_ \[\mathbb{E}Y_{\langle r\rangle}^{k}=\mathbb{E}\left(\mathrm{U}A_{\langle r\rangle}\right)^{k}, \tag{22}\] _where_ \(A_{\langle r\rangle}\stackrel{\mathrm{d}}{=}e^{\mathrm{i}(r-1)\pi\mathrm{U}^{\prime}}\left(2\cos\pi\mathrm{U}^{\prime}\right)^{r+1}\)_._ _Remark 2_.: For the first values of \(r\geq 1\) we get \[Y_{\langle 1\rangle}\stackrel{\mathrm{d}}{=}\mathrm{U}\left(0,\frac{2^{2}}{1^{1}}\right)\mathrm{B}\left(\frac{1}{2},\frac{1}{2}\right), \tag{23}\] \[Y_{\langle 2\rangle}\stackrel{\mathrm{d}}{=}\mathrm{U}\left(0,\frac{3^{3}}{2^{2}}\right)\mathrm{B}\left(\frac{1}{3},\frac{1}{2\cdot 3}\right)\mathrm{B}\left(\frac{2}{3},\frac{2}{2\cdot 3}\right), \tag{24}\] \[Y_{\langle 3\rangle}\stackrel{\mathrm{d}}{=}\mathrm{U}\left(0,\frac{4^{4}}{3^{3}}\right)\mathrm{B}\left(\frac{1}{4},\frac{1}{3\cdot 4}\right)\mathrm{B}\left(\frac{2}{4},\frac{2}{3\cdot 4}\right)\mathrm{B}\left(\frac{3}{4},\frac{3}{3\cdot 4}\right), \tag{25}\] etc. For \(r=1\), Eq. (23) coincides with the factorisation (17). For \(r=2\), Eq. (24) is equivalent to a formula proved by Mlotkowski and Penson [24, Proposition 3.1]. We can use the decomposition (21) to write 'explicit' expressions for the densities \(F^{\prime}_{\langle r\rangle}\). For \(r=1\) we have \[F^{\prime}_{\langle 1\rangle}(x)=\frac{1}{2\pi}\sqrt{\frac{L(1)-x}{x}}\,\chi_{[0,L(1)]}(x), \tag{26}\] which is the Marchenko-Pastur density \(F^{\prime}_{\mathrm{MP}}(x)\). For \(r=2\), the decomposition yields a closed-form density in terms of radicals; the resulting expression is lengthy and is equivalent to the formula in [24, Theorem 3.1] proved there by Mellin inversion. For generic values of \(r\), a 'direct' way to numerically compute \(F^{\prime}_{\langle r\rangle}(x)\) is by applying the Stieltjes inversion formula \[F^{\prime}_{\langle r\rangle}(x)=-\frac{1}{\pi}\lim_{\epsilon\searrow 0}\operatorname{Im}G_{\langle r\rangle}(x+\mathrm{i}\epsilon) \tag{28}\] to Equation (20). This is how we made the numerical plots shown in Fig. 1. In fact, one could, as in [30], write \(F^{\prime}_{\langle r\rangle}(x)\) as the inverse Mellin transform of the moments. Since the moments are products of ratios of Pochhammer symbols, the resulting density would be a Meijer G-function (see the note by Dunkl [10]). Moreover, from the previous Proposition we can get some precise information on the support of \(F^{\prime}_{\langle r\rangle}(x)\) and on its behavior at the edges. **Corollary 1**.: _The measure \(F_{\langle r\rangle}\) has a density \(F^{\prime}_{\langle r\rangle}\), with support_ \[\mathrm{supp}\,F^{\prime}_{\langle r\rangle}:=\overline{\left\{x\colon F^{\prime}_{\langle r\rangle}(x)>0\right\}}=\left[0,L(r)\right]. \tag{29}\] _Moreover, at the edges of the support the density behaves as_ \[F^{\prime}_{\langle r\rangle}(x)\sim\begin{cases}c\,x^{-\frac{r}{r+1}},&\text{as}\quad x\searrow 0,\\ c^{\prime}\,\sqrt{L(r)-x},&\text{as}\quad x\nearrow L(r),\end{cases} \tag{30}\] _for some constants \(c,c^{\prime}>0\)._

Figure 1: Plot of the densities \(F^{\prime}_{\langle r\rangle}(x)\) for several values of \(r\).

The density vanishes as a square root at the 'soft edge' \(x=L(r)\). At the 'hard edge' \(x=0\) the density has an integrable singularity \(x^{-\frac{r}{r+1}}\). For \(r=1\) (Wishart matrices) this is the classical \(x^{-\frac{1}{2}}\) divergence, but for \(r>1\) the singularity is stronger.
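The factorisation (21) is also straightforward to test by simulation. The following minimal sketch (ours, not from the paper) samples the beta product for \(r=3\) and compares sample moments with the limiting moments (12).

```python
# Monte Carlo check (ours) of the factorisation (21) for r = 3: sample
# Y = U(0, L(r)) * prod_{j=1}^{r} B(j/(r+1), j/r - j/(r+1)) and compare
# the sample moments with m_k = binom((r+1)k, k)/(k+1).
import numpy as np
from math import comb

rng = np.random.default_rng(2)
r, n = 3, 10**6
L = (r + 1) ** (r + 1) / r**r
Y = rng.uniform(0.0, L, size=n)
for j in range(1, r + 1):
    Y *= rng.beta(j / (r + 1), j / (r * (r + 1)), size=n)  # b = j/r - j/(r+1)

for k in range(1, 4):
    print(k, (Y**k).mean(), comb((r + 1) * k, k) / (k + 1))
```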
_Remark 3_.: The generalised Catalan numbers \[C^{\langle r\rangle}_{k}=\frac{r}{k+1}\binom{rk+k}{k} \tag{31}\] are related to the better-known _Fuss-Catalan_ sequences \[FC_{r,k}=\frac{1}{rk+1}\binom{rk+k}{k}. \tag{32}\] The Fuss-Catalan sequences also appear in random matrix theory and free probability as limiting moments of _products_ of independent Wishart matrices [2, 19, 23]. As shown by Penson and Życzkowski [30], the Fuss-Catalan numbers \(FC_{r,k}\) are the moment sequence of a distribution \(F_{FC_{r}}(x)\) whose density is a Meijer G-function, with support on the interval \([0,L(r)]\). ## 5 Proofs Theorem 1 was discovered by a combination of experimentation and guessing, and the proof is based on matrix moment calculations combined with the exact solution for the enumeration of \(r\)-plane trees (Proposition 1). Proof of Theorem 1.: Let \[m_{k,N}:=\int_{\mathbb{R}}x^{k}dF_{N}(x),\quad\text{and}\quad m_{k}=\frac{1}{k+1}\binom{(r+1)k}{k}. \tag{33}\] **Claim 1**.: _Each \(F_{N}\) has moments of all orders._ **Claim 2**.: _For all \(k\), the moment sequence \(m_{k,N}\) converges to \(m_{k}\), almost surely._ **Claim 3**.: _The sequence \((m_{k})_{k\geq 0}\) uniquely determines the distribution \(F_{\langle r\rangle}\)._ By the _moment method_, the three claims imply Theorem 1. Note that \(F_{N}\) corresponds to a random measure with _finite_ support. Hence, Claim 1 is immediate. Claim 3 can be shown by checking Riesz's condition [4, Lemma B.2], \[\liminf_{k}\frac{1}{k}m_{2k}^{\frac{1}{2k}}<\infty. \tag{34}\] Indeed, Stirling's approximation formula implies \[\lim_{k\to\infty}\frac{1}{k}\left[\frac{1}{2k+1}\binom{2(r+1)k}{2k}\right]^{\frac{1}{2k}}=0. \tag{35}\] It remains to prove Claim 2: \[\lim_{N\to\infty}m_{k,N}=m_{k},\quad\text{almost surely}. \tag{36}\] Note that \[m_{k,N}=\int_{\mathbb{R}}x^{k}dF_{N}(x)=\frac{1}{rN}\operatorname{Tr}W_{N}^{k}. \tag{37}\] We may first assume that the entries \(X_{ij}\) are uniformly bounded. Under this assumption we can prove the following two lemmas. **Lemma 1**.: _Let \(\lambda^{(N)}=N\,\mathsf{P}_{r}\). Then, for all \(k\in\mathbb{N}\),_ \[\lim_{N\to\infty}\mathbb{E}m_{k,N}=m_{k}. \tag{38}\] **Lemma 2**.: _Let \(\lambda^{(N)}=N\,\mathsf{P}_{r}\). Then, for all \(k\in\mathbb{N}\),_ \[\mathbb{E}\left(m_{k,N}-\mathbb{E}m_{k,N}\right)^{2}=\operatorname{O}\left(\frac{1}{N^{2}}\right). \tag{39}\] Lemma 1 states the convergence in expectation of the moments. Lemma 2 implies, through Borel-Cantelli, the almost sure convergence (36). We now show that the theorem holds true under the sole hypothesis of finite second moment \(\mathbb{E}|X_{ij}|^{2}=1\). This is done by the standard procedure of truncation. Fix a constant \(C\) and consider the matrix \(\hat{X}_{N}\) whose entries satisfy, for \(1\leq i,j\leq rN\), \[\hat{X}_{N}(i,j)=\frac{X_{ij}1_{|X_{ij}|<C}-\mathbb{E}X_{ij}1_{|X_{ij}|<C}}{\sqrt{\mathbb{E}\left|X_{ij}1_{|X_{ij}|<C}-\mathbb{E}X_{ij}1_{|X_{ij}|<C}\right|^{2}}}. \tag{40}\] The matrix \(\hat{X}_{N}\) is a truncated and standardised version of \(X_{N}\). Denote by \(\hat{F}_{N}\) the empirical spectral distribution of \(\hat{X}_{N}\). Recall that [3, Appendix C] the space of probability measures on \(\mathbb{R}\) equipped with the weak topology is metrizable with the _Levy distance_, defined for any pair of distribution functions as \[d_{\text{Levy}}(F,G):=\inf\{\epsilon>0\colon F(x-\epsilon)-\epsilon\leq G(x)\leq F(x+\epsilon)+\epsilon,\text{ for all }x\in\mathbb{R}\}. \tag{41}\]
For the reader's convenience, we state explicitly a bound that can be extracted from the book of Bai and Silverstein [4, Theorem 3.7, pp. 47-48]. For all \(C>0\), \[\limsup_{N}\left(d_{\text{Levy}}(\hat{F}_{N},F_{N})\right)^{4}\leq 4\mathbb{E}\left(\left|X_{ij}\right|^{2}1_{|X_{ij}|>C}\right)\\ +2\left(1-\left(\mathbb{E}\left|X_{ij}1_{|X_{ij}|<C}-\mathbb{E}X_{ij}1_{|X_{ij}|<C}\right|^{2}\right)^{2}\right),\] almost surely. The above inequality is an application of classic matrix inequalities and the strong law of large numbers. Note that the right-hand side of the inequality can be made arbitrarily small, by choosing \(C\) large, uniformly in \(i,j\). From the fact that weak convergence is equivalent to convergence with respect to the Levy metric, we conclude that \(\hat{F}_{N}\) and \(F_{N}\) have the same limit in distribution. (See also [3, Theorem 2.1.21].) It remains to prove the combinatorial lemmas. We follow the scheme of proof of [4, Theorem 3.7], and its variant by Cheliotis [6] (arXiv version). Proof of Lemmas 1 and 2.: We start from the exact formula \[\mathbb{E}m_{k,N}=\frac{1}{rN}\mathbb{E}\operatorname{Tr}W_{N}^{k}=\frac{1}{rN^{k+1}}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{k}=1\\ j_{1},\ldots,j_{k}=1\end{subarray}}^{rN}\mathbb{E}(X_{N})_{i_{1}j_{1}}\overline{(X_{N})_{i_{2}j_{1}}}\cdots(X_{N})_{i_{k}j_{k}}\overline{(X_{N})_{i_{1}j_{k}}}. \tag{42}\] Now, we group row and column indices within the same block as follows: \[b\colon[rN]\longrightarrow[r],\qquad i\longmapsto b(i)=\left\lceil\frac{i}{N}\right\rceil. \tag{43}\] (We use the standard notation \([m]:=\{1,\ldots,m\}\).) We introduce the following subset of multi-indices: \[\Lambda:=\left\{(i,j)\in[rN]^{k}\times[rN]^{k}\colon\begin{array}{l}b(i_{m})+b(j_{m})\leq r+1\\ b(i_{m+1})+b(j_{m})\leq r+1\end{array}\text{ for all }m\in[k]\right\}. \tag{44}\] If \((i,j)\notin\Lambda\), then the corresponding word in the summation (42) contains a letter identically zero. For two \(k\)-tuples \(i,j\colon[k]\to[rN]\), we associate the bipartite graph \(G(i,j)\) with vertex set \[V(i,j)=\{(1,i(1)),(1,i(2)),\ldots,(1,i(k)),(2,j(1)),(2,j(2)),\ldots,(2,j(k))\}\] (its cardinality is not necessarily \(2k\) because of possible repetitions), and set of edges \[E(i,j)=\{(2m-1,\{(1,i(m)),(2,j(m))\}),(2m,\{(2,j(m)),(1,i(m+1))\})\colon m\in[k]\},\] with the convention \(i(k+1)\equiv i(1)\). Two graphs \(G(i,j)\) and \(G(i^{\prime},j^{\prime})\) are said to be isomorphic if one becomes the other by renaming the vertices, that is, if \(i=\sigma\circ i^{\prime}\) and \(j=\tau\circ j^{\prime}\) for some permutations \(\sigma,\tau\in S_{rN}\). With this notation, we write \[\frac{1}{rN}\mathbb{E}\operatorname{Tr}W_{N}^{k}=\frac{1}{rN^{k+1}}\sum_{i,j\colon[k]\to[rN]}\mathbb{E}X_{G(i,j)}\,1_{(i,j)\in\Lambda},\] where we encode the word \[X_{G(i,j)}:=X_{i(1)j(1)}\overline{X_{i(2)j(1)}}X_{i(2)j(2)}\overline{X_{i(3)j(2)}}\cdots X_{i(k)j(k)}\overline{X_{i(1)j(k)}} \tag{45}\] in the bipartite graph \(G(i,j)\). From \(G(i,j)\) we generate its _skeleton_ \(G_{1}(i,j)\) by identifying edges with equal ends. Formally, \(G_{1}(i,j)\) has vertex set \(V(i,j)\), and edge set \[\{\{(1,i(m)),(2,j(m))\},\{(2,j(m)),(1,i(m+1))\}\colon m\in[k]\}.\] Note that if \(G(i,j)\) contains an edge that does not appear at least twice, then \(\mathbb{E}X_{G(i,j)}=0\), because the \(X_{ij}\)'s are independent and centred. Thus we have at most \(k\) edges in the skeleton \(G_{1}(i,j)\) and so at most \(k+1\) vertices.
The number of isomorphic graphs \(G_{1}(i,j)\) with \(v\leq k+1\) vertices is \((rN)^{v}(1+o(1))\). Thus the contribution to the expectation of these terms is \(\leq\frac{(rN)^{v}}{rN^{k+1}}(1+o(1))\), which vanishes as \(N\to\infty\), unless \(v=k+1\). In the latter case \(G_{1}(i,j)\) is a tree. Therefore, the indices \((i,j)\) that contribute in the large-\(N\) limit of (42) are those for which 1. \((i,j)\in\Lambda\); 2. the associated \(G_{1}(i,j)\) has exactly \(k+1\) vertices; 3. the closed path \[(1,i(1))\to(2,j(1))\to(1,i(2))\to(2,j(2))\to\cdots\to(1,i(k))\to(2,j(k))\to(1,i(1))\] traverses each edge of the tree exactly twice. In fact, such a pair \((i,j)\) defines a _plane tree_, that is, a tree on which we have specified an order among the children of each vertex. (Among two vertices with a common parent, we declare smaller the one that appears first in the sequence \((i(1),j(1),i(2),j(2),\ldots,i(k),j(k))\), that is, the one that is visited first in the path.) So the graph \(G_{1}(i,j)\) can be identified with a plane tree \(T\) with \(k+1\) vertices. Since the \(X_{ij}\)'s are independent and have unit variance, the corresponding word has expectation \(\mathbb{E}X_{G(i,j)}=1\) if \((i,j)\in\Lambda\). Each label \(m\in[rN]\) can be written as \(m=(b-1)N+p\) for a unique choice of 'block index' \(b\in[r]\) and 'inner index' \(p\in[N]\). Therefore, for any choice of block indices, there are \(N(N-1)\cdots(N-k)=N^{k+1}[1+o(1)]\) ways to choose multi-indices \(i\) and \(j\) corresponding to the same plane tree \(T\) up to isomorphism. The condition \(1_{(i,j)\in\Lambda}\) is the colouring condition on \(T\) that makes it an \(r\)-plane tree: \[\frac{1}{rN}\mathbb{E}\operatorname{Tr}W_{N}^{k}=\frac{1}{rN^{k+1}}\sum_{\begin{subarray}{c}\text{plane trees }T\\ \text{on }k+1\text{ vertices}\end{subarray}}\sum_{\begin{subarray}{c}c\colon[k+1]\to[r]\\ c(u)+c(v)\leq r+1\\ \text{for all edges }(u,v)\in T\end{subarray}}N^{k+1}[1+o(1)]=\frac{1}{r}\#\{r\text{-plane trees on }k+1\text{ vertices}\}[1+o(1)]. \tag{46}\] We now recall Proposition 1, \[\#\{r\text{-plane trees on }k+1\text{ vertices}\}=\frac{r}{k+1}\binom{rk+k}{k}, \tag{47}\] and conclude the proof of Lemma 1. To prove Lemma 2, we write \[\frac{1}{(rN)^{2}}\left[\mathbb{E}\left(\operatorname{Tr}W_{N}^{k}\right)^{2}-\left(\mathbb{E}\operatorname{Tr}W_{N}^{k}\right)^{2}\right]=\frac{1}{(rN)^{2}}\frac{1}{N^{2k}}\sum_{i,j,i^{\prime},j^{\prime}\colon[k]\to[rN]}\left[\mathbb{E}X_{G(i,j)}X_{G(i^{\prime},j^{\prime})}-\mathbb{E}X_{G(i,j)}\mathbb{E}X_{G(i^{\prime},j^{\prime})}\right]1_{(i,j)\in\Lambda}1_{(i^{\prime},j^{\prime})\in\Lambda}\leq\frac{1}{(rN)^{2}}\frac{1}{N^{2k}}\sum_{i,j,i^{\prime},j^{\prime}\colon[k]\to[rN]}\left[\mathbb{E}X_{G(i,j)}X_{G(i^{\prime},j^{\prime})}-\mathbb{E}X_{G(i,j)}\mathbb{E}X_{G(i^{\prime},j^{\prime})}\right]. \tag{48}\] The latter can be shown to be \(\operatorname{O}((rN)^{-2})\), as detailed e.g. in [4, p. 50]. Proof of Proposition 2.: To prove i), we assume that \(G_{\langle r\rangle}(z)\) can be expanded in a neighbourhood of \(z=\infty\): \[G_{\langle r\rangle}(z)=\int_{\mathbb{R}}\frac{1}{z-x}\,\mathrm{d}F_{\langle r\rangle}(x)=\sum_{k=0}^{\infty}\frac{1}{z^{k+1}}\int_{\mathbb{R}}x^{k}\,\mathrm{d}F_{\langle r\rangle}(x)=\sum_{k=0}^{\infty}m_{k}\frac{1}{z^{k+1}}. \tag{49}\] (It can be verified, a posteriori, that this expansion holds true for \(|z|>L(r)\).) Computing the ratio of consecutive terms of the series, we identify the claimed hypergeometric representation (20).
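As a quick numerical cross-check of (20) (ours, not part of the proof), one can compare the truncated moment series (49) with the closed hypergeometric form at a point outside the support, here for \(r=2\):

```python
# Numerical check (ours) of the representation (20) for r = 2: compare the
# moment series sum_k m_k z^{-k-1} with
# (1/3)*(1 - 2F1(-1/3, -2/3; -1/2; L(2)/z)) at a point |z| > L(2).
from math import comb
import mpmath as mp

mp.mp.dps = 30
r, z = 2, mp.mpf(20)
L = mp.mpf((r + 1) ** (r + 1)) / r**r                  # L(2) = 27/4
series = mp.fsum(mp.mpf(comb((r + 1) * k, k)) / ((k + 1) * z ** (k + 1))
                 for k in range(200))                  # terms decay like (L/z)^k
closed = (1 - mp.hyp2f1(-mp.mpf(1) / 3, -mp.mpf(2) / 3,
                        -mp.mpf(1) / 2, L / z)) / (r + 1)
print(series, closed)   # the two values agree to working precision
```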
In order to prove ii), recall the moments of beta and uniform random variables, \[\mathbb{E}\mathrm{B}(a,b)^{k}=\frac{\Gamma(a+b)\Gamma(a+k)}{\Gamma(a)\Gamma(a+b+k)}=\prod_{j=0}^{k-1}\frac{a+j}{a+b+j},\qquad\mathbb{E}\mathrm{U}(0,\ell)^{k}=\frac{\ell^{k}}{k+1}. \tag{50}\] A little calculation shows that \[m_{k}=\frac{[(r+1)k]!}{(rk)!(k+1)!}=\frac{\prod_{j=0}^{(r+1)k-1}((r+1)k-j)}{\prod_{j=0}^{rk-1}(rk-j)\prod_{j=0}^{k}(k+1-j)}=\left(L(r)\right)^{k}\frac{\prod_{j=0}^{(r+1)k-1}\left(k-\frac{j}{r+1}\right)}{\prod_{j=0}^{rk-1}\left(k-\frac{j}{r}\right)\prod_{j=0}^{k}\left(k+1-j\right)}=\frac{\left(L(r)\right)^{k}}{k+1}\prod_{i=1}^{r}\prod_{j=0}^{k-1}\frac{\frac{i}{r+1}+j}{\frac{i}{r}+j}. \tag{51}\] Compare now with (50) to conclude the proof. Finally, for point iii) we notice that \[m_{k}=\frac{1}{k+1}\binom{(r+1)k}{k}=\frac{1}{k+1}\times\text{coefficient of }z^{k}\text{ in }(1+z)^{(r+1)k}=\frac{1}{k+1}\cdot\frac{1}{2\pi\mathrm{i}}\oint_{|z|=1}\frac{(1+z)^{(r+1)k}}{z^{k+1}}\,\mathrm{d}z. \tag{52}\] Now change coordinates \(z=\exp(2\pi\mathrm{i}u^{\prime})\) to get \[m_{k}=\int_{0}^{1}u^{k}du\cdot\int_{0}^{1}\left[\exp(-2\pi\mathrm{i}u^{\prime})\left(1+\exp(2\pi\mathrm{i}u^{\prime})\right)^{r+1}\right]^{k}du^{\prime}. \tag{53}\] Since \(1+\exp(2\pi\mathrm{i}u^{\prime})=2\cos(\pi u^{\prime})\exp(\mathrm{i}\pi u^{\prime})\), the inner integrand equals \(A_{\langle r\rangle}^{k}\) with \(A_{\langle r\rangle}=e^{\mathrm{i}(r-1)\pi u^{\prime}}\left(2\cos\pi u^{\prime}\right)^{r+1}\), which proves iii). The authors acknowledge the partial support by the Italian National Group of Mathematical Physics INdAM-GNFM. FDC thanks Neil O'Connell for fruitful discussions at an early stage of this work.
2309.08031
All-dielectric high-Q dynamically tunable transmissive metasurfaces
Active metasurfaces, which are arrays of actively tunable resonant elements, can dynamically control the wavefront of the scattered light at a subwavelength scale. To date, most active metasurfaces that enable dynamic wavefront shaping operate in reflection. On the other hand, active metasurfaces operating in transmission are of considerable interest as they can readily be integrated with chip-scale light sources, yielding ultra-compact wavefront shaping devices. Here, we report designs for all-dielectric low-loss active metasurfaces which can dynamically manipulate the transmitted light wavefront in the near-infrared wavelength range. Our active metasurfaces feature an array of amorphous silicon (a-Si) pillars on a silica substrate, which support resonances with quality factors (Q-factors) as high as 9800, as well as other lower-Q resonances. First, we demonstrate that high-Q resonance dips observed in transmission can be transformed into a transmission resonance peak by positioning a-Si pillar resonators at a prescribed distance from a crystalline Si substrate, defined by a silica spacer layer. Next, we report the design of metasurface geometry with realistic interconnect architectures that enable thermo-optic dynamic beam switching with switching times as low as 7.3 μs. Beam switching is observed for refractive index differences between neighboring metasurface elements as low as 0.0026. Finally, we demonstrate that metasurface structures with both high-Q and lower-Q modes and realistic interconnect architectures can be used for dynamic beam steering.
Ruzan Sokhoyan, Claudio U. Hail, Morgan Foley, Meir Y. Grajower, Harry A. Atwater
2023-09-14T21:10:54Z
http://arxiv.org/abs/2309.08031v1
# All-dielectric high-Q dynamically tunable transmissive metasurfaces ###### Abstract Active metasurfaces, which are arrays of actively tunable resonant elements, can dynamically control the wavefront of the scattered light at a subwavelength scale. To date, most active metasurfaces that enable dynamic wavefront shaping operate in reflection. On the other hand, active metasurfaces operating in transmission are of considerable interest as they can readily be integrated with chip-scale light sources, yielding ultra-compact wavefront shaping devices. Here, we report designs for all-dielectric low-loss active metasurfaces which can dynamically manipulate the transmitted light wavefront in the near-infrared wavelength range. Our active metasurfaces feature an array of amorphous silicon (a-Si) pillars on a silica (SiO\({}_{2}\)) substrate, which support resonances with quality factors (Q-factors) as high as 9800, as well as other lower-Q resonances. First, we demonstrate that high-Q resonance dips observed in transmission can be transformed into a transmission resonance peak by positioning a-Si pillar resonators at a prescribed distance from a crystalline Si substrate, defined by an SiO\({}_{2}\) spacer layer. Next, we report the design of metasurface geometry with realistic interconnect architectures that enable thermo-optic dynamic beam switching with switching times as low as 7.3 \(\upmu\)s. Beam switching is observed for refractive index differences between neighboring metasurface elements as low as 0.0026. Finally, we demonstrate that metasurface structures with both high-Q and lower-Q modes and realistic interconnect architectures can be used for dynamic beam steering. active metasurfaces, high-Q, transmissive metasurfaces, thermo-optic, Fano asymmetry parameter, beam steering ## Introduction The prospect of creating chip-scale ultracompact optical components that can be dynamically programmed to scatter or emit light with an arbitrarily shaped wavefront has long captivated researchers. Realization of chip-scale dynamically programmable optical components operating in the near-infrared or visible wavelength range could be instrumental for important applications, such as light detection and ranging (LiDAR), free space optical communications, additive manufacturing, and directed energy. The quest for these chip-scale spatial light modulators has motivated intensive research efforts in the design of active metasurfaces[1, 2, 3, 4]. A prototypical active metasurface is composed of an array of geometrically identical subwavelength resonant metasurface elements which can dynamically control the phase and amplitude of light scattered by each metasurface element[5]. Recent reports[6, 1] have experimentally demonstrated programmable metasurface chips with electric field effect control of the phase of light scattered by each metasurface element. Dynamic control of the phase at a subwavelength scale has enabled the experimental realization of beam steering and reconfigurable focusing using the same metasurface structure[1]. Notably, this demonstration[1] and the majority of other experimentally reported active metasurfaces operate in reflection. Operation in reflection mode dictates that the illuminating light source is located off-chip, which would unavoidably increase the form factor of the resulting optical system. Moreover, a reflection configuration for the illuminating light source may limit versatility, since part of the metasurface aperture will be blocked by the light source. 
Transmissive metasurfaces, on the other hand, have the potential to yield more compact monolithic optical systems, since they allow for integration with chip-based light sources such as vertical cavity surface emitting lasers (VCSELs)[7] or photonic cavity surface emitting lasers (PCSELs)[8]. Dynamic amplitude-tunable transmissive metasurfaces have been experimentally demonstrated using ITO field effect modulation[9, 10], conductive polymer electrochemical transitions[11], ionic transport[12], the Pockels effect in organic molecules[13], and the thermo-optic effect in silicon (Si)[14]. Several researchers[10, 11] have utilized transmittance modulation to achieve diffractive beam switching with modest efficiencies. However, dynamically tunable phase control is also a prerequisite for versatile wavefront manipulation. Recent experiments have demonstrated dynamic beam switching in transmission using reorientation of liquid crystals, in which a degree of phase control has contributed to the observed dynamic beam switching[2]. Previous works have also used dielectric elastomer actuators[15] and microelectromechanical systems[16] to demonstrate adaptive metalenses. These works[15, 16], however, do not allow for arbitrary wavefront reconfiguration of the transmitted light. Several designs for dynamically tunable transmissive metasurfaces have been proposed using field effect control of transmission phase[17, 18] to dynamically shape the transmitted light wavefront, but reported low optical efficiencies (<0.07%), limiting their use in practical applications. Recent theoretical work[19] has also demonstrated an all-dielectric transmissive metasurface which uses carrier injection in Si to realize subwavelength phase control in transmission as well as dynamic wavefront shaping. The optical efficiency of the reported metasurface is ~65%. The maximal phase shift reported for one-dimensional phase gradients was 215\({}^{\circ}\), for an assumed refractive index change of silicon of \(\Delta\)n ~ 0.01[19]. Inverse design offers the prospect for high-directivity beam steering using all-dielectric transmissive metasurfaces, such as those based on reorientation of liquid crystal molecules[20]. It has been shown[21] that the optical anisotropy of the active liquid crystal medium surrounding the metasurface can be used to achieve large phase modulation in transmission while maintaining an optical efficiency of ~100%. Prior research[22] has also used the concept of congener dipoles to develop a dynamic phase-change metasurface design that enables transmissive phase modulation covering over 240\({}^{\circ}\) while maintaining transmittance exceeding 80%. These recent theoretical studies[21, 22], however, do not assess wavefront shaping capabilities of the designed high-efficiency metasurfaces. Prior research[23] has also reported a thermo-optically reconfigurable metalens, which enables a continuous tunability of its focal length from 165 \(\upmu\)m to 135 \(\upmu\)m when the metalens temperature is increased from 20 \({}^{\circ}\)C to 260 \({}^{\circ}\)C. In our work, we use high quality factor subwavelength resonators as metasurface building blocks, which enables a dynamically tunable optical response upon modest modulation of an external stimulus. In the last few years, all-dielectric passive metasurfaces exhibiting high quality factors have been explored by researchers[24].
All-dielectric metasurfaces supporting delocalized photonic bound states in the continuum (BICs) have been demonstrated to show narrow-bandwidth resonances where a large electric field enhancement is observed[24, 25]. In addition to structures that support delocalized modes, individual subwavelength dielectric resonators can support so-called quasi-BIC modes, also referred to as supercavity modes[26, 27], which exhibit moderately high quality factors and are weakly coupled to the radiative continuum. The high quality factor of the supercavity modes originates from interference of multiple localized modes supported by a resonator[28]. These distinct features have enabled use of quasi-BIC metasurfaces for numerous applications such as sensing[29] and harmonic generation[30]. Quasi-BIC mode subwavelength nanolasers have also been realized[31]. Quasi-BIC modes supported by an individual cylinder, however, cannot be efficiently excited by normally incident linearly polarized light, and azimuthally polarized excitation is required[26, 27]. For transmissive metasurfaces, normal incidence illumination with linearly polarized light is important for metasurface integration with chip-scale light sources. Notably, addition of an appropriately spaced back reflector to a cylinder array enables excitation of array quasi-BIC modes with normally incident light[32], but a back reflector precludes use for transmissive metasurfaces. Excitation of quasi-BIC (or, equivalently, supercavity) modes by a normally incident linearly polarized plane wave using a single high-index rectangular parallelepiped has also been reported[33]. Here, we develop all-dielectric high-Q metasurface designs for dynamic beam switching or beam steering in transmission mode, utilizing the modest refractive index modulation achievable by thermo-optic modulation of amorphous Si (a-Si), with assumed index modulation ranging from \(\Delta\)n = 0.0026 to \(\Delta\)n = 0.01. Prior research[34] has theoretically demonstrated a high-Q beam steering active metasurface, which utilizes lithium niobate as an active material. The designed metasurface[34], however, operates in reflection. Our _transmissive_ active metasurface operates at near infrared wavelengths and can be excited by normally incident linearly polarized light, exploiting either a high-Q mode or lower-Q modes supported by an individual a-Si rectangular parallelepiped (hereafter, for brevity, referred to as a square pillar). The spectral line shape of the high-Q mode at resonance can be 'inverted' from exhibiting a dip in transmission to exhibiting a transmission peak by including a Si substrate separated from the square pillar by a silica (SiO\({}_{2}\)) layer of appropriately chosen thickness. Notably, the resonances discussed in our work also exhibit large (~300\({}^{\circ}\)) transmitted light phase spectral variation near resonance, which is a prerequisite for dynamically tunable phase shifts. We explore how refractive index modulation of modes of individual a-Si pillars can sculpt the transmitted light wavefront both in the near and far field. Finally, we report designs for physically realizable interconnect architectures to enable dynamic beam steering via thermo-optic modulation. ## Results and Discussion Our metasurface motif consists of an array of a-Si rectangular pillars on a silica (SiO\({}_{2}\)) substrate (Figs. 1a and 1b).
We consider metasurface illumination by a linearly polarized (\(x\)-polarized) plane wave from within the SiO\({}_{2}\) substrate, and study the phase and amplitude characteristics of the transmitted light. We find that this metasurface unit cell supports a high-Q optical mode for geometrical parameters \(h\) = 860 nm and \(l\) = \(w\) = 963 nm (see Figs. 1b and 1c), and a metasurface period of \(P_{x}=P_{y}=1425\) nm. The refractive indices of a-Si and SiO\({}_{2}\) are taken as 3.734[35] and 1.44, respectively. The Q-factor of the supported mode is Q = 9800, as determined by fitting the transmittance spectrum with a Fano-form lineshape (Supporting Information, Part 1). The observed transmittance dip is accompanied by a broad spectral feature in the transmitted light phase (Fig. 1c), indicating that this unit cell motif can permit design of metasurface phase gradients either via geometric tuning or via external active control. To gain further insight into the high-Q mode, we investigate its spatial mode profile (see Figs. 1d and 1e and Supporting Information, Part 2). Figure 1d indicates that in the \(x\)-direction, the electric field is tightly confined within the resonator and is enhanced by a factor of almost 80. While the largest electric field enhancement is inside the a-Si resonator, we also observe a more modest enhancement of the electric field below and above the a-Si pillar. On the other hand, in the \(y\)-\(z\) plane, which is perpendicular to the polarization of the incident light, the only non-zero electric field component is \(E_{x}\) (Fig. 1e). In the \(y\) direction, a non-negligible electric field enhancement is observed between neighboring metasurface unit cells, which could be an indication of significant inter-resonator near-field coupling in the direction perpendicular to the incoming electric field. We find that this inter-resonator coupling has broad implications for comprehensive transmitted light wavefront control. As a next step, we explore how the transmittance characteristics vary with the height of the square a-Si pillars while the width is kept fixed (Fig. 2a). We observe that both the resonance lineshape and linewidth change upon increasing the pillar height (Figs. 2b and 2c). At certain pillar heights, we observe asymmetric resonance lineshapes indicative of Fano resonances, namely a high-Q mode coupled to a broader mode or a "continuum" of modes. By fitting transmittance spectra to Fano resonance lineshapes, we obtained the dependence of the resonance quality factor and the Fano phase on the pillar height (see Supporting Information, Part 1). Notably, the Fano phase characterizes the phase difference between the narrow and broad resonant modes.

Figure 1: All-dielectric high-Q metasurfaces. a) Schematic of a transmissive metasurface, which consists of an array of amorphous silicon (a-Si) rectangular pillars on an SiO\({}_{2}\) substrate. b) Schematic of a unit cell of a transmissive metasurface. c) Transmittance and phase spectra of the metasurface depicted in a). In c), the assumed geometrical parameters are as follows: the pillar height is \(h\) = 860 nm, and the pillar length \(l\) and width \(w\) are \(l\) = \(w\) = 963 nm. The metasurface period is \(P_{x}=P_{y}=1425\) nm. The a-Si resonators are illuminated by x-polarized light from within the substrate. d) and e) show the spatial distribution of the electric field amplitude in the metasurface unit cell in the x-z and y-z planes, respectively. E\({}_{0}\) denotes the amplitude of the impinging electric field. Both in d) and e), the considered cross-section in which the electric field is calculated traverses the center of the a-Si resonator.
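The extraction of Q and the Fano phase from a transmittance spectrum can be illustrated with a short fitting script. The following is a minimal sketch under our own assumptions (a standard single-Fano-lineshape model with asymmetry parameter q = cot(Fano phase), and synthetic data in place of the simulated spectra); the exact fit function used in this work is given in the Supporting Information, Part 1.

```python
# Minimal sketch (ours) of a Fano-lineshape fit: T(e) = T0*(q+e)^2/(1+e^2)
# with detuning e = 2*(lam - lam0)/dlam; Q = lam0/dlam, and the Fano phase
# phi is related to the asymmetry parameter by q = cot(phi) (assumption).
import numpy as np
from scipy.optimize import curve_fit

def fano(lam, T0, q, lam0, dlam):
    e = 2.0 * (lam - lam0) / dlam
    return T0 * (q + e) ** 2 / (1.0 + e**2)

# Placeholder synthetic "data": a Q ~ 9800 resonance near 1535 nm + noise.
lam_data = np.linspace(1534.0, 1536.0, 400)
T_data = fano(lam_data, 0.8, 0.3, 1535.0, 1535.0 / 9800)
T_data += 0.005 * np.random.default_rng(3).standard_normal(lam_data.size)

popt, _ = curve_fit(fano, lam_data, T_data, p0=[0.5, 0.5, 1535.0, 0.2])
T0, q, lam0, dlam = popt
print("Q =", lam0 / dlam, " Fano phase (deg) =", np.degrees(np.arctan2(1.0, q)))
```

Note that q = 0 corresponds to a Fano phase of 90° and a symmetric quasi-Lorentzian lineshape, consistent with the behavior described below.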
We observe that the quality factor of the resonance increases with the pillar height, and peaks at \(h\) = 860 nm (Fig. 2b), with a value of approximately 9800. At the pillar height corresponding to the maximal Q-factor, the Fano phase passes through 90\({}^{\circ}\) (thus, the Fano asymmetry parameter is zero), indicating a symmetric quasi-Lorentzian lineshape. Interestingly, if we remove the substrate and consider an array of a-Si pillars suspended in free space, the Q-factor can be as high as 221,000 (see Supporting Information, Part 3). It may be possible to achieve even higher quality factors in free space by more systematic variation of the pillar height. When changing the metasurface period, we observe that for periods exceeding 1300 nm, the spectral position of the resonance does not change significantly when increasing the metasurface period (Supporting Information, Part 4). We also examine the spectral behavior of the transmitted light phase, which is of importance for metasurface applications. When the pillar height is below 860 nm (\(h\) \(\leq\) 860 nm), we observe that the phase of the transmitted light spans almost 360\({}^{\circ}\) when varying the wavelength of the transmitted light. On the other hand, when \(h\) > 860 nm, we observe an abrupt change in the spectral characteristics of the transmitted light (see Supporting Information, Part 5). For example, at a pillar height of \(h\) = 870 nm, the phase spans only 40\({}^{\circ}\) (as a function of wavelength). Interestingly, for \(h\) > 860 nm, the phase variation is limited despite a high quality factor. Thus, we find that the pillar height should be below 860 nm for wavefront shaping applications.

Fig. 2: Dependence of the properties of the high-Q mode on the height of the a-Si square pillar. a) Transmittance as a function of wavelength and the a-Si pillar height \(h\). b) Quality factor and Fano phase of the designed high-Q mode as a function of the a-Si pillar height \(h\). c) and d) show transmittance and phase as a function of wavelength for a-Si pillar heights of \(h\) = 850 nm and \(h\) = 870 nm, respectively.

We compare the optical mode supported by a pillar in an array to the mode of a single isolated pillar (Supporting Information, Part 5). An isolated subwavelength pillar on an SiO\({}_{2}\) substrate supports a high-Q mode with a Q-factor of 676, and the high-Q mode profile is identical to that observed in the array configuration (Fig. 1e). For an isolated a-Si pillar in free space, the Q-factor of this mode exceeds 1000, whereas, for an a-Si resonator on a substrate, the Q-factor is reduced due to mode leakage into the substrate. Scattering cross-section calculations indicate that, besides the high-Q mode, the isolated Si pillar also supports two lower-Q modes (Supporting Information, Part 5). When comparing isolated pillar mode profiles with mode profiles of a pillar in an array, we observe evidence of nonlocality for one of the lower-Q modes, since its field profile is not observed in the isolated resonator (cf. Supporting Information, Parts 5 and 3). One shortcoming of this metasurface motif is that the large phase variation is accompanied by low transmittance, and operating at the transmittance dip wavelength would result in low metasurface optical efficiency. A design modification that enables high Q-factor modes to be supported with large phase variation at a transmittance peak rather than a transmittance dip would therefore be highly desirable.
Fig. 3: a) Schematic of a transmissive metasurface. In a), the top panel shows the schematic of the metasurface unit cell while the bottom panel shows the metasurface. The proposed alternative design consists of a c-Si substrate, an SiO\({}_{2}\) spacer, followed by an a-Si rectangular pillar. b) Transmittance and phase spectra of the metasurface with a unit cell shown in a). In b), we assume that the thickness of the SiO\({}_{2}\) spacer is \(d\) = 1450 nm. The height, width, and length of the a-Si pillar are \(h\) = 845 nm and \(w\) = \(l\) = 963 nm. The assumed period values are \(P_{x}\) = 1520 nm and \(P_{y}\) = 1425 nm. c) The resonance quality factor and Fano phase as a function of the SiO\({}_{2}\) thickness. d) Transmittance and phase shift as a function of the a-Si index change at a wavelength of \(\lambda\) = 1534.8 nm. e) Intensity of the electric field in the far field for a two-level phase grating with 10 grating periods when the index change between neighboring a-Si pillars is \(\Delta n\) = 0.0026. f) Overall transmittance of the two-level grating. In f), the green dot marks the transmittance at an operating wavelength of \(\lambda\) = 1535.44 nm. The inset of f) shows the spatial distribution of the electric field in the x-z cross-section of the unit cell. The considered x-z cross-section goes through the center of the a-Si pillars.

We find that the metasurface unit cell design shown in Fig. 3a, with a crystalline silicon (c-Si) substrate and an SiO\({}_{2}\) spacer, enables these criteria to be achieved. The length and width of the a-Si pillar are as in our original design (\(l=w=963\) nm), but the height of the a-Si resonator has been slightly reduced to \(h=845\) nm, since taller pillars compromise the variation of phase as a function of wavelength. The metasurface period is now chosen as \(P_{x}=1520\) nm and \(P_{y}=1425\) nm. Increasing the metasurface period in the \(x\)-direction to \(P_{x}=1520\) nm reduces the near-field coupling between neighboring metasurface unit cells. The spectral shape of the transmittance can be controlled by changing the thickness of the SiO\({}_{2}\) spacer. We find, for example, that an SiO\({}_{2}\) spacer thickness of \(d=1450\) nm yields a large phase variation with wavelength accompanied by a transmittance peak of 37% (Fig. 3b and Supporting Information, Part 6). The electric field profiles for the modified unit cell (Fig. 3a) exhibit mode leakage into the SiO\({}_{2}\) spacer (Supporting Information, Part 6). Both the \(E_{x}\) and \(E_{z}\) components of the electric field have nonzero values in the SiO\({}_{2}\) spacer layer, and the electric field enhancement in the a-Si pillars can be as high as 120. In addition to changes in transmission, when changing the thickness \(d\) of the SiO\({}_{2}\) spacer, we observe variations in both the mode quality factor and the Fano phase (Fig. 3c). By appropriately choosing the SiO\({}_{2}\) thickness \(d\), we can tune the Fano phase over the range 0\({}^{\circ}\) to 140\({}^{\circ}\) (the Fano phase is uniquely defined between 0\({}^{\circ}\) and 180\({}^{\circ}\)). This large variation of the Fano phase is indicative of comprehensive control of the resonant lineshape, exhibiting transmission dips, transmission peaks, and asymmetric spectral shapes. Thus, by tailoring resonator mode leakage into the spacer layer, we modify the relative phase between the high-Q mode and the continuum, resulting in a change of the transmittance spectral lineshape.
As a next step, we assess metasurface beam steering (Fig. 3a). First, we consider the case in which all pillar refractive indices change by an equal amount \(\Delta n\) and calculate the phase shift and the transmittance as a function of the index change \(\Delta n\) at a given wavelength. Note that the phase shift is defined as the difference between the phase of the transmitted light at \(\Delta n\neq 0\) and at \(\Delta n=0\). Our calculations show that a modest refractive index change of \(\Delta n=3\times 10^{-3}\) results in a phase shift of ~250\({}^{\circ}\) (Fig. 3d), which is sufficient to enable beam steering. We consider the case of one-dimensional beam steering where the phases of all metasurface elements along the \(y\) direction are identical while the phases of neighboring elements in the \(x\) direction differ. Calculations for periodic arrays of identical elements suggest that an index change that would yield a 180\({}^{\circ}\) phase shift between neighboring elements in the \(x\) direction can create a two-level phase grating and strongly suppress the zeroth order beam. However, our full wave simulations indicate that the index change value derived via this method is not optimal, as we obtain an unexpectedly large zeroth order beam. The observed low diffraction efficiency is due to non-negligible near-field coupling between neighboring metasurface elements. Our calculations indicate that an index change \(\Delta n=0.0026\) between neighboring rows of a-Si resonators at a fixed operating wavelength of \(\lambda=1535.44\) nm can fully extinguish the zeroth order diffraction beam (Fig. 3e). Interestingly, for these parameters (\(\lambda=1535.44\) nm, \(\Delta n=0.0026\)), the predicted relative phase shift between light scattered by adjacent rows of pillars is only 10\({}^{\circ}\), according to the simulations of periodic arrays of identical resonators. In principle, this phase should result in a non-negligible zeroth order beam (Supporting Information, Part 6). These results indicate that such a simple ansatz for optimal beam steering does not readily apply for these high-Q metasurfaces. Metasurface optical efficiency is a key design parameter; we find that the overall transmittance of the two-level phase grating is T = 11% (Figs. 3e and 3f), which is significantly lower than the peak transmittance of ~35% derived from the identical resonator array simulations (Fig. S19). To understand the reduced transmittance, we assess the variation of the two-level phase grating transmittance with wavelength (Fig. 3f). The transmittance exhibits two distinct peaks, indicating that the resonant wavelengths of neighboring metasurface unit cells have been shifted relative to each other by the imposed index change. The near-field distribution of the electric field (see the inset of Fig. 3f) also indicates that at a given operating wavelength, the relative electric field enhancement in neighboring a-Si rectangular pillars differs quite significantly. The optimal operating wavelength, which is marked by a green dot in Fig. 3f, is located between the two resonant peak wavelengths, resulting in a lower overall transmittance.
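The role of unequal scattering amplitudes can be illustrated with a toy scalar grating model. The sketch below (ours; it ignores near-field coupling and treats each half period as a uniform complex transmission) computes the diffraction-order powers of a two-level grating from its Fourier coefficients: with equal amplitudes and a 180\({}^{\circ}\) phase difference the zeroth order vanishes exactly, while a resonance-induced amplitude imbalance between the two element rows leaves a residual zeroth-order beam.

```python
# Toy model (ours): Fourier coefficients of a two-level grating t(x) with
# complex transmissions t1, t2 over the two halves of one grating period.
# The zeroth-order power |c0|^2 vanishes only if t1 = -t2 exactly.
import numpy as np

def orders(t1, t2, n_orders=3):
    x = np.linspace(0.0, 1.0, 4096, endpoint=False)   # one grating period
    t = np.where(x < 0.5, t1, t2)                     # two-level profile
    c = np.fft.fft(t) / t.size                        # Fourier coefficients
    return {m: abs(c[m]) ** 2 for m in range(-n_orders, n_orders + 1)}

# Equal amplitudes, 180 deg apart: zeroth order fully suppressed.
print(orders(0.6, 0.6 * np.exp(1j * np.pi))[0])       # ~0
# Unequal amplitudes (detuned resonators): residual zeroth order.
print(orders(0.6, 0.33 * np.exp(1j * np.pi))[0])      # ((0.6-0.33)/2)^2
```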
The design of high-Q dynamic beam steering metasurfaces has two major challenges: i) one cannot use an intuitive phase profile to steer the beam with high steering efficiency, and the optimal beam steering conditions have to be identified via full wave simulations; ii) diffractive switching occurs only in one direction, due to the very significant inter-emitter near-field coupling in the direction perpendicular to the polarization of the incoming electric field (\(y\)-direction). Hence, it would be desirable to identify optical modes which enable two-dimensional beam steering[36]. Besides the high-Q mode, several lower-Q modes are supported, with quality factors of ~200 (Supporting Information, Parts 3 and 4). The metasurface (Fig. 1) with a-Si rectangular pillar height \(h\) = 845 nm, width and length \(w\) = \(l\) = 963 nm, and period \(P_{x}\) = \(P_{y}\) = 1500 nm exhibits a relatively low-Q photonic mode at a wavelength of 1583 nm (Fig. 4a). As seen in Fig. 4a, the resonant dip observed at a wavelength of 1583 nm is also accompanied by a large spectral variation of the phase. When changing the refractive index of the a-Si pillars by \(\Delta n\) = 0.01 everywhere in the metasurface, we observe a dynamically tunable phase shift of ~285\({}^{\circ}\) (Fig. 4b), which is large enough to enable dynamic beam steering. Next, we investigate whether the lower-Q mode can be used for dynamic beam steering, in two configurations: i) when the beam is steered in the direction perpendicular to the polarization of the incoming electric field (transverse electric or TE case); ii) when the steering direction is aligned with the polarization of the incoming light (transverse magnetic or TM case). First, we consider the case of a two-level phase grating and assume that the a-Si index change between neighboring element rows is \(\Delta n\) = 0.0048 (for a schematic of a grating period see the insets of Figs. 4c and 4d). At \(\Delta n\) = 0.0048, neighboring metasurface rows are expected to exhibit a phase difference of 180\({}^{\circ}\) at a wavelength of \(\lambda\) = 1583.85 nm (Fig. 4b). Thus, we expect near-complete suppression of the specularly transmitted beam. However, our full wave simulations show significant transmission at normal incidence, for two reasons: i) near-field coupling between neighboring metasurface elements, and ii) the difference in scattered light amplitude from neighboring elements[37]. To fully suppress the normally transmitted beam at an operating wavelength of \(\lambda\) = 1583.85 nm, we developed an optimization procedure for the pillar refractive indices to maximize the diffraction efficiency of a desired order (see Methods), resulting in full suppression of the zeroth diffraction order at an operating wavelength of \(\lambda\) = 1583.85 nm for both TE and TM polarizations of the incoming light (Figs. 4c and 4d). In the case of TE polarization, the optimized refractive index of one of the a-Si pillars within the grating period retained its original value (\(n_{1}\) = 3.734), and the refractive index difference \(\Delta n\) between neighboring a-Si pillars is \(\Delta n\) = 0.0055, yielding a phase shift of 200\({}^{\circ}\) (see Fig. 4b), close to the originally chosen phase shift of 180\({}^{\circ}\). In the case of TM polarization, the optimization procedure yielded a-Si refractive indices \(n_{1}\) = 3.7353 and \(n_{2}\) = 3.7440.
The corresponding phase difference for adjacent a-Si pillars is 270\({}^{\circ}\), which significantly deviates from the original value of 180\({}^{\circ}\). For TM polarization, full suppression of the zeroth diffraction order thus occurs for a phase profile which significantly differs from the phase shift predicted for an uncoupled resonator array, indicating that inter-element near-field coupling is more significant for TM than for TE polarization. Figure 4: Beam steering using the lower-Q mode. a) Transmittance and phase spectra of the metasurface with the unit cell shown in Fig. 1b. In a), the assumed metasurface period values are \(P_{x}\) = 1500 nm and \(P_{y}\) = 1500 nm. The assumed geometrical parameters are as follows: the pillar height is \(h\) = 845 nm, and the pillar length and width are \(l\) = \(w\) = 963 nm. b) Transmittance and phase shift as a function of the a-Si index change at a wavelength of \(\lambda\) = 1583.85 nm. c) Intensity of the electric field in the far field as a function of the polar (steering) angle for a two-level phase grating when the incoming electric field is polarized perpendicular to the steering direction (TE polarization). d) Intensity of the electric field in the far field for a two-level phase grating when the incoming electric field is parallel to the steering direction (TM polarization). e) Near-field electric field distribution in the two-level phase grating depicted in the inset of c), which corresponds to the case of TE polarization. f) Near-field electric field distribution in the two-level phase grating depicted in the inset of d), which corresponds to the case of TM polarization. In e) and f), E\({}_{0}\) denotes the amplitude of the impinging electric field. g) and h) plot the intensity of the electric field in the far field as a function of the steering angle in the cases of three-level and four-level phase gratings, respectively. g) and h) correspond to the case of TE polarization. The insets of g) and h) show the near-field distribution of the electric field in each case. The overall transmittance values are specified in the insets of c), d), g), and h). The operating wavelength is \(\lambda\) = 1583.85 nm. Although we observed diffractive switching of a two-level phase grating, this analysis does not clarify whether intermediate steering angles are possible using a blazed grating-type design approach. In the case of TM polarization, using three- and four-level blazed grating phase profiles, we did not obtain a highly directional beam owing to near-field coupling between neighboring metasurface elements. However, for TE polarization, we demonstrated beam steering to angles of 20.6\({}^{\circ}\) and 15.2\({}^{\circ}\) with reasonable diffraction efficiencies. To steer a beam to a polar angle of 20.6\({}^{\circ}\), we use Figure 4b to construct a three-level phase profile with relative phases between elements of 0\({}^{\circ}\), 120\({}^{\circ}\), and 240\({}^{\circ}\), resulting in beam steering with a diffraction efficiency of 60% at an operating wavelength of \(\lambda\) = 1583.85 nm (Fig. 4g). Figure 5: Thermo-optic beam switching using the high-Q mode with a realistic interconnect architecture. In a), schematic of the proposed metasurface. In b), schematic of the proposed metasurface unit cell. In a)-b), the square a-Si pillars are connected via a-Si bars. Each a-Si pillar is placed upon an SiO\({}_{2}\) pedestal to enhance thermal insulation between neighboring metasurface elements.
The width and length of the pillar are \(w=l=963\) nm, and the height of the pillar is \(h=850\) nm. The width of the bars is \(\delta=50\) nm, and the height of the bars is 850 nm. The SiO\({}_{2}\) pedestal has the shape of a rectangular pillar with a length and width of \(a=b=200\) nm, and a height of 382 nm. The thickness of the planar SiO\({}_{2}\) spacer is \(d=380\) nm. The whole structure is built on a c-Si substrate. In c), the intensity of the electric field in the far field for a two-level phase grating when the incoming plane wave is \(x\)-polarized. In c), the operating wavelength is \(\lambda\) = 1537.15 nm. In d), the overall transmittance spectrum corresponding to the two-level phase grating studied in c). In e), the intensity of the electric field in the far field for a two-level phase grating when the incoming plane wave is \(y\)-polarized. In e), the operating wavelength is \(\lambda\) = 1534.25 nm. In f), the overall transmittance spectrum corresponding to the two-level phase grating studied in e). Insets of d) and f) show the spatial distribution of the electric field in the x-z cross-section of the grating period at the operating wavelengths in the cases of x- and y-polarized incoming light, respectively. The refractive index difference between neighboring metasurface elements is \(\Delta n=0.0026\). To steer the beam to a polar angle of 15.2°, a phase profile of 0°, 90°, 180°, 270° was constructed, resulting in a diffraction efficiency of 38% at an operating wavelength of \(\lambda\) = 1583.85 nm and 46% at \(\lambda\) = 1584.16 nm. To improve the diffraction efficiency at \(\lambda\) = 1583.85 nm, we performed a full wave optimization procedure, yielding a diffraction efficiency of 52% (Figure 4h), with a-Si refractive indices \(n_{1}\) = 3.7340, \(n_{2}\) = 3.7358, \(n_{3}\) = 3.7386, and \(n_{4}\) = 3.7417. Thus, for TE-polarized incident light, our metasurface can steer the beam to intermediate angles between 0° and 20.6°. Having determined that both high- and lower-Q modes can be used for transmitted light wavefront manipulation, we seek a feasible design to enable refractive index variation and dynamic beam switching based on refractive index modulation of a-Si via the thermo-optic effect [14, 23, 38]. To selectively heat rows of a-Si pillars, we introduce a-Si electrodes and connect the pillars in series (Figs. 5a and 5b). In our design, we place the a-Si pillars on SiO\({}_{2}\) pedestals to limit thermal crosstalk between neighboring pillars. Fabrication entails first patterning a-Si on an SiO\({}_{2}\) spacer, followed by hydrofluoric acid wet etching of the SiO\({}_{2}\) spacer into pedestals on a c-Si substrate. Figures 5a and 5b illustrate this design - the top and bottom 50 nm-thick a-Si layers are doped, and a voltage is applied to the top and bottom layers, resulting in current flow between the electrodes through the lightly doped a-Si layer (see Fig. 5b). The induced current raises the temperature \(T\) of the a-Si pillars by Joule heating, with modified a-Si refractive index \(n(T)=n_{\text{Si}}+\Delta n(\Delta T)\). For silicon, a temperature change of \(\Delta T=10\) K changes the refractive index by \(\Delta n\) = 0.00239 [39]. Dynamic beam switching using the higher-Q mode is possible with a refractive index difference as low as \(\Delta n\) = 0.0026, corresponding to a relative temperature difference of ~11 K.
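As a rough consistency check on the numbers above (a sketch assuming an ideal \(L\)-level grating of element pitch \(P\) and the a-Si thermo-optic coefficient of [39]; the quoted angles themselves come from the full wave far-field projections):

```python
import numpy as np

# First-order steering angle of an ideal L-level phase grating with element
# pitch P: sin(theta) = lambda / (L * P). P = 1500 nm is the period of the
# lower-Q design; the thermo-optic coefficient is the a-Si value from [39].
def steering_angle_deg(lam_nm, levels, pitch_nm=1500.0):
    return np.degrees(np.arcsin(lam_nm / (levels * pitch_nm)))

print(steering_angle_deg(1583.85, 3))   # ~20.6 deg, three-level grating
print(steering_angle_deg(1583.85, 4))   # ~15.3 deg, four-level grating
print(steering_angle_deg(1537.15, 2))   # ~30.8 deg, two-level grating (high-Q)

dn_dT = 2.3e-4                          # a-Si thermo-optic coefficient (1/K)
for dn in (0.0026, 0.006):
    print(f"dn = {dn}: dT ~ {dn / dn_dT:.0f} K")
# -> ~11 K (high-Q mode) and ~26 K (lower-Q mode), agreeing with the
#    temperature differences quoted in the text to within rounding.
```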
For beam switching with lower-Q modes, a refractive index difference of \(\Delta n\) = 0.006 is required, corresponding to a relative temperature difference of \(\Delta T=25\) K. These numerical estimates suggest our metasurface can potentially enable beam steering with quite modest temperature differences. To construct a desired transmitted light phase profile, we can heat individual rows of pillars to different temperatures. For the high-Q mode, diffractive switching with a two-level phase grating can be realized by changing the temperature of every other metasurface pixel (i.e., electrically connected rows of metasurface elements) by 11 K (\(\Delta n\) = 0.0026). Heating to create a two-level phase grating deflects the light to angles of \(\theta\) = \(\pm\)30\({}^{\circ}\) (Figs. 5c and 5e). For an incident \(x\)-polarized electric field, this modest refractive index difference enables diffractive beam switching with a complete suppression of the zeroth diffraction order and an overall transmittance of \(T\) = 13 % (Fig. 5c). We also observe diffractive beam switching for \(y\)-polarized incidence (Fig. 5e) with a lower overall transmittance and at a different operating wavelength. For a-Si on an SiO\({}_{2}\) substrate, we only observed diffractive switching for the high-Q mode when the steering direction was aligned with the incoming plane-wave (TM) polarization. The a-Si electrodes and the SiO\({}_{2}\) pedestals serve to reduce near-field coupling of neighboring metasurface elements. Lower-Q modes are also capable of diffractive switching for both TE- and TM-polarized incidence, at different wavelengths. We also performed a coupled electrical and thermal analysis (see Methods). To induce a diffraction grating analogous to the one in Fig. 5a, we apply a voltage of \(V_{1}\) = 1.05 V between the top and bottom doped a-Si layers of one metasurface pixel while grounding each neighboring metasurface pixel (\(V_{2}\) = 0 V) in a periodic array. Figure 6a illustrates the calculated current density in the silicon bars in one grating period. The largest current density and most significant heat generation occur in the top and bottom doped a-Si layers of the connector bars. Figure 6b illustrates the steady state temperature distribution for one grating period for \(V_{1}\) = 1.05 V and \(V_{2}\) = 0 V, such that the temperatures of two adjacent metasurface pixels are \(T_{1}\) = 312.8 K and \(T_{2}\) = 301.4 K, respectively, yielding a refractive index difference of \(\Delta n\) = 0.0026, assuming a thermo-optic coefficient of 2.3 \(\times\) 10\({}^{-4}\) K\({}^{-1}\) [39]. The power density required to maintain this temperature difference in steady state is 1.62 \(\upmu\)W/\(\upmu\)m\({}^{2}\). The temperature distribution in the pillars and connectors is nearly uniform due to the high thermal conductivity of silicon, even though heating occurs predominantly in the connector bars. The largest temperature gradient occurs in the silicon oxide pedestal due to its large thermal resistance, which greatly reduces the thermal crosstalk between adjacent rows. Figure 6: Electrical and thermal analysis of thermo-optic beam switching. a) Steady state spatial current density distribution in a thermo-optically controlled metasurface with \(V_{1}\) = 1.05 V and \(V_{2}\) = 0 V. b) Steady state spatial temperature distribution in a thermo-optically controlled metasurface with \(V_{1}\) = 1.05 V and \(V_{2}\) = 0 V. c) Transient temperature difference between the two rows of a thermo-optically controlled metasurface when switching the voltage with a square wave of amplitude \(V\) = 1.05 V and a frequency of 100 kHz.
The geometrical parameters are identical to the ones of the structure in Fig. 5. Figure 7: Thermo-optic three-level phase grating with a realistic interconnect architecture and optimized geometry. The incoming plane wave is TM-polarized. a) Intensity of the electric field in the far field for the case of the lower-Q mode. In a), the operating wavelength is \(\lambda\) = 1567.1 nm. b) Intensity of the electric field in the far field for the case of the high-Q mode. In b), the operating wavelength is \(\lambda\) = 1531.3 nm. The insets of a) and b) show the spatial distribution of the electric field in the \(x\)-\(z\) cross-section of the grating period at the operating wavelengths. To assess metasurface dynamic performance, we carried out transient electrical and thermal simulations of dynamic thermal switching between the two pixels comprising one grating period. A response time of 7.3 \(\upmu\)s is obtained for a square wave voltage profile (see Fig. 6c), indicating that thermo-optic switching is possible at frequencies up to 140 kHz. Finally, we assess metasurface beam steering performance with realistic interconnects (Fig. 5a) when a three-level phase profile (0\({}^{\circ}\), 120\({}^{\circ}\), 240\({}^{\circ}\)) is applied to the metasurface, for both lower-Q and high-Q modes under TM-polarized plane wave illumination. This phase profile yields a maximum diffraction efficiency of 35% for the lower-Q mode and 28% for the high-Q mode (see Supporting Information). Optimization (see Methods) of the a-Si pillar refractive indices resulted in a modest improvement of diffraction efficiencies. To further increase the diffraction efficiency, we co-optimized the geometrical parameters of the structure and the refractive indices of the a-Si pillars (see Methods), enabling enhanced diffraction efficiencies of 54% and 69% for the lower-Q and the high-Q mode, respectively (see Fig. 7). ## Conclusions In summary, all-dielectric dynamically tunable transmissive metasurfaces are achievable using high-Q resonances with Q-factors approaching 10,000. Appropriate design enables the metasurface to exhibit a transmittance _peak_ (rather than a transmittance dip) at the resonant wavelength for Q-factors of ~3780 (Fig. 3a). These high quality factors enable beam steering at modest variations of the a-Si refractive index on the order of 10\({}^{-3}\), which is a unique advantage of the proposed design. Lower-Q modes can also be used for amplitude and phase modulation, but the required a-Si refractive index modulation is higher (up to 0.01) as compared with the case of the high-Q mode. Thermo-optically active elements can be addressed using interconnects that do not perturb the resonator cavity modes, and both high-Q and lower-Q modes can realize thermo-optic beam switching, with switching times as low as 7.3 \(\upmu\)s. For both high-Q and lower-Q modes, dynamic switching of both TE- and TM-polarized light is possible, although at different wavelengths. Using a three-level phase grating approach, we have shown that the metasurface with realistic interconnects is capable of dynamic beam steering, with diffraction efficiencies of 54% and 69% for lower-Q and high-Q modes, respectively. Finally, we note that the optical efficiency of the proposed metasurface can be significantly enhanced by introducing a metallic back reflector and an SiO\({}_{2}\) spacer.
The designed reflective metasurfaces may exhibit optical efficiencies >90% (Supporting Information, Part 9). ## Methods Optical simulations were performed using the finite difference time domain method (FDTD Lumerical). In our optical simulations, a normally incident linearly polarized plane wave illuminates the metasurface from within the substrate (see Figs. 1 and 2). When simulating an array of a-Si pillars, periodic boundary conditions are used in the \(x\) and \(y\) directions, and a perfectly matched layer (PML) boundary condition is used in the \(z\) direction. When considering the behavior of an isolated pillar, PML boundary conditions are used at all simulation boundaries, and the simulation volume is \(3\times 3\times 3\) \(\upmu\)m\({}^{3}\). In our simulations, the materials are assumed to be non-dispersive. The refractive indices of a-Si, SiO\({}_{2}\), and c-Si are taken as 3.734 [35], 1.44, and 3.43, respectively. When considering realistic interconnect architectures to perform thermo-optic beam switching and thermo-optic beam steering, we assume the complex refractive index of the doped a-Si layers is given as \(n=3.734\)+0.0013 i, which corresponds to a carrier density of \(6\times 10^{18}\,\mathrm{cm}^{-3}\) in the case of the \(n\)-doped Si layer and a carrier density of \(10^{19}\,\mathrm{cm}^{-3}\) in the case of the \(p\)-doped Si layer [40]. The complex refractive index of the lightly doped Si core is \(n=3.734\)+0.000025 i, which corresponds to a carrier density of \(3.2\times 10^{17}\,\mathrm{cm}^{-3}\) of \(n\)-doped Si [40]. In simulations used to generate Figs. 1 and 2, the assumed mesh in the z-direction is 5 nm while the mesh in the x- and y-directions is set to 20 nm. In our beam switching and beam steering simulations, the mesh in the x-, y-, and z-directions is set to 20 nm. When performing the near to far field projection in Fig. 3, we assume that the number of metasurface elements in the x-direction is 20, while for the near to far field projections in Figs. 4, 5, and 7, the number of assumed metasurface elements in the x-direction is 100. To optimize diffraction efficiencies, we used MATLAB to drive a multi-variable nonlinear optimization in FDTD via Lumerical's Automation Application Programming Interface (API). When performing the optimization, we used the sequential quadratic programming (SQP) method with the inverse of the maximal diffraction efficiency as the figure of merit. The phase profile was assumed periodic, and the geometry and the refractive indices of the structure within a period were varied. When optimizing diffraction efficiencies in Figure 4, only the refractive indices of the a-Si pillars were varied, so that the maximal assumed index change was \(\Delta n=0.01\). When optimizing diffraction efficiencies of the structure with realistic interconnects (Fig. 7), we ran a series of optimization runs, as sketched below. First, we co-optimized only the refractive indices of the metasurface elements. In the next step, we fixed the refractive index of the metasurface element \(n_{1}\) and co-optimized the structure height and the refractive indices of the two remaining metasurface elements. Next, we co-optimized the \(P_{y}\) period (or \(P_{x}\) period) of the structure and the refractive indices of the second and third metasurface elements while keeping the refractive index of the first metasurface element fixed. As a final optimization step, we re-optimized the refractive indices of all three metasurface elements.
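A minimal sketch of one stage of this loop (Python/SciPy stands in for the MATLAB driver; the `fdtd_diffraction_efficiency` function is a toy surrogate, since the real figure of merit comes from a Lumerical run launched through the Automation API):

```python
import numpy as np
from scipy.optimize import minimize

N_BASE = 3.734        # unperturbed a-Si refractive index, from the text
DN_MAX = 0.01         # maximal assumed index change, as in the text

def fdtd_diffraction_efficiency(indices):
    """Toy stand-in for the full-wave FDTD evaluation. The real workflow
    runs a Lumerical simulation for the given per-pillar indices and
    returns the efficiency of the target diffraction order; here we use
    a smooth surrogate peaking at an arbitrary staircase index profile."""
    target = N_BASE + np.array([0.0, 0.0033, 0.0066])
    return np.exp(-1.0e5 * np.sum((indices - target) ** 2))

def figure_of_merit(dn_free):
    # The first pillar of the three-element period keeps its original
    # index; only the two remaining offsets are optimized, mirroring the
    # staged procedure described above.
    dn = np.concatenate(([0.0], dn_free))
    return 1.0 / fdtd_diffraction_efficiency(N_BASE + dn)

res = minimize(figure_of_merit, x0=np.array([0.002, 0.005]),
               method="SLSQP", bounds=[(0.0, DN_MAX)] * 2)
print(res.x)          # -> approx [0.0033, 0.0066] for this toy surrogate
```

In the real loop each figure-of-merit evaluation costs one full wave simulation, which is why the staged strategy (index-only, then geometry plus index) described above matters.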
In the case of the lower-Q mode, the optimization yields the following values for the refractive indices of the Si pillars and the geometrical parameters of the structure: \(n_{1}=3.73416\), \(n_{2}=3.738299\), \(n_{3}=3.73778\), \(P_{x}=1440\) nm, \(P_{y}=1440\) nm, and \(h=840.75\) nm, and the observed diffraction efficiency is \(\mathrm{D_{eff}}=54\%\) (Fig. 7a). In the case of the high-Q mode, the obtained parameter values are as follows: \(n_{1}=3.73416\), \(n_{2}=3.7383\), \(n_{3}=3.744\), \(P_{x}=1520\) nm, \(P_{y}=1495.92\) nm, and \(h=841.207\) nm, and the resulting diffraction efficiency is \(\mathrm{D_{eff}}=69\%\) (Fig. 7b). Three-dimensional electrical and thermal simulations were performed using the finite element method (COMSOL Multiphysics). In our electrical simulations, we assume that, in the metasurface unit cell, the top and bottom 50 nm of a-Si are \(n\)-doped with an assumed carrier density of \(6\times 10^{18}\,\mathrm{cm}^{-3}\). The carrier density of the lightly doped core is taken to be \(3.2\times 10^{17}\,\mathrm{cm}^{-3}\). First, we use an electrical solver to obtain the volumetric heat source distribution due to Joule heating. As a next step, we use a thermal solver to obtain the temperature distribution. Our thermal simulations account for heat conduction and convection. We assume that the top of the metasurface is cooled via natural convection with a heat transfer coefficient of \(h=5\) W/m\({}^{2}\)K. We also assume that the temperature at the bottom of the substrate, 50 mm from the metasurface, is fixed at 298 K by an external heat sink. The assumed ambient temperature is also 298 K. When performing thermal simulations, we use periodic boundary conditions in the x and y directions. ## References * [1] Shirmanesh, G. K., Sokhoyan, R., Wu, P. C. & Atwater, H. A. Electro-optically Tunable Multifunctional Metasurfaces. _ACS Nano_**14**, 6912-6920, doi:10.1021/acsnano.0c01269 (2020). * [2] Li, S.-Q. _et al._ Phase-only transmissive spatial light modulator based on tunable dielectric metasurface. _Science_**364**, 1087-1090, doi:10.1126/science.aaw6747 (2019). * [3] Shaltout, A. M., Shalaev, V. M. & Brongersma, M. L. Spatiotemporal light control with active metasurfaces. _Science_**364**, eaat3100, doi:10.1126/science.aat3100 (2019). * [4] Hail, C. U., Michel, A.-K. U., Poulikakos, D. & Eghlidi, H. Optical Metasurfaces: Evolving from Passive to Adaptive. _Advanced Optical Materials_**7**, 1801786, doi:[https://doi.org/10.1002/adom.201801786](https://doi.org/10.1002/adom.201801786) (2019). * [5] Thureja, P. _et al._ Toward a universal metasurface for optical imaging, communication, and computation. _Nanophotonics_**11**, 3745-3768, doi:10.1515/nanoph-2022-0155 (2022). * [6] Park, J. _et al._ All-solid-state spatial light modulator with independent phase and amplitude control for three-dimensional LiDAR applications. _Nature Nanotechnology_**16**, 69-76, doi:10.1038/s41565-020-00787-y (2021). * [7] Xie, Y.-Y. _et al._ Metasurface-integrated vertical cavity surface-emitting lasers for programmable directional lasing emissions. _Nature Nanotechnology_**15**, 125-130, doi:10.1038/s41565-019-0611-y (2020). * [8] Hirose, K. _et al._ Watt-class high-power, high-beam-quality photonic-crystal lasers. _Nature Photonics_**8**, 406-411, doi:10.1038/nphoton.2014.75 (2014). * [9] Lee, Y. _et al._ High-Speed Transmission Control in Gate-Tunable Metasurfaces Using Hybrid Plasmonic Waveguide Mode.
_Advanced Optical Materials_**8**, 2001256, doi:[https://doi.org/10.1002/adom.202001256](https://doi.org/10.1002/adom.202001256) (2020). * [10] Howes, A., Wang, W., Kravchenko, I. & Valentine, J. Dynamic transmission control based on all-dielectric Huygens metasurfaces. _Optica_**5**, 787-792, doi:10.1364/OPTICA.5.000787 (2018). * [11] Karst, J. _et al._ Electrically switchable metallic polymer nanoantennas. _Science_**374**, 612-616, doi:10.1126/science.abj3433 (2021). * [12] Thyagarajan, K., Sokhoyan, R., Zornberg, L. & Atwater, H. A. Millivolt Modulation of Plasmonic Metasurface Optical Response via Ionic Conductance. _Advanced Materials_**29**, 1701044, doi:[https://doi.org/10.1002/adma.201701044](https://doi.org/10.1002/adma.201701044) (2017). * [13] Benea-Chelmus, I.-C. _et al._ Electro-optic spatial light modulator from an engineered organic layer. _Nature Communications_**12**, 5928, doi:10.1038/s41467-021-26035-y (2021). * [14] Zangeneh Kamali, K. _et al._ Electrically programmable solid-state metasurfaces via flash localised heating. _Light: Science & Applications_**12**, 40, doi:10.1038/s41377-023-01078-6 (2023). * [15] She, A., Zhang, S., Shian, S., Clarke, D. R. & Capasso, F. Adaptive metalenses with simultaneous electrical control of focal length, astigmatism, and shift. _Science Advances_**4**, eaap9957, doi:10.1126/sciadv.aap9957 (2018). * [16] Arbabi, E. _et al._ MEMS-tunable dielectric metasurface lens. _Nature Communications_**9**, 812, doi:10.1038/s41467-018-03155-6 (2018). * [17] Forouzmand, A. _et al._ Tunable all-dielectric metasurface for phase modulation of the reflected and transmitted light via permittivity tuning of indium tin oxide. _Nanophotonics_**8**, 415-427, doi:10.1515/nanoph-2018-0176 (2019). * [18] Park, J. & Kim, S. J. Subwavelength-spaced transmissive metallic slits for 360-degree phase control by using transparent conducting oxides. _Appl. Opt._**57**, 6027-6031, doi:10.1364/AO.57.006027 (2018). * [19] Forouzmand, A. & Mosallaei, H. A Tunable Semiconductor-Based Transmissive Metasurface: Dynamic Phase Control with High Transmission Level. _Laser & Photonics Reviews_**14**, 1900353, doi:[https://doi.org/10.1002/lpor.201900353](https://doi.org/10.1002/lpor.201900353) (2020). * [20] Chung, H. & Miller, O. D. Tunable Metasurface Inverse Design for 80% Switching Efficiencies and 144° Angular Deflection. _ACS Photonics_**7**, 2236-2243, doi:10.1021/acsphotonics.0c00787 (2020). * [21] Yang, Z., Liu, M., Komar, A., Xu, L. & Neshev, D. N. Phase-Only Tuning of Extreme Huygens Metasurfaces Enabled by Optical Anisotropy. _Advanced Optical Materials_**10**, 2101893, doi:[https://doi.org/10.1002/adom.202101893](https://doi.org/10.1002/adom.202101893) (2022). * [22] Zhuo, S. _et al._ Dynamic Transmissive Metasurface for Broadband Phase-Only Modulation Based on Phase-Change Materials. _Laser & Photonics Reviews_**17**, 2200403, doi:[https://doi.org/10.1002/lpor.202200403](https://doi.org/10.1002/lpor.202200403) (2023). * [23] Archetti, A. _et al._ Thermally reconfigurable metalens. _Nanophotonics_**11**, 3969-3980, doi:10.1515/nanoph-2022-0147 (2022). * [24] Joseph, S., Pandey, S., Sarkar, S. & Joseph, J. Bound states in the continuum in resonant nanostructures: an overview of engineered materials for tailored applications. _Nanophotonics_**10**, 4175-4207, doi:10.1515/nanoph-2021-0387 (2021). * [25] Kodigala, A. _et al._ Lasing action from photonic bound states in continuum. _Nature_**541**, 196-199, doi:10.1038/nature20799 (2017). * [26] Koshelev, K.
_et al._ Subwavelength dielectric resonators for nonlinear nanophotonics. _Science_**367**, 288-292, doi:10.1126/science.aaz3985 (2020). * [27] Melik-Gaykazyan, E. _et al._ From Fano to Quasi-BIC Resonances in Individual Dielectric Nanoantennas. _Nano Letters_**21**, 1765-1771, doi:10.1021/acs.nanolett.0c04660 (2021). * [28] Rybin, M. V. _et al._ High-Q Supercavity Modes in Subwavelength Dielectric Resonators. _Physical Review Letters_**119**, 243901, doi:10.1103/PhysRevLett.119.243901 (2017). * [29] Tittl, A. _et al._ Imaging-based molecular barcoding with pixelated dielectric metasurfaces. _Science_**360**, 1105-1109, doi:10.1126/science.aas9768 (2018). * [30] Zograf, G. _et al._ High-Harmonic Generation from Resonant Dielectric Metasurfaces Empowered by Bound States in the Continuum. _ACS Photonics_**9**, 567-574, doi:10.1021/acsphotonics.1c01511 (2022). * [31] Mylnikov, V. _et al._ Lasing Action in Single Subwavelength Particles Supporting Supercavity Modes. _ACS Nano_**14**, 7338-7346, doi:10.1021/acsnano.0c02730 (2020). * [32] Yang, G., Dev, S. U., Allen, M. S., Allen, J. W. & Harutyunyan, H. Optical Bound States in the Continuum Enabled by Magnetic Resonances Coupled to a Mirror. _Nano Letters_**22**, 2001-2008, doi:10.1021/acs.nanolett.1c04764 (2022). * [33] Huang, L., Xu, L., Rahmani, M., Neshev, D. & Miroshnichenko, A. Pushing the limit of high-Q mode of a single dielectric nanocavity. _Advanced Photonics_**3**, 016004 (2021). * [34] Klopfer, E., Dagli, S., Barton, D., Lawrence, M. & Dionne, J. A. High-Quality-Factor Silicon-on-Lithium Niobate Metasurfaces for Electro-optically Reconfigurable Wavefront Shaping. _Nano Letters_**22**, 1703-1709, doi:10.1021/acs.nanolett.1c04723 (2022). * [35] Dood, M. J. A. d., Polman, A., Zijlstra, T. & Drift, E. W. J. M. v. d. Amorphous silicon waveguides for microphotonics. _Journal of Applied Physics_**92**, 649-653, doi:10.1063/1.1486055 (2002). * [36] Hail, C. U. _et al._ High quality factor metasurfaces for two-dimensional wavefront manipulation. _arXiv:2212.05647_. * [37] Thureja, P. _et al._ Array-Level Inverse Design of Beam Steering Active Metasurfaces. _ACS Nano_**14**, 15042-15055, doi:10.1021/acsnano.0c05026 (2020). * [38] Berto, P. _et al._ Tunable and free-form planar optics. _Nature Photonics_**13**, 649-656, doi:10.1038/s41566-019-0486-3 (2019). * [39] Cocorullo, G., Della Corte, F. G., Moretti, L., Rendina, I. & Rubino, A. Measurement of the thermo-optic coefficient of a-Si:H at the wavelength of 1500 nm from room temperature to 200 °C. _Journal of Non-Crystalline Solids_**299-302**, 310-313, doi:[https://doi.org/10.1016/S0022-3093](https://doi.org/10.1016/S0022-3093)(01)01192-9 (2002). * [40] Nedeljkovic, M., Soref, R. & Mashanovich, G. Z. Free-Carrier Electrorefraction and Electroabsorption Modulation Predictions for Silicon Over the 1-14-\(\upmu\)m Infrared Wavelength Range. _IEEE Photonics Journal_**3**, 1171-1180, doi:10.1109/JPHOT.2011.2171930 (2011). ## Author information ### Corresponding Author: Harry A. Atwater *E-mail: [email protected] ## Acknowledgements This work was supported by grant #FA9550-18-1-0354 from Air Force Office of Scientific Research as well as the Meta-Imaging MURI grant #FA9550-21-1-0312 from Air Force Office of Scientific Research. ## Author contributions R.S., H.A.A., and C.U.H. conceived the project. R.S. designed the metasurface and performed optical simulations. M.F.
extracted Q-factors and Fano phase from the simulated transmittance spectra and performed a finite array analysis. C.U.H. performed electrical and thermal simulations and devised strategies minimizing thermal crosstalk. R.S. and M.Y.G. performed the full wave optimization. C.U.H., M.Y.G., and M.F. contributed to the discussion of the results. R.S. wrote the manuscript with inputs from other authors. H.A.A. supervised all aspects of the project. ### Competing interests The authors declare no competing interests. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request. Supporting Information ### 1 Extraction of the quality factor and Fano phase We extracted the quality factor and Fano phase values by fitting the transmittance spectra to the Fano formula [1]: \[T=A\frac{\left|\frac{\gamma_{0}}{2}-e^{-2i\Delta}\left(i(\omega-\omega_{0})-\frac{\gamma_{0}}{2}\right)\right|^{2}}{(\omega-\omega_{0})^{2}+\left(\frac{\gamma_{0}}{2}\right)^{2}}+T_{bg}\] Here, \(T_{bg}\) is a constant offset plus a linear background (\(T_{bg}=B+C(\omega-\omega_{0})\)), \(A\) is the resonance amplitude, \(\omega\) denotes the frequency of light, \(\omega_{0}\) is the resonant frequency, and \(\gamma_{0}\) is the damping constant. \(\Delta\) denotes the Fano phase, and the Fano asymmetry parameter \(q\) is related to the Fano phase \(\Delta\) as \(q=-\cot(\Delta)\) [2]. The quality factors are calculated as \(Q=\frac{\omega_{0}}{\gamma_{0}}\). ## 2 Field profile of the high-Q mode in the pillar array Figure S2: Spatial distribution of the electric field amplitude inside the metasurface unit cell shown in Figure 1, which describes the case of an array of a-Si pillars on an SiO\({}_{2}\) substrate. The length and width of the pillar are taken to be \(l=w=963\) nm. The metasurface period is \(P_{x}=P_{y}=1425\) nm. The electric field is plotted in the \(x\)-\(y\) plane, which goes through the center of the pillar. a) The absolute value of the electric field, b) \(x\)-component of the electric field, c) \(y\)-component of the electric field, d) \(z\)-component of the electric field. ## 3 Optical response of a pillar array: effect of the pillar height on the array performance Figure S4: Spatial electric field profiles of the lower-Q modes supported by the metasurface in the \(x\)-\(z\) plane, which passes through the center of the pillar. In the displayed electric field profiles, the \(z\) coordinate ranges from \(z\) = -200 nm to \(z\) = 1000 nm while the top of the SiO\({}_{2}\) substrate corresponds to the plane \(z\) = 0. The \(x\) coordinate ranges from \(x\) = -712.5 nm to \(x\) = 712.5 nm. The displayed false color plot is identical to Fig. S3a. The spatial field profile of the high-Q mode is identical to the one shown in Figs. 1, S1, and S2. We observe that the \(x\)- and \(y\)-components of the electric field of mode 3 have very similar features when varying the pillar height. On the other hand, mode 2 ‘disappears’ after coalescing with mode 3. The mode profile of mode 1 is identical to the one shown in Figs. 1 and S1. Figure S5: Spatial electric field profiles of the lower-Q modes supported by the metasurface in the \(y\)-\(z\) plane, which passes through the center of the pillar. In the displayed electric field profiles, the \(z\) coordinate ranges from \(z\) = -200 nm to \(z\) = 1000 nm while the top of the SiO\({}_{2}\) substrate corresponds to the plane \(z\) = 0.
The \(y\) coordinate ranges from \(y\) = -712.5 nm to \(y\) = 712.5 nm. The displayed false color plot is identical to Fig. S3a. The spatial field profile of the high-Q mode is identical to the one shown in Figs. 1, S1, and S2. We observe that the \(x\)- and \(y\)-components of the electric field of mode 3 have very similar features when varying the pillar height. On the other hand, mode 2 ‘disappears’ after coalescing with mode 3. The mode profile of mode 1 is identical to the one shown in Figs. 1 and S1. In this figure, we display the electric field amplitude \(|E|\). In the considered \(y\)-\(z\) plane, the only non-zero component is \(E_{x}\). When considering an a-Si pillar array, we observe that at a pillar height of 870 nm, we are still able to couple to the resonant mode, and the resonant spectral feature is still visible in the false color plots of the transmittance and phase spectra (Fig. S6). The extracted Q-factor of the resonance is ~48,000 (Fig. S7a). In our simulation, we assumed a 5 nm mesh in the z direction. Running a series of simulations with a finer mesh could potentially enable us to identify the parameter values at which a mode with an even higher Q-factor can be observed. We also observe that when we take the metasurface period as \(P_{x}=1520\) nm and \(P_{y}=1425\) nm, the Q-factor of the supported mode is 221,000 (Fig. S7b). Thus, increasing the metasurface period from \(P_{x}=1425\) nm to \(P_{x}=1520\) nm can strongly affect the quality factor of the metasurface. ## 4 Optical response of a pillar array: effect of the pillar period on the array performance We study how the period of the metasurface affects the modes supported by the metasurface consisting of an array of a-Si pillars on an SiO\({}_{2}\) substrate (Fig. 1). For metasurface periods exceeding 1200 nm, we observe three distinct modes in the transmittance and phase false color plots, which correspond to the high-Q mode at a wavelength around 1540 nm and two lower-Q modes at wavelengths around 1580 nm and 1590 nm, which correspond to mode 2 and mode 3 from Fig. S3. When studying the dependence of the mode position on the \(x\)-period \(P_{x}\), we observe that the quality factor of the high-Q mode gradually increases with period (Figs. S8c and S9c). We also observe that for periods of \(P_{x}>1300\) nm, the position of the high-Q mode does not change significantly with period. We also observe that mode 2 shifts more strongly with the \(x\)-period \(P_{x}\) than mode 3. Moreover, when we change the \(y\)-period \(P_{y}\), the high-Q mode shifts more strongly than when the \(x\)-period \(P_{x}\) is changed (cf. Figs. S8c and S9c). This result is also consistent with the specifics of the spatial mode profiles of the high-Q mode in the \(x\)-\(z\) and \(y\)-\(z\) planes (see Fig. 1). When the \(y\)-period \(P_{y}\) increases from 1300 nm to 1520 nm, the high-Q resonance position shifts by ~5 nm. ## 5 Modes supported by a single isolated pillar So far, in our optical simulations we imposed periodic boundaries in the \(x\)- and \(y\)-directions, implying that we investigate a periodic array of square a-Si pillars. It is not clear whether the identified photonic mode is also supported by an isolated a-Si pillar. To address this question, we simulate the scattering cross section of an isolated pillar. In this simulation, we use perfectly matched layer (PML) boundary conditions at all simulation boundaries.
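The Q-factors quoted in these sections (e.g., ~48,000 and 221,000) are extracted with the Fano fit of Section 1. A minimal fitting sketch (Python/SciPy on synthetic data in normalized frequency units; `g0` stands for \(\gamma_{0}\)) reads:

```python
import numpy as np
from scipy.optimize import curve_fit

def fano_T(w, A, w0, g0, delta, B, C):
    """Fano transmittance of Section 1, with a constant-plus-linear
    background T_bg = B + C * (w - w0)."""
    num = np.abs(g0 / 2 - np.exp(-2j * delta) * (1j * (w - w0) - g0 / 2)) ** 2
    return A * num / ((w - w0) ** 2 + (g0 / 2) ** 2) + B + C * (w - w0)

# Synthetic noisy spectrum with Q = w0 / g0 = 10,000 (illustrative values)
rng = np.random.default_rng(0)
w = np.linspace(0.999, 1.001, 600)
true_params = (0.3, 1.0, 1.0e-4, 0.4, 0.1, 0.0)
T = fano_T(w, *true_params) + 0.002 * rng.standard_normal(w.size)

popt, _ = curve_fit(fano_T, w, T, p0=(0.2, 1.0, 2.0e-4, 0.5, 0.1, 0.0))
A, w0, g0, delta, B, C = popt
print(f"Q = {w0 / g0:.0f}, Fano asymmetry q = {-1.0 / np.tan(delta):.2f}")
```

In practice the same fit is applied to the simulated transmittance (or, for finite arrays, scattering cross-section) spectra instead of synthetic data.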
First, we consider the case of a _single a-Si pillar in air_ and calculate its scattering cross section as a function of wavelength and pillar height (Fig. S10a). The length and width of the pillar are the same as in Fig. 1 of the main manuscript (\(l=w=963\) nm). As seen in Fig. S10, in the vicinity of the highest-Q mode we also observe a decrease in the width of the resonance as well as a crossing of two other modes. For a pillar height of \(h\) = 834 nm, the quality factor of the highest-Q mode is ~1000. Next, we study the case of an isolated a-Si pillar on an SiO\({}_{2}\) substrate. Figure S11 plots the scattering cross section of an isolated pillar on an SiO\({}_{2}\) substrate as a function of wavelength and pillar height (Fig. S11a). We observe an overall broadening of the spectral features as compared with the case of a single a-Si pillar in air. We also observe that for a pillar height of 830 nm, the Q-factor of the supported mode is 676. Fig. S13. Spatial distribution of the \(x\)-component (top panel) and \(z\)-component (bottom panel) of the electric field **E** in the \(x\)-\(z\) plane, which passes through the center of an isolated a-Si pillar on an SiO\({}_{2}\) substrate. The \(x\)- and \(z\)-components of the electric field are plotted along the _modal line 3_ in Fig. S10. Geometrical parameters of the pillar are the same as in Fig. S10. The a-Si pillar height and the operating wavelength are marked at the top of each column. We observe that the \(x\)- and \(z\)-components of the electric field gradually change as we increase the pillar height. Fig. S14. Spatial distribution of the \(x\)-component of the electric field **E** in the \(x\)-\(z\) plane, which passes through the center of an isolated a-Si pillar on an SiO\({}_{2}\) substrate.
The \(x\)-component of the electric field is plotted along the _modal line 2_ in Fig. S10. Geometrical parameters of the pillar are the same as in Fig. S10. The a-Si pillar height and the operating wavelength are marked at the top of each column. We observe that the \(x\)-component of the electric field gradually changes as we increase the pillar height. To understand the nature of the considered high-Q resonance and its relation to the high-Q resonance observed in the case of an array, we display the spatial distribution of the electric field amplitude in the y-z cross-section of the resonator (Fig. S15). As seen in Fig. S15, the spatial distribution of the electric field amplitude of the high-Q mode in the y-z plane (for example, at \(h\) = 830 nm) is identical to the spatial distribution of the electric field observed in the case of the pillar array (see Fig. 1e). ## 6 Controlling the spectral shape of the resonance Fig. S16. Transmittance a) and phase of the transmitted light b) as a function of the wavelength and thickness of the SiO\({}_{2}\) spacer \(d\) (see the schematic in Fig. 3a). The assumed geometrical parameters are the same as in Fig. 3. c) and d) show transmittance and phase spectra for SiO\({}_{2}\) spacer thicknesses of \(d\) = 510 nm and \(d\) = 1870 nm. Figure S17: Quality factor and Fano phase of the high-Q resonance as a function of the SiO\({}_{2}\) thickness \(d\) (for a schematic see Fig. 3a). Here, the considered structure and geometrical parameters are identical to the ones in Fig. 3. Figure 3c shows that around the SiO\({}_{2}\) spacer thickness \(d\) = 1550 nm, the Fano phase varies abruptly as a function of the SiO\({}_{2}\) spacer thickness. Here, we sample the SiO\({}_{2}\) spacer thickness more finely to accurately capture the details of the variation of the Fano phase. Fig. S18. Spatial distribution of the electric field amplitude in the metasurface unit cell depicted in Fig. 3a. The geometrical parameters of the metasurface unit cell are as follows: the length and width of the pillar are \(l=w=963\) nm, and the height of the pillar is \(h=845\) nm. The assumed period values are \(P_{x}\)= 1520 nm and \(P_{y}\)= 1425 nm. The thickness of the SiO\({}_{2}\) spacer is \(d=1450\) nm. The spatial distribution of the electric field is plotted in the \(x\)-\(y\) plane. In a), we plot the absolute value of the electric field \(|\)E/E\({}_{0}|\). In b) and c), we plot the spatial distributions of the \(x\)- and \(z\)-components of the electric field, respectively. In d), we plot the real part of the \(z\)-component of the electric field. We observe that both the \(x\)- and \(z\)-components of the electric field adopt non-zero values in the SiO\({}_{2}\) spacer. ## 7 Realistic electrical addressing architectures In the present section, we describe how the proposed metasurface design can be modified to enable thermo-optic control of the wavefront of the transmitted light. The first variant of the proposed metasurface unit cell features a-Si pillars, which are connected via an a-Si bar (for a schematic of the unit cell see Fig. S20a). In Fig. S20a, the whole a-Si layer is lightly doped and is placed on an SiO\({}_{2}\) substrate so that the complex refractive index of a-Si now reads as \(n\) = 3.734+0.0013 i. We can actively control the temperature of the metasurface pixel by biasing the metasurface unit cell at the edge and flowing current in the \(y\) direction.
The geometrical parameters of the proposed metasurface unit cell are as follows: \(\delta\) = 82 nm, \(w\) = \(l\) = 963 nm, \(h\) = 850 nm, and \(P_{x}\) = \(P_{y}\) = 1500 nm. The inset of Fig. S20b shows a schematic of one period of a two-level phase grating, which is utilized to theoretically demonstrate switchable diffraction. Within a grating period, the assumed index change between neighboring a-Si pillars is \(\Delta n\)=0.006 (Figs. S20b and S20c). The switchable diffraction is observed both in the case when the polarization of the electric field is perpendicular and parallel to the a-Si bars (Figs. S20b and S20c, respectively). In Fig. S20b, the operating wavelength is 1586.5 nm, and the overall transmittance at the operating wavelength is \(T\) = 1.2%. In Fig. S20c, the operating wavelength is 1583.2 nm, and the overall transmittance at the operating wavelength is \(T\) = 0.3%. Note that to observe the diffractive switching, we have utilized the lower-Q mode described in Fig. 4 of the main manuscript. When using the configuration shown in Fig. S20a, we are not able to demonstrate diffractive beam switching using the higher-Q mode (see Fig. 1 of the main manuscript) since the spectral characteristics of the high-Q mode are strongly affected by the introduced optical losses. Namely, in the lossy structure (Fig. S20a), we do not observe a large spectral variation of the phase of the transmitted light in the case of no applied current. To enable thermo-optic switching also for the higher-Q mode, we modify our metasurface unit cell design. In the modified unit cell design, only the top and bottom 50 nm layers of the a-Si are doped, while the core section of the a-Si layer is practically undoped (or very lightly doped). In this implementation, the top and bottom electrodes are biased with respect to each other, and current flows in the vertical direction. We have also slightly modified the geometrical parameters of the metasurface unit cell. In Fig. S20d, \(P_{x}\)= 1520 nm and \(P_{y}\)= 1425 nm. The height of the a-Si pillars and the a-Si bars is \(h\) = 845 nm, and the thickness of the SiO\({}_{2}\) spacer is 1440 nm. The width and length of the a-Si pillars are taken to be \(w=l=963\) nm. The insets of Figs. S20e and S20f show schematics of one period of a two-level phase grating, which is utilized to theoretically demonstrate switchable diffraction. Utilizing the higher-Q mode, we have been able to observe switchable diffraction assuming that the refractive index difference between neighboring a-Si pillars in a grating period is \(\Delta n=0.005\) and the width of the a-Si bar is \(\delta\) = 40 nm. In Fig. S20e, the polarization direction of the electric field of the incoming light is perpendicular to the a-Si bars, the operating wavelength is 1536.15 nm, and the overall transmittance at the operating wavelength is \(T=6.8\) %. Interestingly, using the two-level phase grating, we have been able to observe a highly asymmetric diffraction pattern originating from the geometrical asymmetry of the proposed metasurface unit cell (Fig. S20f). In Fig. S20f, the refractive index difference between neighboring a-Si pillars in a grating period is \(\Delta n=0.006\), and the width of the a-Si bar is taken to be \(\delta=82\) nm. The operating wavelength of the utilized lower-Q mode is 1548.4 nm, and the overall transmittance at the operating wavelength is \(T=1\) %. Finally, we consider the case when the metasurface unit cells are placed on SiO\({}_{2}\) pedestals and the a-Si pillars are connected in series via a-Si bars.
In this case, for a pillar height of \(h=880\) nm, the spectral and phase signatures of the high-Q mode are not observed. The metasurface structure considered in the present section is shown in Figs. 5a and 5b. The geometrical parameters of the considered structure are specified in the caption of Fig. 5. Fig. S24. Thermo-optic beam switching using the lower-Q mode. The metasurface unit cell is given by the schematic shown in Figure 5a. We analyze the optical performance of a two-level phase grating. The unit cell of the grating is shown in the inset of a). The index change between neighboring a-Si pillars is \(\Delta n\) = 0.006. a) The intensity of the electric field in the far field when the incoming plane wave is x-polarized, the operating wavelength is \(\lambda\) = 1575.5 nm, and the overall transmittance is T = 7.6 %. In b), the overall transmittance spectrum of the two-level grating studied in a). In c), the intensity of the electric field in the far field when the incoming plane wave is y-polarized. In c), the operating wavelength is \(\lambda\) = 1583.2 nm, and the overall transmittance is T = 0.6 %. In d), the overall transmittance spectrum of the two-level grating studied in c). Figure S25: Analytical array factor calculations for a two-level phase grating [3]. The plots show the calculated intensity of the electric field in the far field as a function of the steering angle in the case of a two-level phase grating. a) and b) correspond to the case of the high-Q mode. In a) and b), we assume that the refractive index difference between neighboring metasurface pixels is 0.0026, which is the value chosen in Fig. 5. In a) and b), the values of the electric field amplitudes and relative phases in a grating period are constructed based on Figs. S22a and S23b. c) and d) correspond to the case of the lower-Q mode. In c) and d), we assume that the refractive index difference between neighboring metasurface pixels is 0.006, which is the value chosen in Fig. S24. In c) and d), the values of the electric field amplitudes and relative phases in a grating period are constructed based on Figs. S22c and S23d. We observe that, in the case of the high-Q mode, the array-level analytical calculations yield significantly different results as compared with the full wave simulations, providing evidence of near-field coupling between neighboring metasurface pixels [cf. a) and Fig. 5c as well as b) and Fig. 5e]. On the other hand, in the case of the lower-Q mode, the array factor calculations and full wave simulations yield similar results (cf. c) and Fig. S24a as well as d) and Fig. S24c). Figure S26: Intensity of the electric field in the far field as a function of the steering angle in the case of a three-level phase grating. The incoming light is \(y\)-polarized. The three-level phase grating is generated using the simulated phase shift data for an operating wavelength of \(\lambda\) = 1583.2 nm (Fig. S23), and the phase shift values in the grating period are given as (0\({}^{\circ}\), 120\({}^{\circ}\), 240\({}^{\circ}\)). Within a grating period, the values of the real part of the refractive index are taken as (3.734, 3.734+0.003333, 3.734+0.01). a) plots the intensity of the electric field in the far field at an operating wavelength of \(\lambda\) = 1583.2 nm. In b), we keep the same spatial distribution of the real part of the refractive index as in a), but the operating wavelength is now taken as \(\lambda\) = 1583.8 nm.
In a), we observe the target steered beam at a steering angle of 20.3\({}^{\circ}\). Additionally, we observe a number of spurious diffraction orders. By changing the operating wavelength to \(\lambda\) = 1583.8 nm, we are able to fully suppress the zeroth diffraction order, but the other spurious diffraction orders are still observed. Fig. S27. Intensity of the electric field in the far field as a function of the steering angle in the case of a three-level phase grating at an operating wavelength of \(\lambda\) = 1535.81 nm, which is a resonant wavelength of the high-Q mode. The incoming light is \(y\)-polarized. Within a grating period, the values of the real part of the refractive index are taken as (3.734, 3.734+0.003333, 3.734+0.01). By appropriately choosing the operating wavelength, we have been able to suppress the zeroth diffraction order. ## 8 Metasurfaces with a finite number of metasurface elements We consider an \(m\times m\) array of metasurface elements and study how the quality factor of the mode supported by the finite array increases with \(m\) (see Fig. S30). To calculate the quality factor, we calculate the scattering cross-section of the finite array and fit it to the Fano formula (see Section 1). When studying a finite metasurface array, the assumed geometrical parameters are the same as in Fig. 1 of the main manuscript: \(l=w=963\) nm and \(P_{x}=P_{y}=1425\) nm. To reduce the simulation time, the assumed mesh in the \(z\)-direction is 10 nm while the mesh in the x- and y-directions is set to 20 nm. When assuming periodic boundary conditions in the x- and y-directions, the quality factor of the supported mode is 8500. Note that this quality factor value (8500) is lower than the one reported in Figs. 1 and 2 of the manuscript (9800). This difference in the observed quality factors is due to the fact that in the simulations shown in Fig. 1, the mesh in the \(z\)-direction is taken to be 5 nm while the mesh in the x- and y-directions is still set to 20 nm. As seen in Fig. S30, the quality factor of the supported mode increases monotonically with the number of metasurface elements in the array. For a 12×12 array, the quality factor is 6000. Next, we assume that the number of metasurface elements is finite in the direction _parallel_ to the incoming electric field while in the direction _perpendicular_ to the electric field, a periodic boundary condition is assumed (see the inset of Fig. S30b). The described simulation setup enables reducing the simulation time while accessing the quality factor of the metasurface for a larger number of metasurface elements \(n_{x}\). As seen in Fig. S30b, when the number of metasurface elements in the direction parallel to the electric field is \(n_{x}=10\), the quality factor of the metasurface is ~7000, while for \(n_{x}=20\), the quality factor of the metasurface is slightly below 8000. In Fig. S30c, we assume that the number of metasurface elements is finite in the direction _perpendicular_ to the incoming electric field while in the direction _parallel_ to the electric field, a periodic boundary condition is assumed. In this case, when \(n_{x}=10\), the quality factor of the metasurface is ~6000. Note though that in both cases where the metasurface is finite in the direction _perpendicular_ or _parallel_ to the electric field, for \(n_{x}=20\), the quality factor of the metasurface is ~8000. ## 9 High-efficiency reflective high-Q metasurfaces.
By adding an Au back reflector to the designed transmissive metasurface, we can attain high-efficiency, high-Q reflective metasurfaces. The unit cell of the proposed metasurface design is shown in Fig. S31a: it consists of an Au back reflector, followed by an SiO\({}_{2}\) spacer, on top of which we place an a-Si pillar. By appropriately choosing the thickness of the SiO\({}_{2}\) spacer, we obtain high-efficiency, high-Q reflective metasurfaces.
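As a rough guide to the relevant spacer thicknesses (a textbook interference estimate, not a value from the design; it neglects the reflection phase at the Au mirror and uses the SiO\({}_{2}\) index from Methods), constructive reflection requires the round-trip optical path in the spacer to be a multiple of the wavelength:

```python
# Back-reflector spacer condition (illustrative): 2 * n_SiO2 * d = m * lambda.
# The reflection phase at the Au mirror is neglected here; n and lambda are
# the values used elsewhere in this work.
n_sio2, lam_nm = 1.44, 1535.0
for m in (1, 2, 3):
    print(f"m = {m}: d ~ {m * lam_nm / (2 * n_sio2):.0f} nm")
# -> d ~ 533, 1066, 1599 nm; the actual optimal thickness must be found
#    from full wave simulations, as for the transmissive spacer in Fig. S16.
```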
2309.07302
Timed Actors and Their Formal Verification
In this paper we review the actor-based language, Timed Rebeca, with a focus on its formal semantics and formal verification techniques. Timed Rebeca can be used to model systems consisting of encapsulated components which communicate by asynchronous message passing. Messages are put in the message buffer of the receiver actor and can be seen as events. Components react to these messages/events and execute the corresponding message/event handler. Real-time features, like computation delay, network delay and periodic behavior, can be modeled in the language. We explain how both the Floating-Time Transition System (FTTS) and the common Timed Transition System (TTS) can be used as the semantics of such models and the basis for model checking. We use FTTS when we are interested in event-based properties, and it helps in state space reduction. For checking properties based on the values of variables at certain points in time, we use the TTS semantics. The model checking toolset supports schedulability analysis, deadlock and queue-overflow checks, and assertion-based verification of Timed Rebeca models. TCTL model checking based on TTS is also possible but is not integrated in the tool.
Marjan Sirjani, Ehsan Khamespanah
2023-09-13T20:50:11Z
http://arxiv.org/abs/2309.07302v1
# Timed Actors and Their Formal Verification

###### Abstract

In this paper we review the actor-based language Timed Rebeca, with a focus on its formal semantics and formal verification techniques. Timed Rebeca can be used to model systems consisting of encapsulated components which communicate by asynchronous message passing. Messages are put in the message buffer of the receiver actor and can be seen as events. Components react to these messages/events and execute the corresponding message/event handler. Real-time features, like computation delay, network delay, and periodic behavior, can be modeled in the language. We explain how both the Floating-Time Transition System (FTTS) and the common Timed Transition System (TTS) can be used as the semantics of such models and as the basis for model checking. We use FTTS when we are interested in event-based properties, and it helps in state space reduction. For checking properties based on the values of variables at a certain point in time, we use the TTS semantics. The model checking toolset supports schedulability analysis, deadlock and queue-overflow checks, and assertion-based verification of Timed Rebeca models. TCTL model checking based on TTS is also possible but is not integrated in the tool.

## 1 Introduction

Actors were introduced for the modeling and implementation of distributed systems [8, 4]. Timed actors allow us to introduce timing constraints and the progress of time, and are most useful for modeling time-sensitive systems. Timed Rebeca is one of the first timed actor languages with model checking support [11]. Timed Rebeca restricts the modeller to a purely asynchronous actor-based paradigm, where the structure of the model can represent a service-oriented architecture, while the computational model matches the network infrastructure [2]. In a different context, it may represent components of cyber-physical systems, where components are triggered by events put in their input buffers, or by time events [15]. Timed Rebeca is equipped with analysis techniques based on the standard semantics of timed systems, and also with an innovative event-based semantics that is tailored for timed actor models [14].

Timed Rebeca is an extension of the Reactive Object Language, Rebeca [17]. It is reviewed and compared to a few other actor languages in a survey published in ACM Computing Surveys in 2017 [5]. The very first ideas of Rebeca and its compositional verification were presented at the AVoCS workshop in 2001 [16]. Timed Rebeca, its different formal semantics, and the model checking support are presented in multiple papers. Here we present an overall view of and insight into the different semantics and use a simple example to show the differences visually.

## 2 Timed Rebeca

A Timed Rebeca model mainly consists of a number of _reactive class_ definitions. These reactive classes define the behavior of the classes of actors in the model. The model also has a main block that defines the instances of the actor classes. We use a simple Timed Rebeca model as an example to explain the language features. In this example we consider two different actors. The first actor is able to handle three different tasks, named _job1_, _job2_, and _job3_. The second actor can only handle one task, named _job4_. The Timed Rebeca model of this example is shown in Listing 1. There are two classes of actors: Actor1 (lines 1-15) and Actor2 (lines 17-27). The main block in lines 29-32 defines one instance of each class.
Each reactive class has a number of _state variables_, representing the local state of the actors. These may be variables of basic data types, including booleans, integers, and arrays, or references to other actors. To keep the example model simple, none of the reactive classes of Listing 1 has any state variables. Each class can have a _constructor_, which is used to initialize the created instances of the class by initializing the state variables, and to start up the execution of the model by sending messages to itself or to other actors. In the Timed Rebeca model of Listing 1, in the constructor of Actor1 (line 3), the actor sends itself a job1 message. Each reactive class accepts a number of message types which are handled using _message servers_ (msgsrv). Actor1 has three message servers, job1 (lines 5-8), job2 (lines 9-11), and job3 (lines 12-14). Serving a message of type job1 results in sending a job2 message to self, which is put in the actor's own message buffer only after 1 unit of time has passed (modeled using the after construct). The deadline construct denotes the deadline by which the message has to be handled; if the deadline has passed by the time the event is handled, the model checking tool reports it. Then there is a delay statement, which models the progress of time for 5 units; this can be used to model a computation delay. In the definition of the message servers, well-known program control structures can be used, including _if-else_ conditional statements, _for_ and _while_ loops, the definition of local variables, and assignments using the usual arithmetic, logic, and comparison operators.

```
1   reactiveclass Actor1(3) {
2     Actor1() {
3       self.job1();
4     }
5     msgsrv job1() {
6       self.job2() after(1) deadline(10);
7       delay(5);
8     }
9     msgsrv job2() {
10
11    }
12    msgsrv job3() {
13      self.job3() after(1);
14    }
15  }
16
17  reactiveclass Actor2(3) {
18    knownrebecs {
19      Actor1 a1;
20    }
21    Actor2() {
22      self.job4() after(2);
23    }
24    msgsrv job4() {
25      a1.job3() after(2) deadline(5);
26    }
27  }
28
29  main {
30    Actor1 actor1():();
31    Actor2 actor2(actor1):();
32  }
```

Listing 1: A simple Timed Rebeca model with two actors.

In Timed Rebeca models, we assume that actors have local clocks which are synchronized throughout the model. Each message is tagged with a time stamp (called a time tag). We use a delay(t) statement to model a computation delay, and we use after(t) in combination with a send statement to model a network delay or a periodic event. When we use after(t) in a send statement, it means that the time tag of the message when it is put in the queue of the receiver is the value of the local clock of the sender plus the value of t. The progress of time is forced by the delay statement. We assume that the local clock of each actor is zero when the model starts execution; the local clock is increased by the value of t if there is a delay(t) statement. A send statement with an after does not necessarily cause an increase in the local time. The local time of the receiver actor is set to the time tag of the message when the actor picks the message, unless it is already greater than that. The latter situation means that the message sits in the queue while the actor is busy executing another message; in this case, the after construct does not cause progress of time. The progress of time happens when the time tag of the message is greater than the local time of the receiver actor; in this case, the local time will be pushed forward. In Timed Rebeca, messages are executed atomically and are not preempted.
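To make the timing rules concrete, here is a minimal Python sketch of the local-clock and time-tag bookkeeping described above. This is an illustrative toy model, not Rebeca syntax; the class and method names are hypothetical:

```python
import heapq

class SimpleActor:
    """Toy model of a Timed Rebeca actor's timing rules."""
    def __init__(self, name):
        self.name = name
        self.clock = 0      # local clock, zero at model start
        self.bag = []       # message bag: (time_tag, message) pairs

    def send(self, receiver, msg, after=0):
        # time tag = sender's local clock + the 'after' value;
        # sending does not advance the sender's clock
        heapq.heappush(receiver.bag, (self.clock + after, msg))

    def take(self):
        # picking a message pushes the local clock forward to the
        # message's time tag, unless the clock is already greater
        tag, msg = heapq.heappop(self.bag)
        self.clock = max(self.clock, tag)
        return msg

    def delay(self, t):
        self.clock += t     # delay(t) forces progress of time

# mimicking lines 5-8 of Listing 1: serve job1, send job2 after(1),
# then delay(5)
a1 = SimpleActor("actor1")
a1.send(a1, "job1")
print(a1.take())            # "job1"; clock stays 0
a1.send(a1, "job2", after=1)
a1.delay(5)                 # clock becomes 5
print(a1.take(), a1.clock)  # job2 has tag 1 < clock 5 -> clock stays 5
```

In the last step, the after construct does not cause progress of time, since the message's time tag (1) is already below the actor's local clock (5), exactly the situation described in the text.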
## 3 Different Semantics of Timed Rebeca

We first introduced an event-based semantics for Timed Rebeca and used McErlang for the simulation of Timed Rebeca models in [1, 12]. In this semantics we focused on the object-based features of actors, encapsulation and information hiding, and decided on a coarse-grained semantics where serving a message (or handling a request or signal) is the only observable behavior of actors. We considered taking a message from the top of the message queue and executing it as an observable action, and we called it an event. Note that by a message queue in Timed Rebeca, we mean a bag of messages where each message has a time tag recording when the message was put in the buffer. Here, by "top of the message queue of an actor", we mean the message with the least time tag in the bag of messages targeted to that actor. In defining the formal semantics of Timed Rebeca as a labeled transition system, we only have one type of label on the transitions, _events_, which are taking messages and executing them.

Figure 1: Comparing TTS, FTTS, and the relaxed form of FTTS for the Timed Rebeca model of Listing 1.

In [11], and its extended version [10], we introduced this event-based semantics of Timed Rebeca as the Floating Time Transition System (FTTS) and compared it with the timed semantics that is generally used for timed models (for example for Timed CCS [2]), where a transition can be an event, a progress of time, or a silent action. Although we consider FTTS the original and most fitting semantics for timed actors, it may also be seen as a reduction technique in model checking. FTTS can give a significant reduction in state space compared to the standard Timed Transition System. In [10] we proved that there is an action-based weak bisimulation relation between the FTTS and TTS of Timed Rebeca. Note that the focus here is on the labels on the transitions, not on the values of variables in the states.

The semantics presented in [12] is a relaxed form of the FTTS of [10], where a simpler policy is used for choosing the next step in a state. The SOS rules of FTTS and the relaxed version are presented in [10] and [12], respectively. In each state, the SOS rule for the scheduler chooses the next message in the bags of the actors to be executed. In the relaxed form of FTTS, the scheduler simply chooses the message with the least time tag (targeted to any actor). In FTTS, the scheduler considers the local clock of each actor as well. For each actor, the maximum of the local clock and the lowest time tag of the messages in the message bag of the actor is computed. Then, among all the actors, the scheduler chooses the actor with the least of these values. The message on the top of the queue of this chosen actor will be executed next (both policies are sketched in code below). Compared to the standard TTS, the relaxed form of FTTS preserves the order of execution of the messages of all actors if we consider the time tags of messages for ordering. The intuitive reason is that in Timed Rebeca we consider a FIFO policy for scheduling the messages in the message buffer: when we choose the message with the lowest time tag to be executed, it is guaranteed that, from that point on, no message with a smaller time tag will be added to the message buffer (of any actor). So, the FIFO policy for serving messages can be correctly respected.
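The two scheduling policies can be contrasted in a few lines of Python, reusing the SimpleActor sketch above (again an illustrative model, not the actual SOS rules):

```python
def relaxed_ftts_choice(actors):
    # relaxed FTTS: pick the actor holding the message with the
    # globally least time tag
    return min((a for a in actors if a.bag),
               key=lambda a: min(tag for tag, _ in a.bag))

def ftts_choice(actors):
    # FTTS: per actor, take max(local clock, least time tag in its
    # bag); schedule the actor for which this value is least
    return min((a for a in actors if a.bag),
               key=lambda a: max(a.clock, min(tag for tag, _ in a.bag)))

# the situation of state 2 in Figure 1: Actor1 has clock 5 and
# <job2,1> in its bag; Actor2 has clock 0 and <job4,2> in its bag
actor1 = SimpleActor("actor1"); actor1.clock = 5
actor1.bag = [(1, "job2")]
actor2 = SimpleActor("actor2")
actor2.bag = [(2, "job4")]

print(relaxed_ftts_choice([actor1, actor2]).name)  # actor1 (tag 1)
print(ftts_choice([actor1, actor2]).name)          # actor2: max(0,2)=2 < max(5,1)=5
```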
The subtle point here is that the actor \(a\) with the lowest time tag message \(m\) may be busy while message \(m\) is sitting in its message buffer; in the meantime, other messages from other actors may get the chance to be executed and send messages to actor \(a\). Of course, the time tags of those messages will be greater than the time tag of message \(m\), but still we are losing the "correct" content of the message buffer of \(a\) at some snapshots in time. Based on this observation, we moved to the FTTS semantics in [10], where at any point in time we have the correct content of the message buffer. Using this semantics, we may choose to use other scheduling policies for the messages (events) in the buffer, for example the _earliest deadline first_ policy.

In Figure 1, we show parts of the state transition system of the Timed Rebeca model in Listing 1. In this figure, we see how in TTS we may have three types of labels on the transitions: an event, a time progress, and a \(\tau\) (silent) transition. In FTTS, in state 2 in Figure 1.b, the scheduler chooses the message <job4,2,\(\infty\)>, while in the relaxed form of FTTS, in Figure 1.c, the scheduler chooses the message <job2,1,10>. The reason is that although <job2,1,10> is the message with the lowest time tag (1), the maximum of 1 and the local clock of Actor1 is 5, whereas the maximum of 2 (the time tag of message <job4,2,\(\infty\)>) and the local clock of Actor2 (which is zero in this state) is 2.

Figure 2: The state space of the Rebeca model of Listing 1, generated by Afra using the TTS and FTTS semantics.

## 4 Model Checking Timed Rebeca Models

The verification algorithms for TTS with dense time are generally PSPACE-complete, as stated in [6]. In existing model checking tools, the properties are commonly limited to a subset of TCTL without nested timed quantifiers, for which efficient algorithms have been developed. In the case of Timed Rebeca, we use _discrete time_, and hence the TTS can be verified efficiently in polynomial time against TCTL properties. Discrete time is the time model in which the passage of time is modeled by natural numbers. We developed a model checking tool and a reduction technique for Timed Rebeca models based on the TTS semantics against TCTL properties [8]. This toolset is not integrated in the Afra IDE [9]. We also developed a tool for the model checking of Timed Rebeca models based on both the TTS and FTTS semantics, which is integrated in Afra. The current implementation of the model checking toolset supports schedulability analysis, checking for deadlock freedom and queue-overflow freedom, and assertion-based verification of Timed Rebeca models. Note that in FTTS, in each state, actors may have different local clocks, so writing meaningful assertions needs special care. Assertions on the state variables of one actor are not problematic. The Timed Rebeca code of the case studies and the model checking toolset are accessible from the Rebeca homepage [5].

Figure 2 shows the state space generated automatically by the model checking tool, Afra, for the Timed Rebeca model in Listing 1 based on the two semantics, TTS and FTTS. It shows that the order of events is preserved, while time progress and \(\tau\) transitions are hidden. In state S9_0 in Figure 2.a and state S5_0 in Figure 2.b, one can see how the transition system becomes bounded using a shift operation on time.
The shift keyword means that, for example, by the event a1.JOB3 we go back to state S8_0 (or S5_0), where all the values of state variables, local variables, and messages in the message buffers stay the same, but the values of the parameters related to time (including the time tags of all messages and the local clock values) are shifted by the same amount.
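A hypothetical sketch of the state identification that such a shift operation relies on, continuing the Python toy model from above (the state representation here is invented purely for illustration):

```python
def shift_equivalent(state_a, state_b):
    """Two states are identified if their data parts coincide and all
    time values differ by one common offset."""
    if state_a["vars"] != state_b["vars"]:
        return False
    times_a, times_b = state_a["times"], state_b["times"]
    if len(times_a) != len(times_b):
        return False
    # collect the pairwise offsets between clocks / message time tags
    offsets = {tb - ta for ta, tb in zip(times_a, times_b)}
    return len(offsets) == 1

s8 = {"vars": {"actor1": {}, "actor2": {}}, "times": [6, 2, 7]}
s9 = {"vars": {"actor1": {}, "actor2": {}}, "times": [11, 7, 12]}
print(shift_equivalent(s8, s9))  # True: uniform shift of 5
```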
2306.17794
Vision Through the Veil: Differential Privacy in Federated Learning for Medical Image Classification
The proliferation of deep learning applications in healthcare calls for data aggregation across various institutions, a practice often associated with significant privacy concerns. This concern intensifies in medical image analysis, where privacy-preserving mechanisms are paramount due to the data being sensitive in nature. Federated learning, which enables cooperative model training without direct data exchange, presents a promising solution. Nevertheless, the inherent vulnerabilities of federated learning necessitate further privacy safeguards. This study addresses this need by integrating differential privacy, a leading privacy-preserving technique, into a federated learning framework for medical image classification. We introduce a novel differentially private federated learning model and meticulously examine its impacts on privacy preservation and model performance. Our research confirms the existence of a trade-off between model accuracy and privacy settings. However, we demonstrate that strategic calibration of the privacy budget in differential privacy can uphold robust image classification performance while providing substantial privacy protection.
Kishore Babu Nampalle, Pradeep Singh, Uppala Vivek Narayan, Balasubramanian Raman
2023-06-30T16:48:58Z
http://arxiv.org/abs/2306.17794v1
# Vision Through the Veil: Differential Privacy in Federated Learning for Medical Image Classification

###### Abstract

The proliferation of deep learning applications in healthcare calls for data aggregation across various institutions, a practice often associated with significant privacy concerns. This concern intensifies in medical image analysis, where privacy-preserving mechanisms are paramount due to the data being sensitive in nature. Federated learning, which enables cooperative model training without direct data exchange, presents a promising solution. Nevertheless, the inherent vulnerabilities of federated learning necessitate further privacy safeguards. This study addresses this need by integrating differential privacy, a leading privacy-preserving technique, into a federated learning framework for medical image classification. _We introduce a novel differentially private federated learning model and meticulously examine its impacts on privacy preservation and model performance._ Our research confirms the existence of a trade-off between model accuracy and privacy settings. However, we demonstrate that strategic calibration of the privacy budget in differential privacy can uphold robust image classification performance while providing substantial privacy protection.

## 1 Introduction

Medical imaging, an integral part of modern healthcare, generates vast volumes of data, offering promising opportunities for machine learning applications, particularly in image classification tasks. Such automation can significantly aid disease detection and diagnosis. However, due to the nature of the data, there are often privacy and security concerns. Tackling these issues is critical, especially in the realm of oncology, where timely access to diverse, high-quality data can facilitate improved detection and treatment strategies.

Both benign and malignant tumors are aberrant masses of tissue that are the result of excessive cell division. In a healthy body, there is a controlled cycle of cell birth and death. However, cancer disrupts this balance, leading to unwanted cell growth and potentially forming tumors. Skin lesions, on the other hand, are characterized by localized changes in the skin's color, shape, size, or texture, often resulting from damage or disease. Since cancer is the second leading cause of death worldwide according to the World Health Organization, effective diagnosis methods are needed [22]. Medical image classification serves as a vital component of cancer diagnosis. The advancement of machine learning and computer vision techniques has enabled the automation of medical image classification, thereby reducing the time and manual effort required and leading to more informed decision-making by medical professionals [3]. However, a significant challenge lies in the availability of high-quality, annotated medical image datasets. These datasets, often sensitive and confidential, are difficult to utilize openly for training machine learning models [18].

Federated Learning (FL), a deep learning technique, provides a promising solution. FL allows a model to be trained across multiple clients, each holding local data. The local data is not shared, thereby ensuring data privacy and security [16; 5]. Unlike some classical decentralized approaches, FL does not treat data as independent and identically distributed (IID), enhancing the model's effectiveness [21]. One of the key challenges of federated learning is that the data across different nodes may not be IID.
In reality, the data on each node (such as a user's phone or a particular hospital's records) may be significantly different from the data on other nodes. This is known as the non-IID setting. It can occur, for instance, if different users use an app in different ways, or if different hospitals have different patient populations. In FL, the model is taken to the data rather than bringing the data to the model. This method enables the inclusion of additional medical images that previously could not be shared due to confidentiality concerns.

Despite these advantages, federated learning is not completely immune to potential privacy breaches. Advanced attacks like model inversion or membership inference can still pose threats [2]. Indeed, while only model parameters are shared and not raw data, it is theoretically possible to extract information about the training data from these parameters. For instance, a sophisticated adversary might be able to perform what is known as a "model inversion attack", where they use the shared model parameters to infer sensitive information about the data used to train the model. Even without accessing the raw data, an attacker can use the shared updates to deduce information about the data that contributed to these updates. For more details, see [13, 25].

Differential Privacy (DP) is a privacy-preserving framework that provides robust mathematical guarantees of privacy [8]. The fundamental principle of DP is to add calibrated noise to the data or computation results, making it challenging to extract information about individual data points. By ensuring that the output of a computation does not reveal significant details about any individual data point in the input dataset, DP provides an additional layer of privacy protection. It ensures that the output of a function (in this case, the learned model) is insensitive to changes in any single input (i.e., the removal or addition of any single data point).

The main focus of this study is on the integration of differential privacy into federated learning for the categorization of medical images, notably images of cancers and skin lesions. We hypothesize that this framework provides robust privacy guarantees, addressing the ethical, legal, and social implications of data sharing in healthcare applications. Additionally, we explore the balance between privacy and model utility through careful calibration of the privacy budget in DP. This research contributes to the broader discourse on privacy-preserving machine learning, especially in the context of cancer diagnosis, and aims to pave the way for secure, privacy-preserving collaborations in medical image analysis. Our primary contributions can be listed as follows:

* We develop a novel federated learning framework with integrated differential privacy for medical image classification. Specifically, we introduce a mathematically rigorous mechanism for calibrating the noise added to the model's parameter updates. This mechanism provides a formal privacy guarantee quantified in terms of the differential privacy parameters.
* We propose an adaptive privacy budget allocation strategy for the federated learning rounds that updates the privacy budget in each round based on the data distribution and the model's learning progress. This strategy provides a more effective trade-off between the global model's learning accuracy and the level of privacy preservation.
* We provide a formal analysis of the trade-off between privacy and utility in the approach we propose.
  In the domain of medical image classification, we derive mathematical bounds on the loss in model accuracy as a function of the privacy parameter and the data sensitivity. This analysis assists in making informed decisions about the privacy budget allocation in practical settings.

The other sections of the paper are structured as follows: A thorough analysis of relevant research in the fields of machine learning, medical imaging, and privacy-preserving methods, including federated learning and differential privacy, is presented in Section 2. In Section 3, we detail the methodology of our proposed federated learning framework with integrated differential privacy, including our novel noise calibration mechanism and adaptive privacy budget allocation strategy. Section 4 describes our experimental setup, encompassing the datasets employed, the evaluation metrics, and the comparison models. We also discuss the experimental results and other important findings, providing insights into the privacy-utility trade-off in our proposed framework and the impact of various parameter choices. The paper concludes with Section 5, where we reflect upon our findings, discuss the limitations of our study, and outline potential future research avenues to further enhance the privacy and utility of medical image classification systems.

## 2 Related Work and Background

Medical image classification involves the task of assigning a class label to an image or a segment of an image, typically related to a specific disease or condition [20]. Deep learning-based methods, specifically convolutional neural networks (CNNs), have become the state of the art in medical image classification, achieving remarkable performance in tasks such as detecting lung cancer from CT scans or identifying skin cancer from dermoscopic images [9]. Nevertheless, the performance of these models heavily relies on the availability and quality of annotated training data, which can be a challenging prerequisite given the sensitive nature of medical data.

The advent of Deep Learning has significantly shifted the landscape of medical imaging techniques towards utilizing this powerful approach [3]. Initially, Deep Learning architectures like AlexNet, GoogLeNet, VGG-16, and VGG-19 were widely adopted for these tasks. This was later followed by the emergence of Residual Networks (ResNets) and Inception networks [23], further enhancing the capabilities of medical image classification systems. Currently, with the growing popularity of transfer learning, various architectures have begun incorporating pre-trained models like GoogLeNet and InceptionV3 [26]. There are even instances of combining multiple pre-trained models, such as an amalgamation of a fine-tuned AlexNet with a pre-trained VGG-16, followed by SVM classification [7].
In addition, recent studies have explored utilizing syntactic patterns from medical images through AlexNet [17], and enhancing the quality of images prior to the application of deep learning [1]. However, despite these advances, a significant concern remains unresolved across the works cited above: the assurance of data privacy and confidentiality. In today's digitized world, this aspect is becoming increasingly paramount, necessitating the exploration of methods that ensure data security and patient privacy. In response to this demand, our research employs the concept of federated learning to safeguard these concerns.

There have been several recent contributions in the realm of medical imaging that leverage federated learning. For instance, Zheng Li et al. [12] proposed a federated learning framework with dynamic focus for identifying COVID-19 instances in chest X-ray images. The unique aspect of this work is the use of the training loss from each model as the basis for parameter accumulation weights, enhancing training efficiency and accuracy. Similarly, Jun Luo et al. [14] proposed the Federated Learning with Shared Label Distribution (FedSLD) method for classification tasks. This method strategically adjusts the influence of each data sample on the local objective during optimization, using knowledge about the clients' label distributions, thereby mitigating the instability induced by data heterogeneity. Additionally, Mohammed Adnan et al. [4] applied a differentially private federated learning framework to histopathology image interpretation, one of the most complex and large-scale types of medical imaging. This work demonstrated that distributed training with strong privacy guarantees can achieve results comparable to conventional training.

Integrating differential privacy into federated learning has been studied for various applications. This integration often entails adding noise to the local model updates before they are shared with the server for aggregation [10]. However, this noise addition can degrade the model's performance, leading to a trade-off between privacy and utility. Adaptive strategies for managing this trade-off have been proposed in other domains, such as adjusting the privacy budget allocation based on the model's learning progress [15].

Despite these promising works, most of the existing solutions suffer from issues such as non-deployability or a lack of data privacy guarantees. Our methodology, while being lightweight, provides accuracy comparable to the aforementioned works. Moreover, our research comprehensively discusses the impact of parameter variations on the model and underscores the importance of client-side computation, which is less intensive and more accessible to users with mobile devices. We believe that deployable technology for healthcare systems, like our proposed framework, is currently a pressing requirement. In the context of medical image classification, the integration of differential privacy into federated learning is still an open research topic. Ensuring privacy while maintaining high classification performance is crucial in this domain, motivating the development of new methods and strategies for this purpose. In this work, we propose a novel federated learning framework with integrated differential privacy for medical image classification.
A new technique for calibrating the noise added to model updates, an adaptive method for allocating privacy budgets, and a formal analysis of the privacy-utility trade-off are all included in our approach. Our methodology is tailored towards the characteristics and requirements of medical image data, making it suitable for practical deployment in healthcare settings.

## 3 Methodology

Our methodology aims to establish a secure framework for medical image classification, utilizing federated learning with integrated differential privacy. The proposed framework involves multiple client devices and a central server, where each client device has a local model and medical image data. The federated learning process is shown in Algorithm 1, while a schematic representation of the proposed framework is provided in Figure 1. In the following sections, we detail the mathematical components of this framework, including our novel noise calibration mechanism, adaptive privacy budget allocation strategy, formal analysis of the privacy-utility trade-off, and implementation of the federated learning model.

### Federated Learning with Integrated Differential Privacy

Federated learning involves training a global model using local models at each client. Denote the global model parameters at iteration \(t\) as \(\mathbf{w}_{t}\). Each client \(i\) holds a local dataset \(\mathbf{D}_{i}\) and computes the local model update \(\Delta\mathbf{w}_{t,i}\) based on \(\mathbf{D}_{i}\) and \(\mathbf{w}_{t}\). The global model is then updated using the aggregated local updates. To integrate differential privacy, we add carefully calibrated noise to each local model update. We model this noise addition process as a Laplace mechanism, which is commonly used in differential privacy due to its simple analytical properties. The noisy local update \(\tilde{\Delta}\mathbf{w}_{t,i}\) is given by:

\[\tilde{\Delta}\mathbf{w}_{t,i}=\Delta\mathbf{w}_{t,i}+\mathbf{b}_{t,i}, \tag{1}\]

where \(\mathbf{b}_{t,i}\) is noise drawn from a multivariate Laplace distribution with zero mean and a scale parameter determined by the privacy parameter \(\epsilon_{t}\) and the sensitivity \(\Delta_{f}\) of the function computing the local update:

\[\mathbf{b}_{t,i}\sim\text{Laplace}(0,\Delta_{f}/\epsilon_{t}). \tag{2}\]

This process ensures \(\epsilon_{t}\)-differential privacy for each local model update.

### Adaptive Privacy Budget Allocation

A key aspect of our methodology is the adaptive allocation of the privacy budget across the federated learning iterations. We aim to optimize the use of the privacy budget based on the data distribution and the model's learning status. Let \(\epsilon\) be the total privacy budget. At each iteration \(t\), we allocate a privacy budget \(\epsilon_{t}\) such that \(\sum_{t}\epsilon_{t}=\epsilon\). Our strategy is to allocate more budget in the early iterations, where the model can learn more from the data, and less in the later iterations.

Figure 1: Schematic representation of the proposed framework (* represents the client who starts the learning process to take care of the overfitting issue).

We define the learning progress measure \(\pi_{t}\) as the relative improvement in the loss function from iteration \(t-1\) to \(t\):

\[\pi_{t}=\frac{L(\mathbf{w}_{t-1})-L(\mathbf{w}_{t})}{L(\mathbf{w}_{t-1})}, \tag{3}\]

where \(L\) is the loss function.
We then set \(\epsilon_{t}\) proportional to \(\pi_{t}\), with a proportionality constant determined by the total privacy budget \(\epsilon\):

\[\epsilon_{t}=\frac{\epsilon\pi_{t}}{\sum_{t}\pi_{t}}. \tag{4}\]

This strategy is designed to optimize the trade-off between learning accuracy and privacy preservation.

### Privacy-Utility Trade-off Analysis

We proceed to conduct a formal study of the privacy-utility trade-off in our proposed framework and derive bounds on the increase of the loss function caused by differential privacy. The concept of \((\epsilon,\delta)\)-differential privacy is used to quantify the privacy guarantees provided by an algorithm, typically in the context of statistical and machine learning analyses on private datasets. A randomized algorithm \(A\) provides \((\epsilon,\delta)\)-differential privacy if for all datasets \(D_{1}\) and \(D_{2}\) differing on at most one element, and for all sets of outputs \(S\) of \(A\), the following inequality holds:

\[\Pr[A(D_{1})\in S]\leq e^{\epsilon}\Pr[A(D_{2})\in S]+\delta, \tag{5}\]

where \(\Pr[A(D)\in S]\) denotes the probability that the output of the algorithm \(A\) applied to dataset \(D\) is in the set \(S\). Here \(\epsilon\) and \(\delta\) are non-negative parameters that control the privacy and accuracy of the algorithm. The parameter \(\epsilon\) is sometimes called the privacy parameter, with smaller values of \(\epsilon\) providing stronger privacy guarantees. The parameter \(\delta\), usually a small positive fraction, represents a failure probability with which the privacy guarantee might not be upheld. The overall goal of \((\epsilon,\delta)\)-differential privacy is to ensure that the removal or addition of a single database entry does not significantly change the probability distribution of the algorithm's output, thus providing privacy for individuals contributing data. To guarantee \(\epsilon\)-differential privacy, we introduce noise to the local model updates. This noise is drawn from a Laplace distribution scaled according to the sensitivity \(\Delta_{f}\) of the function and inversely scaled with \(\epsilon\). Consequently, the noise addition induces a deviation in the model parameters and therefore raises the empirical risk.

**Theorem 3.1**.: _In the proposed differentially private federated learning framework for medical image classification, the excess empirical risk due to differential privacy is bounded by:_

\[\|L(\mathbf{w}_{\epsilon})-L(\mathbf{w}^{*})\|\leq\frac{2\Delta_{f}^{2}\log(1/\delta)}{\epsilon^{2}T}, \tag{6}\]

_where \(\mathbf{w}_{\epsilon}\) denotes the model parameters trained with \(\epsilon\)-differential privacy, \(\mathbf{w}^{*}\) denotes the optimal parameters without privacy, \(L\) is the loss function, \(\Delta_{f}\) is the sensitivity of the function computing the local update, \(\epsilon\) is the total privacy budget, \(T\) is the number of iterations, and \(\delta\) is the failure probability in the differential privacy guarantee._

Proof.: Without loss of generality, we can assume the loss function \(L\) is 1-Lipschitz, since we can always normalize the data and the loss function accordingly without changing the privacy guarantee. The Laplace noise added to each local model update is of magnitude \(\Delta_{f}/\epsilon_{t}\), where \(\epsilon_{t}\) is the privacy budget at iteration \(t\). This implies that the magnitude of the noise decreases as \(\epsilon_{t}\) increases, so more noise is added in iterations where \(\epsilon_{t}\) is smaller.
Because of the Laplace noise, the model parameters after each iteration are perturbed from the non-private parameters by a distance of at most \(\Delta_{f}/\epsilon_{t}\). This implies that after \(T\) iterations, the total perturbation is at most \(\Delta_{f}\sum_{t=1}^{T}\frac{1}{\epsilon_{t}}\). In expectation, the magnitude of this perturbation is \(\Delta_{f}E[\sum_{t=1}^{T}\frac{1}{\epsilon_{t}}]\). By Jensen's inequality, we have

\[E[\sum_{t=1}^{T}\frac{1}{\epsilon_{t}}]\leq\frac{1}{T}\sum_{t=1}^{T}E[\frac{1}{\epsilon_{t}}]=\frac{1}{T}\sum_{t=1}^{T}\frac{1}{E[\epsilon_{t}]} \tag{7}\]

Since \(\epsilon_{t}\) is proportional to the learning progress \(\pi_{t}\), which is positive, \(E[\epsilon_{t}]>0\) for all \(t\). So, we have

\[E[\sum_{t=1}^{T}\frac{1}{\epsilon_{t}}]\leq\frac{1}{T}\sum_{t=1}^{T}\frac{1}{E[\epsilon_{t}]}\leq\frac{1}{T}\sum_{t=1}^{T}\frac{1}{\min_{t^{\prime}}E[\epsilon_{t^{\prime}}]}\leq\frac{1}{\min_{t^{\prime}}E[\epsilon_{t^{\prime}}]} \tag{8}\]

Applying the properties of the Laplace distribution and the definition of \((\epsilon,\delta)\)-differential privacy, we can bound the minimum expectation of the privacy budget by \(\frac{\epsilon}{\log(1/\delta)}\), i.e., \(\min_{t^{\prime}}E[\epsilon_{t^{\prime}}]\geq\frac{\epsilon}{\log(1/\delta)}\). Putting all these together, we obtain the bound

\[L(\mathbf{w}_{\epsilon})-L(\mathbf{w}^{*})\leq\frac{2\Delta_{f}^{2}}{\epsilon T}E[\sum_{t=1}^{T}\frac{1}{\epsilon_{t}}]\leq\frac{2\Delta_{f}^{2}}{\epsilon T}\frac{1}{\min_{t^{\prime}}E[\epsilon_{t^{\prime}}]}\leq\frac{2\Delta_{f}^{2}\log(1/\delta)}{\epsilon^{2}T} \tag{9}\]

This bound decreases with the total privacy budget \(\epsilon\) and shows that the model's performance can be maintained with an appropriate choice of \(\epsilon\). Our analysis thus provides a mathematical foundation for understanding the privacy-utility trade-off in the proposed framework.

### Implementation of the Federated Learning Model

Our federated learning model is built on a sequential architecture with three layers, the first two of which employ the "ReLU" activation function and the third of which uses the "Softmax" function. We employ an SGD optimizer with a categorical cross-entropy loss function.

```
Algorithm 1: Differentially private federated learning.

Input:  total privacy budget \(\epsilon\); number of clients \(n\);
        number of iterations \(T\); local datasets \(\{\mathbf{D}_{i}\}_{i=1}^{n}\)
Output: trained model parameters \(\mathbf{w}_{T}\)

1. Initialization: initialize global model parameters \(\mathbf{w}_{0}\).
2. For \(t=1,2,\ldots,T\) communication rounds do:
   (a) Broadcast: send \(\mathbf{w}_{t-1}\) to all clients.
   (b) Local update, for each client \(i\):
       i.   compute the local model update
            \(\Delta\mathbf{w}_{t,i}\leftarrow\text{LocalUpdate}(\mathbf{D}_{i},\mathbf{w}_{t-1})\);
       ii.  compute the sensitivity \(\Delta_{f}\leftarrow\|\Delta\mathbf{w}_{t,i}\|_{2}\);
       iii. sample noise \(\mathbf{b}_{t,i}\sim\text{Laplace}(0,\Delta_{f}/\epsilon_{t})\);
       iv.  add noise to the local update:
            \(\tilde{\Delta}\mathbf{w}_{t,i}\leftarrow\Delta\mathbf{w}_{t,i}+\mathbf{b}_{t,i}\).
   (c) Aggregate: compute \(\Delta\mathbf{w}_{t}\leftarrow\frac{1}{n}\sum_{i=1}^{n}\tilde{\Delta}\mathbf{w}_{t,i}\)
       and update the global model: \(\mathbf{w}_{t}\leftarrow\mathbf{w}_{t-1}+\Delta\mathbf{w}_{t}\).
   (d) Compute the loss \(L(\mathbf{w}_{t})\).
   (e) Compute the learning progress:
       \(\pi_{t}\leftarrow\frac{L(\mathbf{w}_{t-1})-L(\mathbf{w}_{t})}{L(\mathbf{w}_{t-1})}\).
   (f) Update the privacy budget: \(\epsilon_{t}\leftarrow\frac{\epsilon\pi_{t}}{\sum_{t}\pi_{t}}\).
3. Return \(\mathbf{w}_{T}\).
```
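As a complement to Algorithm 1, the following is a minimal Python/NumPy sketch of the communication rounds, under the simplifying assumptions that each model update is a flat parameter vector, that `local_update` stands in for the (unspecified) client-side training step, and that the privacy budget of Eq. (4) is read as a normalization over the progress values observed so far. All names and the synthetic data are hypothetical; this is an illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(D_i, w):
    # placeholder for the client training step of Algorithm 1;
    # here: a toy gradient step for a linear least-squares model
    X, y = D_i
    grad = X.T @ (X @ w - y) / len(y)
    return -0.1 * grad

def dp_federated_round(w, datasets, eps_t):
    noisy_updates = []
    for D_i in datasets:
        delta = local_update(D_i, w)
        sens = np.linalg.norm(delta)               # Delta_f = ||delta||_2
        noise = rng.laplace(0.0, sens / eps_t, size=delta.shape)
        noisy_updates.append(delta + noise)        # eps_t-DP local update
    return w + np.mean(noisy_updates, axis=0)      # federated averaging

def allocate_budget(eps_total, progress):
    # eps_t proportional to the learning progress pi_t (Eq. 4),
    # normalized over the progress observed so far (an assumption)
    pi = np.maximum(np.asarray(progress), 1e-12)
    return eps_total * pi[-1] / pi.sum()

# toy run: 3 clients with synthetic linear-regression-style data
d = 5
datasets = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(d)
eps_total, progress, prev_loss = 1.0, [1.0], 1.0
for t in range(5):
    eps_t = allocate_budget(eps_total, progress)
    w = dp_federated_round(w, datasets, eps_t)
    loss = np.mean([np.mean((X @ w - y) ** 2) for X, y in datasets])
    progress.append(max((prev_loss - loss) / prev_loss, 0.0))
    prev_loss = loss
```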
The learning rate is dynamically adjusted using a decay function, given by:

\[a=\frac{1}{1+\alpha r}, \tag{10}\]

where \(a\) is the learning rate, \(\alpha\) is the decay rate, and \(r\) is the number of rounds. After the model is built, the global model is initialized. For each round, the global weights serve as the initial weights for all the local models. Each client's data is randomized, a new local model is created for each client, the weights are initialized, and then the average of the weights of the models trained at each client is assigned to the global weights (federated averaging). The model training is repeated until the model converges, and the final model parameters are then used for predicting new medical images.

Algorithm 1 implements the methodology described above. Each client computes a local model update and adds noise drawn from the Laplace distribution for privacy protection. The privacy budget is allocated adaptively based on the learning progress at each iteration. The server collects the noisy local updates, scales them by the number of samples at each client, and applies them to the global model. The learning rate is adjusted by the decay function. To sum up, we have introduced a novel noise calibration mechanism for differential privacy, an adaptive privacy budget allocation strategy, and a formal analysis of the privacy-utility trade-off in federated learning. These contributions are expected to enhance the security and efficiency of medical image classification systems.

## 4 Experiments and Evaluation

### Dataset

The HAM10000 Skin Image Dataset [24], an acclaimed corpus of dermatoscopic images, has been widely employed in both medical and computer science research domains. This robust dataset, comprising 10,015 categorized images of pigmented skin lesions, serves as a vital learning tool for various types of skin cancers. The image set encompasses 8,902 benign cases and 1,113 malignant ones. The sheer volume and diversity of this dataset render it a pivotal asset for the training and validation of machine learning algorithms, specifically for the development of diagnostic methodologies. Consequently, the dataset has significantly facilitated advancements in the arena of dermoscopic image analysis, thus contributing to the evolution of artificial intelligence applications within the dermatology field.

In order to obtain samples from two independent populations, this dataset was split into two parts. The larger segment was deployed for model fine-tuning, while the other part served as an autonomous client. This step was deemed essential given the scarcity and size constraints of publicly available independent datasets within the medical imaging domain. Preliminary experiments revealed that without initial fine-tuning, client-led federated learning models were prone to overfitting and demonstrated a high degree of learning instability. To mitigate this, a strategic approach was employed wherein the global model was initially fine-tuned on a comparatively extensive dataset. Subsequently, the clients were allowed to proceed with the learning process. This approach ensured persistent client-level independence and effectively controlled overfitting. The dataset was subjected to a rigorous pre-processing pipeline to optimize feature extraction and improve model performance.
This process involved cropping based on the lesion location, shuffling, normalization, and histogram equalization to enhance image visibility. Moreover, a manual curation process was implemented to include only images devoid of distracting elements and outliers in the pre-processed dataset.

A paramount facet of federated learning is the independence of client datasets. To ensure this in our study, a custom top model was initially pre-trained on the ImageNet dataset. This pre-trained model was subsequently fine-tuned on the initial split of the HAM10000 dataset. To generate independent client datasets, four discrete datasets from the Cancer Imaging Archive (TCIA) were procured. In addition, two other pivotal datasets, PH2 and MSK, were utilized to create independent clients for our federated learning system. The PH2 dataset, a publicly accessible skin image database developed by the Dermatology Service of Hospital Pedro Hispano (Portugal), features dermoscopic images of melanocytic lesions. The Memorial Sloan Kettering (MSK) dataset, on the other hand, is a proprietary source of skin lesion images that adds further diversity and richness to the pool of independent client data. Each of these independent datasets functioned as an individual client within our federated learning system, thereby strictly maintaining the data independence integral to federated learning.

### Architecture

The model architecture utilized for our experiments comprised the widely acknowledged MobileNetV2 architecture [19] as the base, pre-trained on the ImageNet dataset [6]. The choice of MobileNetV2 was driven by its superior performance in image classification tasks and its adaptability for transfer learning. This robust, pre-trained model served as the foundation upon which a customized top was placed, designed specifically to cater to our task of skin lesion classification. The model was structured in such a way that the base MobileNetV2 model extracted lower-level features from the skin images, while the custom top performed higher-level feature extraction and the final classification. The custom top consisted of several convolutional layers, activation functions, and a final fully connected layer with softmax activation to predict the class probabilities. Dropout and batch normalization layers were also included to improve the generalization ability of the model and to speed up the training process. This base-model-plus-custom-top design allowed us to leverage the power of the MobileNetV2 model and fine-tune it to meet the requirements of our specific classification task. It also ensured that we maximized our use of the limited medical image data available. The detailed structure of the base model, in conjunction with our custom top, is shown in Figure 2.

### Results and Interpretation

To evaluate the performance of our framework, we compared it against a baseline federated learning approach without differential privacy. We trained the models using the federated learning algorithm described in Section 3, with and without the integration of differential privacy. We used a total privacy budget of \(\epsilon=1.0\) and set the number of iterations to \(T=36\). Table 1 presents the classification performance of the different models. We report the accuracy, precision, recall, and F1-score as evaluation metrics. As can be observed, the differentially private federated learning framework achieves competitive performance compared to the baseline federated learning approach without privacy.
Although there is a slight decrease in accuracy and the other metrics, the results remain promising, considering the robust privacy guarantees provided by differential privacy.

| Model | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| Baseline (on HAM10000) | 0.9068 | 0.9075 | 0.9067 | 0.9071 |
| Federated Learning (on 3 clients) | 0.8821 | 0.8855 | 0.8786 | 0.8820 |
| Federated Learning with DP (on 3 clients) | 0.8464 | 0.7907 | 0.8203 | 0.8052 |

Table 1: Classification performance of the different models.

Figure 2: Detailed model architecture: MobileNetV2 as the base model with a custom top.

The results demonstrate that our proposed differentially private federated learning framework successfully preserves privacy while maintaining reasonable classification performance. The slightly reduced performance can be attributed to the introduction of noise during the local model updates, as required by differential privacy. However, this trade-off between privacy and utility is acceptable, considering the sensitive nature of medical image data. Furthermore, we conducted experiments to analyze the impact of the adaptive privacy budget allocation strategy described in Section 3. Figure 3 illustrates the allocation of the privacy budget over the course of the federated learning iterations. As shown, the strategy allocates a larger privacy budget in the initial iterations, when the model learns the most from the data, and then gradually decreases the privacy budget in later iterations. This adaptive allocation helps optimize the privacy-utility trade-off and ensures efficient use of the privacy budget.

Figure 3: Adaptive privacy budget allocation over federated learning iterations.

## 5 Conclusion and Future Work

In light of the prevailing privacy concerns in healthcare data, this study has proposed a novel approach to medical image classification that skillfully integrates federated learning with differential privacy, ensuring privacy preservation while maintaining performance efficacy. This is particularly crucial in medical image analysis, given the highly sensitive nature of the data involved. Our methodology has successfully established a secure framework for medical image classification utilizing federated learning. The inherent vulnerabilities of federated learning have been mitigated by integrating differential privacy, thus reinforcing privacy safeguards. We have introduced a novel noise calibration mechanism and an adaptive privacy budget allocation strategy, and presented a formal analysis of the privacy-utility trade-off in federated learning. These mechanisms cumulatively contribute to a model that can learn effectively from the data while preserving the privacy of individual contributions. Moreover, the design of the federated learning model and its various components, such as the learning rate decay function, further contribute to its effectiveness. The proposed model exhibited noteworthy performance in the classification of skin lesion and brain tumor images, demonstrating significant promise for its application in various medical image classification tasks. While the model's performance is influenced by the number of clients, appropriate parameter tuning can optimize the results. While the study successfully addresses the need for privacy preservation in medical image analysis, it has a few limitations.
Firstly, the model's performance could be affected by variability in the quality of the images and the extent of preprocessing required. Secondly, the choice of \(\epsilon\) (privacy budget) in differential privacy is critical to achieving a balance between privacy and model performance. Deciding the value of \(\epsilon\) requires careful consideration of the particular application and the desired level of privacy. Finally, the trade-off between privacy and utility is inherent in this framework, which mandates further exploration to minimize potential adverse effects on the model's performance while ensuring optimal privacy.

Looking ahead, future research should explore strategies to optimize the allocation of the privacy budget in differential privacy to improve model accuracy while upholding robust privacy protection. Another promising direction is to explore other noise addition mechanisms and differential privacy techniques that can further enhance the privacy guarantees of the proposed model. Additionally, more extensive testing of the proposed model with diverse datasets and tasks, and in different healthcare contexts, can help to further evaluate and refine the approach.

## Acknowledgements

This research has been funded in part by the Ministry of Education, India, under grant reference number OH-31-24-200-428 and the Department of Atomic Energy, India, under grant number 0204/18/2022/R&D-II/13979.
2309.17111
Master-Slave synchronization of silicon optomechanical nanobeam oscillators by external feedback
The remote synchronization of oscillators is essential for improving the performance, efficiency, and reliability of various systems and technologies, ranging from everyday telecommunications to cutting-edge scientific research and emerging technologies. In this work, we unequivocally demonstrate a master-slave type of synchronization between two self-sustained optomechanical crystal oscillators that interact solely through an external optical feedback stage. Several pieces of experimental evidence rule out the possibility of resonant forcing, and, in contrast to previous works, indicate that synchronization is achieved in the regime of natural dynamics suppression. Our experimental results are in agreement with the predictions of a numerical model describing the specific mechanical lasing dynamics of each oscillator and the unidirectional interaction between them. The outcomes of our study pave the way toward the synchronization of clock signals corresponding to far-placed processing elements in a future synchronous photonic integrated circuit.
David Alonso-Tomás, Nestor E. Capuj, Laura Mercadé, Amadeu Griol, Alejadro Martínez, Daniel Navarro-Urrios
2023-09-29T10:10:42Z
http://arxiv.org/abs/2309.17111v1
# Master-slave synchronization of silicon optomechanical nanobeam oscillators by external feedback

###### Abstract

The remote synchronization of oscillators is essential for improving the performance, efficiency, and reliability of various systems and technologies, ranging from everyday telecommunications to cutting-edge scientific research and emerging technologies. In this work, we unequivocally demonstrate a master-slave type of synchronization between two self-sustained optomechanical crystal oscillators that interact solely through an external optical feedback stage. Several pieces of experimental evidence rule out the possibility of resonant forcing and, in contrast to previous works, indicate that synchronization is achieved in the regime of natural dynamics suppression. Our experimental results are in agreement with the predictions of a numerical model describing the specific mechanical lasing dynamics of each oscillator and the unidirectional interaction between them. The outcomes of our study pave the way toward the synchronization of clock signals corresponding to far-placed processing elements in a future synchronous photonic integrated circuit.

Synchronization is the term used to describe the coordination of the temporal dynamics of two or more self-sustained oscillators by means of a weak interaction [1]. First reported by Christiaan Huygens in the 17th century [2], this phenomenon has been widely observed throughout nature, from the microscopic to the macroscopic world [3; 4; 5; 6; 7]. Unsurprisingly, synchronization, either unidirectional or bidirectional, has garnered significant interest in the last decades, with applications in signal processing [8], RF communications [9], clock synchronization [10], and even neural networks [11], among others. With the advances in micro- and nanotechnologies, efforts have been dedicated to achieving the synchronization of self-oscillating micro- and nanoelectromechanical systems (MEMS/NEMS), which offer robust high-frequency operation, miniaturization, and great scalability [12; 13; 14; 15; 16].

Optomechanical oscillators (OMOs) are a subset of MEMS/NEMS oscillators that sustain large-amplitude coherent mechanical motion driven by optical forces [15]. OMOs are great candidates to explore synchronization mechanisms, since the interaction can be mediated by optical signals and, therefore, be effective even at large distances between nodes. However, there are only a few studies on this topic performed on OMOs based on microdisk or microsphere resonators [17; 18; 19; 20]. On the other hand, there are several proposals concerning the synchronization of OMOs that range from purely mechanical synchronization through a mechanical link [21] to the use of a common optical mode that drives both oscillators [22; 23; 24]. These alternatives provide low scalability and would not allow on-demand synchronization of distant subsystems within a complex OM network.

In this manuscript, we unequivocally demonstrate the synchronization between two OMOs based on silicon optomechanical (OM) crystal nanobeams driven by optical forces, which are spectrally separated both in the mechanical and the optical domain. The synchronization scheme is a unidirectional master-slave configuration where the light modulation generated by one OMO is fed to the other. Unlike circular resonators such as disks, spheres, or toroids, OM crystal nanobeams offer a compact and easily integrable solution within a silicon-based platform.
These nanobeams are physically connected to the rest of the chip, allowing for the direct extraction of coherent mechanical signals into a phononic circuit when needed. Furthermore, the route towards synchronization proceeds in the regime of suppression of the natural dynamics described by Balanov [1], instead of the phase-locking mechanisms explored in previous literature [18; 25].

An essential condition for synchronization between two oscillators is that both of them should be self-sustained [1; 7; 13]. This means that, in each oscillator, gain overcomes mechanical losses and the mechanical motion becomes coherent and of large amplitude. This regime will be referred to hereon as mechanical lasing, given its similarity with the optical counterpart, and the OM crystal nanobeams will be treated as OMOs. The OM crystals under study are driven to the mechanical lasing regime using a self-pulsing (SP) mechanism that has been explored in previous works [26; 27] in the sideband-unresolved regime. It is based on the anharmonic modulation of the radiation pressure force induced by the dynamical self-sustained limit cycle generated between free-carrier dispersion (FCD) and the thermo-optic (TO) effect in silicon. Essentially, these effects produce a modulation of the refractive index of the material and, therefore, a movement of the cavity resonance at a frequency \(\nu_{sp}\). When a harmonic (M) of \(\nu_{sp}\) is partially resonant with a mechanical mode of the structure, it can provide coherent amplification and drive the mode to the mechanical lasing regime. The SP frequency can be thermally tuned in the MHz range by increasing the average intra-cavity photon number (\(n_{0}\)) so that the cavity is heated up (see Supplemental Section S2).

The device investigated here is composed of a pair of integrated, nominally identical one-dimensional OM crystal cavities, which have been fabricated using standard Si nanofabrication techniques on a silicon-on-insulator wafer (see Supplementary Section S1). The outermost five cells on each side of the OM crystals are anchored with tethers to the partially underetched Si frame (Fig. 1a). This arrangement ensures that the in-plane flexural modes are isolated from the frame and restricted to the central area of the cavities, which are specifically engineered to sustain high-quality-factor optical modes for transverse electric (TE) polarization around 1.53 \(\mu\)m [28]. The two cavities are separated by 2 \(\mu\)m, in such a way that it is possible to optically excite them simultaneously with a single tapered fiber placed in between. Although these geometries are nominally identical, fabrication imperfections produce slightly different mechanical and optical resonant frequencies. In particular, the mechanical modes used in this work correspond to the in-plane flexural ones having three anti-nodes along the x direction and mechanical frequencies of (\(f_{m}\), \(f_{s}\)) = (\(\Omega_{m}\), \(\Omega_{s}\))/2\(\pi\) = (100.11, 99.41) MHz, where the subindices m and s denote master and slave, respectively (Fig. 1b). The transmission spectrum of the whole device is obtained by performing a sweep in wavelength at low input power (\(P_{in}\) = 0.5 mW), which results in two well-separated optical resonances at (\(\lambda_{m}\), \(\lambda_{s}\)) = (1528.20, 1540.05) nm, with overall quality factors of \(Q_{m,s}\) = (5.68, 6.67) \(\cdot\) 10\({}^{3}\), respectively (Fig. 1c). It is worth noting that separated optical resonances are essential to avoid optical cross-talk between the OMOs.
The device investigated here is composed of a pair of integrated, nominally identical one-dimensional OM crystal cavities, which have been fabricated using standard Si nanofabrication techniques on a silicon-on-insulator wafer (see Supplementary Section S1). The outermost five cells on each side of the OM crystals are anchored with tethers to the partially underetched Si frame (Fig. 1a). This arrangement ensures that the in-plane flexural modes are isolated from the frame and restricted to the central area of the cavities, which are specifically engineered to sustain high-quality-factor optical modes of transverse electric (TE) polarization around 1.53 \(\mu\)m [28]. The two cavities are separated by 2 \(\mu\)m, so that it is possible to optically excite them simultaneously with a single tapered fiber placed in between. Although these geometries are nominally identical, fabrication imperfections produce slightly different mechanical and optical resonant frequencies. In particular, the mechanical modes used in this work correspond to the in-plane flexural ones having three anti-nodes along the x direction, with mechanical frequencies of (\(f_{m}\), \(f_{s}\)) = (\(\Omega_{m}\), \(\Omega_{s}\))/2\(\pi\) = (100.11, 99.41) MHz, where the subindices m and s denote master and slave, respectively (Fig. 1b). The transmission spectrum of the whole device is obtained by performing a wavelength sweep at low input power (\(P_{in}\) = 0.5 mW), which results in two well-separated optical resonances at (\(\lambda_{m}\), \(\lambda_{s}\)) = (1528.20, 1540.05) nm, with overall quality factors of \(Q_{m,s}\) = (5.68, 6.67) \(\cdot\) 10\({}^{3}\), respectively (Fig. 1c). It is worth noting that well-separated optical resonances are essential to avoid optical cross-talk between the OMOs. Fig. 1d shows the setup used to perform the experiment. To achieve the simultaneous optical excitation of both OMOs we employ two tunable lasers, each one tuned to the resonant wavelength of its corresponding OMO. The polarization of each laser is controlled to be TE, matching that of the cavity modes. Light from the two lasers is then combined and guided to a microloop-shaped fiber that has been thinned down to a diameter of about 1.5 \(\mu\)m. The bottom part of the microloop acts as a probe that enables the local excitation of the optical cavity modes when the cavities are placed in the near-field region of the fiber.

Figure 1: Characteristics of the tested optomechanical crystal cavities and experimental setup. **(a)** SEM image of a pair of integrated optomechanical cavities. Optical cavity modes lie in the highlighted region. **(b)** FEM simulation of the mechanical displacement field of the mechanical mode under study. This analysis utilizes a geometry imported from the SEM image of panel a. **(c)** Optical transmission spectra of the device under test. The red and blue shaded regions denote the master and slave resonances, respectively. **(d)** Schematic of the experimental setup. Tunable laser (TL); fiber polarization controller (FPC); tunable Fabry-Pérot filter (WF); photodetector (PD); spectrum analyzer (SA); oscilloscope (OSC); attenuator (ATT); amplifier (AMP); electro-optic modulator (EOM). Red and blue paths indicate the different laser wavelengths necessary to excite the master and slave resonances, respectively. The purple path indicates the zones where both wavelengths propagate simultaneously.

Light is then divided into two paths and spectrally filtered by tunable Fabry-Pérot filters (WF), so that the laser wavelengths resonant with the master and slave cavities are recorded at photodetectors PD1 and PD2, respectively. The slave signal is sent to a spectrum analyzer (SA), while that of the master is introduced as modulation feedback on the laser exciting the slave by means of an electro-optic modulator (EOM) with a half-wave voltage \(V_{\pi}\) = 3.5 V. The offset voltage is set near the quadrature point, \(V_{DC}\) = 0.5\(V_{\pi}\), so that for small RF modulation signals the output light power responds linearly (see Supplementary Section S3). The magnitude of the feedback signal and, consequently, the modulation amplitude of the slave laser are governed by a stage that allows for tunable attenuation or amplification. As a result of this experimental configuration, the external feedback is the only interaction between the two OMOs. Thus, even if the two OMOs are physically placed close to each other, the system is equivalent to having them separated in space. Finally, the output signals of both detectors are temporally analyzed in a 4-channel oscilloscope (OSC).
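The quadrature-point operation of the EOM mentioned above can be made explicit. Assuming a standard Mach-Zehnder-type amplitude modulator (an assumption on our part; the text does not specify the EOM architecture), the transmitted power follows

\[P_{out}=\frac{P_{in}}{2}\left[1+\cos\!\left(\pi\,\frac{V_{DC}+V_{RF}(t)}{V_{\pi}}\right)\right]\ \xrightarrow{\;V_{DC}=0.5V_{\pi}\;}\ \frac{P_{in}}{2}\left[1-\sin\!\left(\pi\,\frac{V_{RF}(t)}{V_{\pi}}\right)\right]\approx\frac{P_{in}}{2}\left[1-\pi\,\frac{V_{RF}(t)}{V_{\pi}}\right],\]

so that the optical modulation is indeed linear in the feedback voltage as long as \(|V_{RF}|\ll V_{\pi}=3.5\) V.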
By using the SP mechanism explained before, both cavities are excited to different mechanical lasing regimes: M = 3 for the slave (where the third harmonic of the SP provides the mechanical amplification) and M = 1 for the master. This mechanical lasing scheme allows us to clearly distinguish between a forcing mechanism and master-slave synchronization, since the differences between the two may be subtle. Indeed, the modulation feedback generated by the master could resonantly drive the slave while eliminating its self-sustained mechanical oscillation. This would obviously lead to a slave mechanical oscillation that is coherent with that of the master, which could be mistaken for synchronization. By keeping an M = 3 regime in the slave throughout the whole set of measurements we rule out the possibility of resonant forcing, since otherwise the SP dynamics would disappear. Fig. 2a shows the RF response of the slave OMO in a 10 MHz spectral range around the mechanical resonance as the feedback amplification is varied. As the feedback modulation amplitude is increased, sidebands resulting from the coherent sum of both nonlinear oscillations (the slave natural frequency and the master modulation harmonics) appear at various combinations of their frequencies. Synchronization is observed above an amplification threshold of 15 dB, at which the mechanical frequency of the slave locks to that of the master and most of the sidebands disappear. Two wide sidebands remain at a beating frequency of \((\Omega_{m}-\Omega_{s})/2\pi\); these are clear signatures of master-slave synchronization that have been reported in previous works addressing synchronization of photonic cavities [24]. Their origin lies in the thermal force noise acting on the slave, which tends to push the slave dynamics away from the synchronization limit cycle.

Figure 2: Radio-frequency (RF) analysis of the light transmission modulated by the slave dynamics detected at PD2. **(a)** Contour RF plot near the natural mechanical peak of the slave dynamics as a function of the amplification introduced in the feedback stage. **(b)** Wider RF spectrum for different amplification powers. Note that the M = 3 dynamics is present as two extra peaks at one and two thirds of the natural frequency of the slave. **(c)** Magnification around the natural resonance of the slave. **(d)** Amplitude of the slave's oscillation at the slave (blue) and master (red) natural frequencies as a function of the feedback amplification.

These experimental results have been compared with numerical simulations performed using a model based on the SP equations coupled to harmonic mechanical oscillators, showing good qualitative agreement (see Supplementary Section S3). The measured synchronization mechanism resembles the suppression of the natural dynamics route described by Balanov [1] for a Van der Pol self-sustained oscillator under the actuation of a harmonic external force. Under this mechanism, the synchronization region is entered at relatively large forcing amplitudes in comparison to a phase-locking mechanism, one of the main characteristics of this route being the absence of frequency pulling. Indeed, synchronization by suppression appears when the separation between the natural frequencies of the oscillators is rather large, which is indeed our case (\(\Delta f\) = 0.7 MHz, i.e., about 0.7%). As shown in Fig. 2b, the M = 3 mechanical lasing dynamics of the slave is preserved even above the synchronization threshold. Thus, as mentioned earlier, this is evidence that the slave is merely adapting its dynamics to synchronize with the modulation generated by the master, which discards resonant forcing. The intensity of the RF peaks appearing in the slave signal at the natural frequencies of each OMO is analyzed in Fig. 2d as a function of the feedback-stage amplification. As expected, the RF peak associated with the slave remains constant until it sharply declines after synchronization.
In the case of the master RF peak, there is a linear relation with the feedback amplification, with a slope close to one, which indicates that the response of the modulation amplitude to the amplification/attenuation stage is linear. Interestingly, this linear tendency is interrupted by an abrupt increase of the master RF peak signal when the transition to synchronization occurs. This effect is linked to the transfer of self-sustained oscillation energy from the slave frequency to the master frequency, further confirming synchronization rather than forced oscillation. It is also worth mentioning that, as shown in Fig. 2c, even in the absence of external feedback there is a weak RF peak associated with the dynamics of the master, which originates from a subtle mechanical cross-talk between the OMOs through the frame surrounding them. This interaction, in addition to being negligible compared to the amplitude of the lasing mechanical motion, is not in phase with the feedback introduced in the slave OMO through the external mechanism. Its contribution to the synchronization mechanism can therefore be neglected. The temporal behaviour of the transmitted signals of both OMOs has been analyzed in the oscilloscope by recording traces of 800 ns using the slave OMO signal as the trigger. Fig. 3 shows the data represented as a Poincaré map, where the z-axis has been chosen to be \(\sin\left(\frac{2\pi f_{sync}}{3}\Delta t\right)\), consistent with the stroboscopic sampling at \(f_{sync}/3\), to illustrate the trajectory in a three-dimensional space. Below the synchronization threshold (Fig. 3a), most of the phase space is filled by the traces, which is a clear indication that the slave and master signals are not in sync. On the other hand, when synchronization occurs, the trajectory follows a closed curve in the phase diagram (Fig. 3b). A fit was performed to the oscillation trace of the slave to clearly visualize the trajectory of the cycle. Finally, in Fig. 3c the phase noise of the free-running and synchronized slave OMO is reported and compared with that of the master.

Figure 3: Temporal dynamics and phase noise of the OM oscillators in the synchronized and free-running regimes. **(a)** Poincaré map of the recorded temporal traces of the free-running slave using a stroboscopic technique with a sampling frequency of \(f_{sync}/3\). Each colored curve corresponds to a different value of the initial delay (\(\Delta t\)). Projections onto the different 2-dimensional planes are shown in grey. **(b)** Same representation for the case when the feedback amplification is above the synchronization threshold (20 dB). A fit (green) is performed to the raw data of the slave oscillation (grey). **(c)** Phase noise of the master oscillator (red) and of the free-running and synchronized slave oscillator (yellow and green, respectively) as a function of the frequency offset.

The synchronized slave reduces its phase noise at low frequencies until it becomes similar to that of the free-running master OMO, which exhibits a value of -85 dBc/Hz at 10 kHz. In conclusion, we have unambiguously demonstrated a master-slave type of synchronization between two independent 1D optomechanical-cavity self-sustained oscillators by introducing an external optical feedback mechanism. The route towards synchronization has been shown to proceed via suppression of the natural dynamics instead of the more standard phase locking. Furthermore, one-dimensional cavities offer the advantage of being easier to integrate and less bulky than the systems previously described in the literature.
Even though in this work the oscillators are integrated on the same platform, the synchronization achieved by this method does not depend on the distance between them. In this way, the system presented here could be scaled up to networks of optomechanical oscillators interacting remotely. This work was supported by the MICINN projects ALLEGRO (Grants No. PID2021-124618NB-C22 and PID2021-124618NB-C21) and MOCCASIN-2D (Grant No. TED2021-132040B-C21). A. M. acknowledges funding from the Generalitat Valenciana under grants IDIFEDER/2020/041 and IDIFEDER/2021/061.
2309.03164
J-Guard: Journalism Guided Adversarially Robust Detection of AI-generated News
The rapid proliferation of AI-generated text online is profoundly reshaping the information landscape. Among various types of AI-generated text, AI-generated news presents a significant threat as it can be a prominent source of misinformation online. While several recent efforts have focused on detecting AI-generated text in general, these methods require enhanced reliability, given concerns about their vulnerability to simple adversarial attacks. Furthermore, due to the eccentricities of news writing, applying these detection methods for AI-generated news can produce false positives, potentially damaging the reputation of news organizations. To address these challenges, we leverage the expertise of an interdisciplinary team to develop a framework, J-Guard, capable of steering existing supervised AI text detectors for detecting AI-generated news while boosting adversarial robustness. By incorporating stylistic cues inspired by the unique journalistic attributes, J-Guard effectively distinguishes between real-world journalism and AI-generated news articles. Our experiments on news articles generated by a vast array of AI models, including ChatGPT (GPT3.5), demonstrate the effectiveness of J-Guard in enhancing detection capabilities while maintaining an average performance decrease of as low as 7% when faced with adversarial attacks.
Tharindu Kumarage, Amrita Bhattacharjee, Djordje Padejski, Kristy Roschke, Dan Gillmor, Scott Ruston, Huan Liu, Joshua Garland
2023-09-06T17:06:31Z
http://arxiv.org/abs/2309.03164v1
# J-Guard: Journalism Guided Adversarially Robust Detection of AI-generated News ###### Abstract The rapid proliferation of AI-generated text online is profoundly reshaping the information landscape. Among various types of AI-generated text, AI-generated news presents a significant threat as it can be a prominent source of misinformation online. While several recent efforts have focused on detecting AI-generated text in general, these methods require enhanced reliability, given concerns about their vulnerability to simple adversarial attacks. Furthermore, due to the eccentricities of news writing, applying these detection methods for AI-generated news can produce false positives, potentially damaging the reputation of news organizations. To address these challenges, we leverage the expertise of an interdisciplinary team to develop a framework, **J-Guard**, capable of steering existing supervised AI text detectors for detecting AI-generated news while boosting adversarial robustness. By incorporating stylistic cues inspired by the unique journalistic attributes, **J-Guard** effectively distinguishes between real-world journalism and AI-generated news articles. Our experiments on news articles generated by a vast array of AI models, including ChatGPT (GPT3.5), demonstrate the effectiveness of **J-Guard** in enhancing detection capabilities while maintaining an average performance decrease of as low as 7% when faced with adversarial attacks. ## 1 Introduction Recent advances in transformer-based generative models have led to substantial enhancements in the Natural Language Generation (NLG) capabilities of advanced conversational AIs, such as ChatGPT and BARD. These AI tools generate human-like text on a large scale by leveraging state-of-the-art (SOTA) pre-trained language models (PLMs) such as GPT 4 (OpenAI, 2023), GPT 3.5 (Ouyang et al., 2022), GPT 3 (Radford et al., 2019), OPT (Zhang et al., 2022) and LaMDA (Thoppilan et al., 2022). Considering the current trend of deploying these models in services offered to the general public, we can anticipate further improvements in NLG from future models. However, deploying such NLG-capable models for public use poses the risk of potential misuse. Adversaries can employ these models to advance harmful agendas and conduct influence operations that deceptively steer the opinions of large groups of a target populace (Shu et al., 2020; Goldstein et al., 2023). AI-generated news articles are particularly concerning, as they can cause significant damage to the information ecosystem. Malicious actors can easily prompt AI models to generate text that purports to be authentic news but contains falsified information (Shu et al., 2018; Zellers et al., 2019). To make matters worse, current models are capable of generating misinformation and factually incorrect text in large volumes at a minimal cost through APIs. A recent report 1 by NewsGuard, an organization that combats misinformation online, identified an emerging set of 49 newsbots, i.e., news and information sites that appear to incorporate AI for news generation. Therefore, it is crucial to have computational methods to discern AI-generated news from actual human-written news in order to combat the persistent challenges to the information ecosystem.
Footnote 1: [https://www.newsguardtech.com/special-reports/newsbots-ai-generated-news-websites-proliferating/](https://www.newsguardtech.com/special-reports/newsbots-ai-generated-news-websites-proliferating/) In recent years, much interesting work has been done on detecting AI-generated text (Zellers et al., 2019; Mitchell et al., 2023; Kirchenbauer et al., 2023). However, most of these methods, which we discuss in our Related Works section, do not explicitly focus on AI-generated news. Therefore, using these general-purpose AI text detectors to detect AI-generated news has a few challenges: 1) the unique attributes of professional journalism make news articles distinct from typical human-written text, so applying general AI text detection methods to AI-generated news detection could lead to false positives that potentially damage the reputation of journalists and news organizations; and 2) existing AI text detectors are highly vulnerable to adversarial attacks, e.g., paraphrasing (Sadasivan et al., 2023; Krishna et al., 2023). To address the above challenges, we leverage the expertise of an interdisciplinary team, which includes journalists, computer scientists, and communication scholars, to develop a framework for **J**ournalism **G**uided **A**dversarially **R**obust **D**etection of AI-generated News (**J**-**Guard**). To this end, we first studied the unique professional journalism attributes of the writing and publishing process behind human-written news articles. Throughout the journalism process, many stylometric cues are incorporated, including journalism standards employed by the journalist as well as specific newsroom style guides and standards imposed by the newsroom editors. Here we hypothesize that even though PLMs learn human-level writing via pretraining, they will potentially display semantic gaps in replicating the style guides and journalism standards inherent to the news production process. Therefore, we propose incorporating a simple yet effective set of auxiliary stylistic cues to guide existing supervised AI text detectors in distinguishing real-world journalism from the AI generation of news articles using PLMs. Furthermore, as we will show, since these cues quantify the high-level stylometry of the text, the detection process is more robust to character- and word-level perturbations, thus reinforcing the adversarial robustness of our AI-generated news detection methodology. To summarize, the main contributions of our paper are as follows: 1. To the best of our knowledge, we are the first to study and quantify stylistic cues resulting from the latent journalism process in real-world news organizations for discriminating AI-generated news. 2. We propose a computational framework incorporating these stylistic cues to detect AI-generated news. 3. We conduct extensive experiments on a vast array of publicly available PLMs, including ChatGPT (GPT 3.5), to show our approach's effectiveness in detecting AI-generated news. 4. By producing character- and word-level attacks, we empirically show how the stylistic cues we incorporated improve the adversarial robustness of AI-generated news detection. ## 2 Journalism Background Journalism as an industry does not universally subscribe to codes of conduct, owing largely to a historical rejection of standardization as a profession (Shapiro, 2010). Several trade groups, including the Society for Professional Journalists, have created detailed style guides.
Many news organizations have adopted them internally, and others have created their own versions. Scholars (Broersma, 1880; Shapiro, 2010; Mateus, 2018) have noted that, though the reporting process is typically situational, which makes it difficult to routinize, there are some key areas in which common methods, processes, and values signal an intent to establish credibility. Form and style are integral to convincing people of the 'truthiness' of newsworthy events (Broersma, 1880; Mateus, 2018). Journalistic practices that have been widely adopted include the use of the inverted pyramid as a storytelling format (Mateus, 2018) and a style of writing based on the Associated Press Stylebook 2. Mateus (2018) describes form and style "as key components of journalistic discourse that, in a given time, are able to generate credibility and confidence." Though the AP Stylebook is not universally followed among news organizations, and some make situational exceptions, we hypothesize that encountering purported news articles that are widely divergent from what the AP recommends is a strong signal of inauthenticity. In fact, adherence to the Stylebook is one of the key factors in the Associated Press' automated journalism efforts (Linden, 2017). Footnote 2: [https://www.apstylebook.com/](https://www.apstylebook.com/) In our study, we aim to integrate the aforementioned hypothesis of inauthenticity into the task of detecting AI-generated news. Specifically, we investigate the extent to which current AI models are capable of generating news articles that adhere to professional journalism standards. Figure 1 illustrates a clear distinction in distribution between GPT3-generated news articles and those written by humans from reputable news organizations such as CNN and the Washington Post. As illustrative examples of journalism features, we consider the length of introductory sections (leading sentences and paragraphs) and the usage of Oxford commas. Professional journalism typically employs shorter and more concise introductions, while the use of Oxford commas is infrequent in accordance with AP standards. Hence, we observe the potential of leveraging our hypothesis to enhance the detection of AI-generated news. In the subsequent section, we delve into a detailed discussion of the journalism features that can be utilized for detecting AI-generated news. ## 3 AI-generated News Detection This section presents the details of the **J-Guard** framework. The **J-Guard** framework consists of two main components: (a) the base AI text detector component and (b) the journalism guidance component. The base AI text detector is any PLM sequence classification model. The journalism guidance component injects auxiliary journalism cues into the detection pipeline, thus transforming the base detector into an AI-generated news detector. We provide a comprehensive discussion of these two components in the following sections. ### Base AI Text Detector The base AI text detector component consists of a pretrained transformer encoder stack with \(n\) encoders to learn the semantic representation of the given input news article \(X\). Here we define \((x_{1},x_{2},...,x_{k})\) as the token representation of the input \(X\) according to the tokenizer of the PLM model we choose for the base detector component.
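As a concrete illustration of this encoding step, the following minimal sketch uses the Hugging Face transformers library; the choice of "roberta-base" is our assumption here, matching the PLM the paper later reports selecting in Section 4:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

article = "Officials announced on Tuesday that ..."  # input news article X
# (x_1, ..., x_k): token representation of X under the PLM's tokenizer.
tokens = tokenizer(article, truncation=True, max_length=512,
                   return_tensors="pt")
with torch.no_grad():
    B = encoder(**tokens).last_hidden_state  # shape (1, k, d)
```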
We denote the representation learned by the base AI text detector as \(B_{k\times d}\), where \(k\) is the sequence length (i.e., the number of tokens of the input news article) and \(d\) is the hidden state size of an encoder block. From the representation matrix \(B_{k\times d}\), we select the final hidden vector representation of the special token [CLS], \(B_{d}^{CLS}\), as the feature vector for our task of detecting AI-generated news. Then \(B_{d}^{CLS}\) is passed to the journalism guidance component for further processing. ### Journalism Guidance The cornerstone of the **J-Guard** framework lies in the journalism guidance of the base AI text detector toward detecting AI-generated news; different modules within the journalism guidance component help achieve this goal. As illustrated in Figure 2, the journalism guidance component comprises a Journalism Feature Extractor and a Guidance Head as sub-components. #### 3.2.1 Journalism Feature Extractor As postulated in Section 2, encountering a news article that widely deviates from the recommended styles and standards of the AP Stylebook may serve as a strong indication of inauthenticity. Therefore, the journalism feature extractor is a computational module that incorporates this hypothesis to enhance the detection of AI-generated news. The feature extractor takes the input news article \(X\) in the form of a set of tokens \(w_{1},w_{2},...,w_{m}\). Here, \((w_{1},w_{2},...,w_{m})\) represents the tokenized version of the input article \(X\) using an improved Treebank Word Tokenizer3. Subsequently, a set of extractor functions \(f\in F\) is applied to these tokens to extract various scores that quantify the divergence of the input article from the AP recommended styles and standards. For discussion, it is useful to label three subsets of the extractor function set \(F\), namely \(F^{i}\) with \(i\in\{1,2,3\}\), such that \(F=F^{1}\cup F^{2}\cup F^{3}\).

Figure 1: Distribution of GPT3-generated news vs. human-written news.

Figure 2: Proposed framework **J-Guard**: The base detector component here is a supervised PLM-based detector for AI text detection.

The three subsets can be broadly defined as follows: 1. \(F^{1}\): **Organization and grammar standards** - functions that quantify the wording and grammatical structure of the news article (sentence and paragraph forming) 2. \(F^{2}\): **Punctuation usage** - functions that quantify the punctuation usage of the news article 3. \(F^{3}\): **Formatting standard violations** - functions that quantify violations of the formatting of different elements in a news article, such as date, time, and number, in reference to the AP standards. Within each extractor category, we extract multiple features that can quantify the deviation of the input article from the AP recommended styles and standards. In \(F^{1}\), we examine the overall wording structure of the news article, as well as the leading sentence and paragraph, as the size of these components could serve as an indicator of inauthenticity. For example, a large leading or introductory part is not very common for news articles. Additionally, we consider grammatical elements, such as tense and voice, which can provide cues about journalism standards. For instance, the use of past tense and passive voice is not common in news writing. As a result, the following features are extracted: mean word count (WC), mean sentence count (SC), WC of the leading sentence, WC of the leading paragraph, mean SC with passive voice, and mean SC with past tense.
In \(F^{2}\), we analyze the usage of punctuation. In addition to standard punctuation marks, we also examine symbols that are rarely found in genuine news articles, such as the number sign. Consequently, the following features are extracted in \(F^{2}\): mean usage of "!", "#", "...", and the Oxford comma per paragraph. Lastly, under \(F^{3}\), we investigate format violations in the input article based on AP standards. Specifically, we identify and count violations related to date, time, and number formats. The detailed implementations of each feature extractor function can be found in the appendix section A. Some of the feature extractors mentioned above return mean values in the range of [0,1], while some return absolute counts, which will be larger than 1. Therefore, we normalize the feature vector before incorporating it into the task of AI-generated news detection. Let \(n\) be the number of journalism features, \(f_{i}\in F\), and \(W=(w_{1},w_{2},\ldots,w_{m})\) be the Treebank tokenization of \(X\); then we define the final normalized journalism feature vector \(J_{n}\) as: \[J_{n}=\frac{[f_{1}(W),f_{2}(W),...,f_{n}(W)]}{||[f_{1}(W),f_{2}(W),...,f_{n}(W)]||}. \tag{1}\] #### 3.2.2 Guidance Head We propose to enhance the detection capabilities and adversarial robustness of our detector by incorporating the learned journalism features, \(J_{n}\), into the output of the base AI text detector, \(B_{d}^{CLS}\). A naive approach would be to simply concatenate both feature vectors and pass them through a fully connected feedforward neural network, which we refer to as the Classification Head, to predict the final classification label \(\hat{y}\). However, this naive approach may lead to the overshadowing of \(J_{n}\) by \(B_{d}^{CLS}\), due to the large dimensionality of \(B_{d}^{CLS}\) compared to \(J_{n}\). Furthermore, direct concatenation of the two feature vectors without considering their different ranges poses a feature scaling issue. To address these challenges, we propose the incorporation of an additional set of feedforward layers, referred to as the Guidance Head. This Guidance Head includes a hidden layer with a size equal to or larger than the input layer. This choice is made to prevent the overshadowing of \(J_{n}\). The Guidance Head learns the relationships between the feature vectors \(B_{d}^{CLS}\) and \(J_{n}\), without overshadowing \(J_{n}\), by mapping the input [\(B_{d}^{CLS},J_{n}\)] to a higher-dimensional feature space. Note that we first normalize the [\(B_{d}^{CLS}\), \(J_{n}\)] vector before passing it to the Guidance Head to avoid feature scaling issues. Finally, the Guidance Head's output layer produces a reduced vector of the scaled-up hidden representation, which we pass to the Classification Head for the final prediction. To summarize, as shown in equation 2, the purpose of the Guidance Head is to learn the function \(g_{\theta}\) that fuses \(B_{d}^{CLS}\) and \(J_{n}\): \[C_{l}=g_{\theta}\left(\frac{[B_{d}^{CLS},J_{n}]}{||[B_{d}^{CLS},J_{n}]||}\right) \tag{2}\] Here, \(C_{l}\) is the reduced vector of size \(l\) produced by the output layer of the Guidance Head. Finally, the output of the Guidance Head, \(C_{l}\), is passed to the Classification Head to predict the final classification label \(\hat{y}\). Using the ground-truth label, we use the standard cross-entropy loss to train the whole **J-Guard** framework end to end.
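To make the architecture concrete, the following is a minimal PyTorch-style sketch of the fusion described by Eqs. (1) and (2). It is our own illustrative reconstruction, not the authors' released code: the layer sizes, the single-linear classification head, and the two toy journalism extractors (word count and Oxford-comma count) are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def journalism_features(tokens):
    """Two illustrative extractors (the paper uses many more).
    Returns the L2-normalized feature vector J_n of Eq. (1)."""
    wc = float(len(tokens))  # word-count proxy
    oxford = sum(1 for i in range(len(tokens) - 1)
                 if tokens[i] == "," and tokens[i + 1] in {"and", "or"})
    j = torch.tensor([wc, float(oxford)])
    return j / (j.norm() + 1e-8)

class GuidanceHead(nn.Module):
    """Maps the normalized [B_cls, J_n] to a hidden layer at least as
    large as the input (so J_n is not overshadowed), then reduces to
    a vector of size l, as in Eq. (2)."""
    def __init__(self, d_cls, n_feat, l):
        super().__init__()
        d_in = d_cls + n_feat
        self.net = nn.Sequential(nn.Linear(d_in, 2 * d_in),
                                 nn.ReLU(),
                                 nn.Linear(2 * d_in, l))

    def forward(self, b_cls, j_n):
        x = torch.cat([b_cls, j_n], dim=-1)
        x = F.normalize(x, dim=-1)  # joint normalization before fusion
        return self.net(x)

class JGuardSketch(nn.Module):
    def __init__(self, encoder, d_cls=768, n_feat=2, l=128, n_classes=2):
        super().__init__()
        self.encoder = encoder                      # any HF PLM encoder
        self.guidance = GuidanceHead(d_cls, n_feat, l)
        self.classifier = nn.Linear(l, n_classes)   # classification head

    def forward(self, input_ids, attention_mask, j_n):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        b_cls = out.last_hidden_state[:, 0]         # [CLS] representation
        return self.classifier(self.guidance(b_cls, j_n))
```

Training such a model end to end with cross-entropy then follows the standard supervised fine-tuning loop.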
## 4 Experiments and Results This section describes the experimental settings used to validate our framework, including the datasets and baselines, followed by a thorough analysis of the experiments. We conducted several experiments to investigate whether the proposed journalism features can improve the detection of AI-generated news. We aim to answer the following two research questions through our experiments: * Do the identified journalism features enhance the detection of AI-generated news? * Do the identified journalism features enhance the adversarial robustness of AI-generated news detection? ### Datasets and AI Generators We evaluate our approach on a vast array of AI generators, i.e., PLMs. To this end, we use the benchmark dataset TuringBench (Uchendu et al., 2021). TuringBench is a dataset consisting of human-written news articles, mostly from CNN and the Washington Post, and AI-generated news from more than 10 PLM generators. Of these, we used the following PLMs for our analysis: Grover, CTRL, PPLM\({}_{gpt2}\) (the base model used is GPT2), GPT2, and GPT3. Within TuringBench, data is generated using various combinations of PLMs and model sizes. To maintain brevity in our analysis, we have included only the largest model size for each PLM. This selection is justified by the understanding that the largest model size for each PLM is expected to produce the highest-quality text, making it more challenging to detect. Therefore, our results can be extrapolated to smaller PLMs as well. Furthermore, we performed our experiments on a ChatGPT dataset that we created. Given the human-like quality of text generated by newer PLMs like GPT3.5 and GPT4 (OpenAI, 2023), it is important to evaluate our detection framework on such language models. To create this dataset, we followed steps similar to the ones in the TuringBench paper (Uchendu et al., 2021). Specifically, we sampled around 9,000 news articles from CNN and the Washington Post and used these as 'human'-written articles. For each of these articles, we prompted ChatGPT (with backend gpt-3.5-turbo, model version as of March 14, 2023) to generate an equivalent news article. To do this, we experimented with several types of prompts, and for the final data generation, we used the prompt: "Generate a news article with the <headline>.", where <headline> is the headline from the corresponding human-written article. For the ChatGPT generations, we set \(top\_p\) to 1 and \(temperature\) to 0.5, and limit the length of the generated text to 1024 tokens. The final dataset contains 9k human-written and 9k ChatGPT-generated articles, which we divided into train, test, and validation splits (7:2:1), similar to TuringBench. We will release this dataset to the public upon acceptance of the paper (Section 8.2).
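A minimal sketch of the generation call just described might look as follows, using the OpenAI Python client interface available at the time (the legacy pre-1.0 API); the prompt template and sampling settings are quoted from the text above, while the function name and surrounding scaffolding are our own:

```python
import openai  # legacy (<1.0) client interface

openai.api_key = "YOUR_API_KEY"

def generate_article(headline: str) -> str:
    """Prompt gpt-3.5-turbo with a real headline, mirroring the
    reported settings (top_p=1, temperature=0.5, <=1024 tokens)."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Generate a news article with the {headline}.",
        }],
        top_p=1,
        temperature=0.5,
        max_tokens=1024,
    )
    return response["choices"][0]["message"]["content"]
```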
### Baselines Our experiments consist of two categories of AI news detector baselines. First, we study simple feature-based classification schemes that use logistic regression (LR) with BOW and Word2vec features as baselines to evaluate the quality of the journalism features (JF) we selected via our journalism analysis. Second, we aim to empirically compare and validate **J-Guard** against SOTA PLM-based methods for AI-generated text detection. The SOTA baselines can be further categorized into 1) **Zero-shot PLM-based classifiers**: GLTR (Gehrmann et al., 2019) and the newer zero-shot baseline DetectGPT (Mitchell et al., 2023). These two approaches work without supervised training datasets for detecting AI-generated text; and 2) **Supervised PLM-based classifiers**: We consider OpenAI's GPT-2 detector (RoBERTa-large) as our supervised PLM-based detector baseline. We considered two variants of this model: i) OpenAI\({}_{Zero}\) - OpenAI's off-the-shelf GPT-2 detector without any task-specific tuning, and ii) OpenAI\({}_{FT}\) - OpenAI's GPT-2 detector finetuned for AI news detection. Further technical details about the baselines can be found in the appendix section B. ### Detection Setup **Implementation Details of J-Guard:** The base AI text detector is one of the key components of the **J-Guard** framework: a supervised PLM specifically designed for detecting AI-generated text. In our research, we conducted experiments using various existing PLMs (base size), including RoBERTa, BERT, DeBERTa, and DistilBERT. Among these models, RoBERTa exhibited the highest performance and was therefore selected as the base AI text detector of the **J-Guard** framework when reporting the experimental results. Both the Classification Head and the Guidance Head were implemented using feedforward neural networks comprising one hidden layer. For the training of the overall framework, a maximum sequence length of 512, a learning rate of \(2\times 10^{-5}\), and a dropout rate of 0.2 were employed. The training process utilized a 40 GB NVIDIA A100 GPU (\(\approx\) 1 hr per AI generator). **Task Details:** We consider the task of AI-generated news detection as a binary classification problem. In our data, we have train, test, and validation (7:2:1 ratio) splits for each AI generator, where we use the train set to finetune models on the task of AI news detection and the test set to record the classification performance. The validation set was used for early stopping to determine the number of training epochs. See appendix section C for more details. ### Adversarial Attack Setup In order to validate the adversarial robustness of the detectors, we conducted two common attacks that have been observed in previous work: Cyrillic injection and paraphrasing (Crothers et al., 2022; Sadasivan et al., 2023; Liang et al., 2023). In the Cyrillic injection attack, we perturbed the input text by replacing English characters with similar-looking Cyrillic characters. Specifically, we selected three highly frequent English vowels, "a", "e", and "o", and replaced them with their Cyrillic counterparts. For the paraphrasing attacks, we employed a PLM-based approach that incorporates the T5 model to paraphrase a given input text (Sadasivan et al., 2023).
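As an illustration of the character-level attack just described, a Cyrillic homoglyph substitution can be implemented in a few lines; this is our own reconstruction, and the exact substitution policy used in the paper may differ:

```python
# Map the three frequent Latin vowels to visually identical Cyrillic
# homoglyphs: U+0430 ("a"), U+0435 ("e"), U+043E ("o").
HOMOGLYPHS = str.maketrans({"a": "\u0430", "e": "\u0435", "o": "\u043e"})

def cyrillic_injection(text: str) -> str:
    """Perturb text so it looks unchanged to a human reader but
    tokenizes very differently for a subword tokenizer."""
    return text.translate(HOMOGLYPHS)

print(cyrillic_injection("a breaking news story"))  # visually identical
```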
### Results and Discussion This section discusses the experimental results on AI-generated news detection, including additional experiments on feature importance and the PLM choice for **J-Guard**. Furthermore, we empirically show the adversarial robustness of **J-Guard** by emulating multiple attack scenarios. #### 4.5.1 RQ1 - AI-generated News Detection Performance Here, we present an evaluation of the performance of AI news detection using a wide range of AI generators. Table 1 reports the AUROC scores for different detectors (rows) across different AI generators/PLMs (columns). Based on the results in Table 1, we make the following observations regarding AI-generated news detection: 1) **Effectiveness of journalism features** - looking at the logistic regression results (the first three result rows of Table 1), we can see that the journalism features outperform the simple BOW and Word2vec features across all the AI generators. This suggests that the journalism feature space provides a reasonable boundary for discriminating between human-written news and AI-generated news. 2) **Effectiveness of J-Guard** - Our proposed method outperforms all the detection baselines on 4 out of 6 AI generators. However, for the PPLM\({}_{gpt2}\) and GPT2 generators, we observe that the finetuned OpenAI detector (OpenAI\({}_{FT}\)) outperforms **J-Guard** by a small margin. The OpenAI detector has an advantage in detecting GPT2 and PPLM\({}_{gpt2}\), as it is exposed to GPT2 samples in the first stage of finetuning done by OpenAI. 3) **Effectiveness of task-specific training** - We observe that off-the-shelf zero-shot methods (GLTR, DetectGPT, and OpenAI\({}_{Zero}\)) perform poorly across many AI generators in detecting AI news. However, the performance improves significantly when we further finetune OpenAI\({}_{Zero}\) on the AI news detection task (OpenAI\({}_{FT}\)). This observation highlights the importance of task-specific supervision. We also analyzed the impact of the choice of base AI detector on our framework, **J-Guard**. We experimented with multiple open-source PLMs, as shown in Figure 3. We evaluate each PLM with and without **J-Guard** and report the average detection performance across the AI generators considered in our study. We found that the detection performance could be enhanced with the use of **J-Guard** on each PLM. Among all the models, RoBERTa yielded the best performance.

Figure 3: Effect of the choice of PLM for the framework **J-Guard** - Average AUROC across all six AI generators, before and after journalism guidance.

Additionally, we conducted a study to better understand the significance of journalism features in AI news detection with the help of a Shapley Additive Explanations (SHAP) (Lundberg and Lee, 2017) explainer on the logistic regression classifier. The SHAP values were used to indicate feature importance. We only present SHAP plots for the GPT3 detection task for brevity reasons, but SHAP plots related to other AI generator detection tasks can be found in the appendix section D.2. The SHAP plots show that certain features, such as the mean sentence count per paragraph (_mean_sent_count_para_), the word count of the lead paragraph (_wc_lead_para_), and past tense usage (_past_tense_count_), are highly significant in distinguishing AI news from human-written news, as depicted in Figure 4.
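For readers wishing to reproduce this kind of analysis, a small sketch of computing SHAP values for a logistic-regression classifier over journalism-style features follows; this is purely illustrative (synthetic data stand in for the real feature vectors), and the API usage assumes the standard shap package:

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["mean_sent_count_para", "wc_lead_para", "past_tense_count"]

# Synthetic stand-ins for the normalized journalism feature vectors J_n
# of Eq. (1); label 0 = human-written, 1 = AI-generated.
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 1] + 0.5 * rng.normal(size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# The linear explainer is exact for linear models such as LR.
explainer = shap.LinearExplainer(clf, X)
shap_values = explainer.shap_values(X)

# Beeswarm summary plot analogous to Figure 4.
shap.summary_plot(shap_values, X, feature_names=feature_names)
```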
#### 4.5.2 RQ2 - Adversarial Robustness of AI News Detection This section discusses our experiments on evaluating the adversarial robustness of AI news detectors. As outlined in section 4.4, we conducted two types of attacks on the detectors: character-level attacks involving Cyrillic injection and word-level attacks involving paraphrasing. Table 2 shows the detectors' performance (AUROC) difference before and after each attack. For brevity, we only report the detection performance on GPT3 and ChatGPT data, while the other results can be found in the appendix section D.2. Based on the results presented in Table 2, we make the following observations regarding the adversarial robustness of AI news detectors: **Attack Success** - We have observed that almost every SOTA baseline detector we considered is susceptible to adversarial attacks. On average, the performance of the detectors dropped by at least 15-20%. In contrast, we observed a low attack success rate with the GLTR model. However, this observation of low attack success is meaningless, as the GLTR model performed at near-random-guess level (\(\approx 0.5\) AUROC) even before the attack. The overall reduction in performance following the Cyrillic injection attack can be attributed to the tokenizer: Cyrillic letters in the input text alter the token representation, subsequently affecting detection. For paraphrasing, modifying the original text can move samples relative to the decision boundary learned during detector training, leading to a performance decline. **Improved Adversarial Robustness of J-Guard** - We have observed that **J-Guard** is quite resilient to adversarial attacks, with an average performance drop of only 7%. It is apparent that this robustness is due to the journalism features employed by **J-Guard**. For example, OpenAI\({}_{FT}\), which shares the same PLM architecture and training data for detection as **J-Guard**, has an average performance drop of nearly 15%. In the journalism feature space, we check for high-level semantic gaps and violations of journalism standards. Character-level attacks, such as Cyrillic injection, have a negligible effect on these feature calculations. Even with paraphrasing attacks, the edit distance between the original and perturbed text may be substantial in the input space but insignificant in the journalism feature space, making **J-Guard** robust to such attacks.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline Dataset \(\rightarrow\) & \multicolumn{5}{c|}{TuringBench} & In-House Data \\ \hline Generator \(\rightarrow\) Detector \(\downarrow\) & Grover & CTRL & \(\text{PPLM}_{gpt2}\) & GPT2 & GPT3 & ChatGPT \\ \hline LR + BoW & 0.816 & 0.775 & 0.792 & 0.822 & 0.806 & 0.810 \\ \hline LR + W2V & 0.854 & 0.793 & 0.804 & 0.871 & 0.852 & 0.847 \\ \hline LR + JF & 0.897 & 0.831 & 0.873 & 0.931 & 0.912 & 0.883 \\ \hline GLTR & 0.482 & 0.784 & 0.634 & 0.542 & 0.454 & 0.728 \\ \hline DetectGPT & 0.549 & 0.806 & 0.492 & 0.505 & 0.557 & 0.766 \\ \hline \(\text{OpenAI}_{Zero}\) & 0.746 & 0.763 & 0.918 & 0.857 & 0.773 & 0.756 \\ \hline \(\text{OpenAI}_{FT}\) & 0.975* & 0.969* & **0.966** & **0.980** & 0.951* & 0.925* \\ \hline **J-Guard** & **0.986** & **0.972** & 0.965* & 0.975* & **0.968** & **0.934** \\ \hline \end{tabular} \end{table} Table 1: Proposed **J-Guard** model performance (AUROC) values for AI-generated news detection. Bold shows the best AUROC within each column (detector-PLM generator combination); an asterisk (*) denotes the second-best AUROC.

Figure 4: SHAP values to estimate journalism feature importance.

## 5 Related Work ### AI-generated Text Detection Several methods have been explored for detecting AI-generated text, such as logistic regression, SVC, etc. [16]. GLTR [1] uses a set of simple statistical tests to check whether an input text sequence is AI-generated or not. Fine-tuned PLM detectors are also used and considered state of the art [15, 11, 12], such as OpenAI's GPT2 detector that uses a RoBERTa backbone finetuned with GPT-2 outputs [1].
With the rapid advancement of newer language models like GPT3.5/4, there is a growing emphasis on the capabilities of few-shot or zero-shot detection and on the interpretability of these detectors [12]. Some new detectors include commercial products such as GPTZero 4 and OpenAI's detector that is trained on text generated by GPT-3 5. An interesting zero-shot detection approach, DetectGPT [13], operates on the hypothesis that minor rewrites of AI-generated text will exhibit lower log-probabilities under the model compared to the original sample. Watermarking [15] of PLM-generated text has also gained attention as a detection mechanism in the research community. However, its success hinges on the cooperation and support of the organizations that develop the PLMs. Footnote 4: [https://gptzero.me/](https://gptzero.me/) Footnote 5: [https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text](https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text) ### Adversarial Robustness of AI Text Detection Multiple studies have examined the vulnerability of AI text detectors, specifically those designed for early PLMs like Grover and GPT2 [1, 10, 11]. These studies conducted various attacks at the character and word levels, including flipping upper-lower case, using homoglyphs, misspelling words, and replacing synonyms. The results indicate that supervised-PLM-based AI text detectors are highly susceptible to these attacks, with success rates reaching up to 96% in some cases [14]. Recent research has also demonstrated that paraphrasing the input text can significantly undermine the performance of AI text detection approaches [15, 16], raising concerns about the reliability of such methods. Proposed solutions involve semantic retrieval to counter paraphrase attacks [17], but they rely on text generation APIs like the OpenAI API, which limits their practical applicability when evaluating independent detection mechanisms. Previous research has highlighted two important considerations for detecting AI-generated text: 1) it is impractical to rely on a single detector for all types of AI text, emphasizing the need for domain-specific models, and 2) ensuring the detector's robustness against adversarial attacks is critical, warranting further investigation in this field. ## 6 Conclusion In this paper, we examine the task of detecting AI-generated news from a multidisciplinary perspective, aiming to identify domain-specific signals that can enhance detection accuracy while preserving robustness against adversarial attacks. We analyzed the real-world news production process compared to AI news generation and identified a set of stylistic cues that measure the deviation of AI-generated news from journalistic standards established by entities such as the Associated Press. Our proposed framework, **J-Guard**, incorporated these auxiliary features and steered existing supervised PLM-based AI text detectors to achieve robust performance across various text-generation AIs, including ChatGPT. For future work, it would be interesting to see whether prompt engineering can generate news articles that evade journalism-guided detection.
\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Generator \(\rightarrow\) & \multicolumn{2}{c|}{GPT3} & \multicolumn{2}{c|}{ChatGPT} \\ \hline Attack \(\rightarrow\) & \multirow{2}{*}{Para.} & \multirow{2}{*}{Cyri.} & \multirow{2}{*}{Para.} & \multirow{2}{*}{Cyri.} \\ \cline{1-1} Detector \(\downarrow\) & & & & \\ \hline GLTR & **0.041** & 0.055 & 0.095 & 0.056 \\ \hline DetectGPT & 0.222 & 0.196 & 0.254 & 0.183 \\ \hline OpenAI\({}_{Zero}\) & 0.244 & 0.201 & 0.223 & 0.154 \\ \hline OpenAI\({}_{FT}\) & 0.159 & 0.138 & 0.166 & 0.150 \\ \hline **J-Guard** & 0.090 & **0.041** & **0.091** & **0.040** \\ \hline \end{tabular} \end{table} Table 2: Detector performance change after the attack (AUROC before the attack - AUROC after the attack). Bold shows the lowest AUROC difference within each column (detector-attack combination). ## 7 Limitations ### Assumption of Professional Journalism In our study, we make the assumption that the human-written portion of the dataset is produced through a professional journalism process. This means that the news organization or journalist adheres to the journalism standards commonly defined by organizations such as the Associated Press (AP). It is important to note that our hypotheses and findings are valid only under this assumption. If the human-written articles come from a non-professional journalism source, we expect the detection performance to decrease, since the distinction achieved through journalism features may no longer hold. ### Domain-Specific Training The approach we propose follows the supervised learning paradigm for AI news detection. As a result, it requires specific training data to be effective in real-world AI news detection scenarios. For instance, if we aim to ensure the performance of **J-Guard** on an AI text generator \(X\), we first need to gather a training dataset consisting of news articles generated by \(X\). It is important to emphasize that our approach does not claim to have cross-AI-generator generalized detection capabilities. However, the set of journalism features we proposed is agnostic to the AI generator and is derived from an analysis of the real-world journalism process. ### In-House Dataset As described in Section 4.1, we generated our dataset using ChatGPT due to the lack of publicly available ones. Although we followed a similar data collection and generation pipeline as TuringBench (Uchendu et al., 2021), it is worth noting that there may be differences in the pre-processing and data cleanup we performed compared to the methods employed by the authors of TuringBench. ### Generalizability for ChatGPT-generated Text Detection Throughout our paper, we emphasize the specificity of our analysis and its focus on AI news detection. Therefore, the differences in ChatGPT text detection performance reported by the community 6, as opposed to the high-performance results presented in our work, can be attributed to the domain of the data, specifically news articles. We hypothesize that detecting text from a particular domain, such as news articles with a specific type and text style, is easier than detecting generic text generated by ChatGPT. In summary, our paper does not claim that **J-Guard** can be used for general ChatGPT text detection tasks; instead, it presents a specific method tailored to improve the detection of ChatGPT-generated news.
Footnote 6: [https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text](https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text) ## 8 Ethical Considerations ### Intended Use It is crucial to consider the intended real-world application of **J-Guard** and its societal impact. Our research on AI news detection aims to develop an algorithm that effectively identifies and mitigates the spread of misinformative, AI-generated news articles. The primary application of our work lies in online content moderation and forensics, where the decisions made by our detector can be utilized to flag or remove news articles from social media platforms, web search results, and other platforms. However, a significant ethical concern arises from potential false positives generated by our method. If the detector incorrectly flags a genuine news article from a reputable organization as AI-generated, it may lead to the censorship of legitimate news, causing harm to the reputation and rights of the journalist and the publishing organization. Hence, we strongly advise users not to incorporate **J-Guard** into fully automated real-world content moderation or forensics systems unless a human annotator or analyst works in conjunction with the system to make the final decision. ### ChatGPT-generated News In our study, we conducted experiments using in-house ChatGPT-generated news articles. It is crucial to emphasize that we adhered to the usage policies7 of OpenAI while generating these news articles through the API (refer to the prompt details in Section 4.1). We recognize the importance of not publicly releasing any AI-generated news article, as we cannot guarantee the factual accuracy of the content. Therefore, we will implement an on-demand release structure for our ChatGPT-generated news articles. Individuals or organizations requesting access to our generated news articles for legitimate academic research purposes will be granted permission to download the data. ### Fairness and Bias in Detection Our research endeavors to prioritize using natural language processing tools for the betterment of society while upholding principles of fairness and impartiality. We transparently disclose our methodology, results, and, most importantly, limitations to mitigate biases and address ethical concerns. Furthermore, we commit to continuous assessment and improvement of our system in the future. ### Malicious Use of Adversarial Attacks We understand the potential danger of an adversary misusing the adversarial attack setup presented in Section 4.4 to attack existing commercial AI text detectors. However, we posit that the benefits of finding these limitations and vulnerabilities in AI text detector systems (red-teaming) will outweigh the potential for misuse, given that it helps future researchers mitigate these issues. As a precaution, however, we will not release the adversarial setup code base to the public. Similar to the ChatGPT data, individuals or organizations requesting access to our adversarial attack setup for legitimate academic research purposes will be granted permission to receive the code base.
2309.13377
Learning Invariant Representations with a Nonparametric Nadaraya-Watson Head
Machine learning models will often fail when deployed in an environment with a data distribution that is different than the training distribution. When multiple environments are available during training, many methods exist that learn representations which are invariant across the different distributions, with the hope that these representations will be transportable to unseen domains. In this work, we present a nonparametric strategy for learning invariant representations based on the recently-proposed Nadaraya-Watson (NW) head. The NW head makes a prediction by comparing the learned representations of the query to the elements of a support set that consists of labeled data. We demonstrate that by manipulating the support set, one can encode different causal assumptions. In particular, restricting the support set to a single environment encourages the model to learn invariant features that do not depend on the environment. We present a causally-motivated setup for our modeling and training strategy and validate on three challenging real-world domain generalization tasks in computer vision.
Alan Q. Wang, Minh Nguyen, Mert R. Sabuncu
2023-09-23T13:46:49Z
http://arxiv.org/abs/2309.13377v1
# Learning Invariant Representations with a Nonparametric Nadaraya-Watson Head ###### Abstract Machine learning models will often fail when deployed in an environment with a data distribution that is different than the training distribution. When multiple environments are available during training, many methods exist that learn representations which are invariant across the different distributions, with the hope that these representations will be transportable to unseen domains. In this work, we present a nonparametric strategy for learning invariant representations based on the recently-proposed Nadaraya-Watson (NW) head. The NW head makes a prediction by comparing the learned representations of the query to the elements of a support set that consists of labeled data. We demonstrate that by manipulating the support set, one can encode different causal assumptions. In particular, restricting the support set to a single environment encourages the model to learn invariant features that do not depend on the environment. We present a causally-motivated setup for our modeling and training strategy and validate on three challenging real-world domain generalization tasks in computer vision. ## 1 Introduction Machine learning models often fail when there is significant distribution shift. The goal of domain generalization is to be able to perform well with new distributions [21; 56; 71]. In this work, we are interested in settings where multiple domains/environments are available during training and we have access to environment indicators. A popular way to tackle domain generalization in this setting is to learn representations that are invariant across environments [20; 41; 60]. The hope is that such representations will work well in, or are transportable to, unseen environments. This invariance is often encoded via constraints on a learned predictor which align its behavior across environments; often, these conditions are derived using causal reasoning and/or by making assumptions about the data-generating process [39]. In a parametric setting, almost all existing methods enforce these constraints by training a single model and adding a regularizer on top of a standard predictive loss [1; 5; 18; 20; 27; 48; 54; 58]. Most notably, invariant risk minimization (IRM) enforces the representations to be such that the optimal classifier on top of those representations is the same across all environments. Other examples include enforcing the layer activations of the predictor to be aligned across environments [48], enforcing the predictor to be calibrated across environments [54], and enforcing the gradients of the predictor to be aligned across environments [46]. Often, optimizing these constraints demands approximations or relaxations that undermine the efficacy of the approach [22]. In this work, we take a different approach using a nonparametric strategy based on the recently-proposed Nadaraya-Watson (NW) head [55]. Instead of computing the class probability directly from an input query, the NW head makes a prediction by comparing the learned representations of the query to the elements of a support set that consists of labeled data. Thus, the NW prediction is computed _relative to other real datapoints_ in the support set, with the support set providing a degree of flexibility not possible with parametric models. In particular, one can manipulate it during training in a way which restricts the types of comparisons that the model can make.
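To fix ideas, here is a minimal sketch of one common instantiation of such an NW-style prediction: softmax-normalized similarities (negative squared Euclidean distances between query and support features) weighting the one-hot support labels. This is our own illustration; the precise kernel used by the NW head may differ.

```python
import torch
import torch.nn.functional as F

def nw_prediction(query_feat, support_feats, support_labels, n_classes):
    """Nadaraya-Watson prediction: a weighted average of support labels,
    with weights from query-support feature distances.

    query_feat:     (d,) learned representation of the query
    support_feats:  (m, d) representations of the labeled support set
    support_labels: (m,) integer class labels
    """
    # Negative squared Euclidean distance as similarity.
    sims = -((support_feats - query_feat) ** 2).sum(dim=-1)
    weights = F.softmax(sims, dim=0)                         # (m,)
    one_hot = F.one_hot(support_labels, n_classes).float()   # (m, C)
    return weights @ one_hot                                 # class probs

# Restricting support_feats/support_labels to datapoints from a single
# environment restricts which comparisons the model can exploit.
```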
In this work, we manipulate the support set during training to encode causal assumptions for the purposes of learning invariant representations. Specifically, restricting the support set to be drawn from a single environment precludes the possibility of using environment-specific features to make a prediction for a given query. We show that this setup is causally-motivated and relates to existing causal frameworks. Furthermore, we show that this training strategy leads to results that are competitive with, and often superior to, state-of-the-art parametric baselines. Our contributions are as follows: * We present causally-motivated assumptions for domain generalization which justify our modeling and training strategy. * We present a novel approach to invariant representation learning using the nonparametric Nadaraya-Watson head, which can account for causal assumptions by manipulating a support set. In particular, we propose a training strategy which, unlike competing baselines, has _no invariance hyperparameter to tune_. * We validate our approach on several datasets and demonstrate competitive results compared to state-of-the-art parametric baselines. ## 2 Related Works ### Domain Generalization and Invariant Representations Domain generalization seeks to make models robust to unseen environments and is an active area of research [21; 56; 71]. One line of work augments or synthetically-generates additional training images to increase robustness of learned features to unseen environments [59; 63; 64; 65; 70]. In particular, LISA uses a mixup-style [67] interpolation technique to generate augmented images, which the authors demonstrate improves out-of-distribution robustness [64]. Another line of work broadly seeks to align features across distributions. Deep CORAL aligns correlations of layer activations in deep neural networks [48], and other works minimize the divergence of feature distributions with different distance metrics such as maximum mean discrepancy [51; 32], an adversarial loss [14; 30], and Wasserstein distance [69]. Still other works approach the problem from the perspective of gradients and optimization [12; 29; 34; 46; 57]. For example, Fish aligns the gradients from different domains [46]. One can also achieve domain generalization via learning invariant representations, which often requires reasoning about the data-generating process from a causal perspective to arrive at appropriate constraints [39]. Invariant causal prediction (ICP) formulates the problem from a feature selection perspective, where the goal is to select the features which are direct causal parents of the label [40]. Invariant Risk Minimization (IRM) can be viewed as an extension of ICP designed for deep, nonlinear neural networks. The IRM objective can be summarized as finding the representation \(\varphi\) such that the optimal linear classifier's parameters \(w^{*}\) on top of this representation are the same across all environments [1]. This bi-level program is highly non-convex and difficult to solve. To find an approximate solution, the authors consider a Lagrangian form, whereby the sub-optimality with respect to the constraint is expressed as the squared norm of the gradients of each of the inner optimization problems. Follow-up works analyzing IRM have raised theoretical issues with this objective and presented some practical concerns [16; 42; 22]. Various flavors of IRM have also been proposed by introducing different regularization terms [27; 54; 58].
### Nonparametric Deep Learning Nonparametric models in deep learning have received much attention in previous work. Deep Gaussian Processes [10], Deep Kernel Learning [62], and Neural Processes [24] build upon Gaussian Processes and extend them to representation learning. Other works have generalized \(k\)-nearest neighbors [37; 49], decision trees [68], density estimation [13], and more general kernel-based methods [15; 36; 66] to deep networks and have explored the interpretability that these frameworks provide. Closely-related but orthogonal to nonparametric models are attention-based models, most notably self-attention mechanisms popularized in Transformer-based architectures in natural language processing [53] and, more recently, computer vision [11; 19; 38]. Nonparametric transformers apply attention in a nonparametric setting [24]. Recently, Wang et al. proposed the NW head [55], an extension of the classical NW model [2; 35; 61] to deep learning. In the NW head, the prediction is a weighted average of labels from a support set. The weights are computed from distances between the query and support features. The NW head can yield better calibration and interpretability, with similar accuracy compared to the dominant approach of using a parametric classifier with fully-connected layers. In this work, we leverage the NW head to encode causal assumptions via the support set. The interpretability and explainability benefits of the NW head carry over in this work; while not of primary focus, we explore these properties in the Appendix. ## 3 Preliminaries **Problem Setting.** Let \(X,Y\) denote a datapoint and its corresponding discrete class, and \(E\) denote the environment (or domain) where \(X,Y\) originates.1 That is, the elements of the training dataset \(\mathcal{D}_{tr}=\{x_{i},y_{i},e_{i}\}_{i=1}^{N}\) are drawn first by sampling the discrete random variable \(e_{i}\sim P(E)\), and then sampling \(x_{i},y_{i}\sim P(X,Y\mid E=e_{i}):=P_{e_{i}}(X,Y)\). Our goal is to learn classifiers that will generalize to new, unseen environments. Footnote 1: We assume the support of \(E\), \(\operatorname{supp}(E)\), is finite. **Assumptions.** We assume there exists a pair of latent causal parents of \(X\): an environment-independent ("content") factor \(Z_{C}\) and an environment-dependent ("style") factor \(Z_{S}\).2 Figure 1: Illustration of proposed approach. Support set of labeled datapoints (square/triangle) from 3 environments lie in 3 regions in the feature space. Black circle denotes query datapoint with unknown label. a) The NW head models \(P(Y|X)\) by making predictions as a function of distances to labeled datapoints in the feature space (visualized as dotted arrows). b) Balancing comparisons across labels for all environments models \(P^{B}(Y|X)\). c) Conditioning on a single environment models \(P_{e}(Y|X)\). Figure 2: a) Causal Directed Acyclic Graph (DAG) we consider in this work. Solid nodes are observed and dashed nodes are unobserved. We assume an anti-causal setting where label \(Y\) causes \(X\), and \(X\) has 2 causal parents: “style” features, \(Z_{S}\), which are influenced by the environment \(E\); and environment-independent “content” features of \(X\), \(Z_{C}\), which are causally influenced by the label \(Y\). \(E\) potentially influences \(Y\). Both \(E\) and \(Y\) have direct influence on style features \(Z_{S}\). b) Same DAG as a) with an intervention on \(Y\). We note that \(Y\perp\!\!\!\perp E\mid Z_{C}\) and \(Y\not\perp\!\!\!\perp E\mid Z_{S}\).
We assume the causal mechanism that generates \(X\) from (\(Z_{C}\), \(Z_{S}\)) is injective, so that, in principle, it is possible to recover the latent features from the observations; i.e. there exists a function \(g\) such that \(g(X)=(Z_{C},Z_{S})\). We further assume that \(g\) can be disentangled into \(g_{C}\) and \(g_{S}\), such that \((Z_{C},Z_{S})=g(X)=(g_{C}(X),g_{S}(X))\). The causal graph is depicted in Fig. 2a. Finally, we assume that if any \(X=x\) has a non-zero probability in one environment, it has a non-zero probability in all environments. **Motivation.** The motivating application in this work is image classification, where each realization of \(E\) might represent a different site, imaging device, or geographical region where \(X,Y\) are collected. For example, in medical imaging, different hospitals (\(E\)) may collect images (\(X\)) attempting to capture the presence of some disease (\(Y\)), but may differ in their imaging protocols which lead to differences in the image style features \(Z_{S}\) (e.g. staining, markings, orientation). In addition, we allow \(Z_{S}\) to be influenced by the label itself (for example, positive examples are more likely to be marked by a doctor or have specific staining than negative examples). Finally, the prevalence of \(Y\) may be influenced by \(E\) (for example, the prevalence of a disease may be higher in a certain hospital). The goal is to find an estimator for \(Y\) which relies only on the direct causal links between \(Y\) and \(X\) and not on any spurious associations between \(E\) and \(X\), as these may change in a new, unseen environment. That is, we seek an estimator which relies only on \(Z_{C}\) and which is independent of \(E\) or \(Z_{S}\). First, we note the direct causal dependence \(E\to Y\). For example, a model can exploit this association by learning to over-predict majority classes in a certain environment. One way to remove the direct dependence is by intervening on \(Y\), thus removing incoming edges to \(Y\). This essentially corresponds to matching the environment-specific prior on \(Y\) between environments, and results in the intervened graph in Fig. 2b.3 Let us refer to any distribution which follows the DAG in Fig. 2b as \(P_{e}^{B}(X,Y)\). Footnote 3: This may be interpreted as making the model robust to label shift, see [31; 44].
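To make the assumed data-generating process concrete, the following is a minimal synthetic sketch of the DAG in Fig. 2a (the mechanisms, dimensions, and variable names are our own illustrative choices, not taken from the paper): the environment shifts the label prior (\(E\to Y\)), content features depend only on the label (\(Y\to Z_{C}\)), style features depend on both (\((E,Y)\to Z_{S}\)), and \(X\) is generated by a trivially injective mechanism (concatenation), so that \(g=(g_{C},g_{S})\) exists.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_env(n, p_y, style_shift):
    """Sample (x, y) from one environment, following the DAG of Fig. 2a:
    E -> Y (label shift), Y -> Z_C (content), (E, Y) -> Z_S (style),
    and (Z_C, Z_S) -> X via an injective mechanism (here: concatenation)."""
    y = rng.binomial(1, p_y, size=n)                      # E -> Y
    z_c = y[:, None] + 0.3 * rng.standard_normal((n, 2))  # Y -> Z_C
    z_s = style_shift + 0.5 * y[:, None] + 0.3 * rng.standard_normal((n, 2))  # (E, Y) -> Z_S
    x = np.concatenate([z_c, z_s], axis=1)                # X determines (Z_C, Z_S)
    return x, y

# Two training environments with different label priors and style offsets.
x0, y0 = sample_env(500, p_y=0.3, style_shift=-1.0)
x1, y1 = sample_env(500, p_y=0.7, style_shift=+1.0)
```

In such data, an unconstrained classifier can exploit the style coordinates (which correlate with \(Y\) through \(E\)), whereas an invariant estimator should rely on the content coordinates only.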
Second, we observe that there is a potential non-causal association flow between \(E\) and \(Y\) through the colliders \(X\) and \(Z_{S}\), when either of these is conditioned on (i.e. is observed). An estimator which relies on \(Z_{S}\) potentially leaks information from \(E\), and this is unstable in new environments. Reading d-separation on this intervened graph, we infer that \(Y\perp\!\!\!\perp E\mid Z_{C}\), while \(Y\not\perp\!\!\!\perp E\mid Z_{S}\) (see Fig. 2b). Thus, an estimator which conditions on the content features \(Z_{C}\) is invariant across environments: \[P_{e}(Y\mid Z_{C}=z)=P_{e^{\prime}}(Y\mid Z_{C}=z),\ \ \forall e,e^{\prime}\in\operatorname{supp}(E). \tag{1}\] ## 4 Methods ### The NW Head Given a feature extractor \(\phi\) and a support set \(\mathcal{S}=\{x_{i},y_{i}\}\) of labeled datapoints, the NW head [55] predicts class probabilities for a query \(x\) as a weighted average of the one-hot support labels \(y_{i}\), where the weights are a softmax over negative distances \(s(\cdot,\cdot)\) between the query and support features: \[f_{\phi}(x,\mathcal{S}):=\sum_{i}\frac{\exp\left\{-s(\phi(x),\phi(x_{i}))\right\}}{\sum_{j}\exp\left\{-s(\phi(x),\phi(x_{j}))\right\}}\,y_{i}. \tag{2}\] ### Manipulating the Support Set We consider two manipulations of the support set during training: 1. Balancing \(\mathcal{S}\) across labels for all environments, denoted \(\mathcal{S}^{B}\) (see Fig. 1b). This can be interpreted as modeling a label-balanced distribution: \[\hat{P}^{B}(Y=y\mid X=x;\phi):=f_{\phi}(x,\mathcal{S}^{B}).\] (3) 2. Conditioning \(\mathcal{S}\) on a single environment, denoted \(\mathcal{S}_{e}\) (see Fig. 1c). This can be interpreted as conditioning the probability estimate on \(E=e\), i.e.: \[\hat{P}_{e}(Y=y\mid X=x;\phi):=f_{\phi}(x,\mathcal{S}_{e}).\] (4) Note that both balancing and conditioning can be achieved simultaneously, which we denote \(\mathcal{S}_{e}^{B}\). ### Objective Given a dataset of samples \(\mathcal{D}_{tr}=\{x_{i},y_{i},e_{i}\}_{i=1}^{N}\), we wish to leverage the NW head as a conditional estimator for \(Y\) conditioned on \(Z_{C}=g_{C}(X)\), where \(g_{C}(X)\) is characterized by Eq. 1. This necessitates an optimization over both \(\phi\) and the space of functions \(g_{C}\). Thus, we solve the following constrained maximum likelihood over \(\phi\) and \(g_{C}\): \[\operatorname*{argmax}_{\phi,g_{C}}\sum_{i=1}^{N}\log\hat{P}_{e_{i}}^{B}(y_{i}\mid g_{C}(x_{i});\phi) \tag{5}\] \[\text{s.t.}\ \hat{P}_{e}^{B}(y_{i}\mid g_{C}(x_{i});\phi)=\hat{P}_{e^{\prime}}^{B}(y_{i}\mid g_{C}(x_{i});\phi),\ \ \forall i\in\{1,...,N\},\ \forall e,e^{\prime}\in E.\] Note that Eq. 1 implies that \(P_{e}^{B}(y_{i}\mid g_{C}(x_{i}))=P^{B}(y_{i}\mid g_{C}(x_{i}))\). Thus, the objective is equivalent to unconstrained maximum likelihood under the assumption in Eq. 1. Instead of solving for \(g_{C}\) explicitly, we let both \(\phi\) and \(g_{C}\) be related by the composition \(\varphi=\phi\circ g_{C}\), and set \(\varphi\) to be the learnable mapping of the NW head, i.e. a neural network. Then, the objective becomes: \[\operatorname*{argmin}_{\varphi}\sum_{i=1}^{N}L(f_{\varphi}(x_{i},\mathcal{S}_{e_{i}}^{B}),y_{i}) \tag{6}\] \[\text{s.t.}\ f_{\varphi}(x_{i},\mathcal{S}_{e}^{B})=f_{\varphi}(x_{i},\mathcal{S}_{e^{\prime}}^{B}),\ \ \forall i\in\{1,...,N\},\ \forall e,e^{\prime}\in E,\] where \(L\) is the cross-entropy loss. To make the objective tractable, we consider two possible variants: 1. **Explicit.** Solve the optimization problem explicitly via a Lagrangian formulation: \[\operatorname*{argmin}_{\varphi}\sum_{i=1}^{N}L(f_{\varphi}(x_{i},\mathcal{S}_{e_{i}}^{B}),y_{i})+\lambda\sum_{e,e^{\prime}\in E}\sum_{i=1}^{N}\|f_{\varphi}(x_{i},\mathcal{S}_{e}^{B})-f_{\varphi}(x_{i},\mathcal{S}_{e^{\prime}}^{B})\|_{2}^{2}.\] (7) where \(\lambda>0\) is a hyperparameter. 2. **Implicit.** Relax the optimization problem into the following unconstrained problem: \[\operatorname*{argmin}_{\varphi}\sum_{e\in E}\sum_{i=1}^{N}L(f_{\varphi}(x_{i},\mathcal{S}_{e}^{B}),y_{i}).\] (8) In this formulation, the constraint will be approximately satisfied in the sense that the model will be encouraged to predict the ground truth for a given image, which is identical across all environments. In practice, how well the solution satisfies the constraint will depend on model capacity, the data sample, and the optimization procedure. Figure 3: A depiction of the NW head on a tumor detection task. The NW head computes Euclidean distances \(s(\cdot,\cdot)\) between query and support features, and uses the distances to weight the support labels. Colored squares represent labels. Diagram displays two different support sets.
Top is unconditional support, where support data is drawn from the training data without knowledge of environment information. Bottom is an example of a manipulated support where all support data is drawn from a fixed environment (note similarity in color). Such a support set precludes the possibility of using environment-specific features to make a prediction. ### Optimization Details During training, the support set \(\mathcal{S}\) is drawn stochastically from the training set \(\mathcal{D}_{tr}\), and all queries and support datapoints are passed through the feature extractor \(\varphi\). For computational efficiency, instead of sampling a unique support mini-batch at the query-level, we sample a unique support at the mini-batch level. Thus, if \(N_{q}\) and \(N_{s}\) are the query and support mini-batch sizes respectively, the effective mini-batch size is \(N_{q}+N_{s}\), instead of \(N_{q}N_{s}\). For the implicit variant, we sample one support set for a given mini-batch of queries, forward pass through the NW head, and compute the loss in Eq. (8). For the explicit variant, we sample two support sets for a given mini-batch of queries, perform two independent forward passes through the NW head for each support set, and compute the loss in Eq. (7). As discussed in prior work [55], the support batch size is a hyperparameter analogous and orthogonal to the query batch size. A technical point is that the set of labels in the support mini-batch must cover the set of labels in the query mini-batch. Thus, in our implementation, for \(\mathcal{S}^{B}\), we cycle through all classes and randomly draw \(N_{c}\) examples per class to include in the support. For tasks with a large number of classes, one can subsample from the total number of classes, so long as the sampled classes cover the set of query classes. ### Inference modes Similar to how the support set can be manipulated during training, we can also design different inference strategies corresponding to different configurations of the support set at test-time. We explore several different inference modes which are possible under the NW framework: 1. **Random.** Sample uniformly at random over the dataset, such that each class is represented \(k\) times. 2. **Full.** Use the entire balanced training set. 3. **Ensemble.** Given the set of balanced features computed from Full mode, partition the computed features for all training datapoints by environment, compute the label-balanced softmax predictions with respect to each environment, and average the predictions. 4. **Cluster.** Given the set of balanced features computed from Full mode, perform \(k\)-means clustering on the features of the training datapoints for each class. These \(k\) cluster centroids are then used as the support features for each class. This can be viewed as a distillation of the full training set for efficient inference, with the caveat that the support set no longer corresponds to observed datapoints. While Full, Ensemble, and Cluster require computing features for the entire support set, in practice these features and centroids can be precomputed.
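Before turning to connections with prior work, the following PyTorch sketch ties together the NW prediction (Eq. (2)), the balanced and environment-conditioned support \(\mathcal{S}_{e}^{B}\) (Eqs. (3)–(4)), and one step on the implicit objective (Eq. (8)). It is our minimal reading of the method rather than the released implementation; the helper names and the simplified sampling scheme are ours.

```python
import torch
import torch.nn.functional as F

def nw_head(q_feat, s_feat, s_labels, num_classes):
    """NW prediction (Eq. (2)): softmax over negative squared Euclidean
    distances between query and support features, weighting support labels."""
    d = torch.cdist(q_feat, s_feat) ** 2                 # (N_q, N_s) distances
    w = F.softmax(-d, dim=1)                             # kernel weights
    y = F.one_hot(s_labels, num_classes).float()         # (N_s, C) one-hot labels
    return w @ y                                         # (N_q, C) probabilities

def sample_support(xs, ys, es, env, n_c, num_classes):
    """Draw S_e^B: n_c examples per class, all from environment `env`."""
    idx = []
    for c in range(num_classes):
        pool = torch.nonzero((ys == c) & (es == env)).squeeze(1)
        idx.append(pool[torch.randperm(len(pool))[:n_c]])
    idx = torch.cat(idx)
    return xs[idx], ys[idx]

def implicit_step(phi, opt, xq, yq, env, xs, ys, es, num_classes, n_c=8):
    """One gradient step on the implicit objective (Eq. (8)), pairing a
    query mini-batch with a single-environment, class-balanced support."""
    sx, sy = sample_support(xs, ys, es, env, n_c, num_classes)
    probs = nw_head(phi(xq), phi(sx), sy, num_classes)
    loss = F.nll_loss(torch.log(probs + 1e-12), yq)      # cross-entropy on NW output
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

The explicit variant (Eq. (7)) would additionally draw a second support set \(\mathcal{S}_{e^{\prime}}^{B}\), perform a second forward pass, and penalize the squared difference between the two predictions with weight \(\lambda\).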
In our experiments, we find that Cluster mode can be a sufficient replacement for Full mode, while being computationally cheaper. These inference modes can be used interchangeably and flexibly. As an example, consider a workflow which would involve using Cluster mode to perform efficient inference, and then using Full mode on a select few (potentially problematic) test queries to understand model behavior. ### Connections to Prior Work Our assumptions in Eq. (1) are common across many works related to learning invariant predictors [40; 27; 54; 41; 26]. Representatively, under the binary classification setting, the IRM objective finds a representation function \(\varphi\) which elicits an invariant predictor across environments \(E\) such that for all \(h\) that has a non-zero probability for \(\varphi(X)\) in any (and all) environment(s): \[\mathbb{E}_{e}[Y\mid\varphi(X)=h]=\mathbb{E}_{e^{\prime}}[Y\mid\varphi(X)=h],\ \forall e,e^{\prime}\in E.\] Eq. (1) can be viewed as a generalization of this equality to multi-class settings.4 Footnote 4: This equality has also been called “sufficiency invariance” [60]. Furthermore, note that given the feature extractor \(\varphi\), the NW mechanism \(f\) is a nonlearnable classifier, whereas \(w\) is learned in the IRM setting. Thus, our proposed objective can be interpreted as learning invariant features \(\varphi\), where the _fixed classifier constraint is satisfied by construction_. This avoids the need to approximate the complex bilevel optimization problem with a regularizer which assumes convexity and requires computing the Hessian. Essentially, \(f\) enforces invariance through the manipulation of the support set, providing a more intuitive and computationally simpler objective to optimize. In the Experiments section, we compare IRM against a variant of our algorithm where we freeze the learned representations and finetune a linear classifier on top using the same training data. We find that our algorithm performs better than IRM on all datasets, suggesting that it captures invariant representations better than IRM. ## 5 Experiments and Results ### Baselines We compare against several popular and competitive baseline algorithms: empirical risk minimization (ERM) [52], invariant risk minimization (IRM) [1], deep CORAL [48], Fish [46], LISA [64], and CLOvE [54]. When available, results on baselines are pulled from their respective papers. Details on baseline algorithms are provided in the Appendix. ### Datasets We experiment on 3 real-world domain generalization tasks. Two are from the WILDS benchmark [23], and the third is a challenging melanoma detection task. Details on the datasets are summarized in Table 1, and further information is provided in the Appendix. \begin{table} \begin{tabular}{l l l l l l} \hline \hline _Dataset_ & _\# Classes_ & _Env_ & _\# Envs_ & _Architecture_ & _Metric_ \\ \hline Camelyon-17 & 2 & Hospital & 3 & DenseNet-121 & Average acc. \\ ISIC & 2 & Hospital & 3 & ResNet-50 & F1-score \\ FMoW & 62 & Region & 5 & DenseNet-121 & Worst-region acc. \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of Datasets.
\begin{table} \begin{tabular}{l l l l} \hline \hline _Algorithm_ & _Camelyon-17_ & _ISIC_ & _FMoW_ \\ \hline ERM [52] & 70.3\(\pm\)6.4 & 58.2\(\pm\)2.9 & 32.6\(\pm\)1.6 \\ IRM [1] & 70.9\(\pm\)6.8 & 57.9\(\pm\)1.0 & 31.3\(\pm\)1.2 \\ CORAL [48] & 72.4\(\pm\)4.4 & 59.1\(\pm\)2.2 & 31.7\(\pm\)1.0 \\ Fish [46] & 74.7\(\pm\)7.1 & 64.4\(\pm\)1.7 & 34.6\(\pm\)0.0 \\ LISA [64] & 77.1\(\pm\)6.5 & 64.8\(\pm\)2.3 & 35.5\(\pm\)1.8 \\ CLOvE [54] & 79.9\(\pm\)3.9 & 66.2\(\pm\)2.2 & **40.1\(\pm\)**0.6 \\ \hline NW\({}^{\text{B}}\), Random & 71.7\(\pm\)5.3 & 56.7\(\pm\)1.4 & 31.1\(\pm\)0.8 \\ NW\({}^{\text{B}}\), Full & 72.0\(\pm\)6.7 & 61.9\(\pm\)3.5 & 31.6\(\pm\)0.9 \\ NW\({}^{\text{B}}\), Cluster & 70.6\(\pm\)6.9 & 61.4\(\pm\)2.3 & 31.3\(\pm\)0.9 \\ NW\({}^{\text{B}}\), Ensemble & 71.9\(\pm\)6.0 & 63.9\(\pm\)3.8 & 32.2\(\pm\)1.0 \\ NW\({}^{\text{B}}\), Probe & 69.2\(\pm\)7.4 & 59.7\(\pm\)2.5 & 29.9\(\pm\)1.5 \\ \hline NW\({}^{\text{B}}_{\text{e}}\), Random & 74.8\(\pm\)8.4 / 75.3\(\pm\)3.2 & 57.5\(\pm\)1.9 / 55.0\(\pm\)0.9 & 31.2\(\pm\)0.7 / 30.9\(\pm\)0.5 \\ NW\({}^{\text{B}}_{\text{e}}\), Full & **80.0\(\pm\)**2.7 / 79.7\(\pm\)1.9 & 69.6\(\pm\)2.3 / 70.0\(\pm\)1.0 & 35.0\(\pm\)0.7 / 34.6\(\pm\)0.4 \\ NW\({}^{\text{B}}_{\text{e}}\), Cluster & 78.6\(\pm\)2.5 / 79.0\(\pm\)1.4 & **71.1\(\pm\)**1.7 / 71.0\(\pm\)1.0 & 33.9\(\pm\)0.6 / 34.0\(\pm\)0.3 \\ NW\({}^{\text{B}}_{\text{e}}\), Ensemble & 79.5\(\pm\)2.6 / 79.6\(\pm\)1.9 & 69.5\(\pm\)2.2 / 69.8\(\pm\)0.8 & 37.8\(\pm\)0.9 / 38.2\(\pm\)0.4 \\ NW\({}^{\text{B}}_{\text{e}}\), Probe & 75.3\(\pm\)7.3 / 75.8\(\pm\)3.3 & 61.4\(\pm\)3.1 / 63.4\(\pm\)2.8 & 33.9\(\pm\)1.5 / 32.7\(\pm\)1.4 \\ \hline \hline \end{tabular} \end{table} Table 2: Metric average \(\pm\) standard deviation for all datasets (%). Higher is better. **Bold** is best and underline is second-best. Implicit / Explicit. 1. The Camelyon-17 dataset [4] comprises microscopic images of stained tissue patches from different hospitals, where the label corresponds to presence of tumor tissue in patches and the environment is the hospital where the patch comes from. 2. The melanoma dataset is from the International Skin Imaging Collaboration (ISIC) archive5. The ISIC dataset comprises dermoscopic images of skin lesions from different hospitals, where the label corresponds to whether or not the lesion is diagnosed as melanoma and the environment is the hospital where the image comes from. There are significantly fewer positive examples than negative examples, with this label imbalance varying across environments (see Appendix). Footnote 5: [https://www.isic-archive.com](https://www.isic-archive.com) 3. The Functional Map of the World (FMoW) dataset [6] comprises RGB satellite images, where the label is one of 62 building or land use categories, and the environment represents the year the image was taken and its geographical region. ### Experimental Setup For each model variant, we train 5 separate models with different random seeds, and perform model selection on an out-of-distribution (OOD) validation set. For WILDS datasets, we follow all hyperparameters and model selection techniques, and report metrics as specified by the benchmark. This includes using a DenseNet-121 backbone initialized with pretrained ImageNet weights as \(\varphi\) and no random augmentations for both datasets. Similarly for ISIC, we use a pretrained ResNet-50 backbone as \(\varphi\) with no augmentations, and perform model selection on an OOD validation set.
Due to significant label imbalance, we report F1-score instead of average accuracy. For NW algorithms, we refer to models which balance classes (i.e. modeling Eq. (3)) as NW\({}^{\text{B}}\), and models which additionally condition on environment (i.e. modeling both Eq. (3) and Eq. (4)) as NW\({}^{\text{B}}_{\text{e}}\). For NW\({}^{\text{B}}_{\text{e}}\) models, we train explicit and implicit variants. For all NW algorithms, we perform evaluation on all inference modes. In addition, for completeness, we experiment on a variant where we freeze the feature extractor and finetune a linear probe on top of the learned representations on the same training data \(\mathcal{D}_{tr}\), which we refer to as "Probe". As an example, the implicit variant of NW\({}^{\text{B}}_{\text{e}}\) is trained on Eq. (8), where the support set is balanced across classes (B) and conditioned on an environment (e). We set \(N_{c}=8\) for Camelyon-17 and ISIC and \(N_{c}=1\) for FMoW. An analysis of this hyperparameter is provided in the Appendix. The query batch size \(N_{q}\) is set to 8 for all NW experiments. For Random and Cluster inference modes, we set \(k=3\). This was chosen based on prior work [55], where \(k=3\) was shown to balance good error rate performance with computational efficiency. For explicit variants, we tune \(\lambda\) for all datasets via grid search on a held-out validation set. Full hyperparameter details are provided in the Appendix. All training and inference is done on an Nvidia A6000 GPU and all code is written in PyTorch.6 Footnote 6: Our code is available at [https://github.com/alanqrwang/nwhead](https://github.com/alanqrwang/nwhead). ### Results Table 2 shows the main results. We find that on Camelyon-17 and ISIC datasets, NW\({}^{\text{B}}_{\text{e}}\) with Full mode outperforms all baselines and variants we consider. In addition, NW\({}^{\text{B}}_{\text{e}}\) variants typically have lower variance across random seeds as compared to baselines. For FMoW, NW\({}^{\text{B}}_{\text{e}}\) with Ensemble mode performs around \(2\%\) lower than the best performing baseline, CLOvE. We observe that the most computationally-efficient inference mode, Cluster, performs comparably to Full mode for NW\({}^{\text{B}}_{\text{e}}\) models, and is in fact the highest-performing model for ISIC. Thus, we conclude that Cluster mode can be an efficient replacement for Full. For ISIC, we find that almost all NW\({}^{\text{B}}\) modes (except Random) perform \(\sim 3\%\) better than ERM. This may be attributed to balancing classes across environments, which we suspect has added benefit for highly imbalanced tasks. In contrast, this boost is less apparent for Camelyon-17, which has relatively balanced classes. As an ablation, we compare NW\({}^{\text{B}}\) against an NW variant without class-balancing in the Appendix. NW\({}^{\text{B}}_{\text{e}}\) further improves over NW\({}^{\text{B}}\) by \(\sim 7\%\). Exploring further, we compare NW\({}^{\text{B}}\) against an ERM variant with balanced classes per environment, which we denote ERM\({}^{\text{B}}\). This achieves \(63.0\pm 2.5\), which is on-par with \(\text{NW}^{\text{B}}\). This is expected as the theoretical assumptions are the same for both models. Comparing implicit to explicit variants of \(\text{NW}^{\text{B}}_{\text{e}}\), we do not find much difference in explicitly enforcing Eq. 1, although we do observe significantly lower variances across model runs.
Generally, we find the slight performance gain of explicit training not to be worth the computational overhead of doubling the number of support set forward passes per gradient step and tuning the hyperparameter \(\lambda\). While not the highest-performing, we highlight the consistent \(1\)-\(5\%\) improvement of Probe models over IRM, indicating that \(\text{NW}^{\text{B}}_{\text{e}}\) may be better at capturing invariant features. However, other non-parametric inference modes still outperform Probe, possibly indicating that the learned features are more suitable for NW-style classifiers. ## 6 Discussion and Limitations There are several advantages of the proposed NW approach over previous works. First, the implicit training strategy in Eq. (8) has no hyperparameter to tune, while remaining competitive with and often outperforming state-of-the-art baselines, which all require tuning a hyperparameter coefficient in the regularized loss. Second, the NW head enables interpretability by interrogating nearest neighbors in the feature space. Since these neighbors directly contribute to the model's prediction (Eq. (2)), interrogation enables a user to see what is driving the model's decision-making. This not only allows for greater model transparency, but also enables interrogating the quality of the invariant features. We explore this capability in Section H in the Appendix. Note that this degree of transparency is not present in parametric baselines. Lastly, from an intuitive standpoint, we believe our non-parametric approach to enforcing invariance across environments is more natural than baseline methods, since an environment is encoded by manipulating the support set to contain real samples only from that environment. Other baseline methods resort to proxy methods to enforce invariance [54; 48; 27]. One important limitation of our method is computational (see Appendix for an analysis of runtimes). The proposed approach requires pairwise comparisons, which scale quadratically with sample size. Practically, this means passing a separate support mini-batch in addition to a query mini-batch at every training iteration. This limitation is compounded for explicit variants, in which two support sets must be drawn independently. Future work may explore more computationally-efficient training strategies. At inference time, Full, Cluster, and Ensemble modes are expensive procedures which require computing features for the entire support set, although precomputing features can mitigate this. However, we argue that in high-risk, safety-critical domains like medical imaging, high inference throughput may not be as important as performance, interpretability, and robustness. We expect the proposed approach to work well with tasks that have several (and diverse) sets of examples per label class in each environment. If this is not the case, as in the FMoW dataset, the resulting model will be sub-optimal. In particular, in the extreme case where no example is present for a specific class in a given environment, constructing a support set with labels that cover the ground truth label of the query images will not always be possible. This will, in turn, impact performance. ## 7 Conclusion We presented a nonparametric strategy for invariant representation learning based on the Nadaraya-Watson (NW) head. In the NW head, the prediction is made by comparing the learned representations of the query to the elements of a support set that consists of labeled data.
We demonstrated two possible ways of manipulating the support set, and showed how this corresponds to encoding different assumptions from a causal perspective. We validated our approach on three challenging and real-world datasets. We believe there are many interesting directions for further research. First, our treatment is restricted to classification tasks. Future work may explore an extension to the regression setting. Second, it would be interesting to explore adaptation to the test domain, given additional information. For example, reweighting the occurrence of samples per label could provide improved results given knowledge about the edge \(E\to Y\) in the test distribution. One can further envision implementing the proposed method in settings where there are previously unseen test-time labels/tasks. Finally, we are interested in replacing the fixed similarity function with a learnable kernel. ## Acknowledgements Funding for this project was in part provided by the NIH grant R01AG053949 and the NSF CAREER 1748377 grant.
2309.05515
Interactions between several types of cosmic strings
We study the interaction of several types of static straight cosmic strings, including local strings, global strings, and bosonic superconducting strings with and without magnetic currents. First, we evaluate the interaction energy of two widely separated cosmic strings using the point source formalism and show that the most dominant contribution to the interaction energy comes from the excitation of the lightest mediator particles in an underlying theory. The interaction energy at arbitrary separation distances is then analyzed numerically by the gradient flow method. It turns out that an additional scalar field introduced in the bosonic superconducting string becomes an additional source of attraction. For such a bosonic superconducting string, we find that a string with two winding numbers is energetically favorable compared to two strings with a single winding number in a certain parameter region. Our analysis reveals that the phase structure of bosonic superconducting strings is richer than that of local and global strings and that the formation of bound states at intersections of bosonic superconducting strings is favored.
Kohei Fujikura, Siyao Li, Masahide Yamaguchi
2023-09-11T14:58:14Z
http://arxiv.org/abs/2309.05515v1
# Interactions between several types of cosmic strings ###### Abstract We study the interaction of several types of static straight cosmic strings, including local strings, global strings, and bosonic superconducting strings with and without magnetic currents. First, we evaluate the interaction energy of two widely separated cosmic strings using the point source formalism and show that the most dominant contribution to the interaction energy comes from the excitation of the lightest mediator particles in an underlying theory. The interaction energy at arbitrary separation distances is then analyzed numerically by the gradient flow method. It turns out that an additional scalar field introduced in the bosonic superconducting string becomes an additional source of attraction. For such a bosonic superconducting string, we find that a string with two winding numbers is energetically favorable compared to two strings with a single winding number in a certain parameter region. Our analysis reveals that the phase structure of bosonic superconducting strings is richer than that of local and global strings and that the formation of bound states at intersections of bosonic superconducting strings is favored. ###### Contents * 1 Introduction * 2 Cosmic string solutions * 2.1 Local strings * 2.2 Global strings * 2.3 Bosonic Superconducting strings * 3 Cosmic string interactions: Analytic studies * 3.1 Local strings interactions * 3.2 Global strings interactions * 3.3 Bosonic superconducting strings interactions * 4 Cosmic string interactions: Numerical studies * 4.1 Local strings * 4.2 Global strings * 4.3 Bosonic superconducting strings without current * 4.4 Bosonic superconducting strings with current * 5 Conclusions * A Normalization and Numerical solutions * A.1 Normalization of local and bosonic superconducting strings * A.2 Normalization of global strings * A.3 Numerical solutions * B Numerical calculation of the interaction energy of two cosmic strings ## 1 Introduction In the early universe, cosmological thermal phase transitions associated with spontaneous symmetry breaking can take place. If the vacuum manifold is not simply connected, then after phase transitions, macroscopic one-dimensional line-like topological defects called cosmic strings can be produced through the Kibble-Zurek mechanism [1; 2]. (See Ref. [3] for a review of cosmic strings.) A typical example of a (local) cosmic string solution is the Abrikosov-Nielsen-Olesen (ANO) string [4; 5] realized in the Abelian-Higgs model, where the condensation of the Higgs field takes place. Further introduction of a new \(\widetilde{U}(1)\) gauge group and a new Higgs field leads to a localized condensate of the new Higgs inside the string in a certain parameter region, showing persistent current-carrying superconductivity [6]. In addition, if the theory includes a Yukawa coupling between the string-forming Higgs and new fermions, there also exists a current-carrying cosmic string solution [6; 7]. Depending on the spin statistics of the charge carriers, such current-carrying string solutions are conventionally called bosonic or fermionic superconducting strings, and their dynamics and phenomenology have been investigated in many works [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. After the formation of cosmic strings, they constitute a web-like structure called a string network system.
During the evolution of a string network system, cosmic strings can intersect many times, and the dynamics of these intersections can affect the fate of the string network system. In particular, when two cosmic string segments intersect, it is conceivable that they generically snap and reconnect with their partners. This process is called reconnection (or intercommutation), and occurs dominantly for simple string solutions such as local strings and global strings, unless the relative velocities are sufficiently high [21; 22; 23; 24]. Interestingly, if the interaction between two string segments is attractive and strong enough, Y-shaped junctions (Y-junctions) can form after string intersection [25]. The formation of Y-junctions is discussed in Refs. [26; 27; 28; 29; 30; 31; 32]. Characteristic phenomena of Y-junctions, such as distinctive gravitational lensing [33; 34] and gravitational wave bursts due to cusps and kinks [35; 36; 37], have also been discussed. As a first step toward clarifying the dynamics of string network systems, including the formation of Y-junctions, it is very important to reveal the interaction between two static, straight cosmic string segments. (Here, "static" means the velocity of the string is zero.) Analytical studies of the interaction between two static, parallel global and local strings were initiated in Refs. [21; 38] under the assumption that the whole field configuration can be approximated by the superposition of each string. After these works, Speight proposed a novel method, called the point source formalism, to calculate the interaction energy of two local strings [39]. It is shown that the asymptotic field configurations of a local string can be well represented by line-like static sources that couple linearly to the fields in the underlying theory. When two strings are sufficiently separated, the total external sources of the two-string system are given by the superposition of each string's source. The interaction energy can then be evaluated in linearized field theories using the standard Green's function method. This formulation can be easily extended to the calculation of interaction energies for more complicated cosmic string solutions. On the other hand, when two static cosmic string segments are sufficiently close together, the point source formalism is not suitable to estimate the interaction energy. Clearly, in such a case, the non-linear effects of the internal structure of cosmic strings on the interaction energy cannot be ignored. Therefore, when two strings are very close together, it is very challenging to compute the interaction energy analytically, which results in the need for numerical calculations that do not rely on the point source formalism. The interaction energy of two local strings at arbitrary separation distances has been accurately evaluated using the variational approach in Ref. [40] and the gradient flow method in Ref. [41]. (See also Refs. [42; 43] for related works.) While the interaction energy and the dynamics of intersections of a simple two-local-string system have been clarified, cosmic string solutions with more complicated field configurations, such as bosonic superconducting strings, have not been fully understood yet. Ref. [44] has studied the interaction energy of bosonic superconducting strings without current, using the point source formalism that takes into account the effect of another scalar field condensing inside the string.
However, there have been neither analytical studies of the interaction energy of bosonic current-carrying strings using the point source formalism nor numerical studies of bosonic superconducting strings that include the effects of the nonlinear internal structure.1 One of the main purposes of the present paper is to determine the interaction energy of two static, parallel bosonic superconducting strings using analytic calculations with the point source formalism and numerical calculations with the gradient flow method. In addition, for completeness, we will also apply this to local (ANO) and global strings. For bosonic superconducting strings, we consider both the cases with and without magnetic currents associated with the newly introduced \(\widetilde{U}(1)\) gauge group. For the case without magnetic currents, we show that the interaction sourced by the Higgs field in the fundamental representation of \(\widetilde{U}(1)\) is always attractive, which gives a dominant contribution to the interaction energy in a certain parameter space. In particular, the short-range attraction due to the presence of this Higgs field results in a non-trivial phase structure of the interaction energy of the bosonic superconducting strings. We argue that this short-range attraction cannot be seen in the point source formalism and can only be captured by a full numerical calculation. In the case with magnetic currents, we find that the contribution triggered by the \(\widetilde{U}(1)\) gauge field is related to the directions of the currents, but its effect is only important in the long-distance limit due to the back-reaction on the \(\widetilde{U}(1)\) Higgs field (the current quenching effect), which allows only small values of the currents. Meanwhile, the short-range attraction is found to be weaker in the presence of magnetic currents, as another consequence of the current quenching effect. Consequently, this attraction is maximal when the magnetic current is zero. This paper is organized as follows. In Sec. 2, we review several static cosmic string solutions, including local (ANO) strings, global strings, and bosonic superconducting strings. We then show that asymptotic configurations can be obtained in the linearized field theory by introducing certain external sources. In Sec. 3, the interaction energies of various string segments are estimated using the point source formalism. In Sec. 4, we present our numerical results using the relaxation method (gradient flow method). Sec. 5 is devoted to conclusions and discussion. Some numerical details are given in the Appendices. ## 2 Cosmic string solutions In this section, we review various types of cosmic string solutions, including local, global, and bosonic superconducting cosmic strings. We derive the asymptotic configurations of these strings analytically in the long-distance limit. It is shown that the asymptotic field configurations can be obtained by introducing corresponding external sources that couple linearly to the fields in the free field theory. ### Local strings Let us first consider the simplest cosmic string solution realized in the Abelian-Higgs model, known as the Abrikosov-Nielsen-Olesen (ANO) string [4; 5]. The Lagrangian density of the Abelian-Higgs model is given by2 Footnote 2: Here, we focus on the conventional polynomial form of the potential. There exist string solutions with Coleman-Weinberg type potentials [49]. Such string solutions and the interaction energy between them have recently been investigated in detail in Ref. [41].
\[\begin{split}&\mathcal{L}_{\text{AH}}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+|D_{\mu}\phi|^{2}-V(\phi),\\ & V(\phi)=\frac{\lambda_{\phi}}{4}\left(|\phi|^{2}-\eta_{\phi}^{2}\right)^{2},\end{split} \tag{1}\] where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) and \(D_{\mu}\equiv\partial_{\mu}-ieA_{\mu}\) are the field strength and the covariant derivative of the \(U(1)\) gauge field, respectively. The variational principle leads to the following equations of motion: \[\begin{split}&\left(D_{\mu}D^{\mu}-\frac{\lambda_{\phi}}{2}\left(|\phi|^{2}-\eta_{\phi}^{2}\right)\right)\phi=0,\\ &\partial_{\nu}F^{\nu\mu}=J^{\mu}.\end{split} \tag{2}\] Here, \(J^{\mu}\) is the Noether current associated with the local \(U(1)\) symmetry, given by \[J_{\mu}=ie\left(\phi D_{\mu}\phi^{*}-\phi^{*}D_{\mu}\phi\right). \tag{3}\] The ANO string solution is obtained by considering boundary conditions with nontrivial windings. Assuming that the cosmic string configuration is static, straight (here and hereafter we use "straight" to indicate translationally symmetric in the string direction) and circularly symmetric on the plane perpendicular to the string axis direction, we can use the following ansatz: \[\phi(x)=\phi_{r}(r)e^{in\theta},\ \ A_{\mu}(x)=A_{\theta}(r)\delta_{\theta}^{\mu}. \tag{4}\] Here, \((r,\theta,z)\) are cylindrical coordinates, where \(r\), \(\theta\), and \(z\) are the radial distance, the angle, and the height, respectively. \(\phi_{r}(r)\) and \(A_{\theta}(r)\) are real functions, and \(n\) is a non-zero integer called the winding number. The boundary conditions of \(\phi\) and \(A_{\mu}\) at infinity are determined by the finiteness of the energy, that is, the scalar potential, \(V(\phi)\), and the covariant derivative \(D_{\mu}\phi\) must vanish at infinity: \[\phi_{r}=\eta_{\phi},\ \ D_{\mu}\phi=0,\ \ r\rightarrow\infty. \tag{5}\] Regularity of \(|\phi|\) and \(A_{\theta}\) at the origin requires the following boundary conditions, \[\phi_{r}(r\to 0)=0,\ A_{\theta}(r\to 0)=0. \tag{6}\] Analytic solutions of Eq. (2) under the boundary conditions Eqs. (5) and (6) are not known, and hence a numerical calculation is required. See Appendix A for the detailed setup of the numerical calculations. A consequence of the boundary condition Eq. (5) is that the magnetic flux must be quantized at infinity, \[\oint_{\mathbb{S}^{1}}A=\frac{n}{e}\oint_{\mathbb{S}^{1}}\mathrm{d}\theta=\frac{2\pi n}{e},\ (r\rightarrow\infty). \tag{7}\] Here, \(A\equiv A_{\mu}\mathrm{d}x^{\mu}\), while the integration is taken over the circle enclosing the origin in the two-dimensional plane which is perpendicular to the straight ANO string. Note that the first equality only holds at infinity. Let us now focus on the asymptotic solutions. We work in the different field parameterizations defined by \[\phi(x)=\left(\eta_{\phi}+\frac{\sigma(x)}{\sqrt{2}}\right)e^{i\pi(x)},\ U_{\mu}(x)=A_{\mu}(x)-\frac{1}{e}\partial_{\mu}\pi(x), \tag{8}\] where \(\sigma(x)\) is the radial component of the \(\phi\) field and \(\pi(x)\) is the Nambu-Goldstone boson (NG-boson). Although the above field parameterizations are ill-defined at the origin \(\phi=0\), they are convenient for seeking the asymptotic field configurations of the ANO string at infinity, as we will see below.
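Since the profile equations must be solved numerically, the following sketch integrates one standard dimensionless form of the coupled ODEs for \(f(r)\equiv\phi_{r}(r)\) and a gauge profile \(a(r)\), defined here through \(eA_{\theta}(r)=n\,a(r)\) in the coordinate basis (this choice of variables is ours; the normalization actually used in Appendix A may differ), subject to the boundary conditions (5) and (6):

```python
import numpy as np
from scipy.integrate import solve_bvp

lam, e, eta, n = 2.0, 1.0, 1.0, 1   # illustrative couplings; lam = lambda_phi

def rhs(r, y):
    # y = [f, f', a, a'], with the profile equations
    #   f'' + f'/r - n^2 (1 - a)^2 f / r^2 - (lam/2) (f^2 - eta^2) f = 0,
    #   a'' - a'/r + 2 e^2 f^2 (1 - a) = 0.
    f, fp, a, ap = y
    return np.vstack([
        fp,
        -fp / r + n**2 * (1 - a)**2 * f / r**2 + 0.5 * lam * (f**2 - eta**2) * f,
        ap,
        ap / r - 2 * e**2 * f**2 * (1 - a),
    ])

def bc(y0, yR):
    # f(0) = 0, a(0) = 0 (regularity);  f(R) = eta, a(R) = 1 (vacuum at large R)
    return np.array([y0[0], y0[2], yR[0] - eta, yR[2] - 1.0])

r = np.linspace(1e-3, 20.0, 400)
y_guess = np.vstack([np.tanh(r), 1 / np.cosh(r)**2, 1 - np.exp(-r), np.exp(-r)])
sol = solve_bvp(rhs, bc, r, y_guess, max_nodes=20000)
```

Imposing the vacuum values at a finite radius \(R\) is an approximation, so \(R\) and the mesh may need tuning depending on the couplings.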
With these field parameterizations, the Lagrangian density Eq. (1) can be rewritten as \[\mathcal{L}_{\rm AH}=\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma-\frac{1}{2}m_{\phi}^{2}\sigma^{2}-\frac{1}{4}\mathcal{F}^{\mu\nu}\mathcal{F}_{\mu\nu}+\frac{1}{2}m_{e}^{2}U_{\mu}U^{\mu}+\sqrt{2}e^{2}\eta_{\phi}\sigma U_{\mu}U^{\mu}+\frac{e^{2}}{2}\sigma^{2}U_{\mu}U^{\mu}-\frac{\sqrt{2}\lambda_{\phi}}{4}\eta_{\phi}\sigma^{3}-\frac{\lambda_{\phi}}{16}\sigma^{4}, \tag{9}\] where \(m_{\phi}\equiv\sqrt{\lambda_{\phi}}\eta_{\phi}\) and \(m_{e}\equiv\sqrt{2}e\eta_{\phi}\) are the masses of the scalar and the \(U(1)\) gauge field evaluated at infinity, respectively, and \(\mathcal{F}_{\mu\nu}=\partial_{\mu}U_{\nu}-\partial_{\nu}U_{\mu}=F_{\mu\nu}\). Let us find asymptotic solutions for the scalar and the gauge fields. Circular symmetry restricts the forms of the field configurations to \(\sigma(x)=\sigma(r)\) and \(U_{\mu}(x)=U_{\theta}(r)\delta_{\theta}^{\mu}\). Since \(\sigma(r)\simeq 0\) due to the boundary conditions Eq. (5), we can neglect terms that depend on \(\sigma(r)\) at infinity. Then the equation of motion for \(U_{\theta}(r)\) becomes linear and is given by \[\left(\frac{\partial^{2}}{\partial r^{2}}-\frac{1}{r}\frac{\partial}{\partial r}-m_{e}^{2}\right)U_{\theta}(r)=0. \tag{10}\] Therefore, we can obtain an analytical asymptotic solution of \(U_{\theta}(r)\) under the boundary condition Eq. (5) as \[U_{\theta}^{\rm sol}(r)=k_{e}rK_{1}(m_{e}r), \tag{11}\] where \(K_{m}(x)\) is the modified Bessel function of the second kind of order \(m\), while \(k_{e}\) is a numerical constant. It is worth noting that the modified Bessel functions have the following asymptotic behavior, \[K_{0}(r)\simeq K_{1}(r)\simeq\sqrt{\frac{\pi}{2r}}e^{-r},\ (r\to\infty). \tag{12}\] Since \(K_{1}(x)\) is exponentially suppressed at large distance \(x\gg 1\), we can safely neglect non-linear terms of \(U_{\theta}\), and then the equation of motion for \(\sigma(r)\) simplifies to \[\left(\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}-m_{\phi}^{2}\right)\sigma(r)=0. \tag{13}\] Here, we have assumed that the quadratic term of the gauge field \((U_{\theta})^{2}\) is negligible compared to \(\sigma(r)\), which is justified only for \(\lambda_{\phi}<8e^{2}\) [50]. The solution consistent with the boundary condition Eq. (5) is given by \[\sigma^{\rm sol}(r)=k_{\phi}K_{0}(m_{\phi}r), \tag{14}\] where \(k_{\phi}\) is a constant. For the parameter region \(\lambda_{\phi}>8e^{2}\), the quadratic term \((U_{\theta})^{2}\) is not negligible, and thus the asymptotic expression Eq. (14) is no longer justified [50]. In this case, the \((U_{\theta})^{2}\) term has to be included to derive the correct asymptotic configurations. The numerical constants \(k_{\phi}\) and \(k_{e}\) are determined by the requirement that the exact numerical solutions of Eq. (2) in the long-distance limit should coincide with the asymptotic solutions. For the ANO string configuration, this procedure is explicitly examined in Ref. [39]. For the special case \(\lambda_{\phi}/2e^{2}=1\), called the critical coupling or Bogomol'nyi-Prasad-Sommerfield (BPS) limit, analytic expressions have been found [38]. Let us next find external sources that represent the presence of an ANO string with given constants \(k_{\phi}\) and \(k_{e}\).
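Continuing the sketch above, \(k_{\phi}\) and \(k_{e}\) can be read off by matching the numerical tails to Eqs. (14) and (11). Anticipating Sec. 3, one can then evaluate the expected Yukawa-type interaction energy per unit length of two widely separated parallel strings; the overall form used below (gauge exchange minus scalar exchange) is our reading of the point-source result and should be checked against the derivation in Sec. 3.

```python
from scipy.special import k0, k1

m_phi, m_e = np.sqrt(lam) * eta, np.sqrt(2.0) * e * eta

# Match the tails at a radius r_m well outside the core:
#   sigma(r) ~ k_phi K_0(m_phi r),  with sigma = sqrt(2) (f - eta),
#   U_theta(r) ~ k_e r K_1(m_e r),  with U_theta = (n/e) (a - 1) in our convention.
r_m = 10.0
f_m, a_m = sol.sol(r_m)[0], sol.sol(r_m)[2]
k_phi = np.sqrt(2.0) * (f_m - eta) / k0(m_phi * r_m)
k_e = (n / e) * (a_m - 1.0) / (r_m * k1(m_e * r_m))

def E_int(d):
    """Assumed point-source interaction energy per unit length at separation d:
    repulsive gauge (dipole) exchange minus attractive scalar exchange."""
    return 2.0 * np.pi * (k_e**2 * k0(m_e * d) - k_phi**2 * k0(m_phi * d))
```

At the critical coupling \(\lambda_{\phi}=2e^{2}\) one has \(m_{\phi}=m_{e}\) and \(|k_{\phi}|=|k_{e}|\), so the two contributions cancel and the strings do not interact at this order.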
Following Ref. [39], we introduce external sources \(J_{\sigma}\) and \(j_{\mu}\) in the linearized field theory as \[\mathcal{L}_{\rm AH}=\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma-\frac{1}{2}m_{\phi}^{2}\sigma^{2}-\frac{1}{4}\mathcal{F}^{\mu\nu}\mathcal{F}_{\mu\nu}+\frac{1}{2}m_{e}^{2}U_{\mu}U^{\mu}-J_{\sigma}\sigma-j_{\mu}U^{\mu}. \tag{15}\] At infinity, since the string configurations are described by Eqs. (11) and (14), we can read off the explicit forms of the external sources: \[J_{\sigma}\equiv\left(\nabla^{2}-m_{\phi}^{2}\right)\sigma^{\rm sol}(|\mathbf{x}|)=-2\pi k_{\phi}\delta^{(2)}(\mathbf{x}), \tag{16}\] \[j_{i}\equiv\left(\nabla^{2}-m_{e}^{2}\right)U_{i}^{\rm sol}(|\mathbf{x}|)=2\pi\frac{k_{e}}{m_{e}}\epsilon_{ji}\nabla_{j}\delta^{(2)}(\mathbf{x}). \tag{17}\] Here, we change the two-dimensional coordinates from \((r,\theta)\) to the two-dimensional Cartesian coordinates \(\mathbf{x}\equiv(x,y)\) for later convenience. The Laplace operator is defined by \(\nabla^{2}\equiv\partial_{x}^{2}+\partial_{y}^{2}\), while \(\epsilon_{ij}\) is the two-dimensional Levi-Civita symbol. \(\delta^{(2)}(\mathbf{x})\) is the two-dimensional delta function. The following relations are used to derive Eq. (17): \[(\nabla^{2}-1)K_{0}(|\mathbf{x}|)=-2\pi\delta^{(2)}(\mathbf{x}),\ \frac{\mathrm{d}}{\mathrm{d}r}K_{0}(r)=-K_{1}(r). \tag{18}\] The above result implies that the ANO string configurations \(\sigma(r)\) and \(U_{\theta}(r)\) can be successfully represented by point-like external sources in the long-distance limit, where the internal structure of the string is coarse-grained. As argued in Ref. [39], \(J_{\sigma}\) and \(j_{\mu}\) represent a scalar monopole with charge \(-2\pi k_{\phi}\) and a magnetic dipole with moment \(-2\pi k_{e}/m_{e}\) whose direction is parallel or antiparallel to the \(z\)-direction depending on the sign of the winding number, respectively. In particular, the existence of the magnetic dipole moment follows from the presence of the magnetic flux inside the string given by Eq. (7). Analogous to a charged electron linearly coupled to a massless photon, \(J_{\sigma}\) and \(j_{\mu}\) couple to the massive scalar \(\sigma\) and the gauge field \(A_{\mu}\), respectively, for a local string. This picture is very useful when calculating the interaction between two strings, as seen in Sec. 3. ### Global strings In this subsection, we derive solutions for global strings and show that the existence of global strings can be approximately described by external currents associated with scalar and topological charges. The scalar and topological charges are linearly coupled to a massive scalar field and a two-index antisymmetric tensor field, respectively. The Lagrangian density of the Goldstone model with a complex scalar field is described by \[\mathcal{L}_{\rm global}=|\partial_{\mu}\phi|^{2}-V(\phi), \tag{19}\] where \(V(\phi)\) is defined as in Eq. (1). With the scalar field parametrization given by Eq. (8), the above Lagrangian can be rewritten as \[\mathcal{L}_{\rm global}=\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma+|\phi|^{2}\partial_{\mu}\pi\partial^{\mu}\pi-\frac{1}{2}m_{\phi}^{2}\sigma^{2}-\frac{\sqrt{2}}{4}\lambda_{\phi}\eta_{\phi}\sigma^{3}-\frac{\lambda_{\phi}}{16}\sigma^{4}. \tag{20}\] By using the ansatz of the static, straight string solution given by Eq.
(4), we obtain the equations of motion of the \(\sigma\) and \(\pi\) fields as \[\left(\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}-m_{\phi}^{2}\right)\sigma-\frac{3}{4}\sqrt{2}\lambda_{\phi}\eta_{\phi}\sigma^{2}-\frac{\lambda_{\phi}}{4}\sigma^{3}+\sigma(\partial_{\mu}\pi)^{2}+\sqrt{2}\eta_{\phi}(\partial_{\mu}\pi)^{2}=0, \tag{21}\] \[\partial_{\mu}(|\phi|^{2}\partial^{\mu}\pi)=0. \tag{22}\] Assuming the power-law dependence \(\sigma(r)\propto r^{-a}\) (\(a>0\)) at infinity and inserting \(\pi=n\theta\) into the above expression, we obtain the asymptotic solution of \(\sigma(r)\) at leading order, \[\sigma^{\rm sol}(r)=-\frac{\sqrt{2}n^{2}\eta_{\phi}}{m_{\phi}^{2}r^{2}}. \tag{23}\] We have confirmed that this result is in agreement with the numerical one. We first focus on the external source describing the massive excitation parameterized by \(\sigma(r)\). We introduce the external source as \[\mathcal{L}=\frac{1}{2}(\partial_{\mu}\sigma)^{2}-\frac{1}{2}m_{\phi}^{2}\sigma^{2}-J_{G}\sigma. \tag{24}\] \(J_{G}\) is determined by the requirement that it reproduces the scalar configuration given by Eq. (23). Substituting the asymptotic solution Eq. (23), to leading order one obtains \[J_{G}=\frac{\sqrt{2}n^{2}\eta_{\phi}}{r^{2}}. \tag{25}\] From the above expression, it is obvious that \(J_{G}\) is not a localized source, in contrast to the point-like \(J_{\sigma}\) of the ANO string. This stems from the fact that the contribution from the NG-boson, a massless degree of freedom originating from the term \((\partial_{\mu}\pi)^{2}\) in Eq. (21), is dominant at infinity. For the local string, this contribution is canceled by the gauge field configuration.
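The power-law tail (23) is straightforward to verify numerically. The following sketch (our own illustration, in the same dimensionless units as before) solves the radial equation for the global-string profile \(f(r)\equiv\phi_{r}(r)\), which follows from Eq. (19) with \(\phi=f(r)e^{in\theta}\), and compares \(\sigma=\sqrt{2}(f-\eta_{\phi})\) against the asymptote (23):

```python
import numpy as np
from scipy.integrate import solve_bvp

lam, eta, n = 2.0, 1.0, 1
m_phi = np.sqrt(lam) * eta

def rhs(r, y):
    # f'' + f'/r - n^2 f / r^2 - (lam/2) (f^2 - eta^2) f = 0
    f, fp = y
    return np.vstack([fp, -fp / r + n**2 * f / r**2 + 0.5 * lam * (f**2 - eta**2) * f])

def bc(y0, yR):
    return np.array([y0[0], yR[0] - eta])   # f(0) = 0, f(R) ~ eta

r = np.linspace(1e-3, 200.0, 2000)
sol = solve_bvp(rhs, bc, r, np.vstack([np.tanh(r), 1 / np.cosh(r)**2]), max_nodes=50000)

r_t = 50.0
sigma_num = np.sqrt(2.0) * (sol.sol(r_t)[0] - eta)
sigma_asy = -np.sqrt(2.0) * n**2 * eta / (m_phi**2 * r_t**2)
print(sigma_num, sigma_asy)   # agree to leading order in 1/r^2
```

Because the tail decays only as \(1/r^{2}\), the Dirichlet cutoff \(f(R)=\eta\) at finite \(R\) contaminates the comparison at the few-percent level; pushing \(R\) further out improves the agreement.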
(26) into the field equation of \(\pi\) in Eq. (22). Then, by comparing with Eq. (30), one can read off the expression for \(J_{B}^{\mu\nu}\) as \[J_{B}^{\mu\nu}=\eta_{\phi}\epsilon^{\mu\nu\rho\alpha}\partial_{\rho}\partial_{\alpha}\pi. \tag{31}\] Due to the antisymmetry of \(\epsilon^{\mu\nu\rho\sigma}\), the right-hand side of the above equation is zero except at the origin, where \(|\phi|=0\). At the origin, \(\pi\) is not well-defined; therefore, one cannot conclude that \(J_{B}^{\mu\nu}\) vanishes there. The presence of the magnetic charge of the NG-boson can be seen by integrating over the two-dimensional plane perpendicular to the straight string axis, \[\int\mathrm{d}^{2}x\,\epsilon^{03ij}\partial_{i}\partial_{j}\pi=\oint_{\mathbb{S}^{1}}\mathrm{d}\pi=2\pi n. \tag{32}\] We see that \(J_{B}^{\mu\nu}\) is zero except at the origin, while integrating it over a two-dimensional plane gives a non-zero finite value. This is exactly the definition of a delta function, and consequently we have \[J_{B}^{03}(\mathbf{x})=-J_{B}^{30}(\mathbf{x})=2\pi n\eta_{\phi}\delta^{(2)}(\mathbf{x}). \tag{33}\] Since this external source represents the winding number of the global string, the interaction Eq. (29) with the external source Eq. (33) can be understood as a topological coupling between the NG-boson and the global string.

### Bosonic superconducting strings

In this subsection, we review the bosonic superconducting string and derive its asymptotic solutions. The simplest model that accommodates bosonic superconducting solutions possesses two gauge symmetries, \(U(1)\times\widetilde{U}(1)\), and the Lagrangian density is defined by [6] \[\begin{split}&\mathcal{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+|D_{\mu}\phi|^{2}-\frac{1}{4}\tilde{F}^{\mu\nu}\tilde{F}_{\mu\nu}+|\tilde{D}_{\mu}\tilde{\phi}|^{2}-V(\phi,\tilde{\phi}),\\ & V(\phi,\tilde{\phi})=\frac{\lambda_{\phi}}{4}\left(|\phi|^{2}-\eta_{\phi}^{2}\right)^{2}+\frac{\lambda_{\tilde{\phi}}}{4}\left(|\tilde{\phi}|^{2}-\eta_{\tilde{\phi}}^{2}\right)^{2}+\beta|\phi|^{2}|\tilde{\phi}|^{2},\end{split} \tag{34}\] where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) and \(\tilde{F}_{\mu\nu}=\partial_{\mu}\tilde{A}_{\nu}-\partial_{\nu}\tilde{A}_{\mu}\) are the field strengths, while \(D_{\mu}\equiv\partial_{\mu}-ieA_{\mu}\) and \(\tilde{D}_{\mu}\equiv\partial_{\mu}-ig\tilde{A}_{\mu}\) are the covariant derivatives of the \(U(1)\) and \(\widetilde{U}(1)\) gauge fields, respectively. Here, we simply assume \(\lambda_{\phi}>0,\ \lambda_{\tilde{\phi}}>0\) and \(\beta>0\). The bosonic superconducting solution is realized when the \(U(1)\) symmetry is spontaneously broken and gives rise to the ANO string configurations, as discussed in Sec. 2.1, while the \(\widetilde{U}(1)\) symmetry is broken only in the string interior by the localized \(\tilde{\phi}\) condensation. Thus, \(|\phi|=0\) and \(|\tilde{\phi}|\simeq\eta_{\tilde{\phi}}\) are realized at the string center, while \(|\phi|=\eta_{\phi}\) and \(|\tilde{\phi}|=0\) hold at infinity. We shall now discuss the parameter region leading to the bosonic superconducting cosmic string solution. For illustrative purposes, we here neglect the \(\widetilde{U}(1)\) gauge field and the non-trivial \(z\)-dependence of the phase of \(\tilde{\phi}\), which will be included later.
The presence of the biquadratic interaction \(\beta|\phi|^{2}|\tilde{\phi}|^{2}\) contributes to the effective mass term of \(\tilde{\phi}\) as \[m_{\tilde{\phi}}^{2}=\beta|\phi|^{2}-\frac{\lambda_{\tilde{\phi}}}{2}\eta_{\tilde{\phi}}^{2}+\lambda_{\tilde{\phi}}|\tilde{\phi}|^{2}. \tag{35}\] The \(\widetilde{U}(1)\) symmetry is unbroken well outside the string when the squared mass of \(\tilde{\phi}\) is positive, that is, \(\beta\eta_{\phi}^{2}-\lambda_{\tilde{\phi}}\eta_{\tilde{\phi}}^{2}/2>0\). Also, \(\lambda_{\phi}\eta_{\phi}^{4}>\lambda_{\tilde{\phi}}\eta_{\tilde{\phi}}^{4}\) is required to avoid metastability of the minimum of the potential energy at \(|\phi|=\eta_{\phi}\) and \(|\tilde{\phi}|=0\), corresponding to the string exterior. Near the center of the string, where \(|\phi|\simeq 0\), the condensation \(|\tilde{\phi}|\neq 0\) occurs due to the instability \(m_{\tilde{\phi}}^{2}<0\) at \(\tilde{\phi}=0\). In this argument, however, we disregard the contribution from the gradient energy of \(\tilde{\phi}\). It is shown in Refs. [6; 54; 55] that the localized condensation of \(\tilde{\phi}\) is energetically favorable for \(\beta\lesssim\lambda_{\tilde{\phi}}^{2}\eta_{\tilde{\phi}}^{4}/(4\lambda_{\phi}\eta_{\phi}^{4})\), including the gradient energy, under the approximation that \(\phi(r)\simeq r^{n}\) around \(r=0\) and that the back-reaction of \(\tilde{\phi}\) on \(\phi\) is negligible. As a result, the conditions to form a bosonic superconducting string without the effects of the \(\widetilde{U}(1)\) gauge field are summarized as \[\frac{\lambda_{\tilde{\phi}}}{2}\frac{\eta_{\tilde{\phi}}^{2}}{\eta_{\phi}^{2}}<\beta\lesssim\frac{1}{4}\frac{\lambda_{\tilde{\phi}}^{2}}{\lambda_{\phi}}\frac{\eta_{\tilde{\phi}}^{4}}{\eta_{\phi}^{4}}. \tag{36}\]

In this class of models, the cosmic string can carry a current flowing along the string and exhibit superconductivity when the phase of \(\tilde{\phi}\) has a non-trivial \(z\)-dependence. For a static straight string, the general ansatz for \(\tilde{\phi}\) and \(\tilde{A}_{\mu}\) is given by [56] \[\tilde{\phi}=\tilde{\phi}_{r}(r)e^{-is(r)\alpha(z)},\ \tilde{A}_{\mu}=-\frac{1}{g}\alpha(z)\partial_{\mu}s(r), \tag{37}\] where \(s\) and \(\alpha\) are real functions. This parameterization is not a mere gauge transformation of \(\tilde{\phi}=\tilde{\phi}_{r}\) and \(\tilde{A}_{\mu}=0\) unless \(\alpha(z)\) is independent of \(z\); therefore, it describes a real physical excitation. This ansatz is equivalent, up to a gauge transformation, to the following expression: \[\tilde{\phi}=\tilde{\phi}_{r}(r),\ \tilde{A}_{\mu}=\frac{1}{g}s(r)\partial_{\mu}\alpha(z). \tag{38}\] With Eq. (38), one obtains the equation of motion of \(\alpha(z)\) from the Lagrangian density Eq. (34), \[r\,\partial_{r}s(r)\,\partial_{z}^{2}\alpha(z)=0\,. \tag{39}\] When \(s(r)\) is not independent of \(r\), the static solution of this equation is simply given by \(\alpha(z)=\omega z\), where \(\omega\) is a constant. With \(\phi\) and \(A_{\mu}\) following the same ansatz as the ANO string in Eq.
(4), the equations of motion of \(\phi_{r},\ A_{\theta},\ \tilde{\phi}_{r}\), and \(s\) are given by \[\begin{split}&\frac{\partial^{2}\phi_{r}}{\partial r^{2}}+\frac{1}{r}\frac{\partial\phi_{r}}{\partial r}-\frac{e^{2}}{r^{2}}\phi_{r}\left(A_{\theta}-\frac{n}{e}\right)^{2}-\frac{1}{2}\lambda_{\phi}\phi_{r}(\phi_{r}^{2}-\eta_{\phi}^{2})-\beta|\tilde{\phi}|^{2}\phi_{r}=0,\\ &\frac{\partial^{2}A_{\theta}}{\partial r^{2}}-\frac{1}{r}\frac{\partial A_{\theta}}{\partial r}-2e^{2}\phi_{r}^{2}\left(A_{\theta}-\frac{n}{e}\right)=0,\\ &\frac{\partial^{2}\tilde{\phi}_{r}}{\partial r^{2}}+\frac{1}{r}\frac{\partial\tilde{\phi}_{r}}{\partial r}-\tilde{s}^{2}\tilde{\phi}_{r}-\frac{1}{2}\lambda_{\tilde{\phi}}\tilde{\phi}_{r}(\tilde{\phi}_{r}^{2}-\eta_{\tilde{\phi}}^{2})-\beta|\phi|^{2}\tilde{\phi}_{r}=0,\\ &\frac{\partial^{2}\tilde{s}}{\partial r^{2}}+\frac{1}{r}\frac{\partial\tilde{s}}{\partial r}-2g^{2}\tilde{s}\tilde{\phi}_{r}^{2}=0.\end{split} \tag{40}\] Here, we use the normalization \(\tilde{s}(r)=\omega s(r)\) for convenience. The boundary conditions at infinity are determined by the finiteness of the energy and are given by \[\phi_{r}(r\to\infty)=\eta_{\phi},\ A_{\theta}(r\to\infty)=\frac{n}{e},\ \tilde{\phi}_{r}(r\to\infty)=0. \tag{41}\] The first and second conditions are the same as those of the ANO string in Eq. (5). Regularity at the origin requires \[\phi_{r}(r\to 0)=0,\ A_{\theta}(r\to 0)=0,\ \partial_{r}\tilde{\phi}_{r}(r\to 0)=0,\ \partial_{r}\tilde{s}(r\to 0)=0. \tag{42}\] As in the case of the ANO string, no exact analytic solution is known. The numerical solutions of Eqs. (40) are shown in App. A.

Let us now seek the asymptotic solution for the current-carrying superconducting string. Assuming that \(\tilde{\phi}_{r}\) falls off faster than \(\tilde{s}(r)^{2}\) at large \(r\), we drop the non-linear terms of \(\tilde{\phi}_{r}\) in the equation of motion for \(\tilde{\phi}_{r}\) in Eq. (40). Since \(\tilde{s}\) contributes to the effective mass of \(\tilde{\phi}\) but its value at infinity is not known a priori, we approximate it by \(\tilde{s}(r=0)\); Eq. (40) can then be approximated by \[\frac{\partial^{2}\tilde{\phi}_{r}}{\partial r^{2}}+\frac{1}{r}\frac{\partial\tilde{\phi}_{r}}{\partial r}-m_{\tilde{\phi},\infty}^{2}\tilde{\phi}_{r}=0,\ \ \ \ m_{\tilde{\phi},\infty}^{2}\equiv\beta\eta_{\phi}^{2}-\frac{1}{2}\lambda_{\tilde{\phi}}\eta_{\tilde{\phi}}^{2}+\tilde{s}^{2}(r=0), \tag{43}\] and its solution at infinity is given by \[\tilde{\phi}_{r}^{\rm sol}=k_{\tilde{\phi}}K_{0}(m_{\tilde{\phi},\infty}r), \tag{44}\] where \(k_{\tilde{\phi}}\) is a numerical constant. Since \(\tilde{\phi}_{r}\) is exponentially suppressed at long distance, the \(\tilde{s}\tilde{\phi}_{r}^{2}\) term in Eq. (40) can be ignored. As a result, the equation for \(\tilde{s}\) becomes the standard Laplace equation in two dimensions without angular dependence, whose solution is given by \[\tilde{s}^{\rm sol}(r)=k_{s}\ln\left(\frac{r}{\delta}\right)+k_{s0}. \tag{45}\] Here, \(k_{s}\) and \(k_{s0}\) are numerical constants, and \(\delta\) is the typical width of the string core. We confirm that the asymptotic solutions in Eqs. (44) and (45) agree with our exact numerical calculations. The two numerical constants \(k_{\tilde{\phi}}\) and \(k_{s}\) are then determined by fitting to the numerical calculations, as shown in Table 1.
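In practice, this fit is straightforward once the tail of the numerical profile is available. The following Python sketch illustrates the procedure; it is not part of the original analysis, and the tail data, the value of \(m_{\tilde{\phi},\infty}\), and the core width \(\delta\) below are placeholder assumptions standing in for the actual numerical solution.

```python
import numpy as np
from scipy.special import k0
from scipy.optimize import curve_fit

# Placeholder tail data standing in for the relaxed string profiles at large r;
# in practice these arrays would be extracted from the numerical solution.
r = np.linspace(8.0, 20.0, 60)
m_inf = 1.3                              # assumed value of m_{phi-tilde,infinity}
delta = 1.0                              # assumed core width
phi_tail = 0.9 * k0(m_inf * r)           # mock data for phi_tilde_r(r)
s_tail = 0.05 * np.log(r / delta) + 0.2  # mock data for s_tilde(r)

# Eq. (44): phi_tilde_r(r) ~ k_phit * K0(m_inf r); fit the amplitude k_phit
(k_phit,), _ = curve_fit(lambda x, k: k * k0(m_inf * x), r, phi_tail)

# Eq. (45): s_tilde(r) ~ k_s ln(r/delta) + k_s0; fit k_s and k_s0
(k_s, k_s0), _ = curve_fit(lambda x, a, b: a * np.log(x / delta) + b, r, s_tail)

print(k_phit, k_s, k_s0)  # recovers 0.9, 0.05 and 0.2 for the mock data
```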
We now briefly explain the effect of the \(\widetilde{U}(1)\) current induced by the condensation of \(\tilde{\phi}\). The total amount of the current associated with the \(\widetilde{U}(1)\) symmetry flowing along the straight string is defined by \[\widetilde{J}_{\rm tot}=\int{\rm d}^{2}x\,\widetilde{J}_{z},\ \widetilde{J}_{z}(r)\equiv-2g\tilde{s}(r)\tilde{\phi}_{r}^{2}(r). \tag{46}\] Here, \(\widetilde{J}_{z}\) is the \(z\)-component of the Noether current associated with \(\widetilde{U}(1)\). Since the \(\tilde{\phi}_{r}\) condensation is strongly localized around the string, \(\widetilde{J}_{z}(r)\) is trapped inside the string. Very interestingly, one cannot make \(\widetilde{J}_{\rm tot}\) arbitrarily large, for the following reason. The effective mass term of \(\tilde{\phi}\) defined by Eq. (35) is now modified by the non-zero \(\tilde{s}\) as \[m_{\tilde{\phi}}^{2}(r)=\beta\phi_{r}^{2}(r)-\frac{1}{2}\lambda_{\tilde{\phi}}\eta_{\tilde{\phi}}^{2}+\lambda_{\tilde{\phi}}\tilde{\phi}_{r}^{2}(r)+\tilde{s}^{2}(r). \tag{47}\] For a large value of \(\tilde{s}(r=0)\), the effective mass term at the origin becomes positive even with \(\tilde{\phi}_{r}=0\) near the string center, which results in the restoration of the \(\widetilde{U}(1)\) symmetry and no condensation of \(\tilde{\phi}\). Hence, for a large amplitude of \(\tilde{s}(r=0)\), the back-reaction on \(\tilde{\phi}\) is significant, which weakens the condensation. Consequently, \(\widetilde{J}_{\rm tot}\) begins to decrease once \(\tilde{s}(r=0)\) is large enough to have a significant back-reaction on \(\tilde{\phi}_{r}\). This phenomenon is known as current quenching [6]. On the other hand, \(\widetilde{J}_{\rm tot}\) also becomes smaller for a smaller \(\tilde{s}(r=0)\), when it is small enough that the back-reaction from \(\tilde{s}\) on \(\tilde{\phi}_{r}\) is negligible. Therefore, there exists a maximum of the total current flowing inside the string, at least in the case of the magnetic current.

Next, let us find external sources that represent the bosonic superconducting string. As in the case of the ANO string, we introduce currents \(J_{\tilde{\phi}}\) and \(\tilde{j}_{\mu}\) that reproduce the asymptotic field configurations \(\tilde{\phi}_{r}^{\rm sol}\) and \(\tilde{A}_{\mu}^{\rm sol}\left(\Leftrightarrow\tilde{s}^{\rm sol}\right)\) in the linear theory, \[\mathcal{L}=\mathcal{L}_{\rm AH}+\frac{1}{2}(\partial_{\mu}\tilde{\phi}_{r})^{2}-\frac{1}{2}m_{\tilde{\phi},\infty}^{2}\tilde{\phi}_{r}^{2}-\frac{1}{4}\tilde{F}^{\mu\nu}\tilde{F}_{\mu\nu}-J_{\tilde{\phi}}\tilde{\phi}_{r}-\tilde{j}_{\mu}\tilde{A}^{\mu}. \tag{48}\] In this expression, \(\mathcal{L}_{\rm AH}\) is defined by Eq. (15). Substituting the asymptotic solutions into the equations of motion derived from Eq. (48), one can read off the corresponding sources as \[\begin{split}& J_{\tilde{\phi}}=(\nabla^{2}-m_{\tilde{\phi},\infty}^{2})\tilde{\phi}_{r}^{\rm sol}(\mathbf{x})=-2\pi k_{\tilde{\phi}}\delta^{(2)}(\mathbf{x}),\\ &\tilde{j}_{z}=-\frac{1}{g}\nabla^{2}\tilde{s}^{\rm sol}(\mathbf{x})=2\pi k_{A}\delta^{(2)}(\mathbf{x}).\end{split} \tag{49}\] Here, \(k_{A}=k_{s}/g\) is a numerical constant. As in the case of the ANO string, the non-trivial configuration of \(\tilde{\phi}_{r}\) can be represented by \(J_{\tilde{\phi}}\), regarded as a scalar monopole with charge \(-2\pi k_{\tilde{\phi}}\). Similarly, the non-trivial configuration of the gauge field \(\tilde{A}_{z}\) is sourced by the static magnetic current flowing along the string. One should note that \(k_{A}\) is related to the total current by \(\widetilde{J}_{\rm tot}=2\pi k_{A}\), as is evident from Gauss's law. Therefore, unlike the other numerical constants \(k_{\phi},\ k_{e}\), and \(k_{\tilde{\phi}}\), there is a strong restriction on \(k_{A}\) due to the current quenching effect, at least for the magnetic current.
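The quenching mechanism can be caricatured with a toy model. The following Python sketch is not derived from Eqs. (40): the Gaussian condensate profile and the suppression factor mimicking Eq. (47) are loud assumptions, introduced only to show that \(\widetilde{J}_{\rm tot}\) first grows and then decreases as \(\tilde{s}(r=0)\) is raised.

```python
import numpy as np

def J_tot(s0, g=1.0):
    """Toy evaluation of the total current, Eq. (46), for a model condensate.

    The profile shapes below are illustrative assumptions, not solutions
    of Eqs. (40)."""
    r = np.linspace(0.0, 20.0, 4000)
    # Condensate amplitude suppressed as s0^2 approaches the (assumed)
    # negative effective mass-squared driving the condensation, cf. Eq. (47).
    amp2 = max(0.5 - s0**2, 0.0)
    phi2 = amp2 * np.exp(-r**2)        # localized |phi_tilde|^2
    s = s0 + 0.05 * np.log1p(r)        # slow logarithmic growth of s_tilde
    return np.trapz(-2.0 * g * s * phi2 * 2.0 * np.pi * r, r)

for s0 in (0.05, 0.2, 0.4, 0.7):
    print(s0, J_tot(s0))  # |J_tot| grows with s0 at first, then quenches
```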
## 3 Cosmic string interactions: Analytic studies

In this section, we investigate the interaction energy between two strings using the method proposed in Ref. [39]. In the following analysis, we assume that the two strings are straight and static, separated by a fixed distance, and that each string possesses a single winding number, \(n=\pm 1\). We will argue that the dominant contribution comes from the lightest field in the underlying theory.

### Local strings interactions

In this subsection, we compute the interaction energy of local (ANO) strings. When two strings are widely separated, the effective description of ANO strings with the external sources in Eq. (15) is applicable. We assume that the whole field configuration can be approximated by the superposition of the two sources, \[\begin{split}& J_{\sigma}=-2\pi k_{\phi 1}\delta^{(2)}(\mathbf{x}-\mathbf{x}_{1})-2\pi k_{\phi 2}\delta^{(2)}(\mathbf{x}-\mathbf{x}_{2}),\\ & j_{i}=2\pi\frac{k_{e1}}{m_{e}}\epsilon_{ji}\nabla_{j}\delta^{(2)}(\mathbf{x}-\mathbf{x}_{1})+2\pi\frac{k_{e2}}{m_{e}}\epsilon_{ji}\nabla_{j}\delta^{(2)}(\mathbf{x}-\mathbf{x}_{2}).\end{split} \tag{3.1}\] Here, we introduce the two-dimensional Cartesian coordinate \(\mathbf{x}=(x^{1},x^{2})\), which parameterizes the plane perpendicular to the string axes. \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are the positions of the two string axes, fixed by hand, while \(k_{\phi i}\) and \(k_{ei}\) (\(i=1,2\)) are the charges of each string. The effective Lagrangian density of this system is defined by Eq. (15) with the above external sources. The equations of motion obtained from the Lagrangian density in Eq. (15) are given by \[(\Box+m_{\phi}^{2})\sigma=-J_{\sigma},\quad(\Box+m_{e}^{2})U_{\mu}=j_{\mu}. \tag{3.2}\] One can solve the above differential equations by the standard Green's function method: \[\begin{split}&\sigma^{\text{sol}}(x)=\int\mathrm{d}^{4}x^{\prime}\,G(x-x^{\prime};m_{\phi})J_{\sigma}(x^{\prime}),\\ & U_{\mu}^{\text{sol}}(x)=-\int\mathrm{d}^{4}x^{\prime}\,G(x-x^{\prime};m_{e})j_{\mu}(x^{\prime}),\\ & G(x-x^{\prime};M)\equiv\int\frac{d^{4}p}{(2\pi)^{4}}\frac{1}{p^{2}-M^{2}}e^{ip(x-x^{\prime})}\,.\end{split} \tag{3.3}\] Substituting the above solutions into the original Lagrangian in Eq. (15), we obtain \[\mathcal{L}=-\frac{1}{2}J_{\sigma}\sigma^{\text{sol}}-\frac{1}{2}U_{\mu}^{\text{sol}}j^{\mu}. \tag{3.4}\] The interaction energy is then evaluated as \[E_{\text{int}}=\frac{1}{2}\int dz\int d^{2}x\int d^{2}x^{\prime}\left(-J_{\sigma}(x)D_{\sigma}^{(2)}(x-x^{\prime})J_{\sigma}(x^{\prime})+j_{\mu}(x)D_{U}^{(2)\mu\nu}(x-x^{\prime})j_{\nu}(x^{\prime})\right). \tag{3.5}\] In this calculation, we have used the fact that \(D_{\sigma}^{(2)}(x-x^{\prime})\) and \(D_{U}^{(2)\mu\nu}(x-x^{\prime})\) are the two-dimensional Euclidean propagators defined by \[D_{\sigma}^{(2)}(x-x^{\prime})=\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}\frac{1}{\mathbf{p}^{2}+m_{\phi}^{2}}e^{i\mathbf{p}\cdot(\mathbf{x}-\mathbf{x}^{\prime})}=\frac{1}{2\pi}K_{0}(m_{\phi}|\mathbf{x}-\mathbf{x}^{\prime}|), \tag{3.6}\] \[D_{U}^{(2)\mu\nu}(x-x^{\prime})=\frac{\eta^{\mu\nu}}{2\pi}K_{0}(m_{e}|\mathbf{x}-\mathbf{x}^{\prime}|). \tag{3.7}\] Essentially, \(K_{0}(M|\mathbf{x}-\mathbf{x}^{\prime}|)\) is the two-dimensional Yukawa potential with mass \(M\). Substituting the superposed external sources in Eq. (3.1) into Eq.
(3.5), we find \[E_{\text{int}}=\int dz\,2\pi\left[k_{e1}k_{e2}K_{0}(m_{e}d)-k_{\phi 1}k_{\phi 2}K_{0}(m_{\phi}d)\right],\ d\equiv|\mathbf{x}_{1}-\mathbf{x}_{2}|. \tag{3.8}\] This result is in agreement with previous studies [38, 39]. The first term of Eq. (3.8) represents the interaction energy coming from the massive gauge field associated with the presence of the magnetic flux of the ANO strings, while the second term is the contribution from the massive scalar field. The signs in Eq. (3.8) simply reflect the fact that the gauge interaction through the magnetic flux is repulsive for winding numbers with the same sign, while the scalar interaction is always attractive, since the charge of the scalar field always takes the same sign. Due to the asymptotic form of \(K_{0}(x)\), the interaction energy is exponentially suppressed on length scales longer than the inverse mass of the underlying field. For \(2e^{2}/\lambda_{\phi}>1\) (\(m_{\phi}<m_{e}\)), the scalar attraction is always dominant at \(d\to\infty\). Conversely, if \(2e^{2}/\lambda_{\phi}<1\) (\(m_{\phi}>m_{e}\)), the repulsion sourced by the gauge field is dominant at \(d\to\infty\) if the winding numbers of the two strings are opposite. These arguments are essentially independent of the precise values of the charges \(k_{\phi}\) and \(k_{e}\), as long as they are finite. Therefore, the interaction energy of the strings at \(d\to\infty\) is easily revealed by applying the point source formalism. Once the values of \(\lambda_{\phi}\) and \(e^{2}\) are specified, \(k_{\phi}\) and \(k_{e}\) can be calculated explicitly, as explained in Sec. 2.1. In this case, as in the original analysis [39], we can discuss which interactions are dominant at any \(d\). We will do this in the next section and discuss the validity of the point source formalism by comparing the results obtained by the non-linear numerical calculations with the analytic ones.

### Global strings interactions

In this subsection, we estimate the interaction energy of two straight and static global strings. In the following discussion, we first focus on the interaction energy from the topological charge. The effective Lagrangian density of the NG-boson is given by Eq. (29) through its dual field. If the two global strings are far apart, the external sources can be approximated by a superposition of the respective sources, \[J_{B}^{03}=-J_{B}^{30}=2\pi n_{1}\eta_{\phi}\delta^{(2)}(\mathbf{x}-\mathbf{x}_{1})+2\pi n_{2}\eta_{\phi}\delta^{(2)}(\mathbf{x}-\mathbf{x}_{2}), \tag{3.9}\] where \(n_{1}\) and \(n_{2}\) are the winding numbers of the global strings whose axes are placed at \(\mathbf{x}=\mathbf{x}_{1}\) and \(\mathbf{x}=\mathbf{x}_{2}\), respectively. In the region sufficiently far from the string axes, one can use the approximation \(|\phi|\simeq\eta_{\phi}\). Then one can solve the equation of motion of the dual field \(B_{\mu\nu}\) in Eq. (30) by the standard Green's function method as \[B^{\mu\nu}(x)=\int\mathrm{d}^{4}x^{\prime}\,G(x-x^{\prime};\epsilon)J_{B}^{\mu\nu}(x^{\prime}). \tag{3.10}\] Here, we fix a gauge by imposing \(\partial_{\mu}B^{\mu\nu}=0\), which is analogous to the Coulomb gauge in classical electrodynamics.3 We introduce an infinitesimal mass \(\epsilon\) for \(B_{\mu\nu}\) to avoid the infrared divergence, which will be taken to zero at the end of the calculation.

Footnote 3: For a detailed discussion of the gauge degrees of freedom and their fixing for the \(B_{\mu\nu}\) field, see e.g. Ref. [57].
Then the interaction energy becomes \[E_{\rm int}[B]=\frac{1}{2}\int dz\int d^{2}x\int d^{2}x^{\prime}\left(J_{B}^{\mu\nu}(x)D_{B}^{(2)}(x-x^{\prime})J_{B\mu\nu}(x^{\prime})\right), \tag{3.11}\] where \(D_{B}^{(2)}\) is the two-dimensional massless propagator, \[D_{B}^{(2)}(x-x^{\prime})=\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}\frac{1}{\mathbf{p}^{2}+\epsilon^{2}}e^{i\mathbf{p}\cdot(\mathbf{x}-\mathbf{x}^{\prime})}=\frac{1}{2\pi}K_{0}(\epsilon|\mathbf{x}-\mathbf{x}^{\prime}|). \tag{3.12}\] The interaction energy is then evaluated as \[E_{\rm int}[B]=\int dz\,4\pi\eta_{\phi}^{2}n_{1}n_{2}K_{0}\left(\epsilon d\right)\simeq-\int dz\,4\pi\eta_{\phi}^{2}n_{1}n_{2}\ln\left(\epsilon d\right). \tag{3.13}\] The approximation of the modified Bessel function, \(K_{0}(\epsilon d)\simeq-\ln(\epsilon d)\) for \(\epsilon d\ll 1\), is used in the last step. We redefine the origin of the interaction energy at \(d=m_{\phi}^{-1}\). This yields \[E_{\rm int}[B]\equiv-\int dz\,4\pi\eta_{\phi}^{2}n_{1}n_{2}\left(\ln\left(\epsilon d\right)-\ln\left(\epsilon m_{\phi}^{-1}\right)\right)=-\int dz\,4\pi\eta_{\phi}^{2}n_{1}n_{2}\ln\left(m_{\phi}d\right). \tag{3.14}\] The above logarithmic dependence at long distance, \(d\to\infty\), is in agreement with previous studies [21, 38]. The massless two-index antisymmetric tensor field leads to a long-range force. The same-sign topological charges of two global strings can be regarded as equal magnetic charges of axions, which lead to a repulsive force.

Let us next estimate the interaction energy of the massive mode. We assume that the two strings are widely separated, such that the external source can be approximated by the following form, \[J_{G}=\sqrt{2}\eta_{\phi}\left(\frac{n_{1}^{2}}{(\mathbf{x}-\mathbf{x}_{1})^{2}}+\frac{n_{2}^{2}}{(\mathbf{x}-\mathbf{x}_{2})^{2}}\right). \tag{3.15}\] Then we obtain the interaction energy of the \(\sigma\) field as \[E_{\rm int}[\sigma]=-\int dz\int d^{2}x\frac{2}{\lambda_{\phi}}\frac{n_{1}^{2}}{(\mathbf{x}-\mathbf{x}_{1})^{2}}\frac{n_{2}^{2}}{(\mathbf{x}-\mathbf{x}_{2})^{2}}. \tag{3.16}\] There exist divergent contributions at the two string axes, \(\mathbf{x}=\mathbf{x}_{i}\) (\(i=1,2\)). Obviously, the effective description by external sources cannot be justified at these points. Therefore, we introduce a cutoff of the same order as the string thickness, \(\delta=1/(\sqrt{\lambda_{\phi}}\,\eta_{\phi})\). One can then evaluate the interaction energy as \[E_{\rm int}[\sigma]\simeq-\int\mathrm{d}z\frac{2\pi}{\lambda_{\phi}}n_{1}^{2}n_{2}^{2}\frac{1}{d^{2}}\ln\left(\frac{d^{2}}{\delta^{2}}\right),\ d\equiv|\mathbf{x}_{1}-\mathbf{x}_{2}|. \tag{3.17}\] This interaction energy is suppressed by \(1/d^{2}\) in the long-distance limit. Thus, the interaction energy is dominated by the external source associated with the topological charge in (3.14) at \(d\to\infty\).

### Bosonic superconducting strings interactions

In this subsection, we compute the interaction energy of bosonic superconducting strings. As discussed in Sec. 2.3, there are four external sources corresponding to the non-trivial field configurations in the \(U(1)\times\widetilde{U}(1)\) model. Since the asymptotic behavior of the string configurations associated with the spontaneously broken \(U(1)\) is exactly the same as for the ANO string, the interaction energy originating from these fields is given by Eq. (3.8).
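As an aside, Eq. (3.8) is simple enough to evaluate directly. The short Python sketch below does so; the charges and masses are placeholder order-unity values, not the fitted constants of Table 1.

```python
import numpy as np
from scipy.special import k0

def E_int_ANO(d, ke1, ke2, kphi1, kphi2, m_e, m_phi):
    """Interaction energy per unit length of two ANO strings, Eq. (3.8)."""
    return 2.0 * np.pi * (ke1 * ke2 * k0(m_e * d)
                          - kphi1 * kphi2 * k0(m_phi * d))

d = np.linspace(0.5, 10.0, 100)
# Placeholder charges/masses; whichever K0 carries the smaller mass decays
# more slowly and therefore dominates the interaction at large d.
E = E_int_ANO(d, 1.0, 1.0, 1.0, 1.0, m_e=1.0, m_phi=1.4)
```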
Here, we estimate the interaction energy sourced by the \(\widetilde{U}(1)\) Higgs field and by the current trapped inside the string. In the following discussion, we omit the contributions from the \(U(1)\) sector, but they are always implicitly present. An effective description of the widely separated bosonic superconducting strings is given by Eq. (48) with the following superimposed external sources, \[\begin{split}& J_{\tilde{\phi}}=-2\pi k_{\tilde{\phi}1}\delta^{(2)}(\mathbf{x}-\mathbf{x}_{1})-2\pi k_{\tilde{\phi}2}\delta^{(2)}(\mathbf{x}-\mathbf{x}_{2}),\\ &\tilde{j}_{z}=-2\pi k_{A1}\delta^{(2)}(\mathbf{x}-\mathbf{x}_{1})-2\pi k_{A2}\delta^{(2)}(\mathbf{x}-\mathbf{x}_{2}).\end{split} \tag{3.18}\] The calculation of the interaction energy is straightforward using the Green's function method (see, e.g., Sec. 3.1 for detailed computations). The resulting interaction energy from the \(\widetilde{U}(1)\) sector is expressed as \[E_{\rm int}[\tilde{\phi},\tilde{s}]=\int dz\,2\pi\left[-k_{\tilde{\phi}1}k_{\tilde{\phi}2}K_{0}(m_{\tilde{\phi},\infty}d)+k_{A1}k_{A2}\ln\left(\frac{d}{\delta}\right)\right],\quad d=|\mathbf{x}_{1}-\mathbf{x}_{2}|. \tag{3.19}\] Here, \(\delta\) is the typical width of the strings. The first term originates from the localized condensate scalar field \(\tilde{\phi}\), while the second term comes from the current trapped inside the string. The constant \(k_{\tilde{\phi}}\) does not depend on the sign of the winding number or the direction of the current; regardless of the directions of the two strings, \(k_{\tilde{\phi}1}\) and \(k_{\tilde{\phi}2}\) take the same sign. This implies that the first term is always attractive. The second term, on the other hand, depends on the directions of the static currents: if the directions of the two currents are the same (opposite), it gives an attractive (repulsive) force. It is worth noting that the first term and the contributions from the \(U(1)\) sector are exponentially suppressed on their respective mass scales, whereas the second, logarithmic term always dominates at infinity. This stems from the fact that the mass of \(\tilde{\phi}_{r}\) is generically non-vanishing at infinity, while the \(\widetilde{U}(1)\) gauge field is massless, because the condensate of \(\tilde{\phi}_{r}\) is localized inside the string.

Figure 1 shows the dependence of the interaction energy of bosonic superconducting cosmic strings, the sum of Eq. (3.8) and Eq. (3.19), on the separation distance for the benchmark point A specified in App. A. We use the normalized quantities \(\bar{d}\) and \(\bar{\mathcal{E}}_{\rm int}\) in the figure, defined in App. A. The charges \(k_{\phi},\ k_{e},\ k_{\tilde{\phi}}\), and \(k_{A}\) are shown in Table 1. As shown in the right panel of the figure, the logarithmic dependence of \(\bar{\mathcal{E}}_{\rm int}\) on \(\bar{d}\) is observed in the long-distance limit. \(k_{A}\) is the charge associated with the \(\widetilde{U}(1)\) current trapped inside the string and is found to be one or two orders of magnitude smaller than the other charges due to current quenching effects, as explained in Sec. 2.3. Such tiny values of \(k_{A}\) have a small effect on \(E_{\rm int}\) compared to the other contributions when \(d\) is comparable to or shorter than the inverse mass scales of the fields in the underlying theory. Thus, the effect of non-zero currents is only important in the long-distance limit. This is a generic feature of bosonic superconducting strings and can also be observed for other parameter choices.
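The interplay of the two terms in Eq. (3.19) is easy to see numerically. In the following hedged Python sketch, the constants are placeholders, with \(k_{A}\sim 10^{-2}\) mimicking the quenched charges of Table 1.

```python
import numpy as np
from scipy.special import k0

def E_int_U1t(d, kp1, kp2, kA1, kA2, m_inf, delta=1.0):
    """U(1)-tilde contribution to the interaction energy, Eq. (3.19)."""
    return 2.0 * np.pi * (-kp1 * kp2 * k0(m_inf * d)
                          + kA1 * kA2 * np.log(d / delta))

d = np.linspace(2.0, 40.0, 200)
E = E_int_U1t(d, 1.0, 1.0, 0.02, 0.02, m_inf=1.5)
# The K0 piece dies off exponentially, so even the tiny k_A log term
# eventually dominates at sufficiently large separation.
```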
Figure 1: The dependence of the interaction energy of bosonic superconducting cosmic strings on the separation distance \(\bar{d}\) is shown in the region \(0<\bar{d}<8\) (left panel). The point source formalism for benchmark point A given in Table 1 is used. Also shown is a plot of \(\bar{\mathcal{E}}_{\rm int}\) in the region \(8<\bar{d}<20\) using the same benchmark point (right panel).

## 4 Cosmic string interactions: Numerical studies

In this section, the interaction energy of static and straight strings is evaluated numerically for arbitrary separation distances. In particular, the gradient flow method is used. This method was recently employed in Ref. [41] to compute the interaction energy of two conventional local strings and of two local strings with a Coleman-Weinberg potential. The action for the numerical simulations is the same as Eq. (34): \[\begin{split}& S=\int\mathrm{d}^{4}x\left[-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+|D_{\mu}\phi|^{2}-\frac{1}{4}\widetilde{F}^{\mu\nu}\widetilde{F}_{\mu\nu}+|\widetilde{D}_{\mu}\tilde{\phi}|^{2}-V(\phi,\tilde{\phi})\right],\\ & V(\phi,\tilde{\phi})=\frac{\lambda_{\phi}}{4}\left(|\phi|^{2}-\eta_{\phi}^{2}\right)^{2}+\frac{\lambda_{\tilde{\phi}}}{4}\left(|\tilde{\phi}|^{2}-\eta_{\tilde{\phi}}^{2}\right)^{2}+\beta|\phi|^{2}|\tilde{\phi}|^{2}.\end{split} \tag{4.1}\] If one neglects the \(\tilde{\phi}\) and \(\tilde{A}_{\mu}\) part by setting \(\beta=0\) and \(g=0\), this action represents local strings, and if one further neglects the \(A_{\mu}\) part by setting \(e=0\), it represents global strings. We use the normalization described in App. A.1 for local strings and superconducting strings; for global strings, we use the normalization described in App. A.2.

We prepare a two-dimensional lattice plane and place the two string axes at \((\bar{x},\bar{y})=(\pm\bar{d}/2,0)\), where \((\bar{x},\bar{y})\) are the two-dimensional Cartesian coordinates. Here, the position of a string axis is defined as the point where the value of the scalar field \(\bar{\phi}\) vanishes, that is, \(|\bar{\phi}|=0\). In the following analysis, we focus on the case where the two strings have unit winding numbers with the same sign. The results for two strings with opposite winding numbers are discussed with the point source formalism in Sec. 3. Analysis of the interaction energy between cosmic strings with winding numbers greater than unity is beyond the scope of the present paper. The number of lattice sites is \(600\times 600\) and the box length is \(\bar{L}=30\) (corresponding to a lattice spacing of \(\bar{a}=0.05\)), unless otherwise stated. The details of the simulation setup and the numerical recipe are explained in App. B.

Before presenting the numerical results, let us explain how the axes of the two strings are fixed in the numerical simulation. If we do not fix the angular mode of \(\bar{\phi}\) but fix the string axes by simply imposing \(|\bar{\phi}|=0\) at \((\bar{x},\bar{y})=(\pm\bar{d}/2,0)\), the numerical simulation shows four points with \(\bar{\phi}\simeq 0\) for large couplings \(\bar{\lambda}_{\phi}\gg 2\). Two of these points are located on the initial string axes \((\bar{x},\bar{y})=(\pm\bar{d}/2,0)\), but the other two should not appear. Therefore, only imposing the condition \(|\bar{\phi}|=0\) on the initial string axes may not guarantee a fixed separation distance \(\bar{d}\).
We thus fix the angular dependence of \(\bar{\phi}\) in the whole region of the simulation box, in addition to the condition \(|\bar{\phi}|=0\) at \((\bar{x},\bar{y})=(\pm\bar{d}/2,0)\), as in Ref. [40]. In this case, we confirm that the positions of the two string axes are maintained in the numerical simulation, and hence the fixed separation distance \(\bar{d}\) is well-defined. (See App. B for the detailed procedure used to fix the angular dependence of \(\bar{\phi}\).) For this reason, we fix the angular dependence of \(\bar{\phi}\) over the entire simulation box in the following analysis.

### Local strings

In this subsection, we present numerical results for the interaction energy of two local strings, originally calculated using the variational method in Ref. [40]. Figure 2 shows the field configurations and the energy density of two local strings for \(\bar{\lambda}_{\phi}=2\) with a fixed separation, \(\bar{d}=2\), on the \(\bar{y}=0\) slice, while Fig. 3 displays surface plots for the same parameter set. We confirm that the configurations of \(\bar{\phi}\) and \(\bar{A}_{i}\) can be approximated by \(\bar{\phi}=\bar{\phi}_{+}\bar{\phi}_{-}\) and \(\bar{A}_{i}=\bar{A}_{i+}+\bar{A}_{i-}\) for sufficiently large \(\bar{d}\), where \(\bar{\phi}_{\pm}\) and \(\bar{A}_{i\pm}\) (\(i=1,2\)) are the \(n=1\) local string solutions whose axes are fixed at \((\bar{x},\bar{y})=(\pm\bar{d}/2,0)\). It is also confirmed that the \(n=2\) solution of local strings is reproduced in the limit \(\bar{d}\to 0\), as it should be.

Figure 4 displays the dependence of the energy density on \(\bar{d}\) for different values of \(\bar{\lambda}_{\phi}\). From this figure, it is easy to see that the dependence of \(\bar{\mathcal{E}}_{\rm int}\) on \(\bar{d}\) can be classified into three cases: \(\bar{\lambda}_{\phi}>2\), \(\bar{\lambda}_{\phi}<2\), and \(\bar{\lambda}_{\phi}=2\). In the special case \(\bar{\lambda}_{\phi}=2\), \(\bar{\mathcal{E}}_{\rm int}\) does not depend on \(\bar{d}\). For \(\bar{\lambda}_{\phi}>2\), \(\bar{\mathcal{E}}_{\rm int}\) becomes smaller as \(\bar{d}\) is increased. In this case, a stable solution is obtained in the long-distance limit, \(\bar{d}\to\infty\), but the \(n=2\) ANO solution is unstable against perturbations of \(\bar{d}\). Conversely, for \(\bar{\lambda}_{\phi}<2\), \(\bar{\mathcal{E}}_{\rm int}\) becomes smaller as \(\bar{d}\) decreases. Therefore, the \(n=2\) ANO solution is stable for \(\bar{\lambda}_{\phi}<2\), in contrast to the case \(\bar{\lambda}_{\phi}>2\). It should be noted that, since the field configuration with \(n=2\) is a static solution to the equations of motion, the first derivative of \(\bar{\mathcal{E}}_{\rm int}\) with respect to \(\bar{d}\) must vanish at \(\bar{d}=0\) for arbitrary \(\bar{\lambda}_{\phi}\). The stability of local strings is generically difficult to capture with an analytic approach when \(\bar{d}\) is small, except for \(\bar{\lambda}_{\phi}=2\). However, as explicitly shown in Ref. [41], an analytic estimate is possible by using perturbations around the BPS state if \(\bar{\lambda}_{\phi}\) is close to the BPS limit, \(\bar{\lambda}_{\phi}\simeq 2\).

Finally, in Fig. 5, the results obtained by our full numerical calculation based on the gradient flow method are compared with those obtained by the point source formalism and the variational approach [40]. In the figure, we focus on two benchmark points used in Ref. [40]. We see that our results are in very good agreement with those of Ref. [40].
For short distances, where \(\bar{d}\) is smaller than the inverse of the lightest mass among the scalar and gauge bosons, the results of the point source formalism deviate from the full numerical calculation. In particular, analyses relying on the point source formalism indicate the existence of a saddle point or a minimum of \(\bar{\mathcal{E}}_{\rm int}\) with respect to \(\bar{d}\), suggesting a non-trivial phase structure of the local string. Our full numerical calculations, however, did not find such a non-trivial phase structure. This is because an effective description of the local string by the point source formalism cannot be justified for small \(\bar{d}\).

Figure 4: The dependence of the interaction energy density, \(\bar{\mathcal{E}}_{\rm int}\), on the distance, \(\bar{d}\), is shown for various couplings, \(\bar{\lambda}_{\phi}\), in the case of local strings.

### Global strings

In this subsection, we present numerical results for the interaction energy of two straight global strings. A numerical study of the interaction energy of two global strings is given in Ref. [58]. Figure 6 shows the field configuration of \(|\bar{\phi}|\) and the energy density of two global strings on the \(\bar{y}=0\) slice for \(\bar{\lambda}_{\phi}=4\) with the fixed separation \(\bar{d}=2\), while Fig. 7 is a surface plot for the same parameter set. We have numerically confirmed that the whole field configuration cannot be well approximated by the superposition ansatz \(\bar{\phi}=\bar{\phi}_{+}\bar{\phi}_{-}\), where \(\bar{\phi}_{\pm}\) is the global string configuration with unit winding number. Their axes are placed at \((\bar{x},\bar{y})=(\pm\bar{d}/2,0)\), similarly to the local string case. The interaction energy density \(\mathcal{E}_{\rm int}(\bar{x},\bar{y})\) has a kink around the string axes because the positions of the string axes are fixed by hand. Figure 8 shows the dependence of \(\mathcal{E}_{\rm int}\) on \(\bar{d}\) in the case of global strings with \(\bar{\lambda}_{\phi}=4\).

Figure 5: The left figure shows the results for \(\bar{\lambda}_{\phi}=3.38\) and the right figure shows those for \(\bar{\lambda}_{\phi}=0.98\). The interaction energies estimated by the full numerical calculation using the gradient flow method (black curve), the point source formalism (magenta dotted curve), and the variational approach in Ref. [40] (blue dashed curve) are shown.

In the right panel of the figure, for clarity, the origin of the vacuum energy of the local string is shifted such that its \(\bar{\mathcal{E}}_{\rm int}\) at \(\bar{d}=0\) coincides with that of the global string. The logarithmic dependence of \(\bar{\mathcal{E}}_{\rm int}\) on \(\bar{d}\) is observed at large \(\bar{d}\), which is in good agreement with the results of the point source formalism and the previous study [58]. When the distance from the string axis is smaller than \(m_{e}^{-1}\), the angular mode of \(\bar{\phi}\) is not canceled by the gauge field, so the behavior of the local string is almost identical to that of the global string. Thus, when \(d\lesssim m_{e}^{-1}\), the behavior of \(\bar{\mathcal{E}}_{\rm int}\) of the local string is the same as that of the global string. This feature can be seen from the right panel of Fig. 8. Therefore, the phase structure of the interaction energy of the global string can be understood as the limit \(e\to 0\) (\(m_{e}^{-1}\to\infty\)) of the local string.
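This reduction can be checked directly from the asymptotics of \(K_{0}\). A minimal, purely illustrative Python sketch:

```python
import numpy as np
from scipy.special import k0

m_e = 1.0
d = np.linspace(0.05, 5.0, 200)
screened = k0(m_e * d)          # massive gauge boson of the local string
unscreened = -np.log(m_e * d)   # massless NG boson of the global string
# For m_e*d << 1, K0(x) ~ -ln(x) up to an O(1) additive constant, so the two
# potentials agree inside the penetration length 1/m_e and differ only by the
# exponential screening outside it.
```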
Figure 7: Surface plots of the scalar field magnitude \(|\bar{\phi}(\bar{x},\bar{y})|\) (left) and the energy density \(\bar{\mathcal{E}}_{\rm int}(\bar{x},\bar{y})\) (right) for two static global string configurations with \(\bar{d}=2\) and \(\bar{\lambda}_{\phi}=4\) are shown.

Figure 8: The dependence of the interaction energy, \(\bar{\mathcal{E}}_{\rm int}\), on the fixed distance, \(\bar{d}\), is shown in the case of two global strings for \(\bar{\lambda}_{\phi}=4\) (left panel). Also shown is the comparison with local strings with the same \(\bar{\lambda}_{\phi}\) and \(e=1\) (right panel).

### Bosonic superconducting strings without current

In this subsection, we show our numerical results for the interaction energy of two bosonic superconducting strings without current. It can be seen that the introduction of an additional scalar field, whose condensate is localized inside the string, has a dramatic effect on the interaction energy in a certain parameter region. Figure 9 shows the field configurations and the energy density of two superconducting strings on the \(\bar{y}=0\) slice with fixed separation, \(\bar{d}=2.1\), for the benchmark point with \(\bar{\lambda}_{\phi}=8,\ \bar{\lambda}_{\tilde{\phi}}=80,\ \bar{\beta}=24\) and \(\bar{\eta}_{\tilde{\phi}}=0.55\). Figure 10 is a surface plot for the same parameter set. The whole field configurations can be approximated by the superposition ansatz \(\bar{\phi}=\bar{\phi}_{+}\bar{\phi}_{-},\ \bar{A}_{i}=\bar{A}_{i+}+\bar{A}_{i-}\), and \(\bar{\tilde{\phi}}_{r}=\bar{\tilde{\phi}}_{r+}+\bar{\tilde{\phi}}_{r-}\), where \(\bar{\phi}_{\pm},\ \bar{A}_{i\pm}\), and \(\bar{\tilde{\phi}}_{r\pm}\) are the configurations of each string whose axis is placed at \((\bar{x},\bar{y})=(\pm\bar{d}/2,0)\), respectively. The same benchmark point is used in Figs. 11 and 12 for \(\bar{d}=2\). By comparing the configurations at \(\bar{d}=2.1\) and \(\bar{d}=2\), we can see that the coalescence of \(\bar{\tilde{\phi}}\) happens at \(\bar{d}=2\), where the magnitude of \(\bar{\tilde{\phi}}\) grows and the configurations of the two strings attach to each other. This coalescence cannot be captured by the point source formalism and leads to a characteristic phase structure of the interaction energy, as we will discuss below.

Figure 13 shows the dependence of \(\bar{\mathcal{E}}_{\rm int}\) on \(\bar{d}\) obtained by the gradient flow method and by the point source formalism for the benchmark point with \(\bar{\lambda}_{\phi}=8,\ \bar{\lambda}_{\tilde{\phi}}=80,\ \bar{\beta}=24\) and \(\bar{\eta}_{\tilde{\phi}}=0.55\). As can be seen from the figure, the results relying on the point source formalism are in good agreement with those of the gradient flow method at large separations. Figure 14 shows the dependence of \(\bar{\mathcal{E}}_{\rm int}\) on the separation distance \(\bar{d}\) for various \(\bar{\lambda}_{\phi}\), \(\bar{\eta}_{\tilde{\phi}}\), \(\bar{\lambda}_{\tilde{\phi}}\), and \(\bar{\beta}\) in the parameter space that allows the formation of bosonic superconducting strings with unit winding number. In the figure, the origin of the vacuum energy is adjusted such that all interaction energies coincide at \(\bar{d}\to\infty\) for visibility. In order to understand the effect of \(\bar{\tilde{\phi}}_{r}\) on \(\bar{\mathcal{E}}_{\rm int}\), we compare \(\bar{\mathcal{E}}_{\rm int}\) of the bosonic superconducting string with that of the local string in the figures.
These figures show that the dependence of \(\bar{\mathcal{E}}_{\rm int}\) on \(\bar{d}\) is almost the same as that of local strings at large \(\bar{d}\). On the other hand, it is drastically different for small \(\bar{d}\), because there \(\bar{\mathcal{E}}_{\rm int}\) is a decreasing function of \(\bar{d}\). A remarkable feature is that the first derivative of \(\bar{\mathcal{E}}_{\rm int}\) looks discontinuous at the transition point, \(\bar{d}=\bar{d}_{c}\), within the resolution of our lattice setup. We suppose that this kink might be smoothed out as the lattice resolution is improved. We find that this kink behavior is triggered by the coalescence of \(\bar{\tilde{\phi}}\), since the coalescence of \(\bar{\tilde{\phi}}\) also happens at \(\bar{d}=\bar{d}_{c}\).

The dependence of \(\mathcal{E}_{\rm int}\) on \(d\) is roughly understood as follows. For bosonic superconducting strings (without current), there are three important length scales associated with the scalar field \(|\phi|\), the \(U(1)\) gauge field, and \(\tilde{\phi}\), namely, \(m_{\phi}\), \(m_{e}\), and \(m_{\tilde{\phi}}\) evaluated at infinity. For all benchmark points taken in Fig. 14, there is a hierarchical structure \(m_{e}^{-1}>m_{\phi}^{-1}>m_{\tilde{\phi}}^{-1}\). On length scales \(d\gtrsim 2m_{e}^{-1}\), the effective description by the point source formalism can be applied because the non-linear effects caused by all fields are clearly irrelevant. For \(2m_{e}^{-1}\gtrsim d\gtrsim 2m_{\phi}^{-1}\) or \(2m_{\phi}^{-1}\gtrsim d\gtrsim 2m_{\tilde{\phi}}^{-1}\), the non-linear effects of the \(U(1)\) sector are important, and hence the analysis relying on the point source formalism may not apply. However, in this region, \(\mathcal{E}_{\rm int}\) is almost the same as that of the local string because of the small effect of \(\tilde{\phi}_{r}\). This feature can be observed in Fig. 14. At \(d=d_{c}\approx 2m_{\tilde{\phi}}^{-1}\), the aforementioned coalescence of \(\tilde{\phi}_{r}\) takes place, and the non-linear effect of \(\tilde{\phi}\) is important for \(d\lesssim 2m_{\tilde{\phi}}^{-1}\). In this region, the configuration of \(\tilde{\phi}_{r}\) possesses approximate circular symmetry, and the gradient energy of \(\tilde{\phi}_{r}\) gives rise to the attraction. This additional attraction due to \(\tilde{\phi}_{r}\) gives the bosonic superconducting string a richer phase structure than the conventional local string. The effects of \(\eta_{\tilde{\phi}},\ \lambda_{\tilde{\phi}},\ \lambda_{\phi}\), and \(\beta\) on the phase structure of the interaction energy may be qualitatively understood by these arguments, since these parameters are related to the mass parameters \(m_{e},\ m_{\phi}\), and \(m_{\tilde{\phi}}\). These qualitative features are numerically confirmed to be the same for other hierarchical mass structures.

### Bosonic superconducting strings with current

In this subsection, we discuss numerical results including the effect of non-zero currents associated with the \(\widetilde{U}(1)\) gauge field. Because the massless \(\widetilde{U}(1)\) gauge field has a long-range effect on the other string, we expand the lattice simulation box from \(600\times 600\) to \(1000\times 1000\) sites in order to see the long-distance behavior and to have enough space to decouple the two strings, while keeping the lattice spacing \(\bar{a}=0.05\).
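For orientation, the following Python sketch builds the two-string initial data on such a lattice using the product (superposition) ansatz discussed above. The \(\tanh\) radial profile is a toy initial guess, not the exact ANO solution.

```python
import numpy as np

def single_string(X, Y, x0, lam=8.0):
    """Toy n=1 profile |phi| e^{i theta} around (x0, 0); tanh shape assumed."""
    r = np.hypot(X - x0, Y)
    theta = np.arctan2(Y, X - x0)
    return np.tanh(np.sqrt(lam / 2.0) * r) * np.exp(1j * theta)

a, N = 0.05, 1000                      # lattice spacing and sites per side
x = (np.arange(N) - N / 2) * a         # box length L = N*a = 50
X, Y = np.meshgrid(x, x, indexing="ij")

dbar = 2.0
phi = single_string(X, Y, +dbar / 2) * single_string(X, Y, -dbar / 2)
# Product ansatz phi = phi_+ * phi_-; |phi| vanishes on both string axes,
# and the total winding around the box boundary is n = 2.
```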
Figure 15 shows the field configurations and energy density of two static superconducting strings on the \(\bar{y}=0\) slice with fixed separation, \(\bar{d}=2\), for \(\bar{\lambda}_{\phi}=8,\ \bar{\lambda}_{\tilde{\phi}}=80,\ \bar{\beta}=24\), \(\bar{\eta}_{\tilde{\phi}}=0.55\), and \(\bar{\tilde{s}}(0)=0.4\), while Fig. 16 is a surface plot for the same parameter set. We numerically confirm that the whole field configuration of the \(\widetilde{U}(1)\) gauge field can be approximated by a superposition of each string when the two strings are far enough from each other. Figure 17 shows the dependence of \(\bar{\mathcal{E}}_{\rm int}\) of superconducting strings on the separation distance \(\bar{d}\) for various currents, including zero current. For clarity, all vacuum energies are matched at \(\bar{d}=12\). The logarithmic growth due to the \(\widetilde{U}(1)\) gauge field is present in the long-distance limit, but it is very hard to discern because of the small charge \(k_{A}\), which is suppressed by current quenching effects as explained in Sec. 2.3. In fact, the value of \(k_{A}\) is tiny, of order \(10^{-2}\sim 10^{-3}\), while the other charges are of order unity, as shown in Table 1.

As we have shown for the case without currents, the gradient energy of \(\bar{\tilde{\phi}}\) leads to an additional attraction at short range. If the amplitude of the \(\widetilde{U}(1)\) gauge field, \(\bar{\tilde{s}}\), on the string axis is not zero, the \(\bar{\tilde{\phi}}_{r}\) condensation becomes smaller, and consequently the attraction becomes weaker than in the absence of current, as discussed in the previous section. As the initial value of \(\bar{\tilde{s}}\) increases, the strength of the attraction becomes even weaker. We can see this clearly from Fig. 17. This effect can be explained by the back-reaction on \(\tilde{\phi}_{r}\), which is the same mechanism as that of current quenching. Thus, even if an additional attraction due to the \(\widetilde{U}(1)\) gauge field is introduced, the total attraction at short range is still suppressed by the back-reaction effect. This implies that, at separation distances comparable to the string thickness, the attraction in the \(\widetilde{U}(1)\) sector is maximized when the current is zero.

## 5 Conclusions

In this paper, we investigated the interaction energy of two static, straight cosmic strings of various types, using the point source formalism proposed in Ref. [39] and numerical calculations based on the gradient flow method. In the analysis based on the point source formalism, we analytically derived the asymptotic configurations of the cosmic strings and showed that the asymptotic field configurations can be realized in a linearized field theory by introducing appropriate external sources. The interaction energy can then be calculated using the standard Green's function method under the assumption that the external source can be approximated by a superposition of the sources of the individual strings. We confirmed that our results are consistent with those of previous studies on local [39] and global strings [58]. In the case of bosonic superconducting strings, the Higgs field in the fundamental representation under the new gauge group \(\widetilde{U}(1)\) is an additional source of attraction. In addition to this, if a static magnetic current is trapped inside the string, it becomes a source of long-range attraction (repulsion) if the directions of the currents in the two strings are parallel (antiparallel).
However, due to the current quenching effect, there is an upper bound on the current, which leads to an upper bound on the corresponding charge of the string. As a result, the \(\widetilde{U}(1)\) gauge field cannot be the most significant source of interaction unless the strings are very far apart. This is the essential reason why the apparent discontinuity (kink behavior) of the first derivative of \(\bar{\mathcal{E}}_{\rm int}\) at the transition point is triggered by the coalescence of \(\bar{\tilde{\phi}}_{r}\) rather than by that of the \(\widetilde{U}(1)\) gauge field, \(\bar{\tilde{s}}\).

We numerically evaluated the interaction energies of several types of cosmic strings at arbitrary separation distances using the relaxation method (gradient flow method) and compared the results with those obtained using the point source formalism. In general, analyses relying on the point source formalism are in good agreement with the full numerical results of the gradient flow method only for separation distances longer than the inverse of the lightest mass in the underlying theory evaluated at infinity. The results for the simple local string are in good agreement with the results of the previous study [41], and with those of the variational approach [40]. Long-range interactions due to NG-bosons are also found to exist in the case of global strings. For both local and global strings, the phase structure is simple in the sense that the interaction energy is a monotonic function of the separation distance.

In the case of superconducting strings, an additional short-range attraction is induced by the Higgs field in the fundamental representation of \(\widetilde{U}(1)\). This leads to the emergence of non-trivial phase structures of the interaction energy (see _e.g._ Fig. 14) that cannot be captured by the point source formalism. Furthermore, when the effects of magnetic currents are included, a logarithmic dependence of the interaction energy on the separation distance is found at large distances, which is consistent with the predictions of the point source formalism. However, the total strength of the attraction is weakened by the current quenching effect at separation distances comparable to the string thickness. Therefore, we conclude that the attraction between two static bosonic superconducting strings is maximized when the total current associated with \(\widetilde{U}(1)\) is zero.

Our new results on the interaction energy of bosonic superconducting strings indicate that there is a viable parameter region leading to an attractive force, at least when the two strings are static and straight. Thus, the necessary condition for a bound state of the two strings is satisfied. Therefore, it would be very interesting to investigate the dynamical collision of the two-string system and to discuss the possibility of bound-state formation of two bosonic superconducting strings. Such an analysis could include the effects of non-zero relative angles and velocities on the bound-state formation of the strings. We leave this for future study.

## Acknowledgement

We would like to thank Yoshihiko Abe, Jinno Ryusuke, and Yu Hamada for valuable discussions. We are grateful for fruitful discussions with Takashi Hiramatsu and Daisuke Yamauchi when this work was initiated. KF is supported by JSPS Grant-in-Aid for Research Fellows Grant No. 22J00345. S.L. is supported by JSPS Grant-in-Aid for Research Fellows Grant No. 23KJ0936. M.Y.
is supported by IBS under the project code IBS-R018-D3, and by JSPS Grant-in-Aid for Scientific Research Number JP21H01080.

## Appendix A Normalization and Numerical solutions

This appendix describes the normalization used in the numerical calculations and the numerical recipes for obtaining various types of cosmic string configurations.

### Normalization of local and bosonic superconducting strings

We consider the following action: \[\begin{split}& S=\int\mathrm{d}^{4}x\left[-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+|D_{\mu}\phi|^{2}-\frac{1}{4}\widetilde{F}^{\mu\nu}\widetilde{F}_{\mu\nu}+|\widetilde{D}_{\mu}\tilde{\phi}|^{2}-V(\phi,\tilde{\phi})\right],\\ & V(\phi,\tilde{\phi})=\frac{\lambda_{\phi}}{4}\left(|\phi|^{2}-\eta_{\phi}^{2}\right)^{2}+\frac{\lambda_{\tilde{\phi}}}{4}\left(|\tilde{\phi}|^{2}-\eta_{\tilde{\phi}}^{2}\right)^{2}+\beta|\phi|^{2}|\tilde{\phi}|^{2}.\end{split} \tag{A.1}\] For convenience, we introduce the dimensionless length \(\bar{x}\) and the dimensionless fields \(\bar{\phi},\ \bar{A}_{\mu},\ \bar{\tilde{\phi}}\), and \(\bar{\tilde{A}}_{\mu}\) as \[\bar{x}\equiv e\eta_{\phi}x,\ \bar{\phi}\equiv\frac{\phi}{\eta_{\phi}},\ \bar{A}_{\mu}\equiv\frac{A_{\mu}}{\eta_{\phi}},\ \bar{\tilde{\phi}}\equiv\frac{\tilde{\phi}}{\eta_{\phi}},\ \bar{\tilde{A}}_{\mu}\equiv\frac{\tilde{A}_{\mu}}{\eta_{\phi}}\,. \tag{A.2}\] Then the action Eq. (A.1) becomes \[S=\frac{1}{e^{2}}\bar{S},\] \[\bar{S}\equiv\int\mathrm{d}^{4}\bar{x}\left[-\frac{1}{4}\bar{F}^{\mu\nu}\bar{F}_{\mu\nu}+|\bar{D}_{\mu}\bar{\phi}|^{2}-\frac{1}{4}\bar{\tilde{F}}^{\mu\nu}\bar{\tilde{F}}_{\mu\nu}+|\bar{\tilde{D}}_{\mu}\bar{\tilde{\phi}}|^{2}-\overline{V}\left(\bar{\phi},\,\bar{\tilde{\phi}}\right)\right], \tag{A.3}\] \[\overline{V}\left(\bar{\phi},\,\bar{\tilde{\phi}}\right)\equiv\frac{\bar{\lambda}_{\phi}}{4}\left(|\bar{\phi}|^{2}-1\right)^{2}+\frac{\bar{\lambda}_{\tilde{\phi}}}{4}\left(|\bar{\tilde{\phi}}|^{2}-\bar{\eta}_{\tilde{\phi}}^{2}\right)^{2}+\bar{\beta}|\bar{\phi}|^{2}|\bar{\tilde{\phi}}|^{2}\,,\] where \(\bar{F}_{\mu\nu}\equiv\bar{\partial}_{\mu}\bar{A}_{\nu}-\bar{\partial}_{\nu}\bar{A}_{\mu}\), \(\bar{\tilde{F}}_{\mu\nu}\equiv\bar{\partial}_{\mu}\bar{\tilde{A}}_{\nu}-\bar{\partial}_{\nu}\bar{\tilde{A}}_{\mu}\), \(\bar{D}_{\mu}\equiv\bar{\partial}_{\mu}-i\bar{A}_{\mu}\), and \(\bar{\tilde{D}}_{\mu}\equiv\bar{\partial}_{\mu}-i\bar{g}\bar{\tilde{A}}_{\mu}\), with \(\bar{\partial}_{\mu}\equiv\partial/\partial\bar{x}^{\mu}\). The physical parameters in the above expressions are defined by \[\bar{\eta}_{\tilde{\phi}}\equiv\frac{\eta_{\tilde{\phi}}}{\eta_{\phi}},\ \bar{\lambda}_{\phi}\equiv\frac{\lambda_{\phi}}{e^{2}},\ \bar{\lambda}_{\tilde{\phi}}\equiv\frac{\lambda_{\tilde{\phi}}}{e^{2}},\ \bar{\beta}\equiv\frac{\beta}{e^{2}},\ \bar{g}\equiv\frac{g}{e}\,. \tag{A.4}\] Assuming a static configuration, the two-dimensional energy (tension) of the string can be extracted as follows: \[\begin{split}&\mathcal{E}_{\rm int}=\eta_{\phi}^{2}\,\bar{\mathcal{E}}_{\rm int},\ \bar{\mathcal{E}}_{\rm int}\equiv\int\mathrm{d}^{2}\bar{x}\,\epsilon_{\rm int}(\bar{x},\bar{y}),\\ &\epsilon_{\rm int}(\bar{x},\bar{y})\equiv\left[\frac{1}{4}\bar{F}^{ij}\bar{F}_{ij}+|\bar{D}_{i}\bar{\phi}|^{2}+\frac{1}{4}\bar{\tilde{F}}^{ij}\bar{\tilde{F}}_{ij}+|\bar{\tilde{D}}_{i}\bar{\tilde{\phi}}|^{2}+\overline{V}\left(\bar{\phi},\,\bar{\tilde{\phi}}\right)\right].\end{split} \tag{A.5}\] In this normalization, \(\bar{\eta}_{\tilde{\phi}},\ \bar{\lambda}_{\phi},\ \bar{\lambda}_{\tilde{\phi}},\ \bar{\beta}\), and \(\bar{g}\) are treated as free parameters.
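For completeness, the map of Eq. (A.4) can be written out explicitly. In the snippet below, the physical input values are arbitrary placeholders, chosen so that the barred parameters come out at the benchmark-like values \(\bar{\lambda}_{\phi}=8,\ \bar{\lambda}_{\tilde{\phi}}=80,\ \bar{\beta}=24,\ \bar{\eta}_{\tilde{\phi}}=0.55\) used in the main text.

```python
# Dimensionless parameters of Eq. (A.4); all inputs are placeholder values.
e, g = 0.5, 0.5
lam_phi, lam_phit, beta = 2.0, 20.0, 6.0
eta_phi, eta_phit = 1.0, 0.55

bar_lam_phi = lam_phi / e**2       # -> 8
bar_lam_phit = lam_phit / e**2     # -> 80
bar_beta = beta / e**2             # -> 24
bar_eta_phit = eta_phit / eta_phi  # -> 0.55
bar_g = g / e                      # -> 1
```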
Note that this normalization cannot be applied to global strings because \(\bar{x}\) becomes ill-defined for \(e=0\). Therefore, we use a different normalization for global strings.

### Normalization of global strings

In the case of global strings, the normalization defined by Eqs. (A.2) and (A.4) cannot accommodate the limit \(e\to 0\), so the normalization of the complex scalar field and the length scale must be changed. We therefore use the following normalization in the numerical computation of global strings: \[\bar{\phi}\equiv\frac{\phi}{\eta_{\phi}},\ \bar{x}\equiv\eta_{\phi}x\,. \tag{A.6}\] With this normalization, the tension of the global string becomes \[\bar{\mathcal{E}}_{\rm int}=\int\mathrm{d}^{2}\bar{x}\left(|\bar{\partial}_{i}\bar{\phi}|^{2}+V(\bar{\phi})\right), \tag{A.7}\] \[V(\bar{\phi})=\frac{\lambda_{\phi}}{4}\left(|\bar{\phi}|^{2}-1\right)^{2}. \tag{A.8}\]

Figure 17: The dependence of the interaction energy of bosonic superconducting strings, \(\bar{\mathcal{E}}_{\rm int}\), on the fixed distance, \(\bar{d}\), is shown for \(\bar{\lambda}_{\phi}=8,\ \bar{\lambda}_{\tilde{\phi}}=80,\ \bar{\beta}=24,\ \bar{\eta}_{\tilde{\phi}}=0.55\) and various initial values of \(\bar{\tilde{s}}\) (the corresponding total currents \(\widetilde{J}_{\rm tot}\) are shown in Table 1). The vacuum energy is matched to the local string case for demonstration purposes.

### Numerical solutions

Here, we describe the numerical recipe for obtaining the string configurations. The cosmic string solution is obtained by solving the coupled second-order differential equations defined by Eqs. (40). The boundary conditions for these differential equations at infinity are given by Eq. (41). As in the case of the conventional ANO strings, the regularity of \(\phi_{r}(r)\) and \(A_{\theta}(r)\) at the origin requires the conditions given by Eq. (6); \(\partial_{r}\tilde{\phi}_{r}(r)\) and \(\partial_{r}\tilde{s}(r)\) must also vanish at the origin by regularity. In addition, in order to solve the equation for \(\tilde{s}(r)\), we impose another boundary condition on \(\tilde{s}(r=0)\), which is considered crucial for the formation of bosonic superconducting strings with current. In this paper, we treat \(\tilde{s}(r=0)\) as a free parameter and simply read off the total current flowing inside the string using Eq. (46). Since the boundary conditions for \(\phi_{r}(r),\ A_{\theta}(r)\), and \(\tilde{\phi}_{r}(r)\) are given both at infinity and at the origin, solving for these fields is a boundary value problem. On the other hand, the two boundary conditions for \(\tilde{s}\) are both specified at the origin, so this is a standard initial value problem. Hence, we apply the gradient flow method to find the solutions for \(\phi_{r}(r),\ A_{\theta}(r)\), and \(\tilde{\phi}_{r}(r)\), while a standard numerical integration method is applied for \(\tilde{s}\). For other numerical approaches, including variational analysis and the successive over-relaxation method, we refer to Refs. [10; 12; 13; 59].

In the gradient flow method, the \(\phi_{r}(r),\ A_{\theta}(r)\), and \(\tilde{\phi}_{r}(r)\) fields are promoted to fictitious time-dependent fields, \(\phi_{r}(t,r),\ A_{\theta}(t,r)\), and \(\tilde{\phi}_{r}(t,r)\). Using the normalization defined in App. A.1, we consider the following heat equations.
\[\begin{split}&\frac{\partial^{2}\bar{\phi}_{r}}{\partial\bar{r}^{2}}+\frac{1}{\bar{r}}\frac{\partial\bar{\phi}_{r}}{\partial\bar{r}}-\frac{1}{\bar{r}^{2}}\bar{\phi}_{r}\left(\bar{A}_{\theta}-n\right)^{2}-\frac{1}{2}\bar{\lambda}_{\phi}\bar{\phi}_{r}(\bar{\phi}_{r}^{2}-1)-\bar{\beta}|\bar{\tilde{\phi}}|^{2}\bar{\phi}_{r}=\frac{\partial\bar{\phi}_{r}}{\partial t},\\ &\frac{\partial^{2}\bar{A}_{\theta}}{\partial\bar{r}^{2}}-\frac{1}{\bar{r}}\frac{\partial\bar{A}_{\theta}}{\partial\bar{r}}-2\bar{\phi}_{r}^{2}\left(\bar{A}_{\theta}-n\right)=\frac{\partial\bar{A}_{\theta}}{\partial t},\\ &\frac{\partial^{2}\bar{\tilde{\phi}}_{r}}{\partial\bar{r}^{2}}+\frac{1}{\bar{r}}\frac{\partial\bar{\tilde{\phi}}_{r}}{\partial\bar{r}}-\bar{\tilde{s}}(\bar{r})^{2}\bar{\tilde{\phi}}_{r}-\frac{1}{2}\bar{\lambda}_{\tilde{\phi}}\bar{\tilde{\phi}}_{r}(\bar{\tilde{\phi}}_{r}^{2}-\bar{\eta}_{\tilde{\phi}}^{2})-\bar{\beta}|\bar{\phi}|^{2}\bar{\tilde{\phi}}_{r}=\frac{\partial\bar{\tilde{\phi}}_{r}}{\partial t}.\end{split} \tag{A.9}\] Here, the signs of the time derivatives of the various fields are chosen such that the energy of the system decreases with time evolution. First, we set \(\bar{\tilde{s}}(\bar{r})=0\) and find the field configurations for \(\bar{\phi}_{r},\ \bar{A}_{\theta}\) and \(\bar{\tilde{\phi}}_{r}\). We initially prepare functions for the fields \(\bar{\phi}_{r}(t=0,\bar{r}),\ \bar{A}_{\theta}(t=0,\bar{r})\) and \(\bar{\tilde{\phi}}_{r}(t=0,\bar{r})\) that satisfy the boundary conditions at the origin and at infinity, and then evolve these functions numerically in the fictitious time. Specifically, for \(\bar{\phi}_{r}\) and \(\bar{A}_{\theta}\) we prepare hyperbolic tangent configurations, and for \(\bar{\tilde{\phi}}_{r}\) a Gaussian configuration. When the time evolution converges, i.e. \(\partial_{t}\bar{\phi}_{r}=\partial_{t}\bar{A}_{\theta}=\partial_{t}\bar{\tilde{\phi}}_{r}=0\), a solution to the original differential equations is obtained. The time evolution converges quickly, consistent with the results of Ref. [60]. Now let us include the effect of the gauge field, \(\tilde{s}(r)\). In addition to the differential equations (A.9), we need to include the following differential equation: \[\frac{\partial^{2}\bar{\tilde{s}}}{\partial\bar{r}^{2}}+\frac{1}{\bar{r}}\frac{\partial\bar{\tilde{s}}}{\partial\bar{r}}-2\bar{g}^{2}\bar{\tilde{s}}(\bar{r})\bar{\tilde{\phi}}_{r}^{2}=0\,. \tag{A.10}\] The differential equations Eqs. (A.9) and (A.10) are solved iteratively. To be precise, the following procedure is used for our numerical computation. * (i) We first obtain the configurations of \(\bar{\phi}_{r},\ \bar{A}_{\theta}\) and \(\bar{\tilde{\phi}}_{r}\) by solving Eq. (A.9) without the gauge field, i.e. with \(\bar{\tilde{s}}(\bar{r})=0\), using the gradient flow method. * (ii) Then, we solve Eq. (A.10) with the \(\bar{\tilde{\phi}}_{r}\) configuration evaluated in the first step (i) (or in the third step (iii)) using the standard initial value method. The initial value \(\bar{\tilde{s}}(\bar{r}=0)\) is set by hand. * (iii) We use the current \(\bar{\tilde{s}}(\bar{r})\) evaluated in the second step and solve Eq. (A.9) again to update the configurations of \(\bar{\phi}_{r},\ \bar{A}_{\theta}\) and \(\bar{\tilde{\phi}}_{r}\). After this, we return to the second step (ii). If the above iteration converges well, the whole configuration of the bosonic superconducting string, including the gauge field, can be obtained numerically. 
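For concreteness, a minimal Julia sketch of step (i) is given below, restricted to the ANO sector of Eq. (A.9) (i.e. with \(\bar{\tilde{s}}=0\) and, for brevity, \(\bar{\beta}=0\)); the grid, time step and initial profiles are illustrative assumptions on our part, not the actual production code used for the paper.

```julia
# Illustrative gradient-flow relaxation of the ANO sector of Eq. (A.9)
# (step (i) with s̄̃ = 0 and, for simplicity, β̄ = 0). All choices here
# (grid, step sizes, profiles) are assumptions for demonstration.
n_w = 1                      # winding number n
lam = 8.0                    # λ̄_ϕ
dr  = 0.05                   # radial spacing in units e η_ϕ = 1
r   = collect(dr:dr:40.0)    # "infinity" placed at r̄ = 40
dt  = 5e-4                   # fictitious time step (dt/dr² < 1/2 for stability)

phi = tanh.(r)               # ϕ̄_r(t=0, r̄): tanh profile, ϕ(0)=0, ϕ(∞)=1
A   = n_w .* tanh.(r).^2     # Ā_θ(t=0, r̄): A(0)=0, A(∞)=n

function euler_step!(phi, A, r, dr, dt, n_w, lam)
    phin, An = copy(phi), copy(A)
    for i in 2:length(r)-1
        lap_phi = (phi[i+1] - 2phi[i] + phi[i-1]) / dr^2
        d_phi   = (phi[i+1] - phi[i-1]) / (2dr)
        lap_A   = (A[i+1] - 2A[i] + A[i-1]) / dr^2
        d_A     = (A[i+1] - A[i-1]) / (2dr)
        # right-hand sides of Eq. (A.9), truncated as described above
        rhs_phi = lap_phi + d_phi/r[i] - phi[i]*(A[i] - n_w)^2/r[i]^2 -
                  0.5*lam*phi[i]*(phi[i]^2 - 1)
        rhs_A   = lap_A - d_A/r[i] - 2phi[i]^2*(A[i] - n_w)
        phin[i] = phi[i] + dt*rhs_phi
        An[i]   = A[i] + dt*rhs_A
    end
    phi .= phin; A .= An      # endpoints stay fixed (boundary conditions)
end

for _ in 1:30_000             # relax until ∂_t ϕ̄_r ≈ ∂_t Ā_θ ≈ 0
    euler_step!(phi, A, r, dr, dt, n_w, lam)
end
```

With \(\bar{\tilde{s}}\) switched on, one would add the \(\bar{\tilde{\phi}}_{r}\) equation to `euler_step!` and interleave it with an ODE integration of Eq. (A.10), as in steps (ii) and (iii) above.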
We have confirmed that the iteration converges well, at least for the benchmark points shown in Table 1. The effective origin and infinity are set to \(\bar{r}=10^{-10}\) and \(\bar{r}=40\), respectively, and the finite size effect is found to be negligible. We take the three benchmark points \(A,\ B\) and \(C\) shown in Table 1. The numerical constants \(k_{\phi},\ k_{e},\ k_{\tilde{\phi}}\) and \(k_{A}\) are determined such that the asymptotic solutions Eqs. (14), (11), (44) and (45) match the numerical results at infinity. From Table 1, it is easy to see that the total current calculated in Eq. (46) decreases as \(\tilde{s}(r=0)\) is increased above the threshold value. The configurations of \(\bar{\phi}_{r},\ \bar{A}_{\theta},\ \bar{\tilde{\phi}}_{r}\) and \(\bar{\tilde{s}}\) are shown in Fig. 18 for the benchmark point \(A\) with initial condition \(\bar{\tilde{s}}(\bar{r}=0)=0.2\) (leading to the total current \(\widetilde{J}_{\rm tot}\simeq 0.0277\)). As expected, the configurations of \(\bar{\phi}_{r}(\bar{r}),\ \bar{A}_{\theta}(\bar{r})\) and \(\bar{\tilde{\phi}}_{r}\) approach their asymptotic values exponentially, while \(\bar{\tilde{s}}(\bar{r})\) increases logarithmically at long distance. The corresponding magnetic field, defined as \(\bar{\tilde{B}}_{\theta}=\partial_{\bar{r}}\bar{\tilde{A}}_{z}(\bar{r})\), is depicted in Fig. 19. It is clear from the figure that the magnetic field is proportional to \(1/\bar{r}\) outside the string but decays towards the center of the string, where it vanishes. The penetration depth of the magnetic field is approximately given by the inverse mass of the gauge field acquired through the Higgs mechanism inside the string. This effect is known as the _Meissner effect_. ## Appendix B Numerical calculation of the interaction energy of two cosmic strings This appendix details the numerical setup for the interaction energy of two straight cosmic strings. In the system of two straight cosmic strings, the calculations are performed in Cartesian coordinates rather than cylindrical coordinates, since the cylindrical symmetry is explicitly broken unless the axes of the two strings coincide. To find the configuration, we apply the gradient flow method as in the case of a single straight string. In particular, we parameterize the fields \(\bar{\phi},\ \bar{A}_{\mu},\ \bar{\tilde{\phi}}\) and \(\bar{\tilde{s}}\) as \[\begin{split}\bar{\phi}(t,x,y)&=\bar{\phi}_{x}(t,x,y)+i\bar{\phi}_{y}(t,x,y)\,,\ \bar{A}_{\mu}(t,x,y)=(0,\bar{A}_{x}(t,x,y),\bar{A}_{y}(t,x,y),0)\,,\\ \bar{\tilde{\phi}}(t,x,y)&=\bar{\tilde{\phi}}_{r}(t,x,y)\,,\ \bar{\tilde{A}}_{z}(t,x,y)=\frac{1}{\bar{g}}\bar{\tilde{s}}(t,x,y)\,.\end{split} \tag{B.1}\] Since we work in the Coulomb gauge, an additional term \({\cal L}_{\rm Coulomb}=(\partial_{\bar{x}}\bar{A}_{x}+\partial_{\bar{y}}\bar{A}_{y})^{2}/2\) is added to the Lagrangian density defined by Eq. (34). Using this parameterization and the normalization defined in Sec. 4, we numerically solve the following equations: 
\[\begin{split}&\frac{\partial\bar{\phi}_{x}}{\partial t}=\bar{\Delta}\bar{\phi}_{x}-|\bar{A}|^{2}\bar{\phi}_{x}+2\left(\bar{A}_{x}\frac{\partial\bar{\phi}_{y}}{\partial\bar{x}}+\bar{A}_{y}\frac{\partial\bar{\phi}_{y}}{\partial\bar{y}}\right)-\frac{\bar{\lambda}_{\phi}}{2}\bar{\phi}_{x}\left(|\bar{\phi}|^{2}-1\right)-\bar{\beta}\,\bar{\tilde{\phi}}_{r}^{2}\bar{\phi}_{x}\,,\\ &\frac{\partial\bar{\phi}_{y}}{\partial t}=\bar{\Delta}\bar{\phi}_{y}-|\bar{A}|^{2}\bar{\phi}_{y}-2\left(\bar{A}_{x}\frac{\partial\bar{\phi}_{x}}{\partial\bar{x}}+\bar{A}_{y}\frac{\partial\bar{\phi}_{x}}{\partial\bar{y}}\right)-\frac{\bar{\lambda}_{\phi}}{2}\bar{\phi}_{y}\left(|\bar{\phi}|^{2}-1\right)-\bar{\beta}\,\bar{\tilde{\phi}}_{r}^{2}\bar{\phi}_{y}\,,\\ &\frac{\partial\bar{A}_{x}}{\partial t}=\bar{\Delta}\bar{A}_{x}-2|\bar{\phi}|^{2}\bar{A}_{x}+2\left(\bar{\phi}_{x}\frac{\partial\bar{\phi}_{y}}{\partial\bar{x}}-\bar{\phi}_{y}\frac{\partial\bar{\phi}_{x}}{\partial\bar{x}}\right)\,,\\ &\frac{\partial\bar{A}_{y}}{\partial t}=\bar{\Delta}\bar{A}_{y}-2|\bar{\phi}|^{2}\bar{A}_{y}+2\left(\bar{\phi}_{x}\frac{\partial\bar{\phi}_{y}}{\partial\bar{y}}-\bar{\phi}_{y}\frac{\partial\bar{\phi}_{x}}{\partial\bar{y}}\right)\,,\\ &\frac{\partial\bar{\tilde{\phi}}_{r}}{\partial t}=\bar{\Delta}\bar{\tilde{\phi}}_{r}-\bar{\tilde{s}}^{2}\bar{\tilde{\phi}}_{r}-\frac{\bar{\lambda}_{\tilde{\phi}}}{2}\bar{\tilde{\phi}}_{r}(\bar{\tilde{\phi}}_{r}^{2}-\bar{\eta}_{\tilde{\phi}}^{2})-\bar{\beta}|\bar{\phi}|^{2}\bar{\tilde{\phi}}_{r}\,,\\ &\frac{\partial\bar{\tilde{s}}}{\partial t}=\bar{\Delta}\bar{\tilde{s}}-2\bar{g}^{2}\,\bar{\tilde{s}}\,\bar{\tilde{\phi}}_{r}^{2}\,.\end{split} \tag{B.2}\] In these expressions, \(\bar{\Delta}\equiv\partial_{\bar{x}}^{2}+\partial_{\bar{y}}^{2}\) and \(|\bar{A}|^{2}\equiv\bar{A}_{x}^{2}+\bar{A}_{y}^{2}\) are the standard two-dimensional Laplace operator and the two-dimensional norm of the \(U(1)\) gauge field, respectively. At the initial time \(\bar{t}=0\), all field configurations are taken to be \[\begin{split}&\bar{\phi}(\bar{t}=0,\bar{x},\bar{y})=F(\bar{x},\bar{y})e^{i\theta_{1}+i\theta_{2}},\\ &\bar{A}_{x}(\bar{t}=0,\bar{x},\bar{y})=F(\bar{x},\bar{y})\left(a_{\bar{x}_{+}}(\bar{x},\bar{y})+a_{\bar{x}_{-}}(\bar{x},\bar{y})\right),\\ &\bar{A}_{y}(\bar{t}=0,\bar{x},\bar{y})=F(\bar{x},\bar{y})\left(a_{\bar{y}_{+}}(\bar{x},\bar{y})+a_{\bar{y}_{-}}(\bar{x},\bar{y})\right),\\ & F(\bar{x},\bar{y})\equiv\tanh\left(\sqrt{(\bar{x}-\bar{d}/2)^{2}+\bar{y}^{2}}\right)\tanh\left(\sqrt{(\bar{x}+\bar{d}/2)^{2}+\bar{y}^{2}}\right),\\ & a_{\bar{x}_{\pm}}(\bar{x},\bar{y})\equiv\begin{cases}\frac{\mp\bar{y}}{(\bar{x}\pm\bar{d}/2)^{2}+\bar{y}^{2}}&((\bar{x}\pm\bar{d}/2)^{2}+\bar{y}^{2}>0)\\ 0&(\bar{x}=\mp\bar{d}/2,\ \bar{y}=0)\end{cases},\\ & a_{\bar{y}_{\pm}}(\bar{x},\bar{y})\equiv\begin{cases}\frac{\bar{x}\pm\bar{d}/2}{(\bar{x}\pm\bar{d}/2)^{2}+\bar{y}^{2}}&((\bar{x}\pm\bar{d}/2)^{2}+\bar{y}^{2}>0)\\ 0&(\bar{x}=\mp\bar{d}/2,\ \bar{y}=0)\end{cases},\\ &\bar{\tilde{s}}=\bar{\tilde{s}}_{1}^{\rm inf}+\bar{\tilde{s}}_{2}^{\rm inf}\,.\end{split} \tag{B.3}\] Here, \(\theta_{1}\) and \(\theta_{2}\) are the azimuthal angles measured from the string axes placed at \((\bar{x},\bar{y})=(-\bar{d}/2,0)\) and \((\bar{x},\bar{y})=(\bar{d}/2,0)\), respectively, and \(\bar{\tilde{s}}_{i}^{\rm inf}\) is the single-string solution for the \(\widetilde{U}(1)\) gauge field obtained in App. A, centered on the \(i\)-th string axis. 
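A minimal Julia sketch of how the initial data (B.3) can be assembled on a grid follows; the separation value, grid choices and helper names are illustrative assumptions of ours, and the \(\bar{\tilde{s}}\) part, which requires the single-string profiles of App. A, is omitted.

```julia
# Illustrative construction of the two-string initial data (B.3); d is the
# fixed separation, with axes at (±d/2, 0). Grid and names are assumptions.
L, a, d = 30.0, 0.05, 6.0
xs = -L/2:a:L/2
ys = -L/2:a:L/2

r2p(x, y) = (x - d/2)^2 + y^2                      # distance² to axis at +d/2
r2m(x, y) = (x + d/2)^2 + y^2                      # distance² to axis at -d/2

F(x, y)   = tanh(sqrt(r2p(x, y))) * tanh(sqrt(r2m(x, y)))
th1(x, y) = atan(y, x + d/2)                       # θ₁, axis at (-d/2, 0)
th2(x, y) = atan(y, x - d/2)                       # θ₂, axis at (+d/2, 0)

# a_{x±} and a_{y±} of Eq. (B.3), regularized to 0 on the axes themselves
ax(x, y) = (r2m(x,y) > 0 ? -y/r2m(x,y) : 0.0) + (r2p(x,y) > 0 ? y/r2p(x,y) : 0.0)
ay(x, y) = (r2m(x,y) > 0 ? (x+d/2)/r2m(x,y) : 0.0) + (r2p(x,y) > 0 ? (x-d/2)/r2p(x,y) : 0.0)

phi0 = [F(x,y) * cis(th1(x,y) + th2(x,y)) for x in xs, y in ys]   # ϕ̄(t̄=0)
Ax0  = [F(x,y) * ax(x,y) for x in xs, y in ys]                    # Ā_x(t̄=0)
Ay0  = [F(x,y) * ay(x,y) for x in xs, y in ys]                    # Ā_y(t̄=0)
```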
It should be noted that the above initial conditions satisfy the desired boundary conditions, \(\bar{\phi}(\bar{x},\bar{y})\to e^{i\theta_{1}+i\theta_{2}},\ |\bar{D}_{\mu}\bar{\phi}(\bar{x},\bar{y})|\to 0,\ \bar{\tilde{\phi}}_{r}(\bar{x},\bar{y})\to 0\) as \(\sqrt{\bar{x}^{2}+\bar{y}^{2}}\to\infty\), and \(\bar{\phi}(\pm\bar{d}/2,0)=0\). For local strings, \(\bar{\beta}=\bar{\eta}_{\tilde{\phi}}=0\) and \(\bar{\tilde{s}}_{i}^{\rm inf}=0\) are taken, while \(\bar{A}_{x}(\bar{t}=0,\bar{x},\bar{y})=\bar{A}_{y}(\bar{t}=0,\bar{x},\bar{y})=0\) is additionally taken for global strings. To solve the above equations numerically, we prepare a two-dimensional simulation box with a length of \(L=30\) per side (the box size is increased to \(L=50\) for superconducting strings with current). The lattice spacing is \(a=0.05\) in units of \(e\eta_{\phi}=1\) for local and bosonic superconducting strings and \(a=0.05\) in units of \(\eta_{\phi}=1\) for global strings. The (fictitious) time evolution of all fields is evaluated from \(\bar{t}=0\) to \(\bar{t}=15\) in time steps of \(\delta\bar{t}=5\times 10^{-4}\) by the standard Euler method. Since we would like to evaluate a static field configuration at a fixed separation, we need to fix the positions of the string axes during the time evolution. To do so, we also fix the phase factor of \(\bar{\phi}\) at each time step, following Ref. [40]. Specifically, we first evaluate \(\bar{\phi}(\bar{t}+\delta\bar{t},\bar{x},\bar{y})\) from \(\bar{\phi}(\bar{t},\bar{x},\bar{y})\) by Eq. (B.2), and then update it by the replacement \(\bar{\phi}(\bar{t}+\delta\bar{t},\bar{x},\bar{y})\to|\bar{\phi}(\bar{t}+\delta\bar{t},\bar{x},\bar{y})|e^{i\theta_{1}+i\theta_{2}}\). On the string axes, where the phase factors \(\theta_{1}\) and \(\theta_{2}\) are ill-defined, the value of \(\bar{\phi}\) is fixed at each time step as \(\bar{\phi}=0\). We use the Julia programming language to solve the above partial differential equations with the aforementioned setup in our numerical calculations. After evolving the field configurations, the two-dimensional energy (tension) of the two-string system (cf. Eq. (A.5)) is computed as \[\begin{split}\bar{\mathcal{E}}_{\rm int}=\int\mathrm{d}\bar{x}\,\mathrm{d}\bar{y}&\left[(\partial_{\bar{x}}\bar{\phi}_{x}+\bar{A}_{x}\bar{\phi}_{y})^{2}+(\partial_{\bar{x}}\bar{\phi}_{y}-\bar{A}_{x}\bar{\phi}_{x})^{2}+(\partial_{\bar{y}}\bar{\phi}_{x}+\bar{A}_{y}\bar{\phi}_{y})^{2}+(\partial_{\bar{y}}\bar{\phi}_{y}-\bar{A}_{y}\bar{\phi}_{x})^{2}\right.\\ &+\frac{1}{2}(\partial_{\bar{x}}\bar{A}_{y}-\partial_{\bar{y}}\bar{A}_{x})^{2}+\frac{1}{2}(\partial_{\bar{x}}\bar{A}_{x}+\partial_{\bar{y}}\bar{A}_{y})^{2}+(\partial_{\bar{x}}\bar{\tilde{\phi}}_{r})^{2}+(\partial_{\bar{y}}\bar{\tilde{\phi}}_{r})^{2}+\bar{g}^{2}\bar{\tilde{s}}^{2}\bar{\tilde{\phi}}_{r}^{2}\\ &\left.+\frac{1}{2}(\partial_{\bar{x}}\bar{\tilde{s}})^{2}+\frac{1}{2}(\partial_{\bar{y}}\bar{\tilde{s}})^{2}+\bar{V}(\bar{\phi},\bar{\tilde{\phi}})\right],\\ \bar{V}(\bar{\phi},\bar{\tilde{\phi}})=&\frac{\bar{\lambda}_{\phi}}{4}(\bar{\phi}_{x}^{2}+\bar{\phi}_{y}^{2}-1)^{2}+\frac{\bar{\lambda}_{\tilde{\phi}}}{4}(\bar{\tilde{\phi}}_{r}^{2}-\bar{\eta}_{\tilde{\phi}}^{2})^{2}+\bar{\beta}(\bar{\phi}_{x}^{2}+\bar{\phi}_{y}^{2})\bar{\tilde{\phi}}_{r}^{2}-\frac{\bar{\lambda}_{\tilde{\phi}}}{4}\bar{\eta}_{\tilde{\phi}}^{4}.\end{split} \tag{B.4}\]
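As an illustration of the phase-fixing prescription described above (following Ref. [40]), a short Julia sketch is given below; the function and variable names are our own, and the grid is assumed to contain the axis points.

```julia
# Illustrative phase-fixing step after each Euler update (cf. Ref. [40]):
# keep |ϕ̄| but reset the phase to θ₁ + θ₂, and pin ϕ̄ = 0 on the string axes.
function fix_phase!(phi, xs, ys, d; tol = 1e-12)
    for (j, y) in enumerate(ys), (i, x) in enumerate(xs)
        theta = atan(y, x + d/2) + atan(y, x - d/2)
        on_axis = abs(y) < tol && (abs(x - d/2) < tol || abs(x + d/2) < tol)
        phi[i, j] = on_axis ? zero(phi[i, j]) : abs(phi[i, j]) * cis(theta)
    end
    return phi
end
```

In the full evolution one would call `fix_phase!` on \(\bar{\phi}\) right after each explicit-Euler update of Eq. (B.2), so that the zeros of \(\bar{\phi}\), and hence the string axes, stay at \((\pm\bar{d}/2,0)\) throughout the relaxation.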
2309.15706
Long-time Anderson Localization for the Nonlinear quasi-periodic Schrödinger Equation on $\mathbb Z^d$
Using a Birkhoff normal form transform to impede mode transfer in a finite "barrier", we prove localization of arbitrary $\ell^2$ data for polynomially long time for the nonlinear quasi-periodic Schr\"odinger equation on $\mathbb Z^d$.
Hongzi Cong, Yunfeng Shi, W. -M. Wang
2023-09-27T14:47:14Z
http://arxiv.org/abs/2309.15706v1
Long-time Anderson localization for the nonlinear quasi-periodic Schrödinger equation on \(\mathbb{Z}^{d}\) ###### Abstract Using a Birkhoff normal form transform to impede mode transfer in a finite "barrier", we prove localization of _arbitrary_ \(\ell^{2}\) data for polynomially long time for the nonlinear quasi-periodic Schrödinger equation on \(\mathbb{Z}^{d}\). Key words and phrases: Anderson localization, Birkhoff normal form, Nonlinear quasi-periodic Schrödinger equation ## 1. Introduction We consider the nonlinear Schrödinger equation on \(\mathbb{Z}^{d}\): \[\mathrm{i}\dot{q}_{\boldsymbol{j}}=V_{\boldsymbol{j}}(\boldsymbol{\theta},\boldsymbol{\alpha})q_{\boldsymbol{j}}+\epsilon_{1}(\Delta q)_{\boldsymbol{j}}+\epsilon_{2}|q_{\boldsymbol{j}}|^{2}q_{\boldsymbol{j}},\,\boldsymbol{j}\in\mathbb{Z}^{d}, \tag{1.1}\] where \(\epsilon_{1}\) and \(\epsilon_{2}\) are parameters in \([0,1]\), and \[(\Delta q)_{\boldsymbol{j}}=\sum_{\boldsymbol{j}^{\prime},|\boldsymbol{j}^{\prime}-\boldsymbol{j}|_{1}=1}q_{\boldsymbol{j}^{\prime}},\] is the usual discrete Laplacian, \(|\boldsymbol{j}|_{1}=\sum_{1\leq i\leq d}|j_{i}|\), and the potential \(V_{\boldsymbol{j}}(\boldsymbol{\theta},\boldsymbol{\alpha})\) is a trigonometric polynomial given by \[V_{\boldsymbol{j}}(\boldsymbol{\theta},\boldsymbol{\alpha})=\sum_{\boldsymbol{\ell}\in\Gamma_{L}}v_{\boldsymbol{\ell}}\cos 2\pi\left(\boldsymbol{\ell}\cdot(\boldsymbol{\theta}+\boldsymbol{j}\boldsymbol{\alpha})\right)\text{ with }\boldsymbol{\theta},\boldsymbol{\alpha}\in[0,1]^{d}, \tag{1.2}\] where \[\boldsymbol{x}\boldsymbol{y}:=(x_{1}y_{1},\cdots,x_{d}y_{d})\in\mathbb{R}^{d},\qquad\boldsymbol{x}\cdot\boldsymbol{y}:=x_{1}y_{1}+\cdots+x_{d}y_{d}\in\mathbb{R},\] for \(\boldsymbol{x},\boldsymbol{y}\in\mathbb{R}^{d}\), and \(\Gamma_{L}\subset\mathbb{Z}^{d},\ L\in\mathbb{N}\) satisfies the following properties: (a) for each \(\boldsymbol{\ell}=(\ell_{k})_{1\leq k\leq d}\in\Gamma_{L}\), \(\ell_{k}\neq 0\) for all \(k=1,2,\ldots,d\); (b) for any two \(\boldsymbol{\ell},\boldsymbol{\ell}^{\prime}\in\Gamma_{L}\), \(\boldsymbol{\ell}+\boldsymbol{\ell}^{\prime}\neq 0\). Note that finite multi-variable cosine series naturally satisfy conditions (a) and (b). When \(\epsilon_{2}=0\), it is known from the work of Bourgain [1] that for analytic \(V\) and any fixed \(\boldsymbol{\theta}\), there is a large set in \(\boldsymbol{\alpha}\) such that the linear Schrödinger operator \(H\): \[H=V(\boldsymbol{\theta},\boldsymbol{\alpha})+\epsilon_{1}\Delta\] has Anderson localization if \(\epsilon_{1}\) is sufficiently small. (See also [1] for \(\mathbb{Z}^{2}\); and [13] for general long-range operators on \(\mathbb{Z}^{d}\).) Consequently, using eigenfunction expansion, all \(\ell^{2}\) solutions which are initially localized about the origin remain localized for all time. The main purpose of this paper is to prove an analogous, long-time result for the nonlinear equation (1.1), when \(\epsilon_{2}\neq 0\). Our main result is the following: **Theorem 1.1**.: _Given any \(\delta,\gamma>0\), \(M\gg 1\), and for any initial datum \(q(0)\in\ell^{2}(\mathbb{Z}^{d})\), let \(j_{0}\in\mathbb{N}\) be such that_ \[\sum_{|\boldsymbol{j}|>j_{0}}|q_{\boldsymbol{j}}(0)|^{2}<\delta,\] _where \(|\boldsymbol{j}|=\sqrt{\sum_{1\leq i\leq d}|j_{i}|^{2}}\)._ 
_Then there exists a constant \(\varepsilon(\gamma,L,M,j_{0})>0\) so that the following holds: for \(0<\epsilon:=\epsilon_{1}+\epsilon_{2}<\varepsilon(\gamma,L,M,j_{0})\) and for all_ \[|t|\leq\delta\cdot\epsilon^{-M}\] _one has_ \[\sum_{|\boldsymbol{j}|>j_{0}+M^{2}}|q_{\boldsymbol{j}}(t)|^{2}<2\delta,\] _on a set in \((\boldsymbol{\theta},\boldsymbol{\alpha})\) of measure at least_ \[1-\gamma.\] ### Ideas of the proof We use a Birkhoff normal form transform to prove the Theorem. This transform is related to that in [1, 20, 21] in the random setting. (For physical motivations of the problem, see e.g., [13].) Here we extend it to the quasi-periodic setting, and to arbitrary dimensions. Similar to [1, 20, 21], this normal form transform is effectuated in a finite neighborhood, seen as a "barrier" to impede transfer to higher modes, i.e., higher-indexed lattice sites. Small divisor conditions are imposed, as usual, in the barrier for the normal form transform. The measure estimate of these small divisor conditions is more difficult than that in the random case. We use the generalized Wronskian approach initiated in [20, 21]. This is the main novelty here. In [20], the Wronskian method was first used for the cosine potential in one-dimensional phase space (i.e., the \(\boldsymbol{\theta}\) in (1.1) is one dimensional). As it turns out, however, this method is general, and applicable to arbitrary trigonometric polynomials in arbitrary dimensional phase space [20]. Here we follow [20] to make small divisor estimates and to prove that arbitrary \(\ell^{2}\) data remain localized, in the sense of the Theorem, for polynomially long time. ### Global in time solutions to quasi-periodic systems It is natural to inquire whether nonlinear quasi-periodic systems such as equation (1.1) have _global_ solutions. In [20], quasi-periodic in time solutions were constructed for the nonlinear quasi-periodic wave equations (NLQPW) in \(\mathbb{Z}^{d}\), for the cosine potential (in one dimensional phase space). Related results for the nonlinear quasi-periodic Schrödinger equation on \(\mathbb{Z}\) were known before, see e.g., [1, 20]. But the existence of quasi-periodic in time solutions to NLQPW seemed to have remained open until [20]. Finally, we would like to mention that the recent result in [20] should pave the way toward the existence of quasi-periodic in time solutions to (1.1), and also its NLQPW counterpart, in \(\mathbb{Z}^{d}\), for arbitrary trigonometric polynomial potentials, thus further generalizing Bourgain's work [1] to a non-linear setting. ### Some details about the small divisors Before closing the section, let us give a bit more precision on the small divisors, which the reader may skip on a first reading. As mentioned earlier, the Birkhoff normal form transform is in a finite neighborhood, more precisely, in an annulus neighborhood of the sphere \(A(j_{0}):=\left\{\boldsymbol{j}\in\mathbb{Z}^{d}:|\boldsymbol{j}|=j_{0}\right\}\), which then acts as a "barrier" to impede mode transfer. We choose "good" \((\boldsymbol{\theta},\boldsymbol{\alpha})\) so that \(V=(V_{\boldsymbol{j}}(\boldsymbol{\theta},\boldsymbol{\alpha}))_{\boldsymbol{j}\in\mathbb{Z}^{d}}\) satisfies suitable **non-resonant conditions**. More precisely, in the Birkhoff normal form transform, the values \(V_{\boldsymbol{j}}\) of the potential at the different sites \(\boldsymbol{j}\) act as the components of a frequency vector. 
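Before the formal definition, given next, the role of \(V\) as a frequency vector can be probed numerically; the Julia sketch below (our own illustration, with \(d=1\) and a small \(\Gamma_{L}\) for simplicity, not code from the paper) evaluates \(V_{j}(\theta,\alpha)\) on a barrier around \(j_{0}\) and samples divisors \(|\sum_{j}k_{j}V_{j}|\) over random small integer vectors \(k\) supported there.

```julia
# Illustrative small-divisor probe for Eq. (1.2) with d = 1 and Γ_L = {1,2,3};
# all numerical choices here are assumptions for demonstration purposes.
V(j, theta, alpha, v) = sum(v[l] * cos(2pi * l * (theta + j*alpha)) for l in eachindex(v))

function min_divisor(theta, alpha, v, barrier; trials = 10^5, kmax = 2)
    omega = [V(j, theta, alpha, v) for j in barrier]   # frequency vector on the barrier
    best = Inf
    for _ in 1:trials
        k = rand(-kmax:kmax, length(omega))            # random small integer vector
        any(!iszero, k) || continue
        best = min(best, abs(sum(k .* omega)))
    end
    return best
end

theta, alpha = 0.377, (sqrt(5) - 1)/2                  # sample phase and frequency
v = [1.0, 0.5, 0.25]                                   # coefficients v_ℓ
barrier = 90:110                                       # sites near the sphere |j| = j₀ = 100
println(min_divisor(theta, alpha, v, barrier))
```

The nonresonance requirement below asks for much more, namely a uniform lower bound of the form (1.3) over all admissible \(k\), but such a scan conveys why only the finitely many sites near \(A(j_{0})\) enter the estimates.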
Given \(\gamma,L,M,j_{0}>0\), we say that a frequency vector \(\omega=(\omega_{\boldsymbol{j}})_{\boldsymbol{j}\in\mathbb{Z}^{d}}\) is \((\gamma,L,M,j_{0})\)-nonresonant if for any \(0\neq\boldsymbol{k}=(k_{\boldsymbol{j}})_{\boldsymbol{j}\in\mathbb{Z}^{d}}\in\mathbb{Z}^{\mathbb{Z}^{d}}\), \[\left|\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}k_{\boldsymbol{j}}\omega_{\boldsymbol{j}}\right|\geq\left(\frac{\gamma}{\left(2LM^{2}j_{0}\right)^{2(d+1)}}\right)^{10L^{4}M^{4}}. \tag{1.3}\] Since the normal form transform is in the barrier only, it suffices to realize the above non-resonant condition near \(A(j_{0})\). We remark that this is an essential point in the proof. The structure of the paper is as follows. Some important facts on Hamiltonian dynamics, such as the Poisson bracket, symplectic transformations and non-resonant conditions, are presented in §2. The Birkhoff normal form type theorem and the main theorem are proved in §3. The measure estimates for the small divisors are in §4. Finally, §5 contains the proofs of some technical lemmas. ## 2. Structure of the transformed Hamiltonian We recast (1.1) as a Hamiltonian equation \[\mathrm{i}\dot{q}_{\boldsymbol{j}}=2\frac{\partial H}{\partial\bar{q}_{\boldsymbol{j}}},\] where \[H(q,\bar{q})=\frac{1}{2}\left(\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}V_{\boldsymbol{j}}(\boldsymbol{\theta},\boldsymbol{\alpha})\left|q_{\boldsymbol{j}}\right|^{2}+\epsilon_{1}\sum_{\begin{subarray}{c}\boldsymbol{j},\boldsymbol{j}^{\prime}\in\mathbb{Z}^{d}\\ |\boldsymbol{j}-\boldsymbol{j}^{\prime}|_{1}=1\end{subarray}}q_{\boldsymbol{j}}\bar{q}_{\boldsymbol{j}^{\prime}}+\frac{1}{2}\epsilon_{2}\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}\left|q_{\boldsymbol{j}}\right|^{4}\right), \tag{2.1}\] where \(V_{\boldsymbol{j}}(\boldsymbol{\theta},\boldsymbol{\alpha})\) is defined by (1.2). In order to prove the main result, we need to control the time derivative of the truncated sum of higher modes \[\frac{d}{dt}\sum_{|\boldsymbol{j}|>j_{0}}\left|q_{\boldsymbol{j}}(t)\right|^{2}. \tag{2.2}\] In what follows, we will deal extensively with monomials in \(q_{\boldsymbol{j}}\). Rewrite any monomial in the form \[\prod_{\boldsymbol{j}\in\mathbb{Z}^{d}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}}. \tag{2.3}\] Let \[\boldsymbol{n}=(\boldsymbol{n}_{\boldsymbol{j}},\boldsymbol{n}_{\boldsymbol{j}}^{\prime})_{\boldsymbol{j}\in\mathbb{Z}^{d}}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}.\] We define \[\begin{split}\text{supp }\boldsymbol{n}&=\{\boldsymbol{j}\in\mathbb{Z}^{d}:\ \boldsymbol{n}_{\boldsymbol{j}}\neq 0\ \text{or}\ \boldsymbol{n}_{\boldsymbol{j}}^{\prime}\neq 0\},\\ \Delta(\boldsymbol{n})&=\sup_{\boldsymbol{j},\boldsymbol{j}^{\prime}\in\text{supp }\boldsymbol{n}}|\boldsymbol{j}-\boldsymbol{j}^{\prime}|,\\ |\boldsymbol{n}|_{1}&=\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}(\boldsymbol{n}_{\boldsymbol{j}}+\boldsymbol{n}_{\boldsymbol{j}}^{\prime}).\end{split}\] If \(\boldsymbol{n}_{\boldsymbol{j}}=\boldsymbol{n}_{\boldsymbol{j}}^{\prime}\) for all \(\boldsymbol{j}\in\text{supp }\boldsymbol{n}\), then the monomial (2.3) is called resonant; otherwise it is non-resonant. Note that non-resonant monomials contribute to the truncated sum in (2.2), while resonant ones do not. We define the resonant set as \[\mathcal{N}=\left\{\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}:\ \boldsymbol{n}_{\boldsymbol{j}}=\boldsymbol{n}_{\boldsymbol{j}}^{\prime}\ \text{for all}\ \boldsymbol{j}\right\}. 
\tag{2.4}\] Given a large \(j_{0}\in\mathbb{N}\) and \(N\in\mathbb{N}\) satisfying \(N\ll j_{0}\), let \[A(j_{0},N):=\left\{\mathbf{j}\in\mathbb{Z}^{d}:||\mathbf{j}|-j_{0}|\leq N\right\}.\] **Definition 2.1**.: Given a Hamiltonian \[W(q,\bar{q})=\sum_{\mathbf{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{ \mathbb{Z}^{d}}}W(\mathbf{n})\prod_{\text{supp }\mathbf{n}}q_{\mathbf{j}}^{\mathbf{n}_{j}}\bar{q}_{\mathbf{j}}^{\mathbf{n}_{j}^{\prime}},\] for \(j_{0},N\in\mathbb{N}\) and \(r>2\), we define \[\left\|W\right\|_{j_{0},N,r}=\sum_{\begin{subarray}{c}\mathbf{n}\in\mathbb{N}^{ \mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}\\ \text{supp }\mathbf{n}\cap A(j_{0},N)\neq\emptyset\end{subarray}}|W(\mathbf{n})| \cdot|\mathbf{n}|_{1}\cdot r^{\Delta(\mathbf{n})+|\mathbf{n}|_{1}-1}. \tag{2.5}\] **Remark 2.1**.: It is easy to see that \[\left\|W\right\|_{j_{0},N_{1},r_{1}}\leq\left\|W\right\|_{j_{0},N_{2},r_{2}},\] if \[N_{1}\leq N_{2}\quad\text{and}\quad r_{1}\leq r_{2}.\] **Definition 2.2**.: The Poisson bracket of \(W\) and \(U\) is defined as \[\{W,U\}:=\mathfrak{i}\sum_{\mathbf{j}\in\mathbb{Z}^{d}}\left(\frac{\partial W}{ \partial q_{\mathbf{j}}}\cdot\frac{\partial U}{\partial\bar{q}_{\mathbf{j}}}-\frac{ \partial W}{\partial\bar{q}_{\mathbf{j}}}\cdot\frac{\partial U}{\partial q_{\mathbf{j} }}\right).\] We have the following key estimate. **Proposition 2.3** (**Poisson Bracket**).: _For \(j_{0},N\in\mathbb{N}\), let_ \[W(q,\bar{q})=\sum_{\begin{subarray}{c}\mathbf{n}\in\mathbb{N}^{\mathbb{Z}^{d}} \times\mathbb{N}^{\mathbb{Z}^{d}}\\ \text{supp }\mathbf{n}\subset A(j_{0},N)\end{subarray}}W(\mathbf{n})\prod_{ \text{supp }\mathbf{n}}q_{\mathbf{j}}^{\mathbf{n}_{j}}\bar{q}_{\mathbf{j}}^{\mathbf{n}_{j}^{\prime}},\] _and_ \[U(q,\bar{q})=\sum_{\mathbf{m}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{ \mathbb{Z}^{d}}}U(\mathbf{m})\prod_{\text{supp }\mathbf{m}}q_{\mathbf{j}}^{\mathbf{m}_{j}}\bar{q}_{\mathbf{j}}^{\mathbf{m}_{j}^{\prime}}.\] _Then for any \(0<\sigma<r/2\), we have_ \[\left\|\{W,U\}\right\|_{j_{0},N,r-\sigma}\leq\frac{1}{\sigma}\left\|W\right\|_ {j_{0},N,r}\cdot\left\|U\right\|_{j_{0},N,r}. 
\tag{2.6}\] Proof.: First of all, we write \[\{W,U\}=\sum_{\boldsymbol{l}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d}}}\{W,U\} (\boldsymbol{l})\prod_{\text{supp }\boldsymbol{l}}q_{\boldsymbol{j}}^{L_{\boldsymbol{j}}}\bar{q}_{ \boldsymbol{j}}^{\boldsymbol{l}^{\prime}_{\boldsymbol{j}}}.\] Then one has \[\{W,U\}(\boldsymbol{l})=\mathrm{i}\sum_{\boldsymbol{k}\in\mathbb{Z}^{d}}\left( \sum_{\boldsymbol{n},\boldsymbol{m}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d} }}^{*}W(\boldsymbol{n})U(\boldsymbol{m})\left(\boldsymbol{n_{k}}\boldsymbol{m ^{\prime}_{k}}-\boldsymbol{n^{\prime}_{k}}\boldsymbol{m_{k}}\right)\right) \tag{2.7}\] and the sum \(\sum\limits_{\boldsymbol{n},\boldsymbol{m}\in\mathbb{N}^{2^{d}}\times\mathbb{ N}^{2^{d}}}^{*}\) is taken as \[\boldsymbol{l_{j}} =\boldsymbol{n_{j}}+\boldsymbol{m_{j}}-1,\quad\boldsymbol{l^{ \prime}_{j}}=\boldsymbol{n^{\prime}_{j}}+\boldsymbol{m^{\prime}_{j}}-1\text{ for }\boldsymbol{j}=\boldsymbol{k},\] \[\boldsymbol{l_{j}} =\boldsymbol{n_{j}}+\boldsymbol{m_{j}},\quad\boldsymbol{l^{ \prime}_{j}}=\boldsymbol{n^{\prime}_{j}}+\boldsymbol{m^{\prime}_{j}}\text{ for }\boldsymbol{j}\neq\boldsymbol{k}.\] Secondly, let \[\widetilde{U}=\sum_{\begin{subarray}{c}\boldsymbol{m}\in\mathbb{N}^{2^{d}} \times\mathbb{N}^{2^{d}}\\ \text{supp }\boldsymbol{m}\cap A(j_{0},N)=\emptyset\end{subarray}}U( \boldsymbol{m})\prod_{\text{supp }\boldsymbol{m}}q_{\boldsymbol{j}}^{\boldsymbol{m}_{j}}\bar{q}_{ \boldsymbol{j}}^{\boldsymbol{m}^{\prime}_{j}},\] then one has \(\left\{W,\widetilde{U}\right\}=0.\) Hence, we can always assume that \[U=\sum_{\begin{subarray}{c}\boldsymbol{m}\in\mathbb{N}^{2^{d}} \times\mathbb{N}^{2^{d}}\\ \text{supp }\boldsymbol{m}\cap A(j_{0},N)\neq\emptyset\end{subarray}}U( \boldsymbol{m})\prod_{\text{supp }\boldsymbol{m}}q_{\boldsymbol{j}}^{\boldsymbol{m}_{j}}\bar{q}_{ \boldsymbol{j}}^{\boldsymbol{m}^{\prime}_{j}}. \tag{2.8}\] Without loss of generality, we assume that \(W\) and \(U\) are homogeneous polynomials with degrees \(n^{*}\) and \(m^{*}\) respectively, i.e., \[W(q,\bar{q})=\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{2^{d}} \times\mathbb{N}^{2^{d}},|\boldsymbol{m}|_{1}=n^{*}\\ \text{supp }\boldsymbol{m}\cap A(j_{0},N)\neq\emptyset\end{subarray}}W( \boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{j}}\bar{q}_{ \boldsymbol{j}}^{\boldsymbol{n}^{\prime}_{j}},\] and \[U(q,\bar{q})=\sum_{\begin{subarray}{c}\boldsymbol{m}\in\mathbb{N}^{2^{d}} \times\mathbb{N}^{2^{d}},|\boldsymbol{m}|_{1}=m^{*}\\ \text{supp }\boldsymbol{m}\cap A(j_{0},N)\neq\emptyset\end{subarray}}U( \boldsymbol{m})\prod_{\text{supp }\boldsymbol{m}}q_{\boldsymbol{j}}^{\boldsymbol{m}_{j}}\bar{q}_{ \boldsymbol{j}}^{\boldsymbol{m}^{\prime}_{j}}.\] Since \(r>2\) and \(0<\sigma<r/2\), one has \[1<r-\sigma<r. 
\tag{2.9}\] In view of (2.9) and \[\Delta(\boldsymbol{l})\leq\Delta(\boldsymbol{n})+\Delta(\boldsymbol{m}),\] one has \[\sum_{\mathbf{l}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d}}}\left|\sum_{ \mathbf{k}\in\mathbb{Z}^{d}}\sum_{\mathbf{n},\mathbf{m}\in\mathbb{N}^{2^{d}}\times\mathbb{N }^{2^{d}}}^{*}W(\mathbf{n})U(\mathbf{m})\left(\mathbf{n_{k}}\mathbf{m}_{k}^{\prime}-\mathbf{n}_{k}^ {\prime}\mathbf{m}_{k}\right)\right|(r-\sigma)^{\Delta(\mathbf{l})}\] \[\leq \sum_{\mathbf{n},\mathbf{m}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d}}} \left|W(\mathbf{n})\right|\left|U(\mathbf{m})\right|\sum_{\mathbf{k}\in\mathbb{Z}^{d}} \left(\mathbf{n_{k}}\mathbf{m}_{k}^{\prime}+\mathbf{n}_{k}^{\prime}\mathbf{m}_{k}\right)(r- \sigma)^{\Delta(\mathbf{n})+\Delta(\mathbf{m})} \tag{2.10}\] \[\leq \left(\sum_{\mathbf{n}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d}}} \left|W(\mathbf{n})\right|\cdot|\mathbf{n}|_{1}\cdot r^{\Delta(\mathbf{n})}\right)\left( \sum_{\mathbf{m}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d}}}\left|U(\mathbf{m}) \right|\cdot|\mathbf{m}|_{1}\cdot r^{\Delta(\mathbf{m})}\right).\] In view of (2.7), (2.10) and using \(|\mathbf{l}|_{1}=|\mathbf{n}|_{1}+|\mathbf{m}|_{1}-2\), we have \[\left\|\{W,U\}\right\|_{j_{0},N,r-\sigma}\] \[\leq \left(|\mathbf{n}|_{1}+|\mathbf{m}|_{1}-2\right)(r-\sigma)^{|\mathbf{n}|_{1}+ |\mathbf{m}|_{1}-3}\] \[\times\left(\sum_{\mathbf{n}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^ {d}}}\left|W(\mathbf{n})\right|\cdot|\mathbf{n}|_{1}\cdot r^{\Delta(\mathbf{n})}\right) \left(\sum_{\mathbf{m}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d}}}\left|U(\mathbf{m}) \right|\cdot|\mathbf{m}|_{1}\cdot r^{\Delta(\mathbf{m})}\right)\] \[\leq \frac{1}{\sigma}\left\|W\right\|_{j_{0},N,r}\cdot\left\|U\right\| _{j_{0},N,r},\] which finishes the proof of (2.6), and where the last inequality is based on the following inequality: \[(|\mathbf{n}|_{1}+|\mathbf{m}|_{1}-2)(r-\sigma)^{|\mathbf{n}|_{1}+|\mathbf{m}|_{1}-3}\leq \frac{1}{\sigma}r^{|\mathbf{n}|_{1}+|\mathbf{m}|_{1}-2}.\] Furthermore, one has **Proposition 2.4**.: _Let \(W\) and \(U\) be given by Proposition 2.3. Assume further that_ \[\left(\frac{e}{\sigma}\right)\left\|W\right\|_{j_{0},N,r}\leq\frac{1}{2}. \tag{2.11}\] _Then_ \[\left\|U\circ X_{W}^{1}\right\|_{j_{0},N,r-\sigma}\leq 2\left\|U\right\|_{j_{0 },N,r},\] _where \(X_{W}^{1}\) is the time-\(1\) map generated by the flow of \(W\)._ In general, we have \[\left\|U\circ X_{W}^{1}-U\right\|_{j_{0},N,r-\sigma}\leq\frac{e}{\sigma} \cdot\left\|W\right\|_{j_{0},N,r}\cdot\left\|U\right\|_{j_{0},N,r}, \tag{2.12}\] and \[\left\|U\circ X_{W}^{1}-U-\{U,W\}\right\|_{j_{0},N,r-\sigma}\leq\left(\frac{e }{\sigma}\right)^{2}\left\|W\right\|_{j_{0},N,r}^{2}\cdot\left\|U\right\|_{j_ {0},N,r}. \tag{2.13}\] ## 3. Analysis of the Symplectic Transformations We now construct the symplectic transformation \(\Gamma\) by a finite step induction. At the first step, i.e., \(s=1\) (in view of (2.1)), \[H_{1}=H=\frac{1}{2}\left(\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}V_{\boldsymbol{j }}(\boldsymbol{\theta},\boldsymbol{\alpha})|q_{\boldsymbol{j}}|^{2}+\epsilon _{1}\sum_{\begin{subarray}{c}i,j\in\mathbb{Z}^{d}\\ |i-j|_{1}=1\end{subarray}}q_{i}\bar{q}_{\boldsymbol{j}}+\frac{1}{2}\epsilon_{2 }\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}|q_{\boldsymbol{j}}|^{4}\right). 
\tag{3.1}\] Then (3.1) can be rewritten as \[H_{1}=D+Z_{1}+R_{1},\] where \[D=\frac{1}{2}\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}V_{\boldsymbol{j}}(\boldsymbol{\theta},\boldsymbol{\alpha})|q_{\boldsymbol{j}}|^{2},\] \[Z_{1}=\frac{\epsilon_{2}}{4}\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}|q_{\boldsymbol{j}}|^{4},\] and \[R_{1}=\frac{\epsilon_{1}}{2}\sum_{\begin{subarray}{c}\boldsymbol{i},\boldsymbol{j}\in\mathbb{Z}^{d}\\ |\boldsymbol{i}-\boldsymbol{j}|_{1}=1\end{subarray}}q_{\boldsymbol{i}}\bar{q}_{\boldsymbol{j}}.\] Let \(M\) be given as in Theorem 1.1 and fix any \(r>2\). From (2.5), we have that \[\left\|H_{1}-D\right\|_{j_{0},M^{2},r}\leq\epsilon^{0.99}, \tag{3.2}\] using \(\epsilon=\epsilon_{1}+\epsilon_{2}\) small enough. ### One step of Birkhoff normal form **Lemma 3.1**.: _Let \(V=(V_{\boldsymbol{j}}(\boldsymbol{\theta},\boldsymbol{\alpha}))_{\boldsymbol{j}\in\mathbb{Z}^{d}}\) satisfy the \((\gamma,L,M,j_{0})\)-nonresonant conditions (1.3). Then there exists a change of variables \(\Gamma_{1}:=X_{F_{1}}^{1}\) such that_ \[H_{2}=H_{1}\circ X_{F_{1}}^{1}=D+Z_{2}+R_{2}=\frac{1}{2}\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}V_{\boldsymbol{j}}(\boldsymbol{\theta},\boldsymbol{\alpha})|q_{\boldsymbol{j}}|^{2}+\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}\\ \boldsymbol{n}\in\mathcal{N}\end{subarray}}Z_{2}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}\left|q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\right|^{2}+\sum_{\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}}R_{2}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}}.\] _Moreover, one has_ \[\left\|F_{1}\right\|_{j_{0},M^{2},r}\leq\epsilon^{0.95}, \tag{3.3}\] \[\left\|Z_{2}\right\|_{j_{0},M^{2},r-\sigma}\leq\epsilon^{0.9}\left(\sum_{i=0}^{1}2^{-i}\right), \tag{3.4}\] \[\left\|R_{2}\right\|_{j_{0},M^{2},r-\sigma}\leq\epsilon^{0.9}\left(\sum_{i=0}^{1}2^{-i}\right), \tag{3.5}\] _and_ \[\left\|\mathcal{R}_{2}\right\|_{j_{0},M^{2},r-\sigma}\leq\epsilon^{1.9}, \tag{3.6}\] _where_ \[\mathcal{R}_{2}=\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}\\ \text{supp }\boldsymbol{n}\cap A(j_{0},M^{2}-40)\neq\emptyset\end{subarray}}R_{2}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}}. \tag{3.7}\] _Furthermore, for any \(A\geq 3\) the following estimate holds_ \[\left\|\sum_{\Delta(\boldsymbol{n})+|\boldsymbol{n}|=A}\left(|Z_{2}(\boldsymbol{n})|+|R_{2}(\boldsymbol{n})|\right)\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}}\right\|_{j_{0},M^{2},r-\sigma}\leq\epsilon^{1+0.9(A-3)}. 
\tag{3.8}\] Proof.: By the Birkhoff normal form theory, \(F_{1}\) satisfies the homological equation \[L_{V}F_{1}=\mathcal{R}_{1}, \tag{3.9}\] where the _Lie derivative_ operator is defined by \[L_{V}:\;W\mapsto L_{V}W:=\mathrm{i}\sum_{\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}}\left(\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}(\boldsymbol{n}_{\boldsymbol{j}}-\boldsymbol{n}_{\boldsymbol{j}}^{\prime})V_{\boldsymbol{j}}\right)W(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}},\] and \[\mathcal{R}_{1}=\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}\\ \text{supp }\boldsymbol{n}\cap A(j_{0},M^{2}-20)\neq\emptyset\end{subarray}}R_{1}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}}.\] For \(\boldsymbol{n}\notin\mathcal{N}\) (see (2.4)), one has \[F_{1}(\boldsymbol{n})=\frac{R_{1}(\boldsymbol{n})}{\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}(\boldsymbol{n}_{\boldsymbol{j}}-\boldsymbol{n}_{\boldsymbol{j}}^{\prime})V_{\boldsymbol{j}}}.\] Note that the frequency \(V\) satisfies the nonresonant condition (1.3). Using \(\epsilon\) small enough we have \[|F_{1}(\boldsymbol{n})|\leq|R_{1}(\boldsymbol{n})|\epsilon^{-0.01}, \tag{3.10}\] and then \[\left\|F_{1}\right\|_{j_{0},M^{2},r}\leq\epsilon^{0.95},\] which completes the proof of (3.3). Using Taylor's formula yields \[H_{2}:=H_{1}\circ X_{F_{1}}^{1}=D+Z_{1}+\left\{D,F_{1}\right\}+R_{1}+\left(X_{F_{1}}^{1}-\mathbf{id}-\left\{\cdot,F_{1}\right\}\right)D+\left(X_{F_{1}}^{1}-\mathbf{id}\right)(Z_{1}+R_{1})=D+Z_{2}+R_{2}:=\frac{1}{2}\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}V_{\boldsymbol{j}}|q_{\boldsymbol{j}}|^{2}+\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}\\ \boldsymbol{n}\in\mathcal{N}\end{subarray}}Z_{2}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}\left|q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\right|^{2}+\sum_{\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}}R_{2}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}},\] where by (3.9), \[R_{2}=\left(R_{1}-\mathcal{R}_{1}\right)+\left(X_{F_{1}}^{1}-\mathbf{id}-\left\{\cdot,F_{1}\right\}\right)D+\left(X_{F_{1}}^{1}-\mathbf{id}\right)(Z_{1}+R_{1})=\sum_{\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}}R_{2}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}},\] and \[\left(X_{F_{1}}^{1}-\mathbf{id}-\left\{\cdot,F_{1}\right\}\right)D:=D\circ X_{F_{1}}^{1}-D-\left\{D,F_{1}\right\},\] \[\left(X_{F_{1}}^{1}-\mathbf{id}\right)(Z_{1}+R_{1}):=(Z_{1}+R_{1})\circ X_{F_{1}}^{1}-(Z_{1}+R_{1}).\] In this step, we have \(Z_{2}=Z_{1}\). Hence, the estimate (3.4) holds true. Write \[R_{2}=\mathcal{R}_{2}+(R_{2}-\mathcal{R}_{2}),\] where \(\mathcal{R}_{2}\) is defined by (3.7). By (2.12), (2.13) and (3.9), for any \(0<\sigma<r/2\) one has \[\left\|\mathcal{R}_{2}\right\|_{j_{0},M^{2},r-\sigma}\leq\left(\frac{e}{\sigma}\right)\cdot\left\|F_{1}\right\|_{j_{0},M^{2},r}\cdot\left\|H_{1}-D\right\|_{j_{0},M^{2},r}\leq\epsilon^{1.9},\] where the last inequality follows from (3.2) and (3.3). This finishes the proof of (3.6). Similarly, we have \[\left\|R_{2}-\mathcal{R}_{2}\right\|_{j_{0},M^{2},r-\sigma}\leq\epsilon^{0.9}\left(\sum_{i=0}^{1}2^{-i}\right).\] Finally, the estimate (3.8) follows from (3.2) and (3.3) by induction on \(A\). 
Precisely, the term in \(R_{2}\) comes from \(\frac{1}{j!}Z_{1}^{(j)}\) and \(\frac{1}{j!}R_{1}^{(j)}\) for some \(j\in\mathbb{N}\), where \(Z_{1}^{(j)}=\left\{Z_{1}^{(j-1)},H\right\}\), \(Z_{1}^{(0)}=Z_{1}\), \(R_{1}^{(j)}=\left\{R_{1}^{(j-1)},H\right\}\) and \(R_{1}^{(0)}=R_{1}\). Following the proof of (3.6) and noting that \(\Delta(\boldsymbol{l})\leq\Delta(\boldsymbol{n})+\Delta(\boldsymbol{m})\) and \(|\boldsymbol{l}|_{1}\leq|\boldsymbol{n}|_{1}+|\boldsymbol{m}|_{1}-2\), we conclude the proof of (3.8). ### Iterative Lemma We introduce some constants used in the iterative steps. For \(s\in\mathbb{N}\) and \(1\leq s\leq M\), let \[N_{s}=M^{2}-20(s-1), \tag{3.11}\] and then using \(M\gg 1\) one has \[N_{s}\geq M^{2}-20M\geq\frac{M^{2}}{2},\] which implies \[\left[j_{0}-\frac{M^{2}}{2},j_{0}+\frac{M^{2}}{2}\right]\subset A(j_{0},N_{s})\subset\left[j_{0}-M^{2},j_{0}+M^{2}\right].\] Let \[\sigma=\frac{r}{2M},\] then \[r-s\sigma\geq r-M\cdot\frac{r}{2M}\geq r/2.\] **Lemma 3.2**.: _Consider the Hamiltonian \(H_{s}(q,\bar{q})\) of the form_ \[H_{s}=D+Z_{s}+R_{s}=\frac{1}{2}\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}V_{\boldsymbol{j}}|q_{\boldsymbol{j}}|^{2}+\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}\\ \boldsymbol{n}\in\mathcal{N}\end{subarray}}Z_{s}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}\left|q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\right|^{2}+\sum_{\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}}R_{s}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}}.\] _Let \(V\) satisfy the \((\gamma,L,M,j_{0})\)-nonresonant conditions (1.3). Assume_ \[\left\|Z_{s}\right\|_{j_{0},M^{2},r-(s-1)\sigma}\leq\epsilon^{0.9}\left(\sum_{i=0}^{s-1}2^{-i}\right), \tag{3.12}\] \[\left\|R_{s}\right\|_{j_{0},M^{2},r-(s-1)\sigma}\leq\epsilon^{0.9}\left(\sum_{i=0}^{s-1}2^{-i}\right), \tag{3.13}\] \[\left\|\mathcal{R}_{s}\right\|_{j_{0},M^{2},r-(s-1)\sigma}\leq\epsilon^{1+0.9(s-1)}, \tag{3.14}\] _where_
\tag{3.16}\] _Then there exists a change of variables \(\Phi_{s}:=X^{1}_{F_{s}}\)_ \[H_{s+1} =H_{s}\circ X^{1}_{F_{s}}\] \[=\frac{1}{2}\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}V_{\boldsymbol {j}}|q_{\boldsymbol{j}}|^{2}\] \[\quad+\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{2^{d }}\times\mathbb{N}^{2^{d}}\\ \boldsymbol{n}\in\mathcal{N}\end{subarray}}Z_{s+1}(\boldsymbol{n})\prod_{ \text{supp }\boldsymbol{n}}\left|q_{\boldsymbol{j}}^{\boldsymbol{n}_{j}} \right|^{2}+\sum_{\boldsymbol{n}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d}} }R_{s+1}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{j}^{\boldsymbol{n}_{j}}\bar{q}_{\bar{ \boldsymbol{j}}}^{n_{j}^{\prime}}.\] _Moreover, one has_ \[\left\|F_{s}\right\|_{j_{0},M^{2},r-(s-1)\sigma}\leq\epsilon^{0.9s}, \tag{3.18}\] \[\left\|Z_{s+1}\right\|_{j_{0},M^{2},r-s\sigma}\leq\epsilon^{0.9} \left(\sum_{i=0}^{s}2^{-i}\right),\] (3.19) \[\left\|R_{s+1}\right\|_{j_{0},M^{2},r-s\sigma}\leq\epsilon^{0.9} \left(\sum_{i=0}^{s}2^{-i}\right),\] (3.20) \[\left\|\mathcal{R}_{s+1}\right\|_{j_{0},M^{2},r-s\sigma}\leq \epsilon^{1+0.9s}, \tag{3.17}\] _where_ \[\mathcal{R}_{s+1}=\sum_{\boldsymbol{n}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2 ^{d}}}R_{s+1}(\boldsymbol{n})\ \prod_{\text{supp }\boldsymbol{n}\cap A(j_{0},N_{s+2})\neq \emptyset}q_{j}^{\boldsymbol{n}_{j}}\bar{q}_{\bar{\boldsymbol{j}}}^{n_{j}^{ \prime}}.\] _Moreover, we have_ \[\left\|\sum_{\Delta(\boldsymbol{n})+|\boldsymbol{n}|=A}(|Z_{s+1}(\boldsymbol{ n})|+|R_{s+1}(\boldsymbol{n})|)\prod_{\text{supp }\boldsymbol{n}}q_{j}^{\boldsymbol{n}_{j}}\bar{q}_{\bar{\boldsymbol{j}}}^{n_{ j}^{\prime}}\right\|_{j_{0},M^{2},r-s\sigma}\leq\epsilon^{1+0.9(A-3)}. \tag{3.21}\] Proof.: As before, \(F_{s}\) satisfies the homological equation \[L_{V}F_{s}=\widetilde{\mathcal{R}}_{s},\] where \[\widetilde{\mathcal{R}}_{s}(q,\bar{q}):=\sum_{\boldsymbol{n}\in\mathbb{N}^{2 ^{d}}\times\mathbb{N}^{2^{d}}}R_{s}(\boldsymbol{n})\ \prod_{\text{supp }\boldsymbol{n}\cap A(j_{0},N_{s+1})\neq\emptyset \atop\Delta(\boldsymbol{n})+|\boldsymbol{n}|\leq s+2}q_{j}^{\boldsymbol{n}_{ j}}\bar{q}_{\bar{\boldsymbol{j}}}^{n_{j}^{\prime}}. \tag{3.22}\] By direct computations, one has \[F_{s}(\boldsymbol{n})=\frac{R_{s}(\boldsymbol{n})}{\sum_{\boldsymbol{j}\in \mathbb{Z}^{d}}(\boldsymbol{n}_{j}-\boldsymbol{n}_{j}^{\prime})V_{\bar{ \boldsymbol{j}}}},\] unless \(\mathbf{n}\in\mathcal{N}\). In view of (3.14) and following the proof of (3.3), we have \[\left\|F_{s}\right\|_{j_{0},M^{2},r-(s-1)\sigma}\leq\epsilon^{0.9s},\] which finishes the proof of (3.17). 
Using Taylor's formula again shows \[H_{s+1}:=H_{s}\circ X_{F_{s}}^{1}=D+\{D,F_{s}\}+Z_{s}+R_{s}+\left(X_{F_{s}}^{1}-\mathbf{id}-\{\cdot,F_{s}\}\right)D+\left(X_{F_{s}}^{1}-\mathbf{id}\right)(Z_{s}+R_{s})=D+Z_{s+1}+R_{s+1}=\frac{1}{2}\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}V_{\boldsymbol{j}}|q_{\boldsymbol{j}}|^{2}+\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}\\ \boldsymbol{n}\in\mathcal{N}\end{subarray}}Z_{s+1}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}\left|q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\right|^{2}+\sum_{\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}}R_{s+1}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}},\] where \[Z_{s+1}=Z_{s}+\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}},\ \boldsymbol{n}\in\mathcal{N}\\ \text{supp }\boldsymbol{n}\cap A(j_{0},N_{s+1})\neq\emptyset\\ \Delta(\boldsymbol{n})+|\boldsymbol{n}|\leq s+2\end{subarray}}R_{s}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}}.\] Following the proof of (3.12)-(3.14), one completes the proof of (3.18)-(3.20). Finally, the estimate (3.21) follows from the proof of (3.8). ### Birkhoff Normal Form Theorem **Theorem 3.3** (**Birkhoff Normal Form**).: _Consider the Hamiltonian (3.1) and assume that the potential \(V\) satisfies the \((\gamma,L,M,j_{0})\)-nonresonant condition (1.3). Given any \(r>2\), there exists an \(\varepsilon(\gamma,L,M,j_{0})>0\) such that, for any \(0<\epsilon<\varepsilon(\gamma,L,M,j_{0})\), there exists a symplectic transformation \(\Gamma=\Gamma_{1}\circ\cdots\circ\Gamma_{M}\) such that_ \[\widetilde{H}=H_{1}\circ\Gamma=D+\widetilde{Z}+\widetilde{R}=\frac{1}{2}\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}V_{\boldsymbol{j}}|q_{\boldsymbol{j}}|^{2}+\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}\\ \boldsymbol{n}\in\mathcal{N}\end{subarray}}\widetilde{Z}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}}+\sum_{\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}}\widetilde{R}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}},\] _where_ \[\left\|\widetilde{Z}\right\|_{j_{0},M^{2},r/2}\leq 2\epsilon^{0.9}, \tag{3.23}\] \[\left\|\widetilde{R}\right\|_{j_{0},M^{2},r/2}\leq 2\epsilon^{0.9}, \tag{3.24}\] _and_ \[\left\|\widetilde{\mathcal{R}}\right\|_{j_{0},M^{2},r/2}\leq\epsilon^{0.9M}, \tag{3.25}\] _with_ \[\widetilde{\mathcal{R}}=\sum_{\begin{subarray}{c}\boldsymbol{n}\in\mathbb{N}^{\mathbb{Z}^{d}}\times\mathbb{N}^{\mathbb{Z}^{d}}\\ \text{supp }\boldsymbol{n}\cap A(j_{0},M^{2}/2)\neq\emptyset\end{subarray}}\widetilde{R}(\boldsymbol{n})\prod_{\text{supp }\boldsymbol{n}}q_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}}\bar{q}_{\boldsymbol{j}}^{\boldsymbol{n}_{\boldsymbol{j}}^{\prime}}. \tag{3.26}\] _Furthermore, for any \(A\geq 3\) the following estimate holds_
Finally, using the Iterative Lemma, one can find a symplectic transformation \(\Gamma=\Gamma_{1}\circ\cdots\circ\Gamma_{M}\) such that \[\widetilde{H}:=H_{M+1}=H_{1}\circ\Gamma,\] which satisfies (3.23)-(3.27). ### Proof of the main theorem Now we are in a position to complete the proof of Theorem 1.1. Proof.: In view of Theorem 3.3, one obtains the \(\widetilde{H}(\tilde{q},\bar{\tilde{q}})\) in the new coordinates. The new Hamiltonian equation is given by \[\mathrm{i}\dot{\tilde{q}}=2\frac{\partial\widetilde{H}}{\partial\tilde{\tilde {q}}}. \tag{3.28}\] We get by using (3.28) that \[\frac{d}{dt}\sum_{|\mathbf{j}|>j_{0}}|\tilde{q}_{j}(t)|^{2}= \left\{\sum_{|\mathbf{j}|>j_{0}}|\tilde{q}_{j}(t)|^{2}\,,\widetilde{ D}+\widetilde{Z}+\widetilde{R}\right\}\] \[= \left\{\sum_{|\mathbf{j}|>j_{0}}|\tilde{q}_{j}(t)|^{2}\,,\widetilde{R}\right\}\] \[= 4\mathrm{Im}\sum_{|\mathbf{j}|>j_{0}}\bar{\tilde{q}}_{j}(t)\frac{ \partial\widetilde{R}}{\partial\tilde{\tilde{q}}}\] \[= \sum_{\mathbf{n}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d}}} \widetilde{R}(\mathbf{n})\sum_{|\mathbf{j}|>j_{0}}(\mathbf{n_{j}}-\mathbf{n_{j}^{\prime}}) \prod_{\text{supp }\mathbf{n}}\tilde{q}_{j}^{\mathbf{n}_{j}}\bar{\tilde{q}}_{j}^{\mathbf{n}_{j}^{ \prime}}.\] In view of (3.26), we decompose \(\widetilde{R}\) into three parts: \[\widetilde{R}=\widetilde{R}^{(1)}+\widetilde{R}^{(2)}+\widetilde{R}^{(3)},\] where \[\widetilde{R}^{(1)} =\widetilde{\mathcal{R}},\] \[\widetilde{R}^{(2)} =\sum_{\mathbf{n}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d}}} \widetilde{R}(\mathbf{n})\sum_{|\mathbf{j}|>j_{0}}(\mathbf{n_{j}}-\mathbf{n_{j}^{\prime}}) \prod_{\text{supp }\mathbf{n}\cap A(j_{0},M^{2}/2)=\emptyset\atop\Delta(\mathbf{n})\geq M+4} \tilde{q}_{j}^{\mathbf{n}_{j}}\bar{\tilde{q}}_{j}^{\mathbf{n}_{j}^{\prime}}, \tag{3.29}\] \[\widetilde{R}^{(3)} =\sum_{\mathbf{n}\in\mathbb{N}^{2^{d}}\times\mathbb{N}^{2^{d}}} \widetilde{R}(\mathbf{n})\sum_{|\mathbf{j}|>j_{0}}(\mathbf{n_{j}}-\mathbf{n_{j}^{\prime}}) \prod_{\text{supp }\mathbf{n}\cap A(j_{0},M^{2}/2)=\emptyset\atop\Delta(\mathbf{n})\leq M+3} \tilde{q}_{j}^{\mathbf{n}_{j}}\bar{\tilde{q}}_{j}^{\mathbf{n}_{j}^{\prime}}.\] Using (3.25) and (3.27) implies \[\left\|\widetilde{R}^{(1)}+\widetilde{R}^{(2)}\right\|_{j_{0},M^{2},r/2}\leq \epsilon^{M+1}. \tag{3.30}\] Now consider the monomials in \(\widetilde{R}^{(3)}\). Recalling that \[\Delta(\boldsymbol{n})\leq M+3,\] if \(\text{supp }\boldsymbol{n}\cap A(j_{0},M^{2}/2)=\emptyset\), then for any \(\boldsymbol{j}\in\text{supp }\boldsymbol{n}\) satisfying \[|\boldsymbol{j}|>j_{0}.\] Hence the terms in (3.29) satisfy \[\sum_{|\boldsymbol{j}|>j_{0}}(\boldsymbol{n_{j}}-\boldsymbol{n_{j}^{\prime}}) =0. \tag{3.31}\] Using (3.30) and (3.31), one has \[\frac{d}{dt}\sum_{|j|>j_{0}}|\tilde{q}_{j}(t)|^{2}\leq\epsilon^{M+1}.\] Integrating in \(t\), we obtain \[\sum_{|\boldsymbol{j}|>j_{0}}|\tilde{q}_{\boldsymbol{j}}(t)|^{2}\leq\sum_{| \boldsymbol{j}|>j_{0}}|\tilde{q}_{\boldsymbol{j}}(0)|^{2}+\epsilon^{M+1}t. \tag{3.32}\] Note that the symplectic transformation only acts on the \(M^{2}\)-neighborhood of \(\left\{\boldsymbol{j}\in\mathbb{Z}^{d}:\ |\boldsymbol{j}|=j_{0}\right\}\). 
We obtain \[\sum_{|\boldsymbol{j}|>j_{0}+M^{2}}|q_{\boldsymbol{j}}(t)|^{2}\leq\sum_{|\boldsymbol{j}|>j_{0}}|\tilde{q}_{\boldsymbol{j}}(t)|^{2},\] which together with (3.32) gives \[\sum_{|\boldsymbol{j}|>j_{0}+M^{2}}|q_{\boldsymbol{j}}(t)|^{2}\leq\sum_{|\boldsymbol{j}|>j_{0}}|\tilde{q}_{\boldsymbol{j}}(0)|^{2}+\epsilon^{M+1}t.\] On the other hand, the Hamiltonian flow preserves the \(\ell^{2}\)-norm. So we have \[\sum_{|\boldsymbol{j}|>j_{0}}|\tilde{q}_{\boldsymbol{j}}(0)|^{2}=\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}|q_{\boldsymbol{j}}(0)|^{2}-\sum_{|\boldsymbol{j}|\leq j_{0}}|\tilde{q}_{\boldsymbol{j}}(0)|^{2}<\sum_{|\boldsymbol{j}|>j_{0}-M^{2}}|q_{\boldsymbol{j}}(0)|^{2}.\] Hence one has \[\sum_{|\boldsymbol{j}|>j_{0}+M^{2}}|q_{\boldsymbol{j}}(t)|^{2}\leq 2\delta,\] for \[|t|\leq\delta\cdot\epsilon^{-M}.\] ## 4. Estimates on the measure Let \[\omega=(\omega_{\boldsymbol{j}})_{\boldsymbol{j}\in\mathbb{Z}^{d}}\] with \[\omega_{\boldsymbol{j}}=V_{\boldsymbol{j}}(\boldsymbol{\theta},\boldsymbol{\alpha}),\] which is given by (1.2). Define the resonant set \(\mathfrak{R}(\boldsymbol{k})\) with \(0\neq\boldsymbol{k}\in\mathbb{Z}^{\mathbb{Z}^{d}}\) by \[\mathfrak{R}\left(\boldsymbol{k}\right)=\left\{(\boldsymbol{\theta},\boldsymbol{\alpha}):\ \left|\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}k_{\boldsymbol{j}}\omega_{\boldsymbol{j}}\right|<\left(\frac{\gamma}{\left(2LM^{2}j_{0}\right)^{2(d+1)}}\right)^{10L^{4}M^{4}}\right\},\] and \[\mathfrak{R}(\gamma,L,M,j_{0})=\bigcup_{\boldsymbol{k}}^{*}\mathfrak{R}(\boldsymbol{k}),\] where the union \[\bigcup_{\boldsymbol{k}}^{*}\] is taken over \(\boldsymbol{k}\) satisfying \[\text{supp }\boldsymbol{k}\cap A(j_{0},M^{2})\neq\emptyset \tag{4.1}\] and \[\Delta(\boldsymbol{k})+|\boldsymbol{k}|\leq M+2. \tag{4.2}\] Denote by \(\text{meas}(\cdot)\) the Lebesgue measure. Then one has **Lemma 4.1**.: \[\text{meas}(\mathfrak{R}(\gamma,L,M,j_{0}))\leq\gamma. \tag{4.3}\] The proof of Lemma 4.1 is similar to that in [11]. For the reader's convenience, we reproduce the main steps below. For completeness, we also include the main lemmas used in [11] in the Appendix. Proof.: Note that the number of \(\boldsymbol{k}\) satisfying (4.1) and (4.2) is less than \[\left(j_{0}M^{2}\right)^{d}\cdot(2M+1)^{d(M+2)}\leq\left(j_{0}M^{2}\right)^{2dM}.\] Therefore it suffices to estimate \(\text{meas}(\mathfrak{R}(\boldsymbol{k}))\). 
Recalling (1.2), one has \[\sum_{\boldsymbol{j}\in\mathbb{Z}^{d}}k_{\boldsymbol{j}}\omega_{\boldsymbol{j}}=(\boldsymbol{k}\otimes\boldsymbol{v})\cdot\boldsymbol{V},\] where \[\boldsymbol{k}\otimes\boldsymbol{v}=(k_{\boldsymbol{j}}v_{\boldsymbol{\ell}})_{\boldsymbol{j}\in\text{supp }\boldsymbol{k},\ \boldsymbol{\ell}\in\Gamma_{L}},\] \[\boldsymbol{V}=(\cos 2\pi\boldsymbol{\ell}\cdot(\boldsymbol{\theta}+\boldsymbol{j}\boldsymbol{\alpha}))_{\boldsymbol{j}\in\text{supp }\boldsymbol{k},\ \boldsymbol{\ell}\in\Gamma_{L}}.\] Choose some suitable vector \(\boldsymbol{\xi}=(\widetilde{\boldsymbol{\xi}},\widehat{\boldsymbol{\xi}})\in(0,1]^{d}\times(0,1]^{d}\), to be specified shortly, and define \[d_{\boldsymbol{\xi}}=\sum_{i=1}^{d}\left(\widetilde{\xi}_{i}\frac{\partial}{\partial\alpha_{i}}+\widehat{\xi}_{i}\frac{\partial}{\partial\theta_{i}}\right).\] Hence for \(s\geq 1\), one obtains \[d_{\boldsymbol{\xi}}^{2s}\cos 2\pi\boldsymbol{\ell}\cdot(\boldsymbol{\theta}+\boldsymbol{j}\boldsymbol{\alpha})=(-1)^{s}(2\pi)^{2s}((\boldsymbol{\ell}\boldsymbol{j})\cdot\widetilde{\boldsymbol{\xi}}+\boldsymbol{\ell}\cdot\widehat{\boldsymbol{\xi}})^{2s}\cos 2\pi\boldsymbol{\ell}\cdot(\boldsymbol{\theta}+\boldsymbol{j}\boldsymbol{\alpha}),\] where \[d_{\boldsymbol{\xi}}^{s+1}\cos 2\pi\boldsymbol{\ell}\cdot(\boldsymbol{\theta}+\boldsymbol{j}\boldsymbol{\alpha})=d_{\boldsymbol{\xi}}\left(d_{\boldsymbol{\xi}}^{s}\cos 2\pi\boldsymbol{\ell}\cdot(\boldsymbol{\theta}+\boldsymbol{j}\boldsymbol{\alpha})\right).\] This motivates us to consider the Wronskian \[W=\left[d_{\boldsymbol{\xi}}^{2s}V_{(\boldsymbol{j},\boldsymbol{\ell})}\right]_{(\boldsymbol{j},\boldsymbol{\ell})}^{1\leq s\leq R}\text{ with }R=(\#\text{supp }\boldsymbol{k})\cdot(\#\Gamma_{L}),\] which is an \(R\times R\) real matrix. Direct computations show \[|\det W|=A_{1}\cdot A_{2}\cdot A_{3},\] where \[A_{1}=\prod_{\boldsymbol{j},\boldsymbol{\ell}}|\cos 2\pi\boldsymbol{\ell}\cdot(\boldsymbol{\theta}+\boldsymbol{j}\boldsymbol{\alpha})|,\] \[A_{2}=\prod_{\boldsymbol{j},\boldsymbol{\ell}}(2\pi)^{2}((\boldsymbol{\ell}\boldsymbol{j})\cdot\widetilde{\boldsymbol{\xi}}+\boldsymbol{\ell}\cdot\widehat{\boldsymbol{\xi}})^{2},\] and \[A_{3}=\prod_{(\boldsymbol{j},\boldsymbol{\ell})\neq(\boldsymbol{j}^{\prime},\boldsymbol{\ell}^{\prime})}(2\pi)^{2}\Big{|}\left((\boldsymbol{\ell}\boldsymbol{j})\cdot\widetilde{\boldsymbol{\xi}}+\boldsymbol{\ell}\cdot\widehat{\boldsymbol{\xi}}\right)^{2}-\left((\boldsymbol{\ell}^{\prime}\boldsymbol{j}^{\prime})\cdot\widetilde{\boldsymbol{\xi}}+\boldsymbol{\ell}^{\prime}\cdot\widehat{\boldsymbol{\xi}}\right)^{2}\Big{|}\,.\] If we choose a Diophantine vector \(\boldsymbol{\xi}\), then we have \[A_{2}\cdot A_{3}\geq\left(\frac{\gamma}{\left(2LM^{2}j_{0}\right)^{2(d+1)}}\right)^{10L^{3}M^{3}}, \tag{4.4}\] by using that \(\Gamma_{L}\subset\mathbb{Z}^{d}\) satisfies Properties (a) and (b) (see (1.2) for the details). We now estimate the lower bound of \(A_{1}\). First note that there are \(R\) terms in the product \(\prod_{\boldsymbol{j},\boldsymbol{\ell}}\), with \(R\leq M(2L+1)\). Hence it suffices to estimate \[|\cos 2\pi\boldsymbol{\ell}\cdot(\boldsymbol{\theta}+\boldsymbol{j}\boldsymbol{\alpha})|\] for fixed \(\boldsymbol{j}\in\operatorname{supp}\boldsymbol{k},\ \boldsymbol{\ell}\in\Gamma_{L}\). 
Let \(\left\|x\right\|_{\mathbb{T}/2}=\operatorname{dist}(x,\mathbb{Z}/2)\). Using \[4\left\|x-\frac{1}{4}\right\|_{\mathbb{T}/2}\leq|\cos 2\pi x|\leq 2\pi\left\|x-\frac{1}{4}\right\|_{\mathbb{T}/2},\] there exists a subset \(\Pi_{\boldsymbol{j},\boldsymbol{\ell}}\) (a union of intervals) satisfying \[\operatorname{meas}(\Pi_{\boldsymbol{j},\boldsymbol{\ell}})\leq\frac{\gamma}{\left(2LM^{2}j_{0}\right)^{2(d+1)}},\] such that for any \((\boldsymbol{\theta},\boldsymbol{\alpha})\in[0,1]^{2d}\setminus\Pi_{\boldsymbol{j},\boldsymbol{\ell}}\), one has \[|\cos 2\pi\boldsymbol{\ell}\cdot(\boldsymbol{\theta}+\boldsymbol{j}\boldsymbol{\alpha})|\geq\frac{\gamma}{\left(2LM^{2}j_{0}\right)^{2(d+1)}}.\] Hence for each \((\boldsymbol{\theta},\boldsymbol{\alpha})\in[0,1]^{2d}\setminus\bigcup_{\boldsymbol{j},\boldsymbol{\ell}}\Pi_{\boldsymbol{j},\boldsymbol{\ell}}\), one has \[|\det W|\geq\left(\frac{\gamma}{\left(2LM^{2}j_{0}\right)^{2(d+1)}}\right)^{20L^{3}M^{3}}. \tag{4.5}\] By (4.5) and Lemmas 5.1 and 5.3 in the Appendix, we obtain \[\operatorname{meas}(\mathfrak{R}(\boldsymbol{k}))\leq\left(\frac{\gamma}{\left(2LM^{2}j_{0}\right)^{2(d+1)}}\right)^{10LM},\] and further \[\operatorname{meas}(\mathfrak{R}(\gamma,L,M,j_{0}))\leq\gamma,\] which finishes the proof of (4.3). ## 5. Appendix We collect below the lemmas used in the proof of Lemma 4.1. **Lemma 5.1** ([1]).: _Let \(\mathbf{v}^{(1)},\cdots,\mathbf{v}^{(r)}\in\mathbb{R}^{r}\) be \(r\) linearly independent vectors with \(|\mathbf{v}^{(l)}|_{1}\leq M\) for \(1\leq l\leq r\). Then for any \(\mathbf{w}\in\mathbb{R}^{r}\), we have_ \[\max_{1\leq l\leq r}|\mathbf{w}\cdot\mathbf{v}^{(l)}|\geq r^{-3/2}M^{1-r}|\mathbf{w}|_{2}\cdot|\det\left[\mathbf{v}^{(l)}\right]_{1\leq l\leq r}|.\] **Lemma 5.2** ([14]).: _Let \(I\subset\mathbb{R}\) be an interval of finite length (i.e., \(0<|I|<\infty\)) and \(k\geq 1\). If \(f\in C^{k}(I;\mathbb{R})\) satisfies_ \[\inf_{x\in I}\left|\frac{d^{k}}{dx^{k}}f(x)\right|\geq A>0,\] _then for all \(\gamma>0\),_ \[\operatorname{meas}(\{x\in I:\ |f(x)|\leq\gamma\})\leq\zeta_{k}\left(\frac{\gamma}{A}\right)^{\frac{1}{k}},\] _where \(\zeta_{k}=k(k+1)\left((k+1)!\right)^{\frac{1}{k}}\)._ **Lemma 5.3** ([15]).: _Fix \(k\in\mathbb{N}\) and let \(I=I_{\mathbf{a},\mathbf{b}}=\prod_{i=1}^{d}[a_{i},b_{i}]\subset\mathbb{R}^{d}\). Assume that the function \(f\in C^{k+1}(I;\mathbb{R})\) satisfies for some \(A>0\),_ \[\inf_{\mathbf{x}\in I}\sup_{1\leq l\leq k}\left|d^{l}_{\mathbf{\beta}}f(\mathbf{x})\right|\geq A,\] _where_ \[d_{\mathbf{\beta}}:=\sum_{i=1}^{d}\beta_{i}\partial_{i},\ \partial_{i}:=\frac{\partial}{\partial x_{i}},\] _with \(\mathbf{\beta}=(\beta_{1},\cdots,\beta_{d})\in\mathbb{R}^{d}\setminus\{0\}\) and_ \[d^{l}_{\mathbf{\beta}}f(\mathbf{x})=d_{\mathbf{\beta}}(d^{l-1}_{\mathbf{\beta}}f(\mathbf{x}))\] _for \(l\geq 1\). 
Let_ \[\|f\|_{k+1}=\sup_{\mathbf{x}\in I}\sup_{1\leq|\mathbf{\gamma}|\leq k+1}|\partial^{ \gamma}f(\mathbf{x})|<\infty\] _with_ \[|\mathbf{\gamma}|=\sum_{i=1}^{d}\gamma_{i},\ \partial^{\gamma}=\partial_{1}^{ \gamma_{1}}\cdots\partial_{d}^{\gamma_{d}}.\] _Then for \(0<\varepsilon<A<1,\)_ \[\operatorname{meas}(\{\mathbf{x}\in I:\ |f(\mathbf{x})|\leq\varepsilon\})\] \[\leq C(\mathbf{\beta},k,d)(\|f\|_{k+1}|\mathbf{b}-\mathbf{a}|_{2}+1)^{d}|\mathbf{ a}\vee\mathbf{b}|^{d-1}A^{-(d+\frac{1}{k})}\varepsilon^{\frac{1}{k}},\] _where \(C=C(\mathbf{\beta},k,d)>0\) depends only on \(\mathbf{\beta},k,d\) (but not on \(f\)) and_ \[|\mathbf{a}\vee\mathbf{b}|=\sum_{i=1}^{d}\max(|a_{i}|,|b_{i}|).\] **Remark 5.1**.: Lemmas 5.1 and 5.2 are two of the ingredients used in the proof of Lemma 5.3. ## Acknowledgments H. Cong was supported by NSFC (11671066). Y. Shi was supported by NSFC (12271380). W.-M. Wang acknowledges support from the CY Initiative of Excellence, "Investissements d'Avenir" Grant No. ANR-16-IDEX-0008.
2306.17780
On the Tremaine-Weinberg method: how much can we trust gas tracers to measure pattern speeds?
Pattern speeds are a fundamental parameter of the dynamical features (e.g. bars, spiral arms) of a galaxy, setting resonance locations. Pattern speeds are not directly observable, so the Tremaine-Weinberg (TW) method has become the most common method used to measure them in galaxies. However, it has not been tested properly whether this method can straightforwardly be applied to gas tracers, despite this being widely done in the literature. When applied to observations, the TW method may return invalid results, which are difficult to diagnose due to a lack of ground truth for comparison. Although some works applying the TW method to simulated galaxies exist, only stellar populations have been tested. Therefore, here we explore the applicability of the TW method for gas tracers, by applying it to hydrodynamical simulations of galaxies, where we know the true value of the bar pattern speed. We perform some simple tests to see if the TW method has a physically reasonable output. First, we add different kinds of uncertainties (e.g. in position angle or flux) to the data to mock observational errors based on the magnitude of uncertainty present in the observations. Second, we test the method on 3D simulations with chemical networks. We show that in general, applying TW to observations of gas will not recover the true pattern speed. These results have implications for many "pattern speeds" reported in the literature, and based on these tests we also give some best practices for measuring pattern speeds using gas tracers going forwards.
Olga Borodina, Thomas G. Williams, Mattia C. Sormani, Sharon Meidt, Eva Schinnerer
2023-06-30T16:33:36Z
http://arxiv.org/abs/2306.17780v1
# On the Tremaine-Weinberg method: how much can we trust gas tracers to measure pattern speeds? ###### Abstract Pattern speeds are a fundamental parameter of the dynamical features (e.g. bars, spiral arms) of a galaxy, setting resonance locations. Pattern speeds are not directly observable, so the Tremaine-Weinberg (TW) method has become the most common method used to measure them in galaxies. However, it has not been tested properly whether this method can straightforwardly be applied to gas tracers, despite this being widely done in the literature. When applied to observations, the TW method may return invalid results, which are difficult to diagnose due to a lack of ground truth for comparison. Although some works applying the TW method to simulated galaxies exist, only stellar populations have been tested. Therefore, here we explore the applicability of the TW method for gas tracers, by applying it to hydrodynamical simulations of galaxies, where we know the true value of the bar pattern speed. We perform some simple tests to see if the TW method has a physically reasonable output. First, we add different kinds of uncertainties (e.g. in position angle or flux) to the data to mock observational errors based on the magnitude of uncertainty present in the observations. Second, we test the method on 3D simulations with chemical networks. We show that in general, applying TW to observations of gas will not recover the true pattern speed. These results have implications for many "pattern speeds" reported in the literature, and based on these tests we also give some best practices for measuring pattern speeds using gas tracers going forwards. keywords: galaxies: kinematics and dynamics - galaxies: fundamental parameters - galaxies: structure ## 1 Introduction In the local Universe, 30% to 50% of galaxies are barred (Sheth et al., 2008; Binney and Tremaine, 2008; Aguerri et al., 2009). Bars are believed to rotate with a well-defined pattern speed, which is one of the most important parameters because it sets the location of the corotation and Lindblad resonances. Bars can also have a profound impact on galaxy evolution, causing starburst events at the interface with spiral arms (Beuther et al., 2017) and leading to star formation suppression along the bar (Querejeta et al., 2021). Furthermore, pattern speeds are a possible key to understanding the interaction between the bar and the dark matter halo (Hernquist and Weinberg, 1992; Debattista and Sellwood, 2000; Weinberg and Katz, 2007; Beane et al., 2022). Therefore, the accurate measurement of pattern speeds is vital to understanding large-scale dynamical structures in galaxies. Pattern speeds cannot be observed directly, but there are several different ways to measure them. For example, we can estimate the pattern speed from the velocity at radii that correspond to resonances in the disc (Elmegreen et al., 1989; Kuno et al., 2000), or we can match a simulation, where the pattern speed is already known, to observed galaxies (Weiner et al., 2001; Hirota et al., 2009; Lin et al., 2013; Sormani et al., 2015). Galaxy modelling has been employed to robustly define the pattern speed, and it can produce smaller uncertainties in the measurements (Hunter et al., 1988; Sempere et al., 1995; Salo et al., 1999; Rautiainen et al., 2008; Kalapotharakos et al., 2010). However, the physical link between observational features (e.g. 
bar ends) and dynamical features, like corotation, is still not firmly established or expected in all cases (Kranz et al., 2003; Williams et al., 2021). Plus, running suites of bespoke simulations is computationally expensive, limiting the usefulness of these methods. Tremaine and Weinberg (1984) developed a model-independent method to calculate pattern speeds. It has become favored due to its apparent simplicity, requiring only line-of-sight velocity and brightness information along the direction parallel to the galaxy major axis. In most modern applications one can get the pattern speed even in a single observation with interferometric imaging (e.g., HI, CO with VLA and ALMA respectively), Fabry-Perot observations (Debattista and Williams, 2004; Chemin and Hernandez, 2009) and more recently using wide field of view optical integral-field unit (IFU) spectroscopy (Guo et al., 2019; Cuomo et al., 2019, 2020) (e.g., stars or H\(\alpha\) with MUSE). During the last decades the TW method has been applied to many different tracers such as stars (Gerssen et al., 2003; Corsini et al., 2007; Cuomo et al., 2020; Buttitta et al., 2022) or gas: Hi (Bureau et al., 1999; Banerjee et al., 2013), CO (Zimmer et al., 2004), and H\(\alpha\) (Chemin and Hernandez, 2009). At first sight using gas as a tracer is preferable, as its emission lines are bright and easier to study than composite stellar spectra, which require complex modelling. Therefore, the application of the TW method to CO and Hi has regularly been used over the past three decades to measure pattern speeds, as it is usually assumed that stars and gas have the same pattern speed (Sellwood and Wilkinson, 1993). Given the purely data-driven nature of the TW method, it is also straightforward to apply to simulations. For example, Roshan et al. (2021) have used the TW method to calculate pattern speeds from single snapshots in the IllustrisTNG (Nelson et al., 2018; Pillepich et al., 2019) and EAGLE simulations (Schaye et al., 2015). However, using data from the Physics at High Angular resolution in Nearby GalaxieS (PHANGS) survey, Williams et al. (2021) showed that when we apply the TW method to different tracers it can yield different "pattern speeds", indicating that different tracers (e.g. stars or ionised/molecular gas) may be compromised in different ways. Therefore, we need to carefully test this method on mock data. For instance, we can apply the TW method to \(N\)-body simulations (Debattista, 2003; Gerssen and Debattista, 2007; Zou et al., 2019; Guo et al., 2019) to study the limitations of the method by comparing the output of the method with the pattern speed that was set by the model (a ground truth or GT). These works found that the inclination range and bar position alignment with the position angle of the galaxy on the sky should be restricted (the reasons for this will be discussed later in sect. 2.3). It is also clear from these simple simulations that the method is extremely sensitive to position angle (PA) measurements (Debattista, 2003). Despite previous work, it is still not fully understood to what extent the TW method can be applied to gas tracers. The biggest concern is that the gas does not obey the continuity equation, which is one of the fundamental assumptions of the TW method (see sect. 2.3). This is caused by the baryon life cycle, i.e. atomic gas is converted into molecular gas, then star formation processes molecular clouds into stars that ionize the gas (Schinnerer et al., 2019). 
Furthermore, this method has never been tested properly on 3D hydrodynamical simulations. Therefore, here we revisit the question of the applicability of the TW method to gas tracers. The structure of this paper is as follows: we first describe the simulations we use and how we create mock observational data (Section 2). Then we briefly explain the method itself. In Section 3 we present the main results and then describe their implications in Section 4. Finally, we summarize the conclusions in Section 5. ## 2 Method ### The simulations In this work, we have used two different kinds of simulations. First, we use a simple 2D isothermal simulation of gas flow in an external bar potential. These simulations have a smooth gas density distribution and allow us to test the TW method in the simplest possible setup. Then, we use the 3D simulation presented in Sormani et al. (2018), which includes non-equilibrium chemical networks. The external gravitational potential of the bar is the same as in the first simulation. These 3D simulations have a clumpy interstellar medium (ISM), and the continuity equation does not apply to individual tracers such as CO and Hi, which trace the molecular and atomic gas phases. These more complex simulations, therefore, allow us to assess how the TW method performs when some of the underlying assumptions of the method (smoothness and absence of sources and sinks) are not perfectly satisfied. #### 2.1.1 Numerical setup of the 2D simulations We ran 2D isothermal non-selfgravitating hydrodynamical simulations in an externally imposed, rotating barred potential. We use the public grid code Pluto (Mignone et al., 2007) version 4.3. The external gravitational potential is exactly the same as described in Section 3.2 of Ridley et al. (2017), and the simulations here are very similar to those described in that paper except for the different code and grid geometry used (Cartesian in Ridley et al., 2017 vs. polar here, for better resolution in the centre). The gravitational potential is constructed to reproduce the properties of the Milky Way, which serves here as a template for a barred galaxy. The exact same potential is also used in the 3D simulations from Sormani et al. (2018) described below. The potential is assumed to be rigidly rotating with a bar pattern speed of \(\Omega_{\rm p}=40\,\rm km\,s^{-1}\,\rm kpc^{-1}\). We assume that the gas is isothermal, i.e. \[P=c_{\rm s}^{2}\Sigma\,, \tag{1}\] where the sound speed is \(c_{\rm s}=10\,\rm km\,s^{-1}\) and \(\Sigma\) describes the gas surface density. We neglect the gas self-gravity and its associated additional physics. The equations of motion in the frame co-rotating with the bar are the continuity and Euler equations: \[\partial_{t}\Sigma+\nabla\cdot(\Sigma\mathbf{v})=0 \tag{2}\] \[\partial_{t}\mathbf{v}+(\mathbf{v}\cdot\nabla)\,\mathbf{v}=-c_{\rm s}^{2}\frac{\nabla\Sigma}{\Sigma}-\nabla\Phi-2\Omega_{\rm p}\hat{\mathbf{e}}_{z}\times\mathbf{v}+\Omega_{\rm p}^{2}R\,\hat{\mathbf{e}}_{R} \tag{3}\] where \(\Sigma\) is the surface density, \(\mathbf{v}\) is the velocity, \((R,\theta,z)\) denote standard cylindrical coordinates, \(\hat{\mathbf{e}}_{R}\) is the unit vector in the radial direction and \(\hat{\mathbf{e}}_{z}\) the unit vector in the \(z\) direction. We use a two-dimensional static polar grid covering the region \(R\times\theta=[0.01,10]\,\rm kpc\times[0,2\pi]\). The grid is logarithmically spaced in \(R\) and uniformly spaced in \(\theta\) with \(1024\times 1024\) cells. 
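For concreteness, such a grid can be reproduced in a few lines; the following is a minimal NumPy sketch, where the array names and the use of geometric-mean cell centres are our own illustrative choices rather than the actual Pluto configuration:

```python
import numpy as np

# Minimal sketch of the static polar grid described above: 1024 radial zones,
# logarithmically spaced between R = 0.01 and 10 kpc, and 1024 uniformly
# spaced azimuthal zones covering [0, 2*pi].
R_in, R_out, NR, Ntheta = 0.01, 10.0, 1024, 1024

r_edges = np.geomspace(R_in, R_out, NR + 1)              # log-spaced interfaces
theta_edges = np.linspace(0.0, 2.0 * np.pi, Ntheta + 1)  # uniform in theta

# Cell centres: geometric mean in R (natural for a log grid), midpoint in theta.
r_c = np.sqrt(r_edges[:-1] * r_edges[1:])
theta_c = 0.5 * (theta_edges[:-1] + theta_edges[1:])
```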
We use the following parameters: RK2 time-stepping, no dimensional splitting, the hll Riemann solver and the default flux limiter. We solve the equations in the frame rotating at \(\Omega_{\rm p}\) by using the rotating_frame = yes switch. Boundary conditions are reflective on the inner boundary at \(R=0.01\,\rm kpc\) and outflow on the outer boundary at \(R=10.0\,\rm kpc\). The initial density distribution is taken to be \[\Sigma_{0}=A\exp\left(-\frac{R_{m}}{R}-\frac{R}{R_{d}}\right) \tag{4}\] where \(R_{m}=1.5\,\rm kpc\), \(R_{d}=7\,\rm kpc\) and, without loss of generality (the equations of motion are invariant under density rescaling, so the density units are arbitrary), we set \(A=1\). In order to avoid transients, we introduce the bar gradually, as is common practice in this type of simulation (e.g., Athanassoula, 1992). We start with gas in equilibrium on circular orbits in an axisymmetrised potential and then linearly turn on the non-axisymmetric part of the potential during the first \(150\,\rm Myr\), while keeping the rotation curve fixed (Fig. 1). #### 2.1.2 Numerical setup of the 3D simulations The simulation used here is the "variable" simulation from Sormani et al. (2018). We give here only a very brief overview, and refer to that paper for a more complete description. The simulations are run using the moving-mesh code Arepo (Springel, 2010; Weinberger et al., 2020). They are three-dimensional and unmagnetised, and include a live chemical network that keeps track of hydrogen and carbon chemistry. In particular, for the purposes of this paper it is important that we can calculate the amount of molecular CO and atomic Hi at each \((x,y,z)\) point. Gas self-gravity and star formation are neglected. The simulations comprise interstellar gas out to a galactocentric radius of \(R\leq 8\,\rm kpc\). ### Simulation post-processing From the simulations, we create a mock pixelated image to mimic gas observations. Initially, we have the bar aligned with the \(x\)-axis, so we rotate the galaxy by an angle \(\psi_{\rm bar}\) around the \(z\)-axis. Second, we incline the galaxy along the \(x\)-axis (which now does not in general coincide with the major axis of the bar) by an angle \(i\). We calculate the line-of-sight velocity as follows: \[v_{\rm LOS}(x,y)=v_{y}(x,y,z)\sin(i)+v_{z}(x,y,z)\cos(i)\,, \tag{5}\] where \(v_{z}(x,y,z)\equiv 0\) for the 2D simulation. For the 3D simulations, we also weight the velocity by the mass of the particles in each bin. To make these simulations appear closer to observational data, we add Gaussian noise with a standard deviation of 10 km s\({}^{-1}\) to the velocity field and a 5% uncertainty to the density of each pixel. In real observational data, we do not know the exact location of the galaxy centre. Therefore, we add a centering error, with values picked from a Gaussian distribution with a standard deviation equal to the slit width (100 pc). To mock uncertainty in position angle (PA) measurements, we add another rotation of the inclined galaxy by an angle \(\delta_{\rm PA}\), which is drawn from a normal distribution with a standard deviation of 1\({}^{\circ}\). Error estimates for flux, velocity, and PA were based on typical values from PHANGS (Physics at High Angular resolution in Nearby GalaxieS) survey ALMA data (Leroy et al., 2021), which represent the current smallest uncertainties achievable with relatively large surveys. 
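To make this post-processing chain concrete, the following minimal Python sketch builds a noisy mock disc along the lines of Eqs. (4)-(5) and evaluates the slit averages \(\langle x\rangle\) and \(\langle v\rangle\) of the TW method (Eq. 6, defined in the next subsection). It uses a pattern-free disc in solid-body rotation, anticipating the 'perfect disc' test of Section 3.2; the parameter values, array names and the plain least-squares fit (the paper uses ODR) are all illustrative assumptions, not the actual analysis code:

```python
import numpy as np

rng = np.random.default_rng(42)

# A face-on exponential disc in solid-body rotation (cf. Eq. 4).
npix, fov = 200, 16.0                      # pixels per axis, field of view [kpc]
x = np.linspace(-fov / 2, fov / 2, npix)
X, Y = np.meshgrid(x, x)                   # in-plane coordinates [kpc]
R = np.hypot(X, Y) + 1e-6
Sigma = np.exp(-1.5 / R - R / 7.0)         # surface density (arbitrary units)
Omega = 40.0                               # solid-body angular speed [km/s/kpc]
v_y = Omega * X                            # v = Omega x r  =>  v_y = Omega * x

# Project to the sky (Eq. 5 with v_z = 0) and add the mock noise.
inc = np.deg2rad(45.0)
v_los = v_y * np.sin(inc) + rng.normal(0.0, 10.0, (npix, npix))  # 10 km/s noise
Sigma_obs = Sigma * (1.0 + rng.normal(0.0, 0.05, (npix, npix)))  # 5% flux noise

# TW slit averages (Eq. 6): one (<x>, <v>) pair per pixel row ("slit").
w = Sigma_obs.sum(axis=1)
x_mean = (Sigma_obs * X).sum(axis=1) / w
v_mean = (Sigma_obs * v_los).sum(axis=1) / w

# The slope of <v> versus <x> estimates Omega_p * sin(i). For this disc with
# no pattern, the recovered value lies close to the solid-body Omega -- the
# false signal discussed in Section 3.2.
slope = np.polyfit(x_mean, v_mean, 1)[0]
print(f"Omega_p sin(i) = {slope:.1f} -> Omega_p = {slope / np.sin(inc):.1f}")
```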
### Tremaine-Weinberg method Our work is based on the formula presented by Tremaine and Weinberg (1984): \[\Omega_{\rm P}\sin(i)=\frac{\int\limits_{-\infty}^{\infty}h(y)\int\limits_{-\infty}^{\infty}v_{\rm LOS}(x,y)\Sigma(x,y)\mathrm{d}x\mathrm{d}y}{\int\limits_{-\infty}^{\infty}h(y)\int\limits_{-\infty}^{\infty}\Sigma(x,y)x\mathrm{d}x\mathrm{d}y}=\frac{\langle v\rangle}{\langle x\rangle}\,, \tag{6}\] where \(h(y)\) is the weight function, which in our case has the form of a boxcar function to represent a slit, mimicking long-slit spectroscopy, or columns of pixels in IFU data. This formula is based on the following three assumptions (Tremaine and Weinberg, 1984): 1. The disc of the galaxy is flat. 2. The disc has a single well-defined and constant pattern speed. 3. The tracer should obey the continuity equation, i.e. it has neither sources nor sinks. Because this method is designed to catch non-axisymmetric structure, any deviation from axisymmetry is assumed to be caused by the pattern. From the formula we can see that the method should be applied to moderately-inclined galaxies. For edge-on galaxies we will be unable to identify the bar, and for face-on galaxies the line-of-sight velocity is too small. The same logic can be used for bar alignment restrictions: when the bar is oriented along either the major or minor kinematic axis of a galaxy, no left-right asymmetry is present and the integral will evaluate to zero. We bin the simulation into 100 pc-sized 'pixels'. Due to computational asymmetry, rotation by \(\delta_{\rm PA}\) and centre shifting, the number of pixels on either side of the galaxy centre may not be exactly equal (i.e. \(N(x<0)\neq N(x>0)\)). Therefore, we symmetrize both \(\Sigma(x,y)\) and \(v_{\rm LOS}(x,y)\) by setting pixels without a corresponding opposite counterpart to zero, so that \(N(x<0)=N(x>0)\) along each slit, to minimise non-axisymmetries induced simply by the pixelisation process. Then we calculate \(\langle v\rangle\) and \(\langle x\rangle\) and fit the data points (\(\langle x\rangle\), \(\langle v\rangle\)) for those slits which cross the bar using orthogonal distance regression (ODR, Virtanen et al., 2020). The slope of the (\(\langle x\rangle\), \(\langle v\rangle\)) relation is then simply the pattern speed \(\Omega_{\rm P}\sin i\). We include uncertainties in the density \(\Sigma(x,y)\) and velocity \(v_{\rm LOS}(x,y)\) by adding noise values sampled from Gaussian distributions with standard deviations of 5% of each pixel's density and 10 km s\({}^{-1}\), respectively. However, different realisations of the noise in the data and of \(\delta_{\rm PA}\) lead to different pattern speed measurements. Therefore, we implement a bootstrapping procedure to estimate the uncertainty in our fits. We repeat the measurements 500 times, each time randomly sampling from our noise distribution. Then we calculate the median and the 16th and 84th percentiles to define pattern speed error bars. We will use these values as the nominal pattern speed and its uncertainties. ## 3 Results ### Hydrodynamical 2D simulations We first applied the TW method to the simple 2D hydrodynamical simulation of the galaxy. This simulation obeys the first and third assumptions (Tremaine and Weinberg, 1984), i.e. the disc is flat and the gas obeys the continuity equation by design. However, the gas never reaches a perfect steady state while flowing in the bar potential. 
Therefore, the density distribution changes slightly from snapshot to snapshot in a frame rotating with the pattern speed \(\Omega_{\rm P}=40\) km s\({}^{-1}\) kpc\({}^{-1}\). We have checked that repeating the analysis for other time snapshots does not change the conclusions in this Section. As shown in Figure 2, we recover the correct pattern speed, \(38.7\pm 4.6\) km s\({}^{-1}\) kpc\({}^{-1}\). However, we can also see that there is a second, steeper slope which corresponds to the slits crossing the center of the bar. We discuss it further in Section 4. Moreover, the slits outside the bar, i.e. where there is no pattern (yellow and blue colors in Fig. 2), have non-zero \(\langle v\rangle\) and \(\langle x\rangle\), and these points also follow the ground truth line. Intriguingly, we appear to be measuring a pattern speed even in the outer region of the disc where the density perturbations induced by the bar are expected to be negligibly small. Similarly, recent results from Williams et al. (2021) show that the use of a gas tracer can lead to erroneous pattern speed measurements. To test whether these measurements are real or simply an artifact in the data, we take a step back and perform a simple test with a disc in solid body rotation.

Figure 1: Rotation curves for each component in our simulation potential. The bar is rigidly rotating with a pattern speed of \(\Omega_{\rm P}=40\) km s\({}^{-1}\) kpc\({}^{-1}\).

### Semi-analytical 2D simulations To test when the TW method can pick up errant signal, we performed a very simple but unphysical test. We created a mock galaxy using an exponential 2D density profile, which rotates as a solid body with an angular speed of \(40\,\mathrm{km\,s^{-1}\,kpc^{-1}}\), using the positions of particles from the previously described 2D simulation as a basis. We will refer to this as our 'perfect disc model'. As before, we also added noise to the data. Due to the absence of any left-right asymmetry, we would expect to have zero values for both \(\langle v\rangle\) and \(\langle x\rangle\), because signals for \(x<0\) and \(x>0\) under the integral sign will cancel out. As we will show, this is the case but only under certain fairly strict conditions. Also, we stress that this model does not reflect reality, but we use it to illustrate the conditions under which we can pick up false signal using the TW method in a case where we should not detect any signal. #### 3.2.1 Effect of data uncertainties Firstly, we confirmed that if we have no uncertainties in the data (i.e. no noise in the density distribution, velocity, or PA), we do not measure any significant signal (left panel in Fig. 3). However, if we include either density uncertainties or PA error, then we measure a non-zero pattern speed (right panel in Fig. 3). We emphasize that the symmetrization of pixels, which was described above, does not help to cancel out the signal because it is not an edge effect: the asymmetry appears along the whole slit (see Fig. 1 in Garma-Oehmichen et al., 2020). We demonstrate this effect in Figure 4, where we calculated the pattern speed for the perfect disc model with slits that do not cross the whole galaxy but instead run from \(x=-a\) to \(x=a\). For higher inclinations we see that the pattern speed tends towards the value for solid body rotation. Nevertheless, adding 5% uncertainty to the density values is enough to create a false pattern speed signal. It happens because noise in the data is like having "clumps" and "holes". 
When \(\Sigma(-x)\neq\Sigma(x)\), pixels do not cancel out under the integral, and we see a non-zero pattern speed measurement within that slit. We can illustrate this with the simplest example: if there is only one pixel with a clump, then the TW method measures the rotation of that clump. Therefore, we measure a rotation velocity weighted by \(\Sigma(x)\,x\) (cf. Eq. 6) with those clumps. Thus, we want to find the minimal value of \(\delta_{\rm PA}\), at a fixed inclination, at which the TW method starts picking up signal that is not due to a pattern. We applied the TW method to the perfect disc model for the range of inclinations \(10^{\circ}<i<80^{\circ}\), using the bootstrap procedure for different uncertainties in PA and centering. Since this model does not have any pattern, we would expect some noise instead of a pattern speed. However, uncertainties in the data lead the TW method to pick up a signal. In Figure 5 we plot the values of \(\delta_{\rm PA}\) at which the TW method begins to produce a non-zero \(\Omega_{\rm P}\). The relative difference between the measured "pattern speed" and the solid body rotation (SBR) velocity is highlighted by the colour of the points. This shows that for \(i<20^{\circ}\), the error in PA can be up to \(1^{\circ}\) without detecting a false signal. Also, in Figure 5 we show that the level of flux uncertainty has an impact on these values. Higher values of noise result in more asymmetry, and make the constraint on position angle uncertainty more stringent.

Figure 2: Applying the TW method to our simulated 2D galaxy. _Left:_ grey-scale map shows the line-of-sight velocity field and horizontal lines indicate the centres of our 100 pc slits. Only one in every four slits is shown, for legibility. _Right:_ \(\langle v\rangle\) and \(\langle x\rangle\); each data point corresponds to the matching colour slit in the left panel. The black line shows the ground truth (GT) pattern speed and the grey line is the fit to the slits that cross the bar, which are shown as filled circles. Slits which are not included in the fitting are marked with open circles.

### The effect of bar alignment and galaxy inclination As well as limitations on \(\delta_{\rm PA}\), we also expect there to be limitations on bar alignment and galaxy inclination beyond which the TW method will break down. For example, if the galaxy is too inclined, we might not have enough slits to fit for the pattern speed (which may well be a function of resolution) and also, as shown in Figure 5, we can catch the average rotation velocity. Given that the bar is a roughly straight structure, there may also be regimes where the bar is sufficiently aligned with or perpendicular to the galaxy's kinematic major axis that we are unable to pick up any signal in the integrals. Therefore, we applied the TW method to galaxies with a range of inclinations \(2^{\circ}<i<88^{\circ}\) and a range of bar alignment angles \(0^{\circ}<\psi_{\rm bar}<180^{\circ}\), using the bootstrapping procedure described above (see end of Sect. 2.2), to test the limitations imposed by bar (mis-)alignment and galaxy inclination. In Figure 6, we see how the TW method outputs differ depending on \(i\) and \(\psi_{\rm bar}\) for two simple models - the regular 2D simulation described in Section 2.1.1 and the perfect disc (Sect. 3.2.1). Firstly, for a perfect disc with no pattern speed (see left panel of Fig. 
6) we see that for small inclinations we measure noise, as expected (see Fig. 3). However, the more inclined the galaxy, the more likely we are to catch the solid body rotation instead of the pattern speed. Second, we see that bar orientation matters (right panel: 2D hydrodynamical simulation). When the bar is located along the minor or major axis of the galactic projection we do not recover the true pattern speed. However, adding uncertainties in the PA and centering introduces some asymmetry, and the blue region on the right panel of Fig. 6 is shifted down from being centred on \(\psi_{\rm bar}=90^{\circ}\). If we do not include these uncertainties, this blue region is shifted back up to be centred at \(\psi_{\rm bar}=90^{\circ}\).

Figure 3: Applying the TW method to our perfect disc model with \(\psi_{\rm bar}=90^{\circ}\) and \(i=45^{\circ}\). _Left:_ \(\langle v\rangle\) and \(\langle x\rangle\) plot for a galaxy with \(\delta_{\rm PA}=0^{\circ}\), and without noise in densities and velocities. We can see that no signal is detected here, and the fitted pattern speed is consistent with \(0\) km s\({}^{-1}\) kpc\({}^{-1}\). _Right:_ \(\langle v\rangle\) and \(\langle x\rangle\) plot for a galaxy with \(\delta_{\rm PA}=1^{\circ}\) and noise added to the density and velocity data. The black line here shows the solid body rotation of the perfect disc and the grey line is the fit. In this case, even though no pattern is present, the TW method detects signal close to the solid body rotation of the disc.

Figure 4: Blue line shows the measured pattern speed for slits integrated between \(-a\) and \(a\) for the perfect disc model with an inclination of \(i=50^{\circ}\). It does not reach the SBR value \(\Omega=40\) km s\({}^{-1}\) kpc\({}^{-1}\) for the reasons discussed below. The gray dashed line corresponds to the value of \(\Omega/\sin(i)\).

Figure 5: Allowable \(\delta_{\rm PA}\) uncertainty for different inclinations and density noise levels. If the PA uncertainty is below the diamonds or crosses, then we do not pick up any false signal. Diamonds correspond to a flux uncertainty of 1% and crosses correspond to an uncertainty of 5%. For lower inclinations, values of \(\delta_{\rm PA}\) higher than \(1^{\circ}\) are acceptable. The colour of the points shows the difference between the fitted \(\Omega_{\rm P}\) and the GT.

We define a lower bound on the inclination of \(5^{\circ}\), which is highly dependent on the measured uncertainties of the velocities and densities. For this lower bound, we assume an uncertainty (per pixel) of 10 km s\({}^{-1}\) and 5%, respectively. We define the upper limit for inclinations as follows. We measure some signal at very low inclinations, but the difference between the measured speed and the true pattern speed is greater than a factor of two (left panel of Fig. 6). This would give an error in the corotation radius of more than 50%, so it is easy to detect this discrepancy and remove questionable slits from consideration. However, for inclinations larger than \(i_{\rm max}\) we might mistake a false signal for a real one. Therefore, from Figure 6 we see that \(i_{\rm max}=50^{\circ}\), and we should not apply the TW method to galaxies with higher inclinations. In summary, we have used 2D simulations to highlight the effects of a number of parameters, and to provide limits on various uncertainties and geometric parameters for obtaining reliable results from the TW method. 
Firstly, we suggest a lower bound to the galaxy inclination of \(5^{\circ}\) and an upper bound of \(i=50^{\circ}\). These conclusions are based on an unphysical model of a galaxy with the properties described above. Secondly, bar alignment matters too, and reasonable results are obtained for \(5^{\circ}<|\psi_{\rm bar}|<30^{\circ}\). We caution that, even when these conditions are met and the uncertainties are matched to the best currently available data, false signal may still be picked up by the TW method given the inherent noise in the observations. ### Hydrodynamical 3D simulations Given that observationally we do not have access to the total gas density but only to approximate gas tracers (Bureau et al., 1999; Zimmer et al., 2004; Chemin & Hernandez, 2009; Banerjee et al., 2013), we also wish to explore how the choice of tracer affects the results of the TW method. We use the 3D hydrodynamical simulations of Sormani et al. (2018) that include a non-equilibrium chemical network that captures the multi-phase nature of the ISM. These simulations are well-suited to our present purpose, as they can be used to predict maps of CO and Hi emission (see density maps in Appendix A), although they do not include star formation, so we will not explore the effects of using H\(\alpha\) in this study. As for the 2D simulations, we applied the same post-processing to the data, e.g. rotation and the addition of noise and imperfections; additionally, we produce a 2D image from this 3D simulation using an assumed inclination angle. First, we used the total gas content to check if the TW method measures the correct pattern speed for this model, which we would expect to be the case as, given the lack of star formation, the continuity equation will hold. Second, we applied the TW method to individual gas tracers, such as CO or Hi only, as they are commonly used observationally. Third, we constructed a mock hydrogen tracer. As cold H\({}_{2}\) is impossible to observe directly, we usually use CO as a molecular gas indicator. Therefore, we calculated the H\({}_{2}\) density as if we did not have it from the simulation: \[\Sigma=\Sigma_{\rm HI}+\Sigma_{\rm CO}\,\frac{\left<\Sigma_{\rm H_{2}}\right>}{\left<\Sigma_{\rm CO}\right>}\,, \tag{7}\] where \(\left<\Sigma_{\rm H_{2}}\right>\) and \(\left<\Sigma_{\rm CO}\right>\) are the median values of the H\({}_{2}\) and CO surface densities. This is the equivalent of a single "CO-conversion factor", \(\alpha_{\rm CO}\), in observational work. In Figure 7 we show the results of applying the TW method to these tracers. We see that for the total gas density the TW method measures the correct pattern speed. However, Hi alone typically underpredicts the pattern speed, whilst CO overpredicts it. Critically, even when combining Hi and the CO-derived H\({}_{2}\), the correct pattern speed is not recovered, indicating that these gas tracers are unsuitable for measuring pattern speeds. We repeated this study for another snapshot 5 Myr later and the results are consistent. As illustrated in Figure 8, these results are recovered for a range of different geometric configurations (as studied for the 2D models in sect. 3.3). The true pattern speed is most reliably extracted using the total gas tracer (where the gas obeys the continuity equation) at the majority of angle combinations (see Fig. 8), yielding an overall picture similar to the 2D simulation test (see right panel of Fig. 6). On the other hand, the pattern speed is typically either over- or under-estimated using the CO (right panel of Fig. 
8) or Hi (middle panel of Fig. 8) alone, respectively. It is noteworthy that there is a small range of bar alignment angles \(\psi_{\rm bar}\in[110^{\circ};120^{\circ}]\) where the true pattern speed is recovered by both tracers. We speculate that this is the result of the geometry of the bar; for these angles, velocities in the bar region are higher than in the outskirts of the galaxy. ## 4 Discussion Our work shows that for more highly inclined galaxies, the uncertainty limit on the position angle is more strict, which is opposite to the conclusion presented by Debattista (2003, see fig. 8). In theory, the more inclined the galaxy is, the more asymmetric it becomes when a position angle uncertainty \(\delta_{\rm PA}\) is included; therefore the method should not measure the correct pattern speed for high values of both the inclination \(i\) and the PA uncertainty \(\delta_{\rm PA}\). We have alluded to why this occurs in Section 3.2.1 - at higher inclination, we often catch the angular rotation curve of the galaxy, which can be confused with a pattern speed. It is likely that, for the more highly inclined simulations in Debattista (2003), this effect is dominating the "pattern speed" measurement, but is incorrectly identified as the pattern speed. Furthermore, we applied the TW method to 3D simulations to study how well the GT pattern speed can be extracted using different gas phases as independent kinematic tracers. We show that the TW method provides different results depending on which gas phase we use, even though they respond to the same bar potential. We attribute this difference primarily to the small-scale morphology of the different tracers. CO is very clumpy, which leads to measurements that consist of both the pattern speed and the average velocity field.

Figure 6: The bootstrapped TW method output for a range of angles \(i\) and \(\psi_{\rm bar}\) for the 2D test models. Color shows the relative difference between the ground truth pattern speed (\(\Omega_{\rm GT}\)) and the measured pattern speed (\(\Omega_{\rm P}\)) – gray indicates agreement and increasing blue (red) shows increasing over- (under-) prediction. _Left:_ Results for the perfect disc test, with no bar and hence no pattern speed. For small inclinations pattern speed measurements are random. _Right:_ Results for the 2D simulations, including a bar.

We can similarly conclude that we observe a higher "pattern speed" in the center of the 2D galaxy (Fig. 2) due to clumps and holes in this 
First, real galaxies are much less symmetric which is crucial for this method. Second, the simulations have a single well-defined pattern from a rigidly rotation bar, while real galaxies will have also additional perturbations coming from spiral arms, interac Figure 8: The bootstrapped TW method output for a range of angles \(i\) and \(\psi_{\rm bar}\) for tracers from 3D simulation. Colour shows the difference between ground truth pattern speed (\(\Omega_{\rm GT}\)) and measured pattern speed (\(\Omega_{\rm p}\)) – gray indicates agreement and increasing blue (red) shows increasing over- (under-) prediction. _Left:_ Results for total gas tracer _Middle:_ HI gas tracer _Right:_ CO gas tracer. Figure 7: TW method output for our 3D simulation when applied to different gas tracers with \(i=40^{\circ}\) and \(\psi_{\rm bar}=20^{\circ}\). _Top left:_ Total gas density; _top right:_ Hi gas tracers; _bottom left :_ CO density, and; _bottom right:_ Combination of Hi and estimation of H\({}_{2}\) obtained from CO, including a conversion factor to mimic how molecular gas masses are obtained observationally. The black line shows the ground truth pattern speed and the grey solid line is the fit to slits that cross the bar. These slits are shown as filled circles, and slits which are not included in the fitting are marked with open circles. tion with other galaxies, satellites, etc. We also have not studied the effects that star formation has on the gas distribution and kinematics - given that dwarf galaxies typically have higher star formation rate efficiencies than normal spirals (e.g. Leroy et al., 2008), this effect may be significant. However, investigating the effects of star formation and different galaxy classes is beyond the scope of this work - we leave this to future efforts. ## 5 Conclusions In this work, we have applied the Tremaine-Weinberg (TW) method to a number of simple simulations, to test where this method is able to recover correct pattern speeds, and where its reliability is compromised. We have shown that the method is often unsuitable, but our recommendations for applying it are as follows: Firstly, we would advise that the TW method can be applied only to galaxies with inclinations \(i\in[5^{\circ},50^{\circ}]\). The lower limit depends on the signal-to-noise ratio for available data, while the upper one is set by the TW method catching other sources of the signal rather than the pattern speed. This is a more strict range than recommended by earlier studies (Guo et al., 2019; Zou et al., 2019), where the range was wider and inclinations up to \(70^{\circ}\) were allowed. However, we want to stress that this conclusion is based on very simple and unrealistic model, and therefore we would suggest studies adapt such a test to match their own data. Secondly, there is also a limitation for a bar (mis-)alignment angle. Previous studies with \(N\)-body simulations (Zou et al., 2019) showed that the TW method works for a wide range \(10^{\circ}<|\psi_{\rm bar}|<75^{\circ}\). Although our 2D simulation test (Fig. 6) agrees with this conclusion, when we also include extra solid body rotation, it is clear that the range should be narrowed. Thus, the bar should be oriented towards the major axis with a misalignment angle \(5^{\circ}<|\psi_{\rm bar}|<30^{\circ}\). Thirdly, the PA should be measured with an error not higher than \(1^{\circ}\) for a galaxy inclination of \(50^{\circ}\). For less inclined galaxies, the uncertainty in PA can be up to \(10^{\circ}\) (see Fig. 5). 
Finally, by applying the TW method to a 3D simulation we conclude that the method produces incorrect results when applied to gas tracers, due to both a violation of the continuity equation and the gas morphology. Using CO data typically leads to overestimated values of the pattern speed, while Hi data leads to an underestimation. Overall, this work shows that the TW method should be used with extreme caution and strict criteria when applied to ISM tracers. Given the overall simplicity of our tests, we expect that these criteria will become even more strict when additional processes such as star formation, a live stellar potential or galaxy interactions are included. Further tests using more sophisticated galaxy simulations will be critical for assessing how well we can measure the pattern speeds of bars in galaxies using the optical IFU instruments that are now regularly producing maps of the stellar surface density and kinematics. ## Acknowledgements This work has been carried out during a Summer Internship funded by the Max Planck Society. ## Data Availability The simulation's snapshots and interactive plots are available on GitHub.
2309.08743
Active Learning for Fine-Grained Sketch-Based Image Retrieval
The ability to retrieve a photo by mere free-hand sketching highlights the immense potential of Fine-grained sketch-based image retrieval (FG-SBIR). However, its rapid practical adoption, as well as scalability, is limited by the expense of acquiring faithful sketches for easily available photo counterparts. A solution to this problem is Active Learning, which could minimise the need for labeled sketches while maximising performance. Despite extensive studies in the field, there exists no work that utilises it for reducing sketching effort in FG-SBIR tasks. To this end, we propose a novel active learning sampling technique that drastically minimises the need for drawing photo sketches. Our proposed approach tackles the trade-off between uncertainty and diversity by utilising the relationship of existing photo-sketch pairs to a photo that does not have its sketch, and augmenting this relation with its intermediate representations. Since our approach relies only on the underlying data distribution, it is agnostic of the modelling approach and hence is applicable to other cross-modal instance-level retrieval tasks as well. With experimentation over two publicly available fine-grained SBIR datasets, ChairV2 and ShoeV2, we validate our approach and reveal its superiority over adapted baselines.
Himanshu Thakur, Soumitri Chattopadhyay
2023-09-15T20:07:14Z
http://arxiv.org/abs/2309.08743v1
# Active Learning for Fine-Grained Sketch-Based Image Retrieval ###### Abstract The ability to retrieve a photo by mere free-hand sketching highlights the immense potential of Fine-grained sketch-based image retrieval (FG-SBIR). However, its rapid practical adoption, as well as scalability, is limited by the expense of acquiring faithful sketches for easily available photo counterparts. A solution to this problem is Active Learning, which could minimise the need for labeled sketches while maximising performance. Despite extensive studies in the field, there exists no work that utilises it for reducing sketching effort in FG-SBIR tasks. To this end, we propose a novel active learning sampling technique that drastically minimises the need for drawing photo sketches. Our proposed approach tackles the trade-off between uncertainty and diversity by utilising the relationship of existing photo-sketch pairs to a photo that does not have its sketch, and augmenting this relation with its intermediate representations. Since our approach relies only on the underlying data distribution, it is agnostic of the modelling approach and hence is applicable to other cross-modal instance-level retrieval tasks as well. With experimentation over two publicly available fine-grained SBIR datasets, ChairV2 and ShoeV2, we validate our approach and reveal its superiority over adapted baselines. Active Learning, Fine-Grained Sketch-Based Image Retrieval ## 1 Introduction The success of computer vision applications can be largely attributed to deep learning architectures [1, 2], which, in turn, have yielded favourable results due to their access to large-scale labelled databases [1, 2] for training. Being in the age of Big Data, enormous volumes of data are easily available; however, proper annotation of such data is a painstakingly cumbersome as well as expensive process, often requiring specialized qualifications if the task at hand demands domain expertise, such as handling medical images [1]. To alleviate this bottleneck, researchers have proposed various annotation-efficient methods [1, 2] for standard computer vision tasks like classification and segmentation. A commonly used technique is active learning [1, 2], which seeks to find the most "useful" unlabelled data samples to be annotated for learning, so as to reduce annotation cost as well as increase overall generalisability on the supervised learning task to be performed. Apart from conventional visual tasks, active learning has been applied to other domains such as video captioning [1], hand pose estimation [2] and single-image super-resolution [1]. In this paper, we embrace a paradigm shift to tackle the aforementioned challenges in a domain which is fundamentally different from traditional vision tasks - fine-grained sketch-based image retrieval (FG-SBIR) [1, 2, 3], a relatively newer direction of research compared to traditional category-level SBIR [1, 2]. As the name suggests, FG-SBIR aims at exploiting the finer sketch representations for cross-modal instance-level retrieval, achieved by learning an embedding space where sketch-photo pairs lie close to each other. The most common approach in several works [1, 2, 3] has been to train a supervised triplet loss-based model [1] that learns feature similarities between an image and its corresponding sketch, and hence requires a large number of sketch-photo pairs. 
However, drawing full sketches is both time-consuming and difficult, since it requires artistic expertise, and amateurish sketching can only lead to learning degradation. Thus, there is a need to develop a robust, annotation-efficient pipeline for FG-SBIR. Very few recent works have been proposed treading on this motivation, such as a generalisable zero-shot [3] and a semi-supervised learning [1] FG-SBIR framework. While the latter involves the training of two networks, which makes it computationally expensive, the performance of [2] is far from fully-supervised alternatives. Developing an Active Learning pipeline for FG-SBIR imposes some unique challenges. A traditional FG-SBIR model lacks a probability distribution over the samples in its output, so there is no direct way to measure the uncertainty of such a model. Hence, it becomes difficult to select a photo for labelling without having an estimate of its uncertainty from the model. Moreover, off-the-shelf active learning methods [1, 2] that were primarily proposed for classification tasks are not suitable for FG-SBIR, since for classification the learning mechanism _draws firm discriminatory boundaries_ among samples, whereas in a cross-modal instance-level retrieval setup the objective is to draw _softer decision boundaries_, as the samples belong to the same category and only differ in minute fine-grained details. Additionally, the FG-SBIR model holds the photo and sketch embeddings in a joint space, and thus selection based on only the photo or the sketch might yield previously unseen results compared to selection from a single modality. Hence, developing a sampling technique for FG-SBIR requires the handling of both modalities - the photo and its sketch. The effectiveness of an active learning pipeline depends highly upon the _technique_ of selecting samples from the unlabelled pool. As a result, a good technique for sampling photos would lead to the maximum increase in the model's performance. To this end, we propose a novel sampling strategy for active learning that utilises the evolving relations between photos and their sketches to approximate the influence of a new photo from the unlabelled pool on the model's existing knowledge. Our sampling technique is informed by the embedding space learnt using the labelled pool of photo-sketch pairs and two major aspects of an unlabelled photo - _its predicted representation_ and _its approximate potential of influence_ in the existing embedding space. While the former is the basis of our sampling technique, the latter helps in tackling the classic trade-off between uncertainty- and diversity-based sampling techniques. Specifically, to model an unlabelled photo's potential of influencing the existing embedding space, we formulate a quantity, the _violation index_. The violation index of a photo acts as a proxy for measuring the amount of confusion it could create in the existing sketch-photo pair embedding space (refer to Figure 1 for an intuitive understanding). Further, for diversity sampling, we choose \(k\)-means++ due to its ability to converge faster, as highlighted in [].

Figure 1: Intuition behind our proposed AL framework. The _violation index_ quantifies the "disturbance" introduced into the learned embedding space by the incoming photo sample, due to having a greater similarity with a previously paired sketch present in the latent space. More details are provided in Section 4.
The overall advantage is two-fold: since we adopt a relative method of approximating the influence using the task network itself, it does not need an auxiliary learner or source of knowledge. Moreover, our technique relies on the underlying distribution of the data itself and hence is agnostic of the model used. The number of samples to be chosen depends on the permissible budget of annotation; the chosen samples are then queried for their sketches to be drawn, which are then paired and put into the training examples. To sum up, the primary contributions of the presented study are as follows: (1) For the first time, we propose an _active learning_ pipeline for annotation-efficient fine-grained SBIR; (2) To this end, we formulate a novel sampling strategy that incorporates uncertainty as well as diversity to quantify the "usefulness" of an unlabelled sample for querying its label (here, sketch); (3) With suitable experimentation and comparison with adapted baselines on two publicly available FG-SBIR datasets, as well as conducting ablations on the proposed framework, we demonstrate the usefulness of our approach. ## 2 Related Works **Fine-grained SBIR:** Although SBIR was originally proposed and studied as a category-level retrieval task [], [], [], recently there has been significant interest among researchers towards exploiting the _fine-grained_ information that sketches provide [], [], [], [], [] for enhanced cross-domain matching, something other query mediums (e.g. text) fail to do. The first deep learning-based approach by Yu _et al._ [] was further enhanced using cross-domain generative-discriminative learning [] and attention mechanisms []. More recent studies include zero-shot-like cross-category FG-SBIR []; a cross-modal co-attention-based hierarchical model []; an on-the-fly SBIR setup for early retrieval []; a style-agnostic meta-learning setup []; and a semi-supervised framework [] to tackle data scarcity in FG-SBIR []. **Active Learning:** Active learning (AL) has been extensively studied for over two decades, the primary goal being to develop an effective strategy to reduce annotation effort by "actively" selecting representative samples and thereby improve learning. Apart from conventional image classification [], [], AL has been widely used for various computer vision tasks such as medical imaging [], and image and video segmentation [], [], [], among others. Broadly speaking, AL approaches may be categorized as: (1) Uncertainty-based methods [], which aim to construct a so-called acquisition function that quantifies the uncertainty of the model on unlabelled data points, based on which the points are sampled and queried for annotation; and (2) Representation-based methods [], which seek to learn a common embedding space for the labelled and unlabelled data items so as to sample the unlabelled data points that capture the most diverse regions of the embedding and thereby better represent the overall data distribution. Existing AL methods mostly deal with classification and thus cannot be directly adopted for FG-SBIR, where the paired photo and sketch need to be aligned in the vicinity of each other in the joint embedding space. This cross-modal instance-wise matching brings an exclusively different set of challenges for employing AL in FG-SBIR. 
We intend to address and tackle these through this work, which, to the best of our knowledge, is the first to introduce AL to a cross-modal instance-level retrieval problem. ## 3 Background and Problem Formulation Baseline FG-SBIR: Instead of complicated pre-training [C] or joint-training [B], we use a three-branch state-of-the-art siamese network [C] as our baseline retrieval model, which is considered to be a strong baseline to date [C]. Each branch starts from an ImageNet pre-trained VGG-16 [C], sharing equal weights. Given an input image \(I\in\mathbb{R}^{H\times W\times 3}\), we extract the convolutional feature-map \(\mathcal{F}(I)\), which upon global average pooling followed by \(l_{2}\) normalisation generates a \(d\)-dimensional feature embedding. This model is trained on triplets \(\{a,p,n\}\) consisting of an anchor sketch (\(a\)), a positive photo (\(p\)), and a negative photo (\(n\)) using the _triplet loss_. The triplet loss aims at increasing the distance between the anchor sketch and the negative photo, \(\delta^{-}=||\mathcal{F}(a)-\mathcal{F}(n)||_{2}\), while simultaneously decreasing that between the anchor sketch and the positive photo, \(\delta^{+}=||\mathcal{F}(a)-\mathcal{F}(p)||_{2}\). Therefore, the triplet loss \(\mathcal{L}\) with the margin hyperparameter \(\mu>0\) can be written as: \[\mathcal{L}=\max\{0,\delta^{+}-\delta^{-}+\mu\} \tag{1}\] During inference, given a gallery of \(M\) photos \(\{P_{i}\}_{i=1}^{M}\), we can compute a list of \(d\)-dimensional vectors as \(G=\{\mathcal{F}(P_{i})\}_{i=1}^{M}\). Now, given a query sketch \(S\) and a pair-wise distance metric, we obtain a top-q retrieved list as \(Ret_{q}(\mathcal{F}(S),G)\). If the paired (ground-truth) photo appears in the top-q list, we consider accuracy to be true for that sketch sample. Active Learning for FG-SBIR: Following the principle of active learning shown in Equation 2, we aim to minimise the number of rounds \(\mathcal{R}\) so that the fewest possible sketches need to be acquired. At the end of each round \(r\), the performance measures \(\mathcal{P}\) are recorded to quantify and understand the effectiveness of our sampling technique \(\mathcal{X}\). \[\min_{\mathcal{R}}\min_{\mathcal{L}}\left[\mathcal{X}(\mathcal{L}|\mathcal{P}_{0}\subset\cdots\mathcal{P}_{k}\subset\mathcal{P}_{U})\right]_{r=1}^{\mathcal{R}} \tag{2}\] ## 4 Proposed Method Overview: We aim to design an active learning framework specially suited for fine-grained SBIR so as to make it label-efficient. To keep our framework fairly simple, we use the baseline triplet network (described in Section 3) as the retrieval model. During the training process, we leverage active learning to select the most informative examples for querying their labels, the details of which are given in the subsequent sections. The AL mechanism is employed in training cycles; at the end of each cycle the sampling technique selects photos from the unlabelled data and queries for their paired sketches, which are provided by the Oracle (ground truth annotator) and added to the labelled training pool. ### Introducing Violation Index In this section, we introduce the core idea of our contribution - the **violation index**. Our approach is inspired by the learning process of cross-modal retrieval frameworks. When an FG-SBIR model learns, it tries to put an image and its sketch nearby in the embedding space, while pushing away other images. 
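To fix ideas before the formal definitions, the baseline triplet objective of Eq. (1) can be written compactly in PyTorch; the following is a minimal sketch in which the batch handling and the margin value are our own illustrative choices, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def triplet_loss(a, p, n, mu=0.2):
    """Triplet objective of Eq. (1). a, p, n are L2-normalised embeddings of
    the anchor sketch, positive photo and negative photo, each of shape [B, d]."""
    d_pos = (a - p).pow(2).sum(dim=1).sqrt()  # delta^+: sketch to paired photo
    d_neg = (a - n).pow(2).sum(dim=1).sqrt()  # delta^-: sketch to other photo
    return F.relu(d_pos - d_neg + mu).mean()  # max{0, d+ - d- + mu}, batch mean

# Toy usage with random embeddings (d = 256, as in the experiments):
a = F.normalize(torch.randn(8, 256), dim=1)
p = F.normalize(torch.randn(8, 256), dim=1)
n = F.normalize(torch.randn(8, 256), dim=1)
print(triplet_loss(a, p, n))
```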
Formally, consider a model \(\mathcal{M}\), the embedding of an image obtained from the final layer of the model as \(E_{I}\), and that of the corresponding sketch as \(E_{S}\). Also, let \(E_{I^{\prime}}\) be the embedding of an image other than \(E_{I}\), \(N_{L}\) be the total size of the labelled dataset \(\mathcal{D}_{L}\), and \(N_{U}\) be the total size of the unlabelled dataset \(\mathcal{D}_{U}\). To learn the similarities, following Section 3, let us consider a triplet loss function defined as follows: \[\mathcal{L}=\max\Big(\|E_{I}-E_{S}\|^{2}-\|E_{I}-E_{I^{\prime}}\|^{2}+\mu,0\Big) \tag{3}\] The objective of the learning process is to minimise \(\mathcal{L}\) over all images in the dataset (Eq 4). Hence, an ideal embedding space would comprise images lying closest to their actual sketches. However, a non-ideal embedding space is more realistic and introduces some new challenges. \[\textit{minimize}\sum_{i=1}^{N_{L}}\mathcal{L}\left(E_{I}^{(i)},E_{S}^{(i)},E_{I^{\prime}}^{(i)}\right) \tag{4}\] Now, when a new image \(I^{\prime}\) from the unlabelled set \(\mathcal{D}_{U}\) is introduced during training, its corresponding sketch might not lie closest to it in the embedding space, a condition defined in Equation 5. Hence, in the model's view, the image now seems closer to one of the existing sketches rather than its own. This phenomenon leads to perturbation in the existing embedding space. We quantify the degree of disturbance an image from the unlabelled pool of images produces in the existing embedding space and call it its _violation index_. \[\|E_{I^{\prime}}-E_{S}\|^{2}\leq\|E_{I}-E_{S}\|^{2} \tag{5}\]

Figure 2: Overall workflow of the proposed active learning framework for fine-grained SBIR. The method starts with a subset of sketch-photo pairs for learning the FG-SBIR model, following which it is used to compute the violation index of unlabeled photos with respect to the embedding space. Our sampling technique selects a suitable query set of photos which are passed to the Oracle (the ground truth sketch provider for queried photos) to obtain their sketch counterparts. These pairs are subsequently added to the labeled subset.

Formally, we define the violation index (\(VI\)) of the photo embedding \(E_{I^{\prime}}\) as: \[\textit{Violation\,Index}(E_{I^{\prime}})=\frac{1}{N_{L}}\sum_{i=1}^{N_{L}}\frac{\|E_{I}^{(i)}-E_{S}^{(i)}\|}{\|E_{I^{\prime}}-E_{S}^{(i)}\|} \tag{6}\] Thus, the violation index is an improvement over a simpler distance-based sampling technique, since it accounts for the fact that the inherent imperfections in sketches cause the cross-modal embedding space to be sensitive to new image-sketch pairs. Moreover, since the metric is built on relative distances, computing the relative similarities between a new image and existing pairs helps surface novel samples that still do not violate existing sketches or images.
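As a minimal illustration of Eq. (6), the violation index of a single unlabelled photo can be computed as follows. This is a NumPy sketch under the assumption that embeddings are stored as dense arrays; the array names and the small `eps` guard are illustrative additions.

```python
import numpy as np

def violation_index(e_new, e_img, e_sk, eps=1e-12):
    """Violation index of Eq. (6) for one unlabelled photo embedding.

    e_new : (d,)     embedding E_{I'} of the unlabelled photo
    e_img : (N_L, d) embeddings E_I of the labelled photos
    e_sk  : (N_L, d) embeddings E_S of their paired sketches
    """
    paired = np.linalg.norm(e_img - e_sk, axis=1)        # ||E_I - E_S|| per pair
    to_new = np.linalg.norm(e_new - e_sk, axis=1) + eps  # ||E_{I'} - E_S|| per pair
    return float(np.mean(paired / to_new))
```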
### VI-based Active Sampling

The violation index of an image reflects its average relative distance from existing image-sketch pairs and hints at how many of them it violates in the learned embedding space. Intuitively, images with a low violation index, or **min violating samples**, are relatively unseen in the training data: they may be novel or unique, containing features or characteristics that are not commonly found in the training set. On the other hand, images with a high violation index, or **max violating samples**, are more likely to closely resemble one or more images from the training set, which suggests that they contain familiar features already present in the training data. When considering the selection strategy in Active Learning, a naive solution could be to select the samples with minimum violation indices, i.e., the sampling technique \(X\) gives us the set \(\{x\in D_{U}:|\,\textit{VISet}_{D_{U}}\cap(-\infty,VI(E_{x}))\,|<K\}\), where \(\textit{VISet}_{D_{U}}\) is the set of violation indices for the images in the unlabelled set and \(VI(E_{x})\) is the violation index for image \(x\) (Eq 6). Although with this approach the perturbation in the existing embedding space is minimized, it loses out on the opportunity to reduce uncertainty by learning through closely resembling images. As such, there exists a tradeoff between reducing existing uncertainty and learning novel instances. Hence, a better selection strategy would be an ensemble of minimum and maximum violating image samples, i.e., a better \(X\) gives us the set

\[\{x\in D_{U}:|\,\textit{VISet}_{D_{U}}\cap(-\infty,VI(E_{x}))\,|<p\}\,\cup\]
\[\{x\in D_{U}:|\,\textit{VISet}_{D_{U}}\cap(VI(E_{x}),\infty)\,|<(K-p)\}\]

where \(p\) is an integer hyper-parameter such that \(0\leq p\leq K\). Since the violation index does not inherently capture the diversity of images, a further enhancement of our selection strategy includes diversity sampling to maximize novelty in the selected subset. To do this, we adopt a \(kmeans++\) based clustering of the image embeddings followed by violation index-based selection inside each cluster. We select \(kmeans++\) due to its ability to converge faster []. The overall workflow has been depicted in Figure 2.
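A schematic sketch of one selection round is given below. It reuses `violation_index` from the previous sketch and assumes scikit-learn's `KMeans` (with `init="k-means++"`) for the clustering step; the per-cluster budget split and the `n_clusters` value are illustrative choices, not the study's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_queries(unl_emb, e_img, e_sk, K, p, n_clusters=10):
    """One sampling round: kmeans++ clustering for diversity, then an
    ensemble of min- and max-violating picks inside each cluster."""
    vi = np.array([violation_index(e, e_img, e_sk) for e in unl_emb])
    labels = KMeans(n_clusters=n_clusters, init="k-means++").fit_predict(unl_emb)
    per_cluster = K // n_clusters
    n_min = round(per_cluster * p / K)        # share of min-violating picks
    chosen = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        order = idx[np.argsort(vi[idx])]      # cluster members, ascending VI
        chosen.extend(order[:n_min])          # lowest violation indices
        n_max = per_cluster - n_min
        if n_max > 0:
            chosen.extend(order[-n_max:])     # highest violation indices
    return np.asarray(chosen[:K])             # indices of photos to query
```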
## 5 Experiments and Results

**Datasets:** We use the QMUL-Shoe-V2 [2, 2] and QMUL-Chair-V2 [3] datasets that have been specifically designed for FG-SBIR. QMUL-Shoe-V2 contains a total of 6,730 sketches and 2,000 photos, of which we use 6,051 and 1,800 respectively for training and the rest for testing. For QMUL-Chair-V2, we split it as 1,275/725 sketches and 300/100 photos for training/testing respectively. For each photo, one of its possible sketches is selected randomly and is considered as its label. Initially, we consider 300 photo-sketch pairs as the labelled set and the rest as unlabelled (i.e. absence of their corresponding sketches).

**Implementation:** We implemented our framework in PyTorch [3] accelerated by an 11 GB Nvidia RTX 2080-Ti GPU. An ImageNet [2] pre-trained VGG-16 [3] network (embedding dimension \(d=256\)) is used as the backbone network for both sketch and photo branches. In all experiments, we use the Adam optimizer [3] with a learning rate of \(1e-4\) and batch size 16, and train the base model with a triplet objective. In the active learning setup, we conduct 5 cycles of complete training, where at the end of each round, we employ our sampling technique to add \(K\) samples to the labelled pool after the provision of their actual sketches. In each active learning round, we obtain the embeddings for the photos and sketches from the labelled set and the photos from the unlabelled set. Following this, our sampling technique utilises these embeddings to select \(K\) photos from the unlabelled set, whose corresponding sketches from the dataset are then used to label them. Once labelled, these photos and their sketches are added to the labelled set.

**Evaluation Metrics:** Following the standard FG-SBIR setting [2, 2], we quantify the performance of the sampling technique using the acc.@\(q\) metric (\(q=1,10\)), i.e. the percentage of sketches having true-match photos appearing in the top-\(q\) list, after each AL round.

### Baselines

To the best of our knowledge, there has been no previous work on active learning for fine-grained sketch-based image retrieval. Thus, we compare ours with a few SoTA active learning techniques adapted suitably for fine-grained SBIR. We choose three widely used baseline sampling techniques, namely: random sampling, kmeans sampling, and coreset sampling. As evident from our choice of baseline methods, we do not consider uncertainty-based sampling techniques due to the limitations discussed in Section 3.3. Random sampling is a widely used baseline for evaluating active sampling techniques. Here, we randomly select photos for labelling based on our predefined budget. In K-means sampling [3], we cluster photos into as many clusters as the labelling budget and select the photos closest to the cluster centroid. In case of a tie, we randomly select any one of the closest photos. Finally, Coresets sampling [3] is another diversity sampling technique in which we utilise "coresets" to find the most representative photos for sampling. To find the coresets, we use a farthest-first approach to select maximally distant photos in the embedding space.

### Performance Analysis

To compare our approach with baselines, we report the mean and one standard deviation of acc.@1 values across 5 different runs. We also report the initial accuracy of the model. Figure 3 compares our violation index-based approach and baseline techniques. Before any active learning, after training on 8% labelled data, the acc.@1 (acc.@10) was 13% (49.8%) on the Shoe dataset and 11.7% (42.8%) on the Chair dataset. Our method outperforms the baselines on both QMUL-Shoe-V2 and QMUL-Chair-V2 datasets and obtains consistently higher mean accuracies. We consider the mean accuracy differences between our approach and the baselines to further substantiate the results shown in Figure 3. On the QMUL-Chair-V2 dataset, we achieve a mean gain of 2.6% acc.@1 compared to the 3 baseline techniques. On the QMUL-Shoe-V2 dataset, we observe a mean increase of 1.1% acc.@1. Our proposed approach achieves comparable performance by utilizing only 40-50% of the dataset. Compared to the state-of-the-art acc.@1 of 36.47% on 100% of the QMUL-Shoe-V2 dataset [2], we obtain a mean acc.@1 of 29.8% by only utilizing 40% of the dataset. This is achieved when we use \(\alpha\)=0 as our hyperparameter. We also see consistent results on the QMUL-Chair-V2 dataset (using \(\alpha\)=0.7), where we obtain 49.6% acc.@1 by only using 50% of the dataset.

### Ablation Study

To better understand the effects of violation index-based sampling, we perform ablations on violation index-based selection and the choice of hyper-parameter.

**Significance of violation index and diversity-based sampling:** For this, we consider selecting only the unlabelled images that have the smallest and largest violation indices in active learning rounds performed on the QMUL-Shoe-V2 dataset. As per Figure 4, an interesting observation is made: in early active learning cycles, selecting minimum VI yields higher acc.@1 compared to maximum VI.
As the training dataset grows, this relationship is inverted, with higher acc.@1 achieved through Maximum VI-based selection. Empirical results support our hypothesis on violation indices, indicating that with less training data, increased retrieval accuracy primarily stems from selecting diverse instances. Conversely, with larger training data, reducing model uncertainty on similar instances contributes more to acc.@1. Thus, selecting maximum VI samples yields optimal results. Figure 3: Comparing VI-based active sampling (ours) with SoTA AL baselines for fine-grained SBIR. All results are reported as mean of 5 runs and an error band of \(\sigma=1.0\). Another significant aspect of our experiment involves the need for diversity-based sampling, not just violation index-based sampling. As shown in Figure 4, violation index-based selection within diverse clusters obtained from kmeans++ consistently outperforms vanilla violation-based approach and kmeans++. Thus, the violation index captures semantic relations between labeled and unlabeled datasets but does not explicitly utilize relations between unlabeled images for diverse selection. **Sensitivity to hyper-parameter:** We analyze the sensitivity of our method to the hyperparameter \(\alpha\), the results shown in Figure 5. We vary the value of \(\alpha\) from 0 to 1 with a gap of 0.1 and report the mean acc.@1 on both datasets. On the QMUL-Shoe-V2 dataset, we observe a steady increase in acc.@1 as we increase \(\alpha\). On this dataset, \(\alpha\)=0 produces the best results. On the QMUL-Chair-V2 dataset, we observe a steady increase in initial and final acc.@1 as we increase \(\alpha\). As per Figure 6 and on this dataset, \(\alpha\)=0.7 produces the best results. On both datasets, there is a sudden drop in performance as we increase \(\alpha\) from 0.0 to 0.1 and 0.2. We also present the same results compared with the baseline method in Figure 6. We believe that studying this interesting behavior is an open research avenue. Figure 4: Ablations on diversity-based sampling and violation index. We compare only VI-based sampling, diversity-based sampling, and a combination of both. Figure 5: Ablations on value of hyper-parameter \(\alpha\) on the ShoeV2 and ChairV2 datasets. ## 6 Conclusion We have proposed an active learning framework to tackle the annotation bottleneck in fine-grained SBIR. To this end, we brought forth a quantifiable metric, _violation index_, that measures the latent space displacements due to addition of a new instance. With suitable experiments and ablations, we have shown the robustness of our model compared to classification-specific AL works, especially in the low-data regimes. Our model is modality agnostic and thus can be leveraged for any cross-modal retrieval task. In future, we plan to extend our studies by investigating its applicability to such other tasks.
2309.07470
African swine fever in wild boar: investigating model assumptions and structure
African swine fever (ASF) is a highly virulent viral disease that affects both domestic pigs and wild boar. Current ASF transmission in Europe is in part driven by wild boar populations, which act as a disease reservoir. Wild boar are abundant throughout Europe and are highly social animals with complex social organisation. Despite the known importance of wild boar in ASF spread and persistence, there remain knowledge gaps surrounding wild boar transmission. To investigate the influence of density-contact functions and wild boar social structure on disease dynamics, we developed a wild boar modelling framework. The framework included an ordinary differential equation model, a homogeneous stochastic model, and various network-based stochastic models that explicitly included wild boar social grouping. We found that power law functions (transmission $\propto$ density$^{0.5}$) and frequency-based density-contact functions were best able to reproduce recent Baltic outbreaks; however, power law function models predicted considerable carcass transmission, while frequency-based models had negligible carcass transmission. Furthermore, increased model heterogeneity caused a decrease in the relative importance of carcass-based transmission. The different dominant transmission pathways predicted by each model type affected the efficacy of potential interventions, which highlights the importance of evaluating model type and structure when modelling systems with uncertainties.
Callum Shaw, Angus McLure, Kathryn Glass
2023-09-14T07:00:31Z
http://arxiv.org/abs/2309.07470v1
# African swine fever in wild boar: investigating model assumptions and structure ###### Abstract African swine fever (ASF) is a highly virulent viral disease that affects both domestic pigs and wild boar. Current ASF transmission in Europe is in part driven by wild boar populations, which act as a disease reservoir. Wild boar are abundant throughout Europe and are highly social animals with complex social organisation. Despite the known importance of wild boar in ASF spread and persistence, there remain knowledge gaps surrounding wild boar transmission. To investigate the influence of density-contact functions and wild boar social structure on disease dynamics, we developed a wild boar modelling framework. The framework included an ordinary differential equation model, a homogeneous stochastic model, and various network-based stochastic models that explicitly included wild boar social grouping. We found that power law functions (transmission \(\propto\) density\({}^{0.5}\)) and frequency-based density-contact functions were best able to reproduce recent Baltic outbreaks; however, power law function models predicted considerable carcass transmission, while frequency-based models had negligible carcass transmission. Furthermore, increased model heterogeneity caused a decrease in the relative importance of carcass-based transmission. The different dominant transmission pathways predicted by each model type affected the efficacy of potential interventions, which highlights the importance of evaluating model type and structure when modelling systems with uncertainties. ## 1 Introduction African swine fever (ASF) is a viral haemorrhagic disease caused by the African swine fever virus (ASFV), which affects both domestic pigs and wild boar populations. There is currently no vaccine, and highly virulent strains of ASFV can result in close to 100% mortality [1], while infection from moderately virulent strains has a lower mortality, ranging from 30 to 70% [2]. ASF outbreaks have the potential to devastate pig production. ASF was first detected in China in August 2018. By mid-2019, ASFV infection had killed 13,355 pigs and a further 1.2 million pigs were culled to prevent continued spread. The estimated economic impact of the outbreak was equal to 0.78% of China's 2019 gross domestic product [3]. Similarly, outbreaks in wild boar have caused significant population decline [4]. Due to this high potential for serious economic losses, ASF has been declared a notifiable disease by the World Organization for Animal Health [5]. ASF was first identified in Kenya in 1921, after the introduction of European domestic pigs [6]. Genotype I ASFV was first reported in Europe in 1957, when it was discovered in Portugal. ASF was subsequently reported in Brazil and a number of Caribbean and European countries [7]. In the summer of 2007, there was an ASF outbreak in Georgia, caused by the highly virulent genotype II ASFV. This genotype quickly spread through the Caucasus region and entered the Russian Federation by November 2007 [8]. Despite control efforts, the virus has since reached Eastern Europe and the European Union, where wild boar have been a key driver in the spread [9, 10]. In mid-2018, genotype II ASFV reached north-eastern China [11] and by early 2019, ASF had spread to 25 provinces in China [12]. Since its detection in China, there have been outbreaks in much of East and South-East Asia[13], with detection of ASF in Indonesia, Timor-Leste, and Papua New Guinea. 
The epidemiology of ASF is complex, with four primary transmission cycles identified [14]. The first is the sylvatic cycle that occurs in sub-Saharan Africa, in which ASFV circulates between warthogs and several soft tick species in the _Ornithodoros_ genus, without causing disease in the warthog. As there is no horizontal transmission between warthogs, soft ticks are essential for transmission [15]. In the second cycle, ASFV is transmitted between soft ticks and domestic pigs. This has been observed in both the Iberian peninsula [16, 17] and sub-Saharan Africa [18]. The third cycle, the domestic cycle, involves transmission through direct or indirect contact between domestic pigs. Indirect transmission via contamination is possible, as ASFV can remain viable in excretions of infected pigs for days [19] and viable for months in blood [20]. Furthermore, infectious ASFV was found to last several days in various soil matrices [21] and in shipped feed [22]. The final cycle of infection is centred on wild boar and their habitat [14]. Wild boar have been an important driver of the recent ASF outbreak in eastern Europe [23] and South Korea [24]. Environmental transmission is a key component of the wild boar transmission cycle; viable ASFV has been detected in wild boar carcasses for weeks after death [25] and wild boar have been observed interacting with conspecific carcasses and the environment around said carcasses [26]. The importance of infected carcasses in the transmission cycle is supported by recent modelling work [27, 28, 29, 30]. Wild boar are highly social animals and are often observed in groups. Most often, groups are made up of one or more sows and their piglets, but can consist of young males or females [31]. Conversely, adult male boar are often solitary. This wild boar social structure has been found to influence ASF transmission dynamics [32]. However, wild boar population ecology is not homogeneous, as group size, home range area, density, and growth rates vary between regions. Previous ASF modelling studies focusing on wild boars have been primarily individual-based (agent-based) models [27, 29, 30, 33, 34] or a homogeneous ordinary differential equation (ODE) model [28]. Individual-based models, while often the most accurate abstraction of a system, are computationally expensive and require a significant number of parameters, which may be challenging to measure due to the knowledge gaps around ASF transmission in wild boar [35]. Conversely, simple ODE models may not properly capture the complex social structure of wild boar populations. In this study, we develop a lightweight, highly adaptable ASF modelling framework that can model ASF dynamics in wild boar in a variety of settings. The framework ranges from simple ODE models to stochastic models that explicitly model wild boar group structure. The objective of the study is to highlight the model assumptions, wild boar population characteristics, and ASF transmission features that impact long-term modelled ASF behaviour.

## 2 Model overview

To accurately model an ASFV outbreak in naive wild boar populations, we extended the traditional compartmental susceptible (\(S\)), infected (\(I\)), recovered (\(R\)) model [36, 37]. We included an exposed, non-infectious class (\(E\)), and due to the importance of potential carcass transmission we included a class for the decaying carcasses of pigs that have recently died from ASF (\(C\)).
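As a preview of how these compartments interact — the full system is written out in Eq. (6) of Section 2.3 — the right-hand side of the homogeneous model can be sketched as below. The rate functions \(f_{b}\), \(f_{d}\) and \(\lambda\) and the remaining parameters are defined in the following subsections; the code is an illustrative transcription, not the implementation used in the study.

```python
import numpy as np

def seirc_rhs(t, y, beta, omega, zeta, gamma, nu, kappa, f_b, f_d, lam):
    """Right-hand side of the homogeneous SEIRC model (cf. Eq. 6).
    f_b(t, N), f_d(N) and lam(t) are the birth, death and carcass
    decay rates defined in Sections 2.1-2.3."""
    S, E, I, R, C = y
    N = S + E + I + R                              # total living population
    foi = beta * (I + omega * C) * S / (N + C)     # new infections per day
    dS = f_b(t, N) * N + kappa * R - f_d(N) * S - foi
    dE = foi - (f_d(N) + zeta) * E
    dI = zeta * E - (f_d(N) + gamma) * I
    dR = gamma * (1 - nu) * I - (f_d(N) + kappa) * R
    dC = (f_d(N) + gamma * nu) * I - lam(t) * C
    return np.array([dS, dE, dI, dR, dC])
```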
For simplicity we did not model piglets; instead we only modelled yearlings and adult pigs, which we combined to form a single adult class. The structure of the model is shown in Fig. 1, where \(N\) is the total living population, \(f_{b}(t,N)\) is the time-dependent birth rate, \(f_{d}(N)\) is the death rate, and \(\lambda(t)\) is the carcass decay rate. \(\beta\) is the transmission rate, \(\omega\) is the relative infectiousness of carcasses compared to live pigs, \(\zeta\) is the latent rate, \(\gamma\nu\) is the ASF induced mortality rate, \(\gamma(1-\nu)\) is the recovery rate, and \(\kappa\) is the waning immunity rate.

Figure 1: Underlying model structure, used in all tested models. There are five base classes: \(S\)-susceptible, \(E\)-exposed, \(I\)-infectious, \(R\)-recovered, and \(C\)-carcasses. In this model, \(N\) is the total living population, \(f_{b}(t,N)\) is the time-dependent birth rate, \(f_{d}(N)\) is the death rate, \(\lambda(t)\) is the carcass decay rate, \(\beta\) is the transmission rate, \(\omega\) is the relative infectiousness of carcasses, \(\zeta\) is the latent rate, \(\gamma\nu\) is the ASF induced mortality rate, \(\gamma(1-\nu)\) is the recovery rate, and \(\kappa\) is the waning immunity rate.

### Births

Seasonality can have a large impact on disease dynamics in ecological systems [38]. The seasonal cycle of births provides new susceptible individuals and can affect the critical community size [39]. Seasonality in wild boar births has been observed in many geographic settings [40, 41, 42, 43]. In our study, the total birth rate \(f_{b}(t,N)\) (Eq. 4) is primarily composed of a raw birth rate \(b(t)\), which is scaled to account for population-level density effects. The raw birth rate is modelled with a periodic Gaussian function similar to that used by Peel _et al._ [39]. \[b(t)=b_{0}\exp\bigg(-s\cos\bigg(\frac{\pi(t-\phi_{b})}{365}\bigg)^{2}\bigg), \tag{1}\] where \(s\) alters the pulse width, \(\phi_{b}\) alters the phase of the pulse, and \(b_{0}\) is a scaling factor to ensure the correct total yearly birth rate. As piglets are not modelled, the total raw yearly birth rate is the birth rate of young that survive the first year. Therefore \[b_{0}=\frac{0.5\,l_{t}\,l_{s}\,(1-l_{\mu})}{\int_{0}^{365}\exp\bigg(-s\cos\bigg(\frac{\pi(t-\phi_{b})}{365}\bigg)^{2}\bigg)dt}. \tag{2}\] We assume there is an even split between male and female pigs, and the birth rate is the product of the average local number of litters per year for yearlings/adults (\(l_{t}\)), the average local litter size (\(l_{s}\)), and the local probability that a piglet will not die in the first year (\(1-l_{\mu}\)).
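Equations (1) and (2) translate directly into a short sketch; the function names are illustrative, and the example values in the comment are the Baltic parameters reported later in Table 3.

```python
import numpy as np
from scipy.integrate import quad

def birth_pulse(t, b0, s, phi_b):
    """Seasonal birth pulse b(t) of Eq. (1)."""
    return b0 * np.exp(-s * np.cos(np.pi * (t - phi_b) / 365.0) ** 2)

def b0_from_litter_data(l_t, l_s, l_mu, s, phi_b):
    """Normalisation of Eq. (2): scale the pulse so the yearly birth
    rate equals the rate of young surviving their first year."""
    shape = lambda t: np.exp(-s * np.cos(np.pi * (t - phi_b) / 365.0) ** 2)
    area, _ = quad(shape, 0.0, 365.0)
    return 0.5 * l_t * l_s * (1.0 - l_mu) / area

# e.g. with the Table 3 values: b0_from_litter_data(0.9, 5.8, 0.5, 3, 75)
```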
### Deaths and carrying capacity

The logistic equation is a staple of ecological modelling, used to implement a population-level carrying capacity (\(K\)); however, it is not applicable in all contexts. The logistic equation often takes the following form: \[\frac{dN}{dt}=rN\bigg(1-\frac{N}{K}\bigg), \tag{3}\] where \(N\) is the population size and \(r\) is the net growth rate (birth rate (\(b\)) minus death rate (\(d\))). This formulation contains a density-independent term (\(rN\)) and a density-dependent term (\(rN^{2}/K\)). This allows for a net positive rate when \(N<K\) and a net negative rate when \(N>K\), so that the population returns to \(K\). Furthermore, when \(N=K\), the two terms are equal and the net rate is 0. This formulation, however, presents two issues in our setting.

The first issue is that the basic logistic equation does not allow for seasonality in the population size when \(N=K\), as the net rate is 0. This can be remedied by allowing for seasonal variation in \(K\). Furthermore, the logistic equation is a sensible formulation when \(r\) is positive; however, unwanted behaviour can occur if \(r\) is negative. If births are time dependent, for example if \(b(t)\) has the form shown in equation 1 and \(s>0\), while \(d\) is constant and \(\min(b(t))<d<\max(b(t))\), then \(r\) is positive during the birth pulse peak, yet outside this time \(r\) is negative. The second issue with the logistic equation, which occurs both when \(K\) is constant or seasonally varying, is that there is an unrealistic timing of deaths. As seen in Fig. 2, there is a pulse of deaths in the logistic equation that mirrors the pulse in births. In our model, as we did not model piglets (we modelled only births that survived to yearling/adult), we would not expect an excess of deaths to occur at the same time as the birth pulse. Furthermore, by concentrating most deaths into a pulse, this formulation may improperly model deaths outside of the birth pulse, as studies have found that adult wild boar deaths are distributed throughout the year [40, 44]. Therefore, due to the known importance of death and birth rates on disease dynamics [38], we formulated another method to implement \(K\).

To remedy the issues with the logistic equation, we assumed that the death rate was independent of the birth rate and that both the birth and death rates contain density-dependent and density-independent components. The equations take the form of: \[\begin{split} f_{b}(t,N)&=b(t)\left(\sigma+(1-\sigma)\left(\frac{K}{N}\right)^{\theta}\right),\\ f_{d}(N)&=\mu\left(\sigma+(1-\sigma)\left(\frac{N}{K}\right)^{\theta}\right),\end{split} \tag{4}\] where \(b(t)\) is the aforementioned raw birth rate, \(\sigma\) is the modifier to determine the ratio of density-independent births/deaths to total births/deaths (1 is purely density-independent, whereas 0 is purely density-dependent), \(\theta\) is the power of the density-dependent component (assumed to be 1/2), and \(\mu\) is the daily time-independent death rate. As the average simulated model time was comparable to the average wild-boar lifetime, we assumed that \(K\) was constant. As such, without ASF, when the population size is equal to \(K\), yearly births equal yearly deaths. If \(\theta=1/2\), the birth and death equations in equation 4 can be combined to form \[\frac{dN}{dt}=\sigma r(t)N+(1-\sigma)(b(t)K-\mu N)\bigg(\frac{N}{K}\bigg)^{1/2}. \tag{5}\] Equation 5 allows for seasonal variation in \(K\), decoupled births and deaths, and a variable split between the relative strength of density-dependent and density-independent processes. The effects of \(\sigma\) and its relationship to growth rate are explored in Appendix Section 1.1. As seen in Fig. 2, with the _split_ formulation there is no longer the unwanted pulse of deaths mirroring the birth pulse; instead the deaths are distributed throughout the year. A more complete comparison of the split formulation, the logistic equation, and the logistic equation with a seasonally varying \(K\), for three different ratios of \(N/K\), is shown in Appendix Fig. 2.

Figure 2: Comparison between the logistic equation and the split equations for the per pig population growth rate. The split model assumes a birth pulse (equation 1) with \(b_{0}=0.01\), \(s=3\), \(\phi_{b}=0\), \(\mu\approx 0.0367\), \(\sigma=0.75\), \(\theta=1/2\), and \(N/K=0.5\). The logistic model assumes the same birth pulse, but a death rate (\(d\)) that is 2/3 the birth rate (\(b(t)\)).
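The split rates of Eq. (4) are equally direct to transcribe; a minimal sketch, with `b` a callable such as `birth_pulse` above:

```python
def birth_rate(t, N, K, sigma, theta, b):
    """Density-modified birth rate f_b(t, N) of Eq. (4)."""
    return b(t) * (sigma + (1.0 - sigma) * (K / N) ** theta)

def death_rate(N, K, sigma, theta, mu):
    """Density-modified death rate f_d(N) of Eq. (4)."""
    return mu * (sigma + (1.0 - sigma) * (N / K) ** theta)
```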
### Model structure

We considered three ASF models based on the model structure shown in Fig. 1. The models were:

1. Homogeneous ordinary differential equation model (M1)
2. Homogeneous stochastic model (M2)
3. Heterogeneous network-based stochastic model (M3)

M1 was run with a variable time step and was defined by the following set of equations: \[\begin{split}\frac{dS}{dt}&=f_{b}(t,N)N+\kappa R-f_{d}(N)S-\beta(I+\omega C)\frac{S}{N+C},\\ \frac{dE}{dt}&=\beta(I+\omega C)\frac{S}{N+C}-(f_{d}(N)+\zeta)E,\\ \frac{dI}{dt}&=\zeta E-(f_{d}(N)+\gamma)I,\\ \frac{dR}{dt}&=\gamma(1-\nu)I-(f_{d}(N)+\kappa)R,\\ \frac{dC}{dt}&=(f_{d}(N)+\gamma\nu)I-\lambda(t)C,\end{split} \tag{6}\] where \[\lambda(t)=\lambda_{0}+\lambda_{1}\cos\left(\frac{2\pi(t+\phi_{\lambda})}{365}\right). \tag{7}\] M2 and M3 were run with a Gillespie tau-leaping algorithm [45], which allowed for demographic stochasticity. The models were run with a fixed daily time-step. The underlying structure of the two stochastic models is represented by eleven distinct processes (Appendix Table 1), which included births, deaths, transmission, and disease progression. All models simulated a single population of wild boar. Models M1 and M2 were run on a homogeneous population, while M3 was run on a network containing \(N_{g}\) groups of wild boar. The groups consisted of multiple pigs or a solitary male boar. We assumed that intra-group contact was homogeneous (due to the small average group size), but modelled the complex system of inter-group contacts with a network [46]. Nodes were groups and edges represented contact between groups. Therefore, the degree of a group (\(k\)) was the number of other groups a group could directly interact with. We assumed that, on average, each group had contact with six other groups (\(<k>\) = 6). Previous wild boar studies found that male boars have larger home-ranges than females [47, 48]. Therefore, we assumed that the larger home-ranges of male boar would result in an increased average number of contacts. To include this in the model, the \(\chi\)% of nodes with the largest \(k\) were assigned to be solitary male boar, while the remaining nodes were sow groups (of size \(N_{s}\)). An indicative population structure of M3 is given in Fig. 3. Network type is known to influence epidemic dynamics [49]; however, as the true network is unknown, we used several algorithms to generate different idealised networks with varying characteristics to investigate the effect of network type. The following three networks were tested:

1. Erdős–Rényi random network (RN) [50]
2. Barabási–Albert scale-free network (SF) [51]
3. Watts–Strogatz small-world random network (SW) [52]

The chosen networks cover a range of attributes. In RN networks, the degree distribution of each group, provided the number of groups modelled is sufficiently large, follows a Poisson distribution and clustering is low. In SF networks, the degree distribution follows a power-law (\(P(k)\propto k^{-3}\)), such that \(<k>\) is finite, but the variance is infinite. Therefore, most groups have low connectivity, but some groups (solitary boars) are highly connected and act as key hubs (nodes with high betweenness centrality) in the network. The average clustering in the SF networks generated for this study is low.
The SW network in this study was run with a rewiring probability of 0.2, which allowed the network to have high clustering. ### Transmission coefficient There are two transmission pathways for ASF included in the model: infection from live pigs, or infection from infectious carcasses of pigs that have recently died from ASF. In the homogeneous models, M1 and M2, both pathways are governed by a single transmission rate, \(\beta_{h}\). In M3, infection can come from within a pig's group or from a connected group, hence there are two rates - intra-group transmission (\(\beta_{i}\)) and inter-group transmission (\(\beta_{o}\)). To determine the role of carcasses in transmission, infection from carcasses is set to be proportional to \(\omega\beta_{x}\), where \(\omega\) is the relative infectiousness of carcasses to live pigs, a fitted parameter. In wild boar, higher densities likely lead to high contact rates [53], however, the exact contact density function is not known. A positive relationship between density and disease spread has been noted at the population level, in both classical swine fever studies [54, 55, 56] and ASF studies [57, 28, 58]. Furthermore, power functions have been shown to offer a realistic scaling of transmission rate with density in ecological settings [59, 60, 61]. Therefore, similar to Borremans _et al._[59], we tested a number of contact-density functions that scaled with density. We set \(\beta=C(\rho_{N})p\), where \(C(\rho_{N})\) is the contact rate, a dimensionless scaling factor that is a function of population level density (\(\rho_{N}\)), and \(p\) is the transmission rate. We tested four formulations of \(C(\rho_{N})\) for \(\beta_{h}\) and \(\beta_{o}\); these relationships are given in Table 1 and plots of how each formulation scales with density are shown in Fig. 4. While the model assumes that inter-group transmission is dependent on density, intra-group transmission is density independent. Bouma _et al._[62] found that intra-group pseudo-rabies transmission rates, on the group scale, were independent of population density. Therefore, the density-contact function for intra-group transmission was assumed to be frequency-based. Figure 3: Network model population structure. Sow groups are yellow, while solitary boar _groups_ are blue. Inter-group connections are shown by solid black lines. ## 3 Fitting models to recent outbreak data ### Parameters and model fitting All models relied on two sets of parameters: transmission parameters which were assumed the same across populations \(\{p_{h},p_{o},p_{i},\omega,\sigma,\theta,\zeta,\gamma,\kappa,\nu\}\), and population and location specific parameters that likely varied by region \(\{K,\rho_{NK},<\)\(k\)\(>\), \(N_{g},N_{s}\), \(\chi\), \(N_{ly}\), \(N_{ls}\), \(\mu_{py}\), \(\mu\), \(\phi_{b}\), \(\lambda_{0}\), \(\lambda_{1}\), \(\phi_{\lambda}\}\). Transmission parameters were estimated by fitting our candidate models to recent outbreak data from the Baltic states and Poland. Transmission parameters are given in Table 2, while population parameters are given in Table 3. The three models were fitted using an approximate Bayesian computation with a sequential Monte Carlo algorithm (ABC-SMC). Each model had an initial population of 5000 wild boar, which was the assumed carrying capacity of the modelled region. All models assumed that ASFV was introduced during the summer, as wild boar tend to have larger home ranges during this season [77]. 
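To make the fitting loop concrete, a simplified sketch of its accept/reject core is given below. The study uses the more efficient ABC-SMC; this plain ABC rejection version, targeting the three outbreak summary statistics listed in the next paragraph, is for illustration only, and the tolerance and scalings are assumptions.

```python
import numpy as np

# Observed summary statistics (Section 3.1): endemic prevalence,
# population decline, and days from first detected case to peak.
S_OBS = np.array([0.015, 0.75, 180.0])
S_SCALE = np.array([0.005, 0.10, 60.0])  # rough half-widths of the 95% CIs

def abc_rejection(simulate, prior_sample, n_draws=10_000, tol=1.0):
    """Schematic ABC rejection (the study itself uses ABC-SMC).
    `simulate` maps a parameter vector to the three summary statistics;
    `prior_sample` draws one parameter vector from the uniform priors."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        dist = np.abs((simulate(theta) - S_OBS) / S_SCALE).max()
        if dist < tol:
            accepted.append(theta)
    return np.array(accepted)
```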
In M1 and M2, ASF was initialised with 1% prevalence, while in M3 ASF was seeded in five connected groups. This high number was chosen to minimise the occurrence of minor outbreaks in the stochastic models [78]. In M1 and M2, which assumed homogeneous populations, the fitted parameters were \(p_{h}\) and \(\omega\). In M3 the fitted parameters were \(p_{i}\), \(p_{o}\), and \(\omega\). All parameters had semi-informative uniform priors and fitting simulations were run for six years. Model fit was judged against summary statistics. The following summary statistics were chosen from the Baltic and Polish ASF outbreaks:

* Prevalence of ASFV several years post introduction is 1.5% (95% CI [1%, 2%]) [79, 80].
* 75% decline in wild boar density during the epidemic (95% CI [65%, 85%]) [81, 4, 82].
* Infection peak occurs 180 days after the first detected case (95% CI [120 days, 240 days]) [4]. As there was no surveillance in the model, we assumed the first detected case would occur at 5% prevalence.

To assess the effect of contact-density functions, every model was fitted with the four contact functions (Table 1). To analyse the effect of network structure, M3 was also fitted with the three network types (RN, SF, and SW).

\begin{table} \begin{tabular}{l l l} \hline \hline **Function \#** & **Transmission-type** & **Description** \\ \hline \(C_{1}\) & Frequency-based & \(C(\rho_{N})=1\) \\ \(C_{2}\) & Density-based & \(C(\rho_{N})=\rho_{N}/\rho_{NK}\) \\ \(C_{3}\) & Sigmoid & \(C(\rho_{N})=\tanh(1.5\rho_{N}/\rho_{NK}-1.5)+1\) \\ \(C_{4}\) & Power-law & \(C(\rho_{N})=(\rho_{N}/\rho_{NK})^{1/2}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The four different contact-density functions tested in the models. Here, \(C(\rho_{N})\) is the contact-density function, \(\rho_{N}\) is the wild boar density (total pigs per unit area), and \(\rho_{NK}\) is the wild boar density at \(K\).

Figure 4: Plots of the four different contact-density functions tested in the models. The frequency-based contact function (\(C_{1}\)) is in blue, the density-based contact function (\(C_{2}\)) is in green, the sigmoid contact function (\(C_{3}\)) is in orange, and the power-law contact function (\(C_{4}\)) is in pink.

### M1 & M2 Model fitting results

In both the M1 (ODE) and M2 (homogeneous stochastic) models, the posterior distributions of the fitted parameters for each contact-function sub-model show largely the same behaviour. Properties of the posteriors for \(p_{h}\) and \(\omega\) are given in Appendix Table 2 and plots of the M1 and M2 posterior distributions are given in Appendix Fig. 6 and 7.
To assess the quality of fit of M1 and M2, for each contact-density formulation, we simulated both M1 and M2 using draws from the fitted posterior distributions. Each sub-model was run 10,000 times and the simulations ran for six years. We then computed summary statistics for each run and checked the goodness of fit between the simulated and observed summary statistics.

\begin{table} \begin{tabular}{l l l l} \hline \hline **Parameter** & **Description** & **Value** & **Source** \\ \hline \(p_{h}\) & Homogeneous transmission rate (M1 \& M2) & Fitted & - \\ \(p_{o}\) & Inter-group transmission rate (M3) & Fitted & - \\ \(p_{i}\) & Intra-group transmission rate (M3) & Fitted & - \\ \(\omega\) & Relative infectiousness of carcasses to live pigs & Fitted & - \\ \(\sigma\) & Proportion of density-independent births \& deaths to total births \& deaths & 0.75 & [63, 64] \\ \(\theta\) & Power coefficient of density-dependent births \& deaths & 0.5 & Assumption \\ \(1/\zeta\) & Latent period & 6 days & [65, 66, 67] \\ \(1/\gamma\) & Infectious period & 8 days & [66, 67, 68] \\ \(1/\kappa\) & Waning immunity period & 180 days\({}^{1}\) & \\ \(\nu\) & Lethality & 0.95 & [1, 69] \\ \hline \hline \end{tabular} \end{table} Table 2: Transmission based model parameters. \({}^{1}\)Immunity duration is a current knowledge gap and requires further investigation [70]. Sereda _et al._ [71] found that immunisation with attenuated ASFV strains offered at least four months protection from virulent ASFV strains.

The results for the M1 simulations are given in Fig. 5. The best fitting contact-density function was the power-law relationship (\(C_{4}\)), where 100% of the simulated summary statistics were within the 95% confidence intervals of all observed summary statistics. The next most effective contact-density function was \(C_{1}\), where 90% of simulated summary statistics were within the 95% confidence interval of all observed summary statistics. No simulated summary statistics for the density-based transmission (\(C_{2}\)) and the sigmoid transmission (\(C_{3}\)) were within the 95% confidence interval of all observed summary statistics. For contact functions \(C_{2}\) and \(C_{3}\), the endemic prevalence and population decline were within the 95% confidence intervals, but ASF spread more quickly than observed in outbreak data, with the peak in infections occurring prematurely (on average in 106 days for \(C_{2}\) and in 69.6 days for \(C_{3}\)). While similar to the M1 results, the stochastic nature of M2 caused a greater spread in simulated summary statistics (Fig. 6). Contact function \(C_{4}\) again produced the most accurate model, with approximately 47% of simulated summary statistics lying within the 95% confidence intervals of all observed statistics, and ASFV was endemic in over 97% of simulations after six years.
All other contact functions produced a poor fit and were largely unable to reproduce the observed summary statistics. Only 3% of simulated \(C_{1}\) summary statistics were within all 95% confidence intervals, and after six years ASFV was endemic in 70% of simulations. When endemic, prevalence was often too high; a third of endemic simulations had an average ASF prevalence greater than 5%.

\begin{table} \begin{tabular}{l l l l} \hline \hline Parameter & Description & Value & Source \\ \hline \(K\) & Carrying capacity of modelled region & 5000 pigs & Assumption \\ \(\rho_{NK}\) & Initial wild boar density & 2.8 pigs per unit area & [64] \\ \(<\!k\!>^{2}\) & Average number of inter-group connections & 6 connections & Assumption \\ \(N_{g}^{2}\) & Number of groups & 1000 groups & Assumption \\ \(N_{s}^{2}\) & Sow group size & 6 pigs & [64, 72, 73] \\ \(\chi^{2}\) & Ratio of solitary boar groups & 0.2 & [72] \\ \(N_{ly}\) & Number of yearly litters & 0.9 litters & [74, 75] \\ \(N_{ls}\) & Litter size & 5.8 pigs & [43, 64, 75, 76] \\ \(\mu_{py}\) & First year mortality & 0.5 & [44, 64, 75] \\ \(\mu\) & Death rate at \(K\) & 0.0036 day\({}^{-1}\) & \(0.5N_{ly}N_{ls}(1\!-\!\mu_{py})/365\) \\ \(s\) & Birth pulse width & 3 & [43] \\ \(\phi_{b}\) & Birth pulse offset & 75 days & [43] \\ \(1/\lambda_{0}\) & Base carcass decay period & 60 days & Local weather \& [25] \\ \(1/\lambda_{1}\) & Seasonal change in carcass decay period & 30 days & [25] adjusted for local conditions \\ \(\phi_{\lambda}\) & Carcass decay offset & 0 days & Local weather \\ \hline \hline \end{tabular} \end{table} Table 3: Population model parameters selected to reflect the Baltic region. \({}^{2}\) Parameters only used in the heterogeneous network model.

In the homogeneous models (M1), the two most accurate density-contact functions, \(C_{1}\) and \(C_{4}\), predicted different dominant transmission pathways. To analyse the various routes of ASF transmission, the aforementioned simulations were analysed to calculate the force of infection (\(\mathbf{\lambda}\)) from both live animals (\(\mathbf{\lambda}_{l}\)) and from carcasses (\(\mathbf{\lambda}_{c}\)). The force of infection was only calculated in endemic simulations during the endemic phase of transmission, between years three and six. Results are given in Fig. 7. In this homogeneous model (M1), the \(C_{1}\) sub-model predicted \(\mathbf{\lambda}_{l}\approx 20\mathbf{\lambda}_{c}\), while the converse was true in the \(C_{4}\) sub-model, where \(\mathbf{\lambda}_{l}\approx\frac{1}{6}\mathbf{\lambda}_{c}\). Therefore, transmission in the \(C_{4}\) sub-model was primarily driven by carcasses, while in the \(C_{1}\) sub-model transmission was dominated by live pigs and there was comparatively little carcass-based transmission. Furthermore, the total force of infection in the \(C_{1}\) sub-model was 36% larger than in the \(C_{4}\) sub-model. In the stochastic model (M2), there was a similar, albeit lesser, dichotomy in transmission predicted by \(C_{1}\) and \(C_{4}\). The \(C_{1}\) sub-model predicted \(\mathbf{\lambda}_{l}\approx 3\mathbf{\lambda}_{c}\), while in the \(C_{4}\) sub-model, \(\mathbf{\lambda}_{l}\approx\frac{1}{5}\mathbf{\lambda}_{c}\). The total force of infection of the \(C_{1}\) sub-model was 220% larger than in the \(C_{4}\) sub-model and was 140% greater than in the M1 \(C_{1}\) sub-model.
This high force of infection may stem from the inability of the M2 \(C_{1}\) sub-model to accurately reproduce the observations, as a number of simulations achieved too high an endemic prevalence, which would have caused an overestimation of \(\mathbf{\lambda}\). The total force of infection calculated from the most accurate models (the M1 \(C_{1}\), M1 \(C_{4}\), and M2 \(C_{4}\) sub-models) was comparable to that calculated by Loi _et al._ [83]. The fitting procedure outlined above was repeated for a logistic equation based ODE (LM1) and a homogeneous tau-leaping model (LM2) to quantify the effect of using the _split_ equation over the logistic equation on ASF dynamics. For LM1, the \(C_{1}\) and \(C_{2}\) functions could accurately reproduce the observed summary statistics; however, in LM2 no contact-density sub-model was able to adequately replicate the summary statistics. The results of the logistic-based simulations are given in Appendix Section 1.2.

Figure 5: Comparison between the three simulated summary statistics and the observed summary statistics for the ODE model (M1). The plot on the left compares statistics S1 and S2, while the right plot compares S1 and S3. The point estimates of the observed summary statistics are represented by the black dot, and the associated 95% confidence regions by the black rectangles. Results from the frequency-based contact function (\(C_{1}\)) are in blue, from the density-based contact function (\(C_{2}\)) in green, from the sigmoid contact function (\(C_{3}\)) in orange, and from the power-law contact function (\(C_{4}\)) in pink. All models were simulated 10,000 times. In brackets in the legend are the proportion of simulations where all three simulated summary statistics were within the 95% confidence interval of all observed summary statistics.

### M3 fitting results

For the heterogeneous tau-leaping model (M3) an ensemble of six sub-models were fitted (contact functions \(C_{1}\) and \(C_{4}\) run on each of the three network types). \(C_{2}\) and \(C_{3}\) sub-models were excluded due to their poor fit in M1 and M2. The properties of the posterior distributions of intra-group transmission (\(p_{i}\)), inter-group transmission (\(p_{o}\)), and relative infectivity of carcasses (\(\omega\)) are given in Appendix Table 3, while plots of the posterior distributions are given in Appendix Fig. 8. To assess the goodness of fit of the six M3 models, we again compared summary statistics from simulations using their fitted posteriors to the observed summary statistics. Each model was simulated 1,000 times and the results are given in Fig. 9 (full results in Appendix Table 4). Unsurprisingly, the increased heterogeneity in M3 caused greater variability in summary statistics.

Figure 6: Comparison between the three simulated summary statistics and the observed summary statistics for the stochastic homogeneous model (M2). The plot on the left compares summary statistics S1 and S2, while the right plot compares S1 and S3. The observed summary statistic is represented by the black dot, and its associated 95% confidence interval by the black rectangle. Results from the frequency-based contact function (\(C_{1}\)) are in blue, the density-based contact function (\(C_{2}\)) in green, the sigmoid contact function (\(C_{3}\)) in orange, and the power-law contact function (\(C_{4}\)) in pink. Each contact density formulation model was simulated 10,000 times. In brackets in the legend are the proportion of simulations where all three simulated summary statistics were within the 95% confidence interval of all observed summary statistics.
Across all networks, the \(C_{4}\) sub-models were better able to reproduce the observed summary statistics than the \(C_{1}\) sub-models. Furthermore, \(C_{4}\) sub-models could better model endemicity; after six years, ASFV was endemic in 51.5% of \(C_{1}\) simulations and 62% of \(C_{4}\) simulations. There were also notable differences between networks. SW networks, for both contact functions, had the lowest proportion of simulated summary statistics within the 95% confidence intervals of all observed summary statistics. \(C_{4}\) simulations on both SF and RN networks produced similar levels of fit, while a greater proportion of SF \(C_{1}\) simulations were within the observed summary statistic confidence intervals. Compared to the homogeneous stochastic model (M2), depending on the contact function, the M3 model offered comparable or better levels of fit. The M3 \(C_{1}\) sub-models had an up to 7 times greater proportion of simulations within all observed confidence intervals than the poorly fitting M2 \(C_{1}\) sub-model. In contrast, the M2 \(C_{4}\) sub-model had, on average, a 1.22 times greater proportion of simulations within the observed confidence intervals than the M3 \(C_{4}\) sub-models. For completeness, the basic reproduction number (\(R_{0}\)) and the effective reproduction number (\(R_{eff}\)) for M1 and M2 were calculated. Results are presented in Appendix Section 1.6. To analyse the different routes of ASF transmission in M3, the six sub-models were again run from their fitted posteriors, simulated 1,000 times, and the force of infection was calculated during the endemic phase of the outbreak (years 3-6). We were able to decompose the force of infection into components from living pigs and pig carcasses, and from solitary boar (\(\mathbf{\lambda_{b}}\)) or sow groups (\(\mathbf{\lambda_{s}}\)). The results for inter-group transmission are given in Fig. 8 (full results in Appendix Table 5). As the primary focus of the heterogeneous model was to investigate how ASF spreads through the network, and because the force of infection has limited utility within groups (due to the small sow group size and solitary boar groups), the force of infection was only computed for inter-group transmission. Consequently, the magnitudes of the calculated forces are not directly comparable to those from the homogeneous models (M1 and M2).

Figure 7: The yearly force of infection (\(\mathbf{\lambda}\)) from carcasses and live pigs, computed for the ODE model (M1) and the homogeneous tau-leaping model (M2), with both frequency-dependent transmission (blue) and the power-law contact-density function (pink). The force of infection was calculated during the endemic phase of the outbreak and results have been filtered to only include endemic simulations.

Infected carcasses were the dominant pathway of transmission for both contact functions (\(C_{1}\) and \(C_{4}\)), unlike M1 and M2 where transmission from live pigs dominated in \(C_{1}\) sub-models. However, similar to what was observed in M1 and M2, the total force of infection in the \(C_{1}\) sub-models was equal to or greater than that for the \(C_{4}\) sub-models. Network structure was also found to influence the force of infection, particularly the relative influence of sows or boar.
In the SW models, relative transmission from boars was lowest, with \(\mathbf{\lambda_{b}}\approx 0.25\mathbf{\lambda_{s}}\). In RN models, while sows were still the primary drivers of infection, boars had a greater influence on ASF spread as \(\mathbf{\lambda_{b}}\approx 0.56\mathbf{\lambda_{s}}\). Conversely, the SF models were dominated by boar-based transmission as \(\mathbf{\lambda_{b}}\approx 2.69\mathbf{\lambda_{s}}\). Therefore, in the SF models, although boars only comprised 4% of the population, they were responsible for approximately 70% of inter-group transmission. This large influence of boars is explained by the SF network structure; the highly connected boars (\(k\gg\ <\!k\!>\)) act as hubs in the group-network and drive inter-group transmission.

Figure 8: The force of infection (\(\mathbf{\lambda}\)) calculated for the tau-leaping heterogeneous model (M3). On the left, in blue, are sub-models that assume a frequency-based density-contact function (\(C_{1}\)); on the right, in pink, are sub-models that assume a power-law density-contact function (\(C_{4}\)). The unhatched bars are the force of infection from live boars, the diagonal-hatch bars are the force of infection from live sows, the cross-hatch bars are the force of infection from boar carcasses, and the dot-hatch bars are the force of infection from sow carcasses. The force of infection was calculated during the endemic phase of the outbreak and results have been filtered to only include endemic simulations.

Figure 9: Comparison between the three simulated summary statistics and the observed summary statistics for the tau-leaping heterogeneous model (M3). Each row is the M3 model run on a different network. The left plots compare summary statistics S1 and S2, while the right plots compare S1 and S3. The observed summary statistic is represented by the black dot, and its associated 95% confidence interval by the black rectangle. Results from the frequency-based contact function (\(C_{1}\)) are in blue and from the power-law contact function (\(C_{4}\)) are in pink. Due to computational limitations, each model was simulated 500 times. In brackets in the legend are the proportion of simulations where all three simulated summary statistics lie within the 95% confidence interval of all observed summary statistics.

## 4 Intervention

To explore the impact of model choice on potential programmatic outcome, we simulated a potential ASF intervention. Specifically, we analysed the efficacy of removing wild boar carcasses to prevent endemic ASFV transmission, as Guinat _et al._ [84] and Gervasi & Guberti [85] found that carcass removal can be a highly effective, albeit potentially impractical, intervention.
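One way to encode such a removal programme — following the decay-period scaling formalised in the next paragraph — is to modify the seasonal decay rate of Eq. (7). A minimal sketch, assuming that reducing the decay period \(1/\lambda\) by a factor \(\alpha\) corresponds to dividing it by \((1-\alpha)\):

```python
import numpy as np

def carcass_decay_rate(t, lam0, lam1, phi_lam, alpha=0.0):
    """Seasonal carcass decay rate lambda(t) of Eq. (7), with a carcass
    removal intervention that shrinks the decay period 1/lambda by a
    multiplicative factor alpha (alpha = 0 means no intervention)."""
    lam = lam0 + lam1 * np.cos(2.0 * np.pi * (t + phi_lam) / 365.0)
    return lam / (1.0 - alpha)  # shorter decay period -> faster removal
```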
We found that in the \(C_{1}\) sub-model, carcass removal did not reduce the number of simulations with endemic ASF after three years; all simulations had 100% endemicity after the intervention period. The lack of intervention success occurs because carcass-based transmission in \(C_{1}\) is negligible. The remaining three ODE models exhibited a sudden transition, from ASFV being endemic in 100% of simulations to ASFV being endemic in no simulations. This bifurcation required an approximate reduction in decay period of 50% for \(C_{3}\) sub-models, 68% for \(C_{4}\) sub-models, and 84% for \(C_{2}\) sub-models. The abrupt shift between the two states is unrealistic and may give a false prediction of potential programmatic success. M2, which included demographic stochasticity, did not display the sharp transition between endemicity and extinction that occurred in M1. Instead, compared to M1, all sub-models were characterised by a gradual increase in the number of simulations with ASFV extinction with an increase in the magnitude of \(\alpha\). ASFV extinction in 75% of simulations required an average reduction in the decay period of 97% for \(C_{1}\) sub-models, 66% for \(C_{4}\) sub-models, 49% for \(C_{2}\) sub-models, and 30% Figure 10: The effectiveness of a carcass removal intervention in both the ODE model (M1) and the tau-leaping homogeneous model (M2). Each model investigates the impact of different assumed contact-density functions: frequency-based (\(C_{1}\)) in blue, density-based (\(C_{2}\)) in green, sigmoid-based (\(C_{3}\)) in orange, and power-law based (\(C_{4}\)) in blue. for \(C_{3}\) sub-models. Unlike in M1, the \(C_{1}\) sub-model did respond to the intervention; however, again due to the negligible levels of carcass based transmission, the required intervention was large and would probably not be practically achievable. The addition of stochasticity changed the ordering of the intervention efficacy of sub-models. Importantly, the simple addition of stochasticity removed the sudden bifurcation present in M1 and produced more realistic responses to the intervention. In M3, both network and density contact function type were found to influence intervention success. For the M3 interventions, the carcass decay period was reduced by a multiplicative factor \(\alpha\in[0,0.05,\ldots 0.85,0.9]\) and each magnitude of \(\alpha\) was simulated 500 times. As with the previous analysis of M3, only \(C_{1}\) and \(C_{4}\) sub-models were included. All interventions were simulated for three years and began 90 days after ASFV was seeded in the population. For each network, after normalising for intervention-free endemicity, \(C_{1}\) sub-models required comparable or lesser intensity of intervention to achieve elimination than required by the \(C_{4}\) sub-models (Fig. 11). The increased effectiveness of the intervention in \(C_{1}\) M3 models, when compared to the \(C_{1}\) homogeneous models, Figure 11: The effectiveness of a carcass removal intervention in the tau-leaping heterogeneous model (M3). Each row is the model run on a different network structure. In each plot the model investigates the impact of different assumed contact-density functions– frequency-based (\(C_{1}\)) in blue and power-law based (\(C_{4}\)) in blue. In brackets in the legend is the proportion of simulation where ASF is endemic with no intervention. occurs as the \(C_{1}\) M3 models have a greater relative proportion of transmission from carcasses. 
Network choice influenced the likelihood of intervention success. We found that the SF network, on average, required the lowest intensity of intervention to reach the 75% elimination benchmark, while the SW and RN models required similar intensities of intervention. The better performance of the intervention on the SF network is unsurprising, given the aforementioned SF network structure and the key role boars play in transmission. If a highly connected boar becomes infected and dies, and its carcass is promptly removed by the intervention, a highly connected node is quickly removed from the network. This can disconnect regions of the network, limit spread, and cause fade-out. Similar behaviour has been noted in previous scale-free studies [49, 86].

## 5 Conclusion

Previous wild-boar-focused ASF modelling studies have either been individual-based (agent-based) models [27, 29, 30, 33, 34] or, in the case of O'Neill _et al._[28], an ODE model. This study investigates the gap between the two model types. To include a spatial component of transmission, while reducing complexity compared to an individual-based model, we used a stochastic ASF model on a network (M3). The network-based approach makes it easy to compare the effect of different network topologies on disease dynamics. This ability to analyse different networks is important due to the limited understanding of the social dynamics of wild pigs [87]. Similar to Lange & Thulke [27], Pepin _et al._[29], and Gervasi & Guberti [30], we found that, under our best-fitting contact-density function, \(C_{4}\), carcasses were essential to endemic transmission across all tested models and networks. Furthermore, similar to Pepin _et al._[29] and Gervasi & Guberti [30], we found that carcass removal was a viable intervention to control an ASF outbreak.

There are some limitations to the modelling conducted in this study that stem from the ongoing tug-of-war between complexity and simplicity. Firstly, to streamline all models, we did not model piglets, as this assumption allowed us to halve the number of classes. The addition of piglets would further increase the recruitment of susceptibles during birthing periods, which could influence ASF dynamics and allow for more spread around these periods. Similarly, we modelled yearlings and adult pigs as the same class. Yearlings and adult pigs have similar but distinct characteristics and behaviours, for example, different litter sizes and mortality rates [64]. Although carcass decay and births had a strong time dependence, the transmission rate (\(\beta\)) did not. Wild boar contact rates vary by season [40]; therefore, there should be a degree of seasonality to \(\beta\). Time dependence in transmission has been shown to alter potential outbreak sizes, induce complex dynamics in endemic scenarios, and reduce disease persistence [88]. There is a large degree of uncertainty surrounding a number of key processes in both the wild boar population and ASF spread dynamics. There is a knowledge gap on the group-level behaviour of wild boar, for example, the degree distribution of each group and the overall network characteristics. We attempted to allow for this by modelling ASF on three different networks; however, there are drawbacks with each chosen network. The SF networks have an untruncated degree distribution, which can cause some boars to have unrealistically large degrees and to be too influential in ASF dynamics. Furthermore, true scale-free networks are uncommon [89].
RN networks lack sufficient clustering and hubs, and most food webs do not exhibit small-world topology [90]. All networks in this study are static in both nodes and connections, which is likely unrealistic over the six-year simulation period; however, the extent to which a wild boar network evolves with time is unknown. Another limitation of the heterogeneous stochastic model (M3) is that after a boar dies, it remains connected to all of its previously connected groups until its carcass decays. Boars were chosen to be the highly connected nodes on the basis that they have the largest home range, meaning a boar would travel to and contact many other boars and sow groups. Once a boar dies, this travel cannot occur, but in our model the boar remains connected to the same number of other groups. Therefore, the model may overestimate the force of infection from boar carcasses, especially within the SF network.

The highly adaptable nature of the ASF modelling framework presented in this study allows it to be extended to model potential ASF incursions in different regions. However, to better model ASF, further research should be conducted on the aforementioned knowledge gaps. Moreover, as evidenced by this study and the recent ASF literature, wild boar carcasses play a key role in endemic ASF transmission. Therefore, to accurately model ASF in different geographic locations, it is crucial that we understand the period of viability of ASFV in carcasses under different climatic conditions, e.g. humidity and temperature. The modelling framework could also be extended to include domestic pigs, as the interface between wild and domestic pigs is an important factor in ASF outbreaks [35], and piggery nodes could include edges that allow for long-distance transmission.

We found that potential ASF outbreak outcomes are highly dependent on model specification. Model complexity, contact-density function, network, and carrying-capacity implementation all affect disease spread, persistence, and intervention effectiveness. Across all models, of all contact-density functions tested, we found that the power-law formulation (\(C\propto\sqrt{\rho}\)) was best able to reproduce observed ASF outbreak behaviour. Models with this contact-density function involved substantial carcass-based transmission; however, the force of infection from carcasses varied between models, with increased model heterogeneity leading to a decrease in the relative importance of carcass-based transmission. We found that these differences influenced the efficacy of potential interventions, which highlights the necessity of carefully considering model structure and the utility of using an ensemble of models.
2309.11212
Hardness Transitions and Uniqueness of Acyclic Colouring
For $k\in \mathbb{N}$, a $k$-acyclic colouring of a graph $G$ is a function $f\colon V(G)\to \{0,1,\dots,k-1\}$ such that (i)~$f(u)\neq f(v)$ for every edge $uv$ of $G$, and (ii)~there is no cycle in $G$ bicoloured by $f$. For $k\in \mathbb{N}$, the problem $k$-ACYCLIC COLOURABILITY takes a graph $G$ as input and asks whether $G$ admits a $k$-acyclic colouring. Ochem (EuroComb 2005) proved that 3-ACYCLIC COLOURABILITY is NP-complete for bipartite graphs of maximum degree~4. Mondal et al. (J. Discrete Algorithms, 2013) proved that 4-ACYCLIC COLOURABILITY is NP-complete for graphs of maximum degree five. We prove that for $k\geq 3$, $k$-ACYCLIC COLOURABILITY is NP-complete for bipartite graphs of maximum degree $k+1$, thereby generalising the NP-completeness result of Ochem, and adding bipartiteness to the NP-completeness result of Mondal et al. In contrast, $k$-ACYCLIC COLOURABILITY is polynomial-time solvable for graphs of maximum degree at most $0.38\, k^{\,3/4}$. Hence, for $k\geq 3$, the least integer $d$ such that $k$-ACYCLIC COLOURABILITY in graphs of maximum degree $d$ is NP-complete, denoted by $L_a^{(k)}$, satisfies $0.38\, k^{\,3/4}<L_a^{(k)}\leq k+1$. We prove that for $k\geq 4$, $k$-ACYCLIC COLOURABILITY in $d$-regular graphs is NP-complete if and only if $L_a^{(k)}\leq d\leq 2k-3$. We also show that it is coNP-hard to check whether an input graph $G$ admits a unique $k$-acyclic colouring up to colour swaps (resp. up to colour swaps and automorphisms).
Shalu M. A., Cyriac Antony
2023-09-20T11:02:25Z
http://arxiv.org/abs/2309.11212v2
# Hardness Transitions and Uniqueness of Acyclic Colouring

###### Abstract

For \(k\in\mathbb{N}\), a \(k\)-acyclic colouring of a graph \(G\) is a function \(f\colon V(G)\to\{0,1,\ldots,k-1\}\) such that (i) \(f(u)\neq f(v)\) for every edge \(uv\) of \(G\), and (ii) there is no cycle in \(G\) bicoloured by \(f\). For \(k\in\mathbb{N}\), the problem \(k\)-Acyclic Colourability takes a graph \(G\) as input and asks whether \(G\) admits a \(k\)-acyclic colouring. Ochem (EuroComb 2005) proved that \(3\)-Acyclic Colourability is NP-complete for bipartite graphs of maximum degree \(4\). Mondal et al. (J. Discrete Algorithms, 2013) proved that \(4\)-Acyclic Colourability is NP-complete for graphs of maximum degree five. We prove that for \(k\geq 3\), \(k\)-Acyclic Colourability is NP-complete for bipartite graphs of maximum degree \(k+1\), thereby generalising the NP-completeness result of Ochem, and adding bipartiteness to the NP-completeness result of Mondal et al. In contrast, \(k\)-Acyclic Colourability is polynomial-time solvable for graphs of maximum degree at most \(0.38\,k^{\,3/4}\). Hence, for \(k\geq 3\), the least integer \(d\) such that \(k\)-Acyclic Colourability in graphs of maximum degree \(d\) is NP-complete, denoted by \(L_{a}^{(k)}\), satisfies \(0.38\,k^{\,3/4}<L_{a}^{(k)}\leq k+1\). We prove that for \(k\geq 4\), \(k\)-Acyclic Colourability in \(d\)-regular graphs is NP-complete if and only if \(L_{a}^{(k)}\leq d\leq 2k-3\). We also show that it is coNP-hard to check whether an input graph \(G\) admits a unique \(k\)-acyclic colouring up to colour swaps (resp. up to colour swaps and automorphisms).

## 1 Introduction and Definitions

Acyclic colouring is a variant of graph colouring introduced by Grunbaum [35] and widely studied for the class of planar graphs [12, 35] and its superclasses, such as \(1\)-planar graphs [15, 62] and graphs embeddable on surfaces [1, 4, 39]. An acyclic colouring of a graph \(G\) is a (vertex) colouring of \(G\) without bicoloured cycles. It is used in the estimation of sparse Hessian matrices [31]. The algorithmic complexity of acyclic colouring has been studied in various graph classes [2, 6, 9, 25, 46, 47, 60]. Brause et al. [16] investigated the complexity of \(3\)-acyclic colouring with respect to the graph diameter. For \(k\geq 3\), we study the complexity of \(k\)-acyclic colouring with respect to the maximum degree of the graph, focusing on graphs of maximum degree \(d\) and \(d\)-regular graphs. Our interest is in the values of \(d\) for which the complexity of \(k\)-acyclic colouring in graphs of maximum degree \(d\) (resp. \(d\)-regular graphs) differs drastically from that in graphs of maximum degree \(d-1\) (resp. \((d-1)\)-regular graphs); we call such values of \(d\) _hardness transitions_ of \(k\)-acyclic colouring (with respect to the maximum degree) in the class of graphs of maximum degree \(d\) (resp. \(d\)-regular graphs); see Section 1.3 for details. We also prove computational hardness results on unique acyclic colouring (see Section 1.4).

The paper is organised as follows. See Subsection 1.1 for basic definitions. Subsection 1.2 discusses known results on the algorithmic complexity of acyclic colouring, and Subsections 1.3 and 1.4 introduce the conventions and notations we use related to hardness transitions and unique solution problems, respectively. Subsection 1.5 lists the major contributions of this paper. Section 2 presents our results on the hardness transitions of acyclic colouring (with respect to the maximum degree).
Section 3 discusses our results on unique acyclic colouring. We conclude with Section 4 on open problems.

### Basic Definitions

All graphs considered in this paper are finite, simple and undirected. We follow West [61] for graph theory terminology and notation. When the graph is clear from the context, we denote the number of edges of the graph by \(m\) and the number of vertices by \(n\). For a graph \(G\), we denote the maximum degree of \(G\) by \(\Delta(G)\). For a subset \(S\) of the vertex set of \(G\), the _subgraph of \(G\) induced by \(S\)_ is denoted by \(G[S]\). The _girth_ of a graph with a cycle is the length of its shortest cycle. A graph \(G\) is \(2\)_-degenerate_ if there exists a left-to-right ordering of its vertices such that every vertex has at most two neighbours to its left. The _maximum average degree_ \(\mathrm{mad}(G)\) of a graph \(G\) is the maximum over average degrees of subgraphs of \(G\). That is, \(\mathrm{mad}(G)=\max\{2|E(H)|/|V(H)|\colon H\text{ is a subgraph of }G\}\). The treewidth of \(G\) is denoted as \(\mathrm{tw}(G)\). A \(3\)-regular graph is also called a _cubic graph_, and a graph of maximum degree \(3\) is called a _subcubic graph_.

A \(k\)-colouring of a graph \(G\) is a function \(f\) from the vertex set of \(G\) to a set of \(k\) colours, say \(\{0,1,\ldots,k-1\}\), such that \(f\) maps every pair of adjacent vertices to different colours. Let us denote the \(i\)th colour class \(f^{-1}(i)\) by \(V_{i}\). A \(k\)-acyclic colouring of \(G\) is a \(k\)-colouring \(f\) of \(G\) such that every pair of colour classes induces an acyclic subgraph (i.e., \(G[V_{i}\cup V_{j}]\) is a forest for every pair of colour classes \(V_{i}\) and \(V_{j}\)). See Figure 1 for an example. The _acyclic chromatic number_ \(\chi_{a}(G)\) of a graph \(G\) is the least integer \(k\) such that \(G\) is \(k\)-acyclic colourable. The problem Acyclic Colourability takes a graph \(G\) and a positive integer \(k\) as input and asks whether \(G\) is \(k\)-acyclic colourable. For \(k\in\mathbb{N}\), the decision problem \(k\)-Colourability takes a graph \(G\) as input and asks whether \(G\) is \(k\)-colourable. Similarly, for \(k\in\mathbb{N}\), the problem \(k\)-Acyclic Colourability takes a graph \(G\) as input and asks whether \(G\) is \(k\)-acyclic colourable. To denote the restriction of a decision problem, we write the conditions in parentheses. For instance, \(4\)-Acyclic Colourability(\(\mathrm{bipartite},\Delta=5\)) denotes the problem \(4\)-Acyclic Colourability restricted to the class of bipartite graphs \(G\) with \(\Delta(G)=5\). The Exponential Time Hypothesis (ETH) asserts that \(3\)-Sat cannot be solved in \(2^{o(n)}\) time, where \(n\) is the number of variables in the \(3\)-Sat formula [53]. An _automorphism_ of a graph \(G\) is a bijective function \(\psi\colon V(G)\to V(G)\) such that \(xy\in E(G)\) if and only if \(\psi(x)\psi(y)\in E(G)\). We say that two colourings \(f_{1}\) and \(f_{2}\) of \(G\) are the same _up to colour swaps_ if \(f_{2}\) can be obtained from \(f_{1}\) by merely swapping colours (that is, there exists a permutation \(\sigma\) of colours such that \(f_{2}(v)=\sigma(f_{1}(v))\) for every vertex \(v\) of \(G\)). We say that two colourings \(f_{1}\) and \(f_{2}\) of \(G\) are the same _up to colour swaps and automorphisms_ if there exists a permutation \(\sigma\) of colours and an automorphism \(\psi\) of \(G\) such that \(f_{2}(\psi(v))=\sigma(f_{1}(v))\) for every vertex \(v\) of \(G\).
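To make conditions (i) and (ii) of the definition concrete, the following small sketch (illustrative code, not from the paper) verifies a candidate \(k\)-acyclic colouring: it checks properness and then checks that every pair of colour classes induces a forest, which is exactly the absence of bicoloured cycles.

```python
# Sketch: verify the two conditions of a k-acyclic colouring.
from itertools import combinations
import networkx as nx

def is_acyclic_colouring(G, f):
    """G: networkx graph; f: dict mapping each vertex of G to a colour.
    Condition (i): f is a proper colouring.
    Condition (ii): every pair of colour classes V_i, V_j induces a
    forest G[V_i + V_j], i.e. no cycle is bicoloured by f."""
    if any(f[u] == f[v] for u, v in G.edges):
        return False
    for c1, c2 in combinations(set(f.values()), 2):
        two_classes = [v for v in G if f[v] in (c1, c2)]
        if not nx.is_forest(G.subgraph(two_classes)):
            return False
    return True

# Example: a properly 2-coloured 4-cycle is itself a bicoloured cycle,
# so the colouring is proper but not acyclic.
C4 = nx.cycle_graph(4)
print(is_acyclic_colouring(C4, {0: 0, 1: 1, 2: 0, 3: 1}))   # False
```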
### Acyclic colouring: Literature Survey

Grunbaum [35] conjectured that every planar graph is \(5\)-acyclic colourable. This conjecture and its proof by Borodin [12] attracted the attention of many, and as a result, acyclic colouring is widely studied for the class of planar graphs [13, Section 9] and its superclasses including \(1\)-planar graphs [15, 62] and graphs embeddable on surfaces [1, 4, 39]. It is also studied for other graph classes such as regular graphs [6], line graphs [2, 47, 60], \(H\)-free graphs for fixed \(H\) [9], co-bipartite graphs [9]1, cographs [46], and grid graphs [25]. Besides, acyclic colouring is studied for classes of graphs obtained by imposing bounds on parameters such as maximum degree, girth, maximum average degree and degeneracy [3, 8, 14, 18, 37, 42, 51]. Grunbaum [35] proved that every graph of maximum degree \(3\) is \(4\)-acyclic colourable. Burstein [17] proved that every graph of maximum degree \(4\) is \(5\)-acyclic colourable. Due to the vast literature, we focus on the algorithmic complexity aspect of acyclic colouring. For surveys, see [36, Section 3.11] and [13, Section 9].

Footnote 1: See also [http://www.math.tau.ac.il/~noaga/PDFS/multitaskadd.pdf](http://www.math.tau.ac.il/~noaga/PDFS/multitaskadd.pdf)

Figure 1: (a) a \(3\)-colouring of a graph, which is not a \(3\)-acyclic colouring (bicoloured cycle highlighted), and (b) a \(3\)-acyclic colouring of the circular ladder graph \(CL_{3}\) (which is not \(3\)-star colourable).

Fertin et al. [25] proved that \(\chi_{a}(G)>1+\frac{m}{n}\) for every non-empty graph \(G\). Alon et al. [3] proved that \(\chi_{a}(G)\leq 50\,d^{4/3}\) for every graph \(G\) of maximum degree \(d\). This bound was improved by Ndreca et al. [52] to \(6.59\,d^{4/3}+3.3\,d\), and by Sereni and Volec [57] to \(2.835\,d^{4/3}+d\). See [5, 33, 40] for related work.

### Algorithms for Acyclic Colouring

Every \(k\)-colouring of a chordal graph \(G\) is a \(k\)-acyclic colouring of \(G\), and hence the acyclic chromatic number of \(G\) can be computed in polynomial time [30]. Lyons [46] proved that the acyclic chromatic number of a cograph can be computed in linear time (i.e., \(O(m+n)\) time). Linhares-Sales et al. [45] designed a linear-time algorithm to compute the acyclic chromatic number for two superclasses of cographs called \(P_{4}\)-tidy graphs and \((q,q-4)\)-graphs (for each fixed \(q\)). Skulrattanakulchai [59] designed a linear-time algorithm to acyclic colour a graph of maximum degree \(3\) with \(4\) colours. Cheng et al. [19] designed a polynomial-time algorithm to obtain optimal acyclic colourings of claw-free graphs of maximum degree \(3\).

### Hardness Results on Acyclic Colouring

Kostochka [41] and Coleman and Cai [20] independently produced constructions that proved the NP-completeness of Acyclic Colourability. We restate the construction of Coleman and Cai [20] in Section 1.2 (see Construction 1). From Construction 1, it follows that (i) for all \(k\geq 3\), \(k\)-Acyclic Colourability is NP-complete for \(2\)-degenerate bipartite graphs and (ii) \(3\)-Acyclic Colourability is NP-complete for \(2\)-degenerate planar bipartite graphs. Borrowing ideas from Construction 1, Gebremedhin et al. [29] proved that for all \(\epsilon>0\), it is NP-hard to approximate the acyclic chromatic number of a \(2\)-degenerate bipartite graph within \(n^{\frac{1}{5}-\epsilon}\).
In contrast, every \(2\)-degenerate graph admits an acyclic colouring with \(n^{\frac{1}{5}}\) colours [38, Theorem 6.2] (note that every unique superior colouring is an acyclic colouring [38]); hence, the acyclic chromatic number of a \(2\)-degenerate graph is approximable within \(n^{\frac{1}{5}}\). Bok et al. [10] studied the complexity of Acyclic Colourability and \(k\)-Acyclic Colourability in \(H\)-free graphs. They also proved that for \(k\geq 3\), \(k\)-Acyclic Colourability is NP-complete for line graphs and graphs of arbitrarily large girth. As mentioned, \(3\)-Acyclic Colourability is NP-complete for \(2\)-degenerate planar bipartite graphs [20]. Ochem [54] proved that the problem remains NP-complete when further restricted to graphs of maximum degree four. Alon and Zaks [2] proved that \(3\)-Acyclic Colourability is NP-complete for line graphs of subcubic graphs. Mondal et al. [50] proved that \(4\)-Acyclic Colourability is NP-complete for graphs of maximum degree five. The problem \(4\)-Acyclic Colourability is also NP-complete for planar graphs of maximum degree seven [50] and \(2\)-degenerate planar bipartite graphs of maximum degree eight [54]. Consider a fixed \(k\geq 3\). Emden-Weinert et al. [24] proved that \(k\)-Colourability is NP-complete for graphs of maximum degree \(k-1+\left\lceil\sqrt{k}\right\rceil\). Since the construction of Coleman and Cai [20] (restated below as Construction 1) establishes a reduction from \(k\)-Colourability in graphs of maximum degree \(k-1+\left\lceil\sqrt{k}\right\rceil\) to \(k\)-Acyclic Colourability in graphs of maximum degree \(k(k-1+\left\lceil\sqrt{k}\right\rceil)\), the latter problem is NP-complete as well.

**Observation 1**.: _For \(k\geq 3\), \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(k(k-1+\left\lceil\sqrt{k}\right\rceil)\). _

### Fixed-Parameter Tractability of Acyclic Colouring

For every positive integer \(k\), \(k\)-Acyclic Colourability can be expressed in Monadic Second Order (MSO) logic without edge set quantifiers (i.e., \(\text{MSO}_{1}\)) as follows.

\[\exists V_{0}\ \exists V_{1}\ \ldots\exists V_{k-1}\ k\text{-Colouring}(V_{0},V_{1},\ldots,V_{k-1})\ \land\]
\[\neg\text{ContainCycle}(V_{0},V_{1})\ \land\ \cdots\land\neg\text{ContainCycle}(V_{k-2},V_{k-1})\]

Here, \(\text{ContainCycle}(V_{i},V_{j})\) stands for \(\exists W\ \forall w\in W\ (w\in V_{i}\lor w\in V_{j})\ \land\ (\exists w^{\prime}\ \exists w^{\prime\prime}\ w^{\prime}\in W\land w^{\prime\prime}\in W\land w^{\prime}\neq w^{\prime\prime}\land\text{adj}(w,w^{\prime})\land\text{adj}(w,w^{\prime\prime}))\) (that is, there exists a set \(W\subseteq V_{i}\cup V_{j}\) such that every vertex \(w\) in \(W\) has two neighbours \(w^{\prime}\) and \(w^{\prime\prime}\) in \(W\)). Also, \(k\)-Colouring\((V_{0},V_{1},\ldots,V_{k-1})\) denotes the MSO formula that says \(\{V_{0},V_{1},\ldots,V_{k-1}\}\) is a partition of the vertex set of the graph into independent sets (see Section 1 in the supplementary material for the MSO formula). Therefore, for each \(k\), the problem \(k\)-Acyclic Colourability admits FPT algorithms with parameter either treewidth or cliquewidth by Courcelle's theorem [11, 21]. Ganian and Hlineny [27] obtained an FPT algorithm for \(k\)-Acyclic Colourability with parameter rankwidth. On the negative side, the following construction shows that unless coNP \(\subseteq\) NP/poly, \(k\)-Acyclic Colourability parameterized by treewidth does not admit a polynomial kernel, provided \(k\geq 3\).
**Construction 1** (Coleman and Cai [20]).:

_Parameter:_ An integer \(k\geq 3\).

_Input:_ A graph \(G\).

_Output:_ A 2-degenerate bipartite graph \(G^{\prime}\).

_Guarantee 1_ [20]_:_ \(G\) is \(k\)-colourable if and only if \(G^{\prime}\) is \(k\)-acyclic colourable.

_Guarantee 2:_ \(\operatorname{tw}(G^{\prime})\leq\operatorname{tw}(G)+1\).

_Steps:_ Replace each edge \(e=uv\) of \(G\) by a copy of the complete bipartite graph \(K_{2,k}\) with parts \(\{u,v\}\) and \(\{e_{1},e_{2},\ldots,e_{k}\}\), where \(e_{1},e_{2},\ldots,e_{k}\) are newly introduced vertices. To obtain a 2-degenerate ordering of \(V(G^{\prime})\), list the members of \(V(G)\) followed by the new vertices \(e_{i}\).

Guarantee 2 is easy to prove, especially using the game-theoretic definition of treewidth (see Section 2 in the supplementary material for details). By Guarantee 2, the transformation from \(k\)-Colourability to \(k\)-Acyclic Colourability established by Construction 1 is a Polynomial Parameter Transformation (PPT) [26] when both problems are parameterized by treewidth. Thus, we have the following theorem, since \(k\)-Colourability with parameter treewidth does not admit a polynomial kernel for \(k\geq 3\).

**Theorem 1**.: _For all \(k\geq 3\), \(k\)-Acyclic Colourability parameterized by treewidth does not admit a polynomial kernel unless coNP \(\subseteq\) NP/poly. _

### Hardness Transitions

Analysing the boundary between easy (i.e., polynomial-time solvable) and hard (e.g., NP-complete) problems is a common theme in complexity theory [28]. Studying the change in the complexity of a problem in response to a change in a single parameter falls in this category. Brause et al. [16] studied the complexity of \(3\)-Acyclic Colourability with the diameter of the graph as the parameter. For \(k\geq 3\), we study the complexity of \(k\)-acyclic colouring with the maximum degree of the graph as the parameter. Recall that we write the conditions in parentheses to denote the restriction of a decision problem; e.g.: \(4\)-Acyclic Colourability\((\operatorname{bipartite},\Delta=5)\) denotes the problem \(4\)-Acyclic Colourability restricted to the class of bipartite graphs \(G\) with \(\Delta(G)=5\). We assume \(\operatorname{P}\neq\operatorname{NP}\) throughout this paper; thus, \(\operatorname{NP}\) is partitioned into three classes: \(\operatorname{P}\), \(\operatorname{NPC}\) and \(\operatorname{NPI}\) [43]. We emphasise that our interest is in the classification of \(\operatorname{NP}\)-problems with respect to the \(\operatorname{P}\) vs. \(\operatorname{NPC}\) vs. \(\operatorname{NPI}\) trichotomy: that is, the complexity classes dealt with in this paper are only \(\operatorname{P}\), \(\operatorname{NPC}\) and \(\operatorname{NPI}\). A decision problem \(\Pi\) in \(\operatorname{NP}\) has a _hardness transition_ with respect to a discrete parameter \(d\) at a point \(d=x\) if \(\Pi(d=x)\) and \(\Pi(d=x-1)\) belong to different complexity classes among \(\operatorname{P}\), \(\operatorname{NPC}\) and \(\operatorname{NPI}\) (e.g.: \(\Pi(d=x)\in\operatorname{NPC}\) whereas \(\Pi(d=x-1)\in\operatorname{P}\); see [48] for a discussion). For example, \(3\)-Colourability of a graph of maximum degree \(d\) is polynomial-time solvable for \(d=3\) (due to Brooks' theorem) and NP-complete for \(d=4\) [28]. That is, \(3\)-Colourability\((\Delta=3)\in\operatorname{P}\) and \(3\)-Colourability\((\Delta=4)\in\operatorname{NPC}\).
Hence, \(3\)-Colourability has a hardness transition with respect to the maximum degree \(d\) at the point \(d=4\). Note that each hardness transition presumably deals with the \(\operatorname{P}\) vs. \(\operatorname{NPC}\) boundary since no 'natural' problem is known to be \(\operatorname{NP}\)-intermediate [7]. The number of hardness transitions depends on the problem as well as the parameter under consideration. Interestingly, a decision problem can have infinitely many hardness transitions. Cseh and Kavitha [22] proved that the popular matching problem on the complete graph \(K_{n}\) is in \(\operatorname{P}\) for odd \(n\) whereas it is \(\operatorname{NP}\)-complete for even \(n\). Therefore, the popular matching problem on complete graphs has, with respect to the number of vertices \(n\), infinitely many hardness transitions.

Let us consider the complexity of \(k\)-colouring and \(k\)-acyclic colouring in bounded degree graphs for fixed \(k\geq 3\). Emden-Weinert et al. [24] proved that \(k\)-Colourability is NP-complete for graphs of maximum degree \(k-1+\left\lceil\sqrt{k}\right\rceil\). By Observation 1, \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(k(k-1+\left\lceil\sqrt{k}\right\rceil)\). Hence, \(k\)-Colourability (resp. \(k\)-Acyclic Colourability) in graphs of maximum degree \(d\) is NP-complete when \(d\) is sufficiently large. Observe that if \(k\)-Colourability is NP-complete for graphs of maximum degree \(d\), then it is NP-complete for graphs of maximum degree \(d+1\) (to produce a reduction, it suffices to add a disjoint copy of \(K_{1,d+1}\)). This suggests the following problem.

**Problem 1**.: _For \(k\geq 3\), what is the least integer \(d\) such that \(k\)-Colourability is NP-complete for graphs of maximum degree \(d\)?_

Observe that Problem 1 deals with locating a point of hardness transition. By the same argument as for \(k\)-Colourability, if \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(d\), then \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(d+1\). Therefore, for each \(k\geq 3\), there exists a unique integer \(d^{*}\) such that \(k\)-Colourability (resp. \(k\)-Acyclic Colourability) in graphs of maximum degree \(d\) is NP-complete if and only if \(d\geq d^{*}\). Thus, one can ask the counterpart of Problem 1 for acyclic colouring. Let \(L^{(k)}\) and \(L^{(k)}_{a}\) denote the answers to Problem 1 and its counterpart for acyclic colouring; that is, \(L^{(k)}\) (resp. \(L^{(k)}_{a}\)) is the least integer \(d\) such that \(k\)-Colourability (resp. \(k\)-Acyclic Colourability) is NP-complete for graphs of maximum degree \(d\). Due to Brooks' theorem, \(k\)-Colourability is polynomial-time solvable for graphs of maximum degree \(k\), and thus \(L^{(k)}\geq k+1\). For \(k\geq 3\), \(k\)-Colourability is NP-complete for graphs of maximum degree \(k-1+\left\lceil\sqrt{k}\right\rceil\) [24], and thus \(k+1\leq L^{(k)}\leq k-1+\left\lceil\sqrt{k}\right\rceil\). Hence, \(L^{(3)}=4\), \(L^{(4)}=5\), \(6\leq L^{(5)}\leq 7\), and so on. For sufficiently large \(k\) and \(d<k-1+\left\lceil\sqrt{k}\right\rceil\), the problem \(k\)-Colourability is in P for graphs of maximum degree \(d\) [49, Theorem 43]. Therefore, \(L^{(k)}=k-1+\left\lceil\sqrt{k}\right\rceil\) for sufficiently large \(k\).
Yet, the exact value of \(L^{(k)}\) is unknown for small values of \(k\) such as \(k=5\), even though we know that \(L^{(5)}\in\{6,7\}\) (the complexity of \(5\)-Colourability in graphs of maximum degree \(6\) is open [56]).

### Unique Solution Problems

For a decision problem \(X\), the _unique solution problem_ associated with \(X\) takes the same input as \(X\) and asks whether problem \(X\) with the given input has exactly one solution. For instance, the unique solution problem associated with Sat takes a boolean formula \(B\) as input and asks whether \(B\) has exactly one satisfying truth assignment. For some decision problems, it is more natural to consider an equivalence relation and ask whether it has only one equivalence class. For example, for a graph \(G\), it is customary to call a \(k\)-colouring \(f\) of \(G\) the unique \(k\)-colouring of \(G\) if each \(k\)-colouring of \(G\) can be obtained from \(f\) by merely swapping colours. That is, instead of the number of \(k\)-colourings of \(G\), we are interested in the number of equivalence classes under \(\mathcal{R}_{\text{swap}}(G,k)\), where \(\mathcal{R}_{\text{swap}}(G,k)\) is an equivalence relation on the set of \(k\)-colourings of \(G\) defined as \((f_{1},f_{2})\in\mathcal{R}_{\text{swap}}(G,k)\) if \(f_{1}\) and \(f_{2}\) are the same up to colour swaps. Similarly, we define an equivalence relation \(\mathcal{R}_{\text{swap}+\text{auto}}(G,k)\) on the set of \(k\)-colourings of \(G\) as \((f_{1},f_{2})\in\mathcal{R}_{\text{swap}+\text{auto}}(G,k)\) if \(f_{1}\) and \(f_{2}\) are the same up to colour swaps and automorphisms. Thus, we have two unique solution problems associated with \(k\)-colouring.

Unique \(k\)-Colouring [\(\mathcal{R}_{\text{swap}}\)]
_Instance:_ A graph \(G\).
_Question:_ Is the number of equivalence classes under \(\mathcal{R}_{\text{swap}}(G,k)\) exactly one? (i.e., is the number of \(k\)-colourings of \(G\) up to colour swaps exactly one?)

Unique \(k\)-Colouring [\(\mathcal{R}_{\text{swap}+\text{auto}}\)]
_Instance:_ A graph \(G\).
_Question:_ Is the number of equivalence classes under \(\mathcal{R}_{\text{swap}+\text{auto}}(G,k)\) exactly one? (i.e., is the number of \(k\)-colourings of \(G\) up to colour swaps and automorphisms exactly one?)

The _another solution problem_ is closely related to the unique solution problem. The another solution problem associated with Sat takes a boolean formula \(B\) and a satisfying truth assignment of \(B\) as input and asks whether \(B\) has another satisfying truth assignment. Similar to the unique solution problem, there are two another solution problems associated with \(k\)-colouring, namely Another \(k\)-Colouring [\(\mathcal{R}_{\text{swap}}\)] and Another \(k\)-Colouring [\(\mathcal{R}_{\text{swap}+\text{auto}}\)]. The former is defined below, and the latter is defined likewise.

Another \(k\)-Colouring [\(\mathcal{R}_{\text{swap}}\)]
_Instance:_ A graph \(G\), and a \(k\)-colouring \(f\) of \(G\).
_Question:_ Is there another \(k\)-colouring of \(G\) up to colour swaps? (i.e., a \(k\)-colouring \(f^{*}\) of \(G\) such that \((f,f^{*})\notin\mathcal{R}_{\text{swap}}(G,k)\))

Dailey [23] proved that for all \(k\geq 3\), Another \(k\)-Colouring [\(\mathcal{R}_{\text{swap}}\)] is NP-hard. Hence, given a \(k\)-colourable graph \(G\), it is coNP-hard to check whether \(G\) is uniquely \(k\)-colourable up to colour swaps. That is, Unique \(k\)-Colouring [\(\mathcal{R}_{\text{swap}}\)] is coNP-hard even when restricted to the class of \(k\)-colourable graphs.
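For intuition about these problems, uniqueness up to colour swaps can be decided by brute force in exponential time (consistent with the hardness results discussed next): relabel each proper \(k\)-colouring by the order in which its colours first appear along a fixed vertex ordering, and count the distinct canonical forms. A small illustrative sketch (not from the paper):

```python
# Brute-force sketch: count k-colourings of a small graph up to colour swaps.
from itertools import product
import networkx as nx

def colourings_up_to_swaps(G, k):
    order = list(G)                                    # fixed vertex ordering
    forms = set()
    for assignment in product(range(k), repeat=len(order)):
        col = dict(zip(order, assignment))
        if all(col[u] != col[v] for u, v in G.edges):  # proper colouring?
            relabel = {}                               # canonical form: rename
            forms.add(tuple(relabel.setdefault(col[v], len(relabel))
                            for v in order))           # colours by first appearance
    return forms

# K_3 is uniquely 3-colourable up to colour swaps; C_5 is not.
print(len(colourings_up_to_swaps(nx.complete_graph(3), 3)))   # 1
print(len(colourings_up_to_swaps(nx.cycle_graph(5), 3)))      # 5
```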
Dailey [23] produced a reduction from \(k\)-Colourability to Another \(k\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap}}\right]\). A close look reveals that the same construction also establishes a reduction from \(k\)-Colourability to Another \(k\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap+auto}}\right]\). Thus, for all \(k\geq 3\), the problems Another \(k\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap}}\right]\) and Another \(k\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap+auto}}\right]\) are NP-complete. As a result, the problems Unique \(k\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap}}\right]\) and Unique \(k\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap+auto}}\right]\) are coNP-hard for \(k\geq 3\). It is easy to observe that Dailey's construction provides reductions from \(k\)-Colourability\((\Delta=k-1+\left\lceil\sqrt{k}\right\rceil)\) to Another \(k\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap}}\right]\)\((\Delta=(k-1)(k-1+\left\lceil\sqrt{k}\right\rceil))\) and to Another \(k\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap+auto}}\right]\)\((\Delta=(k-1)(k-1+\left\lceil\sqrt{k}\right\rceil))\). In particular, Another \(3\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap}}\right]\) and Another \(3\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap+auto}}\right]\) are NP-complete for graphs of maximum degree \(8\). We establish hardness results for the unique solution problems and another solution problems associated with acyclic colouring. These problems are defined in the obvious way. For instance, Unique \(k\)-Acyclic Colouring \(\left[\mathcal{R}_{\mathrm{swap}}\right]\) is defined like Unique \(k\)-Colouring \(\left[\mathcal{R}_{\mathrm{swap}}\right]\).

### Our Results

Recall that for \(k\geq 3\), \(L_{a}^{(k)}\) is the least integer \(d\) such that \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(d\). For \(k\geq 3\), \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(k(k-1+\left\lceil\sqrt{k}\right\rceil)\) by Observation 1, and thus \(L_{a}^{(k)}\leq k(k-1+\left\lceil\sqrt{k}\right\rceil)\). Ochem [54] proved that \(3\)-Acyclic Colourability is NP-complete for bipartite graphs of maximum degree \(4\); thus, \(L_{a}^{(3)}\leq 4\). We generalise this result: for \(k\geq 3\), \(k\)-Acyclic Colourability is NP-complete for bipartite graphs of maximum degree \(k+1\) (and thus \(L_{a}^{(k)}\leq k+1\)). Hence, \(4\)-Acyclic Colourability is NP-complete not only for graphs of maximum degree \(5\) [50], but also for bipartite graphs of maximum degree \(5\). For each \(k\geq 3\), we prove the following.

1. \(k\)-Acyclic Colourability is NP-complete for bipartite graphs of maximum degree \(k+1\), and the problem does not admit a \(2^{o(n)}\)-time algorithm unless ETH fails.
2. \(0.38\,k^{3/4}<L_{a}^{(k)}\leq k+1\).
3. Provided \(k\neq 3\), \(k\)-Acyclic Colourability is NP-complete for \(d\)-regular graphs if and only if \(L_{a}^{(k)}\leq d\leq 2k-3\).
4. It is coNP-hard to check whether an input graph \(G\) admits a unique \(k\)-acyclic colouring up to colour swaps (resp. up to colour swaps and automorphisms).

## 2 Hardness Transitions of Acyclic Colouring

In this section, we discuss hardness transitions of acyclic colouring with respect to the maximum degree.
First, we show that (i) for \(k\geq 3\), \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(k+1\), and (ii) for \(k\geq 4\) and \(d\leq 2k-3\), \(k\)-Acyclic Colourability in graphs of maximum degree \(d\) is NP-complete if and only if \(k\)-Acyclic Colourability in \(d\)-regular graphs is NP-complete. The consequences of these results on the value of \(L_{a}^{(k)}\) are discussed later in Section 2.1. Let us start with a simple result due to Fertin et al. [25], which has direct consequences for hardness transitions. We present a shorter proof, adapted from [34], below.

**Theorem 2** ([25]).: \(\chi_{a}(G)>1+\frac{m}{n}\) _for every non-empty graph \(G\)._

Proof (adapted from [34]).: Let \(f\colon V(G)\to\{0,1,\ldots,k-1\}\) be a \(k\)-acyclic colouring of \(G\). Recall that we denote the \(i\)th colour class \(f^{-1}(i)\) by \(V_{i}\) for \(0\leq i\leq k-1\). For every pair of colour classes \(V_{i}\) and \(V_{j}\), let \(m_{ij}\) denote the number of edges from \(V_{i}\) to \(V_{j}\). Since \(G[V_{i}\cup V_{j}]\) is a forest, we have \(m_{ij}\leq|V_{i}|+|V_{j}|\), and equality holds only when \(|V_{i}|=|V_{j}|=0\). We know that some colour class is non-empty because \(G\) is a non-empty graph. Summing over all pairs \(i<j\) (every edge joins two distinct colour classes), we get \[m=\sum_{0\leq i<j\leq k-1}m_{ij}<\sum_{0\leq i<j\leq k-1}\left(|V_{i}|+|V_{j}|\right)=(k-1)\sum_{i=0}^{k-1}|V_{i}|=(k-1)n,\] where the inequality is strict because some colour class is non-empty (note that each \(|V_{i}|\) appears exactly \(k-1\) times in the middle sum). Hence, \(k>1+(m/n)\). Since \(f\) is an arbitrary \(k\)-acyclic colouring of \(G\), we have \(\chi_{a}(G)>1+(m/n)\).

Since \(\frac{m}{n}=\frac{d}{2}\) for every \(d\)-regular graph \(G\), we have the following corollary.

**Corollary 1**.: \(\chi_{a}(G)\geq\left\lceil\frac{d+3}{2}\right\rceil\) _for every \(d\)-regular graph \(G\) (provided \(d\neq 0\))._

**Corollary 2**.: _For every non-empty graph \(G^{*}\), \(\chi_{a}(G^{*})>1+\frac{\operatorname{mad}(G^{*})}{2}\)._

Proof.: Let \(G\) be a subgraph of \(G^{*}\) such that the average degree of \(G\) is equal to the maximum average degree of \(G^{*}\). Applying Theorem 2 to \(G\) proves the corollary since \(\chi_{a}(G^{*})\geq\chi_{a}(G)\) and \(|E(G)|/|V(G)|=\operatorname{mad}(G^{*})/2\).

Next, we show that the lower bound in Corollary 1 is attained by a vertex-transitive \(d\)-regular graph for each \(d\) (the graphs defined in the proof are used later in a construction).

**Theorem 3**.: _For all \(d\geq 1\), there exists a \(d\)-regular vertex-transitive graph \(G_{d}\) with \(\chi_{a}(G_{d})=\left\lceil\frac{d+3}{2}\right\rceil\)._

Proof.: We first prove the theorem for every odd number \(d\). To this end, we construct a \((2p+1)\)-regular graph \(G_{2p+1}\) with acyclic chromatic number equal to \(p+2\) for all \(p\geq 0\). The vertex set of \(G_{2p+1}\) is \(\{(i,j)\colon i,j\in\mathbb{Z},\ 0\leq i\leq p+1,\ 0\leq j\leq p+1,\ i\neq j\}\), and \((i,j)\) is adjacent to \((k,\ell)\) if \(j=k\) or \(i=\ell\) (or both). The graph \(G_{5}\) is shown in Figure 2. Note that \(G_{2p+1}\) is a \((2p+1)\)-regular graph; for instance, the vertex \((0,1)\) in \(G_{2p+1}\) has neighbours \((1,2),\ldots,(1,p+1),(1,0),(2,0),\ldots,(p+1,0)\). Consider the function \(f\colon V(G_{2p+1})\to\{0,1,\ldots,p+1\}\) defined as \(f((i,j))=i\) for all \((i,j)\in V(G_{2p+1})\). Let \(V_{i}=f^{-1}(i)\) for \(0\leq i\leq p+1\).
It is easy to verify that \(f\) is a \((p+2)\)-acyclic colouring of \(G_{2p+1}\) (the subgraph of \(G_{2p+1}\) induced by \(V_{i}\cup V_{j}\) consists of two stars attached at the endvertices of the edge \(\{(i,j),(j,i)\}\); observe that \(u\in V_{i}\) is adjacent to \(v\in V_{j}\) if and only if \(u=(i,j)\) or \(v=(j,i)\)). Thus, \(\chi_{a}(G_{2p+1})\leq p+2\). Thanks to Corollary 1, we have \(\chi_{a}(G_{2p+1})\geq\left\lceil\frac{(2p+1)+3}{2}\right\rceil=p+2\). Hence, \(\chi_{a}(G_{2p+1})=p+2\). Note that by the definition of the graph \(G_{2p+1}\), whether vertex \((i,j)\) is adjacent to vertex \((k,\ell)\) depends only on equality and inequality between the indices \(i,j,k,\ell\). This ensures that \(G_{2p+1}\) is vertex-transitive. We include a short proof for completeness. To construct an automorphism \(\psi\) that maps a vertex \((i,j)\) to another vertex \((k,\ell)\), first choose a bijection \(h\) from \(\{0,1,\ldots,p+1\}\) to itself such that \(h(i)=k\) and \(h(j)=\ell\), and then define \(\psi\big{(}(x,y)\big{)}=\big{(}h(x),h(y)\big{)}\) for all \((x,y)\in V(G_{2p+1})\).

Next, we prove the theorem for every even number \(d\) by constructing a \(2p\)-regular graph \(G_{2p}\) with acyclic chromatic number equal to \(p+2\) for all \(p\geq 1\). Observe that for all \(p\geq 1\), the set \(M\) of edges between vertex pairs \((i,j)\) and \((j,i)\) in \(G_{2p+1}\) is a perfect matching of \(G_{2p+1}\). We define \(G_{2p}=G_{2p+1}-M\). The graph \(G_{4}\) is shown in Figure 3. Clearly, \(G_{2p}\) is a \(2p\)-regular subgraph of \(G_{2p+1}\), and thus \(\chi_{a}(G_{2p})\leq\chi_{a}(G_{2p+1})\leq p+2\). By Corollary 1, \(\chi_{a}(G_{2p})\geq\left\lceil\frac{2p+3}{2}\right\rceil=p+2\), and thus \(\chi_{a}(G_{2p})=p+2\). Moreover, the vertex set of \(G_{2p}\) is \(\{(i,j)\colon i,j\in\mathbb{Z},\ 0\leq i\leq p+1,\ 0\leq j\leq p+1,\ i\neq j\}\), and there is an edge joining \((i,j)\) to \((k,\ell)\) if either \(j=k\) or \(i=\ell\) (not both). Thus, by its definition, \(G_{2p}\) is vertex-transitive (see [58, Theorem 4] for a detailed proof).

_Remark:_ Note that \(G_{1}\subseteq G_{3}\subseteq G_{5}\subseteq\ldots\) and \(G_{2}\subseteq G_{4}\subseteq G_{6}\subseteq\ldots\) by the definition of the graphs \(G_{d}\).

Figure 2: The graph \(G_{5}\).

We are now ready to generalise the NP-completeness result of \(3\)-Acyclic Colourability in bipartite graphs of maximum degree \(4\) [54]: we show that for each \(k\geq 3\), \(k\)-Acyclic Colourability is NP-complete for bipartite graphs of maximum degree \(k+1\). The following simple observations are employed to construct gadgets.

**Observation 2**.: _Let \(G\) be a graph, and let \(f\) be a \(q\)-acyclic colouring of \(G\), where \(q\leq k\). If two vertices \(x\) and \(y\) of \(G\) have \(k\) common neighbours, then \(f(x)\neq f(y)\)._

Proof.: Let \(x\) and \(y\) be vertices in \(G\) that share \(k\) neighbours \(w_{1},w_{2},\ldots,w_{k}\). Note that at least two among the vertices \(w_{1},w_{2},\ldots,w_{k}\) should get the same colour, say \(f(w_{1})=f(w_{2})\) (if not, we need \(k\) colours for the vertices \(w_{1},w_{2},\ldots,w_{k}\), and thus no colour is available for \(x\)). If \(f(x)=f(y)\), then \((x,w_{1},y,w_{2})\) is a bicoloured cycle; a contradiction. Hence, \(f(x)\neq f(y)\).

**Observation 3**.: _Let \(G(A\cup B,E)\cong K_{k-1,k}\) where \(A\) and \(B\) are independent sets in \(G\) with cardinality \(k-1\) and \(k\), respectively. Then, \(\chi_{a}(G)=k\).
Moreover, for every \(k\)-acyclic colouring \(f\) of \(G\), (i) vertices in \(A\) should get pairwise distinct colours, and (ii) all vertices in \(B\) should get the same colour \((\)i.e., \(f(b)=f(b^{\prime})\) for all \(b,b^{\prime}\in B)\)._

Proof.: Let \(f\) be a \(q\)-acyclic colouring of \(G\), where \(q=\chi_{a}(G)\). One can obtain a \(k\)-acyclic colouring of \(G\) by assigning a permutation of the colours \(1,2,\ldots,k-1\) to the vertices in \(A\) and colour \(0\) to the vertices in \(B\). Hence, \(q\leq k\). Every pair of vertices from \(A\) has \(k\) common neighbours. Hence, vertices in \(A\) should get pairwise distinct colours by Observation 2. That is, \(k-1\) distinct colours are used on the vertices in \(A\). Let \(b\in B\). Since all vertices in \(A\) are adjacent to \(b\), at least one more colour, say colour \(c_{1}\), is needed (to colour \(b\)). This proves that \(q\geq k\), and thus \(\chi_{a}(G)=q=k\). Since \(k-1\) distinct colours are used on the vertices in \(A\), the remaining colour, namely colour \(c_{1}\), is the only available colour for each vertex in \(B\). Thus, all vertices in \(B\) should get colour \(c_{1}\).

Figure 3: The graph \(G_{4}\) obtained from \(G_{5}\) by deleting all \(\{(i,j),(j,i)\}\) edges.

Note that by Observation 3, the biclique \(K_{k-1,k}\) has a unique \(k\)-acyclic colouring up to colour swaps. For every construction in this paper, only selected vertices within each gadget are allowed to have neighbours outside the gadget. We call such vertices the _terminals_ of the gadget, and highlight them in diagrams by drawing a circle around them. The graph displayed in Figure 4(a), let us call it the chain gadget, plays a major role in our constructions. In Figure 4(a), the terminals of the chain gadget are labelled \(v_{1}^{\prime},v_{2}^{\prime},\ldots,v_{t}^{\prime}\), where \(t\in\mathbb{N}\). The next lemma explains the importance of the chain gadget.

**Lemma 1** (Properties of the chain gadget).: _The colouring displayed in Figure 4(b) is the unique \(k\)-acyclic colouring of the chain gadget up to colour swaps and automorphisms. In particular, the following hold for every \(k\)-acyclic colouring of the chain gadget: (i) there is a colour \(c_{1}\) such that every terminal (of the chain gadget) gets colour \(c_{1}\), and (ii) for every colour \(c_{2}(\neq c_{1})\) and every pair of terminals \(x\) and \(y\), there is an \(x,y\)-path coloured using only \(c_{1}\) and \(c_{2}\)._

Proof.: Let \(f\) be a \(k\)-acyclic colouring of the chain gadget. Observe that for each \(i\in\{1,2,\ldots,t\}\), the vertices of the chain gadget from Levels \(2i-1\) and \(2i\) together form \(K_{k-1,k}\). Therefore, for fixed \(i\), vertices at Level \(2i-1\) get pairwise distinct colours, and vertices at Level \(2i\) share the same colour by Observation 3. Without loss of generality, assume that vertices at Level 2 are coloured 0. Since each vertex at Level 3 has a neighbour in Level 2, colour 0 is unavailable in Level 3. That means vertices in Level 3 are coloured \(1,2,\ldots,k-1\) in some order. Therefore, the colour shared by vertices at Level 4 must be 0. Similarly, for all \(i\in\{1,2,\ldots,t\}\), vertices at Level \(2i\) are coloured 0 and vertices at Level \(2i-1\) are assigned a permutation of the colours \(1,2,\ldots,k-1\).
This proves that the colouring displayed in Figure 4(b) is the unique \(k\)-acyclic colouring of the chain gadget up to colour swaps and automorphisms (observe that applying a permutation of colours on the set of vertices at Level \(2j-1\) for each \(j\in\{1,2,\ldots,t\}\) corresponds to an automorphism of the chain gadget; see Section 4 in the supplementary material for a demonstration). Since all terminals are coloured 0, Property (i) is proved. Observe that in Figure 4(b), for every colour \(c_{2}\neq 0\) and every pair of terminals \(x\) and \(y\), there is an \(x,y\)-path coloured using only 0 and \(c_{2}\) (for \(x=v_{1}^{\prime}\), \(y=v_{t}^{\prime}\) and \(c_{2}=1\), such an \(x,y\)-path is highlighted in Figure 4(b)). This proves Property (ii).

Figure 4: (a) The chain gadget, and (b) a \(k\)-acyclic colouring of the chain gadget.

The following construction is employed to prove the NP-completeness of \(k\)-Acyclic Colourability in bipartite graphs of maximum degree \(k+1\).

**Construction 2**.:

_Parameter:_ An integer \(k\geq 3\).

_Input:_ A graph \(G\) of maximum degree \(2(k-1)\).

_Output:_ A bipartite graph \(G^{\prime}\) of maximum degree \(k+1\).

_Guarantee 1:_ \(G\) is \(k\)-colourable if and only if \(G^{\prime}\) is \(k\)-acyclic colourable.

_Guarantee 2:_ \(G^{\prime}\) has only \(O(n)\) vertices, where \(n=|V(G)|\).

_Steps:_ Replace each vertex \(v\) of \(G\) by a chain gadget with \(k\cdot\deg_{G}(v)\) terminals, and reserve \(k\) terminals each for every neighbour of \(v\) in \(G\). For each neighbour \(u\) of \(v\) in \(G\), label the \(k\) terminals of the chain gadget for \(v\) reserved for \(u\) as \(v_{u1},v_{u2},\ldots,v_{uk}\). For each edge \(e=uv\) of \(G\) and each \(j\in\{1,2,\ldots,k\}\), introduce a new vertex \(e_{j}\) in \(G^{\prime}\) and join \(e_{j}\) to \(u_{vj}\) as well as \(v_{uj}\) (see Figure 5). An example of the construction is exhibited in Figure 6. \(G^{\prime}\) is clearly bipartite (small dots form one part and big dots form the other part; see Figure 6).

Proof of Guarantee 1.: Suppose that \(G\) admits a \(k\)-colouring \(f\colon V(G)\to\{0,1,\ldots,k-1\}\). We produce a \(k\)-colouring \(f^{\prime}\) of \(G^{\prime}\) as follows, where \(f^{\prime}\colon V(G^{\prime})\to\{0,1,\ldots,k-1\}\). For each vertex \(v\) of \(G\), colour the chain gadget for \(v\) by the scheme obtained from Figure 4(b) by swapping colour \(0\) with \(f(v)\). Now, the terminals of the chain gadget for \(v\) have colour \(f(v)\) under \(f^{\prime}\). For each edge \(e=uv\) of \(G\), choose a colour \(c\in\{0,1,\ldots,k-1\}\setminus\{f(u),f(v)\}\) and assign \(f^{\prime}(e_{j})=c\) for \(1\leq j\leq k\). Since the paths of the form \(u_{vj},e_{j},v_{uj}\) are tricoloured by \(f^{\prime}\), any cycle in \(G^{\prime}\) bicoloured by \(f^{\prime}\) must be entirely within a chain gadget. Since the chain gadgets are coloured by an acyclic colouring scheme, they do not contain any bicoloured cycle. Therefore, \(f^{\prime}\) is a \(k\)-acyclic colouring of \(G^{\prime}\).

Conversely, suppose that \(G^{\prime}\) admits a \(k\)-acyclic colouring \(f^{\prime}\colon V(G^{\prime})\to\{0,1,\ldots,k-1\}\). By Property (i) of the chain gadget (see Lemma 1), all terminals of a chain gadget get the same colour. For brevity, let us call this colour "the colour of the chain gadget".

_Claim 1:_ For every edge \(uv\) of \(G\), the colour of the chain gadget for \(u\) differs from the colour of the chain gadget for \(v\) (i.e., \(f^{\prime}(u_{v1})\neq f^{\prime}(v_{u1})\)).
Contrary to the claim, assume that \(uv\) is an edge in \(G\), but there is a colour \(c_{1}\) such that \(f^{\prime}(u_{v1})=f^{\prime}(v_{u1})=c_{1}\). Clearly, colour \(c_{1}\) is unavailable for the vertices \(e_{j}\) (\(1\leq j\leq k\)). Hence, by the pigeonhole principle, at least two vertices among \(e_{1},e_{2},\ldots,e_{k}\) have the same colour, say \(f^{\prime}(e_{p})=f^{\prime}(e_{q})=c_{2}\) where \(p\neq q\) and \(c_{2}\neq c_{1}\). By Property (ii) of the chain gadget (see Lemma 1), the chain gadget for \(u\) contains a path \(R_{1}\) from \(u_{vp}\) to \(u_{vq}\) which is coloured using only \(c_{1}\) and \(c_{2}\). Similarly, the chain gadget for \(v\) contains a path \(R_{2}\) from \(v_{uq}\) to \(v_{up}\) which is coloured using only \(c_{1}\) and \(c_{2}\). These paths together with the three-vertex paths \(u_{vq},e_{q},v_{uq}\) and \(v_{up},e_{p},u_{vp}\) form a cycle in \(G^{\prime}\) bicoloured by \(f^{\prime}\) (namely, the cycle \((u_{vp};R_{1};u_{vq},e_{q},v_{uq};R_{2};v_{up},e_{p})\)). This contradiction proves Claim 1.

Producing a \(k\)-colouring \(f\) of \(G\) from \(f^{\prime}\) is easy. For each vertex \(v\) of \(G\), assign to \(v\) the colour of the chain gadget for \(v\). The function \(f\) is a \(k\)-colouring of \(G\) due to Claim 1.

Figure 5: Construction of \(G^{\prime}\) from \(G\) (only vertices \(u,v\) in \(G\) and edge \(uv\) in \(G\), and the corresponding gadgets in \(G^{\prime}\), are displayed).

Figure 6: Example of Construction 2 with \(k=3\). Graph \(G^{\prime}\) is displayed large, and graph \(G\) is shown inset (for convenience, a graph of maximum degree \(3\) rather than \(4\) is used as \(G\)).

Proof of Guarantee 2.: Suppose that \(G\) has \(n\) vertices and \(m\) edges. For each vertex \(v\) of \(G\), the chain gadget for \(v\) has \((2k-1)\cdot k\cdot\deg_{G}(v)\) vertices. Also, there are only \(km\) vertices outside of the chain gadgets. Therefore, \(G^{\prime}\) has \(\left(\sum_{v\in V(G)}(2k^{2}-k)\deg_{G}(v)\right)+km=(2k^{2}-k)\,2m+km=O(m)\) vertices. Since \(m\leq n\Delta(G)/2\) and \(\Delta(G)\leq 2(k-1)\), we have \(m=O(n)\), and thus the number of vertices in \(G^{\prime}\) is \(O(n)\). Since \(G^{\prime}\) has only \(O(n)\) vertices and the maximum degree of \(G^{\prime}\) is \(k+1\), the graph \(G^{\prime}\) has only \(O(n)\) edges as well (\(\because 2|E(G^{\prime})|\leq(k+1)|V(G^{\prime})|\)). Hence, Construction 2 requires only time polynomial in the input size.

**Theorem 4**.: _For \(k\geq 3\), \(k\)-Acyclic Colourability(bipartite, \(\Delta=k+1\)) is NP-complete, and the problem does not admit a \(2^{o(n)}\)-time algorithm unless ETH fails._

Proof.: Fix an integer \(k\geq 3\). We employ Construction 2 to establish a reduction from \(k\)-Colourability(\(\Delta=2(k-1)\)) to \(k\)-Acyclic Colourability(bipartite, \(\Delta=k+1\)). The problem \(k\)-Colourability(\(\Delta=2(k-1)\)) is NP-complete (in fact, \(k\)-Colourability is NP-complete for line graphs of \(k\)-regular graphs [44]) and does not admit a \(2^{o(n)}\)-time algorithm unless ETH fails (the latter can be observed from a reduction of Emden-Weinert et al. [24]). Let \(G\) be an instance of \(k\)-Colourability(\(\Delta=2(k-1)\)). Produce a graph \(G^{\prime}\) from \(G\) by Construction 2. Note that Construction 2 requires only time polynomial in the size of \(G\). By Guarantee 1 of Construction 2, \(G\) is \(k\)-colourable if and only if \(G^{\prime}\) is \(k\)-acyclic colourable. Besides, the number of vertices in \(G^{\prime}\) is \(O(n)\), where \(n=|V(G)|\).
Therefore, the problem \(k\)-Acyclic Colourability(bipartite, \(\Delta=k+1\)) is NP-complete, and it does not admit a \(2^{o(n)}\)-time algorithm unless ETH fails.

For \(k\geq 3\), \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(k+1\) by Theorem 4. If \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(d\), then it is NP-complete for graphs of maximum degree \(d+1\). Thus, for \(k\geq 3\) and \(d\geq k+1\), \(k\)-Acyclic Colourability in graphs of maximum degree \(d\) is NP-complete. On the other hand, for all \(d\geq 2\) and every \(d\)-regular graph \(G\), we have \(\chi_{a}(G)\geq\left\lceil\frac{d+3}{2}\right\rceil\), and thus \(G\) is not \(k\)-acyclic colourable for \(k<\left\lceil\frac{d+3}{2}\right\rceil\). Hence, for \(k\geq 3\) and \(d\geq 2k-2\), \(k\)-Acyclic Colourability in \(d\)-regular graphs is polynomial-time solvable (because \(k<\left\lceil\frac{(2k-2)+3}{2}\right\rceil\leq\left\lceil\frac{d+3}{2}\right\rceil\)). As a result, for \(k\geq 3\) and \(d\geq 2k-2\), \(k\)-Acyclic Colourability in graphs of maximum degree \(d\) is NP-complete (because \(2k-2\geq k+1\)) whereas \(k\)-Acyclic Colourability in \(d\)-regular graphs is polynomial-time solvable. In contrast, we use Construction 3 below to show that for all \(k\geq 3\) and \(d\leq 2k-3\), the complexity of \(k\)-Acyclic Colourability is the same for graphs of maximum degree \(d\) and \(d\)-regular graphs.

First, we construct a gadget called the _filler gadget_ using the graph \(G_{d}\). Note that for every graph \(H\) with an edge \(xy\) and a \(k\)-acyclic colouring \(h\), there is no \(x,y\)-path in \(H-xy\) bicoloured by \(h\) (if not, that path together with the edge \(xy\) forms a cycle in \(H\) bicoloured by \(h\)). In particular, the graph \(G_{d}\) defined in the proof of Theorem 3 is a \(d\)-regular graph with a \(k\)-acyclic colouring \(h\) (because \(\chi_{a}(G_{d})=\left\lceil\frac{d+3}{2}\right\rceil\leq\left\lceil\frac{(2k-3)+3}{2}\right\rceil=k\)), and for each edge \(xy\) of \(G_{d}\), there is no \(x,y\)-path in \(G_{d}-xy\) bicoloured by \(h\). We choose an edge \(xy\) of \(G_{d}\), and make the filler gadget using \(G_{d}-xy\) as shown in Figure 7. Note that every non-terminal vertex of the filler gadget has degree \(d\). We use the vertex identification operation in Construction 3 (see Section 3 in the supplementary material for the definition of vertex identification).

**Construction 3**.:

_Parameters:_ Integers \(k\geq 3\) and \(d\leq 2k-3\).

_Input:_ A graph \(G\) of maximum degree \(d\).

_Output:_ A \(d\)-regular graph \(G^{\prime}\).

_Guarantee:_ \(G\) is \(k\)-acyclic colourable if and only if \(G^{\prime}\) is \(k\)-acyclic colourable.

_Steps:_ Introduce two copies of \(G\), say \(G^{(1)}\) and \(G^{(2)}\). For each \(v\in V(G)\) and \(i\in\{1,2\}\), let \(v^{(i)}\) denote the copy of \(v\) in \(G^{(i)}\). For each \(v\in V(G)\), introduce \(d-\deg_{G}(v)\) filler gadgets, and for each of these filler gadgets, identify its two terminals with \(v^{(1)}\) and \(v^{(2)}\), respectively. See Figure 8 for an example.

For every \(v\in V(G)\) and \(i\in\{1,2\}\), the vertex \(v^{(i)}\) has (i) \(\deg_{G}(v)\) neighbours in \(G^{(i)}\), and (ii) one neighbour in each of the \(d-\deg_{G}(v)\) filler gadgets attached at \(v^{(i)}\); and thus \(v^{(i)}\) has degree \(d\) in \(G^{\prime}\). Recall that every non-terminal vertex of a filler gadget has degree \(d\) in \(G^{\prime}\). Thus, \(G^{\prime}\) is \(d\)-regular.
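The steps of Construction 3 can be sketched in code as follows. This is an illustrative sketch, not the authors' implementation: for brevity it builds \(G_{d}\) only for odd \(d\), and it reads Figure 7 (which we cannot reproduce here) as joining the gadget's two terminals to \(x\) and \(y\) by single edges, which is the reading consistent with the degree counts above.

```python
# Sketch of Construction 3: regularise a graph G of maximum degree d using
# two copies of G plus filler gadgets built from G_d - xy.
import networkx as nx

def G_odd(d):
    """G_d from the proof of Theorem 3 for odd d = 2p + 1: vertices (i, j)
    with i != j, and (i, j) ~ (k, l) iff j == k or i == l."""
    p = (d - 1) // 2
    V = [(i, j) for i in range(p + 2) for j in range(p + 2) if i != j]
    G = nx.Graph()
    G.add_nodes_from(V)
    G.add_edges_from((u, w) for u in V for w in V
                     if u < w and (u[1] == w[0] or u[0] == w[1]))
    return G

def regularise(G, d):
    Gd = G_odd(d)
    x, y = (0, 1), (1, 0)                     # an edge of G_d (both rules apply)
    H = nx.union(nx.relabel_nodes(G, lambda v: ("copy1", v)),
                 nx.relabel_nodes(G, lambda v: ("copy2", v)))
    for v in G:
        for t in range(d - G.degree[v]):      # one filler per missing degree
            F = nx.relabel_nodes(Gd, lambda u, v=v, t=t: ("filler", v, t, u))
            F.remove_edge(("filler", v, t, x), ("filler", v, t, y))
            H = nx.union(H, F)
            H.add_edge(("copy1", v), ("filler", v, t, x))   # terminal at v^(1)
            H.add_edge(("copy2", v), ("filler", v, t, y))   # terminal at v^(2)
    return H

H = regularise(nx.path_graph(4), 3)           # usage: 3-regularise a path
assert all(deg == 3 for _, deg in H.degree)
```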
_Remark:_ In this construction, one can use any \(k\)-acyclic colourable \(d\)-regular graph in place of \(G_{d}\) to construct a filler gadget. We chose a fixed graph, namely \(G_{d}\), for definiteness.

_Proof of Guarantee._ If \(G^{\prime}\) is \(k\)-acyclic colourable, then its subgraph \(G\) is \(k\)-acyclic colourable. Conversely, suppose that \(G\) admits a \(k\)-acyclic colouring \(f\). We produce a \(k\)-colouring \(f^{\prime}\) of \(G^{\prime}\) as follows. First, colour both copies of \(G\) using \(f\). Next, we colour the filler gadgets. For every \(v\in V(G)\), (i) choose two distinct colours \(c_{1},c_{2}\in\{0,1,\ldots,k-1\}\setminus\{f(v)\}\) and a \(k\)-acyclic colouring \(h\) of \(G_{d}-xy\) such that \(h(x)=c_{1}\) and \(h(y)=c_{2}\), and (ii) use \(h\) to complete the colouring of the filler gadgets with terminals \(v^{(1)}\) and \(v^{(2)}\). If a filler gadget contains a path \(Q\) between its terminals \(v^{(1)}\) and \(v^{(2)}\), then \(f^{\prime}\) uses at least three colours (namely, \(c_{1},c_{2}\) and \(f(v)\)) on \(Q\), and thus \(Q\) is not bicoloured by \(f^{\prime}\). Therefore, paths such as \(Q\) cannot be part of any cycle in \(G^{\prime}\) bicoloured by \(f^{\prime}\). This ensures that \(f^{\prime}\) is a \(k\)-acyclic colouring of \(G^{\prime}\) since \(f^{\prime}\) colours the copies of \(G\) and the copies of \(G_{d}-xy\) in \(G^{\prime}\) using acyclic colouring schemes.

The graph \(G^{\prime}\) contains two copies of \(G\) and at most \(dn\) copies of the filler gadget. Since \(G_{d}\) is a fixed graph, \(G^{\prime}\) has at most \(2n+dn\cdot O(1)=O(n)\) vertices and \(2m+dn\cdot O(1)=O(m+n)\) edges, where \(m=|E(G)|\) and \(n=|V(G)|\). Thus, Construction 3 requires only time polynomial in \(m+n\).

Figure 7: The filler gadget in Construction 3 (except for the vertices \(x\) and \(y\), vertices and edges within the copy of \(G_{d}-xy\) are not displayed).

Figure 8: Example of Construction 3 with \(d=3\).

Due to Theorem 4, for all \(k\geq 3\) and \(d\geq k+1\), \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(d\). For \(k\geq 3\) and \(d\leq 2k-3\), Construction 3 establishes a reduction from \(k\)-Acyclic Colourability\((\Delta=d)\) to \(k\)-Acyclic Colourability\((d\)-regular\()\). Hence, for \(k\geq 3\) and \(d\leq 2k-3\), if \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(d\), then \(k\)-Acyclic Colourability is NP-complete for \(d\)-regular graphs. Clearly, if \(k\)-Acyclic Colourability is NP-complete for \(d\)-regular graphs, then \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(d\). Thus, we have the following theorem.

**Theorem 5**.: _For all \(k\geq 3\) and \(d\leq 2k-3\), \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(d\) if and only if \(k\)-Acyclic Colourability is NP-complete for \(d\)-regular graphs. In particular, for \(k\geq 4\) and \(k+1\leq d\leq 2k-3\), \(k\)-Acyclic Colourability is NP-complete for \(d\)-regular graphs. _

A modification of Construction 3 gives the following theorem.

**Theorem 6**.: _For all \(k\geq 3\) and \(d\leq 2k-3\), \(k\)-Acyclic Colourability is NP-complete for bipartite graphs of maximum degree \(d\) if and only if \(k\)-Acyclic Colourability is NP-complete for \(d\)-regular bipartite graphs.
In particular, for \(k\geq 4\) and \(k+1\leq d\leq 2k-3\), \(k\)-Acyclic Colourability is NP-complete for \(d\)-regular bipartite graphs._

Proof.: First, we prove that for \(k\geq 3\) and \(d\leq 2k-3\), if \(k\)-Acyclic Colourability is NP-complete for bipartite graphs of maximum degree \(d\), then \(k\)-Acyclic Colourability is NP-complete for \(d\)-regular bipartite graphs. As remarked earlier, any \(k\)-acyclic colourable \(d\)-regular graph \(H\) can replace the graph \(G_{d}\) in Construction 3 without affecting the guarantee in the construction. Hence, it is enough to prove the following claim to complete the proof.

_Claim:_ \(G^{\prime}\) is bipartite if \(G\) and \(H\) are bipartite.

To prove the claim, assume that \(G\) and \(H\) are bipartite. Let \(f\) be a \(2\)-colouring of \(G\) and \(h\) be a \(2\)-colouring of \(H\). Since \(xy\) is an edge in \(H\), we have \(h(x)\neq h(y)\). Without loss of generality, assume that \(h(x)=0\) and \(h(y)=1\). It suffices to produce a \(2\)-colouring \(f^{\prime}\) of \(G^{\prime}\). For each \(v\in V(G)\), assign \(f^{\prime}(v^{(1)})=f(v)\) and \(f^{\prime}(v^{(2)})=1-f(v)\). Next, for each \(v\in V(G)\), we colour the filler gadgets with terminals \(v^{(1)}\) and \(v^{(2)}\). If \(f^{\prime}(v^{(1)})=1\), colour the filler gadget by the scheme in Figure 9(a); otherwise, use the scheme in Figure 9(b). It is easy to verify that \(f^{\prime}\) is indeed a \(2\)-colouring of \(G^{\prime}\).

We know that for \(k\geq 4\) and \(d\geq k+1\), \(k\)-Acyclic Colourability is NP-complete for bipartite graphs of maximum degree \(d\). As a result, for \(k\geq 4\) and \(k+1\leq d\leq 2k-3\), \(k\)-Acyclic Colourability is NP-complete for \(d\)-regular bipartite graphs.

Figure 9: \(2\)-colouring schemes for the filler gadget when \(H\) is bipartite. For each vertex \(z\) in \(H-xy\), scheme (a) assigns colour \(h(z)\) whereas scheme (b) assigns colour \(1-h(z)\).

### Results on \(\boldsymbol{L_{a}^{(k)}}\)

Recall that for \(k\geq 3\), \(L_{a}^{(k)}\) is the least integer \(d\) such that \(k\)-Acyclic Colourability in graphs of maximum degree \(d\) is NP-complete. Bear in mind that we assume P \(\neq\) NP throughout this paper; thus, NP is partitioned into three classes: P, NPC and NPI [55]. If a problem in NP is not NP-complete (i.e., not in NPC), then it is either in P or in NPI. By the definition of \(L_{a}^{(k)}\), \(k\)-Acyclic Colourability(\(\Delta=d\)) is not NP-complete for \(d<L_{a}^{(k)}\), which means that the problem is either in P or in NPI (we do not know which is the case). Theorem 4 proved that for \(k\geq 3\), \(k\)-Acyclic Colourability is NP-complete for graphs of maximum degree \(k+1\), and thus \(L_{a}^{(k)}\leq k+1\). It is easy to observe that for \(d\leq 2\), the acyclic chromatic number of a graph of maximum degree \(d\) can be computed in polynomial time. Hence, \(L_{a}^{(k)}\geq 3\) for all \(k\geq 3\). Next, we show that \(0.38\,k^{3/4}<L_{a}^{(k)}\leq k+1\) for all \(k\geq 3\).

**Observation 4**.: _For \(d\leq 0.38\,k^{3/4}\), \(k\)-Acyclic Colourability is polynomial-time solvable for graphs of maximum degree \(d\). Hence, \(L_{a}^{(k)}>0.38\,k^{3/4}\) for all \(k\geq 3\)._

Proof.: The observation is trivially true for \(d\leq 2\). It suffices to prove the observation for \(d\geq 3\). Suppose that \(d\geq 3\). Sereni and Volec [57] proved that \(\chi_{a}(G)<2.835\,d^{\,4/3}+d\) for every graph \(G\) of maximum degree \(d\). Since \(d\geq 3\), we have \(d^{1/3}\geq 3^{1/3}>1/0.694\), and thus \(d<0.694\,d^{\,4/3}\).
Thus, \(\chi_{a}(G)<(2.835+0.694)d^{\,4/3}=3.529\,d^{\,4/3}\) for every graph \(G\) of maximum degree \(d\). Hence, when \(k\geq 3.529\,d^{\,4/3}\), every graph of maximum degree \(d\) is \(k\)-acyclic colourable. In other words, if \(d\leq(3.529)^{-3/4}k^{3/4}\), then every graph of maximum degree \(d\) is \(k\)-acyclic colourable. Note that \(0.38<(3.529)^{-3/4}\). Hence, if \(d\leq 0.38k^{3/4}\), then \(d\leq(3.529)^{-3/4}k^{3/4}\). Therefore, for \(d\leq 0.38\,k^{3/4}\), every graph of maximum degree \(d\) is \(k\)-acyclic colourable, and thus \(k\)-Acyclic Colourability is polynomial-time solvable for graphs of maximum degree \(d\). As a result, \(L_{a}^{(k)}>0.38\,k^{3/4}\) for all \(k\geq 3\).

By Corollary 1, \(\chi_{a}(G)\geq\lceil(d+3)/2\rceil\) for every \(d\)-regular graph \(G\). Hence, for \(k\geq 3\), \(k\)-Acyclic Colourability in \(d\)-regular graphs is polynomial-time solvable for each \(d\geq 2k-2\) (because the answer is always 'no').

Fix an integer \(k\geq 4\). Theorem 5 proved that for \(d\leq 2k-3\), \(k\)-Acyclic Colourability in graphs of maximum degree \(d\) is NP-complete if and only if \(k\)-Acyclic Colourability in \(d\)-regular graphs is NP-complete. By the definition of \(L_{a}^{(k)}\), \(k\)-Acyclic Colourability in graphs of maximum degree \(d\) is NP-complete for \(d=L_{a}^{(k)}\), and not NP-complete for \(d<L_{a}^{(k)}\). Hence, for \(d<L_{a}^{(k)}\), \(k\)-Acyclic Colourability in \(d\)-regular graphs is not NP-complete by Theorem 5. We know that \(k\)-Acyclic Colourability in graphs of maximum degree \(d\) is NP-complete for \(d\geq L_{a}^{(k)}\). As a result, for \(d\) in the range \(L_{a}^{(k)}\leq d\leq 2k-3\), \(k\)-Acyclic Colourability in \(d\)-regular graphs is also NP-complete by Theorem 5. Moreover, for \(d\geq 2k-2\), \(k\)-Acyclic Colourability in \(d\)-regular graphs is polynomial-time solvable (see the previous paragraph). Thus, we have the following theorem.

**Theorem 7**.: _For \(k\geq 4\), \(k\)-Acyclic Colourability in \(d\)-regular graphs is NP-complete if and only if \(L_{a}^{(k)}\leq d\leq 2k-3\). _

## 3 Unique Acyclic Colouring

In this section, we borrow gadgets from Construction 2 to obtain results on unique acyclic colouring. See Section 1.4 for definitions of problems related to unique colouring and unique acyclic colouring. We prove that for all \(k\geq 3\), Another \(k\)-Acyclic Colouring \([\mathcal{R}_{\mathrm{swap}}]\) and Another \(k\)-Acyclic Colouring \([\mathcal{R}_{\mathrm{swap+auto}}]\) are NP-complete and thus the corresponding unique solution problems are coNP-hard. We also show that Unique 3-Acyclic Colouring \([\mathcal{R}_{\mathrm{swap+auto}}]\) is coNP-hard for the class of bipartite graphs of maximum degree \(4\).

We start with a simple construction that enables us to transform Unique 3-Colouring \([\mathcal{R}_{\mathrm{swap}}]\) to Unique 3-Acyclic Colouring \([\mathcal{R}_{\mathrm{swap}}]\). Observation 3 proved that the biclique \(K_{k-1,k}\) has a unique \(k\)-acyclic colouring up to colour swaps. In particular, \(K_{2,3}\) has a unique 3-acyclic colouring up to colour swaps. The following construction makes use of this.

**Construction 4**.: _Input:_ A graph \(G\) of maximum degree \(8\). _Output:_ A 2-degenerate bipartite graph \(G^{\prime}\) of maximum degree \(24\). _Guarantee:_ The number of 3-colourings of \(G\) up to colour swaps equals the number of 3-acyclic colourings of \(G^{\prime}\) up to colour swaps.
_Steps:_ Replace each edge \(e=uv\) of \(G\) by a copy of the complete bipartite graph \(K_{2,3}\) with parts \(\{u,v\}\) and \(\{e_{1},e_{2},e_{3}\}\) where \(e_{1},e_{2}\) and \(e_{3}\) are newly introduced vertices. To produce a 2-degenerate ordering of \(V(G^{\prime})\), list the new vertices \(e_{i}\) followed by the members of \(V(G)\). Proof of Guarantee.: For each 3-colouring \(f\) of \(G\) that uses colours 0,1 and 2, there exists a unique 3-colouring extension of \(f\) into \(V(G^{\prime})\). The extension is unique because for each edge \(e=uv\) of \(G\), exactly one colour, namely the unique colour in \(\{0,1,2\}\setminus\{f(u),f(v)\}\), is available for \(e_{1},e_{2}\) and \(e_{3}\). Let \(\phi\) be the function that maps each 3-colouring of \(G\) to its unique 3-colouring extension into \(V(G^{\prime})\). Clearly, \(\phi\) is a function from the set of 3-colourings of \(G\) to the set of 3-colourings of \(G^{\prime}\), and \(\phi\) is one-one. We claim that \(\mathrm{Range}(\phi)\) is precisely the set of 3-acyclic colourings of \(G^{\prime}\). For every 3-colouring \(f\) of \(G\), we know that \(\phi(f)\) is a 3-acyclic colouring of \(G^{\prime}\) because every cycle in \(G^{\prime}\) contains a path of the form \(u,e_{i},v\) where \(u,v\in V(G)\) and \(uv\in E(G)\), and such paths are tricoloured by \(\phi(f)\) since \(\phi(f)(u)=f(u)\neq f(v)=\phi(f)(v)\). Moreover, each \(3\)-acyclic colouring of the complete bipartite graph \(K_{2,3}\) with parts \(\{u,v\}\) and \(\{e_{1},e_{2},e_{3}\}\) assigns different colours to \(u\) and \(v\) by the special case \(k=3\) of Observation 2. Thus, every \(3\)-acyclic colouring \(f^{\prime}\) of \(G^{\prime}\) has a preimage under \(\phi\), namely the restriction of \(f^{\prime}\) to \(V(G)\). This proves that \(\phi\) is onto. Thus, there exists a one-one function \(\phi\) from the set of \(3\)-colourings of \(G\) onto the set of \(3\)-acyclic colourings of \(G^{\prime}\). Furthermore, two \(3\)-colourings \(f_{1}\) and \(f_{2}\) of \(G\) are non-equivalent under colour swaps if and only if the \(3\)-acyclic colourings \(\phi(f_{1})\) and \(\phi(f_{2})\) of \(G^{\prime}\) are non-equivalent under colour swaps. **Theorem 8**.: _For 2-degenerate bipartite graphs of maximum degree 24, Another \(3\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\) is NP-complete and Unique \(3\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\) is coNP-hard._ Proof.: The reduction is from Another \(3\)-Colouring\(\left[\mathcal{R}_{\text{swap}}\right](\Delta=8)\). Let \((G,f)\) be an instance of the source problem. From \(G\), produce a graph \(G^{\prime}\) by Construction 4. In Construction 4, it is established that there is a bijection \(\phi\) from the set of \(3\)-colourings of \(G\) to the set of \(3\)-acyclic colourings of \(G^{\prime}\). In particular, \(f^{\prime}=\phi(f)\) is a \(3\)-acyclic colouring of \(G^{\prime}\). By the guarantee in Construction 4, the number of \(3\)-colourings of \(G\) up to colour swaps is equal to the number of \(3\)-acyclic colourings of \(G^{\prime}\) up to colour swaps. Therefore, \((G,f)\) is a yes instance of Another \(3\)-Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\) if and only if \((G^{\prime},f^{\prime})\) is a yes instance of Another \(3\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\). 
This proves that Another \(3\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\) is NP-complete for \(2\)-degenerate bipartite graphs of maximum degree 24, and thus the problem Unique \(3\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\) is coNP-hard for the same class.

Next, we show that Another \(k\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\) is NP-complete and Unique \(k\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\) is coNP-hard for all \(k\geq 3\). Let \(G\) be a graph, and let \(G^{\prime}\) be the graph obtained by adding a universal vertex to \(G\); that is, \(G^{\prime}\) is the graph join of \(G\) and \(K_{1}\). Fix an integer \(k\geq 3\). Clearly, \(G\) is \(k\)-acyclic colourable if and only if \(G^{\prime}\) is \((k+1)\)-acyclic colourable. Moreover, \(G\) admits two \(k\)-acyclic colourings \(f_{1}\) and \(f_{2}\) non-equivalent up to colour swaps (i.e., \((f_{1},f_{2})\notin\mathcal{R}_{\text{swap}}(G,k)\)) if and only if \(G^{\prime}\) admits two \((k+1)\)-acyclic colourings \(f^{\prime}_{1}\) and \(f^{\prime}_{2}\) non-equivalent up to colour swaps (i.e., \((f^{\prime}_{1},f^{\prime}_{2})\notin\mathcal{R}_{\text{swap}}(G^{\prime},k+1)\)). Thus, \(G\) admits a unique \(k\)-acyclic colouring up to colour swaps if and only if \(G^{\prime}\) admits a unique \((k+1)\)-acyclic colouring up to colour swaps. Hence, for \(k\geq 3\), the transformation from \((G,k)\) to \((G^{\prime},k+1)\) establishes a reduction from Another \(k\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\) to Another \((k+1)\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\). Thus, we have the following theorem by Theorem 8 and induction.

**Theorem 9**.: _For \(k\geq 3\), Another \(k\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\) is NP-complete and Unique \(k\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap}}\right]\) is coNP-hard. _

Finally, we prove that Another \(k\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap+auto}}\right]\) is NP-complete for all \(k\geq 3\). We prove this for \(k=3\) first by Construction 5 below, which establishes a reduction from Another \(3\)-Colouring\(\left[\mathcal{R}_{\text{swap}}\right](\Delta=8)\) to the problem Another \(3\)-Acyclic Colouring\(\left[\mathcal{R}_{\text{swap+auto}}\right](\Delta=4)\). Construction 5 is a slight modification of Construction 2.

**Construction 5**.: _Input:_ A graph \(G\) of maximum degree \(8\). _Output:_ A bipartite graph \(G^{*}\) of maximum degree \(4\). _Guarantee:_ \(G\) has a unique \(3\)-colouring up to colour swaps if and only if \(G^{*}\) has a unique \(3\)-acyclic colouring up to colour swaps and automorphisms. _Steps:_ Replace each vertex \(v\) of \(G\) by a chain gadget with \(3\deg_{G}(v)+\lambda(v)\) terminals, where \(\lambda\colon V(G)\to\mathbb{N}\) is defined in such a way that no two chain gadgets have the same number of terminals (one way to ensure this is to choose an ordering \(v_{1},v_{2},\ldots,v_{n}\) of the vertex set of \(G\) such that \(\deg_{G}(v_{1})\leq\deg_{G}(v_{2})\leq\cdots\leq\deg_{G}(v_{n})\), and define \(\lambda(v_{i})=i\) for \(1\leq i\leq n\)). For each \(v\in V(G)\) and each neighbour \(u\) of \(v\), the chain gadget for \(v\) (let us call it chain(\(v\))) has three terminals reserved for \(u\), which we shall call \(v_{u1}\), \(v_{u2}\) and \(v_{u3}\).
For each edge \(e=uv\) of \(G\) and each \(j\in\{1,2,3\}\), introduce a new vertex \(e_{j}\) in \(G^{*}\) and join \(e_{j}\) to \(u_{vj}\) as well as \(v_{uj}\). An example of the construction is shown in Figure 10.

Figure 10: Example of Construction 5 with \(k=3\) (here, \(\lambda(u)=3\), \(\lambda(v)=1\), \(\lambda(w)=4\) and \(\lambda(x)=2\)). Graph \(G^{*}\) is displayed large, and graph \(G\) is shown inset (for convenience, a graph of maximum degree \(3\) rather than \(8\) is used as \(G\)).

Proof of Guarantee.: First, we construct a surjective mapping \(\phi\) from the set of \(3\)-acyclic colourings of \(G^{*}\) to the set of \(3\)-colourings of \(G\). Then, we show that \(\phi\) gives a bijection from the set of \(3\)-acyclic colourings of \(G^{*}\) up to automorphisms to the set of \(3\)-colourings of \(G\). This proves that \(\phi\) is a bijection from the set of \(3\)-acyclic colourings of \(G^{*}\) up to colour swaps and automorphisms to the set of \(3\)-colourings of \(G\) up to colour swaps (that is, if \(\mathcal{R}^{*}\) is the equivalence relation \(\mathcal{R}_{\text{swap+auto}}(G^{*},3)\) restricted to the set of \(3\)-acyclic colourings of \(G^{*}\), then \(\phi\) is a bijection from the set of equivalence classes of \(\mathcal{R}^{*}\) to the set of equivalence classes of \(\mathcal{R}_{\mathrm{swap}}(G,3)\)).

Before constructing \(\phi\), we discuss the structure of automorphisms of \(G^{*}\). Let \(\mathrm{Aut}(G^{*},v)\) denote the set of automorphisms \(\psi\) of \(G^{*}\) such that \(\psi\) fixes all vertices not in \(\mathrm{chain}(v)\); i.e., \(\psi(x)=x\) for all \(x\in V(G^{*})\setminus V(\mathrm{chain}(v))\). Since no two chain gadgets have the same number of terminals, each automorphism of \(G^{*}\) is in a sense composed of automorphisms of the chain gadgets. This is formally expressed as Claim 1 below (see Section 5 in the supplementary material for proof).

**Claim 1:** For every automorphism \(\psi\) of \(G^{*}\), there exists an automorphism \(\psi_{v}\) for each \(v\in V(G)\) such that \(\psi\) equals the function composition of \(\psi_{v}\)'s (i.e., if \(V(G)=\{v_{1},v_{2},\ldots,v_{n}\}\), then for every automorphism \(\psi\) of \(G^{*}\), there exist \(\psi_{1}\in\mathrm{Aut}(G^{*},v_{1})\), ..., \(\psi_{n}\in\mathrm{Aut}(G^{*},v_{n})\) such that \(\psi=\psi_{1}\circ\psi_{2}\circ\cdots\circ\psi_{n}\)).

Moreover, we have the following claim (for a proof, see Claim 1.6 in Section 5 of the supplementary material).

**Claim 2:** For each vertex \(v\) of \(G\), each automorphism \(\psi\) of \(G^{*}\) maps each vertex in Level \(j\) of \(\mathrm{chain}(v)\) to some vertex in Level \(j\) of \(\mathrm{chain}(v)\), where \(j\in\mathbb{N}\).

Note that the chain gadget used here is the special case \(k=3\) of the chain gadget in Lemma 1. By Lemma 1, this chain gadget has exactly one \(3\)-acyclic colouring up to colour swaps and automorphisms. In particular, the terminals of a chain gadget get the same colour under a \(3\)-acyclic colouring (and we shall call this colour the _colour of the chain gadget_). Note that if \(G\) is given as input, the output graph \(G^{\prime}\) in Construction 2 with \(k=3\) is a subgraph of \(G^{*}\) (compare Figure 6 with Figure 10). Hence, as in Construction 2, for each edge \(uv\) of \(G\), the colour of \(\mathrm{chain}(u)\) differs from the colour of \(\mathrm{chain}(v)\) under each \(3\)-acyclic colouring \(f^{*}\) of \(G^{*}\).
Hence, for every \(3\)-acyclic colouring \(f^{*}\) of \(G^{*}\), there exists a corresponding \(3\)-colouring \(f\) of \(G\) such that \(f(v)\) equals the colour of \(\mathrm{chain}(v)\) for each \(v\in V(G)\). Let \(\phi\) be the function that maps each \(3\)-acyclic colouring \(f^{*}\) of \(G^{*}\) to this corresponding \(3\)-colouring \(f\) of \(G\). The next claim shows that \(\phi\) is onto.

**Claim 3:** Every \(3\)-colouring \(f\) of \(G\) has a preimage under \(\phi\).

Let \(f\) be a \(3\)-colouring of \(G\). We claim that the colouring \(f^{*}\) of \(G^{*}\) defined as follows is a preimage of \(f\) under \(\phi\): for each vertex \(v\) of \(G\), colour the chain gadget for vertex \(v\) by assigning the colour \(f(v)\) on vertices of Level \(2j\) and the remaining two colours on vertices of Level \(2j-1\) for each \(j\); whenever \(e=uv\) is an edge in \(G\), colour the vertices \(e_{1},e_{2}\) and \(e_{3}\) by the only colour different from both \(f(u)\) and \(f(v)\). Since the paths of the form \(u_{v\,j},e_{j},v_{u\,j}\) are tricoloured, any bicoloured cycle in \(G^{*}\) must be entirely within a chain gadget. But an acyclic colouring scheme is used on each chain gadget. Therefore, there is no cycle in \(G^{*}\) bicoloured by \(f^{*}\). This proves Claim 3.

**Claim 4:** If two \(3\)-acyclic colourings \(f_{1}^{*}\) and \(f_{2}^{*}\) of \(G^{*}\) are the same up to automorphisms, then \(\phi(f_{1}^{*})=\phi(f_{2}^{*})\).

Let \(f_{1}^{*}\) and \(f_{2}^{*}\) be \(3\)-acyclic colourings of \(G^{*}\) which are the same up to automorphisms. That is, there exists an automorphism \(\psi\) of \(G^{*}\) such that \(f_{1}^{*}(x)=f_{2}^{*}(\psi(x))\) for all \(x\in V(G^{*})\). By the special case \(k=3\) of Lemma 1, the chain gadget has exactly one \(3\)-acyclic colouring up to colour swaps and automorphisms, namely the colouring in Figure 3(b). Observe that all even-level vertices have the same colour in Figure 3(b). Also, observe that every automorphism of the chain gadget maps vertices on Level \(j\) to vertices on Level \(j\) for each \(j\in\mathbb{N}\) (for a proof, see Claim 1.5 in Section 5 of the supplementary material). Hence, for every \(3\)-acyclic colouring of the chain gadget, there is a colour \(c\) such that all even-level vertices of the chain gadget are coloured \(c\). In particular, for each \(v\in V(G)\) and each \(i\in\{1,2\}\), there is a colour \(c_{v}^{(i)}\) such that \(f_{i}^{*}\) assigns colour \(c_{v}^{(i)}\) on all even-level vertices of \(\mathrm{chain}(v)\). Consider an arbitrary vertex \(v\) of \(G\) and an arbitrary vertex \(x_{v}\) at Level \(2j\) of \(\mathrm{chain}(v)\) for some \(j\in\mathbb{N}\). We know that \(f_{i}^{*}\) assigns colour \(c_{v}^{(i)}\) on even-level vertices of \(\mathrm{chain}(v)\) for \(i\in\{1,2\}\). In particular, \(f_{1}^{*}(x_{v})=c_{v}^{(1)}\). Since \(\psi\) maps \(x_{v}\) to a vertex in Level \(2j\) of \(\mathrm{chain}(v)\), \(f_{2}^{*}(\psi(x_{v}))=c_{v}^{(2)}\). Since \(f_{1}^{*}(x)=f_{2}^{*}(\psi(x))\) for all \(x\in V(G^{*})\), we have \(c_{v}^{(1)}=f_{1}^{*}(x_{v})=f_{2}^{*}(\psi(x_{v}))=c_{v}^{(2)}\). For \(i\in\{1,2\}\), \(f_{i}^{*}\) assigns colour \(c_{v}^{(i)}\) on even-level vertices of \(\mathrm{chain}(v)\) and in particular terminals of \(\mathrm{chain}(v)\). Thus, \(c_{v}^{(i)}\) is the colour of \(\mathrm{chain}(v)\) under \(f_{i}^{*}\) for \(i\in\{1,2\}\). Hence, \((\phi(f_{i}^{*}))(v)=c_{v}^{(i)}\) for \(i\in\{1,2\}\). Since \(c_{v}^{(1)}=c_{v}^{(2)}\), we have \((\phi(f_{1}^{*}))(v)=(\phi(f_{2}^{*}))(v)\).
Since \(v\) is arbitrary, \(\phi(f_{1}^{*})=\phi(f_{2}^{*})\). This proves Claim 4.

**Claim 5:** If \(\phi(f_{1}^{*})=\phi(f_{2}^{*})\) for two \(3\)-acyclic colourings \(f_{1}^{*}\) and \(f_{2}^{*}\) of \(G^{*}\), then \(f_{1}^{*}\) and \(f_{2}^{*}\) are the same up to automorphisms.

Let \(f_{1}^{*}\) and \(f_{2}^{*}\) be two \(3\)-acyclic colourings of \(G^{*}\), and let \(\phi(f_{1}^{*})=\phi(f_{2}^{*})\). Consider an arbitrary vertex \(v\) of \(G\). Suppose that \(\mathrm{chain}(v)\) has \(\ell\) terminals. Since \(\phi(f_{1}^{*})=\phi(f_{2}^{*})\), the colour of \(\mathrm{chain}(v)\) under \(f_{1}^{*}\) is equal to the colour of \(\mathrm{chain}(v)\) under \(f_{2}^{*}\). By the special case \(k=3\) of Lemma 1, the chain gadget has exactly one \(3\)-acyclic colouring up to colour swaps and automorphisms, namely the colouring in Figure 3(b). Observe that in Figure 3(b), (i) all even-level vertices have colour \(0\), and (ii) for \(k=3\), for each \(j\in\{1,2,\ldots,\ell\}\), vertices at Level \(2j-1\) are assigned colours \(1\) and \(2\) from left to right. Also, observe that every automorphism of the chain gadget maps vertices on Level \(j\) to vertices on Level \(j\) for each \(j\in\mathbb{N}\) (for a proof, see Claim 1.5 in Section 5 of the supplementary material). Hence, for every \(3\)-acyclic colouring of \(\mathrm{chain}(v)\), there is a colour \(c\) such that (i) all even-level vertices (of \(\mathrm{chain}(v)\)) are coloured \(c\), and (ii) for each \(j\in\{1,2,\ldots,\ell\}\), vertices at Level \(2j-1\) are assigned the two colours in \(\{0,1,2\}\setminus\{c\}\) in some order. Thus, for \(i\in\{1,2\}\), \(f_{i}^{*}\) assigns the same colour, say colour \(c_{i}\), on even-level vertices of \(\mathrm{chain}(v)\) and in particular on the terminals of \(\mathrm{chain}(v)\). Since the colour on terminals of \(\mathrm{chain}(v)\) under \(f_{1}^{*}\) (i.e., the colour of \(\mathrm{chain}(v)\) under \(f_{1}^{*}\)) is equal to the colour on terminals of \(\mathrm{chain}(v)\) under \(f_{2}^{*}\), we have \(c_{1}=c_{2}\). That is, both \(f_{1}^{*}\) and \(f_{2}^{*}\) assign the same colour, say colour \(c\), on even-level vertices of \(\mathrm{chain}(v)\). Owing to this and the fact that both \(f_{1}^{*}\) and \(f_{2}^{*}\) assign a permutation of colours \(\{0,1,2\}\setminus\{c\}\) on vertices at Level \(2j-1\) of \(\mathrm{chain}(v)\) for each \(j\in\{1,2,\ldots,\ell\}\), \(f_{1}^{*}\) restricted to the vertex set of \(\mathrm{chain}(v)\) (i.e., \(f_{1\,|V(\mathrm{chain}(v))}^{*}\)) can be obtained from \(f_{2\,|V(\mathrm{chain}(v))}^{*}\) by applying a permutation of colours on the set of vertices at Level \(2j-1\) for each \(j\in\{1,2,\ldots,\ell\}\). Since applying a permutation of colours on the set of vertices at Level \(2j-1\) for each \(j\in\{1,2,\ldots,\ell\}\) corresponds to an automorphism of \(\mathrm{chain}(v)\) (see Figure 2 in the supplement for a demonstration), there exists an automorphism \(\psi_{v}^{*}\) of \(\mathrm{chain}(v)\) such that \(f_{1\,|V(\mathrm{chain}(v))}^{*}=f_{2\,|V(\mathrm{chain}(v))}^{*}\circ\psi_{v}^{*}\). Hence, \(f_{1}^{*}(x)=f_{2}^{*}(\psi_{v}^{*}(x))\) for each \(x\in V(\mathrm{chain}(v))\). Define \(\psi_{v}\colon V(G^{*})\to V(G^{*})\) as \(\psi_{v}(x)=\psi_{v}^{*}(x)\) for each \(x\in V(\mathrm{chain}(v))\) and \(\psi_{v}(x)=x\) otherwise. Clearly, \(f_{1}^{*}(x)=f_{2}^{*}(\psi_{v}(x))\) for each \(x\in V(\mathrm{chain}(v))\).
Define \(\psi\) as the function composition of \(\psi_{v}\)'s (i.e., \(\psi=\psi_{v_{1}}\circ\psi_{v_{2}}\circ\cdots\circ\psi_{v_{n}}\) if \(V(G)=\{v_{1},v_{2},\ldots,v_{n}\}\)). Since \(f_{1}^{*}(x)=f_{2}^{*}(\psi_{v}(x))\) for each \(x\in V(\mathrm{chain}(v))\) and \(\psi_{v}\) is an automorphism of \(G^{*}\) that fixes vertices not in \(\mathrm{chain}(v)\) for each \(v\in V(G)\), we have \(f_{1}^{*}(x)=f_{2}^{*}(\psi(x))\) for each vertex \(x\) in some chain gadget of \(G^{*}\). To prove that \(f_{1}^{*}=f_{2}^{*}\circ\psi\), it suffices to show that \(f_{1}^{*}(x)=f_{2}^{*}(x)\) for each vertex \(x\) of \(G^{*}\) which is not in any gadget; i.e., \(x\) is a vertex of the form \(e_{j}^{(t)}\) in Figure 10, where \(e^{(t)}=uv\) is an edge in \(G\). Recall that the vertex \(e_{j}^{(t)}\) is adjacent to both \(u_{vj}\) and \(v_{uj}\) in \(G^{*}\). For \(i\in\{1,2\}\), \(\phi(f_{i}^{*})\) is a \(3\)-colouring of \(G\), and thus \(f_{i}^{*}(u_{vj})=(\phi(f_{i}^{*}))(u)\neq(\phi(f_{i}^{*}))(v)=f_{i}^{*}(v_{uj})\). For \(i\in\{1,2\}\), since \(f_{i}^{*}(u_{vj})\neq f_{i}^{*}(v_{uj})\), the colour of \(e_{j}^{(t)}\) under \(f_{i}^{*}\) is the unique colour in \(\{0,1,2\}\setminus\{f_{i}^{*}(u_{vj}),f_{i}^{*}(v_{uj})\}\). In other words, the colour of \(e_{j}^{(t)}\) under \(f_{i}^{*}\) is the unique colour in \(\{0,1,2\}\setminus\{(\phi(f_{i}^{*}))(u),(\phi(f_{i}^{*}))(v)\}\). Since \(\phi(f_{1}^{*})=\phi(f_{2}^{*})\), the unique colour in \(\{0,1,2\}\setminus\{(\phi(f_{1}^{*}))(u),(\phi(f_{1}^{*}))(v)\}\) is the same as the unique colour in \(\{0,1,2\}\setminus\{(\phi(f_{2}^{*}))(u),(\phi(f_{2}^{*}))(v)\}\). That is, \(f_{1}^{*}(e_{j}^{(t)})=f_{2}^{*}(e_{j}^{(t)})\). This completes the proof of \(f_{1}^{*}=f_{2}^{*}\circ\psi\), and thus proves Claim 5.

By Claim 4 and Claim 5, \(\phi\) gives a bijection from the set of \(3\)-acyclic colourings of \(G^{*}\) up to automorphisms to the set of \(3\)-colourings of \(G\). Therefore, \(\phi\) also gives a bijection from the set of \(3\)-acyclic colourings of \(G^{*}\) up to colour swaps and automorphisms to the set of \(3\)-colourings of \(G\) up to colour swaps. In particular, the number of \(3\)-colourings of \(G\) up to colour swaps is equal to the number of \(3\)-acyclic colourings of \(G^{*}\) up to colour swaps and automorphisms. This proves the guarantee.

Let us consider the time complexity of Construction 5. Assume that \(V(G)=\{v_{1},v_{2},\ldots,v_{n}\}\), where \(\deg_{G}(v_{1})\leq\deg_{G}(v_{2})\leq\cdots\leq\deg_{G}(v_{n})\) and \(\lambda(v_{i})=i\) for \(1\leq i\leq n\). Let \(m=|E(G)|\). For \(1\leq i\leq n\), since \(\mathrm{chain}(v_{i})\) has \(3\deg_{G}(v_{i})+i\) terminals, \(\mathrm{chain}(v_{i})\) has \(O(\deg_{G}(v_{i})+n)\) vertices and edges. There are \(3m\) vertices and \(6m\) edges that are not in any chain gadget in \(G^{*}\). Thus, \(G^{*}\) has \(O(m+n^{2})\) vertices and \(O(m+n^{2})\) edges. Hence, Construction 5 requires only time polynomial in \(n\).

Construction 5 establishes a reduction from the problem Another \(3\)-Colouring \([\mathcal{R}_{\mathrm{swap}}]\,(\Delta=8)\) to the problem Another \(3\)-Acyclic Colouring \([\mathcal{R}_{\mathrm{swap+auto}}]\,(\Delta=4)\). Since Another \(3\)-Colouring \([\mathcal{R}_{\mathrm{swap}}]\,(\Delta=8)\) is NP-complete, so is Another \(3\)-Acyclic Colouring \([\mathcal{R}_{\mathrm{swap+auto}}]\,(\Delta=4)\). Note that there is no universal vertex in the output graph in Construction 5.
Hence, we have the following corollary.

**Corollary 3**.: Another \(3\)-Acyclic Colouring\([\mathcal{R}_{swap+auto}]\) is NP-complete for graphs without any universal vertex.

Next, we show that for \(k\geq 4\), Another \(k\)-Acyclic Colouring\([\mathcal{R}_{\mathrm{swap+auto}}]\) is NP-complete.

**Construction 6**.: _Parameter:_ A positive integer \(q\). _Input:_ A graph \(G\) without any universal vertex. _Output:_ A graph \(G^{\prime}\). _Guarantee:_ \(G\) has a unique \(3\)-acyclic colouring up to colour swaps and automorphisms if and only if \(G^{\prime}\) has a unique \((q+3)\)-acyclic colouring up to colour swaps and automorphisms. _Steps:_ Let \(v_{1},v_{2},\ldots,v_{n}\) be the vertices in \(G\). Let \(H\) be a graph isomorphic to \(K_{q}\) with vertex set \(\{u_{1},u_{2},\ldots,u_{q}\}\). To construct \(G^{\prime}\), introduce a copy of \(G\) and a copy of \(H\), and join each \(u_{j}\) to each \(v_{i}\) for \(1\leq i\leq n\) and \(1\leq j\leq q\) (i.e., \(G^{\prime}\) is the graph join of \(G\) and \(K_{q}\); see Figure 12).

_Proof of Guarantee._ Let \(U=\{u_{1},u_{2},\ldots,u_{q}\}\) and \(V=\{v_{1},v_{2},\ldots,v_{n}\}\). Since there is no universal vertex in \(G\), the set \(U\) is precisely the set of universal vertices in \(G^{\prime}\). First, let us consider the structure of automorphisms of \(G^{\prime}\). Let \(\psi^{\prime}\) be an automorphism of \(G^{\prime}\). Since automorphisms preserve the vertex degrees [32, Lemma 1.3.1], \(\psi^{\prime}\) maps each universal vertex in \(G^{\prime}\) to a universal vertex in \(G^{\prime}\) (i.e., \(\psi^{\prime}(u_{j})\in U\) for all \(u_{j}\in U\)). Hence, \(\psi^{\prime}\) maps each vertex in \(V\) to a vertex in \(V\) (i.e., \(\psi^{\prime}_{|V}\) is a bijection from \(V\) to itself). Since \(\psi^{\prime}\) is an automorphism of \(G^{\prime}\) and its restriction to \(V\) is a bijection from \(V\) to itself, \(\psi^{\prime}\) restricted to \(V\) is an automorphism of \(G^{\prime}[V]\). Since \(G\cong G^{\prime}[V]\), we have the following.

**Claim 1:** For every automorphism \(\psi^{\prime}\) of \(G^{\prime}\), the restriction of \(\psi^{\prime}\) to \(V(G)\) is an automorphism of \(G\).

In the reverse direction, let \(\psi\) be an automorphism of \(G\). Define \(\psi^{\prime}\colon V(G^{\prime})\to V(G^{\prime})\) as \(\psi^{\prime}(v_{i})=\psi(v_{i})\) for \(1\leq i\leq n\) and \(\psi^{\prime}(u_{j})=u_{j}\) for \(1\leq j\leq q\). Clearly, \(\psi^{\prime}\) is a bijection from \(V(G^{\prime})\) to itself. Next, we show that \(\psi^{\prime}\) preserves adjacency as well as non-adjacency. Since \(\psi=\psi^{\prime}_{|V(G)}\), it is easy to verify that for \(1\leq i<j\leq n\), \(\psi^{\prime}(v_{i})\psi^{\prime}(v_{j})\in E(G^{\prime})\) if and only if \(v_{i}v_{j}\in E(G^{\prime})\). Each \(u_{j}\in U\) is a universal vertex in \(G^{\prime}\) and \(\psi^{\prime}(u_{j})=u_{j}\); as a result, \(u_{j}x\in E(G^{\prime})\) and \(\psi^{\prime}(u_{j})\psi^{\prime}(x)\in E(G^{\prime})\) for all \(x\in V(G^{\prime})\). Therefore, \(\psi^{\prime}\) preserves adjacency as well as non-adjacency, and thus \(\psi^{\prime}\) is an automorphism of \(G^{\prime}\). Thus, we have the following claim.

**Claim 2:** For every automorphism \(\psi\) of \(G\), the extension \(\psi^{\prime}\) of \(\psi\) into \(V(G^{\prime})\) defined as \(\psi^{\prime}(v_{i})=\psi(v_{i})\) for \(1\leq i\leq n\) and \(\psi^{\prime}(u_{j})=u_{j}\) for \(1\leq j\leq q\) is an automorphism of \(G^{\prime}\).

We are ready to prove the guarantee.
Since \(G^{\prime}\) is the graph join of \(K_{q}\) and \(G\), we have \(\chi_{a}(G^{\prime})=\min\{\chi_{a}(K_{q})+n,q+\chi_{a}(G)\}\) by [46, Lemma 2.1] and thus \(\chi_{a}(G^{\prime})=\min\{q+n,q+\chi_{a}(G)\}=q+\chi_{a}(G)\). Hence, \(G\) is \(3\)-acyclic colourable if and only if \(G^{\prime}\) is \((q+3)\)-acyclic colourable. To complete the proof of the guarantee, it suffices to prove the following claim.

**Claim 3:** \(G\) has two \(3\)-acyclic colourings \(f_{1}\) and \(f_{2}\) such that \((f_{1},f_{2})\notin\mathcal{R}_{\mathrm{swap+auto}}(G,3)\) if and only if \(G^{\prime}\) has two \((q+3)\)-acyclic colourings \(f^{\prime}_{1}\) and \(f^{\prime}_{2}\) such that \((f^{\prime}_{1},f^{\prime}_{2})\notin\mathcal{R}_{\mathrm{swap+auto}}(G^{\prime},q+3)\).

To prove Claim 3, suppose that \(G\) admits two \(3\)-acyclic colourings \(f_{1}\) and \(f_{2}\) which are not the same up to colour swaps and automorphisms (i.e., \((f_{1},f_{2})\notin\mathcal{R}_{\mathrm{swap+auto}}(G,3)\)). Without loss of generality, assume that \(f_{1}\) and \(f_{2}\) use colours \(0,1\) and \(2\). For \(\ell\in\{1,2\}\), define \(f^{\prime}_{\ell}\colon V(G^{\prime})\to\{0,1,\ldots,q+2\}\) as \(f^{\prime}_{\ell}(v_{i})=f_{\ell}(v_{i})\) for \(1\leq i\leq n\) and \(f^{\prime}_{\ell}(u_{j})=j+2\) for \(1\leq j\leq q\). Clearly, \(f^{\prime}_{1}\) and \(f^{\prime}_{2}\) are \((q+3)\)-colourings of \(G^{\prime}\), and \(f_{\ell}=f^{\prime}_{\ell\,|V(G)}\) for \(\ell\in\{1,2\}\). Thus, we have the following theorem.

**Theorem 12**.: _For \(k\geq 3\), Another \(k\)-Acyclic Colouring\([\mathcal{R}_{swap+auto}]\) is NP-complete, and thus Unique \(k\)-Acyclic Colouring\([\mathcal{R}_{swap+auto}]\) is coNP-hard. _

## 4 Open Problems and Related Works

Many problems related to acyclic colouring are open even for the class of cubic graphs. Grünbaum [35] proved that cubic graphs are 4-acyclic colourable (see also [59]). So, \(3\leq\chi_{a}(G)\leq 4\) for every cubic graph \(G\) which is not a forest. Yet, it is unknown whether we can distinguish between the cases \(\chi_{a}(G)=3\) and \(\chi_{a}(G)=4\) in polynomial time.

**Problem 2** ([6]).: _What is the complexity of \(3\)-Acyclic Colourability in cubic graphs?_

We know that there are infinitely many cubic graphs that are 3-acyclic colourable and infinitely many that are not [6, Lemma 2]. Cheng et al. [19] designed a polynomial-time algorithm that finds an optimal acyclic colouring of a subcubic claw-free graph. They also proved that there are exactly three subcubic claw-free graphs that require four colours for acyclic colouring. Zhang and Bylka [63] proved that every cubic line graph except \(K_{4}\) is 3-acyclic colourable. According to a conjecture of Zhu et al. [64], 3-Acyclic Colourability is in P when restricted to cubic planar 3-connected graphs.
**Conjecture 1** ([64]).: _Every cubic planar 3-connected graph except \(K_{4}\), \(Q_{3}\) and the dual graph of \(P_{4}\,\square\,K_{2}\) (see Figure 13) is 3-acyclic colourable._

Regarding the class of bounded-degree graphs, the problem of determining the exact value of \(L_{a}^{(k)}\) for each \(k\geq 3\) remains open. For \(k\geq 4\), finding the value of \(L_{a}^{(k)}\) suffices to characterise the values \(d\) for which \(k\)-Acyclic Colourability in \(d\)-regular graphs is NP-complete (see Theorem 7).

## 5 Acknowledgement

We thank Emil Jeřábek for his valuable comments.
2308.16654
Beyond domain alignment: Revealing the effect of intrinsic magnetic order on electrochemical water splitting
To reach a long term viable green hydrogen economy, rational design of active oxygen evolution reaction (OER) catalysts is critical. An important hurdle in this reaction originates from the fact that the reactants are singlet molecules, whereas the oxygen molecule has a triplet ground state with parallel spin alignment, implying that magnetic order in the catalyst is essential. Accordingly, multiple experimentalists reported a positive effect of external magnetic fields on OER activity of ferromagnetic catalysts. However, it remains a challenge to investigate the influence of the intrinsic magnetic order on catalytic activity. Here, we tuned the intrinsic magnetic order of epitaxial La$_{0.67}$Sr$_{0.33}$MnO$_{3}$ thin film model catalysts from ferro- to paramagnetic by changing the temperature in-situ during water electrolysis. Using this strategy, we show that ferromagnetic ordering below the Curie temperature enhances OER activity. Moreover, we show a slight current density enhancement upon application of an external magnetic field and find that the dependence of magnetic field direction correlates with the magnetic anisotropy in the catalyst film. Our work thus suggests that both the intrinsic magnetic order in La$_{0.67}$Sr$_{0.33}$MnO$_{3}$ films and magnetic domain alignment increase their catalytic activity. We observe no long-range magnetic order at the catalytic surface, implying that the OER enhancement is connected to the magnetic order of the bulk catalyst. Combining the effects found with existing literature, we propose a unifying picture for the spin-polarized enhancement in magnetic oxide catalysts.
Emma van der Minne, Lucas Korol, Lidewij M. A. Krakers, Michael Verhage, Carlos M. M. Rosário, Thijs J. Roskamp, Raymond J. Spiteri, Chiara Biz, Mauro Fianchini, Guus Rijnders, Kees Flipse, Jose Gracia, Guido Mul, Hans Hilgenkamp, Robert J. Green, Gertjan Koster, Christoph Baeumer
2023-08-31T11:53:39Z
http://arxiv.org/abs/2308.16654v1
Beyond domain alignment: Revealing the effect of intrinsic magnetic order on electrochemical water splitting

###### Abstract

To reach a long-term viable green hydrogen economy, rational design of active oxygen evolution reaction (OER) catalysts is critical. An important hurdle in this reaction originates from the fact that the reactants are singlet molecules, whereas the oxygen molecule has a triplet ground state with parallel spin alignment, implying that magnetic order in the catalyst is essential. Accordingly, multiple experimentalists reported a positive effect of _external_ magnetic fields on OER activity of ferromagnetic catalysts. However, it remains a challenge to investigate the influence of the _intrinsic_ magnetic order on catalytic activity. Here, we tuned the intrinsic magnetic order of epitaxial La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) thin film model catalysts from ferro- to paramagnetic by changing the temperature in-situ during water electrolysis. Using this strategy, we show that ferromagnetic ordering below the Curie temperature enhances OER activity. Moreover, we show a slight current density enhancement upon application of an external magnetic field and find that the dependence of magnetic field direction correlates with the magnetic anisotropy in the catalyst film. Our work thus suggests that both the intrinsic magnetic order in La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films and magnetic domain alignment increase their catalytic activity. We observe no long-range magnetic order at the catalytic surface, implying that the OER enhancement is connected to the magnetic order of the bulk catalyst. Combining the effects found with existing literature, we propose a unifying picture for the spin-polarized enhancement in magnetic oxide catalysts.

## Introduction

To establish a sustainable energy infrastructure, efficient energy storage systems are of the highest interest to overcome the intermittent nature of renewable energy sources. Green hydrogen is one of the most promising fuels for energy storage and an ideal feedstock for coupling renewable energy with other sectors like the chemical and the steel industries.[1, 2] However, production of green hydrogen via water electrolysis suffers from sluggish kinetics in the oxygen evolution reaction (OER).[3, 4, 5]
This might be connected to the fact that the reactants, OH\({}^{-}\) or H\({}_{2}\)O, are diamagnetic, but the final product, triplet O\({}_{2}\), is paramagnetic.[6, 7] Accordingly, recent theoretical investigations suggest that the spin-polarized orbital configurations in ferromagnetic bonds increase OER efficiency by promoting the generation of triplet oxygen by quantum spin-exchange interactions (QSEI) and intrinsic spin filtering through exchange splitting of the energy levels in the conduction band of a magnetic material.[6, 7, 8, 9, 10] From the orbital physics of correlated electrons, it is hypothesized that maximum OER activity can be obtained at catalytic surfaces with dominant metallic ferromagnetic behavior.[9, 10] Following this idea, multiple experimental studies reported a positive effect of external magnetic fields on OER activity of ferromagnetic catalysts, suggesting several possible explanations.[6, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] An increase in spin polarization at the active sites was suspected to reduce the charge transfer resistance at the electrode-electrolyte interface and increase spin selective adsorption.[6, 16, 20] The removal of domain walls may reduce domain wall scattering and thus lower the magnetoresistance and increase the amount of ferromagnetically coupled reactive sites.[14, 15] An increase in the ferromagnetic exchange field between antiferromagnetic or paramagnetic catalyst surfaces and ferromagnetic subsurface layers can increase the extent of spin order at the reaction sites.[17, 18] Lastly, the magnetoresistance effect may increase the activity by decreasing the electronic resistance in the catalyst.[11] From the absence of significant OER enhancement on purely paramagnetic catalysts, it was hypothesized that the effects of Lorentz and Kelvin forces can be excluded as dominant factors for magnetic field enhanced OER activity.[6, 12, 21]

Although it has been shown that _external_ field application on open-shell catalysts can enhance OER activity, it remains a challenge to investigate the influence of the _intrinsic_ magnetic order in catalysts. Initial attempts to study the effects of intrinsic magnetic order have shown that a higher saturation magnetization[22, 23], the occurrence of spin channels[24], a higher spin magnetic moment on the active sites[25, 26, 27, 28], and the introduction of ferromagnetism[26] can enhance either the OER activity directly, or its response to magnetic field exposure. However, to date these changes in magnetic order were accompanied by a change in either the composition, the crystal structure, or the crystal symmetry of the catalyst, such that it remains a challenge to pinpoint the observed effects to the intrinsic magnetic order.

In this article, we introduce a strategy to vary the magnetic order without applied magnetic field during catalysis and without changing the crystal structure. We employed epitaxial La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) thin films grown by pulsed laser deposition (PLD)[29] as model catalysts with Curie temperatures (\(T_{c}\)) slightly above and below room temperature. This enabled us to change the magnetic order of the catalyst from ferromagnetic to paramagnetic in-situ during water electrolysis by changing the temperature.
At \(T_{c}\), a change of catalytic activity is expected, connected to the disturbance of the inter-atomic exchange interactions due to thermal fluctuations, also known as the Hedvall effect, as observed for a range of other catalytic processes.[30] Generally, one expects a decrease in current density with decreasing temperature because of smaller thermal energy. By comparing the para- and ferromagnetic films we show that for ferromagnetic films, the current densities below the Curie temperature were higher than expected if only temperature-dependent effects are considered. This indicates that the presence of ferromagnetic ordering below the Curie temperature, i.e., the intrinsic magnetic order, indeed enhances OER activity. The importance of ferromagnetic order is further demonstrated by an enhancement of OER activity for the same ferromagnetic film upon alignment of magnetic domains with an external magnetic field. We show a correlation between the magnetic anisotropy in our catalyst and the external magnetic field enhancement. Our work thus suggests that both the intrinsic magnetic order in La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films and the magnetic domain alignment upon external field exposure increase the catalytic activity. Moreover, the long-range magnetic order at the catalytic surface is strongly suppressed, implying that the OER enhancement is related to the magnetic ordering of the catalyst bulk, rather than lateral long-range order in the surface layer. Combining our observations with existing literature, we propose a unifying picture for the spin-polarized enhancement in magnetic oxide catalysts.

## Results and Discussion

To study the effects of intrinsic magnetic order, one needs a model system in which physical material properties can be changed without changing composition or crystal structure. Epitaxial thin films, in which highly controlled synthesis helps fine-tune physical properties and electrochemical functionality while maintaining composition and structure, have emerged as an ideal platform to identify structure-function relationships at the atomic level.[31, 32] Here, we selected La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\), which is a half metal at room temperature associated with a ferromagnetic double exchange mechanism from (intra- and inter-atomic) QSEI.[33, 34] Epitaxial thin films of this material enable control of the magnetic properties through the thickness of the film. We utilized this property to tune the Curie temperature, similar to the approach described in Ref. [29]. Using this approach, we were able to obtain a film in which we could change the magnetic order during OER by changing the temperature. Moreover, we use the strain-induced magnetic anisotropy in the film to further investigate the effect of domain alignment and the magnitude of the magnetization in our films.

We synthesized epitaxial thin La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films of 10- and 13-unit cell (uc) thickness, using PLD parameters similar to Ref. [29]. For all films, the growth proceeds in a two-dimensional (2D) layer-by-layer manner, as demonstrated by in-situ reflection high energy electron diffraction (RHEED, Figure 1(a)), and the resulting surface morphologies exhibit the characteristic vicinal step-terraced structure also observed for the substrate surface, with small corrugations/islands/decorations on each terrace (Figure 1(c)).

Figure 1: _RHEED and X-ray diffraction of La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) thin films. (a) RHEED intensity during growth of the 10 uc and 13 uc thick films, with clear oscillations indicating a 2D layer-by-layer growth.
(b) RHEED diffraction pattern of the substrate and 13 uc thick film. The equal distance between diffraction spots indicates epitaxial growth. The slight elongation of the spots indicates roughening of the film. (c) AFM image of the 13 uc thick film. (d) Wide 2\(\theta\)-\(\omega\) scan of the 13 uc thick film, which reveals a single phase of 001 La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\). (e) 2\(\theta\)-\(\omega\) scan around the 002 SrTiO\({}_{3}\) peak of the 13 uc thick film with pronounced Laue fringes. (f) Reciprocal space map of the 103 peak of SrTiO\({}_{3}\). As the film peak lies at the same value of \(q_{x}\) as the substrate, the film is fully strained on the substrate._

Clear RHEED oscillations during growth indicate the intended thicknesses of 10 uc and 13 uc (Figure 1(a)), further confirmed by X-ray reflectivity (XRR), which revealed thicknesses of \(\sim\)3.6 nm and \(\sim\)4.8 nm, respectively (Supplementary figure 3). X-ray diffraction measurements reveal a single La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) 001 phase (Figure 1(d)). Moreover, the existence of clear Laue fringes indicates a high bulk crystalline quality of the film (Figure 1(e)). Reciprocal space mapping confirms coherent strain to the substrate (Figure 1(f)).

To further investigate the surface structure in our La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films, resonant X-ray reflectivity (RXR) measurements were performed. Using this technique, the depth dependence of the atomic densities of each of the components in a thin film can be probed. From the obtained unit-cell resolved stoichiometry of the surface (as shown in Figure 2(a) and (b)), we observe that both films have a similar bulk and surface stoichiometry: both films are B-site terminated and have Sr enrichment and La deficiency at the surface (similar to ref. [35]). Moreover, a clear indication for oxygen vacancies at the surface is found, which could be the reason for the Mn\({}^{2+}\) formation shown in Figure 2(c).

Figure 2: Unit-cell resolved model obtained from resonant X-ray reflectivity measurements of the films, which shows the atomic density of the different elements O, La, Sr and Mn (total or split up between Mn\({}^{2+}\) and Mn\({}^{3+}\)) as a function of the distance from the surface (see methods for further information) for (a) the 10 uc thick La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) film and (b) the 13 uc thick La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) film. Measurements were performed at 150 K. (c) Unit-cell resolved model obtained from resonant circularly polarized X-ray reflectivity measurements, which shows the magnetization in the surface region of the film as a function of the distance from the surface, alongside the depth-dependent atomic density of the Mn species. Magnetization was obtained under a field of 0.67 at 300 K and is given in arbitrary units.

We explored the magnetic properties of the La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films ex-situ using vibrating sample magnetometry (VSM), scanning superconducting quantum interference device (SQUID) microscopy and magnetic force microscopy (MFM) (Figure 3). We observe room-temperature ferromagnetic behavior with a saturation magnetization of 1.6\(\times\)10\({}^{5}\) A/m, equal to approximately 1 \(\mu_{B}\)/Mn atom, and an \(H_{c}\) of \(\sim\)3 mT for the 13 uc film, as both a clear hysteresis loop and a \(T_{c}\) of \(\sim\)322 K were found.
For the 10 uc film, we observe room-temperature paramagnetic behavior, because no hysteresis loop is observed due to a \(T_{c}\) of \(\sim\)290 K. We thus successfully prepared two films with different magnetic properties while maintaining crystal structure and stoichiometry in the bulk and only slightly changing the surface chemical composition. Moreover, as seen in Figure 3(b), the external magnetic response of the ferromagnetic film is anisotropic. A magnetic easy axis is found along the in-plane direction of the film, which was expected due to the tensile strain induced by the substrate.[35] A distinct domain structure with large domains along one direction is observed for the 13 uc film at low temperature and zero field, as shown in Figure 3(e). However, room-temperature MFM measurements reveal a spatial inhomogeneity in the magnetic behavior, reflected by the inhomogeneity in the background of Figure 3(f). We hypothesize that this is due to the formation of a mixed phase, consisting of a ferromagnetic-like matrix with a range of different long spin correlation lengths containing local defects. Similar behavior has been reported before for La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\)[38]. These defects may be of a paramagnetic nature. This behavior is further supported by the trend of the resistivity curve with temperature (Supplementary figure 1(b)), which can only be explained if there exists a competition between ferromagnetic (FM) metallic domains and paramagnetic (PM) regions at elevated temperatures, as previously described for similar manganite oxides.[37, 38]

Figure 3: Magnetic properties of La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) thin films with a thickness of 10 and 13 uc. (a) In-plane magnetic hysteresis loops measured at 300 K. The diamagnetic contribution to magnetization (not shown) has been attributed to the substrate and has been subtracted. The colored area indicates the in-plane magnetic fields we applied during the electrocatalytic experiments described below. (b) In-plane and out-of-plane magnetic hysteresis loops of the 13 uc La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) film obtained under similar conditions as (a). Colored areas indicate the in-plane (orange) and out-of-plane (yellow) fields we applied during electrocatalytic experiments. (c) Zoom-in of the hysteresis loops shown in (b). (d) Temperature-dependent magnetization curve measured at 10 mT. All samples were cooled in this field to 20 K before measurement. The dotted lines indicate the temperatures at which electrochemical measurements were performed. (e) Scanning SQUID microscopy measurement performed at 4.2 K on the 13 uc thick film. (f) MFM measurements obtained at room temperature on the 13 uc thick film. The large features are due to crosstalk from the topography shown in Supplementary figure 2. The differences in the background indicate inhomogeneity in the magnetic behavior.
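As a quick consistency check of the saturation magnetization quoted above, the conversion from A/m to \(\mu_{B}\) per Mn atom only requires the pseudo-cubic unit-cell volume, since there is one Mn atom per unit cell. The minimal sketch below assumes a representative lattice constant of 3.9 Å; this value is an assumption for illustration and is not taken from this work:

```python
MU_B = 9.274e-24   # Bohr magneton in A m^2
a = 3.9e-10        # assumed pseudo-cubic lattice constant in m (illustrative only)
M_SAT = 1.6e5      # saturation magnetization quoted in the text, A/m

# one Mn atom per pseudo-cubic unit cell of volume a^3
moment_per_Mn = M_SAT * a**3 / MU_B
print(f"{moment_per_Mn:.2f} mu_B per Mn atom")  # ~1.0, consistent with the text
```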
Cyclic voltammetry measurements were performed, both in the absence and presence of an applied external magnetic field in different in-plane and out-of-plane directions (see Figure 4 and methods for details). The presence (absence) of systematic changes between the activity with and without in-plane applied magnetic field for ferromagnetic (paramagnetic) catalysts indicates that the activity enhancement is linked to the increase in magnetization in the ferromagnetic film upon external field application (Figure 5 (a),(b),(e),(f) and supplementary discussion A. Due to ageing and scattering, the effect of the out-of-plane field exposure could not be distinguished (Figure 5 (c),(d),(g),(h)), a point to which we will return below. Figure 4: Schematic of the different field directions applied during external field application experiments. The direction of the rotation of the sample is shown alongside the relative orientation of the sample to the field lines of the magnet where IP and OOP stand for the configuration where the field lines at the sample are respectively parallel or perpendicular to the surface The applied field strengths of 35 mT for the in-plane directions and 50 mT for the out-of-plane directions are indicated for each of the directions. “IP side” is a special case, where the sample was placed at the side of the magnetic instead of being placed at one of the poles (normal IP and OOP case). Here the field strength is only 12 mT at the sample. To further separate the effects of the external field application from ageing or scattering effects, chronoamperometry measurements were performed at 1.8 V vs RHE. External fields were applied in different in-plane and out-of-plane directions. Again, an enhancement in current density was observed for the ferromagnetic materials upon in-plane field application (Figure 6 (a)). However, for the behavior linked to the out-of-plane fields, the change in current density depends heavily on the direction of the field lines perpendicular to the sample surface (Figure 6 (a) & (c)). Moreover, the paramagnetic sample showed clearly distinguishable changes in current density upon magnetic field exposure in certain directions contrary to previously published results.[6, 12, 21] This behavior and possible origins are discussed in more detail in supplementary discussion B. In these measurements again ageing effects can be seen as the current density in the periods between magnetic field application slightly increases over time. Figure 5: (a) to (h) Current densities obtained from cyclic voltammetry measurements (see Supplementary figure 4) at an in-corrected potential of 1.8 V vs RHE for different sweeps. Top row: 13 UC thick \(L_{0.25}\)Sr\({}_{0.3}\)MnO\({}_{3}\) film. Bottom row: 10 UC thick \(L_{0.25}\)Sr\({}_{0.3}\)MnO\({}_{3}\) film. Colored bars indicate the presence of an external magnetic field during the sweep while grey bars indicate sweeps which were done without the presence of a field. The color of the bar represents the direction of the field and corresponds to the colors indicated in Figure 4. Figure 6: (a) Chronoamperometry measurement at an IR corrected potential of 1.8V vs RHE for 0 13 UC and 0 10 UC thick \(L_{0.25}\)Sr\({}_{0.3}\)MnO\({}_{3}\) film during which an external magnetic field was applied in different directions. The colored bars indicate the intervals during which the field was applied. The color of the bar represents the direction of the field and corresponds to the colors indicated in Figure 4. 
(b) Normalized current density enhancement due to ferromagnetic interactions after paramagnetic background subtraction. (c) Schematic drawing of the field lines in the samples in electrolyte. The arrow on top indicates the stirring direction.

To differentiate the effect of the changes in internal magnetic order through domain alignment from other effects also at play for the paramagnetic sample, the paramagnetic background as well as ageing-induced effects were subtracted from the response of the ferromagnetic film (Figure 6(b)). Interestingly, an enhancement was found in all field directions for the renormalized data. The current increase is more than two times higher in the in-plane direction than in the out-of-plane direction (\(\sim\)0.4% and \(\sim\)0.1%, respectively). This enhancement thus has an anisotropy where a small field along the easy axis has a larger effect on OER activity than a larger field along the hard axis. Comparing the magnitude of the enhancement to the increment in magnetoresistance at these field strengths, which is less than \(\sim\)0.05% (Supplementary figure 1(a)), we exclude the change in magnetoresistance as a dominant factor for the magnetic-field-enhanced OER activity in this system.

Although we have thus shown that changing the total magnetization in the catalyst can enhance OER activity, we have not yet touched upon the key point of investigating the influence of the intrinsic magnetic order in catalysts without the application of an external field. For this purpose, we explored the Hedvall effect in the OER and changed the intrinsic magnetic order in-situ. We performed OER measurements at different temperatures as indicated in Figure 3(d) (for more information, see Methods). For all samples, a decreasing activity with decreasing temperature was observed (Supplementary figure 5). This loss in current density \(J\) is expected because lowering the temperature reduces the thermal energy and thereby the reaction rate of the OER. Simplifying the Butler-Volmer equation using the Tafel approximation at high potentials and neglecting diffusion limitations gives [39]:

\[\ln J=\ln J_{0}+\alpha_{f}F\eta\,\frac{1}{k_{B}T}\qquad(1)\]

where \(J_{0}\) is the exchange current density, \(\alpha_{f}\) the transfer coefficient of the overall forward reaction, \(F\) the Faraday constant, \(\eta\) the overpotential, and \(k_{B}\) the Boltzmann constant. However, the exact temperature-dependent behavior may deviate from equation 1 because multiple effects, like water adsorption and dissociation, the nature of adsorbed species, and the movement of oxygen vacancies or interstitials in the catalyst, can also be influenced by temperature [40]. While this complicates the identification of magnetic-order effects, a comparison between the 10 uc and 13 uc samples allows these effects to be disentangled, as the chemical composition is similar for both samples and the purely temperature-dependent electrochemical effects should therefore be similar. In contrast, a change in intrinsic magnetic order occurs for the 13 uc thick film in this temperature range (\(T_{c}\approx 321\) K), while the 10 uc film is paramagnetic over the entire temperature range. The difference in electrical resistance between these two films resulted in differences in absolute activity, necessitating a comparison of the activity trends with varying temperature. We plot \(\ln J\) versus \(\frac{1}{T}\) for both films in an Arrhenius plot (Figure 7(a)), using offset y-axes to account for the overall activity differences, where the slope is proportional to \(\alpha_{f}F\eta\) (eq. 1); a minimal sketch of this slope analysis is given below.
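To make the slope comparison concrete, the following minimal sketch (hypothetical data and helper names, not the authors' analysis code) fits eq. 1 in Arrhenius form to two \((T,J)\) series and compares the fitted slopes:

```python
import numpy as np

def arrhenius_slope(T_kelvin, J):
    """Least-squares fit of ln J = a + b * (1/T); returns (a, b).

    Per eq. 1, the slope b is proportional to alpha_f * F * eta, so a
    change in slope below T_c signals a change in eta or alpha_f.
    """
    x = 1.0 / np.asarray(T_kelvin, dtype=float)
    y = np.log(np.asarray(J, dtype=float))
    b, a = np.polyfit(x, y, 1)  # polyfit returns highest power first
    return a, b

# Hypothetical current densities (arbitrary units) at five temperatures;
# in practice the fit is done separately above and below T_c.
T = np.array([293.0, 303.0, 313.0, 323.0, 333.0])
J_para = np.array([0.80, 1.20, 1.80, 2.60, 3.70])   # paramagnetic 10 uc film
J_ferro = np.array([1.15, 1.55, 2.10, 2.85, 3.95])  # ferromagnetic 13 uc film

_, slope_para = arrhenius_slope(T, J_para)
_, slope_ferro = arrhenius_slope(T, J_ferro)
print(f"slope (10 uc): {slope_para:.0f} K, slope (13 uc): {slope_ferro:.0f} K")
```

A less negative slope for the 13 uc film below \(T_{c}\), relative to the paramagnetic reference, corresponds to the upward deviation discussed next.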
For the paramagnetic 10 uc film, we find the expected, almost linear decrease of \(\ln J\) with \(\frac{1}{T}\). But for our ferromagnetic film, there are competing effects for \(T<T_{c}\), reflected by the upward deviation of the slope of the 13 uc sample for \(T<T_{c}\) (blue region) compared to the paramagnetic 10 uc sample, whereas for \(T>T_{c}\) (yellow region), the slopes of both samples are similar (Figure 7(a)). The positive change in slope implies a decreased overpotential or a change in transfer coefficient, increasing the OER activity. We observe an enhancement of \(\sim\)35% compared to the expected current density without ferromagnetic order (see Supplementary figure 6 for the derivation). As discussed in supplementary discussion C, sample degradation, surface restructuring, and changes in resistance with temperature cannot explain the observed effects. We can thus conclude that the occurrence and continuous increase of the ferromagnetic order with decreasing temperature induces the relative current density enhancement below \(T_{c}\).

To summarize, we have shown a relative current density increase during OER upon changing the magnetic properties of La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) thin film catalysts. The combination of the effects shown in the temperature-dependent OER activity measurements for a ferromagnetic catalyst and the effects shown while exposing the same ferromagnetic catalyst to an external magnetic field during OER validates that the enhancement is primarily induced by the changes in magnetic order in the catalyst. The comparably small enhancement (see Supplementary table 1 for a detailed comparison to prior works) may be explained by the small applied external field, small coercivity, and low saturation magnetization of the La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films at 300 K. Most importantly, the difference between saturation and initial magnetization in La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films is comparatively small due to a small coercive field, large domains, and the presence of paramagnetic regions, which lower the increment in magnetization upon alignment, as discussed in depth by Ge et al. [14]. Moreover, both the effects of domain alignment and a change in magnetic order could be influenced by the existence of a magnetic dead layer, in which any long-range magnetic order is diminished. Our epitaxial model system is a suitable platform to further investigate the effect of such dead layers, similar to previous studies focusing on the surface properties of La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films [41, 42, 43].

To further investigate where the magnetic order resides in our La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films, we again use RXR in vacuum. Using left- and right-circularly polarized light, a depth profile of the magnetization can be obtained from the best fit of the asymmetry spectra (Figure 2(c); for more information, see Methods) [44, 45].

Figure 7: (a) Plot of the natural logarithm of the current density versus the inverse of the temperature obtained for a ferromagnetic (13 uc) and a paramagnetic (10 uc) La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) film at an iR-corrected potential of 1.8 V vs RHE. The plot is shown alongside the magnetization (b) of the films in the same temperature range.
We found that the net magnetization in the top 1.5 uc of the 13 uc thick La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) film is zero and that the magnetization is quenched in the subsurface layer up to 4 uc, even under an applied field. This decreased magnetization is likely connected to the off-stoichiometry discussed above. As these off-stoichiometries likely deteriorate long-range ordered magnetic states, we hypothesize that the 1.5 uc thick surface layer is paramagnetic. Although these measurements were done ex-situ, the paramagnetic surface layer is likely also present under OER conditions, because the fields applied in the electrochemical cell are smaller than the fields during the RXR measurements. The lack of lateral long-range magnetic order in the catalyst surface implies that no ordered interaction exists between the long-range ordered spin states of the metal atoms in the film and the reaction intermediates. We can thus conclude that the observed OER enhancements due to interatomic quantum spin exchange interactions (QSEI) are mainly induced by the magnetic order in the bulk of the La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films. Still, the intra-atomic QSEI is always present and persists in the surface layer. The spin polarization of the adsorbates is thus presumably mediated through the out-of-plane QSEI in the paramagnetic surface layer, which is smaller than the ferromagnetic QSEI in the bulk, limiting the efficiency of the OER enhancement.

Based on the findings from the epitaxial La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) thin film model system and considering the range of observations in recent literature, we propose a unifying picture for the spin-polarized OER activity enhancement in oxide catalysts, schematically shown in Figure 8. The lowest activity is found for a paramagnetic material with no spin channels (left panel) [7, 30]. Because of the absence of stable spin channels in this catalyst, O-O intermediates with parallel and antiparallel spin alignment can both be formed. The latter do not have optimum inter-site quantum spin exchange interactions, and the electronic quantum correlations are not optimal with respect to the thermodynamic overpotential.

Figure 8: Proposed mechanism for spin-polarized OER activity enhancement in epitaxial La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) thin films. The left panel shows a fully paramagnetic catalyst layer, which corresponds to our 10 uc sample for the entire temperature range investigated here and to the 13 uc sample at \(T>T_{c}\). In the second panel, the catalytic film is made up of a ferromagnetic matrix with domains in which paramagnetic regions are embedded. Moreover, a paramagnetic surface layer is present. This state corresponds to the intrinsic magnetic state of our 13 uc film at \(T<T_{c}\). The third panel shows the state of the catalyst after domain wall removal in the ferromagnetic matrix upon external magnetic field exposure. The right panel shows the ideal state of the 13 uc film after removal of the paramagnetic regions and surface layer: a homogeneous ferromagnetic catalyst. The states achieved in this work are highlighted with a blue square. At the bottom of the image, we show the physical interactions in the film which account for the presence or absence of spin selectivity during adsorption and electron transfer.
The reaction intermediates shown are highly simplified to focus only on the formation of the O-O bond, which is the most important bond when considering the spin-polarized OER mechanism, as the spin alignment in this bond either blocks or allows the formation of triplet oxygen [17]. If ferromagnetic ordering comes into play below \(T_{c}\), a film consisting of a ferromagnetic bulk with small embedded paramagnetic regions and a paramagnetic surface is obtained (Figure 8, second panel). The ferromagnetic exchange interactions in the matrix induce ordered itinerant spin channels, inducing spin-selective electron mobility. This leads to a higher probability of the formation of bonded spin-ordered O-O intermediates, which in turn increases the OER activity [7, 10]. Moreover, the QSEI associated with the spin-orbital ordering lowers the interatomic repulsions, which leads to shorter metal-oxygen bonds and may act favorably on the reaction-intermediate binding energies [10]. All of these aspects facilitate the generation of triplet O\({}_{2}\) molecules for ferromagnetic thin film catalysts [10]. However, the described ferromagnetic film still contains domains and domain walls. The domain walls induce electron scattering, effectively lowering spin-selective transport [11, 15]. Moreover, the presence of domain walls and paramagnetic regions lowers the number of ferromagnetic reaction sites [15], and the paramagnetic surface lowers the effectiveness of the interaction between the ferromagnetic domains in the bulk of the film and the adsorbates. Upon domain wall removal under the influence of an external magnetic field, a uniform ferromagnetic matrix is obtained (Figure 8, third panel). The removal of domains reduces domain wall scattering and increases the number of long-range magnetically ordered reaction sites. Moreover, the exchange field between the ferromagnetic bulk and the paramagnetic surface layer becomes stronger, and the surface layer may become thinner as a result, enhancing the interaction between the ferromagnetic layer and the adsorbates [17]. The effectiveness of domain alignment depends on the strength and direction of the magnetic field. However, as paramagnetic regions remain, the OER activity is still not optimal. The application of higher external fields could further improve the performance by increasing the net magnetic moment in the paramagnetic regions until saturation is achieved and the paramagnetic regions and surface layer are completely removed (Figure 8, right panel) [6, 16]. This effect could explain the differences in activity enhancement between this work and the results shown in Ref. [19], where much higher fields were applied. This fully ferromagnetic state is not accessible in our model system using realistic fields. Further research thus needs to be done to verify OER enhancement in such a fully long-range ordered electrocatalyst.

## Conclusion

In conclusion, we employed epitaxial La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) thin film model catalysts to arrive at a unifying picture of the effects of intrinsic magnetic order and applied magnetic fields on the oxygen evolution reaction. We tuned the magnetic order in the films in-situ during OER by exploiting the para- to ferromagnetic transition at \(T_{c}\). Using this strategy, we showed that the presence of ferromagnetic ordering below the Curie temperature enhances OER activity. In the ferromagnetic films, the application of external magnetic fields is linked to a further increase in OER activity.
Moreover, we observed a correlation between the magnetic anisotropy in our catalyst and the external-field-induced OER enhancement. Our work thus suggests that both the intrinsic magnetic order in La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) films and externally triggered changes in the magnetic structure affect the catalytic activity of these films. The OER enhancements due to inter-atomic QSEI are found to primarily result from changes in the magnetic order of the bulk of the catalyst, because no long-range magnetic ordering existed at the top of the catalytic surface. To further verify the proposed unifying picture, further research could focus on operando magnetic characterization to directly correlate the magnetic order and interactions in these catalysts with the activity under realistic OER conditions.

## Supplementary Material

See the supplementary material for more information that supports the findings of this work: additional electrochemical data used to generate the overview in the main text; schematics and images of the experimental setups; additional thin film characterization, both before and after electrochemical tests; and additional discussion of results.

## Acknowledgments

Support from the University of Twente in the framework of the tenure track start-up package is gratefully acknowledged. Robert J. Green acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant program. Lucas Korol acknowledges support from the NSERC CREATE to INSPIRE program. MagnetoCat acknowledges funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 964972 (H2020-FETOPEN-2018-2019-2020-01). Chiara Biz, Mauro Fianchini and Jose Gracia thank the SpinCat consortium.

## Author Declarations

### Conflict of interest

The authors have no conflicts to disclose.

### Author Contributions

**E. van der Minne:** Conceptualization (lead); Methodology (lead); Data curation (lead); Investigation (lead); Writing - original draft (lead); Writing - review and editing (lead); Validation (lead); Visualization (lead). **L. Korol:** Investigation RXR (lead); Analysis RXR (lead). **L.A.M. Krakers:** Investigation electrochemistry under field (support). **M. Verhage:** Investigation MFM (lead); Analysis MFM (lead); Writing - review and editing (support). **C.M.M. Rosario:** Investigation SQUID (lead); Analysis SQUID (lead). **R.J. Spiter:** Analysis software (support); Writing - review and editing (support). **C. Biz:** Methodology (support); Writing - review and editing (support). **M. Fianchini:** Methodology (support); Writing - review and editing (support). **G. Rijnders:** Methodology (support); Writing - review and editing (support). **K. Flipse:** Methodology (support); Writing - review and editing (support). **J. Gracia:** Theoretical understanding (lead); Writing - review and editing (support). **G. Mul:** Conceptualization (support); Methodology (support); Writing - review and editing (support). **R.J. Green:** Investigation RXR (lead); Analysis RXR (lead). **H. Hilgenkamp:** Conceptualization (support); Methodology (support); Writing - review and editing (support); Analysis SQUID (support). **G. Koster:** Supervision (support); Conceptualization (support); Methodology (support); Writing - review and editing (support).
**C. Baeumer:** Conceptualization (lead); Methodology (lead); Data curation (lead); Investigation (support); Supervision (lead); Writing - original draft (support); Writing - review and editing (lead); Validation (support).

### Data Availability

The data that support the findings of this study are available within the article and the supplementary material. Additional data are available from the corresponding authors upon reasonable request.

## Methods

### Pulsed Laser Deposition

La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) thin films are deposited via reflection high-energy electron diffraction (RHEED)-assisted pulsed laser deposition (PLD) onto B-site terminated and step-terraced SrTiO\({}_{3}\) (001) substrates purchased from CrysTec GmbH or Shinkosha Co., Ltd. A stoichiometric La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) target was obtained from SurfaceNet. The films were deposited with a laser fluence of 2.0 J cm\({}^{-2}\) and a frequency of 1 Hz. The deposition was done at an oxygen pressure of 0.266 mbar, and the temperature of the substrate was kept at 750 \({}^{\circ}\)C during deposition. The distance between sample and target was kept at 5 cm, and a rectangular mask was used to obtain a laser spot size of 2.24 mm\({}^{2}\). Before deposition, the targets were pre-ablated at 5 Hz. After deposition, the samples were cooled down at 25 \({}^{\circ}\)C/min inside the PLD chamber at 100 mbar oxygen pressure. PLD was performed in a vacuum system (TSST) with a base pressure of 5 \(\times\) 10\({}^{-8}\) mbar, equipped with in-situ RHEED (Staib Instruments) and a KrF excimer laser (Coherent, Inc.) of 248 nm.

### Thin film characterization

X-ray diffraction and reflectivity measurements were performed using a Panalytical X'Pert Pro diffractometer with a Cu anode. For the diffractograms, a Ge (220) monochromator was used to obtain Cu-K\({}_{\alpha}\) radiation. During the 2\(\theta\)-\(\omega\) scans, the detector was operated in 0D mode with an active length of 0.165 mm. A slit of 1/2\({}^{\circ}\) was used to shape the beam. Reflectivity measurements were performed using a 1/32\({}^{\circ}\) slit. The detector was operated in 0D mode with an active length of 1.12 mm. The RSM was performed using a Bruker D8 Discover diffractometer with Cu-K\({}_{\alpha}\) radiation and an Eiger 2 R 500K area detector. The detector was kept stationary while operated in 1D mode as an omega rocking curve was performed. A grazing-exit configuration was chosen to obtain narrow diffractograms. The topography of the grown films was characterized by atomic force microscopy (AFM) using a Veeco Dimension Icon AFM in tapping mode in air. The oscillating cantilever is a Tespa-V2 cantilever (Bruker, Netherlands) with a pure silicon tip with a nominal radius of 20 nm. Images were obtained using the Nanoscope software and treated using the Gwyddion software. First, the rows were aligned using a polynomial approach, then the data were leveled by mean plane subtraction, and lastly a polynomial background was subtracted.

### Magnetic characterization

VSM measurements were obtained using a DYNACOOL physical properties measurement system (Quantum Design, Germany). All samples were field cooled at 10 mT before measuring temperature-dependent magnetization. Scanning superconducting-quantum-interference-device (SQUID) microscopy measurements were performed at 4.2 K in a liquid He bath. A superconducting Nb shield is used to shield the sample and SQUID sensor from any external magnetic fields.
The SQUID sensor used is extended with a pickup loop having an effective area of approximately 18 \(\upmu\)m\({}^{2}\), and scanning is performed under an angle of approximately 15\({}^{\circ}\), making the distance between the pickup loop and the sample surface approximately 2-3 \(\upmu\)m. The SQUID is a flux-to-voltage transducer, and the measured voltage can be converted to magnetic field by dividing by the (measurement-dependent) flux-to-voltage ratio and the effective area of the pickup loop. Typical flux sensitivities are on the order of 10-20 \(\mu\Phi_{0}/\sqrt{\mathrm{Hz}}\), and the bandwidth is 1000 Hz [46, 47]. Magnetic force microscopy was performed with a CoCr-coated Si cantilever, with a spring constant of 2.5 N/m and a resonant frequency around 65 kHz. Prior to imaging, a permanent magnet was used to magnetize the tip for sensitivity to in-plane magnetic fields. A Veeco Dimension AFM III was operated in tapping mode feedback. The image was obtained at a 1 Hz scan speed with 512 pixels \(\times\) 512 lines. The magnetic signal was obtained in lift mode, with a lift height of 10 nm. Gwyddion was used for image processing, with plane subtraction and line alignment by means of differences.

### Electrochemical characterization

To perform electrochemical experiments with epitaxial thin films on 10 \(\times\) 10 \(\times\) 0.5 mm single crystal substrates, we used a custom-made adapter to press the sample back side to the Pt plug of a rotating disk electrode (RDE, Pine Research). For further details, see Supplementary figure 7. 50 nm Pt connections from the sample back side to the front side ensured electrical contact with the Nb:SrTiO\({}_{3}\) substrate and the epitaxial layers. On the front side, a film area of 7.5 mm diameter was exposed to the electrolyte and sealed using an O-ring (Kalrez, ERIKS, Germany). The RDE shaft was rotated at 1600 rpm. Electrochemical testing was performed using a Biologic SP-300 potentiostat in a 150 mL alkaline-resistant cup with a Pt wire as a counter electrode. Electrochemical impedance spectroscopy was conducted with an amplitude of 10 mV at open-circuit potential, and the correction for the cell resistance (iR correction) was based on the high-frequency intercept of the real impedance. The electrolyte solution of 0.1 M KOH was prepared by dissolving KOH pellets (Sigma-Aldrich, 99.99%) in Milli-Q water. The electrolyte was O\({}_{2}\)-saturated prior to testing for at least 30 min and maintained under an O\({}_{2}\) atmosphere during testing. All electrochemical measurements for the domain alignment tests were performed at room temperature. For the elevated-temperature experiments, the electrochemical setup was placed in a heated water bath (see Supplementary figure 7). The temperature of the solution was measured in close proximity to the working electrode to determine the temperature near the working electrode. After heating to each temperature, we waited for approximately 20 minutes to ensure temperature stabilization. Potentials were referenced to a Hg/HgO reference electrode (C3 Prozess- und Analysentechnik, Germany). The potential of the Hg/HgO electrode was measured against a reversible hydrogen electrode (RHE) (HydroFlex, USA) in 0.1 M KOH. The applied potentials were converted to the RHE scale using the measured difference and were corrected using iR correction [48]. All of the OER testing was performed on a fresh electrode that had not undergone previous testing.
Cyclic voltammetry was first performed in the pseudocapacitive redox phase change region (\(\sim\)1.1 to 1.5 V vs RHE) at scan rates between 10 and 500 mV s\({}^{-1}\), followed by OER testing performed from 1.1 V to 1.9 V vs RHE at a scan rate of 10 mV s\({}^{-1}\). External magnetic fields were applied using a 1 T permanent disk magnet, such that the applied field strength depended on the distance between the magnet and the sample. These magnetic field strengths were measured as a function of distance for each of the directions using a flat Hall probe.

### RXR

The resonant x-ray reflectometry (RXR) data were acquired at the Resonant Elastic and Inelastic X-ray Scattering (REIXS) beamline of the Canadian Light Source (CLS) in Saskatoon, Canada [49]. The beamline has a flux of 5 \(\times\) 10\({}^{12}\) photons s\({}^{-1}\) and a photon energy resolution of \(\Delta E/E\sim 10^{-4}\). An elliptically polarizing undulator (EPU) is used to provide the desired linear and circular polarizations. The experimental chamber was kept below a base pressure of 10\({}^{-9}\) Torr, and the measurements were taken at a temperature of 300 K. The samples were aligned with their surface normal in the scattering plane, and the reflection-geometry scans were aided by the in-vacuum 4-circle diffractometer. The measurements were performed in the specular reflection geometry with several resonant photon energies at the Ti L\({}_{2,3}\) (\(\sim\)450-470 eV), Mn L\({}_{2,3}\) (\(\sim\)635-660 eV), and La M\({}_{4,5}\) (\(\sim\)830-860 eV) resonances, along with multiple non-resonant photon energies. The circular-dichroic magnetic measurements were performed by inserting a permanent magnet array into the sample environment producing a homogeneous 0.6 T field, aligning the magnetization in both the film \(xy\) plane and the measurement scattering plane. The dichroic measurements were then taken solely using the resonant photon energies of the Mn L\({}_{2,3}\) resonance. A photodiode was used to detect the reflected beam intensity, with the response function of the photodiode determined by directly measuring the synchrotron beam. The measured data were normalized by the incident beam flux and the response function to obtain quantitative reflectivity spectra. The modelling of the RXR data was performed using the Global Optimization of Resonant X-ray Reflectometry (GO-RXR), a software package recently developed by the qMaX group at the University of Saskatchewan. Tabulated atomic form factors were used for non-resonant energies, while the resonant scattering tensors for the elements Ti, Mn, and La were constructed using measured x-ray absorption spectra. For Mn, two different resonant scattering tensors were used: one for Mn\({}^{2+}\) and one for Mn in stoichiometric La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) (implemented as a weighted linear combination of Mn\({}^{3+}\) and Mn\({}^{4+}\) scattering tensors corresponding to the nominal Mn\({}^{3.33+}\) valence). To determine the optical and magneto-optical profiles, a slab model was used that is made up of parametrized layers with defined elements, oxidation states, thicknesses, densities, and roughnesses. The model parameters are then used to construct an element-specific continuous depth-dependent density profile. The density profile, along with the form factors, is then used to determine the energy- and depth-dependent optical profile. The optical profile is then used to simulate the reflectivity for a given energy, reflection angle, and polarization.
To determine the density profiles of the 10 uc and 13 uc La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) samples, the parameters of layer thickness, density, roughness, and Mn oxidation state were optimized while fitting the simulation to the experimental data. To reduce the parameter set, the concentration ratio of Sr and La was fixed to 3:7 throughout the bulk of the film (from the target stoichiometry), but it was allowed to vary near the interface and surface. To determine the magnetic profile of Mn, the nonmagnetic elemental density profile was determined first by optimizing the parameters against an extended \(\sigma\)-polarized experimental dataset. The magnetic density profile was then fit to the circularly polarized data using the asymmetry of the right \(R_{R}\) and left \(R_{L}\) circular polarizations, where \(A=(R_{L}-R_{R})/(R_{L}+R_{R})\). We show the experimental RXR data and associated fits of the resonant and non-resonant \(\theta\)/2\(\theta\) reflectivity scans and circular polarized asymmetry scans at different energies in Supplementary figure 11 and Supplementary figure 12. Moreover, the Mn-resonant energy scans and Mn-resonant circular polarized asymmetry energy scans, along with associated fits, are displayed in Supplementary figure 13.
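As a minimal illustration of this asymmetry computation (synthetic arrays and our own function name; the actual fits were performed in GO-RXR):

```python
import numpy as np

def circular_asymmetry(R_left, R_right):
    """Circular-dichroic asymmetry A = (R_L - R_R) / (R_L + R_R),
    computed pointwise along a reflectivity scan (vs. angle or energy)."""
    R_L = np.asarray(R_left, dtype=float)
    R_R = np.asarray(R_right, dtype=float)
    return (R_L - R_R) / (R_L + R_R)

# Synthetic reflectivities for the two circular polarizations at the Mn L-edge:
R_L = np.array([0.110, 0.092, 0.071, 0.055])
R_R = np.array([0.105, 0.090, 0.073, 0.052])
print(circular_asymmetry(R_L, R_R))  # small dichroic signal, as in Fig. 2(c)
```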
2309.05930
Combining Deep Learning and Street View Imagery to Map Smallholder Crop Types
Accurate crop type maps are an essential source of information for monitoring yield progress at scale, projecting global crop production, and planning effective policies. To date, however, crop type maps remain challenging to create in low- and middle-income countries due to a lack of ground truth labels for training machine learning models. Field surveys are the gold standard in terms of accuracy but require an often-prohibitively large amount of time, money, and statistical capacity. In recent years, street-level imagery, such as Google Street View, KartaView, and Mapillary, has become available around the world. Such imagery contains rich information about crop types grown at particular locations and times. In this work, we develop an automated system to generate crop type ground references using deep learning and Google Street View imagery. The method efficiently curates a set of street view images containing crop fields, trains a model to predict crop type by utilizing weakly-labelled images from disparate out-of-domain sources, and combines predicted labels with remote sensing time series to create a wall-to-wall crop type map. We show that, in Thailand, the resulting country-wide map of rice, cassava, maize, and sugarcane achieves an accuracy of 93%. We publicly release the first-ever crop type map for all of Thailand for 2022 at 10 m resolution with no gaps. To our knowledge, this is the first time a 10 m resolution, multi-crop map has been created for any smallholder country. As the availability of roadside imagery expands, our pipeline provides a way to map crop types at scale around the globe, especially in underserved smallholder regions.
Jordi Laguarta Soler, Thomas Friedel, Sherrie Wang
2023-09-12T03:05:06Z
http://arxiv.org/abs/2309.05930v2
# Combining deep learning and street view imagery to map smallholder crop types

###### Abstract

Accurate crop type maps are an essential source of information for monitoring yield progress at scale, projecting global crop production, and planning effective policies. To date, however, crop type maps remain challenging to create in low- and middle-income countries due to a lack of ground truth labels for training machine learning models. Field surveys are the gold standard in terms of accuracy but require an often-prohibitively large amount of time, money, and statistical capacity. In recent years, street-level imagery, such as Google Street View, KartaView, and Mapillary, has become available around the world. Such imagery contains rich information about crop types grown at particular locations and times. In this work, we develop an automated system to generate crop type ground references using deep learning and Google Street View imagery. The method efficiently curates a set of street view images containing crop fields, trains a model to predict crop type by utilizing weakly-labelled images from disparate out-of-domain sources, and combines predicted labels with remote sensing time series to create a wall-to-wall crop type map. We show that, in Thailand, the resulting country-wide map of rice, cassava, maize, and sugarcane achieves an accuracy of 93%. As the availability of roadside imagery expands, our pipeline provides a way to map crop types at scale around the globe, especially in underserved smallholder regions.

## 1 Introduction & Background

Ensuring global food security is one of the major challenges we will face this century, especially in the face of a changing climate and a growing global population [1, 13]. Accurate crop type maps are an essential source of information for monitoring yield progress [1], projecting global crop production [12], and planning effective policies [14]. However, only a handful of countries, mostly high-income, have had the budget to collect large-scale ground data and develop crop type maps [10, 11, 12]. Meanwhile, regions with smallholder farms, which provide a living for two-thirds of the world's rural population of 3 billion [15] and produce 80% of the world's food [1], continue to lack such maps. The majority of smallholder farms are located in middle- and low-income countries, where expensive ground data on crop types remains scarce [11, 12, 13].

To address the high cost of acquiring ground reference labels, the remote sensing and machine learning communities have started to explore non-traditional sources of crop type data. One such source is roadside images through services like Google Street View (GSV), Bing Maps StreetSide, Mapillary, KartaView, Tencent Street View, and Baidu Total View. The images are captured by dash cams or panoramic cameras mounted on cars; depending on the service, they are crowdsourced or collected by dedicated fleets. Today, they are low-cost to access, available in almost every country, and updated every few years. Recent works have used roadside imagery in applications ranging from urban morphology to real estate to air quality prediction [10]. Most relevant to our work, Paliyam et al. (2021) deployed cameras mounted on the hoods of vehicles in Africa, Wu et al. (2021) used smartphones mounted on car windows, and Yan and Ryu (2021) used GSV to create crop type ground reference labels.
Recently, the European Space Agency and WorldCereal Consortium released a crop type map for cereals, leveraging GSV to manually create a validation set [20]. However, existing approaches remain either small in scale (i.e., they rely on manual labeling) or are difficult to deploy in smallholder regions. Challenges in smallholder regions include complex road networks in rural areas, compared to the grid system present in the US Midwest [20], and vegetation and man-made occlusions blocking the view between the road and fields (Figure 1). Furthermore, as different regions grow different crops, current methods require hand-labelling a new crop type roadside dataset when encountering a new region.

Figure 1: **Top**: Street-view images of roadside occlusions present between the car-mounted camera and fields. **Bottom**: Street-view images after the automated filtering process for the four major crop types in Thailand.

In this paper, we propose a cost-effective automated deep learning pipeline to generate crop type references and remote sensing crop type maps with minimal manual labeling. We create a method to auto-generate field coordinates at scale, and then filter GSV images to create a large dataset of geo-tagged field images (Section 2.2). Next, a scraper automatically curates a training set of weakly-labeled images from disparate out-of-domain sources to train a CNN crop type discriminator on street-view images for the crop types of interest (Section 2.3). The crop type of field images is inferred by the CNN to generate ground reference labels as a training set for a remote sensing crop type mapper (Sections 2.3-2.4). Once trained, the remote sensing model outputs crop type maps (Section 2.4).

This work shows for the first time smallholder crop type mapping on a country scale using street-level imagery. We tested our approach in Thailand for the May 2022 to October 2022 wet growing season on the region's four major crops and created a ground truth set of 1600 GSV images labelled by a plant taxonomy expert. A total of 81,000 ground reference points were generated using our deep learning pipeline to train the remote sensing crop type model on the whole country. The crop type maps achieved an overall accuracy of 0.93 and an F1 score of 0.92. The approach is orders of magnitude cheaper than traditional survey-based methods, scalable, highly accurate, and automated, with expert hand-labeling only necessary to create a test set. The approach extends to non-GSV street-level imagery as well. We open source all code, datasets, and crop type maps generated here: [_link omitted for blind review_].

## 2 Datasets & Methods

In brief, our method generates proposal locations along streets, curates a set of images containing crop fields (Section 2.2), predicts the crop type within the street-level image with minimal manual labeling (Section 2.3), and combines these labels with remote sensing time series to create a wall-to-wall crop type map (Section 2.4) (Figure 2).

### Study area

We chose Thailand as the area of study because it is simultaneously dominated by small-scale farms and has a high availability of Google Street View images. GSV images in Thailand grew 700% from March 2022 to May 2023 to a total of 3.9M images, of which 2.8M are in cropland areas across the country (Figure 3).
Two-thirds of cropland in the country is used to grow rice, while sugarcane, cassava, and maize are the next most abundant crops at just under 10% of crop area each (Figure 1) (FAO 2021). Rice is grown in two seasons, wet and dry; we limit this work to the wet season.

Figure 2: **Overview of the methods presented in this paper to create a Thailand-wide crop type map. Example field points, ground reference labels, and crop type map are shown for the district of Sawaeng Ha.**

### Finding street-level images of crop fields

#### Extrapolate equidistant street points

The first step in the pipeline is to gather the latitude and longitude coordinates of candidate fields and their corresponding roadside image along all roads in the country. We started with OpenStreetMap (OSM), a worldwide open geographic database with high coverage and completeness (OpenStreetMap contributors 2017), and used the Overpass API to query all OSM ways within Thailand. In order to maximize recall of GSV field images, we generated equidistant points at 10m steps along OSM ways. Examples of points sampled from OSM street nodes in a chosen area are shown in Figure A.1.

#### Filter points using land cover map

Next, we used existing land cover maps to filter for street points near farmland and to remove field points that are not visible from the street due to obstructions (Figure 1). In particular, we used the European Space Agency's WorldCover 10m v100, which classifies global land cover at 10 meter resolution for the year 2020. The map contains 11 land cover classes, including tree cover, shrubland, grassland, built-up, and cropland. We accessed WorldCover and filtered candidate points using functions available in Google Earth Engine. Starting from the OSM-derived equidistant points, we removed points containing no cropland within a 10m radius. An inspection of 200 remaining points revealed that, in 30% of GSV images, crop fields were still blocked by trees and other vegetation next to the road. We therefore also removed points containing any tree cover within the same 10m radius. Finally, we are only interested in roadside images taken during the growing season, when crop types may be visible from the road. For this work, only GSV images captured during the wet growing season in Thailand (May 2022-October 2022) were considered. The above approach can be modified to any region and growing season worldwide by changing the date range and shapefile of the region of interest.

#### Extrapolate camera heading and field point

Due to the grid layout of streets in the US Midwest, prior work using GSV for crop type mapping obtained roadside images facing crop fields and points within crop fields by extrapolating street points in north/south and east/west directions (Yan and Ryu 2021). Since street layouts are more complex in smallholder regions, we first found the street bearing \(\theta\) using the haversine formula (see Section A.2), and then computed the heading for the camera to face the two fields on either side of the street as \(\theta\pm 90\) degrees (Figure A.1). We empirically determined that 30m was the best distance to extrapolate street points to crop field interior points (Table A.2); a minimal sketch of this geometry is given below.
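The following sketch uses our own helper names (the paper's exact formulation is in its Section A.2): the forward-azimuth companion of the haversine formulation gives the street bearing from two consecutive points, and a great-circle offset projects a field point 30 m to the side.

```python
import math

R_EARTH = 6_371_000.0  # mean Earth radius in meters

def bearing_deg(lat1, lon1, lat2, lon2):
    """Forward azimuth from point 1 to point 2, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def offset_point(lat, lon, heading_deg, dist_m):
    """Great-circle destination point from a start point, bearing, and distance."""
    delta = dist_m / R_EARTH
    theta = math.radians(heading_deg)
    phi1, lam1 = math.radians(lat), math.radians(lon)
    phi2 = math.asin(math.sin(phi1) * math.cos(delta)
                     + math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)

# Street bearing from two consecutive 10 m points along an OSM way
# (coordinates are illustrative):
theta = bearing_deg(14.5800, 100.4000, 14.5801, 100.4000)
headings = [(theta + 90.0) % 360.0, (theta - 90.0) % 360.0]  # camera faces the fields
field_pts = [offset_point(14.5800, 100.4000, h, 30.0) for h in headings]  # 30 m interior points
```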
#### Classify in-field images

After filtering out points near trees and finding the appropriate camera heading, we found that 58% of GSV images still had an obstructed view of a field due to small bushes between the road and field not detected by satellite land cover maps (Table 3). We therefore labelled 2986 images into {_field_, _not field_} and trained a binary classifier using a ResNet-18 pre-trained on ImageNet (a minimal sketch of this fine-tuning setup appears at the end of this subsection). Hyperparameters used include an Adam optimizer and a learning rate of \(0.001\) for 15 epochs. The model classified field images from non-field images with 95% recall and 98% precision. Candidate street-view images were downloaded (\(n=224,000\)) and classified by the _field/not-field_ CNN. Those labelled as _field_ (\(n=89,000\)) were used as input in Section 2.3 to be classified into crop types.

Figure 4: **Schematic of the process to generate ground reference labels. Each ground reference is composed of crop type and geocoordinates from street-view images.**

Figure 3: **Spatial and temporal distribution of Google Street View in Thailand. Left: Hexbin plot of GSV availability across Thailand. The zoomed-in panel shows the location of street-view images overlaid on a satellite basemap in the district of Sawaeng Ha. Right: Availability of street-view images in Thailand by month, with a clear rise in availability since 2022 and a total of over 3 million images. During the wet season (May–October) shown in the blue box, 1.5 million images are available.**

### Predicting crop type in street-level images

#### Compile training set from the web

Data annotation to train a new classifier requires manual labeling, which is time consuming and, for crop type classification, requires domain expertise. Fortunately, the internet can serve as a source to rapidly obtain a training set of images that is large, low-cost, and diverse with real-world settings [11]. We compiled training sets of crop type images from two sources: Google Images Creative Commons (hereafter "WebCC") and iNaturalist. For each crop type, images were queried in Google Images by searching for the crop name followed by "field" (e.g., "rice field"). Returned images were labeled with the queried crop type. Meanwhile, iNaturalist is an online community of naturalists and citizen scientists who contribute photos of biodiversity across the globe. Its database contains over 161 million observations. While not targeted toward agriculture, iNaturalist contains images of crops, which can be downloaded via an API [12]. In total, thousands of image-label pairs were collected from the two platforms (Table 1). Although we tried to compile training images that look similar to GSV, online images varied in quality, size, and relevance. We observed many images of people with fields in the background, people holding crops, and label noise in WebCC (Figure 5). Fortunately, CNNs tend to be robust to noise and can perform well if enough signal from the desired task is present during training [11, 12, 13, 14]. Therefore, we did not manually clean the dataset.

#### Manually label crop type ground references

We collected a test set of GSV images randomly sampled by geography and had them manually labeled by a plant taxonomy expert. To avoid data leakage in the remote sensing evaluation from two points belonging to the same field, we ensured all field points were more than 100m away from each other. We labelled 1922 images into the following classes: cassava, maize, rice, sugarcane, unknown, non-crop, unsupported crop, and additional. The expert-labeled dataset served two purposes: (1) to compare performance of the CNN street-view classifier across different training sets, and (2) to evaluate the remote sensing crop type classifier.
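Both the binary _field/not field_ filter (ResNet-18, Section 2.2) and the crop type classifiers of this section (ResNet-50) are fine-tuned ImageNet backbones. The following minimal PyTorch sketch shows such a fine-tuning loop; the data loader and device handling are placeholders, not the released code:

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-18 backbone with a 2-way head for {field, not field}; swapping in
# resnet50 and a 4-way head gives the crop type classifier described below.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr from the text
criterion = nn.CrossEntropyLoss()

def finetune(model, loader, epochs=15, device="cuda"):
    """Standard fine-tuning loop; `loader` yields (image, label) batches."""
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
```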
#### Train a street-level crop type classifier

We trained models on 5 different datasets, 3 of which contained only online images automatically labeled by the corresponding search label. We use **WebCC** and **iNaturalist** to denote models trained on Google Images results and iNaturalist images, respectively. **iNaturalist + WebCC** was trained by combining the WebCC and iNaturalist datasets. **Expert labeled** was trained on 60% of the expert-labeled GSV dataset. Finally, **Combined** merged the same 60% set of GSV with WebCC and iNaturalist. For all models, the remaining 40% of the expert-labeled GSV dataset was split evenly into validation and test sets.

\begin{table}
\begin{tabular}{l r r r r r r r}
\hline \hline
 & \multicolumn{4}{c}{**Percent of Dataset**} & \multicolumn{3}{c}{**Number of Samples**} \\
\cline{2-8}
**Crop Type** & **Planted Area** & **Expert Street View** & **WebCC** & **iNaturalist** & **Expert Street View** & **WebCC** & **iNaturalist** \\
\hline
Rice & 67\% & 79\% & 24\% & 16\% & 1261 & 659 & 1396 \\
Sugarcane & 7\% & 9\% & 25\% & 22\% & 144 & 679 & 1882 \\
Cassava & 8\% & 5\% & 21\% & 42\% & 81 & 584 & 3662 \\
Maize & 8\% & 7\% & 30\% & 20\% & 111 & 832 & 1698 \\
\hline
Total & 90\% & 100\% & 100\% & 100\% & 1597 & 2754 & 8638 \\
\hline \hline
\end{tabular}
\end{table}

Table 1: **Crop type distribution for the various datasets used to train and test a classifier on street-view images in Thailand.** Planted area is obtained from annual national statistics released by the FAO. Street View refers to GSV images randomly sampled from Thailand and manually labelled with crop type. WebCC are images scraped from the web. iNaturalist are images selected from an online biodiversity database.

Figure 5: **Example images from the two online datasets.** Although some images are of crop fields similar to the target street-view task, many images are either close-ups of a plant (especially in iNaturalist), a single plant instead of a field, or label noise (especially in WebCC).

We used data augmentation techniques to accommodate images that varied in size across datasets and to help the model focus on crop type features rather than a crop's position, zoom, or orientation in an image. During training, all images were expanded to \(600\times 600\) px and horizontally flipped at random, followed by a random crop of \(300\times 300\) px. We selected a ResNet-50 architecture pre-trained on ImageNet for ease of reproducibility. All models were trained on 4-class classification with cross-entropy loss for 15 epochs, with a learning rate starting at \(0.001\). A cosine annealing learning rate scheduler was used to dynamically adjust the learning rate and help improve model convergence.

#### Predict crop type in street-view images

Trained models were used to classify street-view images (\(n=89,000\)) into crop types. We took sliding windows across each image, with a window of \(300\times 300\) pixels and a stride of \(50\). We used the mode of high probabilities (MHP) to select the predicted class from all the sliding windows in an image. For each window, if **p** is the vector of softmax probabilities and \(c\) is the crop type with the highest probability, then the window casts one vote for class \(c\) if \(\textbf{p}_{c}\) exceeds some threshold \(\tau\). If none of its probabilities exceed \(\tau\), then the window casts no votes. The final prediction for the image is the crop type with the most votes across all windows; a sketch of this voting scheme follows.
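A minimal sketch of the sliding-window MHP voting described above (our own illustration, not the released code):

```python
import numpy as np

def mhp_predict(window_probs, tau=0.9):
    """Mode-of-high-probabilities voting over sliding windows.

    window_probs: array of shape (n_windows, n_classes) with softmax outputs.
    Returns the winning class index, or None if no window's top probability
    exceeds tau (in which case the image is dropped from the inference set).
    """
    probs = np.asarray(window_probs, dtype=float)
    votes = np.zeros(probs.shape[1], dtype=int)
    for p in probs:
        c = int(np.argmax(p))
        if p[c] > tau:
            votes[c] += 1  # each confident window casts one vote
    return int(np.argmax(votes)) if votes.sum() > 0 else None

def sliding_windows(image, size=300, stride=50):
    """Yield size x size crops of an (H, W, C) array with the given stride."""
    H, W = image.shape[:2]
    for top in range(0, H - size + 1, stride):
        for left in range(0, W - size + 1, stride):
            yield image[top:top + size, left:left + size]
```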
Each clean street-view image was classified by the 5 models with sliding windows and MHP and paired with its field coordinate to generate the ground reference training sets.

### Remote sensing-based crop type mapping

#### Feature extraction via harmonic regression

Consistent with existing state-of-the-art methods [10, 16], we used Sentinel-2 satellite time series for country-wide crop type classification. Sentinel-2 images capture 13 optical bands at 10-60m resolution on a 5-day cycle, and their time series contain information on crop phenology that allows different crop types to be distinguished. We used Google Earth Engine to export Sentinel-2 L2A time series in Thailand from May 1 to October 31, 2022. Four spectral bands were used: Red Edge 4, SWIR 1, SWIR 2, and NIR. We added the green chlorophyll vegetation index (GCVI \(=\text{NIR}/\text{GREEN}-1\)), as prior work showed it to be a valuable feature for crop type classification. We used the Cloud Probability band to remove cloudy days before extracting a time series for each ground reference (Figure A.3). We used harmonic regression to extract frequency-domain features from time series of varying lengths for input to machine learning. Equivalent to the discrete Fourier transform, harmonic regression generates robust features for crop type classification [15, 16, 17, 18, 19]. We applied a 3rd order harmonic regression to each band independently to arrive at a total of 35 features; a minimal sketch of this feature extraction is given after Table 2.

#### Country-wide crop type classification

We trained random forests (with \(500\) trees) on crop type classification, with the harmonic coefficients as input and the crop type labels generated by the street-view CNN as output. Random forests [10] are frequently used in remote sensing applications for their high accuracy and computational efficiency [1, 13]. The crop type ground references (\(n=81,000\), Section 2.3) were used to train the remote sensing crop type classifier. We trained 5 random forests, one for each ground reference set (Table 4). A sixth model was trained only on the expert-labeled dataset (\(n=984\)).

## 3 Results

### Automatically generated field points and street-view images

We obtained 98 million points across Thailand by computing 10m equidistant points from OSM ways. Of these points, 3.9 million occurred near cropland without tree cover, and 2.8 million had GSV images available (Table 2). Of these street-view images, we downloaded 224,000 and classified 89,000 as showing fields and 135,000 as not showing fields. The 89,000 field-view images were classified by crop type (rice, maize, cassava, sugarcane) and passed through the MHP threshold to create 81,000 ground reference point labels. The remaining 8,000 images were removed at MHP thresholding because none of their sliding window predictions had a softmax probability greater than 0.9, indicating label uncertainty.

\begin{table}
\begin{tabular}{l l}
\hline \hline
**Pipeline Step (Section)** & **Dataset Size** \\
\hline
Candidate field points (2.2) & 98 million \\
Filtered field points (2.2) & 3.9 million \\
GSV field images available (2.2) & 2.8 million \\
Street-view images downloaded (2.2) & 224,000 \\
Clean street-view images (2.2) & 89,000 \\
Labelled field points (2.3) & 89,000 \\
MHP thresholding (2.3) & 81,000 \\
\hline \hline
\end{tabular}
\end{table}

Table 2: **Summary of dataset size through the point and image filtering steps in the pipeline**, starting with a road network of Thailand and ending with 81,000 ground reference labels for the four major crop types.
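As a concrete sketch of the feature construction (our own code, with an assumed annual period; the authors' implementation may differ): a 3rd-order harmonic fit per band yields an intercept plus three cosine/sine pairs, i.e., 7 coefficients, and 5 bands (the four spectral bands plus GCVI) give the 35 features.

```python
import numpy as np

def harmonic_features(t_days, values, order=3, period=365.25):
    """Least-squares harmonic regression coefficients for one band.

    t_days: acquisition times (days); values: cloud-free band values.
    Returns 2 * order + 1 coefficients: intercept, then cos/sin pairs.
    """
    t = 2.0 * np.pi * np.asarray(t_days, dtype=float) / period
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    X = np.stack(cols, axis=1)  # (n_obs, 7) design matrix
    coef, *_ = np.linalg.lstsq(X, np.asarray(values, dtype=float), rcond=None)
    return coef  # 7 features per band

def gcvi(nir, green):
    """Green chlorophyll vegetation index, GCVI = NIR / GREEN - 1."""
    return np.asarray(nir, dtype=float) / np.asarray(green, dtype=float) - 1.0

# Stacking the coefficients of the 4 spectral bands and GCVI yields the
# 5 * 7 = 35-dimensional feature vector that feeds the random forest, e.g.:
#   from sklearn.ensemble import RandomForestClassifier
#   rf = RandomForestClassifier(n_estimators=500).fit(features, labels)
```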
\begin{table}
\begin{tabular}{l c c}
\hline \hline
**Method** & **Precision** & **Recall** \\
\hline
Baseline [16] & 0.07 & 0.25 \\
Camera heading & 0.14 & 1.00 \\
Cropland filter & 0.31 & 1.00 \\
Cropland and tree cover filters & 0.42 & 0.99 \\
\hline
All + field classifier & 0.98 & 0.95 \\
\hline \hline
\end{tabular}
\end{table}

Table 3: **Precision and recall for filtering out non-field GSV images.** Baseline is our implementation of the approach in [16]. Each **Method** includes the methods above it. All methods prior to the field classifier are filters applied before downloading the GSV image.

To understand how important land cover filtering, camera heading, and _field/not field_ classification were for finding GSV images of crops, we calculated the precision and recall at various steps along the pipeline (Table 3). We also implemented the method developed by Yan and Ryu (2021) in the US Midwest, which did not include these filtering steps, as a baseline. In a sample of random GSV images in Thailand, we found that the baseline method achieves a precision of 0.07 and a recall of 0.25; in other words, 93% of downloaded GSV images did not show a crop field, and 75% of images of crop fields were missed. This illustrates the complexity of smallholder landscapes compared to industrial agriculture. By contrast, using the correct camera heading on points generated 10m apart from OSM ways yielded a precision of 0.14 and a recall of 1.00. Filtering by cropland alone improved precision to 0.31, and removing points near trees further improved precision to 0.42. Finally, training a CNN to classify _field/not field_ improved precision to 0.98 and lowered recall slightly to 0.95, a worthwhile trade-off.

### Weakly supervised creation of crop type ground references

#### Street-view crop type classification

Our experiments on the 5 datasets showed that weak supervision with online images can successfully classify crop type in street-view images (Table 4). WebCC (\(n=2,754\)) on its own achieved 82.9% overall accuracy, similar to iNaturalist (\(n=8,638\)) at 83.2%. For comparison, a baseline that classifies all samples as the most common class (rice) would achieve 70.3% accuracy. Cassava, despite possessing visual features that distinguish it from the other crops, had the lowest F1 score under both WebCC and iNaturalist (9.8% and 11.0%, respectively), followed by maize (40.3% and 41.3%) and sugarcane (53.6% and 54.2%). The low performance is likely due to the low fraction of street-view-like field images for iNaturalist (5% for cassava; Table A.2) and label noise for WebCC, as Google Images has fewer images of cassava and so returns images of related keywords (e.g., cassava root). When merged together, iNaturalist + WebCC (\(n=11,392\)) improved overall accuracy to 93.8%. F1 scores for cassava, maize, and sugarcane also substantially improved. This complementarity could be due to iNaturalist having higher label accuracy but more out-of-domain images (e.g., closeups), while WebCC has lower label accuracy but considerably more street-view-like images. The CNN trained on the expert-labeled dataset (\(n=947\)) achieved 93.9% accuracy, surpassed only by Combined (\(n=12,339\)), made up of all 3, which achieved 95.9%. The comparable performance of iNaturalist + WebCC versus these labelled datasets shows the weak supervision approach can help minimize the need for expert labelling.
#### Role of sliding window classification and probability threshold

The MHP approach to remove low-confidence classifications (Section 2.3) proved to increase the accuracy and F1 score across all models for thresholds set to 0.7 or above. The top performing model, trained on the Combined dataset, achieved 81.6% accuracy when directly classifying the whole image, 90.6% with sliding windows but no MHP threshold, and 95.9% with a 0.9 MHP threshold. Improvements were similar for the other training sets. We note that a higher MHP threshold \(\tau\) leads to some images being dropped from the inference set, because sometimes no sliding windows have softmax probabilities exceeding \(\tau\). Despite the slight loss of data, we observed that the F1 score increased monotonically with \(\tau\) all the way up to \(\tau=0.95\) (Figure A.2).

### Country-wide crop type map

#### Automated vs. expert-labeled GSV ground references

We found that training a random forest on large CNN-generated GSV ground references (\(n=81,000\)) resulted in crop type maps that were significantly more accurate than training on small expert-labeled ground references (\(n=984\)), despite the CNN-based labels being imperfect (Table 5). Training on expert-labeled ground references achieved 67.0% overall accuracy, which is actually lower than the 69.5% accuracy of a baseline model that classifies everything as the most common class (rice). In comparison, the random forest trained on 81,000 GSV samples, whose crop types were predicted by the street-view CNN trained on expert labels, achieved an accuracy of 92.9%. This suggests that the sample volume generated by leveraging GSV outweighs the noise that predicted crop type labels can carry from misclassification.

#### Weakly supervised vs. expert-labeled automated GSV ground references

Among the 5 automatically-generated GSV ground reference datasets, the datasets generated by CNNs trained on expert labels proved more accurate than those generated by CNNs trained on weak labels from the web. Consistent with the street-view classification results in Table 4, the lowest-performing datasets were those created by the WebCC CNN and the iNaturalist CNN; the remote sensing-based crop type classifier trained using their predictions as labels achieved accuracies of 79.4% and 78.3%, respectively. The crop type classifier trained on iNaturalist + WebCC CNN predictions performed better at 90.2% accuracy, but suffered from low cassava and maize F1 scores (51.4% and 45.3%).

\begin{table}
\begin{tabular}{l r r r r r r}
\hline \hline
 & \multicolumn{6}{c}{**Street-view Test Set Metrics**} \\
\cline{2-7}
**Training Dataset** & **Overall Acc** & **Overall F1** & **Cassava F1** & **Maize F1** & **Rice F1** & **Sugarcane F1** \\
\hline
Baseline: Most common class & 0.703 & 0.696 & 0.000 & 0.000 & 0.882 & 0.000 \\
WebCC & 0.829 & 0.822 & 0.098 & 0.403 & 0.980 & 0.536 \\
iNaturalist & 0.832 & 0.814 & 0.110 & 0.413 & 0.982 & 0.542 \\
iNaturalist + WebCC & 0.938 & 0.960 & 0.885 & 0.707 & 0.992 & 0.797 \\
Expert labeled & 0.939 & 0.963 & 0.813 & 0.882 & 0.978 & 0.854 \\
Combined & 0.959 & 0.973 & 0.989 & 0.692 & 0.990 & 0.928 \\
\hline \hline
\end{tabular}
\end{table}

Table 4: **Performance on crop type classification from street-view images for the four major crops in Thailand.** Models were trained on combinations of three different training datasets: WebCC, iNaturalist, and expert-labeled GSV images. The MHP threshold was set to 0.9.
The two highest-performing crop type classifiers were trained on outputs of the expert-labeled CNN and Combined CNN, achieving accuracies of 92.9% and 93.1% and higher F1 scores across all four crop types. For the individual crop types, rice--the most abundant crop--was classified most accurately, with F1 scores over 96% for the top 3 models. Sugarcane was also classified accurately, with F1 scores ranging from 84% to 91% for the top 3 models. Maize and cassava were consistently the most difficult crop types to classify across all models. Sensitivity to training set size. Lastly, we investigated the relationship between the number of ground references and the remote sensing-based crop type classification performance. We found that, even at 81,000 ground references, performance had not yet saturated as a function of training set size; the overall F1 score continued to increase linearly as more street-view points were added (Figure 6). ## 4 Discussion We show that deep learning and street-view images can be combined to generate thousands of geolocated crop type ground references at scale in smallholder regions. These ground references, despite containing some label noise, can then be used to create high-accuracy crop type maps in countries where no such maps currently exist. In Thailand, 81,000 automated ground references led to a more accurate crop type map than 1000 expert-labeled ground references, and even at 81,000 references performance had not yet saturated as a function of sample size. To minimize the need for experts to manually label crop types in street-view images, we explored using images from the web to weakly supervise a CNN to classify crop type. We found that images from Google Images and iNaturalist, although high in noise and often off-domain, can successfully supervise the classification of street-view images. Furthermore, combining images from different online sources improved performance. When creating the country-wide crop type map in Thailand, we did observe that weakly supervised CNNs led to lower-accuracy crop type maps than CNNs trained on expert labels, suggesting that the best solution may be to combine noisy images from the web with a small number of expert-labeled street-view images. One limitation of our work is that, to train a classifier to remove street-view images containing small bushes, we manually labeled a set of _field/not field_ images. However, we point out that, unlike crop type labeling, _field/not field_ labeling does not require domain expertise. Another limitation is the uncertain update frequency of street-view services and their continued uneven distribution around the globe. Since more countries have street-view imagery than crop type maps, we believe street-view datasets still have significant value to add to global crop type mapping. We release all datasets used for training and testing each model. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & & & & & \\ \hline **Ground Ref.
Train Dataset** & **Overall Acc** & **Overall F1** & **Cassava F1** & **Maize F1** & **Rice F1** & **Sugarcane F1** \\ \hline Baseline\({}^{*}\) & 0.695 & 0.456 & 0.000 & 0.000 & 0.753 & 0.000 \\ Auto w/ WebCC\({}^{\dagger}\) & 0.794 & 0.801 & 0.126 & 0.392 & 0.970 & 0.510 \\ Auto w/ iNaturalist\({}^{\dagger}\) & 0.783 & 0.796 & 0.132 & 0.403 & 0.954 & 0.535 \\ Auto w/ iNat + WebCC\({}^{\dagger}\) & 0.902 & 0.918 & 0.514 & 0.453 & 0.961 & 0.839 \\ Auto w/ Expert GSV\({}^{\dagger}\) & 0.929 & 0.929 & 0.740 & 0.686 & 0.967 & 0.854 \\ Auto w/ Combined\({}^{\dagger}\) & 0.931 & 0.921 & 0.662 & 0.744 & 0.971 & 0.907 \\ \hline Expert GSV\({}^{\ddagger}\) & 0.670 & 0.706 & 0.267 & 0.233 & 0.815 & 0.537 \\ \hline \hline \end{tabular} \end{table} Table 5: **Performance on remote sensing-based crop type classification for the four major crop types in Thailand.**\({}^{*}\)Baseline refers to classifying all samples as the most common class, rice. \({}^{\dagger}\)Ground references are automatically labeled using CNNs trained on the specified datasets (\(n=81,000\)). \({}^{\ddagger}\)Ground references contain only expert-labeled GSV images (\(n=984\)). Error is obtained over 10 runs with 10 different bootstrapped training sets. Figure 6: **Crop type map F1 score vs. ground reference dataset size.** Computed for the Auto w/ iNat + WebCC ground reference set from Table 5.
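Figure 6 and Table 5 together describe a simple experiment: train the downstream crop type classifier (a random forest) on increasingly many CNN-labeled GSV points and track F1. A minimal sketch of that loop follows, with synthetic placeholders standing in for the satellite features and CNN-predicted labels actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Placeholders: per-point satellite features and CNN-predicted crop labels.
X, y = rng.normal(size=(81000, 40)), rng.integers(0, 4, size=81000)
X_test, y_test = rng.normal(size=(5000, 40)), rng.integers(0, 4, size=5000)

for n in [1000, 5000, 20000, 81000]:   # sensitivity to training set size
    idx = rng.choice(len(X), size=n, replace=False)
    rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    rf.fit(X[idx], y[idx])
    print(n, f1_score(y_test, rf.predict(X_test), average="macro"))
```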
2303.18084
RDMNet: Reliable Dense Matching Based Point Cloud Registration for Autonomous Driving
Point cloud registration is an important task in robotics and autonomous driving to estimate the ego-motion of the vehicle. Recent advances following the coarse-to-fine manner show promising potential in point cloud registration. However, existing methods rely on good superpoint correspondences, which are hard to obtain reliably and efficiently, thus resulting in less robust and accurate point cloud registration. In this paper, we propose a novel network, named RDMNet, to find dense point correspondences coarse-to-fine and improve final pose estimation based on such reliable correspondences. Our RDMNet uses a devised 3D-RoFormer mechanism to first extract distinctive superpoints and generate reliable superpoint matches between two point clouds. The proposed 3D-RoFormer fuses 3D position information into the transformer network, efficiently exploiting point clouds' contextual and geometric information to generate robust superpoint correspondences. RDMNet then propagates the sparse superpoint matches to dense point matches using the neighborhood information for accurate point cloud registration. We extensively evaluate our method on multiple datasets from different environments. The experimental results demonstrate that our method outperforms existing state-of-the-art approaches on all tested datasets with a strong generalization ability.
Chenghao Shi, Xieyuanli Chen, Huimin Lu, Wenbang Deng, Junhao Xiao, Bin Dai
2023-03-31T14:22:32Z
http://arxiv.org/abs/2303.18084v1
# RDMNet: Reliable Dense Matching Based Point Cloud Registration for Autonomous Driving ###### Abstract Point cloud registration is an important task in robotics and autonomous driving to estimate the ego-motion of the vehicle. Recent advances following the coarse-to-fine manner show promising potential in point cloud registration. However, existing methods rely on good superpoint correspondences, which are hard to obtain reliably and efficiently, thus resulting in less robust and accurate point cloud registration. In this paper, we propose a novel network, named RDMNet, to find dense point correspondences coarse-to-fine and improve final pose estimation based on such reliable correspondences. Our RDMNet uses a devised 3D-RoFormer mechanism to first extract distinctive superpoints and generate reliable superpoint matches between two point clouds. The proposed 3D-RoFormer fuses 3D position information into the transformer network, efficiently exploiting point clouds' contextual and geometric information to generate robust superpoint correspondences. RDMNet then propagates the sparse superpoint matches to dense point matches using the neighborhood information for accurate point cloud registration. We extensively evaluate our method on multiple datasets from different environments. The experimental results demonstrate that our method outperforms existing state-of-the-art approaches on all tested datasets with a strong generalization ability. Autonomous Driving, 3D Registration, Deep Learning, Point Cloud Data Processing ## I Introduction Point cloud registration is a fundamental problem in computer vision, robotics, and autonomous driving. It aims to estimate the transformation between pairs of partially overlapped point clouds. Correspondence-based methods [1, 2, 3] currently dominate the field. They first find the data association, such as point matches between two LiDAR point clouds. Based on that, they then compute the relative transformation straightforwardly with a singular value decomposition (SVD) or a robust estimator, e.g., RANSAC [4]. To balance computational cost and correspondence quality, most existing methods find the association on downsampled sparse points or keypoints [1, 3, 5]. However, downsampling will inevitably make part of the points lose their corresponding points, which degrades the registration performance. Inspired by works in image matching, recent advances [2, 7] that utilize the coarse-to-fine mechanism show remarkable performance on point cloud registration. The coarse-to-fine mechanism [2, 7] downsamples the point cloud into sparse superpoints and uses these superpoints to split the point cloud into point patches. On the coarse level, it finds the superpoint (patch) correspondences based on the overlapping area of two patches. On the fine level, the superpoint correspondences are then propagated to dense point matches based on neighborhood consensus, i.e., finding the point matches only from the matched patches. This limits the search space to a reasonable range, which not only reduces the computational complexity, Fig. 1: Point cloud registration under a challenging situation. We compare our method (right column) against GeoTransformer [7] (left column). Given two point clouds (first row), we first extract the sampled points uniformly distributed in the point cloud.
GeoTransformer uses them directly as superpoints (second left), while our RDMNet adds the learned offsets to the sampled points and generates better superpoints near the geometrically significant regions (second right). The orange lines show the learned offsets, which make superpoints from both point clouds fall closer in the geometrically significant regions. Using our proposed 3D-RoFormer to generate high-quality superpoint correspondences, our RDMNet subsequently finds better dense-point correspondences (third right) compared to the baseline method (third left). In the end, our method successfully registers the two scans (bottom right) while the baseline method fails (bottom left). but also makes the established matches more reliable. However, the performance of the final matches heavily relies on the quality of the superpoint correspondences. In this paper, we dig into the properties of the superpoints that affect the final performance. As a powerful contextual information encoder, the transformer [8] has been adapted to multiple point cloud learning tasks [9]. Existing methods [2, 6, 7] exploit transformers to increase the robustness of superpoint matches. However, the vanilla transformer used in CoFiNet [2] lacks geometric information, which hinders the performance of the position-sensitive point cloud registration. GeoTransformer [7] infuses pair-wise distance and triplet-wise angular information into the transformer, while NgeNet [6] constructs point pair features based on the geometric information [10]. Although yielding promising results, they are computationally expensive and neglect the distribution of the superpoints, thus leading to suboptimal point-matching results. To tackle the above-mentioned problems, we propose RDMNet to exploit both the contextual and geometric information of the point cloud and generate reliable dense-point correspondences for point cloud registration. The core technique in RDMNet is our devised novel translation-invariant attention mechanism, named 3D-RoFormer. It encodes the 3D position into a deep rotation matrix and naturally incorporates explicit relative position dependency into the self-attention calculation, thus becoming translation-invariant while remaining lightweight and fast. For the superpoint distribution, RDMNet uses a superpoint detection module to first uniformly sample points over the whole point cloud and then learn an offset for each point, making the superpoint pairs more compact and placing them in significant regions. Fig. 1 demonstrates that our RDMNet extracts more compact superpoint pairs and finds more reliable dense-point correspondences compared to the baseline methods. To thoroughly evaluate our method, we conduct experiments on multiple outdoor datasets, including KITTI [11], KITTI-360 [12], Apollo [13], Mulran [14], and a self-recorded dataset with our own mobile robot in a campus environment. Note that we only train our method on the training data of the KITTI dataset and directly apply it to other datasets collected by different LiDAR sensors in different environments. The experimental results show that our method outperforms the state-of-the-art methods in terms of both superpoint matching and pose estimation with strong generalization ability. To sum up, our main contributions are: * A novel transformer, 3D-RoFormer, that efficiently learns the contextual and geometric features for superpoint matching with limited computation and storage cost.
* A novel network RDMNet that generates reliable superpoints and dense-point correspondences to achieve state-of-the-art point cloud registration performance. * Extensive evaluations on multiple outdoor datasets, with training only on the KITTI dataset, show that our RDMNet achieves superior performance with strong generalization ability compared to other state-of-the-art methods. We will make the implementation of our method open-source. ## II Related Work Point cloud registration refers to finding the relative spatial transformation that aligns two point clouds. The existing methods can be broadly classified into two categories: correspondence-free and correspondence-based. The correspondence-free methods transform the registration problem into a regression problem. Early work like PointNetLK proposed by Aoki _et al._[15] first extracts the feature of the point cloud using PointNet [16] and then regresses the transformation from the features. Zheng _et al._[17] utilize a similar idea to PointNetLK and further refine the result in an iterative computation architecture. Other works, such as the one by Huang _et al._[18], solve the registration problem by minimizing the feature-metric projection error. Such methods struggle to construct reliable regression models, and registration accuracy is not guaranteed. The correspondence-based methods first extract correspondences between two point clouds and then compute the transformation using a direct solver or a robust estimator. Extracting correct correspondences is the most challenging part of such methods. The standard correspondence-based approach is the iterative closest point (ICP) algorithm [19] and its numerous variants [20, 21]. They find correspondences using the nearest neighbor search or other heuristics iteratively, thus heavily relying on a good initial estimate of the transformation. Recent work [9] follows the idea of ICP and establishes soft correspondences in the learned feature space, which relaxes the requirement of good initial guesses. However, the computational complexity and global searching limit the application of these methods to large-scale point clouds. Different from the above ICP-like methods, keypoint-based methods find correspondences on sparse points generated by either uniform sampling [22, 23, 24] or keypoint detection [5, 25, 3]. PPFNet and PPF-FoldNet proposed by Deng _et al._[22, 23] introduce point pair features (PPF) combined with PointNet to produce local patch representations for matching. Different from PPFNet and PPF-FoldNet, which establish correspondences on uniformly sampled points, keypoint-based methods sample points according to pre-defined [26] or learned saliency [5, 3, 25] for higher repeatability. In order to handle low-overlap situations, PREDATOR proposed by Huang _et al._[1] extracts the points that are not only salient but also lie in the overlap region. Based on that, Zhu _et al._[6] recently proposed NgeNet to augment the features with point pair information and to use multi-level consistency voting to improve discrimination. Inspired by recent advances in image matching, CoFiNet proposed by Yu _et al._[2] and GeoTransformer proposed by Qin _et al._[7] utilize a coarse-to-fine mechanism that first finds reliable sparse point patch correspondences and then propagates the sparse correspondences to dense point matches. Benefiting from the highly efficient backbone, they are able to obtain dense descriptors for large point clouds.
The coarse-to-fine mechanism also greatly reduces the search space and increases the matching reliability. HRegNet proposed by Lu _et al._[27] also utilizes the coarse-to-fine scheme, but different from the above methods, it extracts multi-level features and refines the transformation hierarchically. The quality of the sparse patch correspondences is important for such coarse-to-fine methods. To improve the correspondence accuracy, the existing methods mostly exploit the transformer network [8]. For example, CoFiNet [2] adopts the original transformer [8] as a powerful contextual information encoder to generate more accurate point correspondences. However, the vanilla transformer lacks geometric information, which hinders the performance of the position-sensitive point cloud registration. GeoTransformer [7] tackles this problem by infusing pair-wise distance and triplet-wise angular information into the transformer, while NgeNet [6] constructs the geometric features with point pair features [10]. Although yielding promising results, GeoTransformer incurs an extra-large \(\mathcal{O}(n^{2})\) storage complexity from the pair-wise distance and triplet-wise angular embeddings. NgeNet is computationally expensive due to the normal estimation. Besides, existing works neglect the distribution of the superpoints. They use uniformly sampled points that may separate a single object into several patches, which usually retain a low overlap ratio with patches in the other point cloud. This may lead to poor superpoint matching and dense point propagation results. Unlike the existing methods, our RDMNet uses the proposed 3D-RoFormer to exploit both the contextual and geometric information of the point cloud, which generates better correspondences for point cloud registration. ## III Our Approach Given two point clouds \(\mathcal{P}^{\mathbf{A}}=\{\mathbf{p}^{\mathbf{A}}_{i}\in\mathbb{R}^{3}\}_{i=1}^{M}\) and \(\mathcal{P}^{\mathbf{B}}=\{\mathbf{p}^{\mathbf{B}}_{j}\in\mathbb{R}^{3}\}_{j=1}^{N}\), we aim to establish point correspondences between the two point clouds. To this end, we propose RDMNet, which finds correspondences in a coarse-to-fine manner. The overview of our approach is illustrated in Fig. 2. It is built upon our devised novel 3D-RoFormer network (see Sec. III-A) and consists of three main steps: superpoint detection (see Sec. III-B), coarse patch matching (see Sec. III-C), and finer point matching (see Sec. III-D). ### _3D-RoFormer_ We first introduce our devised novel 3D-RoFormer, which is a translation-invariant transformer and the core technique of our RDMNet. We build 3D-RoFormer upon the vanilla transformer [8]. For a point \(\mathbf{p}^{\mathbf{Q}}_{i}\) with its feature \(\mathbf{h}^{\mathbf{Q}}_{i}\) in the query point cloud \(\mathbf{Q}\) and all the points in the source point cloud \(\mathbf{S}\), the network computes the query \(\mathbf{q}_{i}\), key \(\mathbf{k}_{j}\), and value \(\mathbf{v}_{j}\) transformer feature maps with a linear projection: \[\mathbf{q}_{i} =\mathbf{W}_{1}\,\mathbf{h}^{\mathbf{Q}}_{i}+\mathbf{b}_{1}, \tag{1}\] \[\mathbf{k}_{j} =\mathbf{W}_{2}\,\mathbf{h}^{\mathbf{S}}_{j}+\mathbf{b}_{2},\] \[\mathbf{v}_{j} =\mathbf{W}_{3}\,\mathbf{h}^{\mathbf{S}}_{j}+\mathbf{b}_{3}.\] \(\mathbf{Q},\mathbf{S}\) could be downsampled input point clouds or sparse superpoints. If \(\mathbf{Q},\mathbf{S}\) are the same point cloud, Eq. (1) generates the feature maps for the self-attention operation, otherwise for cross-attention.
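For concreteness, here is a minimal sketch of the projections in Eq. (1) (PyTorch; names and shapes are illustrative, not the released code). Feeding the same cloud's features twice yields the self-attention maps; feeding two different clouds yields cross-attention.

```python
import torch.nn as nn

class QKVProjection(nn.Module):
    """Linear projections of Eq. (1)."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)  # W_1, b_1
        self.to_k = nn.Linear(dim, dim)  # W_2, b_2
        self.to_v = nn.Linear(dim, dim)  # W_3, b_3

    def forward(self, h_q, h_s):
        # h_q: (Nq, d) features of the query cloud Q
        # h_s: (Ns, d) features of the source cloud S (h_s = h_q for self-attention)
        return self.to_q(h_q), self.to_k(h_s), self.to_v(h_s)
```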
Given these feature maps, the transformer computes an attentional weight for the query point with each source point, \(\alpha_{ij}=\text{softmax}_{j}(\mathbf{q}^{\mathsf{T}}_{i}\mathbf{k}_{j})\), and obtains the final attention-enhanced feature for the query point as: \[(\tilde{\mathbf{h}}_{\text{vanilla}})_{i}=\sum_{j=1}^{|\mathbf{S}|}\text{softmax}_{j}(\mathbf{q}^{\mathsf{T}}_{i}\mathbf{k}_{j})\mathbf{v}_{j}=\sum_{j=1}^{|\mathbf{S}|}\alpha_{ij}\mathbf{v}_{j}, \tag{2}\] where \(|\mathbf{S}|\) represents the number of points in \(\mathbf{S}\). The transformer has been shown to be superior for point cloud registration [2, 7, 28]. However, the vanilla transformer contains no geometric information, thus leading to suboptimal registration results. Fig. 2: **Pipeline overview. Given two point clouds, our RDMNet first extracts superpoints from them using a superpoint detection module. Then, it applies a coarse patch matching to find the correspondences between sparse superpoints from two point clouds. Finally, a finer point matching module is used to propagate superpoint correspondences into dense-point matches, which are used to estimate the final transformations between these two point clouds.** Different works have been proposed to enhance the transformer with geometric information [6, 7]. However, they are either memory-consuming [7] or time-consuming [6]. Inspired by the recent RoFormer [29] using rotational information for natural language processing, we propose a novel 3D-RoFormer that encodes the absolute position information with a rotation matrix for 3D point cloud registration. Based on 3D-RoFormer, our RDMNet can better exploit both contextual and geometric information of the point clouds to generate more reliable keypoints. We first adapt the rotary position embedding to 3D data by leveraging an MLP and mapping the position \(\hat{\mathbf{s}}_{i}\in\mathbb{R}^{3}\) into the rotary embedding \(\mathbf{\Theta}_{i}=[\theta_{1},\theta_{2},\cdots,\theta_{\tilde{d}/2}]\in\mathbb{R}^{\tilde{d}/2}\): \[\mathbf{\Theta}_{i}=\text{MLP}_{\text{rot}}(\hat{\mathbf{s}}_{i}). \tag{3}\] Each element in \(\mathbf{\Theta}_{i}\) can be treated as a rotation in a 2D plane and represented by a rotation matrix. The final formulation of the rotary 3D position embedding \(\mathbf{\mathit{R}}_{\Theta_{i}}\in\mathbb{R}^{\tilde{d}\times\tilde{d}}\) is: \[\mathbf{\mathit{R}}_{\Theta_{i}}=\left[\begin{array}{ccccc}\cos\theta_{1}&-\sin\theta_{1}&\cdots&0&0\\ \sin\theta_{1}&\cos\theta_{1}&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&\cos\theta_{\tilde{d}/2}&-\sin\theta_{\tilde{d}/2}\\ 0&0&\cdots&\sin\theta_{\tilde{d}/2}&\cos\theta_{\tilde{d}/2}\end{array}\right]. \tag{4}\] Applying \(\mathbf{\mathit{R}}_{\Theta_{i}}\) to a \(\tilde{d}\)-dimensional vector is equivalent to dividing the vector into \(\tilde{d}/2\) 2D vectors and rotating each of them by \(\{\theta_{i}|i=1,\cdots,\tilde{d}/2\}\) accordingly (see Fig. 3). We apply \(\mathbf{\mathit{R}}_{\Theta_{i}}\) and \(\mathbf{\mathit{R}}_{\Theta_{j}}\) to the query \(\mathbf{q}_{i}\) and key \(\mathbf{k}_{j}\), respectively, in the self-attention operation and obtain the rotary self-attention as: \[\alpha_{ij}^{\prime\prime} =\text{softmax}_{j}((\mathbf{\mathit{R}}_{\Theta_{i}}\mathbf{q}_{i})^{\mathsf{T}}\mathbf{\mathit{R}}_{\Theta_{j}}\mathbf{k}_{j}), \tag{5}\] \[\tilde{\mathbf{h}}_{i} =\sum_{j=1}^{|\mathbf{S}|}\alpha_{ij}^{\prime\prime}\mathbf{v}_{j}. \tag{6}\] The benefits of using the proposed rotary self-attention are as follows.
First, by encoding the position information as a rotation matrix, the rotary self-attention explicitly encodes the relative position information neatly. Using the properties of the rotation matrix, we can further derive Eq. (5) as: \[\alpha_{ij}^{\prime\prime} =\text{softmax}_{j}(\mathbf{q}_{i}^{\mathsf{T}}\mathbf{\mathit{R}}_{\Theta_{i}}^{\mathsf{T}}\mathbf{\mathit{R}}_{\Theta_{j}}\mathbf{k}_{j}),\] \[=\text{softmax}_{j}(\mathbf{q}_{i}^{\mathsf{T}}\mathbf{\mathit{R}}_{\Theta_{j}-\Theta_{i}}\mathbf{k}_{j}), \tag{7}\] where \(\mathbf{\Theta}_{j}-\mathbf{\Theta}_{i}\) is naturally incorporated into the calculation of the attention scores \(\alpha_{ij}^{\prime\prime}\) and then fused with the output feature \(\tilde{\mathbf{h}}_{i}\) in Eq. (6). Therefore, our method is lightweight without requiring extra-large storage memory for relative position embeddings, which will be further verified in Sec. IV-G. Second, the proposed 3D-RoFormer is easy to deploy and operates very fast. Due to the sparsity of \(\mathbf{\mathit{R}}_{\Theta}\), the calculation of \(\mathbf{\mathit{R}}_{\Theta_{i}}\cdot\mathbf{q}_{i}\) and \(\mathbf{\mathit{R}}_{\Theta_{j}}\cdot\mathbf{k}_{j}\) can be done in a computationally efficient way using vector addition and multiplication operations, for example: \[\mathbf{\mathit{R}}_{\Theta_{i}}\cdot\mathbf{q}_{i}=\left[\begin{array}{c}q_{1}\\ q_{2}\\ \vdots\\ q_{\tilde{d}-1}\\ q_{\tilde{d}}\end{array}\right]\otimes\left[\begin{array}{c}\cos\theta_{1}\\ \cos\theta_{1}\\ \vdots\\ \cos\theta_{\tilde{d}/2}\\ \cos\theta_{\tilde{d}/2}\end{array}\right]+\left[\begin{array}{c}-q_{2}\\ q_{1}\\ \vdots\\ -q_{\tilde{d}}\\ q_{\tilde{d}-1}\end{array}\right]\otimes\left[\begin{array}{c}\sin\theta_{1}\\ \sin\theta_{1}\\ \vdots\\ \sin\theta_{\tilde{d}/2}\\ \sin\theta_{\tilde{d}/2}\end{array}\right]. \tag{8}\] Third, our proposed 3D-RoFormer is translation-invariant, a property inherited from the linearity of \(\text{MLP}_{\text{rot}}\): \[\mathbf{\Theta}_{j}-\mathbf{\Theta}_{i}=\text{MLP}_{\text{rot}}(\hat{\mathbf{s}}_{j}-\hat{\mathbf{s}}_{i}). \tag{9}\] Therefore, our 3D-RoFormer will not be influenced by changes in observation positions when used for finding correspondences. We enhance the final output features of the 3D-RoFormer, \(\tilde{\mathbf{H}}^{\mathbf{A}}\) and \(\tilde{\mathbf{H}}^{\mathbf{B}}\), for point matching by interleaving the rotary self-attention and cross-attention \(l\) times. Benefiting from the above-mentioned advantages, our proposed RDMNet uses the devised 3D-RoFormer in both the superpoint detection and sparse patch matching modules to better find superpoint correspondences. ### _Superpoint Detection_ Our approach aims first to find reliable sparse patch matches and then propagate them to dense point matches based on neighborhood consensus. Such a coarse-to-fine scheme avoids the time-consuming and unreliable global search of dense feature correspondences. Considering a point patch as the vicinity of a keypoint, the task can be treated as keypoint detection and vicinity grouping. We assign a keypoint to each point patch and regard the keypoint together with its vicinity as one superpoint. A simple way to extract keypoints is to directly use the uniformly sampled center point of each voxel [2, 7]. However, the uniformly sampled center points may separate a single object into several patches whose vicinities share low overlap with those of center points in the other point cloud (see Fig. 1), which leads to poor patch matching and dense point match propagation in the following steps.
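Before turning to superpoint detection, the rotary mechanism of Eqs. (3)-(8) is compact enough to sketch directly. The snippet below is an illustrative reading of the equations, not the authors' implementation; in particular, \(\text{MLP}_{\text{rot}}\) is kept as a single bias-free linear layer so that the translation invariance of Eq. (9) holds exactly.

```python
import torch
import torch.nn as nn

class Rotary3D(nn.Module):
    """Rotary 3D position embedding, Eqs. (3)-(8): map a 3D position to
    d/2 angles and rotate consecutive 2D slices of a feature vector."""
    def __init__(self, dim):
        super().__init__()
        assert dim % 2 == 0
        # Bias-free linear map keeps Eq. (9) exact: Theta_j - Theta_i = MLP(s_j - s_i).
        self.mlp_rot = nn.Linear(3, dim // 2, bias=False)

    def forward(self, x, pos):
        # x: (N, d) queries or keys; pos: (N, 3) point coordinates
        theta = self.mlp_rot(pos)                          # Eq. (3)
        cos = theta.cos().repeat_interleave(2, dim=-1)     # (cos t1, cos t1, cos t2, ...)
        sin = theta.sin().repeat_interleave(2, dim=-1)
        x1, x2 = x[..., 0::2], x[..., 1::2]
        x_rot = torch.stack((-x2, x1), dim=-1).flatten(-2) # (-q2, q1, -q4, q3, ...)
        return x * cos + x_rot * sin                       # Eq. (8)

def rotary_self_attention(q, k, v, pos, rope):
    # Eqs. (5)-(6): rotate queries and keys by their own positions, then attend.
    qr, kr = rope(q, pos), rope(k, pos)
    return torch.softmax(qr @ kr.transpose(-1, -2), dim=-1) @ v
```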
To address the patch-partition issue described above, we propose the superpoint detection module to extract more reliable superpoints for the point patch partition. We use the KPEncoder [30] as the backbone of the proposed superpoint detection module; it hierarchically downsamples and encodes the point cloud into uniformly distributed nodes \(\hat{\mathcal{P}}\) with associated features \(\hat{\mathbf{F}}\in\mathbb{R}^{|\hat{\mathcal{P}}|\times\tilde{d}}\). The node features from the KPEncoder contain only the contextual and geometric information of the single point cloud but lack information between the two point clouds, and thus cannot reason about the inter-cloud cues needed to make the associated superpoints compact. Therefore, we use the proposed 3D-RoFormer to fuse inter-point-cloud information and explicitly encode intra-point-cloud information at the same time. We denote the feature enhanced by the 3D-RoFormer as \(\tilde{\mathbf{F}}\in\mathbb{R}^{|\hat{\mathcal{P}}|\times\tilde{d}}\). A Voting module is then used to estimate the geometric offset and feature offset from the nodes to the proposal superpoints \(\mathcal{S}\), i.e., \([\Delta\mathbf{P},\Delta\mathbf{F}]=\text{Vote}(\tilde{\mathbf{F}})\), \(\mathcal{S}=\hat{\mathcal{P}}+\Delta\mathbf{P}\), and \(\mathbf{H}=\tilde{\mathbf{F}}+\Delta\mathbf{F}\). Fig. 3: The rotary 3D position embedding. We use a group of Multi-Layer Perceptrons (MLPs) to form the Voting module. Though very simple, it generates meaningful offsets based on the features learned by our 3D-RoFormer (see Fig. 7) and boosts the registration performance by a large margin (see Tab. V). In this paper, we supervise the superpoints to fall in locally significant regions, which may also lead to redundant proposals located in the same significant region. Thus, we use a simple radius search-based filtering strategy to keep only one proposal per region. We iteratively perform a radius search for each proposal and filter out the ones close to the search center. After that, we obtain the final superpoints \(\hat{\mathcal{S}}\) with associated features \(\hat{\mathbf{H}}\). Note that we limit the offsets \(\Delta\mathbf{P}\) to a certain range, which keeps the superpoints evenly distributed throughout the point cloud instead of concentrated only in so-called significant areas. It also avoids possible degeneracy [5]. For each superpoint \(\hat{\mathbf{s}}_{i}\), we construct a local patch \(\mathcal{G}_{i}\) using a point-to-node strategy [5]. Specifically, each point is assigned to its nearest superpoint by: \[\mathcal{G}_{i}=\{\mathbf{p}\in\mathcal{P}|i=\operatorname*{argmin}_{j}(\|\mathbf{p}-\hat{\mathbf{s}}_{j}\|_{2}),\hat{\mathbf{s}}_{j}\in\hat{\mathcal{S}}\}. \tag{10}\] There are two advantages of this strategy. First, it assigns every point to a specific superpoint without duplication or loss. Second, it adapts to different densities, which is particularly suitable for our case since our superpoints break the uniformity of the original sampling after adding offsets. ### _Sparse Patch Matching_ Based on the detected superpoints, we then conduct patch matching on the coarse level and find superpoint/patch correspondences between point clouds \(\mathbf{A}\) and \(\mathbf{B}\). Since the superpoints have just been shifted and filtered by the superpoint detection module, the associated features \(\hat{\mathbf{H}}\) could be inconsistent with the surroundings.
Therefore, we first feed the superpoints with the associated features to another 3D-RoFormer to update the features with the newest contextual and geometric information from the updated superpoints. Then we conduct the superpoint matching. We follow Qin _et al._[7] and compute a Gaussian correlation matrix \(\mathbf{C}\in\mathbb{R}^{|\hat{\mathcal{S}}^{\mathbf{A}}|\times|\hat{\mathcal{S}}^{\mathbf{B}}|}\) between the normalized \(\hat{\mathbf{H}}^{\mathbf{A}}\) and \(\hat{\mathbf{H}}^{\mathbf{B}}\) with \(c_{i,j}=\exp(-\|\hat{\mathbf{h}}_{i}^{\mathbf{A}}-\hat{\mathbf{h}}_{j}^{\mathbf{B}}\|^{2})\). A dual-normalization is then performed to suppress ambiguous matches: \[\hat{c}_{i,j}=\frac{c_{i,j}}{\sum_{k=1}^{|\hat{\mathcal{S}}^{\mathbf{B}}|}c_{i,k}}\cdot\frac{c_{i,j}}{\sum_{k=1}^{|\hat{\mathcal{S}}^{\mathbf{A}}|}c_{k,j}}. \tag{11}\] We choose the largest \(N_{c}\) entries as the superpoint correspondences: \[\mathcal{M}=\{(\hat{\mathbf{s}}_{x_{i}}^{\mathbf{A}},\hat{\mathbf{s}}_{y_{i}}^{\mathbf{B}})|(x_{i},y_{i})\in\text{topk}_{x,y}(\hat{c}_{x,y})\}. \tag{12}\] Based on the fast superpoint matching, we determine the corresponding matched patches and use them as the basis for the subsequent fine-level dense-point matching. ### _Dense Point Matching_ On the fine level, we aim to generate dense point matches from the coarse patch correspondences. We leverage the KPDecoder [30] to recover point-level descriptors \(\mathbf{F}\). Instead of using the updated superpoint features, we recover the point-level features from the raw anchor point features \(\hat{\mathbf{F}}\), since there might be information loss after offsetting and filtering. For each superpoint correspondence \((\hat{\mathbf{s}}_{x_{i}}^{\mathbf{A}},\hat{\mathbf{s}}_{y_{i}}^{\mathbf{B}})\), we have its corresponding patch match \((\mathcal{G}_{x_{i}}^{\mathbf{A}},\mathcal{G}_{y_{i}}^{\mathbf{B}})\) and then compute a match score matrix \(\mathbf{O}_{i}\in\mathbb{R}^{M_{i}\times N_{i}}:\) \[\mathbf{O}_{i}=\mathbf{F}_{x_{i}}^{\mathbf{A}}(\mathbf{F}_{y_{i}}^{\mathbf{B}})^{\mathsf{T}}/\sqrt{\hat{d}}, \tag{13}\] where \(M_{i}=|\mathcal{G}_{x_{i}}^{\mathbf{A}}|\) and \(N_{i}=|\mathcal{G}_{y_{i}}^{\mathbf{B}}|\) represent the numbers of points in \(\mathcal{G}_{x_{i}}^{\mathbf{A}}\) and \(\mathcal{G}_{y_{i}}^{\mathbf{B}}\), respectively. To handle non-matched points, we append a "dustbin" row and column to \(\mathbf{O}_{i}\) filled with a learnable parameter \(\alpha\in\mathbb{R}\). The Sinkhorn algorithm is then used to solve for the soft assignment matrix \(\mathbf{Z}^{i}\in\mathbb{R}^{(M_{i}+1)\times(N_{i}+1)}\). Different from [2, 7], which drop the dustbin and recover the assignment by comparing the soft assignment score with a hand-tuned threshold, we directly find the max entries both row-wise and column-wise in \(\mathbf{Z}^{i}\), which are then recovered to the assignment \(\mathcal{C}^{i}\): \[\begin{split}\mathcal{C}^{i}=&\{(\mathcal{G}_{x_{i}}^{\mathbf{A}}(m),\mathcal{G}_{y_{i}}^{\mathbf{B}}(n))\,|\,(m,n)\in\text{toprow}_{m,n}(\mathbf{Z}^{i}_{1:M_{i},1:(N_{i}+1)})\}\cup\\ &\{(\mathcal{G}_{x_{i}}^{\mathbf{A}}(m),\mathcal{G}_{y_{i}}^{\mathbf{B}}(n))\,|\,(m,n)\in\text{topcolumn}_{m,n}(\mathbf{Z}^{i}_{1:(M_{i}+1),1:N_{i}})\},\end{split} \tag{14}\] where \(m\) and \(n\) are the row and column indexes of the maximal entries of \(\mathbf{Z}^{i}\). A point is either assigned to points in the matched patch or to the dustbin.
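A schematic sketch of the fine-level matching of Eqs. (13)-(14) follows: build the score matrix, run Sinkhorn with a dustbin, then keep row-wise and column-wise maxima instead of thresholding. The `sinkhorn` routine and the patch index arrays are assumed to exist; this illustrates the scheme, not the released code.

```python
import torch

def extract_patch_matches(feat_a, feat_b, sinkhorn, idx_a, idx_b):
    """feat_a: (M, d), feat_b: (N, d) point features of one matched patch pair;
    idx_a, idx_b: global point indices; sinkhorn returns the (M+1, N+1) soft
    assignment Z including the dustbin row/column."""
    scores = feat_a @ feat_b.transpose(-1, -2) / feat_a.shape[-1] ** 0.5  # Eq. (13)
    Z = sinkhorn(scores)
    M, N = scores.shape
    matches = set()
    for m, n in enumerate(Z[:M, :].argmax(dim=1).tolist()):
        if n < N:                       # n == N is the dustbin column
            matches.add((idx_a[m], idx_b[n]))
    for n, m in enumerate(Z[:, :N].argmax(dim=0).tolist()):
        if m < M:                       # m == M is the dustbin row
            matches.add((idx_a[m], idx_b[n]))
    return matches                      # union over all patches gives Eq. (15)
```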
With this mutual-max scheme, we do not need manual tuning, but we require a discriminative assignment matrix, which can be obtained by using our proposed loss function as detailed in Sec. III-E. Note that a point is not strictly assigned to a single point in our approach, as strict one-to-one point correspondences do not hold in practice due to the sparse nature of the point cloud. Instead, we trust and keep the assignment results from both sides, i.e., matches from query to source and vice versa. This results in considerably more point matches while maintaining a high inlier ratio, which benefits the transformation estimation. The final correspondences are the union of the point matches from all patches: \[\mathcal{C}=\bigcup_{i=1}^{N_{c}}\mathcal{C}^{i}. \tag{15}\] ### _Loss function and training_ The final loss is the sum of three components, \(L=L_{\text{s}}+L_{\text{c}}+L_{\text{f}}\), where \(L_{\text{s}}\) is the superpoint detection loss, \(L_{\text{c}}\) is the coarse match loss, and \(L_{\text{f}}\) is the fine match loss. **Superpoint detection loss**. The superpoint detection loss is composed of two parts, \(L_{s}=L_{s1}+L_{s2}\). The first part \(L_{s1}\) is designed to guide our superpoints to lie in significant regions, and the second part \(L_{s2}\) is designed to keep the superpoints close to the real measurement points. Specifically, for the first part, we do not explicitly define the significance of a point, but use a chamfer loss to minimize the distance between matched superpoints: \[L_{s1}=\sum_{i=1}^{|\mathcal{S}^{\mathbf{A}}|}\min_{\mathbf{s}_{j}^{\mathbf{B}}\in\mathcal{S}^{\mathbf{B}}}\|\mathbf{s}_{i}^{\mathbf{A}}-\mathbf{s}_{j}^{\mathbf{B}}\|_{2}^{2}+\sum_{i=1}^{|\mathcal{S}^{\mathbf{B}}|}\min_{\mathbf{s}_{j}^{\mathbf{A}}\in\mathcal{S}^{\mathbf{A}}}\|\mathbf{s}_{i}^{\mathbf{B}}-\mathbf{s}_{j}^{\mathbf{A}}\|_{2}^{2}. \tag{16}\] Supervised by \(L_{\text{s1}}\), we find that the superpoints tend to move to their nearest "significant" regions to minimize the distance between superpoint pairs. For the second part, we use another chamfer loss that minimizes the distance from each superpoint to its closest measured point: \[L_{\text{s2}}=\sum_{i=1}^{|\mathcal{S}^{\mathbf{A}}|}\min_{\mathbf{p}^{\mathbf{A}}_{j}\in\mathcal{P}^{\mathbf{A}}}\|\mathbf{s}^{\mathbf{A}}_{i}-\mathbf{p}^{\mathbf{A}}_{j}\|_{2}^{2}+\sum_{i=1}^{|\mathcal{S}^{\mathbf{B}}|}\min_{\mathbf{p}^{\mathbf{B}}_{j}\in\mathcal{P}^{\mathbf{B}}}\|\mathbf{s}^{\mathbf{B}}_{i}-\mathbf{p}^{\mathbf{B}}_{j}\|_{2}^{2}. \tag{17}\] **Coarse match loss**. We follow [7] and use an overlap-aware circle loss to guide the network to extract reliable superpoint correspondences with relatively high overlap. We take the patches in \(\mathbf{A}\) with at least one positive patch in \(\mathbf{B}\) as the anchor patch set \(\mathcal{A}\). For an anchor patch \(\mathcal{G}^{\mathbf{A}}_{i}\), its positive patch set \(\varepsilon_{i}^{+}\) is defined as those patches sharing at least 10% overlap with \(\mathcal{G}^{\mathbf{A}}_{i}\), and its negative patch set \(\varepsilon_{i}^{-}\) consists of those that do not overlap with \(\mathcal{G}^{\mathbf{A}}_{i}\).
Then the overlap-aware circle loss on \(\mathbf{A}\) is calculated as: \[L_{\text{c}}^{\mathbf{A}}=\frac{1}{|\mathcal{A}|}\sum_{\mathcal{G}^{\mathbf{A}}_{i}\in\mathcal{A}}\log[1+\sum_{\mathcal{G}^{\mathbf{B}}_{j}\in\varepsilon_{i}^{+}}e^{\lambda_{i}^{j}\beta_{i,j}^{+}(d_{i}^{j}-\Delta^{+})}\cdot\sum_{\mathcal{G}^{\mathbf{B}}_{k}\in\varepsilon_{i}^{-}}e^{\beta_{i,k}^{-}(\Delta^{-}-d_{i}^{k})}], \tag{18}\] where \(d_{i}^{j}=\|\hat{\mathbf{h}}_{i}^{\mathbf{A}}-\hat{\mathbf{h}}_{j}^{\mathbf{B}}\|_{2}\), \(\lambda_{i}^{j}\) refers to the overlap ratio between \(\mathcal{G}^{\mathbf{A}}_{i}\) and \(\mathcal{G}^{\mathbf{B}}_{j}\), and \(\beta_{i,j}^{+}=\gamma(d_{i}^{j}-\Delta^{+})\) and \(\beta_{i,k}^{-}=\gamma(\Delta^{-}-d_{i}^{k})\) represent the positive and negative weights. The hyper-parameter settings follow convention: \(\Delta^{+}=0.1\) and \(\Delta^{-}=1.4\). The overall coarse match loss is the average of the overlap-aware circle losses on \(\mathbf{A}\) and \(\mathbf{B}\), i.e., \(L_{\text{c}}=(L_{\text{c}}^{\mathbf{A}}+L_{\text{c}}^{\mathbf{B}})/2\). **Fine match loss**. To learn a discriminative soft assignment matrix and support our dense point match module, we use a gap loss on the soft assignment matrix \(\mathbf{Z}^{i}\) of each patch correspondence \(\{\mathcal{G}^{\mathbf{A}}_{x_{i}},\mathcal{G}^{\mathbf{B}}_{y_{i}}\}\). For each matched patch pair, we generate its ground truth correspondences \(\mathbf{M}^{i}\in\{0,1\}^{(M_{i}+1)\times(N_{i}+1)}\) with a match threshold \(\tau\), where \(M_{i}=|\mathcal{G}^{\mathbf{A}}_{x_{i}}|\), \(N_{i}=|\mathcal{G}^{\mathbf{B}}_{y_{i}}|\). The gap loss is then calculated as: \[L_{\text{f}}^{i}= \frac{1}{M_{i}}\sum_{m=1}^{M_{i}}\log(\sum_{n=1}^{N_{i}+1}[(-r_{m}^{i}+\mathbf{Z}_{m,n}^{i}+\eta)_{+}+1])\] \[+\frac{1}{N_{i}}\sum_{n=1}^{N_{i}}\log(\sum_{m=1}^{M_{i}+1}[(-c_{n}^{i}+\mathbf{Z}_{m,n}^{i}+\eta)_{+}+1]), \tag{19}\] where \((\bullet)_{+}=\max(\bullet,0)\), \(r_{m}^{i}=\sum_{n=1}^{N_{i}+1}\mathbf{Z}_{m,n}^{i}\mathbf{M}_{m,n}^{i}\) refers to the soft assignment value for the true match of the \(m\)-th point in \(\mathcal{G}^{\mathbf{A}}_{x_{i}}\), and \(c_{n}^{i}=\sum_{m=1}^{M_{i}+1}\mathbf{Z}_{m,n}^{i}\mathbf{M}_{m,n}^{i}\) refers to the soft assignment value for the true match of the \(n\)-th point in \(\mathcal{G}^{\mathbf{B}}_{y_{i}}\). The final fine match loss is the average over all the matched patch pairs: \(L_{\text{f}}=\frac{1}{2|\mathcal{M}|}\sum_{i=1}^{|\mathcal{M}|}L_{\text{f}}^{i}\). We implement and evaluate our RDMNet on 4 NVIDIA RTX 3090 GPUs. The network is trained with the Adam optimizer [31]. We use 5 layers of KPEncoder (4 layers of downsampling) and 3 layers of KPDecoder, which result in coarse-level points with a resolution of \(4.8\) m and fine-level points with a resolution of \(0.6\) m. The batch size is 1, and the learning rate is \(10^{-4}\) and decays exponentially by 0.05 every 4 epochs. We also adopt the same data augmentation as in [1]. ## IV Experimental Evaluation ### _Dataset Overview_ We evaluate RDMNet and compare it with the state-of-the-art methods on publicly available datasets, including the KITTI odometry [11], KITTI-360 [12], Apollo-SouthBay [13], and Mulran [14] datasets, as well as a self-recorded dataset. These datasets provide LiDAR scans collected in different environments with the corresponding ground-truth poses. The KITTI odometry and KITTI-360 datasets contain LiDAR data collected by a Velodyne HDL64 LiDAR in Germany. These two datasets use a similar sensor setup but collect data from different times and environments.
The Apollo-SouthBay dataset also uses a Velodyne HDL64 LiDAR but with a different sensor setup, collecting data in U.S. cities. The Mulran dataset contains data collected by an OS1-64 LiDAR in Korea. Fig. 4 shows our own platform equipped with a Velodyne VLP16 LiDAR, an inertial measurement unit (Xsens MTi-300), and a GNSS (INS CGI-410). We build our own dataset in a campus environment with ground-truth poses calculated by combining the GNSS and IMU with the state-of-the-art LiDAR SLAM method [32]. We follow [1, 7] and split the KITTI odometry into three sets: sequences 00-05 for training, 06-07 for validation, and 08-10 for testing. To evaluate the generalization ability, we directly apply the models trained on the KITTI odometry dataset to other datasets. Also in line with [1, 6, 7], we use the LiDAR pairs that are at most 10 m away as samples and get 1358 pairs for training, 180 pairs for validation, and 14577 pairs for testing. Fig. 4: Our campus data collection platform and some LiDAR data visualizations of the campus dataset. Note that the sensors, environments, and platform setups differ between the KITTI odometry dataset and the others, which thoroughly tests the generalization ability of the approaches. ### _Correspondence Matching Performance_ We first evaluate the correspondence matching performance. Following [7], we use two metrics to evaluate the matching performance: i) Inlier Ratio (IR), the ratio of correct correspondences with residuals below a certain threshold, e.g., 0.6 m, after applying the ground truth transformation, and ii) Feature Match Recall (FMR), the fraction of point cloud pairs with inlier ratio above a threshold, e.g., \(5\%\). We compare the results of our method with the recent state-of-the-art methods: Predator [1], CofiNet [2], NgeNet [6], and Geotransformer [7]. We report the results in Tab. I with different numbers of samples. The sampling strategy is slightly different for different methods. For Predator and NgeNet, we use the default setting that samples points with probability proportional to the estimated scores. As for our approach, CofiNet, and Geotransformer, because they directly output point correspondences without interest point sampling, we pick the top-\(k\) correspondences according to fine-level soft assignment scores. Note that the results of our approach on the Mulran dataset are obtained after removing the superpoint detection module, as we find that the superpoint detection module generalizes poorly to partially occluded point clouds. For a fair comparison, we do not retrain the modified model. To our surprise, RDMNet still achieves the best FMR and the second-best IR on Mulran, as shown in Tab. I. RDMNet performs even better on the other benchmarks, achieving the best FMR and IR. Notably, RDMNet exceeds the baseline by a large margin of about 10%-15% in IR. Interestingly, the keypoint-based methods and the methods that follow a coarse-to-fine manner present different trends as the number of samples becomes smaller. Keypoint-based methods, i.e., Predator and NgeNet, show a downward trend, while coarse-to-fine methods, i.e., CofiNet, Geotransformer, and ours, show an upward trend. The reason is that, as the number of samples becomes smaller, the keypoint-based methods find it harder to sample points in the overlap region, whereas for coarse-to-fine methods the samples are underwritten by the reliable patch correspondences.
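The two matching metrics have direct numerical definitions; a small sketch under the stated thresholds (0.6 m and 5%):

```python
import numpy as np

def inlier_ratio(corr_a, corr_b, R, t, tau1=0.6):
    """IR: fraction of correspondences whose residual after applying the
    ground-truth transform (R, t) is below tau1 meters."""
    residuals = np.linalg.norm(corr_a @ R.T + t - corr_b, axis=1)
    return float((residuals < tau1).mean())

def feature_match_recall(ir_per_pair, tau2=0.05):
    """FMR: fraction of point cloud pairs with inlier ratio above tau2."""
    return float((np.asarray(ir_per_pair) > tau2).mean())
```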
\begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c|c c} \hline \hline & KITTI & KITTI-360 & Apollo & Mulran & Campus & \multicolumn{1}{c}{} \\ \hline \multicolumn{11}{c}{_Registration Recall (\%)_} \\ \hline Predator [1] & **99.82** & 99.50 & 99.27 & 53.02 & 9.94 \\ CofiNet [2] & **99.82** & 99.62 & **100** & 80.79 & 36.84 \\ NgeNet [6] & **99.82** & **99.94** & **100** & 82.96 & 81.29 \\ Geotransformer [7] & **99.82** & 99.96 & **100** & 75.68 & 71.93 \\ RDMNet (_ours_) & **99.82** & 99.92 & **100** & **87.09** & **96.49** \\ \hline \multicolumn{11}{c}{_Relative Rotation Error (\(^{\circ}\))_} \\ \hline Predator [1] & 0.25 (0.27) & 0.29 & 0.21 & 1.03 & 1.94 \\ CofiNet [2] & 0.37 (0.41) & 0.44 & 0.18 & 0.52 & 1.81 \\ NgeNet [6] & 0.26 (0.30) & 0.30 & 0.18 & 0.35 & 1.01 \\ Geotransformer [7] & 0.22 (0.24) & 0.28 & 0.12 & **0.30** & 0.97 \\ RDMNet (_ours_) & **0.18** & **0.25** & **0.10** & 0.45 & **0.69** \\ \hline \multicolumn{11}{c}{_Relative Translation Error (\(\text{cm}\))_} \\ \hline Predator [1] & 5.8 (6.8) & 7.2 & 7.8 & 30.4 & 53.9 \\ CofiNet [2] & 8.2 (8.5) & 10.1 & 6.7 & 17.3 & 38.6 \\ NgeNet [6] & 6.1 (7.4) & 7.5 & 5.9 & **9.2** & 13.6 \\ Geotransformer [7] & 6.7 (7.4) & 8.1 & 6.1 & 12.0 & 18.4 \\ RDMNet (_ours_) & **5.3** & **7.0** & **4.6** & 14.4 & **12.7** \\ \hline \hline \end{tabular} \end{table} TABLE II: Registration results on multiple datasets using RANSAC-50K. The results in brackets on the KITTI dataset are those reported in the original papers, evaluated under flawed ground-truth poses; we fix this and also report the new results. The best results are highlighted in bold, and the second-best are underlined. All the models are only trained on the KITTI dataset. \begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c|c c|c c} \hline \hline & & KITTI & & KITTI-360 & & Apollo & Mulran & \multicolumn{1}{c}{} \\ \hline \# Samples & 5000 & 1000 & 250 & 5000 & 1000 & 250 & 5000 & 1000 & 250 & 5000 & 1000 & 250 & \multicolumn{1}{c}{} \\ \hline \multicolumn{11}{c}{_Feature Match Recall (\%)_} \\ \hline Predator [1] & 99.64 & 99.64 & 99.64 & 99.87 & 99.78 & 99.44 & 99.98 & 99.98 & 97.86 & 85.96 & 82.01 & 58.76 & 49.12 & 49.12 & 39.18 \\ CofiNet [2] & 99.64 & 99.64 & **99.82** & 99.89 & 99.98 & 99.86 & **100** & **100** & **100** & 91.79 & 93.10 & 93.20 & 62.57 & 66.07 & 67.84 \\ NgeNet [6] & 99.64 & 99.64 & 99.64 & **99.94** & **99.94** & **99.94** & **100** & **100** & **100** & 95.18 & 94.90 & 87.70 & 91.81 & 88.89 & 70.76 \\ Geotransformer [7] & **99.82** & **99.82** & **99.82** & 99.89 & 99.89 & **29.92** & **99.94** & **100** & **100** & **100** & 88.82 & 91.05 & 91.67 & 98.25 & 82.52 & 92.57 & 97.66 \\ RDMNet (_ours_) & **99.82** & **99.82** & **99.82** & 99.92 & **99.94** & **99.94** & **100** & **100** & **100** & **98.31** & **98.72** & **98.81** & **100** & **100** & **100** \\ \hline \multicolumn{11}{c}{_Inlier Ratio (\%)_} \\ \hline Predator [1] & 62.9 & 50.2 & 29.5 & 60.2 & 48.8 & 29.3 & 44.3 & 31.8 & 16.8 & 14.2 & 11.1 & 6.6 & 5.5 & 5.4 & 4.7 \\ CofiNet [2] & 34.2 & 36.1 & 36.2 & 32.4 & 34.4 & 34.8 & 40.4 & 41.9 & 42.2 & 17.6 & 19.1 & 19.4 & 6.5 & 6.7 & 6.8 \\ NgeNet [6] & 66.5 & 51.5 & 28.6 & 63.2 & 49.6 & 28.5 & 68.6 & 49.2 & 24.8 & 29.1 & 20.7 & 11.1 & 12.4 & 11.4 & 8.1 \\ Geotransformer [7] & 75.7 & 86.0 & 87.5 & 73.2 & 83.7 & 85.5 & 83.8 & 91.0 & 92.4 & **33.6** & **43.7** & 46.3 & 19.0 & 21.7 & 24.9 \\ RDMNet (_ours_) & **86.7** & **93.0** &
**95.3** & **84.0** & **91.0** & **93.7** & **92.1** & **96.4** & **97.6** & 31.4 & 42.6 & **51.1** & **34.9** & **36.8** & **41.5** \\ \hline \hline \end{tabular} \end{table} TABLE I: Matching results on multiple datasets under different numbers of samples. The best results are highlighted in bold, and the second-best are marked with an underline. Fig. 5: The registration recall under different thresholds. When changing one of the thresholds, we fix the other one to its default value, i.e., \(5^{\circ}\) for the rotation threshold and 2 m for the translation threshold. ### _Point Cloud Registration Performance_ The second experiment evaluates the point cloud registration results and supports our claim that our method outperforms the state-of-the-art methods in point cloud registration. We also follow [7] and use three other metrics to evaluate the registration performance: i) Relative Translation Error (RTE), the Euclidean distance between the estimated and ground truth translation vectors, ii) Relative Rotation Error (RRE), the geodesic distance between the estimated and ground truth rotation matrices, and iii) Registration Recall (RR), the fraction of scan pairs whose RRE and RTE are below certain thresholds, e.g., 5\({}^{\circ}\) and 2 m. We compare the results of our method with the recent RANSAC-based state-of-the-art methods: Predator [1], CofiNet [2], NgeNet [6], and Geotransformer [7] in Tab. II. There is an error in the evaluation code of these methods (except NgeNet) when using the KITTI ground truth. We fix the error and report both the results before the fix, as given in the original papers, and the new results after the fix. As can be seen, our RDMNet achieves the best RR on Mulran and outperforms all the baselines on all other datasets. Especially on the campus dataset, RDMNet outperforms the baseline methods by a large margin on all metrics. We further evaluate the RR for all the methods at different RRE and RTE thresholds on the KITTI and Campus datasets (see Fig. 5). Our RDMNet exhibits higher registration recall at all thresholds. In particular, RDMNet exceeds the baseline methods by a large margin when generalizing to the campus dataset. We also compare our method to state-of-the-art RANSAC-free methods using Local-to-Global Registration (LGR) [7]: HRegNet [27] and Geotransformer [7] in Tab. III. LGR is specifically proposed for superpoint-based approaches [7]. It calculates poses by performing weighted SVD on the dense point correspondences of each patch and chooses the one that admits the most inlier matches, which greatly reduces the computation time by limiting the iterations to \(|\mathcal{M}|\). When using LGR, our method attains remarkable results for translation estimation \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline & \begin{tabular}{c} Rot. \\ RMSE (\(^{\circ}\)) \\ \end{tabular} & \begin{tabular}{c} Rot. \\ MAE (\(^{\circ}\)) \\ \end{tabular} & \begin{tabular}{c} Rot. \\ STD (\(^{\circ}\)) \\ \end{tabular} & \begin{tabular}{c} Trans. \\ RMSE(cm) \\ \end{tabular} & \begin{tabular}{c} Trans. \\ MAE(cm) \\ \end{tabular} & \begin{tabular}{c} Trans.
\\ STD(cm) \\ \end{tabular} \\ \hline \multicolumn{7}{c}{_KITTI Sequence 08_} \\ Predator [1] & 104.7 & 39.77 & 8.13 & 10469.6 & 4228.3 & 4319.7 \\ CofiNet [2] & 8.80 & 4.11 & 1.77 & 879.8 & 335.1 & 381.8 \\ NgeNet [6] & 6.48 & 3.17 & 0.91 & 647.8 & 273.4 & 255.2 \\ Geotransformer [7] & 4.18 & 2.01 & **0.60** & 418.3 & 171.2 & 170.3 \\ \hline \hline \end{tabular} \end{table} and surpasses the best RANSAC-based results by about 1 cm on the KITTI, KITTI-360, and Apollo datasets. In most cases, our method is the best among the RANSAC-free methods. The above experiments evaluate the performance of all the methods in relative pose estimation. We further evaluate the absolute pose error of all the methods. We still use the LiDAR pairs 10 m away for transformation estimation and chain these transformations to obtain the trajectory. Fig. 6 shows the trajectories estimated by different methods on the three test sequences of the KITTI dataset. We calculate the root mean squared error (RMSE) and mean absolute error (MAE) of each estimated trajectory against the ground truth trajectory, listed in Tab. IV. As can be seen, our RDMNet achieves overall the best performance. So far, we have demonstrated the superior capability of our RDMNet for point cloud registration in terms of both relative and absolute pose errors. The superiority of our method lies both in its high accuracy on the datasets similar to the training set (KITTI-360 and Apollo) and in its robustness when generalizing to datasets with totally different sensor configurations (Mulran and Campus). The keys to this robustness and accuracy are the two main modules of the method, i.e., the 3D-RoFormer and the superpoint detection module. The 3D-RoFormer injects powerful feature representation capability into the network by encoding relative position information in a lightweight manner, while the superpoint detection module generates reliable superpoints that boost matching and registration significantly. ### _Qualitative Results_ To provide more insights into the proposed RDMNet, we visualize the superpoints and the corresponding point patches learned by our method in Fig. 7. As can be seen, compared to the vanilla uniformly distributed superpoints (second row), our method can extract superpoints close to their nearest geometrically significant regions, such as the curbs and objects, as shown from the third row to the fourth row. Based on that, our method groups the dense point patches more meaningfully, where points on a single object are gathered in the same patch (fifth row), which is important for finding accurate correspondences and obtaining good registration results. Fig. 7: Visualizations of our superpoint detection and point patch grouping. (a) shows the corresponding images of the environment only for reference. (b) shows the uniformly sampled superpoints (red dots) with their point patches grouped by the point-to-node strategy. We assign each point patch a random color for visualization. (c) shows the uniformly sampled superpoints with learned offsets (orange lines). (d) shows the final superpoints after offsetting and filtering. (e) shows the point patches grouped by the final superpoints. Each point patch is assigned a random color. Uniformly sampled superpoints can easily separate a single object into several parts, as seen in (b). This poses challenges for matching.
Our superpoint detection module learns a pattern that brings superpoints close to geometrically significant regions without using any semantic information. This leads to a more reasonable point patch grouping, which benefits point matching. Fig. 8 provides the matching and registration results of our method compared to Geotransformer [7] and CoFiNet [2]. It shows that our RDMNet finds more inlier matches on salient regions, rejects outlier matches between similar flat areas, and performs robust and accurate registration. ### _Robustness Tests_ We conduct registration robustness tests regarding different overlap ratios and noise levels on the KITTI dataset. We generate the testing datasets with varying overlap ratios using the LiDAR pairs at different distances. As shown in Fig. 9a, in terms of RR, RRE, and RTE, RDMNet achieves the best registration performance for paired point clouds with varying overlap ratios. For evaluation under different noise levels, we add zero-mean Gaussian noise with standard deviation \(\sigma\) to the point coordinates. As shown in Fig. 9b, RDMNet obtains more accurate and robust registration than the competing algorithms at all noise levels. Note that RDMNet shows superior robustness, maintaining an extremely high registration recall of \(96.58\%\) at a high noise level of \(0.9\) m, well above the other baselines. ### _Ablation Study_ We conduct ablation studies on the KITTI and Apollo datasets to better understand the effectiveness of each module in the proposed RDMNet, and show that the full RDMNet is the best setup. We use the model trained with negative log-likelihood loss [2, 7] using the vanilla transformer and without the superpoint detection module as the base model. Tab. V summarizes the point registration results of the ablation study. SDM refers to the superpoint detection module, and RoPE refers to the rotary position embedding. As can be seen, each module of our method individually improves point cloud registration. Combining all proposed modules, our RDMNet performs the best. We also provide a study on the effectiveness of our proposed 3D-RoFormer. We compare it with other existing transformers, including the vanilla transformer [2], absolute position embedding (APE) [28], and geometric embedding (GEO) [7]. We only change the transformer parts in our RDMNet while keeping the rest the same and compare the point cloud registration results. As shown in Tab. VI, using our proposed 3D-RoFormer, our RDMNet achieves the best performance in all metrics on both the KITTI and Campus datasets. \begin{table} \begin{tabular}{c c c|c c c|c c c} \hline \hline \multirow{2}{*}{SDM} & \multirow{2}{*}{RoPE} & \multirow{2}{*}{gap loss} & \multicolumn{3}{c|}{KITTI} & \multicolumn{3}{c}{Apollo} \\ & & & RR & RRE & RTE & RR & RRE & RTE \\ \hline & ✓ & ✓ & 99.46 & 0.20 & 5.8 & 99.34 & 0.16 & 6.9 \\ ✓ & ✓ & **99.82** & 0.20 & 5.5 & 98.49 & 0.20 & 6.7 \\ ✓ & ✓ & **99.82** & **0.18** & 5.7 & **100** & 0.11 & 5.0 \\ ✓ & ✓ & ✓ & **99.82** & **0.18** & **5.3** & **100** & **0.10** & **4.6** \\ \hline \hline \end{tabular} \end{table} TABLE V: Ablation study of individual modules. Fig. 8: Matching and registration results of our RDMNet compared to the recent advances CoFiNet [2] and GeoTransformer [7]. In (a), (c), and (e), we visualize the matching on three datasets. Our RDMNet finds more inlier matches on salient regions (e.g., the curbs and the shrubs) and rejects the outlier matches between similar flat patches.
(b), (d), and (f) show that our RDMNet achieves more accurate and robust registration compared to the baseline methods. Fig. 9: Registration robustness tests in terms of (a) pair-wise distance and (b) noise level on the KITTI datasets. ### _Study on Runtime and Storage_ We measure the runtime and storage of the major module 3D-RoFormer for different numbers of input nodes per scan. The experiments are performed on an NVIDIA GeForce RTX 3090 GPU. As shown in Tab. VII, the runtime and storage of 3D-RoFormer increase linearly as the number of input nodes increases, similar to the vanilla transformer and the APE transformer, and in contrast to the quadratic growth of the GEO transformer. ## V Conclusion In this paper, we present RDMNet, which leverages a coarse-to-fine strategy to extract dense point correspondences for point cloud registration. We exploit insights from natural language processing and keypoint detection and design a novel transformation-invariant transformer named 3D-RoFormer. It learns to aggregate contextual and geometric information of the point clouds in a fast and lightweight way and extracts salient and compact superpoint pairs for point cloud registration. We evaluate and compare our approach on multiple datasets, including publicly available ones and our self-recorded dataset collected from different environments. Extensive experiments suggest that our approach outperforms the baseline methods in terms of both correspondence matching and point cloud registration, with strong generalization ability. In the future, we want to figure out why the voting module does not work well on the Mulran dataset and improve its generalization ability. We also want to explore the potential of RDMNet to tackle the global localization problem.
2310.20423
Limits of chordal graphs with bounded tree-width
We study random $k$-connected chordal graphs with bounded tree-width. Our main results are scaling limits and quenched local limits.
Jordi Castellví, Benedikt Stufler
2023-10-31T12:48:39Z
http://arxiv.org/abs/2310.20423v1
# Limits of chordal graphs with bounded tree-width ###### Abstract We study random \(k\)-connected chordal graphs with bounded tree-width. Our main results are scaling limits and quenched local limits. ## 1 Introduction A graph is _chordal_ if every cycle of length at least \(4\) contains a chord, i.e., an edge between two non-consecutive vertices of the cycle. Chordal graphs can be characterized as the graphs in which all minimal separating sets are cliques. The graphon limit of chordal graphs was established by Janson [21]. The structural and enumerative study of chordal graphs was pioneered by Wormald [34], who developed a method to find the exact number of chordal graphs with \(n\) vertices for a given \(n\) using generating functions. As he also noted, an interesting property of chordal graphs is that they admit a decomposition into \(k\)-connected components for any \(k\in\mathbb{N}\), which is not the case for arbitrary graphs, where this is only true up to \(k=3\) (for a detailed explanation of this see [15]). _Tree-width_ is a fundamental parameter in structural and algorithmic graph theory which can be defined in terms of tree-decompositions or, equivalently, in terms of \(k\)-trees. A \(k\)-tree is a graph obtained from a \((k+1)\)-clique by iteratively connecting a new vertex to all the vertices of an already existing \(k\)-clique. Note that \(1\)-trees are just trees with at least two vertices. The tree-width of a graph \(\Gamma\) is the minimum \(k\) such that \(\Gamma\) is a subgraph of a \(k\)-tree. In particular, \(k\)-trees are the edge-maximal graphs with tree-width at most \(k\). The graph limits of \(k\)-trees have been studied both in the labelled and unlabelled setting, see [17] and [24], respectively. Labelled chordal graphs with tree-width at most \(t\) have been recently enumerated in [15] by means of their decomposition into \(k\)-connected components for \(k=1,\ldots,t+1\). The only \((t+1)\)-connected (chordal) graph with tree-width at most \(t\) is the \((t+1)\)-clique. For fixed \(n,t>0\) and \(0\leq k\leq t\), let \(\mathcal{G}_{t,k,n}\) denote the set of \(k\)-connected chordal graphs with \(n\) labelled vertices and tree-width at most \(t\). Then, by the first main result of [15] there exist constants \(c_{t,k}>0\) and \(\rho_{t,k}\in(0,1)\) such that \[|\mathcal{G}_{t,k,n}|=c_{t,k}\,n^{-5/2}\,\rho_{t,k}^{-n}\,n!\,(1+o(1))\qquad \text{as $n\to\infty$}. \tag{1}\] Furthermore, the second result of [15] establishes a multi-dimensional central limit theorem for the numbers of cliques in the graph \(\mathsf{G}_{t,k,n}\) selected uniformly at random from \(\mathcal{G}_{t,k,n}\). Let \(X_{n,i}\) denote the number of \(i\)-cliques in \(\mathsf{G}_{t,k,n}\) and set \(\mathbf{X}_{n}=(X_{n,2},\ldots,X_{n,t})\). Then, we have that \[\frac{1}{\sqrt{n}}\,(\mathbf{X}_{n}-\mathbb{E}\,\mathbf{X}_{n})\overset{d}{ \to}N(0,\boldsymbol{\Sigma}),\quad\text{with $\mathbb{E}\,\mathbf{X}_{n}\sim \boldsymbol{\alpha}n$ and $\mathbb{C}\mathrm{ov}\,\mathbf{X}_{n}\sim\boldsymbol{\Sigma}n$}, \tag{2}\] where \(\boldsymbol{\alpha}=(\boldsymbol{\alpha}_{i})_{1\leq i\leq t-1}\) is a \((t-1)\)-dimensional vector of positive numbers and \(\boldsymbol{\Sigma}\) is a \((t-1)\times(t-1)\)-dimensional positive semi-definite matrix. In this paper, we extend the analytic study of labelled \(k\)-connected chordal graphs with bounded tree-width by introducing probabilistic methods.
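To make the \(k\)-tree construction from the introduction concrete, the following sketch grows a random \(k\)-tree edge set; the uniform choice of the attachment clique is one possible rule used for illustration and is not part of the definition:

```python
import random
from itertools import combinations

def random_k_tree(n, k, seed=None):
    """Grow a random k-tree on vertex set {0, ..., n-1}; requires n >= k + 1.

    Start from a (k+1)-clique; every further vertex is joined to all
    vertices of a (here: uniformly chosen) existing k-clique.
    """
    rng = random.Random(seed)
    edges = {(i, j) for i in range(k + 1) for j in range(i + 1, k + 1)}
    # all k-subsets of the initial clique are available for attachment
    cliques = list(combinations(range(k + 1), k))
    for v in range(k + 1, n):
        base = rng.choice(cliques)           # existing k-clique
        edges.update((u, v) for u in base)   # join v to all of its vertices
        # v together with any (k-1)-subset of base forms a new k-clique
        cliques.extend(tuple(sorted((set(base) - {u}) | {v})) for u in base)
    return edges
```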
Instead of analyzing individual parameters using systems of equations for generating functions, our approach is to consider random graphs with a given number of vertices chosen uniformly. We then establish limit objects of the sequence of random graphs that encode asymptotic properties of the finite model. In what follows, chordal graphs with bounded tree-width will be labelled unless otherwise specified. Our first result establishes the Brownian tree \((\mathscr{T}_{e},d_{\mathscr{T}_{e}},\mu_{\mathscr{T}_{e}})\) constructed in [5, 6, 7] as the Gromov-Hausdorff-Prokhorov scaling limit. **Theorem 1**.: _There is a constant \(\kappa_{t,k}>0\) such that_ \[\left(\mathsf{G}_{t,k,n},\kappa_{t,k}n^{-1/2}d_{\mathsf{G}_{t,k,n}},\mu_{ \mathsf{G}_{t,k,n}}\right)\overset{d}{\longrightarrow}(\mathscr{T}_{e},d_{ \mathscr{T}_{e}},\mu_{\mathscr{T}_{e}})\] _in the Gromov-Hausdorff-Prokhorov sense as \(n\to\infty\)._ Here \(d_{\mathsf{G}_{t,k,n}}\) refers to the graph distance on the vertex set of \(\mathsf{G}_{t,k,n}\), and \(\mu_{\mathsf{G}_{t,k,n}}\) to the uniform measure on this set. Notice that when \(k=t\) we recover the result for \(t\)-trees [17, Thm. 1] and that when \(k=1\) we recover the result for the connected family, which is mentioned in [15] as a consequence of a general theorem about subcritical families [26]. Our result contributes to the universality of the Brownian tree, referring to the phenomenon that it arises as the Gromov-Hausdorff-Prokhorov scaling limit of several different models [16, 23, 14, 29, 10, 8, 28]. Apart from describing the asymptotic global shape, we prove a quenched limit that determines the asymptotic local shape: **Theorem 2**.: _Let \(v_{n}\) denote a uniformly selected vertex of \(\mathsf{G}_{t,k,n}\). There exists an infinite random vertex-rooted chordal graph \(\hat{\mathsf{G}}_{t,k}\) such that_ \[(\mathsf{G}_{t,k,n},v_{n})\overset{d}{\longrightarrow}\hat{\mathsf{G}}_{t,k}\] _in the local topology as \(n\to\infty\). Furthermore, we have quenched convergence_ \[\mathfrak{L}((\mathsf{G}_{t,k,n},v_{n})\mid\mathsf{G}_{t,k,n})\stackrel{{ p}}{{\longrightarrow}}\mathfrak{L}(\hat{\mathsf{G}}_{t,k}).\] Our third result is a tail bound for the diameter \(\mathrm{D}(\mathsf{G}_{t,k,n})\) of \(\mathsf{G}_{t,k,n}\). Such bounds have been determined for models of random trees [3, 2, 4], and in fact our proof makes use of the main result of [4]. **Theorem 3**.: _There exist constants \(C,c>0\) (that may depend on \(t\) and \(k\)) such that for all \(n\geq 1\) and \(x>0\)_ \[\mathbb{P}(\mathrm{D}(\mathsf{G}_{t,k,n})\geq x)\leq C\exp(-cx^{2}/n).\] An application of these tail bounds is that, by a standard calculation, they imply that the rescaled diameter \(\mathrm{D}(\mathsf{G}_{t,k,n})/\sqrt{n}\) is \(p\)-uniformly integrable for any fixed \(p\geq 1\). Theorem 1 entails distributional convergence \[\mathrm{D}(\mathsf{G}_{t,k,n})\kappa_{t,k}n^{-1/2}\stackrel{{ d}}{{\longrightarrow}}\mathrm{D}(\mathscr{T}_{e}).\] Arbitrarily large uniform integrability allows us to apply the mean convergence criterion, yielding \[\mathbb{E}[\mathrm{D}(\mathsf{G}_{t,k,n})^{p}]\kappa_{t,k}^{p}n^{-p/2}\to \mathbb{E}[\mathrm{D}(\mathscr{T}_{e})^{p}].\] The distribution and moments of the diameter of the Brownian tree are known [33, 31, 6].
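For the reader's convenience, a sketch of this standard calculation: writing \(Y_{n}=\mathrm{D}(\mathsf{G}_{t,k,n})/\sqrt{n}\), Theorem 3 and the layer-cake formula give, uniformly in \(n\) and for every fixed \(p\geq 1\), \[\mathbb{E}[Y_{n}^{2p}]=\int_{0}^{\infty}2p\,x^{2p-1}\,\mathbb{P}(Y_{n}\geq x)\,dx\leq\int_{0}^{\infty}2p\,x^{2p-1}\,C\exp(-cx^{2})\,dx<\infty,\] and moments of order \(2p\) bounded uniformly in \(n\) imply uniform integrability of \((Y_{n}^{p})_{n\geq 1}\).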
In a similar fashion our results entail distributional convergence and convergence of moments for other Gromov-Hausdorff-Prokhorov continuous functionals that are bounded by the diameter, for example the eccentricity with respect to a random vertex, or the distance between two or more uniformly selected vertices. ### Notation We recall some notation that will be used in the following sections. The positive integers are denoted by \(\mathbb{N}\), and the non-negative integers by \(\mathbb{N}_{0}\). We denote by \(\mathfrak{L}(X)\) the law of a random variable \(X\). All unspecified limits are as \(n\to\infty\). We use \(\stackrel{{ d}}{{\longrightarrow}}\) and \(\stackrel{{ p}}{{\longrightarrow}}\) for convergence in distribution and probability of random variables. We say that some event happens with high probability if its probability tends to \(1\). We let \(\mathcal{O}_{p}(a_{n})\) denote the product of a real number \(a_{n}\) with a stochastically bounded random variable \(Z_{n}\) that we do not specify explicitly. We let \(d_{\mathrm{TV}}\) denote the total variation distance between (the laws of) random variables. ## 2 Background ### Local convergence Let \(\mathfrak{G}\) denote the collection of rooted locally finite unlabelled graphs. We define the local distance between two graphs \(G,H\in\mathfrak{G}\) as \[d_{\mathrm{loc}}(G,H)=2^{-\sup\{m\in\mathbb{N}_{0}\,|\,U_{m}(G)=U_{m}(H)\}},\] where \(U_{m}(G)\) is the \(m\)-neighbourhood of the root of \(G\), that is, the graph induced by all vertices in \(G\) at distance at most \(m\) from its root. \(d_{\mathrm{loc}}\) is indeed a metric and \((\mathfrak{G},d_{\mathrm{loc}})\) is a Polish space, so we are in a standard setting for studying distributional convergence of random elements of this space. If \(X_{\infty},X_{1},X_{2},\ldots\) denote random graphs from \(\mathfrak{G}\), then \[X_{n}\stackrel{{ d}}{{\longrightarrow}}X_{\infty}\] if for any bounded continuous function \(F:\mathfrak{G}\to\mathbb{R}\) we have that \[\mathbb{E}[F(X_{n})]\to\mathbb{E}[F(X_{\infty})].\] This is in fact equivalent to the condition that for all \(r\in\mathbb{N}_{0}\) and all graphs \(G\in\mathfrak{G}\) it holds that \[\mathbb{P}(U_{r}(X_{n})=G)\to\mathbb{P}(U_{r}(X_{\infty})=G).\] ### Global convergence The Gromov-Hausdorff-Prokhorov distance \(d_{\mathrm{GHP}}\) is a well-known metric on the collection \(\mathfrak{K}\) of isometry-equivalence classes of compact metric spaces endowed with Borel probability measures. Detailed expositions of this concept can be found in [9], [13, Ch. 7], [32, Ch. 27], [1], [25, Sec. 6], and [22]. ## 3 Sampling Procedures ### Generating functions of labelled chordal graphs with bounded tree-width For the remainder of this paper, we fix an integer \(t\geq 1\) and omit the index \(t\) from notation, so that the dependency on \(t\) is implicit. In accordance with this convention, we let \(\mathcal{G}\) denote the class of chordal graphs with tree-width at most \(t\). For a graph \(\Gamma\in\mathcal{G}\) and \(j\in[t]\), let us denote by \(n_{j}(\Gamma)\) the number of \(j\)-cliques of \(\Gamma\). Define the multivariate (exponential) generating function associated to \(\mathcal{G}\) to be \[G(\mathbf{x})=G(x_{1},\ldots,x_{t})=\sum_{\Gamma\in\mathcal{G}}\frac{1}{n_{1} (\Gamma)!}\prod_{j=1}^{t}x_{j}^{n_{j}(\Gamma)}.\] Let \(g_{n}\) denote the number of chordal graphs with \(n\) vertices and tree-width at most \(t\).
Then, \[G(x,1,\ldots,1)=\sum_{n\geq 1}\frac{g_{n}}{n!}x^{n}.\] For \(0\leq k\leq t+1\), let \(\mathcal{G}_{k}\) be the class of \(k\)-connected chordal graphs with tree-width at most \(t\) and \(G_{k}(\mathbf{x})\) be the associated generating function. In particular, for \(k=t+1\) the only graph in the class is the \((t+1)\)-clique: \[G_{t+1}(\mathbf{x})=\frac{1}{(t+1)!}\prod_{j\in[t]}x_{j}^{\binom{t+1}{j}}.\] Rooting the graph \(\Gamma\in\mathcal{G}_{k}\) at an (ordered) \(i\)-clique means distinguishing one \(i\)-clique \(K\) of \(\Gamma\), choosing an ordering of its vertices and forgetting their labels. In order to avoid over-counting, we will discount the subcliques of \(K\). Let \(i\in[k]\) and define \(\mathcal{G}_{k}^{(i)}\) to be the class of \(k\)-connected chordal graphs with tree-width at most \(t\) and rooted at an \(i\)-clique. Let then \(G_{k}^{(i)}(\mathbf{x})\) be the associated generating function, where now for \(1\leq j\leq i\) the variables \(x_{j}\) mark the number of \(j\)-cliques that are not subcliques of the root. Then, for \(k\in[t]\), the following equations hold [15, Lem. 2.5]: \[G_{k+1}^{(k)}(\mathbf{x})=k!\left(\prod_{j=1}^{k-1}x_{j}^{-\binom{k}{j}}\right)\frac{\partial}{\partial x_{k}}G_{k+1}(\mathbf{x}), \tag{3}\] \[G_{k}^{(k)}(\mathbf{x})=\exp\left(G_{k+1}^{(k)}\big{(}x_{1},\ldots,x_{k-1},x_{k}G_{k}^{(k)}(\mathbf{x}),x_{k+1},\ldots,x_{t}\big{)}\right), \tag{4}\] \[G_{k}(\mathbf{x})=\frac{1}{k!}\left(\prod_{j=1}^{k-1}x_{j}^{\binom{k}{j}}\right)\int G_{k}^{(k)}(\mathbf{x})\ dx_{k}. \tag{5}\] For ease of notation, we write \(G_{k}^{(k)}(x):=G_{k}^{(k)}(x,1,\ldots,1)\), so that \[G_{k}^{(k)}(x)=\exp(G_{k+1}^{(k)}(x,1,\ldots,1,G_{k}^{(k)}(x),1,\ldots,1)).\] Let \(\rho_{k}\) be the radius of convergence of \(G_{k}^{(k)}(x)\). We recall the fact that Equation (4) is subcritical [15, Lem. 3.7], implying that \[G_{k+1}^{(k)}\big{(}\rho_{k}+\epsilon,1,\ldots,1,G_{k}^{(k)}(\rho_{k})+ \epsilon,1,\ldots,1\big{)}<\infty \tag{6}\] for some \(\epsilon>0\). Moreover, we can get the asymptotics of \(|\mathcal{G}_{k,n}^{(k)}|\) as follows. By (5), \[\frac{|\mathcal{G}_{k,n}^{(k)}|}{n!}=[x_{1}^{n}]G_{k}^{(k)}(x_{1},1,\ldots,1)\] \[=k![x_{1}^{n}]\,x_{1}^{-k}\left.\frac{\partial}{\partial x_{k}}G_{k}(\mathbf{x})\right|_{x_{2}=\cdots=x_{t}=1}\] \[=k![x_{1}^{n}]\sum_{\Gamma\in\mathcal{G}_{k}}n_{k}(\Gamma)\frac{x_{1}^{n_{1}(\Gamma)-k}}{n_{1}(\Gamma)!}\] \[=k![x_{1}^{n}]\sum_{\Gamma\in\mathcal{G}_{k,n+k}}n_{k}(\Gamma)\frac{x_{1}^{n}}{(n+k)!}\] \[=\frac{k!}{(n+k)!}\sum_{\Gamma\in\mathcal{G}_{k,n+k}}n_{k}(\Gamma).\] Therefore, by (2), \[\binom{n+k}{k}|\mathcal{G}_{k,n}^{(k)}|\sim n\boldsymbol{\alpha}_{k}|\mathcal{G}_{k,n+k}|,\qquad\text{as $n\to\infty$}, \tag{7}\] where \(\boldsymbol{\alpha}_{k}\) is the \(k\)-th component of \(\boldsymbol{\alpha}\). ### Boltzmann sampling procedure We describe a Boltzmann sampler \(\Gamma G_{k}^{(k)}(\rho_{k})\) that samples graphs in \(\mathcal{G}_{k}^{(k)}\) according to \[\mathbb{P}(\Gamma G_{k}^{(k)}(\rho_{k})=A)=\frac{\rho_{k}^{|A|}}{|A|!G_{k}^{( k)}(\rho_{k})}.\] In particular, \(\Gamma G_{k}^{(k)}(\rho_{k})\) conditioned on having \(n\) vertices is uniformly distributed on \(\mathcal{G}_{k,n}^{(k)}\). In order to describe \(\Gamma G_{k}^{(k)}(\rho_{k})\), we will use an auxiliary Boltzmann sampler \(\Gamma S(\rho_{k}):=\Gamma\exp(G_{k+1}^{(k)}(\rho_{k},1,\ldots,1,G_{k}^{(k)}( \rho_{k}),1,\ldots,1))\), for which we give no description.
\(\Gamma S(\rho_{k})\) samples a set of graphs in \(\mathcal{G}_{k+1}^{(k)}\) whose cardinality follows the distribution \(\text{Pois}\left(G_{k+1}^{(k)}(\rho_{k},1,\ldots,1,G_{k}^{(k)}(\rho_{k}),1, \ldots,1)\right)\). ### Blow-up sampling procedure We are going to reformulate the Boltzmann sampling algorithm such that it first generates a random tree and then creates a graph from that tree using a blow-up procedure. To this end, we define a pair \((\xi,\zeta)\) of random non-negative integers with probability generating function \[\mathbb{E}[z^{\xi}w^{\zeta}]=\exp(G_{k+1}^{(k)}(w\rho_{k},1,\ldots,1,zG_{k}^{(k )}(\rho_{k}),1,\ldots,1))/G_{k}^{(k)}(\rho_{k}). \tag{8}\] For ease of reference, if we consider a graph rooted at a \(k\)-clique, we refer to the vertices not contained in that clique as non-root vertices, and to the \(k\)-cliques different from the root \(k\)-clique also as non-root \(k\)-cliques. Thus, \(\xi\) and \(\zeta\) are distributed like the numbers of non-root \(k\)-cliques and vertices in \(\Gamma S(\rho_{k})\). Since \(\Gamma S(\rho_{k})\) follows a Boltzmann distribution, conditioning it on given numbers of non-root \(k\)-cliques and vertices yields the uniform distribution among all possible outcomes with these numbers. Thus, if we perform a two-step procedure where we first generate \((\xi,\zeta)\) and then generate a \(\text{SET}(\mathcal{G}_{k+1}^{(k)})\) object with accordingly many non-root \(k\)-cliques and vertices, we create a random structure that is distributed like \(\Gamma S(\rho_{k})\). We let \(\mathsf{T}\) denote a 2-type Bienayme-Galton-Watson tree such that vertices of the second kind are infertile, and each vertex of the first kind receives mixed offspring according to an independent copy of \((\xi,\zeta)\). The root of \(\mathsf{T}\) is always defined to have type 1. For ease of reference, we refer to vertices of the first type as 'black', and vertices of the second type as 'white'. We consider \(\mathsf{T}\) as ordered, so that there is a total order on the black children of any vertex, and a total order on the white children of any vertex. We do not impose any ordering between vertices of different types. It follows from the recursive description of \(\Gamma G_{k}^{(k)}(\rho_{k})\) in Algorithm 1 that the numbers of black and white vertices in \(\mathsf{T}\) are distributed like the total number of \(k\)-cliques (including the root clique) and the number of non-root vertices of \(\Gamma G_{k}^{(k)}(\rho_{k})\). In particular, since the graph \(\Gamma G_{k}^{(k)}(\rho_{k})\) is almost surely finite, it follows that the tree \(\mathsf{T}\) is almost surely finite as well. Since we can generate \(\Gamma S(\rho_{k})\) from \((\xi,\zeta)\), we can also generate \(\Gamma G_{k}^{(k)}(\rho_{k})\) from \(\mathsf{T}\) by drawing for each black vertex \(v\) of \(\mathsf{T}\) a uniform and independent _decoration_ \(\alpha(v)\) from \(\operatorname{SET}(\mathcal{G}_{k+1}^{(k)})\) such that the numbers of non-root \(k\)-cliques and vertices of \(\alpha(v)\) agree with the numbers of black and white children of \(v\). Note that \(\operatorname{SET}(\mathcal{G}_{k+1}^{(k)})\) contains a unique structure of size \(0\), so this also makes sense when \(v\) has no offspring. The decoration \(\alpha(v)\) may be viewed as a single connected graph by gluing together the root \(k\)-cliques of its components. We refer to the pair \((\mathsf{T},\alpha)\) as a _decorated tree_.
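The two-type tree \(\mathsf{T}\) is straightforward to simulate once a sampler for \((\xi,\zeta)\) is available. The following sketch treats that offspring sampler as a black box (all names are ours) and generates \(\mathsf{T}\) by an iterative exploration:

```python
import random

def sample_two_type_tree(offspring, rng=None, max_black=10**6):
    """Sample the two-type tree T given an offspring sampler.

    offspring(rng) must return a pair (b, w): the numbers of black and
    white children of a black vertex, distributed like (xi, zeta).
    White vertices are leaves, so the tree is encoded as a dict mapping
    each black vertex id to its pair (b, w).
    """
    rng = rng or random.Random()
    tree, stack, next_id = {}, [0], 1
    while stack:
        v = stack.pop()
        b, w = offspring(rng)
        tree[v] = (b, w)
        for _ in range(b):
            stack.append(next_id)
            next_id += 1
        if next_id > max_black:
            # criticality (E[xi] = 1) makes the total size heavy-tailed
            raise RuntimeError("tree exceeded the size cap")
    return tree
```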
The chordal graph corresponding to this decorated tree is obtained by gluing the decorations together according to the tree-structure \(\mathsf{T}\). That is, we start with the decoration of the root vertex \(o\) of \(\mathsf{T}\), viewed as a single graph. The non-root \(k\)-cliques of this decoration correspond to the black children of \(o\) in a canonical way, by using the total order on the black children and ordering the non-root \(k\)-cliques of the decoration according to some fixed rule. So we may proceed recursively by gluing, for each black child \(v\) of \(o\), the corresponding \(k\)-clique of \(\alpha(o)\) to the root clique of the connected graph corresponding to the decoration \(\alpha(v)\). We then proceed in the same fashion with the grandchildren of \(o\), and so on. Since this _blow-up procedure_ is a reformulation of Algorithm 1, the resulting graph \(\mathsf{G}_{k}^{(k)}\) is hence (up to relabelling of vertices) distributed like the Boltzmann random variable \(\Gamma G_{k}^{(k)}(\rho_{k})\). (Note that analogously we may transform any decorated tree into a graph via this blow-up operation.) Consequently, the random graph \(\mathsf{G}_{k,n}^{(k)}\) obtained by conditioning \(\mathsf{G}_{k}^{(k)}\) on having \(n\) non-root vertices is uniformly distributed among all graphs from \(\mathcal{G}_{k,n}^{(k)}\). We may view \(\mathsf{G}_{k,n}^{(k)}\) as the result of forming a tree \(\mathsf{T}_{n}\) obtained by conditioning \(\mathsf{T}\) on having \(n\) white vertices, choosing for each black vertex \(v\) of \(\mathsf{T}_{n}\) a decoration \(\alpha_{n}(v)\) uniformly at random such that the numbers of black and white children of \(v\) match the numbers of non-root \(k\)-cliques and vertices of the decoration, and finally performing the same blow-up procedure as before. ### Structural analysis We collect some asymptotic properties that we are going to use frequently. **Proposition 4**.: 1. _The random pair_ \((\xi,\zeta)\) _of non-negative integers has finite exponential moments. That is, there exists_ \(\epsilon>0\) _such that_ \[\mathbb{E}[(1+\epsilon)^{\xi}(1+\epsilon)^{\zeta}]<\infty.\] 2. _We have_ \[\mathbb{E}[\xi]=1.\] 3. _Letting_ \(\#_{1}(\cdot)\) _and_ \(\#_{2}(\cdot)\) _denote the number of black and white vertices of an input tree, we have_ \[\mathbb{P}(\#_{2}\mathsf{T}=n)\sim\frac{\mathbf{\alpha}_{k}k!\rho_{k}^{-k}}{G_{k}^ {(k)}(\rho_{k})}n^{-3/2}.\] Proof.: The fact that \((\xi,\zeta)\) has finite exponential moments follows readily from Equation (6). In order to see that \(\mathbb{E}[\xi]=1\), note that Equation (4) implies that \[\phi(x,G_{k}^{(k)}(x))=0\] for \[\phi(x,y)=y-\exp\left(G_{k+1}^{(k)}\big{(}x,1,\ldots,1,y,1,\ldots,1\big{)} \right).\] By the implicit function theorem, if \(\frac{\partial\phi}{\partial y}(\rho_{k},G_{k}^{(k)}(\rho_{k}))\neq 0\), then \(G_{k}^{(k)}(x)\) would have an analytic continuation in a neighbourhood of \(\rho_{k}\), which contradicts Pringsheim's theorem [18, Thm. IV.6]. Using (4), it hence follows that \[0 =\frac{\partial\phi}{\partial y}(\rho_{k},G_{k}^{(k)}(\rho_{k}))\] \[=1-G_{k}^{(k)}(\rho_{k})\frac{\partial G_{k+1}^{(k)}}{\partial x_{k}}\left(\rho_{k},1,\ldots,1,G_{k}^{(k)}(\rho_{k}),1,\ldots,1\right)\] \[=1-\mathbb{E}[\xi].\] The probability for the event \(\#_{2}\mathsf{T}=n\) is equal to the probability that the Boltzmann sampler \(\Gamma G_{k}^{(k)}(\rho_{k})\) produces a graph with \(n\) non-root vertices. That is, \[\mathbb{P}(\#_{2}\mathsf{T}=n)=\frac{|\mathcal{G}_{k,n}^{(k)}|\rho_{k}^{n}}{n!
G_{k}^{(k)}(\rho_{k})}.\] Using Equations (7) and (1), it follows that \[\mathbb{P}(\#_{2}\mathsf{T}=n)\sim\frac{\mathbf{\alpha}_{k}k!\rho_{k}^{-k}}{G_{k}^{(k)}(\rho_{k})}n^{-3/2} \tag{9}\] as \(n\to\infty\). We collect a few additional properties concerning the degree profile of the black subtree of \(\mathsf{T}_{n}\). **Proposition 5**.: 1. _We have_ \(\#_{1}\mathsf{T}_{n}/n\stackrel{{ p}}{{\longrightarrow}}1/\mathbb{E}[\zeta]\)_._ 2. _For each integer_ \(d\geq 0\) _the number_ \(B_{d}(\mathsf{T}_{n})\) _of black vertices with_ \(d\) _black children satisfies_ \(B_{d}(\mathsf{T}_{n})/n\stackrel{{ p}}{{\longrightarrow}} \mathbb{P}(\xi=d)/\mathbb{E}[\zeta]\)_._ 3. _There is a constant_ \(C>0\) _such that_ \[\lim_{n\to\infty}\mathbb{P}\left(\sum_{d\geq C\log n}B_{d}(\mathsf{T}_{n})=0 \right)=1.\] 4. _We have_ \(\sum_{d\geq 1}d^{2}\mathbb{E}[\zeta]B_{d}(\mathsf{T}_{n})/n\stackrel{{ p}}{{ \longrightarrow}}1+\mathbb{V}[\xi]\)_._ Proof.: We may construct the black subtree of \(\mathsf{T}\) in a depth-first-search order so that the numbers of black and white children of the \(i\)th vertex in that order are given by an independent copy \((\xi_{i},\zeta_{i})\) of \((\xi,\zeta)\). By standard properties of the depth-first-search the tree is fully explored after \(\ell\) steps if \(\sum_{i=1}^{\ell}(\xi_{i}-1)=-1\) and \(\sum_{i=1}^{j}(\xi_{i}-1)\geq 0\) for all \(1\leq j<\ell\); see for example [20, Lem. 15.2]. The total number of white vertices is then given by \(\sum_{i=1}^{\ell}\zeta_{i}\), and the number of black vertices with \(d\) black children by \(\sum_{i=1}^{\ell}\mathbf{1}_{\xi_{i}=d}\). Using the well-known cycle lemma [20, Lem. 15.3], it follows that \[\mathbb{P}(\#_{1}\mathsf{T}=\ell,\#_{2}\mathsf{T}=n)\] \[\qquad=\mathbb{P}\left(\sum_{i=1}^{\ell}(\xi_{i}-1,\zeta_{i})=(-1,n),\sum_{i=1}^{j}(\xi_{i}-1)\geq 0\text{ for all }1\leq j<\ell\right)\] \[\qquad=\frac{1}{\ell}\mathbb{P}\left(\sum_{i=1}^{\ell}(\xi_{i}-1,\zeta_{i})=(-1,n)\right).\] We have shown above that \(\zeta\) has finite exponential moments. Hence, by a standard medium deviation inequality recalled in Lemma 17 it follows that \[\mathbb{P}\left(\sum_{i=1}^{\ell}\zeta_{i}=n\right)\leq O(1)\exp(-\Theta(|n/ \mathbb{E}[\zeta]-\ell|/\sqrt{\ell}))\] uniformly for all integers \(\ell\). For any \(0<\delta<1/4\) we may set \[I=\{\ell\mid|\ell-n/\mathbb{E}[\zeta]|<n^{1/2+\delta}\}.\] Using Equation (9) we obtain \[\mathbb{P}(\#_{1}\mathsf{T}_{n}\notin I) =\mathbb{P}(\#_{1}\mathsf{T}\notin I\mid\#_{2}\mathsf{T}=n)\] \[\leq O(n^{3/2})\mathbb{P}(\#_{1}\mathsf{T}\notin I,\#_{2}\mathsf{T}=n)\] \[\leq O(n^{3/2})\sum_{\ell\notin I}\frac{1}{\ell}\exp(-\Theta(|n/ \mathbb{E}[\zeta]-\ell|/\sqrt{\ell}))\] \[=\exp(-\Theta(n^{\delta})).
\tag{10}\] In particular, \[\#_{1}\mathsf{T}_{n}/n\stackrel{{ p}}{{\longrightarrow}}1/ \mathbb{E}[\zeta].\] For notational convenience, we set \[I_{d}=[\mathbb{P}(\xi=d)n/\mathbb{E}[\zeta]-n^{1/2+2\delta},\mathbb{P}(\xi=d) n/\mathbb{E}[\zeta]+n^{1/2+2\delta}].\] Using Equation (9) it follows that \[\mathbb{P}(B_{d}(\mathsf{T}_{n})\notin I_{d}) =o(1)+\sum_{\ell\in I}\mathbb{P}(B_{d}(\mathsf{T}_{n})\notin I_{d},\#_{1}\mathsf{T}_{n}=\ell)\] \[=o(1)+\sum_{\ell\in I}\frac{\mathbb{P}(B_{d}(\mathsf{T})\notin I_{d},\#_{1}\mathsf{T}=\ell,\#_{2}\mathsf{T}=n)}{\mathbb{P}(\#_{2}\mathsf{T}=n)}\] \[\leq o(1)+O(n^{3/2})\sum_{\ell\in I}\mathbb{P}\left(\sum_{i=1}^{\ell}\mathbf{1}_{\xi_{i}=d}\notin I_{d},\sum_{i=1}^{\ell}\zeta_{i}=n\right).\] Applying the Chernoff bounds it follows that \[\mathbb{P}\left(\sum_{i=1}^{\ell}\mathbf{1}_{\xi_{i}=d}\notin I_{d}\right) \leq\exp\left(-\Theta(n^{\delta})/\mathbb{P}(\xi=d)\right)\] uniformly for all \(d\geq 0\) and all integers \(\ell\) with \(|\ell-n/\mathbb{E}[\zeta]|<n^{1/2+\delta}\). It follows that \[\mathbb{P}(B_{d}(\mathsf{T}_{n})\notin I_{d})\leq\exp(-\Theta(n^{\delta/2}))\] uniformly for all \(d\). In particular, \[B_{d}(\mathsf{T}_{n})/n\stackrel{{ p}}{{\longrightarrow}} \mathbb{P}(\xi=d)/\mathbb{E}[\zeta].\] Next, arguing analogously as before we obtain for any \(d_{0}\geq 0\) \[\mathbb{P}\left(B_{d}(\mathsf{T}_{n})>0\text{ for some }d\geq d_{0}\right) \leq o(1)+O(n^{3/2})\sum_{\ell\in I}\mathbb{P}\left(\sum_{i=1}^{\ell}\mathbf{1}_{\xi_{i}\geq d_{0}}>0\right)\] \[\leq o(1)+O(n^{7/2})\mathbb{P}(\xi\geq d_{0}).\] Since \(\xi\) has finite exponential moments, we may set \(d_{0}=C\log n\) for a large enough constant \(C>0\) such that this probability tends to zero. This shows: \[\lim_{n\to\infty}\mathbb{P}\left(\sum_{d\geq C\log n}B_{d}(\mathsf{T}_{n})=0 \right)=1.\] Finally, with the intermediate results at hand, we have with probability tending to \(1\) as \(n\to\infty\) that \[\sum_{d\geq 1}d^{2}\mathbb{E}[\zeta]B_{d}(\mathsf{T}_{n})/n =\sum_{d=1}^{C\log n}d^{2}(\mathbb{P}(\xi=d)+O(n^{-1/4}))\] \[=\mathbb{V}[\xi]+1+o(1).\] This concludes the proof. We may order the vertices of \(\mathsf{T}_{n}\) linearly according to a depth-first-search exploration. This induces an ordering of the \(\#_{1}\mathsf{T}_{n}\) black vertices and an ordering on the \(\#_{2}\mathsf{T}_{n}=n\) white vertices. Let \(U\) denote a random element of the half-open interval \(]0,1]\) sampled according to the uniform distribution. Hence \(L_{n}:=\lceil U\#_{1}\mathsf{T}_{n}\rceil\) is distributed like the position of a uniformly selected black vertex of \(\mathsf{T}_{n}\) in the induced ordering on the black vertices. Likewise, we may select the white vertex at position \(\lceil Un\rceil\) among the \(n\) white vertices (so that it is uniformly distributed) and let \(1\leq L_{n}^{\prime}\leq\#_{1}\mathsf{T}_{n}\) denote the position of its unique black parent in the ordering of the \(\#_{1}\mathsf{T}_{n}\) black vertices. **Proposition 6**.: _Given \(0<\delta<1/4\), there exists a constant \(C>0\) such that for all \(n\) the tree \(\mathsf{T}_{n}\) has with probability at least \(1-\exp(-\Theta(n^{\delta}))\) the property that_ \[|L_{n}-L_{n}^{\prime}|\leq Cn^{3/4},\] _regardless of the value of \(U\)._ Proof.: We use the same notation as in the proof of Proposition 5. By Equation (10) we know that for any \(0<\delta<1/4\) \[\mathbb{P}(\#_{1}\mathsf{T}_{n}\notin I)\leq\exp(-\Theta(n^{\delta}))\] with \(I=\{\ell\mid|\ell-n/\mathbb{E}[\zeta]|<n^{1/2+\delta}\}\).
Consider the event \(\mathcal{E}\) that there exists an index \(1\leq j\leq\#_{1}\mathsf{T}_{n}\) such that the sum \(S\) of the numbers of white children of the first \(j\) black vertices in depth-first-search order does not lie in \(J:=\{r\mid|j\mathbb{E}[\zeta]-r|\leq n^{1/2+\delta}\}\). By analogous arguments as for Equation (10), we obtain \[\mathbb{P}(\mathcal{E},\#_{1}\mathsf{T}_{n}\in I) \leq O(n^{3/2})\sum_{\ell\in I}\sum_{j=1}^{\ell}\mathbb{P}\left( \left|\sum_{i=1}^{j}\zeta_{i}-j\mathbb{E}[\zeta]\right|>n^{1/2+\delta}\right)\] \[\leq O(n^{3/2})\sum_{\ell\in I}\sum_{j=1}^{\ell}\sum_{r\notin J} \exp(-\Theta(|j\mathbb{E}[\zeta]-r|/\sqrt{j}))\] \[\leq\exp(-\Theta(n^{\delta})).\] Suppose now that \(\#_{1}\mathsf{T}_{n}\in I\) and that the event \(\mathcal{E}\) does not take place. Then, by construction, we know that \[\frac{L_{n}}{\#_{1}\mathsf{T}_{n}}=\frac{S}{n}+O(1/n)\] and \(|\#_{1}\mathsf{T}_{n}-n/\mathbb{E}[\zeta]|\leq n^{1/2+\delta}\) and \(|S-L_{n}^{\prime}\mathbb{E}[\zeta]|\leq n^{1/2+\delta}\), which implies \[|L_{n}-L_{n}^{\prime}|=O(n^{1/2+\delta}).\] ### De-rooting the \(k\)-clique rooted random graphs The random graph \(\mathsf{G}_{k,n}^{(k)}\) is uniformly distributed among all \(n\)-vertex \(k\)-connected chordal graphs with tree-width at most \(t\) that are rooted at a \(k\)-clique. Let us also denote by \(\mathcal{G}_{k,n}^{(k)_{L}}\) the class of \(k\)-connected chordal graphs with tree-width at most \(t\), a distinguished \(k\)-clique and having \(n\) vertices not in the root clique. One can think of these as rooted graphs where the vertices in the root clique are labelled instead of ordered. Then, we have that \(|\mathcal{G}_{k,n}^{(k)_{L}}|=\binom{n+k}{k}|\mathcal{G}_{k,n}^{(k)}|\). **Lemma 7**.: _If we consider \(\mathsf{G}_{k,n}^{(k)}\) and \(\mathsf{G}_{k,n+k}\) as unrooted unlabelled graphs by forgetting about the labels and the root-clique, then_ \[d_{\mathrm{TV}}(\mathsf{G}_{k,n}^{(k)},\mathsf{G}_{k,n+k})\to 0\] _as \(n\to\infty\)._ Proof.: Let \(\mathrm{L}(\mathsf{G}_{k,n}^{(k)})\) denote the result of relabelling all vertices of \(\mathsf{G}_{k,n}^{(k)}\) (including those contained in the root clique) uniformly at random. Furthermore, let \(\mathrm{F}(\cdot)\) denote a forgetful functor that takes as input a clique-rooted graph and outputs the unrooted graph obtained by forgetting about the root clique. By (2) the subset \(\mathcal{E}_{k,n}\) of all graphs \(G\in\mathcal{G}_{k,n}\) with \[|n_{k}(G)-\boldsymbol{\alpha}_{k}n|\leq n^{3/4}\] has the property that \(\mathsf{G}_{k,n}\in\mathcal{E}_{k,n}\) with high probability as \(n\to\infty\). Using Equation (7), it follows that \[\frac{\mathbb{P}(\mathrm{F}(\mathrm{L}(\mathsf{G}_{k,n}^{(k)}))=G)}{\mathbb{P}( \mathsf{G}_{k,n+k}=G)}=\frac{n_{k}(G)/(\binom{n+k}{k}|\mathcal{G}_{k,n}^{(k)}|) }{1/|\mathcal{G}_{k,n+k}|}\to 1\] uniformly for all \(G\in\mathcal{E}_{k,n+k}\) as \(n\to\infty\). This completes the proof. ## 4 Proof of the local limit ### Local convergence of random trees Consider the set \(\mathfrak{X}_{f}\) of all finite \(2\)-type plane trees with a root vertex and a second marked vertex (which is not necessarily distinct from the root vertex), subject to the following restriction: We require all trees in \(\mathfrak{X}_{f}\) to have the property that vertices of the second type have no offspring. For ease of reference, we will again refer to vertices of the first type as black vertices, and vertices of the second type as white vertices. Let \(T\) denote a \(2\)-type plane tree.
For any vertex \(v\) of \(T\) we can form the _fringe subtree_ \(f(T,v)\) consisting of all descendants of \(v\), including \(v\) itself. For any vertex \(v\) of \(T\) and any integer \(h\geq 0\) we can define the _extended fringe subtree_ \(f^{[h]}(T,v)\in\mathfrak{X}_{f}\) obtained by taking the fringe subtree \(f(T,v_{h})\) of \(T\) at the \(h\)-th ancestor \(v_{h}\) of \(v\), and marking it at the vertex of \(f(T,v_{h})\) that corresponds to \(v\). Of course, this is only well-defined if \(v\) has height at least \(h\) in \(T\). If \(v\) has smaller height, we set \(f^{[h]}(T,v)\) to some place-holder value. Given \(\tau\in\mathfrak{X}_{f}\) we define the number \(N_{\tau}(T)\) as the number of vertices \(v\) of the tree \(T\) such that \(f^{[h_{\tau}]}(T,v)=\tau\), with \(h_{\tau}\) denoting the height of the marked vertex of \(\tau\). We also say \(N_{\tau}(T)\) counts the _occurrences_ of \(\tau\) in \(T\). Let \(\mathfrak{X}\) denote the union of \(\mathfrak{X}_{f}\) with the set of all locally finite \(2\)-type plane trees with a marked vertex having a countably infinite number of ancestors, subject to the following restriction: We require all trees in \(\mathfrak{X}\) to have the property that the marked vertex is white, that all white vertices have no offspring, and that all extended fringe subtrees of the marked vertex are finite. In particular, any tree in \(\mathfrak{X}\) may be decomposed into a (possibly infinite) _spine_ given by the ancestors of the marked vertex, to which finite trees are attached. The space \(\mathfrak{X}\) can be equipped with a metric \[d_{\mathfrak{X}}(\tau,\tau^{\prime})=2^{-\sup\{h\geq 0|f^{[h]}(\tau)=f^{[h]}( \tau^{\prime})\}},\qquad\tau,\tau^{\prime}\in\mathfrak{X}.\] It is easy to see that the resulting metric space is complete and separable [30, Prop. 1]. Weak convergence of probability measures defines a topology on the space \(\mathfrak{M}(\mathfrak{X})\) of probability measures on \(\mathfrak{X}\), making it a Polish space too. We let \(v_{n}\) denote a uniformly selected white vertex of our random tree \(\mathsf{T}_{n}\). This makes \((\mathsf{T}_{n},v_{n})\) a random element of \(\mathfrak{X}\). We can also interpret the conditional law \(\mathfrak{L}((\mathsf{T}_{n},v_{n})\mid\mathsf{T}_{n})\) as a random element of \(\mathfrak{M}(\mathfrak{X})\). That is, for any finite \(2\)-type tree \(T\) with \(n\) white vertices we can look at the uniform probability measure that assigns probability \(1/n\) to each of the \(n\) marked versions of \(T\). This measure is a (deterministic) element of \(\mathfrak{M}(\mathfrak{X})\). The conditional law \(\mathfrak{L}((\mathsf{T}_{n},v_{n})\mid\mathsf{T}_{n})\) may be constructed in this way by taking \(T=\mathsf{T}_{n}\) to be random, and hence it is a random element of \(\mathfrak{M}(\mathfrak{X})\). Proposition 4 allows us to apply a general result [30, Thm. 1], yielding that there exists a random limiting tree \(\mathsf{T}^{\circ}\) with \[(\mathsf{T}_{n},v_{n})\stackrel{{ d}}{{\longrightarrow}}\mathsf{T}^ {\circ} \tag{11}\] and \[\mathfrak{L}((\mathsf{T}_{n},v_{n})\mid\mathsf{T}_{n})\stackrel{{ p}}{{\longrightarrow}}\mathfrak{L}(\mathsf{T}^{\circ}). \tag{12}\] The first equation is referred to as annealed local convergence, and the second (stronger) equation as quenched local convergence. The limiting tree \(\mathsf{T}^{\circ}\) may be described as follows.
We first define size-biased versions \((\xi^{\bullet},\zeta^{\bullet})\) and \((\xi^{\circ},\zeta^{\circ})\) of the random pair \((\xi,\zeta)\) by \[\mathbb{P}\left((\xi^{\circ},\zeta^{\circ})=(a,b)\right)=\mathbb{P}\left((\xi,\zeta)=(a,b)\right)b/\mathbb{E}[\zeta] \tag{13}\] and \[\mathbb{P}\left((\xi^{\bullet},\zeta^{\bullet})=(a,b)\right)= \mathbb{P}\left((\xi,\zeta)=(a,b)\right)a. \tag{14}\] Now, let \(u_{1},u_{2},\ldots\) denote a sequence of black vertices, which we are going to call the _spine_ of \(\mathsf{T}^{\circ}\). The tip \(u_{1}\) of the spine receives offspring according to \((\xi^{\circ},\zeta^{\circ})\) and a uniformly selected white child \(u_{0}\) is marked. For each \(i\geq 2\) the vertex \(u_{i}\) receives offspring according to an independent copy of \((\xi^{\bullet},\zeta^{\bullet})\) and we identify \(u_{i-1}\) with a uniformly selected black child. The construction of \(\mathsf{T}^{\circ}\) is finalized by identifying each black non-spine vertex with the root of an independent copy of the unconditioned tree \(\mathsf{T}\). ### Local convergence of random decorated trees Recall from Section 3.3 that the random decorated tree \((\mathsf{T}_{n},\alpha_{n})\) is constructed from \(\mathsf{T}_{n}\) by drawing for each black vertex \(v\) of \(\mathsf{T}_{n}\) an independent decoration \(\alpha_{n}(v)\) uniformly at random from the class \(\operatorname{SET}(\mathcal{G}^{(k)}_{k+1})\) such that the numbers of non-root \(k\)-cliques and vertices of \(\alpha_{n}(v)\) agree with the numbers of black and white children of \(v\). We refer to this construction as the _canonical way_ of decorating the tree \(\mathsf{T}_{n}\). Of course, this can be performed on any 2-type plane tree, allowing us to form the decorated tree \((\mathsf{T}^{\circ},\alpha^{\circ})\) by assigning random decorations \(\alpha^{\circ}(v)\), \(v\in\mathsf{T}^{\circ}\), in the canonical way. We define the space \(\mathfrak{A}\) as the collection of all tuples \((\tau,\alpha_{\tau})\) with \(\tau\in\mathfrak{X}\) and \(\alpha_{\tau}\) a (deterministic) decoration of \(\tau\). That is, \(\alpha_{\tau}=(\alpha_{\tau}(v))_{v\in\tau}\) is a family of objects from \(\operatorname{SET}(\mathcal{G}^{(k)}_{k+1})\) so that for each \(v\in\tau\) the numbers of non-root \(k\)-cliques and vertices of \(\alpha_{\tau}(v)\) agree with the numbers of black and white children of \(v\) in \(\tau\). (Here we use the index \(\tau\) in the notation \(\alpha_{\tau}\) only in order to avoid any confusion with the canonical decoration \(\alpha\) of the unconditioned Bienayme-Galton-Watson tree \(\mathsf{T}\). Of course, multiple decorated versions may correspond to the same undecorated tree \(\tau\).) We define the subset \(\mathfrak{A}_{f}\subset\mathfrak{A}\) of all such trees \((\tau,\alpha_{\tau})\) that are finite. The notions of extended fringe subtrees and the metric on \(\mathfrak{X}\) may be defined analogously for decorated trees, making \(\mathfrak{A}\) a Polish space. Analogously to the number \(N_{\tau}(T)\) of occurrences of a finite marked tree \(\tau\) in some plane tree \(T\), we may define the number \(N_{(\tau,\alpha_{\tau})}(T,\alpha_{T})\) of occurrences of a finite marked decorated tree \((\tau,\alpha_{\tau})\) in the tree \(T\) endowed with some decoration \(\alpha_{T}\).
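Returning briefly to the size-biased pairs (13) and (14): for offspring distributions with finite support they can be sampled by a simple reweighting, as in the following sketch (the finite-support restriction and all names are ours); the weight \(b\) yields \((\xi^{\circ},\zeta^{\circ})\) and the weight \(a\) yields \((\xi^{\bullet},\zeta^{\bullet})\):

```python
import random

def size_biased_sampler(pmf, weight):
    """Turn a finitely supported pmf {(a, b): p} for (xi, zeta) into a
    sampler for the size-biased version reweighted by weight((a, b))."""
    items = [(ab, p * weight(ab)) for ab, p in pmf.items()]
    total = sum(w for _, w in items)   # equals the expectation of the weight
    vals = [ab for ab, _ in items]
    probs = [w / total for _, w in items]
    return lambda rng: rng.choices(vals, weights=probs)[0]

# weight=lambda ab: ab[1] gives (13); weight=lambda ab: ab[0] gives (14)
```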
Equation (12) may be used to deduce quenched local convergence of random decorated trees: **Proposition 8**.: _As \(n\to\infty\),_ \[\mathfrak{L}(((\mathsf{T}_{n},v_{n}),\alpha_{n})\mid(\mathsf{T}_{n},\alpha_{ n}))\stackrel{{ p}}{{\longrightarrow}}\mathfrak{L}(\mathsf{T}^{\circ},\alpha^{ \circ}).\] Proof.: It is easy but crucial to observe that occurrences of \(\tau\) in \(T\) are pairwise disjoint. Therefore, if the decoration \(\alpha_{T}\) of \(T\) is chosen at random in the mentioned canonical way, then each occurrence of \(\tau\) in \(T\) has a fixed probability \(p_{(\tau,\alpha_{\tau})}\) of also being an occurrence of \((\tau,\alpha_{\tau})\), independently from the other occurrences. This probability \(p_{(\tau,\alpha_{\tau})}\) is precisely equal to the probability that a random canonical decoration of \(\tau\) is equal to \(\alpha_{\tau}\). The quenched local convergence from Equation (12) is equivalent to stating that for each \(\tau\in\mathfrak{X}_{f}\) \[\frac{N_{\tau}(\mathsf{T}_{n})}{n}\stackrel{{ p}}{{ \longrightarrow}}p_{\tau}\] for \(p_{\tau}:=\mathbb{P}(f^{[h_{\tau}]}(\mathsf{T}^{\circ})=\tau)\), with \(h_{\tau}\) denoting the height of the marked vertex of \(\tau\). Since occurrences of \(\tau\) do not overlap, it follows by the Chernoff bounds that for each decorated version \((\tau,\alpha_{\tau})\) of \(\tau\) we have \[\frac{N_{(\tau,\alpha_{\tau})}(\mathsf{T}_{n},\alpha_{n})}{n}\stackrel{{ p}}{{\longrightarrow}}p_{\tau}p_{(\tau,\alpha_{\tau})}.\] We have \[p_{\tau}p_{(\tau,\alpha_{\tau})} =\mathbb{P}(f^{[h_{\tau}]}(\mathsf{T}^{\circ})=\tau)\mathbb{P}(f ^{[h_{\tau}]}(\mathsf{T}^{\circ},\alpha^{\circ})=(\tau,\alpha_{\tau})\mid f^ {[h_{\tau}]}(\mathsf{T}^{\circ})=\tau)\] \[=\mathbb{P}(f^{[h_{\tau}]}(\mathsf{T}^{\circ},\alpha^{\circ})=( \tau,\alpha_{\tau})).\] It follows that \[\mathfrak{L}(((\mathsf{T}_{n},v_{n}),\alpha_{n})\mid(\mathsf{T}_{n},\alpha_{n} ))\stackrel{{ p}}{{\longrightarrow}}\mathfrak{L}(\mathsf{T}^{ \circ},\alpha^{\circ}).\] ### Local convergence of random graphs The blow-up procedure described in Section 3.3 constructs a graph from a decorated tree. Consequently, we may assign to each marked decorated tree \((\tau,\alpha_{\tau})\in\mathfrak{A}\) a vertex-rooted graph \(G\) obtained by performing these blow-up operations on \((\tau,\alpha_{\tau})\). In each step of this procedure we join a graph constructed so far to some previously unused decoration by gluing them together at specific \(k\)-cliques. The graph \(G\) is locally finite if and only if, in completing the procedure, no vertex gets glued to other vertices infinitely often. We let \(\mathfrak{A}_{0}\subset\mathfrak{A}\) denote the subset of decorated marked trees where the corresponding graph is locally finite. We define a map \[\Phi:\mathfrak{A}\to\mathfrak{G},\] which maps a marked decorated tree \((\tau,\alpha_{\tau})\) to a neighbourhood \(U_{j_{0}}(G)\) of the corresponding vertex-rooted graph \(G\) with \[j_{0}=\sup\{j\geq 0\mid U_{j}(G)\text{ is finite}\}\in\{0,1,2,\ldots\}\cup\{\infty\}.\] This way, \(\Phi(\tau,\alpha_{\tau})=G\) if \((\tau,\alpha_{\tau})\in\mathfrak{A}_{0}\). **Proposition 9**.: _The following statements hold._ 1. _The map_ \(\Phi\) _is continuous in a point_ \((\tau,\alpha_{\tau})\) _if and only if_ \((\tau,\alpha_{\tau})\in\mathfrak{A}_{0}\)_._ 2. \(\mathfrak{A}_{0}\) _is measurable and the map_ \(\Phi\) _is measurable._ 3. 
\((\mathsf{T}^{\circ},\alpha^{\circ})\in\mathfrak{A}_{0}\) _holds almost surely._ Proof.: We start with the first claim. Only the vertices that are part of \(k\)-cliques that correspond to the black vertices of the spine of an infinite marked decorated tree \((\tau,\alpha_{\tau})\) from \(\mathfrak{A}\) may undergo an infinite number of gluings when forming the corresponding graph \(G\). Such a vertex \(v\) has infinite degree in \(G\) if and only if all but a finite number of these cliques are glued together in an overlapping manner so that the overlap always contains \(v\). It follows that if \((\tau,\alpha_{\tau})\in\mathfrak{A}_{0}\) then for each \(j\geq 0\) there exists a number \(h(j)\geq 0\) such that the graph corresponding to \(f^{[h(j)]}(\tau,\alpha_{\tau})\) already contains the \(j\)-neighbourhood \(U_{j}(\Phi(\tau,\alpha_{\tau}))\). Thus, \(\Phi\) is continuous in \(\mathfrak{A}_{0}\). On the other hand, suppose that \((\tau,\alpha_{\tau})\in\mathfrak{A}\setminus\mathfrak{A}_{0}\), so that \(j_{0}\) as defined above is finite. \(\Phi\) cannot be continuous in \((\tau,\alpha_{\tau})\) since \(f^{[h]}(\tau,\alpha_{\tau})\to(\tau,\alpha_{\tau})\) as \(h\to\infty\), but clearly \(\Phi(f^{[h]}(\tau,\alpha_{\tau}))\) has radius at least \(j_{0}+1\) for all sufficiently large \(h\). This concludes the proof of the first claim. We proceed to prove the second claim. The points of continuity of a map defined on a metric space always form a Borel measurable subset, see [11, Appendix M, Sec. M10]. Consequently, the subset \(\mathfrak{A}_{0}\subset\mathfrak{A}\) is measurable. Using the fact that any open set in \(\mathfrak{G}\) is a countable union of sets of the form \(\{G\in\mathfrak{G}\mid U_{m}(G)=H\}\), \(m\geq 0\), \(H\) a finite rooted graph, measurability of \(\Phi\) now follows by standard arguments. We now prove the third claim. Let \(u_{1},u_{2},\ldots\) denote the black spine vertices of \(\mathsf{T}^{\circ}\), so that \(u_{i+1}\) is the father of \(u_{i}\) for all \(i\geq 1\). We know by the discussion in the first paragraph that in order for \((\mathsf{T}^{\circ},\alpha^{\circ})\in\mathfrak{A}_{0}\) it is sufficient that an infinite number of times it happens that the decoration \(\alpha^{\circ}(u_{i})\) of some black spine vertex \(u_{i}\) gets glued to the decoration \(\alpha^{\circ}(u_{i+1})\) of its father \(u_{i+1}\) in such a way that the two root cliques of the decorations are disjoint. The event \(\mathcal{E}_{i}\) for this to happen at \(u_{i}\) has a fixed positive probability that does not depend on \(i\), and the events \((\mathcal{E}_{i})_{i\geq 1}\) are independent. By the Borel-Cantelli lemma it follows that almost surely \(\mathcal{E}_{i}\) takes place for infinitely many \(i\). Thus, \((\mathsf{T}^{\circ},\alpha^{\circ})\in\mathfrak{A}_{0}\) holds almost surely. This completes the proof. **Corollary 10**.: _The push-forward map_ \[\Phi^{*}:\mathfrak{M}(\mathfrak{A})\to\mathfrak{M}(\mathfrak{G}),\qquad P \mapsto P\Phi^{-1}\] _that maps a probability measure \(P\) to the push-forward measure \(P\Phi^{-1}:B\mapsto P(\Phi^{-1}(B))\) has the following properties._ 1. \(\Phi^{*}\) _is continuous at the point_ \(\mathfrak{L}(\mathsf{T}^{\circ},\alpha^{\circ})\)_._ 2. \(\Phi^{*}\) _is measurable._ Proof.: Proposition 9 allows us to apply the continuous mapping theorem [11, Thm. 2.7], yielding directly that \(\Phi^{*}\) is continuous at \(\mathfrak{L}(\mathsf{T}^{\circ},\alpha^{\circ})\). Measurability of the map \(\Phi^{*}\) follows by standard arguments.
We are now ready to prove Theorem 2. Proof of Theorem 2.: Recall that by Proposition 8 we have \[\mathfrak{L}(((\mathsf{T}_{n},v_{n}),\alpha_{n})\mid(\mathsf{T}_{n},\alpha_{ n}))\stackrel{{ p}}{{\longrightarrow}}\mathfrak{L}(\mathsf{T}^{\circ},\alpha^{ \circ})\] as \(n\to\infty\). Corollary 10 allows us to apply the continuous mapping theorem [11, Thm. 2.7], yielding \[\Phi^{*}\mathfrak{L}(((\mathsf{T}_{n},v_{n}),\alpha_{n})\mid(\mathsf{T}_{n}, \alpha_{n}))\stackrel{{ p}}{{\longrightarrow}}\Phi^{*} \mathfrak{L}(\mathsf{T}^{\circ},\alpha^{\circ}).\] Clearly \(\Phi^{*}\mathfrak{L}(\mathsf{T}^{\circ},\alpha^{\circ})\) is equal to the law \(\mathfrak{L}(\mathsf{\hat{G}})\) of the infinite random rooted graph \(\mathsf{\hat{G}}:=\Phi(\mathsf{T}^{\circ},\alpha^{\circ})\). (Of course, \(\mathsf{\hat{G}}\) depends implicitly on \(k\).) Recall that \(\mathsf{G}^{(k)}_{k,n}\) denotes a uniformly selected labelled \(k\)-clique rooted \(k\)-connected chordal graph with tree-width at most \(t\). The push-forward \(\Phi^{*}\mathfrak{L}(((\mathsf{T}_{n},v_{n}),\alpha_{n})\mid(\mathsf{T}_{n}, \alpha_{n}))\) is distributed like the conditional law \(\mathfrak{L}((\mathsf{G}^{(k)}_{k,n},x^{\circ}_{n})\mid\mathsf{G}^{(k)}_{k,n})\), with \(x^{\circ}_{n}\) denoting a uniformly selected non-root vertex of \(\mathsf{G}^{(k)}_{k,n}\). If we select a vertex \(x_{n}\) uniformly at random among the \(n+k\) vertices of \(\mathsf{G}^{(k)}_{k,n}\), then this vertex is with high probability not one of the \(k\) vertices contained in the root-clique. Consequently, it follows that \[\mathfrak{L}((\mathsf{G}_{k,n}^{(k)},x_{n})\mid\mathsf{G}_{k,n}^{(k)})\stackrel{{ p}}{{\longrightarrow}}\mathfrak{L}(\hat{\mathsf{G}}).\] By Lemma 7 it follows that \[\mathfrak{L}((\mathsf{G}_{k,n},y_{n})\mid\mathsf{G}_{k,n})\stackrel{{ p}}{{\longrightarrow}}\mathfrak{L}(\hat{\mathsf{G}}),\] with \(y_{n}\) denoting a uniformly selected vertex of \(\mathsf{G}_{k,n}\). This completes the proof. ## 5 Proof of the scaling limit ### Scaling limit of a coupled random tree We are going to verify the following limit of the underlying tree \(\mathsf{T}_{n}\), which is a special case of a conditioned sesqui-type tree. **Proposition 11**.: _There is a constant_ \[\kappa_{\mathrm{tree}}=\frac{\sqrt{\mathbb{V}[\xi]\mathbb{E}[\zeta]}}{2}\] _that depends on \(k\) and \(t\) such that_ \[\left(\mathsf{T}_{n},\kappa_{\mathrm{tree}}n^{-1/2}d_{\mathsf{T}_{n}},\mu_{ \mathsf{T}_{n}}\right)\stackrel{{ d}}{{\longrightarrow}}( \mathscr{T}_{e},d_{\mathscr{T}_{e}},\mu_{\mathscr{T}_{e}})\] _in the Gromov-Hausdorff-Prokhorov sense as \(n\to\infty\). Here \(\mu_{\mathsf{T}_{n}}\) denotes the uniform measure on the white vertices of \(\mathsf{T}_{n}\). Furthermore, there are constants \(C,c>0\) such that for all \(n\) and all \(x\geq 0\) the height \(\mathrm{H}(\mathsf{T}_{n})\) satisfies_ \[\mathbb{P}(\mathrm{H}(\mathsf{T}_{n})\geq x)\leq C\exp(-cx^{2}/n).\] Proof.: Let us call the outdegree profile of a rooted tree the family \((N_{d})_{d\geq 0}\) with \(N_{d}\) denoting the number of vertices with outdegree \(d\). Conversely, we can fix a profile and uniformly sample a rooted tree with that profile. Given an integer \(1\leq\ell\leq n\), the black subtree of the conditioned tree \((\mathsf{T}_{n}\mid\#_{1}\mathsf{T}_{n}=\ell)\) is a mixture of uniform random \(\ell\)-vertex rooted trees with given outdegree profile.
This is because if we condition \((\mathsf{T}_{n}\mid\#_{1}\mathsf{T}_{n}=\ell)\) on having a 2-dimensional outdegree profile \((N_{(a,b)})_{a,b\geq 0}\), then the black subtree is uniformly distributed among all rooted trees with degree profile \((\sum_{b\geq 0}N_{(a,b)})_{a\geq 0}\).1 Footnote 1: The authors are grateful to Louigi Addario-Berry for pointing out this fact. This allows us to apply [4, Thm. 3] to the black subtree of \((\mathsf{T}_{n}\mid\#_{1}\mathsf{T}_{n}=\ell)\). Any vertex of \(\mathsf{T}_{n}\) is at graph distance at most 1 from a black vertex, hence [4, Thm. 3] in fact yields that there are constants \(C,c>0\) that do not depend on \(\ell\), \(n\), or \(x\geq 0\) such that \[\mathbb{P}\left((\mathrm{H}(\mathsf{T}_{n})\mid\#_{1}\mathsf{T}_{n}=\ell)\geq x \right)\leq C\exp(-cx^{2}/\ell).\] This immediately yields \[\mathbb{P}(\mathrm{H}(\mathsf{T}_{n})\geq x) =\sum_{\ell=1}^{n}\mathbb{P}(\#_{1}\mathsf{T}_{n}=\ell)\mathbb{P} \left((\mathrm{H}(\mathsf{T}_{n})\mid\#_{1}\mathsf{T}_{n}=\ell)\geq x\right)\] \[\leq C\sum_{\ell=1}^{n}\mathbb{P}(\#_{1}\mathsf{T}_{n}=\ell)\exp( -cx^{2}/\ell)\] \[\leq C\exp(-cx^{2}/n).\] This concludes the proof of the tail bounds for the height. For the scaling limit, Proposition 5 ensures that we may apply [12, Thm. 1] to the black subtree \(\mathsf{B}_{n}\) of \(\mathsf{T}_{n}\) (using Skorokhod's representation theorem analogously as for monotype trees in [12, Sec. 6]), yielding \[\left(\mathsf{B}_{n},\frac{\sqrt{\mathbb{V}[\xi]}}{2}\sqrt{\frac{\mathbb{E}[ \zeta]}{n}}d_{\mathsf{B}_{n}},\mu_{\mathsf{B}_{n}}\right)\stackrel{{ d}}{{\longrightarrow}}\left(\mathscr{T}_{e},d_{ \mathscr{T}_{e}},\mu_{\mathscr{T}_{e}}\right).\] Here \(\mu_{\mathsf{B}_{n}}\) denotes the uniform probability measure on the set of vertices of \(\mathsf{B}_{n}\). To be precise, [12, Thm. 1] states Gromov-Hausdorff convergence; however, it is clear that the proof given there extends to the Gromov-Hausdorff-Prokhorov metric. Any vertex of \(\mathsf{T}_{n}\) is at graph distance at most \(1\) from \(\mathsf{B}_{n}\), hence it is clear that the Hausdorff distance between \(\mathsf{B}_{n}\) and \(\mathsf{T}_{n}\) tends to zero after rescaling distances by \(n^{-1/2}\). It remains to bound the Prokhorov distance between \(\mu_{\mathsf{B}_{n}}\) (interpreted as a measure on the entire tree \(\mathsf{T}_{n}\)) and the measure \(\mu_{\mathsf{T}_{n}}\), both after rescaling distances by \(\kappa_{\text{tree}}n^{-1/2}\). Proposition 6 ensures that there exists a set \(\mathcal{E}\) of trees so that \[|1-\mathbb{P}(\mathsf{T}_{n}\in\mathcal{E})|\leq\exp(-\Theta(n^{2/3})) \tag{15}\] and such that whenever \(\mathsf{T}_{n}\in\mathcal{E}\) we may couple a uniformly selected black vertex \(v_{n}\) of \(\mathsf{T}_{n}\) with the unique black parent \(v^{\prime}_{n}\) of a uniformly selected white vertex of \(\mathsf{T}_{n}\) such that the positions \(L_{n}\) and \(L^{\prime}_{n}\) of \(v_{n}\) and \(v^{\prime}_{n}\) in the depth-first-search order of \(\mathsf{B}_{n}\) satisfy \[|L_{n}-L^{\prime}_{n}|\leq n^{3/4}. \tag{16}\] By applying [12, Thm. 3] to \(\mathsf{B}_{n}\) in the same way as we applied [12, Thm.
1] before, we know that the height process and the contour process of \(\mathsf{B}_{n}\) admit the same Brownian excursion of duration \(1\) as distributional scaling limit after multiplying height by \(\kappa_{\text{tree}}/\sqrt{n}\), and time by \(\mathbb{E}[\zeta]/n\) and \(\mathbb{E}[\zeta]/(2n)\), respectively, and jointly the Lukasiewicz path converges in distribution towards the same Brownian excursion after rescaling height by \(\sqrt{\frac{\mathbb{E}[\zeta]}{\mathbb{V}[\xi]}}\frac{1}{\sqrt{n}}\) and time by \(\mathbb{E}[\zeta]/n\). Skorokhod's representation theorem allows us to assume that this convergence holds almost surely. The graph distance between arbitrary vertices \(v\) and \(v^{\prime}\) in the tree \(\mathsf{B}_{n}\) is given by \[d_{\mathsf{B}_{n}}(v,v^{\prime})=h_{\mathsf{B}_{n}}(v)+h_{\mathsf{B}_{n}}(v^{\prime})-2 h_{\mathsf{B}_{n}}(\text{lca}(v,v^{\prime})), \tag{17}\] with \(\text{lca}(v,v^{\prime})\) denoting the lowest common ancestor of \(v\) and \(v^{\prime}\), and \(h_{\mathsf{B}_{n}}(\cdot)\) referring to the height of a vertex. Since the Brownian excursion is continuous, Equation (17) and standard arguments imply that almost surely \[n^{-1/2}\sup_{v,v^{\prime}}d_{\mathsf{B}_{n}}(v,v^{\prime})\to 0 \tag{18}\] with the indices \(v\) and \(v^{\prime}\) ranging over all vertices in \(\mathsf{B}_{n}\) whose positions in the depth-first-search order differ by at most \(n^{3/4}\). Using Equation (15) and the Borel-Cantelli lemma, it follows from (18) that almost surely \(\mathsf{T}_{n}\) has the property \[n^{-1/2}\sup_{v_{n},v^{\prime}_{n}}d_{\mathsf{B}_{n}}(v_{n},v^{\prime}_{n}) \to 0. \tag{19}\] By [27, Cor. 7.5.2], this implies that the Prokhorov distance between \(\mu_{\mathsf{B}_{n}}\) and \(\mu_{\mathsf{T}_{n}}\) converges almost surely to zero after rescaling distances by \(\kappa_{\text{tree}}n^{-1/2}\). This completes the proof. ### The size-biased tree We construct the tree \(\mathsf{T}^{(\ell)}\) with spine length \(\ell\) in the following way. Let \(u_{0},u_{1},\ldots,u_{\ell}\) denote a sequence of black vertices, which form the spine of \(\mathsf{T}^{(\ell)}\). The tip \(u_{\ell}\) of the spine receives offspring according to \((\xi^{\circ},\zeta^{\circ})\) and a uniformly selected white child is marked. For each \(0\leq i<\ell\) the vertex \(u_{i}\) receives offspring according to an independent copy of \((\xi^{\bullet},\zeta^{\bullet})\) and we identify \(u_{i+1}\) with a uniformly selected black child. The construction of \(\mathsf{T}^{(\ell)}\) is finalized by identifying each black non-spine vertex with the root of an independent copy of the unconditioned tree \(\mathsf{T}\). Again, a decorated tree \((\mathsf{T}^{(\ell)},\alpha^{(\ell)})\) can be constructed by assigning random decorations \(\alpha^{(\ell)}(v)\) to each \(v\in\mathsf{T}^{(\ell)}\) in the canonical way. **Lemma 12**.: _For any finite marked tree \((\tau,\alpha_{\tau})\in\mathfrak{A}_{f}\) with the height of the marked vertex equal to \(\ell\) it holds that_ \[\mathbb{P}((\mathsf{T}^{(\ell)},\alpha^{(\ell)})=(\tau,\alpha_{\tau}))= \mathbb{P}((\mathsf{T},\alpha)=(\tilde{\tau},\alpha_{\tau}))/\mathbb{E}[\zeta],\] _with \(\tilde{\tau}\) denoting the unmarked tree obtained from \(\tau\) by forgetting which vertex is marked._ Proof.: For any \(\tau\in\mathfrak{X}_{f}\) with a marked white vertex at height \(\ell+1\), denote by \(v_{\ell}\) the tip of the spine (the parent of the marked white vertex) and by \(v_{0},\ldots,v_{\ell-1}\) the remaining vertices of the spine (the ancestors of \(v_{\ell}\)).
Denote also the black and white outdegrees of its vertices by \(d_{\tau}^{\bullet}(v)\) and \(d_{\tau}^{\circ}(v)\), respectively. Consider first the undecorated setting. We have that \[\mathbb{P}(\mathsf{T}^{(\ell)}=\tau) =\left(\prod_{v\in\tau\setminus\{v_{0},\ldots,v_{\ell}\}} \mathbb{P}((\xi,\zeta)=(d_{\tau}^{\bullet}(v),d_{\tau}^{\circ}(v)))\right)\] \[\qquad\cdot\left(\prod_{v\in\{v_{0},\ldots,v_{\ell-1}\}}\mathbb{P }((\xi^{\bullet},\zeta^{\bullet})=(d_{\tau}^{\bullet}(v),d_{\tau}^{\circ}(v))) \frac{1}{d_{\tau}^{\bullet}(v)}\right)\] \[\qquad\cdot\mathbb{P}((\xi^{\circ},\zeta^{\circ})=(d_{\tau}^{ \bullet}(v_{\ell}),d_{\tau}^{\circ}(v_{\ell})))\frac{1}{d_{\tau}^{\circ}(v_{ \ell})}\] \[=\left(\prod_{v\in\tau\setminus\{v_{0},\ldots,v_{\ell}\}}\mathbb{ P}((\xi,\zeta)=(d_{\tau}^{\bullet}(v),d_{\tau}^{\circ}(v)))\right)\] \[\qquad\cdot\left(\prod_{v\in\{v_{0},\ldots,v_{\ell-1}\}}\mathbb{ P}((\xi,\zeta)=(d_{\tau}^{\bullet}(v),d_{\tau}^{\circ}(v)))\right)\] \[\qquad\cdot\mathbb{P}((\xi,\zeta)=(d_{\tau}^{\bullet}(v_{\ell}),d _{\tau}^{\circ}(v_{\ell})))\frac{1}{\mathbb{E}[\zeta]}\] \[=\left(\prod_{v\in\tau}\mathbb{P}((\xi,\zeta)=(d_{\tau}^{\bullet }(v),d_{\tau}^{\circ}(v)))\right)\frac{1}{\mathbb{E}[\zeta]}\] \[=\mathbb{P}(\mathsf{T}=\tilde{\tau})\frac{1}{\mathbb{E}[\zeta]}.\] Then, taking into account the decorations we conclude that \[\mathbb{P}((\mathsf{T}^{(\ell)},\alpha^{(\ell)})=(\tau,\alpha_{ \tau})) =\mathbb{P}((\mathsf{T}^{(\ell)},\alpha^{(\ell)})=(\tau,\alpha_{ \tau})\mid\mathsf{T}^{(\ell)}=\tau)\cdot\mathbb{P}(\mathsf{T}^{(\ell)}=\tau)\] \[=\mathbb{P}((\mathsf{T},\alpha)=(\tilde{\tau},\alpha_{\tau})\mid \mathsf{T}=\tilde{\tau})\cdot\mathbb{P}(\mathsf{T}=\tilde{\tau})/\mathbb{E}[\zeta]\] \[=\mathbb{P}((\mathsf{T},\alpha)=(\tilde{\tau},\alpha_{\tau}))/ \mathbb{E}[\zeta].\] **Remark 13**.: _The same construction can be extended to \(\ell=\infty\), obtaining the tree \(\mathsf{T}^{(\infty)}\). Notice that here, as opposed to \(\mathsf{T}^{\circ}\), the spine is growing downwards and thus there is no vertex with offspring distributed as \((\xi^{\circ},\zeta^{\circ})\) and no marked vertex. It is easy to adapt the proof of the monotype case [20] to see that \(\mathsf{T}_{n}\stackrel{{ d}}{{\longrightarrow}}\mathsf{T}^{( \infty)}\) in the local topology with respect to the root._ By Skorokhod's representation theorem, we may assume without loss of generality that \(\mathsf{T}_{n}\stackrel{{\text{a.s.}}}{{\longrightarrow}}\mathsf{ T}^{(\infty)}\). Assigning decorations on both sides in the canonical way immediately implies a corresponding convergence of \((\mathsf{T}_{n},\alpha_{n})\) towards \((\mathsf{T}^{(\infty)},\alpha^{(\infty)})\). Arguing analogously as for the random marked vertex case, it follows that \(\mathsf{G}_{k,n}^{(k)}\stackrel{{ d}}{{\longrightarrow}}\mathsf{G} _{k,(\infty)}^{(k)}\), where \(\mathsf{G}_{k,(\infty)}^{(k)}=\Phi(\mathsf{T}^{(\infty)},\alpha^{(\infty)})\), in the local topology with respect to, say, the first vertex in the root \(k\)-clique. ### Comparison of different metrics **Lemma 14**.: _The maximum white outdegree in \(\mathsf{T}_{n}\) is \(\mathcal{O}_{p}(\log n)\)._ Proof.: Let \(C>0\) be a large enough constant. 
Then, using Proposition 4,

\[\mathbb{P}(\exists v\in\mathsf{T}_{n}\text{ with }d^{\circ}(v)>C\log n)\] \[\quad=\mathbb{P}(\exists v\in\mathsf{T}\text{ with }d^{\circ}(v)>C\log n\mid\#_{2}\mathsf{T}=n)\] \[\quad=\frac{\mathbb{P}(\exists v\in\mathsf{T}\text{ with }d^{\circ}(v)>C\log n,\#_{2}\mathsf{T}=n)}{\mathbb{P}(\#_{2}\mathsf{T}=n)}\] \[\quad=\sum_{1\leq m\leq n}c_{k}^{-1}n^{3/2}\mathbb{P}(\exists v\in\mathsf{T}\text{ with }d^{\circ}(v)>C\log n,\#_{1}\mathsf{T}=m,\#_{2}\mathsf{T}=n),\]

where \(c_{k}=\frac{\boldsymbol{\alpha}_{k}k!\rho_{k}^{-k}}{G_{k}^{(k)}(\rho_{k})}\). For each of these events, there is at least one out of \(m\) independent copies of \((\xi,\zeta)\) with \(\zeta>C\log n\). But by Proposition 4, \((\xi,\zeta)\) has finite exponential moments, so

\[\mathbb{P}(\zeta>C\log n)<ae^{-b\cdot C\log n}=an^{-bC},\]

for some constants \(a,b>0\). Therefore,

\[\mathbb{P}(\exists v\in\mathsf{T}_{n}\text{ with }d^{\circ}(v)>C\log n) \leq\sum_{1\leq m\leq n}c_{k}^{-1}n^{3/2}(1-(1-an^{-bC})^{m})\] \[\leq nc_{k}^{-1}n^{3/2}(1-(1-an^{-bC})^{n})\] \[\leq c_{k}^{-1}n^{5/2}(1-(1-n\cdot an^{-bC}))\] \[=c_{k}^{-1}an^{-bC+7/2},\]

and the result follows for \(C\) chosen large enough.

In the rest of this section we assume any tree \((\mathsf{T},\alpha)\) and its corresponding graph \(\Phi((\mathsf{T},\alpha))\) to be coupled. A black vertex and its corresponding clique receive the same name, and the same goes for white vertices and their corresponding vertices. For every white vertex \(v\) of \((\mathsf{T},\alpha)\), define

\[h_{\mathsf{T}}(v):=\text{the height of }v\text{ in }\mathsf{T},\] \[h_{\Phi((\mathsf{T},\alpha))}(v):=\text{the graph distance in }\Phi((\mathsf{T},\alpha))\text{ from the first vertex in the root clique to }v.\]

For every black vertex \(v\) of \((\mathsf{T},\alpha)\), define

\[h_{\mathsf{T}}(v):=\text{the height of }v\text{ in }\mathsf{T},\] \[h_{\Phi((\mathsf{T},\alpha))}(v):=\text{the shortest graph distance in }\Phi((\mathsf{T},\alpha))\text{ from the first vertex in the root clique to any vertex in the clique }v.\]

Our goal is to show that if a vertex \(v\) of \((\mathsf{T}_{n},\alpha_{n})\) has large enough height \(h_{\mathsf{T}_{n}}(v)\), then \(h_{\mathsf{G}_{k,n}^{(k)}}(v)\) concentrates around \(\gamma_{k}h_{\mathsf{T}_{n}}(v)\), as \(n\to\infty\), for some constant \(\gamma_{k}>0\). In order to do so, we will use the size-biased tree \(\mathsf{T}^{(\ell)}\) constructed in the previous section. Recall that \(\mathsf{T}^{(\ell)}\) has a spine \(u_{0},u_{1},\ldots,u_{\ell}\) and that all vertices in the spine except for its tip \(u_{\ell}\) receive offspring according to an independent copy of \((\xi^{\bullet},\zeta^{\bullet})\). The tree is then decorated to obtain \((\mathsf{T}^{(\ell)},\alpha^{(\ell)})\) by choosing uniformly for every vertex a decoration that is compatible with its offspring.

Consider, for \(0\leq i\leq\ell\), the random variables

\[S_{i}:=h_{\mathsf{G}_{k,n}^{(k)}}(u_{i}).\]

Consider also the random variables \(X_{i}\in\{1,\ldots,k\}\) defined as the number of vertices in the clique \(u_{i}\) at distance \(S_{i}\) from the first vertex in the root clique of \(\mathsf{G}_{k,n}^{(k)}\). Since any two vertices in a clique are adjacent, the graph distance in \(\mathsf{G}_{k,n}^{(k)}\) from the first vertex in the root clique to any of the remaining \(k-X_{i}\) vertices in the clique \(u_{i}\) is \(S_{i}+1\). By definition, \(X_{0}=1\) and \(S_{0}=0\).
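As the next paragraph makes precise, the pair \((X_{i},S_{i})\) behaves like a Markov additive process, so \(S_{\ell}\) grows linearly in \(\ell\) with a drift determined by the stationary law of the modulating chain. The following Python snippet is a toy numerical illustration of this ergodic behavior only; the transition kernel and the increment distribution are invented for the demonstration and are not the actual clique-replacement dynamics of \(\mathsf{G}_{k,n}^{(k)}\).

```python
import numpy as np

# Toy illustration (not the clique dynamics of G_{k,n}): a Markov additive
# process whose modulating chain X lives on {0, ..., k-1} and whose additive
# part S gains a state-dependent 0/1 increment in every step.
rng = np.random.default_rng(0)
k = 4
P = rng.dirichlet(np.ones(k), size=k)  # arbitrary irreducible transition kernel

def step(x):
    """One step of the chain: move the state, return (new_state, increment)."""
    x_new = rng.choice(k, p=P[x])
    ds = rng.binomial(1, (x_new + 1) / (k + 1))  # invented increment law
    return x_new, ds

def simulate(ell):
    x, s = 0, 0
    for _ in range(ell):
        x, ds = step(x)
        s += ds
    return s

# Ergodicity forces S_ell / ell towards the stationary drift gamma,
# mirroring the constant gamma_k appearing in Lemma 15 below.
pi_stat = np.linalg.matrix_power(P, 200)[0]  # row of P^200 approximates the stationary law
gamma = sum(pi_stat[x] * (x + 1) / (k + 1) for x in range(k))
for ell in (100, 1000, 10000):
    print(ell, simulate(ell) / ell, "vs gamma =", round(gamma, 4))
```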
Now, observe that

* The random pair \((X_{i},S_{i})\) depends only on the previous pair \((X_{i-1},S_{i-1})\).
* The random pair \((X_{i},S_{i}-S_{i-1})\) depends only on \(X_{i-1}\).

This means that \((X_{i},S_{i})\) is a discrete Markov additive process. Let us also verify that it is irreducible and aperiodic. Suppose that one of the \((k+1)\)-connected components in the offspring of \(u_{i}\) is a \((k+1)\)-clique and that \(u_{i+1}\) is one of its \(k\)-cliques (different from \(u_{i}\)). Then, the clique \(u_{i+1}\) is obtained from \(u_{i}\) by replacing one of its vertices by a new vertex at distance \(S_{i}+1\) from the first vertex in the root clique. We can thus say the following about \(X_{i+1}\) depending on \(X_{i}\):

* If \(X_{i}=k\), then \(X_{i+1}=k-1\).
* If \(1<X_{i}<k\), then \(X_{i+1}\) is either equal to \(X_{i}\) or \(X_{i}-1\), depending on whether the replaced vertex was at distance \(S_{i}+1\) or \(S_{i}\) from the first vertex in the root clique.
* If \(X_{i}=1\), then \(X_{i+1}\) is either equal to \(1\) or \(k\), depending on the same distinction.

Since all states are reachable from any state (possibly in many steps) and some states are reachable from themselves in a single step, the process is irreducible and aperiodic. This, together with the fact that the offspring distribution has finite exponential moments, allows us to use the large deviation results in [19] for the additive component \(S_{i}\).

**Lemma 15**.: _There exists a constant \(\gamma_{k}>0\) such that, for every \(\varepsilon>0\), if \(E\) denotes the event that there exists a vertex \(v\in(\mathsf{T}_{n},\alpha_{n})\) with \(h_{\mathsf{T}_{n}}(v)\geq\log^{3}n\) and \(h_{\mathsf{G}_{k,n}^{(k)}}(v)\notin\gamma_{k}(1\pm\varepsilon)h_{\mathsf{T}_{n}}(v)\), then \(\mathbb{P}(E)\leq n^{-\Theta(\log^{2}n)}\)._

Proof.: Let \(\mathcal{E}_{\ell,n}\) be the set of decorated trees \((\tau,\alpha_{\tau})\in\mathfrak{A}\) with \(n\) white vertices, a white marked vertex \(v_{\tau}\) of height \(\ell\), and such that the distance in \(\Phi(\tau,\alpha_{\tau})\) between the first vertex in the root \(k\)-clique and \(v_{\tau}\) is not in the range \(\gamma_{k}(1\pm\varepsilon)\ell\). Then, using Proposition 4 and Lemma 12,

\[\mathbb{P}(E) \leq\sum_{\ell=\lfloor\log^{3}n\rfloor}^{n}\sum_{(\tau,\alpha_{\tau})\in\mathcal{E}_{\ell,n}}\mathbb{P}((\mathsf{T}_{n},\alpha_{n})=(\tilde{\tau},\alpha_{\tau}))\] \[=\sum_{\ell=\lfloor\log^{3}n\rfloor}^{n}\sum_{(\tau,\alpha_{\tau})\in\mathcal{E}_{\ell,n}}\mathbb{P}((\mathsf{T},\alpha)=(\tilde{\tau},\alpha_{\tau})\mid\#_{2}\mathsf{T}=n)\] \[=\sum_{\ell=\lfloor\log^{3}n\rfloor}^{n}\sum_{(\tau,\alpha_{\tau})\in\mathcal{E}_{\ell,n}}\frac{\mathbb{P}((\mathsf{T},\alpha)=(\tilde{\tau},\alpha_{\tau}))}{\mathbb{P}(\#_{2}\mathsf{T}=n)}\] \[=c_{k}^{-1}n^{3/2}\mathbb{E}[\zeta]\sum_{\ell=\lfloor\log^{3}n\rfloor}^{n}\sum_{(\tau,\alpha_{\tau})\in\mathcal{E}_{\ell,n}}\mathbb{P}((\mathsf{T}^{(\ell)},\alpha^{(\ell)})=(\tau,\alpha_{\tau}))\] \[=c_{k}^{-1}n^{3/2}\mathbb{E}[\zeta]\sum_{\ell=\lfloor\log^{3}n\rfloor}^{n}\mathbb{P}((\mathsf{T}^{(\ell)},\alpha^{(\ell)})\in\mathcal{E}_{\ell,n}), \tag{20}\]

where \(c_{k}=\frac{\boldsymbol{\alpha}_{k}k!\rho_{k}^{-k}}{G_{k}^{(k)}(\rho_{k})}\).
But the distance in \(\Phi((\mathsf{T}^{(\ell)},\alpha^{(\ell)}))\) between the first vertex in the root clique and the marked vertex, call it \(d\), satisfies

\[d=S_{\ell}+\mathcal{O}_{p}(\log n).\]

Indeed, the distance to the clique corresponding to the tip of the spine is given by the discrete Markov additive process \((X_{i},S_{i})\) described above, and the remaining distance is \(\mathcal{O}_{p}(\log n)\) because we know from Lemma 14 that the size of a \((k+1)\)-connected component is \(\mathcal{O}_{p}(\log n)\). It thus follows from [19, Thm. 5.1] (see also [19, Rem. 3.5 and Sec. 7, (ii)]) that, for large enough \(\ell\),

\[\mathbb{P}((\mathsf{T}^{(\ell)},\alpha^{(\ell)})\in\mathcal{E}_{\ell,n})<ae^{-b\ell},\]

for some \(a,b>0\). Therefore, for large enough \(n\),

\[\mathbb{P}(E) <c_{k}^{-1}n^{3/2}\mathbb{E}[\zeta]\sum_{\ell=\lfloor\log^{3}n\rfloor}^{n}ae^{-b\ell}\] \[=n^{-\Theta(\log^{2}n)}.\]

**Lemma 16**.: _For every \(\varepsilon>0\) and every vertex \(v\in(\mathsf{T}_{n},\alpha_{n})\), it holds that_

\[h_{\mathsf{G}_{k,n}^{(k)}}(v)\in\gamma_{k}(1\pm\varepsilon)h_{\mathsf{T}_{n}}(v)+\mathcal{O}_{p}(\log^{4}n).\]

Proof.: Lemma 15 covers the case where \(h_{\mathsf{T}_{n}}(v)\geq\log^{3}n\). Suppose now that \(h_{\mathsf{T}_{n}}(v)<\log^{3}n\). Then, the distance \(h_{\mathsf{G}_{k,n}^{(k)}}(v)\) is bounded by \(h_{\mathsf{T}_{n}}(v)\) times the maximal size of a \((k+1)\)-connected component, which is \(\mathcal{O}_{p}(\log n)\) by Lemma 14, and so the result follows.

We are now ready to prove Theorem 1.

Proof of Theorem 1.: For any two white vertices \(u,v\in(\mathsf{T}_{n},\alpha_{n})\) it is clear that

\[d_{\mathsf{T}_{n}}(u,v)=h_{\mathsf{T}_{n}}(u)+h_{\mathsf{T}_{n}}(v)-2h_{\mathsf{T}_{n}}(w),\]

where \(w\) is the lowest common ancestor of \(u\) and \(v\) in \(\mathsf{T}_{n}\). It also holds that

\[d_{\mathsf{G}_{k,n}^{(k)}}(u,v)=h_{\mathsf{G}_{k,n}^{(k)}}(u)+h_{\mathsf{G}_{k,n}^{(k)}}(v)-2h_{\mathsf{G}_{k,n}^{(k)}}(w)+\mathcal{O}_{p}(\log n),\]

since the shortest path in \(\mathsf{G}_{k,n}^{(k)}\) between \(u\) and \(v\) may not go through the \(k\)-clique \(w\) but it certainly goes through some of the white children of \(w\). Therefore, using Lemma 16 we obtain that, for every \(\varepsilon>0\),

\[d_{\mathsf{G}_{k,n}^{(k)}}(u,v)\in\gamma_{k}(1\pm\varepsilon)d_{\mathsf{T}_{n}}(u,v)+\mathcal{O}_{p}(\log^{4}n).\]

Thus it follows that

\[\frac{|d_{\mathsf{G}_{k,n}^{(k)}}(u,v)-\gamma_{k}d_{\mathsf{T}_{n}}(u,v)|}{\sqrt{n}}\stackrel{{ p}}{{\longrightarrow}}0.\]

This together with Proposition 11 implies that

\[\left(\mathsf{G}_{k,n}^{(k)},\kappa_{k}n^{-1/2}d_{\mathsf{G}_{k,n}^{(k)}},\mu_{\mathsf{G}_{k,n}^{(k)}}\right)\stackrel{{ d}}{{\longrightarrow}}(\mathscr{T}_{e},d_{\mathscr{T}_{e}},\mu_{\mathscr{T}_{e}})\]

in the Gromov-Hausdorff-Prokhorov sense as \(n\to\infty\), where \(\kappa_{k}=\gamma_{k}^{-1}\). Finally, the result follows from this by Lemma 7.

We conclude by proving the tail bounds for the diameter.

Proof of Theorem 3.: Any graph from \(\mathcal{G}_{k,n}\) has at least \(k(n-k)+1\) \(k\)-cliques. Indeed, take a perfect elimination ordering of the vertices in the graph and remove them one by one. Since a vertex in a minimal separator will never be removed, the graph remains \(k\)-connected until there is only a \(k\)-clique left. Therefore, every time a vertex is removed, the number of \(k\)-cliques decreases by at least \(k\), and the claim follows.
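As an aside, this count is attained with equality by \(k\)-trees, which admit a perfect elimination ordering and are \(k\)-connected (though they need not coincide with the class \(\mathcal{G}_{k,n}\)). The following Python sketch, assuming only the networkx library, generates random \(k\)-trees and enumerates their \(k\)-cliques as a quick sanity check of the bound \(k(n-k)+1\):

```python
import itertools
import random
import networkx as nx

def random_k_tree(k, n, seed=0):
    """Random k-tree: start from a (k+1)-clique, attach each new vertex to a k-clique."""
    rng = random.Random(seed)
    G = nx.complete_graph(k + 1)
    k_cliques = [frozenset(c) for c in itertools.combinations(range(k + 1), k)]
    for v in range(k + 1, n):
        base = rng.choice(k_cliques)      # existing k-clique to attach to
        for u in base:
            G.add_edge(v, u)
        for sub in itertools.combinations(base, k - 1):
            k_cliques.append(frozenset(sub) | {v})   # k new k-cliques through v
    return G

for k, n in [(2, 10), (3, 12), (4, 15)]:
    G = random_k_tree(k, n)
    count = sum(1 for c in nx.enumerate_all_cliques(G) if len(c) == k)
    print(f"k={k}, n={n}: {count} k-cliques, lower bound k(n-k)+1 = {k * (n - k) + 1}")
```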
Combining the \(k\)-clique count just established with the asymptotics in (7), it follows that it suffices to verify the stated tail bounds for the diameter of \(\mathsf{G}_{k,n}^{(k)}\) instead of \(\mathsf{G}_{k,n}\). Furthermore, since the diameter is at most twice the height plus 1 (with height referring to the maximal distance of a vertex from the root \(k\)-clique), it hence suffices to show such a bound for the height of \(\mathsf{G}_{k,n}^{(k)}\).

Any vertex \(v\) in \(\mathsf{G}_{k,n}^{(k)}\) that does not belong to the root clique corresponds to a white vertex (also denoted by \(v\)) in the tree \(\mathsf{T}_{n}\). As argued in the proof of Theorem 1, the geodesics in \(\mathsf{G}_{k,n}^{(k)}\) from the root clique to \(v\) need to pass through the cliques corresponding to the sequence \(v_{0},\ldots,v_{\ell}\) of black ancestors of \(v\) in \(\mathsf{T}_{n}\). Therefore, apart from the first vertex, which belongs to the root clique, all vertices in such a geodesic correspond to white children of \(v_{0},\ldots,v_{\ell}\) in \(\mathsf{T}_{n}\). Thus, the sum \(S(v)\) of the numbers of white children of \(v_{0},\ldots,v_{\ell}\) is an upper bound for the height of \(v\) in \(\mathsf{G}_{k,n}^{(k)}\). Therefore, in order to prove Theorem 3, it suffices to show that there exist constants \(C,c>0\) such that

\[\mathbb{P}(\max_{v}S(v)\geq x)\leq C\exp(-cx^{2}/n) \tag{21}\]

for all \(n\) and \(x>0\), with the index \(v\) ranging over the white vertices in the tree \(\mathsf{T}_{n}\). Since we may always replace \(C\) by some larger constant and \(c\) by some smaller constant, it furthermore suffices to verify such a bound for all \(x>\sqrt{n}\) instead of \(x>0\). Moreover, since the height can be at most \(n\), it also suffices to consider \(x\leq n\). With foresight, set

\[\alpha=\frac{1}{2\mathbb{E}[\zeta^{\bullet}]}.\]

Proposition 11 ensures that there exist constants \(C_{1},c_{1}>0\) with

\[\mathbb{P}(\mathrm{H}(\mathsf{T}_{n})\geq\alpha x)\leq C_{1}\exp(-c_{1}x^{2}/n). \tag{22}\]

By an identical calculation as in Equation (20), the probability that \(\mathrm{H}(\mathsf{T}_{n})<\alpha x\) and at the same time there exists some white vertex \(v\) with \(S(v)>x\) is bounded by

\[O(n^{3/2})\sum_{\ell=1}^{\lfloor\alpha x\rfloor}p_{\ell},\]

with \(p_{\ell}\) denoting the probability that the sum of white children of the black ancestors of the marked vertex of \(\mathsf{T}^{(\ell)}\) is larger than \(x\). These numbers of children are distributed like independent copies \(\zeta_{1}^{\bullet},\ldots,\zeta_{\ell}^{\bullet}\) of \(\zeta^{\bullet}\). Hence

\[p_{\ell}=\mathbb{P}\left(\sum_{i=1}^{\ell}\zeta_{i}^{\bullet}>x\right).\]

Using \(\ell\leq\alpha x\) and \(x\geq\sqrt{n}\), it follows by Lemma 17 that

\[p_{\ell}\leq\exp(-\Theta(x)).\]

This means our upper bound simplifies to

\[O(n^{3/2})\sum_{\ell=1}^{\lfloor\alpha x\rfloor}p_{\ell} =O(n^{3/2})x\exp(-\Theta(x))\] \[=O(x^{4})\exp(-\Theta(x))\] \[=\exp(-\Theta(x)).\]

Together with Inequality (22) this proves Inequality (21) and hence completes the proof.

## Appendix A Deviation inequalities

In our proof we make use of the following medium deviation inequality, found in most textbooks on the subject.

**Lemma 17**.: _Let \((X_{i})_{i\in\mathbb{N}}\) be an i.i.d. family of real-valued random variables with \(\mathbb{E}[X_{1}]=0\) and \(\mathbb{E}[e^{tX_{1}}]<\infty\) for all \(t\) in some open interval around zero._
Then there are constants \(\delta,c>0\) such that for all \(n\in\mathbb{N}\), \(x\geq 0\) and \(0\leq\lambda\leq\delta\) it holds that_ \[\mathbb{P}(|X_{1}+\ldots+X_{n}|\geq x)\leq 2\exp(cn\lambda^{2}-\lambda x).\] ## Acknowledgement The authors are grateful to Louigi Addario-Berry for helpful discussions on random trees with prescribed degree sequences.
2305.00545
Optimal multi-action treatment allocation: A two-phase field experiment to boost immigrant naturalization
Research underscores the role of naturalization in enhancing immigrants' socio-economic integration, yet application rates remain low. We estimate a policy rule for a letter-based information campaign encouraging newly eligible immigrants in Zurich, Switzerland, to naturalize. The policy rule assigns one out of three treatment letters to each individual, based on their observed characteristics. We field the policy rule to one-half of 1,717 immigrants, while sending random treatment letters to the other half. Despite only moderate treatment effect heterogeneity, the policy tree yields a larger, albeit insignificant, increase in application rates compared to assigning the same letter to everyone.
Achim Ahrens, Alessandra Stampi-Bombelli, Selina Kurer, Dominik Hangartner
2023-04-30T18:11:34Z
http://arxiv.org/abs/2305.00545v3
# Optimal multi-action treatment allocation: A two-phase field experiment to boost immigrant naturalization

###### Abstract

The challenge of assigning optimal treatment policies is ubiquitous in public economics. While a vast empirical literature is concerned with the estimation of causal effects under treatment effect heterogeneity, the potential of individualized treatment assignment remains under-explored, despite recent advances in the field of policy learning. We evaluate the benefits of multi-action policy learning using policy trees in the context of immigrant naturalization. We use a tailored two-phase randomized field experiment to estimate and field a policy rule for an information campaign encouraging eligible immigrants in the City of Zurich to apply for Swiss citizenship. The policy rule takes the form of a decision tree assigning treatments based on individual-level characteristics drawn from linked administrative data. In the exploration phase, we randomly allocate 60% of our sample of 5,145 citizenship-eligible immigrants to receive one of three different letters addressing specific naturalization hurdles pertaining to the lack of information and encouragement. The exploitation phase estimates and fields the policy rule on one-half of the remaining sample while sending random treatment letters to the other half. While we find only moderate levels of heterogeneity, the policy tree yields a larger, albeit not significant, increase in take-up compared to the best-performing one-size-fits-all treatment.

**Keywords:** Policy learning, targeted treatment, statistical decision rules, randomized field experiment, immigrant naturalization

**JEL Codes:** J15, J61, C44, C93, Q48

## 1 Introduction

Policymakers frequently need to select among alternative treatment options. While one of the stated aims of empirical research is to provide new insights to inform decision-making processes, the primary focus is usually on estimating averages of treatment effects rather than providing direct guidance on how to design treatment assignment mechanisms. In practice, the empirical researcher specifies a statistical model and estimates the efficacy of each treatment using an experimental or observational sample, while the decision maker assigns the treatment, interpreting the point estimates _as if_ they were true. This approach, termed _as-if_ maximization by Manski (2021), tends to yield one-size-fits-all rules assigning the treatment that appears the most effective in a sample to the wider population. Such one-size-fits-all policies seem inefficient given that treatments frequently exhibit relevant effect heterogeneity across observations and the increasing availability of administrative data providing rich individual characteristics.

Policy learning provides a framework for directly estimating statistical decision rules, so-called policy rules, which prescribe treatments to individuals based on their observed characteristics (also known as profiling or targeting). While its origins date back to statistical decision theory (Wald, 1950; Savage, 1951), the seminal work of Manski (2004) sparked a flourishing literature in econometrics which has developed methods for estimating statistical treatment rules, initially focusing on data drawn from randomized controlled trials (Manski, 2004; Stoye, 2009; Stoye, 2012; Hirano and Porter, 2009), but subsequently also covering observational data under unconfoundedness assumptions (Manski, 2007; Athey and Wager, 2021; Zhou, Athey, and Wager, 2022; see Hirano and Porter 2020 for a review).
While applied research using policy learning is still relatively scarce, previous work has revealed the potential for optimal allocation across a variety of domains, including active labor market programs (e.g. Lechner and Smith, 2007; Frolich, 2008), vaccines accounting for spill-over effects (Kitagawa and Wang, 2023), deforestation-reducing policies (Assuncao et al., 2022), anti-malaria subsidies under budget constraints (Bhattacharya and Dupas, 2012), energy use information campaigns (Gerarden and Yang, 2022) and maximizing fundraising (Cagala et al., 2021).

In this pre-registered study, we design, implement, and evaluate an individualized treatment allocation program with the goal of facilitating the naturalization of eligible immigrants in the City of Zurich, Switzerland. An expanding literature shows that the acquisition of host-country citizenship can benefit the economic, political and social integration of immigrants (Keller, Gathmann, and Monscheuer, 2015; Hainmueller, Hangartner, and Pietrantuono, 2015; Hainmueller, Hangartner, and Pietrantuono, 2017; Gathmann and Keller, 2018; Hainmueller, Hangartner, and D. Ward, 2019; Felfe et al., 2021) and strengthen economic growth and social cohesion in host communities (National Academies of Sciences, 2016). Despite these benefits, naturalization rates remain low in many countries such that policymakers, often at the local level, are increasingly considering information campaigns to boost citizenship applications (D. G. Ward, Pianzola, and Hangartner, 2019). Typically, these campaigns combine information about the citizenship requirements with an encouragement to apply. Following this practice, this study evaluates the impact of three different information interventions that seek to address specific hurdles surrounding (i) the perceived complexity of the application process, (ii) knowledge about the requirements for naturalization including a German language and civics test, and (iii) the feeling of not being welcome to naturalize through an encouragement from the City Mayor of Zurich. While this study focuses on naturalization applications as the main behavioral outcome, a companion paper employs a range of survey measures to assess the efficacy of the interventions in overcoming specific hurdles (see Hangartner et al., 2023).

We derive a multi-action policy rule in the form of a decision tree, referred to as a policy tree. Policy trees are introduced by Athey and Wager (2021) for binary and by Zhou, Athey, and Wager (2022) for multi-valued treatments. In our context, the policy tree selects one treatment from a set of three treatment options for each eligible immigrant based on their individual characteristics including residency, nationality and age. The treatment options are incorporated into three different letters with enclosed flyers sent out by the City of Zurich. The policy rule was chosen to maximize the application rate for naturalization, the first step in the process of acquiring Swiss citizenship. We evaluate the performance of the targeted policy rule against random treatment allocation, one-size-fits-all treatment rules for each of the three treatments, and a model-free _plug-in_ rule assigning the treatment with the largest estimated treatment effect. Policy trees possess several strengths that make them a particularly promising method for immigrant naturalization and other sensitive policy contexts.
First, policy trees allow policymakers and researchers to select those variables that can be used to tailor treatment assignment and, more importantly, exclude those that should not be used (e.g., protected characteristics such as religion)--and quantify the costs of exclusion in terms of foregone treatment efficacy. Second, policy trees make transparent which variables, and which variable values, guide treatment assignment. Related to the second strength is the third: policy trees are easy to visualize and easy to explain to users of the research--e.g., policymakers, case officers, and subjects receiving treatment assignment--even if they lack training in statistics. Together, transparency and interpretability are important steps towards satisfying requirements for explainable Artificial Intelligence (AI), e.g., as outlined in recent proposals for the regulation of AI by the European Commission (2021) and The White House (2022). Finally, compared to dynamic or adaptive policy learning (e.g., Caria et al., 2020), offline policy learning, which trains policies on a single data batch, is often easier to implement in a public policy context.

We illustrate the practical feasibility of the targeted assignment rule and evaluate its benefits using a tailored, two-phase randomized controlled trial. We test policy trees in a realistic setting where the level of heterogeneity is only moderate. Evaluating the exploration phase of the experiment, we find that policy trees can capture the vast majority of treatment effect heterogeneity of the more flexible but less transparent and non-interpretable _plug-in_ rule. In the exploitation phase of the experiment, we find that policy trees perform favorably compared to random treatment assignment and each individual treatment option.

Our study contributes to three fields of empirical research. First, sparked by methodological advances, especially the advent of causal forests (due to Wager and Athey, 2018), there is a rich literature estimating heterogeneous treatment effects using machine learning (e.g., Davis and Heller, 2017; Knittel and Stolper, 2021; Knaus, Lechner, and Strittmatter, 2022).1 While studies in this literature emphasize the potential of estimating heterogeneous effects for improved targeting, they usually do not explicitly derive interpretable targeting rules. Second, we build on the expanding literature on statistical decision rules. The vast majority of applied studies, including those discussed above (i.e., Lechner and Smith, 2007; Frolich, 2008; Bhattacharya and Dupas, 2012; Assuncao et al., 2022; Kitagawa and Wang, 2023), only provide backtest results about the ex-post performance of policy targeting rules. Closest to our study are Gerarden and Yang (2022) and Cagala et al. (2021). Gerarden and Yang (2022) follow the methodology of Kitagawa and Tetenov (2018) to estimate policy rules for a behavioral intervention targeted at reducing household electricity usage, but do not implement the derived policy rules. Similar to us, Cagala et al. (2021) consider policy trees in an application to maximizing fundraising and gauge the performance of the estimated policy tree on out-of-sample data. We add to this literature by fielding the estimated optimal policy rule in the second phase of our experiment, which allows us to directly evaluate the performance against other policy rules. Furthermore, both Cagala et al.
(2021) and Gerarden and Yang (2022) focus on the choice between two treatment options, whereas we are concerned with the more challenging problem of multi-action policy learning. Third, we contribute to the larger literature on informational interventions aimed at increasing take-up of government services and subsidies among eligible people (e.g., Bhargava and Manoli, 2015; Finkelstein and Notowidigdo, 2019; Hotard et al., 2019; Goldin et al., 2022).

This article proceeds as follows. In Section 2, we provide a review of policy learning. While we rely on a randomized experimental design to learn the optimal policy rule in our application, we also discuss the setting where one has to rely on unconfoundedness assumptions, thereby illustrating the generality of the methodological framework. Section 3 turns to our application. We contextualize our application, describe the data, the treatments and the study design in Sections 3.1-3.4. We summarize the results of the exploration and exploitation phase in Sections 3.5 and 3.6. Section 4 concludes.

## 2 Multi-action policy learning

The aim of policy learning is to formulate a policy rule \(\pi(X)\) designed to maximize the expected value of \(Y\), the outcome of interest. A policy rule assigns a treatment \(a\) from the choice set of treatment options \(\mathcal{A}=\{1,2,\ldots,D\}\) to each individual based on their observed covariates \(X\). Note that \(\mathcal{A}\) may include the no-treatment option. Formally, \(\pi(X)\) is a function mapping individual characteristics to one of the treatment options in \(\mathcal{A}\). For example, a policy rule might assign treatment 1 to every person below age 30, treatment 2 to individuals aged 30-40, and treatment 3 to individuals older than 40.

### Evaluating policies

Before we turn to the estimation of optimal policies, it is instructive to consider a candidate policy rule \(\pi^{\prime}\) and assess its effectiveness. We assume that we have access to some data \(\{Y_{i},A_{i},X_{i}\}\) for \(i=1,\ldots,n\), which includes the treatment received, \(A_{i}\), the realized outcome, \(Y_{i}\), as well as individual \(i\)'s observed characteristics \(X_{i}\). As typical in the causal effects literature, we assume the existence of the potential outcomes \(\{Y_{i}(1),Y_{i}(2),\ldots,Y_{i}(D)\}\), which are the outcomes if individual \(i\) had received treatments 1, 2, ..., \(D\) (Rubin, 1974; Imbens and Rubin, 2015). This allows us to define the expected reward of \(\pi^{\prime}\), which is the expected value of the potential outcomes if the policy rule had been followed, i.e., \(Q(\pi^{\prime})=E[Y_{i}(\pi^{\prime}(X_{i}))]\).

The fundamental challenge for estimating the reward of a candidate policy \(\pi^{\prime}\) is that we only observe \(Y_{i}=Y_{i}(A_{i})\) and that, in a non-experimental setting, individuals might self-select into treatment options that optimize their expected pay-off. The offline policy learning literature commonly imposes the following set of assumptions (Kitagawa and Tetenov, 2018; Zhou, Athey, and Wager, 2022):

**Assumption 1**: (a) _Unconfoundedness:_ \(\{Y_{i}(1),Y_{i}(2),\ldots,Y_{i}(D)\}\perp A_{i}\mid X_{i}\)_._ (b) _Overlap: There exists some_ \(\eta>0\) _such that_ \(e_{a}(X_{i})\geq\eta\) _for any_ \(a\in\mathcal{A}\) _and_ \(X_{i}\)_._ (c) _Boundedness: The potential outcomes are contained on a finite interval in_ \(\mathbb{R}^{D}\)_._

Unconfoundedness in (a) states that we observe all necessary covariates that allow us to account for selection biases.
The condition is naturally satisfied by randomized treatment assignments. The overlap assumption in (b) requires that for any observed individual characteristics, the probability of taking each action is greater than zero. The boundedness assumption in (c) serves the purpose of simplifying mathematical proofs but can be replaced by weaker assumptions.

Under the stated assumptions, we can evaluate the reward of a candidate policy \(\pi^{\prime}\) by averaging over observations that happen to align with the candidate policy rule, i.e.,

\[\widehat{Q}_{IPW}(\pi^{\prime})=\frac{1}{n}\sum_{i=1}^{n}\frac{\mathbbm{1}\{A_{i}=\pi^{\prime}\left(X_{i}\right)\}Y_{i}}{e_{A_{i}}(X_{i})}, \tag{1}\]

where we weight by the propensity score \(e_{a}(X_{i})\equiv P(A_{i}=a|X_{i})\) to account for selection bias (Swaminathan and Joachims, 2015).

### Optimal policies

Suppose now that the policymaker suggests a number of policy rules, e.g., \(\Pi^{\prime}=\{\pi^{\prime},\pi^{\prime\prime},\pi^{\prime\prime\prime}\}\), where \(\Pi^{\prime}\) is the set of candidate policies. We can define the optimal policy among the candidate policies as \(\pi^{\star}=\arg\max_{\pi\in\Pi^{\prime}}Q(\pi)\), and accordingly estimate the optimal policy rule as \(\hat{\pi}=\arg\max_{\pi\in\Pi^{\prime}}\hat{Q}_{IPW}(\pi)\). The performance of a policy learner \(\hat{\pi}\), which estimates \(\pi^{\star}\) from the data, is measured by its regret, \(R(\hat{\pi})=Q(\pi^{\star})-Q(\hat{\pi})\).

If the propensity scores \(e_{a}(X)\) are known, we can consistently learn policies with a \(\sqrt{n}\)-convergence rate (Swaminathan and Joachims, 2015; Kitagawa and Tetenov, 2018). If the exact assignment mechanism is not known, which is often the case, we have to estimate the \(e_{a}(X)\) from the data. One approach is to estimate \(e_{a}(X)\) using the full sample and plug the estimates into (1). However, the convergence rate for learning the optimal policy is then generally sub-optimal, unless we confine ourselves to parametric specifications (Kitagawa and Tetenov, 2018). In many settings, parametric estimators are insufficiently flexible in estimating the propensity scores as the underlying data-generating process is typically unknown.

To allow for data-adaptive nonparametric estimators, including supervised machine learners, which are more robust towards unknown data structures, Zhou, Athey, and Wager (2022) combine two strategies for policy learning: the use of doubly robust scores and cross-fitting. Double robustness ensures consistency if either the propensity scores or the outcome model is correctly specified. Cross-fitting is a form of sample splitting that addresses the own-observation bias arising from using the same observation for both estimating the nuisance functions and learning the optimal policy. Cross-fitting allows leveraging a general class of machine learners, only relying on relatively mild convergence rate requirements.2

Footnote 2: The causal machine learning literature frequently relies on sample splitting approaches, such as cross-fitting; see for example Chernozhukov, Chetverikov, et al. (2018) for the estimation of average treatment effects and Wager and Athey (2018) for the estimation of CATE using causal forests.

To implement cross-fitting, we randomly split the sample into \(K\) folds of approximately equal size. We use \(\hat{e}_{a}^{-k(i)}\) to denote the _cross-fitted_ propensity score of observation \(i\) for treatment \(a\).
The cross-fitted predicted value is calculated by fitting an estimator to all folds but fold \(k(i)\), which is the fold that observation \(i\) falls into. Similarly, we introduce \(\hat{\mu}_{a}^{-k(i)}\), which is the cross-fitted predicted value of the outcome under treatment \(a\) and conditional on \(X\), i.e., it is a cross-fitted estimate of \(\mu_{a}\equiv E[Y_{i}(a)|X_{i}]\). Combining the AIPW score of Dudik, Langford, and Li (2011) with cross-fitted conditional expectation function estimates yields

\[\hat{Q}_{CAIPWL}(\pi)=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{Y_{i}-\hat{\mu}_{A_{i}}^{-k(i)}\left(X_{i}\right)}{\hat{e}_{A_{i}}^{-k(i)}\left(X_{i}\right)}\mathbbm{1}\{A_{i}=\pi(X_{i})\}+\hat{\mu}_{\pi(X_{i})}^{-k(i)}\left(X_{i}\right)\right)=\frac{1}{n}\sum_{i=1}^{n}\hat{\Gamma}_{i,\pi(X_{i})}, \tag{2}\]

where \(\mathbbm{1}\{\cdot\}\) is the indicator function and \(\hat{\Gamma}_{i,a}\) is an estimate of the doubly robust score under treatment \(a\) for individual \(i\).

### Policy class

So far, we have assumed a predefined set of candidate policies. In many applications, however, we wish to learn policies flexibly from the data instead of relying on a pre-defined set of policy rules. A fully flexible approach could assign each individual to the treatment for which the estimated treatment effect is the largest. This _plug-in policy rule_ requires no functional form restrictions but may be inappropriate when stakeholders wish to learn about the drivers of treatment efficacy and have hesitations to rely on a black-box treatment assignment mechanism.3

Footnote 3: For formal results on plug-in rules, see Hirano and Porter (2009) and Bhattacharya and Dupas (2012).

Policy learning allows estimating interpretable treatment rules from the data. To this end, we must choose a suitable policy class from which we estimate the optimal policy. In an application to active labor market programs with a binary treatment and two covariates, Kitagawa and Tetenov (2018) discuss three policy classes defined by the following functional form restrictions:

Quadrant policy rule: \[\pi(X_{i})=\mathbbm{1}\{s_{1}(X_{1i}-\beta_{1})\geq 0\}\mathbbm{1}\{s_{2}(X_{2i}-\beta_{2})\geq 0\}\]
Linear policy rule: \[\pi(X_{i})=\mathbbm{1}\{\beta_{0}+\beta_{1}X_{1i}+\beta_{2}X_{2i}\geq 0\}\]
Cubic policy rule: \[\pi(X_{i})=\mathbbm{1}\{\beta_{0}+\beta_{1}X_{1i}+\beta_{2}X_{2i}+\beta_{3}X_{2i}^{2}+\beta_{4}X_{2i}^{3}\geq 0\}\]

where \(s_{1},s_{2}\in\{-1,+1\}\) and \(\beta_{j}\in\mathbb{R}\). The first rule defines a quadrant in the two-dimensional space spanned by the covariates \(X_{1}\) and \(X_{2}\), and assigns the treatment to individuals for which \(X_{1i}\) and \(X_{2i}\) lie in that quadrant. The second rule defines a linear decision boundary, and the third rule allows for non-linear decision boundaries by including quadratic and cubic terms. Compared to the plug-in rule, these rules exhibit a higher degree of interpretability as they can be easily visualized in a two-dimensional plot.

In a multi-action setting, Zhou, Athey, and Wager (2022) focus on the class of policy rules that take the form of (shallow) decision trees. Trees are widely employed as predictive tools that construct predictions by splitting the feature space into non-overlapping regions. In the prediction context, classification and regression trees yield the same prediction for observations falling into the same region. In the policy context, observations falling into the same region are assigned the same treatment action.
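To fix ideas, the following minimal Python sketch writes out the value estimators (1) and (2) and a tree-shaped policy as plain functions. It assumes that cross-fitted nuisance estimates are already available as arrays; all variable names and the example thresholds are invented for illustration and do not correspond to the authors' implementation.

```python
import numpy as np

def q_ipw(y, a, pi_x, e_hat):
    """Inverse-propensity-weighted policy value, cf. equation (1).

    y: (n,) outcomes; a: (n,) observed integer treatments;
    pi_x: (n,) treatments prescribed by the candidate policy;
    e_hat: (n, D) propensity scores (assumed given / cross-fitted).
    """
    match = (a == pi_x).astype(float)
    return np.mean(match * y / e_hat[np.arange(len(y)), a])

def q_aipw(y, a, pi_x, e_hat, mu_hat):
    """Doubly robust (AIPW) policy value with cross-fitted nuisances, cf. (2).

    mu_hat: (n, D) cross-fitted outcome predictions per treatment.
    """
    idx = np.arange(len(y))
    residual = (y - mu_hat[idx, a]) / e_hat[idx, a]
    gamma = residual * (a == pi_x) + mu_hat[idx, pi_x]   # doubly robust scores
    return np.mean(gamma)

def pi_tree(x):
    """A depth-2 policy tree is just nested axis-aligned splits.

    x: array with columns [years_in_switzerland, age]; the thresholds and
    action labels here are purely illustrative.
    """
    return np.where(x[:, 0] <= 12,
                    np.where(x[:, 1] <= 37, 1, 2),   # actions index the letters
                    2)
```

Differencing such policy values across competing rules is the kind of reward comparison reported later in Table 1.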
Policy trees are conceptually similar to the quadrant rule but can be generalized to multiple treatments. Since trees can be easily visualized, they are transparent and simple to interpret, even without statistical education, which makes them attractive in a public policy context where users of the research vary in statistical literacy and often view black-box methods with skepticism. Zhou, Athey, and Wager (2022) describe how the optimization of policy trees can be regarded as a mixed integer program. In addition, Sverdrup et al. (2022) implement a less costly but approximate optimization algorithm, referred to as _hybrid_ tree search.

## 3 Personalizing naturalization campaigns

### Background: Immigrant Integration and Citizenship

The integration of immigrants into the host-country fabric and economy is a central policy issue in many, increasingly diverse societies. One promising policy to foster integration is naturalization, i.e., the process of awarding host-country citizenship to immigrants (Goodman, 2014; Dancygier, 2010). Observational studies relying on difference-in-difference models and regression discontinuity designs show that naturalization can positively impact the integration of immigrants by increasing their earnings and labor market attachment, fostering political efficacy and knowledge, spurring cultural assimilation, and reducing feelings of isolation and discrimination (OECD, 2011; Keller, Gathmann, and Monscheuer, 2015; Hainmueller, Hangartner, and Pietrantuono, 2017; Gathmann and Keller, 2018; Hainmueller, Hangartner, and D. Ward, 2019; Felfe et al., 2021; Vink et al., 2021). This process can also benefit the host society by increasing immigrants' contributions to economic growth, lowering their dependency on welfare, and, by extension, reducing societal tensions and strengthening social cohesion (for reviews, see National Academies of Sciences, 2016; Pastor and Scoggins, 2012).

Despite these potential benefits, naturalization rates remain low, with a median annual naturalization rate (number of naturalized immigrants divided by number of registered immigrants) of 1.9% in Europe and 3.1% in the U.S. in 2017 (Blizzard and Batalova, 2019). To reduce this mismatch between the manifold benefits of citizenship and the low naturalization rate, some countries, states, and municipalities across Europe and the U.S. have begun to turn to information campaigns to overcome hurdles to citizenship acquisition for eligible immigrants. While the content and scope of these naturalization campaigns vary, they often combine information provision about the naturalization process and requirements with an encouragement to apply for citizenship. Yet, despite the growing popularity of these campaigns across Europe and the U.S., there exists little experimental research to evaluate their effectiveness (for exceptions, see Hotard et al. (2019) and Hangartner et al. (2023)). Furthermore, despite the high diversity of the immigrant population in terms of, e.g., country of origin, language skills, and age, these naturalization campaigns have typically relied on a one-size-fits-all approach. Tailoring such campaigns to the specific needs of diverse immigrants promises both to deliver a deeper understanding of the different hurdles that immigrants face and to increase the effectiveness of the campaign.

### Data

We draw our data from administrative sources of the Canton of Zurich.
The data includes records of whether and when eligible immigrants submit an application for Swiss citizenship to the City of Zurich during the study period, which allows us to define the outcome variable of our analysis. The data also includes additional covariates which we use to identify and leverage treatment effect heterogeneity. These covariates are age, gender, nationality, years of residency in Switzerland, and years of residency in Zurich. The data also includes an address identifier which allows us to assign the treatment on a building level to minimize contamination by spill-over effects. The study sample includes all immigrants in the City of Zurich who satisfy the following criteria:

1. They were born on or before June 30, 2003 (i.e., they must have been at least 18 years of age at the start of the study),
2. they arrived in Switzerland on or before June 30, 2011,
3. they arrived in Zurich City on or before June 30, 2019,
4. they must have possessed a permanent residence permit (C permit) at the time of randomization (August 2021), and
5. they must not have received any information or encouragement letter in the past.

The first criterion ensures that only adults are in the study. Criteria 2-4 ensure that the entire sample meets the current residency and permit requirements for citizenship. The sample includes 5,145 individuals.

### Treatment letters

Previous evidence from surveys and qualitative studies suggests that uncertainty about the eligibility criteria such as residency and language requirements can prevent immigrants from applying (Baubock et al., 2006; Gonzalez-Barrera et al., 2013). Other studies highlight that--particularly in hostile immigration environments--a lack of encouragement by politicians, public administration, or the general public might deter immigrants (Baubock et al., 2006; Bloemraad, 2002; Bloemraad, Korteweg, and Yurdakul, 2008). Furthermore, in earlier research using a tailored survey, we find evidence for informational deficits and the feeling that an application is not welcome by the host society (Hangartner et al., 2023). Combining insights from the existing literature and our own surveys, we identify three key barriers to naturalization: (i) perceived complexity of the naturalization process, (ii) perceived difficulty of and uncertainty about naturalization requirements and (iii) perception that naturalization is not welcome.

In collaboration with the City of Zurich, we developed one treatment letter for each of the three hurdles. Each treatment involves the receipt of a letter sent by representatives of the City of Zurich. The treatments differ in the sender, content, wording and design of the letters. The letters, including enclosed flyers, were written in German. Appendix A.2 contains copies of the original letters in German as well as an English translation.

The _Complexity letter_ consists of a short informational cover letter written by the City Clerk of the City of Zurich (see Appendix A.2.1) and a flyer. The half-page cover letter informs recipients that they meet the basic requirements for Swiss citizenship and directs them to sources of further information about the citizenship application process. The flyer included in the _Complexity letter_ (shown in Figure A.2.2) attempts to tackle the perceived complexity of the naturalization process. The left-hand side of the flyer shows a video screenshot and a QR code that directs readers to the video, explaining the naturalization process in a simplified way.
The right-hand side encourages readers to scan another QR code redirecting to the contact and advice webpage4 of the City of Zurich's citizenship office.

Footnote 4: The first QR code redirects to [https://www.stadt-zuerich.ch/portal/de/index/politik_u_recht/einburgerungen.html](https://www.stadt-zuerich.ch/portal/de/index/politik_u_recht/einburgerungen.html) (last accessed on December 7, 2022). The second QR code redirects to [https://www.stadt-zuerich.ch/portal/de/index/politik_u_recht/einburgerungen/kontakt-und-beratung.html](https://www.stadt-zuerich.ch/portal/de/index/politik_u_recht/einburgerungen/kontakt-und-beratung.html) (last accessed on December 7, 2022).

The _Requirements letter_ includes the same short informational cover letter as the _Complexity letter_ but uses a different flyer addressing the perceived difficulty of the naturalization process (see Appendix A.2.3). This flyer is also divided into two sections, each containing a descriptive text and a QR code. The QR code on the left-hand side redirects to the targeted, free-of-charge mobile application, which allows immigrants to study for the civics exam and test their knowledge with practice questions.5 The section on the right lists the German language requirements for citizenship and the QR code redirects to a webpage containing more detailed information on the language requirements, exam costs, as well as a link to a practice language exam.6

Footnote 5: The mobile application is developed by the City of Zurich and named _Einbürgerungstest Code Schweiz_, which translates to Naturalization Test Code Switzerland.

Footnote 6: The website, which the QR code redirected to, moved to [https://www.stadt-zuerich.ch/portal/de/index/politik_u_recht/einburgerungen/kenntnisse/sprachlicheanforderungen.html](https://www.stadt-zuerich.ch/portal/de/index/politik_u_recht/einburgerungen/kenntnisse/sprachlicheanforderungen.html) on October 21, 2022, due to a mistake by the website maintainers. As a consequence, the QR code broke more than five months after the letter was dispatched to wave 2 participants. We show in Table A.3, where we only consider the naturalization applications recorded up to five months after letter dispatch, that our main results in Table 2 are not affected by this issue. We thus use, in line with the pre-analysis plan, application outcomes recorded seven months after letter dispatch in the remainder of the study.

The _Welcome letter_ is an information and encouragement letter signed by the Mayor of the City of Zurich. The _Welcome letter_ attempts to tackle the hurdle stemming from the perception that naturalization is not welcome (Hainmueller and Hangartner, 2013). The letter includes only a cover letter (shown in Appendix A.2.4) that is a little less than one page long and contains three sections. The first section informs recipients that they meet the basic eligibility requirements for Swiss citizenship. The second section encourages them to play an active part in Zurich's political life by becoming a citizen. The last section briefly directs to sources for further information about the citizenship application process and states that the City hopes to see them at the next ceremony for new citizens. Hence, compared to the other two treatment letters, this letter puts more emphasis on the emotional and psychological aspects associated with naturalization and only provides minimal information.
### Experimental design

The experimental design and analysis were pre-registered at [https://osf.io/9wr4t](https://osf.io/9wr4t) before conducting the study. In the exploration phase of the project, we randomly divided the sample into two groups: Group A (60% of the sample) received one of three treatment letters at random from the City of Zurich in October 2021, while Group B (40%) received no letter. The randomization design allocated one of the three treatment letters to individuals in Group A by building address and applied block randomization by nationality groups. The randomization by building address reduced the risk of spill-over effects among eligible immigrants living in the same or neighboring households. The block randomization by nationality group ensured that we have a roughly equal share of nationalities in Group A (including each subgroup receiving different letters) and Group B. We blocked on nationality groups given the importance of this effect moderator in earlier studies (D. G. Ward, Pianzola, and Hangartner, 2019). The letters for this first wave were delivered in October 2021.

In the exploitation phase, we used the realized application outcomes from the exploration phase, measured at the end of March 2022, to estimate an optimal policy tree determining which letters individuals in Group B should receive. To evaluate the performance of the policy rule, we randomly subdivided Group B into two sub-groups, referred to as Group B.1 and Group B.2, and sent treatment letters to Group B.1 based on the estimated policy rule while Group B.2 received a random treatment letter (with one-third probability for each letter). We randomized by building address for the random division into Groups B.1 and B.2, as well as for the randomization of treatments within Group B.2. The City of Zurich delivered the letters for the exploitation phase in May 2022.

### Results from the exploration phase: Learning the policy rule

We begin by analyzing the results from the exploration phase of the experiment using naturalization applications received by the end of March 2022 (i.e., wave 1). We proceed in three steps: estimation of (conditional) averages of treatment effects, tuning policy trees using a validation exercise, and fitting the policy tree on the full wave-1 data.

First, we fit a multi-arm causal forest to estimate average treatment effects as well as conditional average treatment effects by nationality group and years lived in Switzerland (Wager and Athey, 2018; Athey, Tibshirani, and Wager, 2019). Results are displayed in Figure 1.7 The average treatment effects for the first-wave sample imply that the _Complexity letter_ increases application rates by 1.08 p.p. (\(s.e.\)=0.91), the _Requirements letter_ by 4.33 p.p. (\(s.e.\)=1.04), and the _Welcome letter_ by 3.51 p.p. (\(s.e.\)=1.03), relative to the control condition of no letter.8

Footnote 7: We removed 274 individuals who moved between October 2021 and March 2022, resulting in an estimation sample of 4,871 individuals.

Footnote 8: See Hangartner et al. (2023) for a discussion of the letters’ efficacy in overcoming specific hurdles.

The left panel of Figure 1 shows only moderate heterogeneity in treatment effects by nationality. The _Welcome letter_ appears to have slightly stronger effects for immigrants from Germany, Austria and the Americas, whereas the relative effect size of the _Requirements letter_ is particularly large for immigrants from Central-Eastern and South-Eastern Europe, as well as for stateless immigrants.
The right panel of Figure 1 indicates that the _Complexity letter_ has the largest effect on application rates among eligible immigrants that have lived between 13 and 16 years in Switzerland. In contrast, eligible immigrants who have lived for more than 30 years in Switzerland are especially receptive to the requirements letter. This effect may also be partially driven by age since we also find the _Requirements letter_ to have the largest effect among immigrants aged 46 and above (see Figure A.3 in the Appendix). Finally, we find that men are slightly more receptive to the letter treatments overall than women, but the ranking of treatment letter efficacy is the same (see Figure A.3).

Figure 1: Average and conditional average treatment effects

Second, we conduct a validation exercise to select the tree depth of the policy tree and to compare alternative policy rules. In each iteration, we randomly split the wave-1 data (including untreated) into training and test data with a 60/40 split, and sample from each partition separately with replacement to construct bootstrapped training and validation data sets of sizes \(n_{1}=4871\) and \(n_{2}=1857\). We then fit a policy tree on the bootstrapped training data and estimate the difference in reward between alternative policy rules on the bootstrapped validation data. We focus on policy trees with tree depths of 2 and 3, as well as a hybrid policy tree of depth 4. For comparison, we consider (i) one-size-fits-all rules that always assign one of the _Complexity_, _Requirements_ or _Welcome_ letters, (ii) random allocation of one of the three letters, and (iii) a model-free plug-in rule that assigns the treatment for which the estimated reward is the largest. We repeat the exercise 500 times and report average differences in rewards and bootstrapped standard errors.9 Table 1 shows the results. For instance, the coefficient of 1.026 (\(s.e.=0.99\)) in the top-left entry corresponds to the difference in reward between a policy rule that assigns the _Complexity letter_ to everyone and a policy rule assigning no letter. We find that all three policy trees outperform each individual (one-size-fits-all) treatment letter as well as random treatment allocation. Among the three policy trees, the tree of depth 3 performs marginally better than trees of depth 2 and 4. As expected, the plug-in rule shows overall the best performance. However, the plug-in rule provides no insights into the drivers of treatment effects. The results thus highlight the trade-off between interpretability and performance, but also show that in this context, the best-performing policy tree is able to reach more than 85% of the performance of the plug-in rule.

Third, in light of the advantages and limited costs of policy trees in this setting, we opted for implementing the policy tree of depth 3. Following the approach of Zhou, Athey, and Wager (2022) as outlined in Section 2, we trained the policy tree on wave 1 data, including Group A (who received a letter in the first wave) and Group B (who did not receive a letter in the first wave). Since we randomized treatment assignment in the first wave, we did not need to estimate the propensity scores but plugged the known treatment shares into (2). We used multi-arm causal forests to estimate the double robust scores, although other estimators are possible. The fitted policy tree \(\hat{\pi}\) of depth three is displayed in Figure 2.
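To give a concrete sense of the tree-search step, the sketch below performs an exhaustive search for the best depth-1 policy tree given a matrix of doubly robust scores \(\hat{\Gamma}_{i,a}\); this is the building block that the exact search of Zhou, Athey, and Wager (2022) applies recursively to fit deeper trees. It is an illustration only, not the implementation used to produce Figure 2, and all names are invented.

```python
import numpy as np

def best_depth1_tree(X, gamma):
    """Exhaustive depth-1 policy tree search on doubly robust scores.

    X     : (n, p) covariate matrix
    gamma : (n, D) doubly robust scores, one column per treatment
    Returns (reward, rule); rule is None for the best single-action policy,
    else (feature, threshold, left_action, right_action), where the left
    leaf contains the points with X[:, feature] <= threshold.
    """
    n, p = X.shape
    best = (gamma.sum(axis=0).max(), None)   # baseline: same action for all
    for j in range(p):
        order = np.argsort(X[:, j])
        g = gamma[order]
        left = np.cumsum(g, axis=0)          # leaf scores if split after row i
        total = g.sum(axis=0)
        for i in range(n - 1):
            if X[order[i], j] == X[order[i + 1], j]:
                continue                     # cannot split between equal values
            value = left[i].max() + (total - left[i]).max()
            if value > best[0]:
                best = (value,
                        (j, X[order[i], j],
                         int(left[i].argmax()), int((total - left[i]).argmax())))
    return best
```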
Figure 2: Fitted policy tree

The boxes at the bottom of the tree show the assigned treatment for the wave-1 sample and the wave-2 sample (i.e., Group B) per terminal node. For instance, the left-most branch assigns individuals who have spent no more than 12 years in Switzerland, are aged 37 years or younger, and who are not from Italy to the requirements treatment. 815 individuals in total and 324 individuals from Group B fall into that category. In total, 139 individuals of Group B are assigned to the _Complexity letter_, 874 individuals to the _Requirements letter_ and 844 to the _Welcome letter_.10 The splits in the tree are based on years in Switzerland, age, and only two nationality indicators, but no split is based on gender, confirming that the relative performance of each letter is the same for women and men. It is also noteworthy that no individuals were assigned to receive no letter, which suggests that at least one of the three letters has a positive effect for every individual.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{One size fits all} & \multicolumn{1}{c}{Random} & \multicolumn{3}{c}{Policy tree} & \multicolumn{1}{c}{Plug-in} \\ & Complexity & Requirem. & Welcome & treatment & \(d=2\) & \(d=3\) & \(d=4\) & rule \\ \hline Nothing & 1.026 & 4.050\({}^{***}\) & 3.266\({}^{***}\) & 2.782\({}^{***}\) & 5.343\({}^{***}\) & 5.500\({}^{***}\) & 5.408\({}^{***}\) & 6.369\({}^{***}\) \\ & (0.992) & (1.147) & (1.057) & (0.776) & (1.030) & (1.028) & (1.013) & (0.988) \\ Always 1 & & 3.025\({}^{**}\) & 2.241\({}^{*}\) & 1.756\({}^{**}\) & 4.317\({}^{***}\) & 4.474\({}^{***}\) & 4.383\({}^{***}\) & 5.343\({}^{***}\) \\ & & (1.282) & (1.236) & (0.722) & (1.141) & (1.097) & (1.063) & (1.052) \\ Always 2 & & & -0.784 & -1.269 & 1.293\({}^{*}\) & 1.449\({}^{**}\) & 1.358\({}^{*}\) & 2.319\({}^{***}\) \\ & & & (1.331) & (0.771) & (0.742) & (0.707) & (0.730) & (0.693) \\ Always 3 & & & & -0.484 & 2.077\({}^{**}\) & 2.234\({}^{**}\) & 2.142\({}^{**}\) & 3.103\({}^{***}\) \\ & & & & (0.743) & (0.861) & (0.892) & (0.865) & (0.864) \\ Random & & & & & 2.561\({}^{***}\) & 2.718\({}^{**}\) & 2.627\({}^{**}\) & 3.587\({}^{***}\) \\ & & & & & (0.566) & (0.534) & (0.507) & (0.482) \\ Policy tree (\(d=2\)) & & & & & & 0.157 & 0.065 & 1.026\({}^{***}\) \\ & & & & & & (0.293) & (0.315) & (0.297) \\ Policy tree (\(d=3\)) & & & & & & & -0.091 & 0.869\({}^{**}\) \\ & & & & & & & (0.251) & (0.240) \\ Hybrid tree (\(d=4\)) & & & & & & & & 0.961\({}^{***}\) \\ & & & & & & & & (0.227) \\ \hline \hline \end{tabular} _Notes:_ The table reports the difference in estimated rewards between policy rules based on wave 1 data (including untreated immigrants of Group B). The results are based on a resampling exercise where we randomly split the wave 1 data into training and test data using a 60/40 split, and separately draw \(n_{1}=4871\) and \(n_{2}=1857\) observations with replacement from the training and test data. We use 500 repetitions and report the average difference in rewards and associated bootstrapped standard errors. \end{table} Table 1: The effect of the policy rule compared with randomization, always the same treatment, and no treatment

### Results from the exploitation phase: Evaluating the policy rule

In this section, we evaluate the performance of the decision tree-based policy rule which assigned treatments to Group B.1 according to the policy rule displayed in Figure 2.
We compare the policy tree against (1) no treatment, (2) random treatment allocation, and (3) conventional one-size-fits-all policy rules that always assign the same treatment to everyone, ignoring treatment effect heterogeneity. To this end, we estimate models of the form:

\[Y_{it}=W_{it}^{\prime}\beta+f(X_{i},\delta_{t})+\varepsilon_{it} \tag{3}\]

where \(Y_{it}\) is the application outcome of eligible immigrant \(i\) recorded approximately 7 months after the date of letter dispatch \(t\), which took place on October 8, 2021 and May 6, 2022 for wave 1 and 2, respectively.11 \(X_{i}\) are the covariates region of nationality, age, gender, years lived in Zurich and years lived in Switzerland, which are constant over the sample period. \(\delta_{t}\) is a dummy for wave \(t\in\{1,2\}\), and accounts for seasonal effects and other external shocks that may affect application rates. The vector \(W_{it}\) assigns individuals to treatment groups, and is defined as \(W_{it}=(\mathit{Letter}_{it}^{1},\mathit{Letter}_{it}^{2},\mathit{Letter}_{it}^{3},\mathit{Nothing}_{it},\mathit{PolicyTree}_{it})\) or \(W_{it}=(\mathit{Random}_{it},\mathit{Nothing}_{it},\mathit{PolicyTree}_{it})\), respectively, where \(\textit{Letter}_{it}^{j}\) is set to 1 if the individual \(i\) was randomly assigned to treatment letter \(j\in\{1,2,3\}\) for wave \(t\), 0 otherwise. \(\textit{Nothing}_{it}\) is set to 1 if the individual \(i\) has received no treatment in wave \(t\), and \(\textit{PolicyTree}_{it}\) equals 1 if individual \(i\) has received the treatment letter assigned to them by the policy tree. Finally, \(\textit{Random}_{it}\) is set to 1 if individual \(i\) was randomly assigned to one of the three letters, 0 otherwise.

We estimate (3) by linear regression using only the elementary controls, but also consider more flexible methods. Namely, we use Post-Double Selection Lasso (PDS-Lasso; Belloni, Chernozhukov, and Hansen, 2014) and Double-Debiased Machine Learning (DDML; Chernozhukov, Chetverikov, et al., 2018) where we extend the set of controls by interaction terms and second-order polynomials.12 We cluster standard errors by building addresses, i.e., the level at which the treatment was applied.

Footnote 12: For the Post-Double Selection Lasso, we use cluster-robust penalty loadings of Belloni, Chernozhukov, Hansen, and Kozbur (2016). With regard to DDML, we use 10 cross-fitting folds, 5 cross-fitting repetitions and use stacking with a set of candidate learners including linear regression, lasso, ridge, random forests and gradient boosting.

Table 2 shows the results of the evaluation based on estimating versions of (3) using OLS (columns 1-3), PDS lasso (col. 4-5), and DDML (col. 6-7). The sample includes only wave 2 in column 1, and both waves in the remaining columns. The reference group in column 1 is random treatment allocation, while the base group in columns 2-7 is no treatment. Panel A reports the coefficient estimates and Panel B compares the policy rule using policy trees against each individual treatment letter and random treatment allocation. According to the OLS results in columns 1-3, the treatment assignment by policy tree increased the application rate by 1.79 (\(s.e.\)=1.36) to 1.90 p.p. (1.36) relative to random treatment, and by around 5.13 p.p. (1.61) compared to no treatment. Random allocation is associated with an application rate increase of approximately 3.23 p.p. (0.82).
Turning to the individual treatments, we find that the _Welcome letter_ yields overall the largest increase in application take-up, with an effect size around 3.79 p.p. (1.07), closely followed by the _Requirements letter_ with an effect size around 3.65 p.p. (1.10). The _Complexity letter_ performs substantially worse in comparison, with an effect size of 2.23 (\(s.e.\)=1.04). Panel B shows that the policy tree performs better than random treatment and than each individual treatment option. The take-up increase compared to the best-performing individual treatment (the _Welcome letter_) is 1.03 p.p. but statistically insignificant. The PDS lasso estimates are almost identical, and the DDML estimator yields effect sizes only marginally smaller. ## 4 Conclusion This paper employs policy trees for assigning eligible immigrants to the information and encouragement treatment that is most likely to address hurdles on their path to citizenship and boost their propensity to naturalize. We evaluate the benefits of this policy rule using a tailored two-phase field experiment. During the exploration phase, we randomly assign eligible immigrants to one of three treatment arms or the control group, based on which we estimate average treatment effects and train the policy tree. We find that despite its simplicity, the optimal policy tree of depth 3 captures more than 85% of the treatment effect heterogeneity (relative to a model-free plug-in rule). Next, we move on to the exploitation phase, in which we assign the subjects that belonged to the control \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \multicolumn{8}{c}{_Dependent variable: Naturalization application_} \\ \hline \multicolumn{8}{l}{_Panel A. Coefficient estimates_} \\ Policy tree & 1.794 & 5.124\({}^{**}\) & 5.127\({}^{**}\) & 5.004\({}^{**}\) & 5.005\({}^{**}\) & 4.702\({}^{**}\) & 4.758\({}^{**}\) \\ & (1.358) & (1.609) & (1.609) & (1.606) & (1.606) & (1.472) & (1.479) \\ Random & & 3.225\({}^{***}\) & & 3.245\({}^{***}\) & & 3.207\({}^{***}\) & \\ & & (0.821) & & (0.822) & & (0.752) & \\ Complexity & & & 2.230\({}^{*}\) & & 2.260\({}^{*}\) & & 2.199\({}^{*}\) \\ & & & (1.035) & & (1.037) & & (0.938) \\ Requirements & & & 3.650\({}^{***}\) & & 3.711\({}^{***}\) & & 3.613\({}^{***}\) \\ & & & (1.095) & & (1.096) & & (1.026) \\ Welcome & & & 3.787\({}^{***}\) & & 3.752\({}^{***}\) & & 3.675\({}^{***}\) \\ & & & (1.074) & & (1.071) & & (0.986) \\ \hline \multicolumn{8}{l}{_Panel B. Comparison of Policy tree with:_} \\ Random & 1.794 & 1.899 & & 1.759 & & 1.495 & \\ & (1.358) & (1.361) & & (1.359) & & (1.257) & \\ Complexity & & & 2.897 & & 2.745 & & 2.559 \\ & & & (1.539) & & (1.538) & & (1.432) \\ Requirements & & & 1.477 & & 1.294 & & 1.144 \\ & & & (1.503) & & (1.499) & & (1.409) \\ Welcome & & & 1.340 & & 1.253 & & 1.083 \\ & & & (1.528) & & (1.527) & & (1.415) \\ \hline Sample & Wave 2 & Wave 1-2 & Wave 1-2 & Wave 1-2 & Wave 1-2 & Wave 1-2 & Wave 1-2 \\ Estimator & OLS & OLS & OLS & PDS lasso & PDS lasso & DDML & DDML \\ Outcome mean & 7.69 & 7.92 & 7.92 & 7.92 & 7.92 & 7.92 & 7.92 \\ Observations & 1717 & 6588 & 6588 & 6588 & 6588 & 6588 & 6588 \\ \hline \hline \end{tabular} _Notes:_ The table reports results from estimating versions of (3) using OLS (columns 1-3), PDS-Lasso (columns 4-5) and DDML (columns 6-7). Column 1 only uses data from wave 2; the remaining columns use the full data set. The reference group in column 1 is random treatment allocation; no treatment in columns 2-7. Panel A reports the coefficient estimates. 
Panel B compares the policy rule using policy trees against always assigning the same treatment to everyone and against random treatment allocation. Covariates include the region of nationality, age, gender, years lived in Zurich and years lived in Switzerland. Standard errors are clustered at the building-address level. \({}^{*}\)\(p<0.05\), \({}^{**}\)\(p<0.01\), \({}^{***}\)\(p<0.001\) \end{table} Table 2: The effect of the policy rule compared to randomization, always assigning the same treatment, and no treatment group in the previous phase to either the policy tree or randomly to one of the three treatments. We find that the policy tree slightly outperforms the best-performing individual treatment. While these differences are not statistically significant, it is worth noting that these benefits persist in a context with at most moderate levels of treatment effect heterogeneity and come at little additional cost. Policy trees possess several advantages that make them particularly suited for policymakers and researchers interested in tailoring treatment assignment to the specific needs of increasingly diverse populations. Policy trees are transparent in terms of which variables guide treatment assignment, they are simple to visualize, and they are intuitive to communicate even to users of the research who lack statistical training. While using machine learning to personalize treatment assignments raises a host of important ethical and policy questions, we should keep in mind that a one-size-fits-all approach can often exacerbate existing inequalities. For instance, an earlier information letter sent out by the City of Zurich had by far the strongest effects among newly eligible immigrants, who often score higher on multiple integration dimensions compared to more marginalized immigrants who have been residing in the host country for decades without naturalizing (D. G. Ward, Pianzola, and Hangartner, 2019). For all these reasons, we believe that policy trees are a well-suited approach to leverage the potential of tailored treatment assignment in a world where rich background characteristics are increasingly available. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgements We thank our partners from the City of Zurich for their collaboration and support. This research was supported by the Stiftung Mercator Schweiz and by the _nccr - on the move_ program, which is funded by the Swiss National Science Foundation (grant no. 51NF40-182897).
2302.14597
deHuBERT: Disentangling Noise in a Self-supervised Model for Robust Speech Recognition
Existing self-supervised pre-trained speech models have offered an effective way to leverage massive unannotated corpora to build good automatic speech recognition (ASR). However, many current models are trained on a clean corpus from a single source, and tend to do poorly when noise is present during testing. Nonetheless, it is crucial to overcome the adverse influence of noise for real-world applications. In this work, we propose a novel training framework, called deHuBERT, for noise reduction encoding inspired by H. Barlow's redundancy-reduction principle. The new framework improves the HuBERT training algorithm by introducing auxiliary losses that drive the self- and cross-correlation matrices between pairwise noise-distorted embeddings towards the identity matrix. This encourages the model to produce noise-agnostic speech representations. With this method, we report improved robustness in noisy environments, including unseen noises, without impairing the performance on the clean set.
Dianwen Ng, Ruixi Zhang, Jia Qi Yip, Zhao Yang, Jinjie Ni, Chong Zhang, Yukun Ma, Chongjia Ni, Eng Siong Chng, Bin Ma
2023-02-28T14:33:41Z
http://arxiv.org/abs/2302.14597v1
# deHuBERT: Disentangling Noise in a Self-Supervised Model for Robust Speech Recognition ###### Abstract Existing self-supervised pre-trained speech models have offered an effective way to leverage massive unannotated corpora to build good automatic speech recognition (ASR). However, many current models are trained on a clean corpus from a single source, and tend to do poorly when noise is present during testing. Nonetheless, it is crucial to overcome the adverse influence of noise for real-world applications. In this work, we propose a novel training framework, called deHuBERT, for noise reduction encoding inspired by H. Barlow's redundancy-reduction principle. The new framework improves the HuBERT training algorithm by introducing auxiliary losses that drive the self- and cross-correlation matrices between pairwise noise-distorted embeddings towards the identity matrix. This encourages the model to produce noise-agnostic speech representations. With this method, we report improved robustness in noisy environments, including unseen noises, without impairing the performance on the clean set. Dianwen Ng\({}^{1,2}\), Ruixi Zhang, Jia Qi Yip\({}^{1,2}\), Zhao Yang\({}^{2,3}\), Jinjie Ni\({}^{1,2}\), Chong Zhang\({}^{1}\), Yukun Ma\({}^{1}\), Chongjia Ni\({}^{1}\), Eng Siong Chng\({}^{2}\), Bin Ma\({}^{1}\) \({}^{1}\)Alibaba Group \({}^{2}\)School of Computer Science and Engineering, Nanyang Technological University, Singapore \({}^{3}\)Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, China self-supervised learning, disentangling representations, noise robust automatic speech recognition Footnote †: This work was supported by Alibaba Group through Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (URI), Nanyang Technological University, Singapore. ## 1 Introduction Recently, self-supervised pre-training in speech has seized the limelight with numerous successes in building highly effective automatic speech recognition (ASR) systems [1, 2], especially for low-resource languages [3]. This success stems from leveraging large amounts of unannotated utterances to construct universal speech representations that benefit downstream ASR tasks. Such frameworks include contrastive predictive coding (CPC) [4], which learns by making next-step predictions using a contrastive loss, and autoregressive predictive coding (APC) [5], which builds its speech representations by reconstructing future frames from the past sequence. Most of these works focused on a single domain of relatively clean audio, e.g. LibriSpeech [6], that lacks domain variation. Nevertheless, speech in real-world environments usually contains background noise, reverberation and other non-linear distortions. [7] showed that many off-the-shelf universal speech models are vulnerable to this issue: the performance of downstream ASR systems significantly degrades if there is a domain shift from the pre-training data. To improve the noise robustness, [8] modified wav2vec2.0 (w2v2) to include a contrastive loss that learns the cross-quantized targets between the original-noisy pair. Likewise, [9] employed a contrastive loss as a regularizer to achieve noise-reduced speech features. [10] provides another approach, using a teacher-student framework to encode denoising representations from perturbed data in a setup that resembles a siamese network. 
In addition, [11] constructed an enhanced w2v2 that enforces consistency between noisy and clean features, and [12] introduced an auxiliary reconstruction task to improve the noise robustness of the learned representations. However, most of these approaches may be hard to reproduce and involve delicate implementation details. In this paper, we aim to improve the noise robustness of the self-supervised pre-trained HuBERT [2] model for noisy ASR. We achieve this by introducing a new pair of auxiliary loss functions that encourage noise invariance in HuBERT's embedded contextual representations. To realize this, we propose a novel self-supervised training framework, disentangled HuBERT (deHuBERT), which regularizes HuBERT training using the recently proposed Barlow Twins [13], a method which reduces redundant information between the vector representations in images. We adapt this technique for sequential modelling and show that it is simple and highly effective in learning noise-invariant speech representations. The method aggregates the cross-correlation matrix between the embeddings of two identical networks forward-fed with different noise-augmented samples and pushes it towards the identity matrix. For the diagonal elements of the cross-correlation matrix to approach 1, the network has to extract agreeing features (i.e. speech content) of the two augmented utterances while minimizing other variational factors (i.e. background noises) between the dimensional representations at the frame level. Furthermore, decorrelating the off-diagonal elements creates the conditions for disentanglement. Experimental results show that our pre-trained model consistently exhibits better robustness in noisy environments, including unseen noises, without compromising the performance on the clean audio test set. ## 2 Methodology ### HuBERT The HuBERT model architecture follows w2v2 with a convolutional encoder, BERT encoder, projection layer and code embedding layer. HuBERT adapts the BERT model from NLP to perform self-supervised speech representation learning. This allows the encoder to discover good high-level latent representations of both acoustic and language information from the continuous speech signals. During pre-training, it exploits an offline clustering step (i.e., using the K-means algorithm) to generate the aligned discrete target labels (codes) for computing the BERT-like prediction loss from the masked frames, following the SpanBERT masking strategy. The training of HuBERT is initiated with hidden units of \(K=100\) clusters derived from the MFCC features of the raw audio data. In the subsequent iterations, the target codes are updated based on hidden units of \(K=500\) clusters determined using the intermediate latent representations of the sixth layer of HuBERT's transformer at the second iteration. However, the HuBERT training algorithm does not inherently disentangle representations for noise separation or reduction, making the encoder vulnerable to noise. ### deHuBERT To obtain disentangled noise-agnostic representations using the HuBERT model, our proposed deHuBERT training algorithm uses HuBERT to generate, in parallel, a second embedding of a differently noise-augmented version of the input using a shared CNN encoder, as shown in Fig. 1. Here, two sets of noise are randomly selected and added to the training data with SNRs ranging between 0 and 25 dB. 
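A minimal sketch of producing two differently noise-distorted views of the same utterance at a random SNR in the 0-25 dB range is given below. The power-based scaling formula is standard; the function and variable names are ours and not taken from the paper's implementation.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add `noise` to `speech` so that the mixture has the requested SNR in dB."""
    noise = np.resize(noise, speech.shape)          # crop/tile noise to match length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Scale so that p_speech / (scale**2 * p_noise) = 10**(snr_db / 10).
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)                      # stand-in 1 s utterance
noises = [rng.standard_normal(16000) for _ in range(2)]  # two randomly picked noises

# Two noise-distorted views of the same utterance, as in Fig. 1.
view_a = mix_at_snr(speech, noises[0], rng.uniform(0, 25))
view_b = mix_at_snr(speech, noises[1], rng.uniform(0, 25))
```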
We then collect the encoded feature representations, \(X\) and \(\tilde{X}\), from the intermediate outputs and pass them to a shared linear projection block to get \(Y\) and \(\tilde{Y}\), respectively. Finally, following the losses introduced by [13], we derive the empirical cross-correlation (CC) matrix by \[C_{ij}^{(cc)}\triangleq\frac{\sum_{n}y_{n,i}\tilde{y}_{n,j}}{\sqrt{\sum_{n}(y_{n,i})^{2}}\sqrt{\sum_{n}(\tilde{y}_{n,j})^{2}}} \tag{1}\] where \(n\) denotes the number of frames used and \(i\), \(j\) refer to dimension indices of the frame-level representations. Note that \(C\) is a \(d\times d\) square matrix with entries in \([-1,1]\), where \(d\) is the size of the projected output. We employ a CC loss that pushes the CC matrix towards the identity matrix. This loss function is defined by \[\mathcal{L}_{cc}\triangleq\underbrace{\sum_{i}(1-C_{ii})^{2}}_{\text{invariance term}}+\lambda\underbrace{\sum_{i}\sum_{j\neq i}{C_{ij}}^{2}}_{\text{disentangling term}} \tag{2}\] where \(\lambda\) is a penalizing parameter that balances the trade-off between the first and second terms of the loss. Since \(Y\) and \(\tilde{Y}\) are sequential features, ignoring the frame-level correlation tends to overestimate the variability. To account for this, we flatten the outputs and remove the zero-padded frames within each minibatch before performing a random sampling of size \(n\), indexing both \(Y\) and \(\tilde{Y}\) identically. This makes the sampled frames more independent and gives us some control over the stability of the proposed framework. Figure 1: The deHuBERT method minimizes the self- and cross-correlation of the latent embeddings to an identity matrix. To understand how the proposed CC loss can reduce noise to obtain invariant features, we compare it to the infoNCE [14] loss. Formally, the first term in Eq. 2 shares a close resemblance to the positive contrastive pair in infoNCE, as presented in Eq. 3. \[\underbrace{-\sum_{n}\frac{\langle y_{n},\tilde{y}_{n}\rangle_{i}}{\tau\|y_{n}\|_{2}\|\tilde{y}_{n}\|_{2}}}_{\text{infoNCE's positive contrastive}}\quad\underbrace{\sum_{i}(1-\frac{\langle y_{,i},\tilde{y}_{,i}\rangle_{n}}{\|y_{,i}\|_{2}\|\tilde{y}_{,i}\|_{2}})^{2}}_{\text{proposed invariance term}} \tag{3}\] Similar to the objective behind the positive contrastive loss, we try to maximize the agreeing speech content between the two distorted embeddings and suppress other variations (e.g. noise) by driving the corresponding feature dimensions of the two embeddings to be perfectly correlated. Likewise, decorrelating the off-diagonal entries discourages information sharing across feature dimensions while simultaneously encouraging disentangled representations. To gain further disentanglement in the output representations, we build another linear projection block of the same structure that takes in the bottleneck representations \(Z\) to compute the projected \(P_{Z}\) for estimating the empirical self-correlation (SC). The estimation can be done by reusing the computational function in Eq. 1 with random sampling, replacing the arguments with (\(P_{Z}\), \(P_{Z}\)). Again, we compute the SC loss as in Eq. 2 but with the SC matrix. In practice, we believe that the CC loss alone may not be perfect in obtaining noise-invariant representations. Disentangling the bottleneck features and then using them to predict the hidden units (i.e. 
HuBERT's codes) of the original clean training audio guides the encoder to detect the residual noise information and eventually suppress it in the final contextual representations. The complete optimization loss used in our pre-training framework is given by \[\mathcal{L}=\mathcal{L}_{\text{HB}}+\alpha\mathcal{L}_{\text{CC}}+\beta\mathcal{L}_{\text{SC}} \tag{4}\] where the three terms refer to the HuBERT, cross-correlation and self-correlation losses, respectively. \(\alpha\) and \(\beta\) have both been set to 0.5 in this work. ## 3 Experiment ### Data Description We set up our data environments following [15, 11] for performance comparisons. In our experiments, we use the full 960h of LibriSpeech for pre-training and the dev-clean corpus for the validation set. The noise dataset used for training is obtained from FreeSound [16], which consists of 16kHz noise data that can be categorized into stationary (Type A) and non-stationary (Type B). The Type A noises available are Car, Metro and Traffic noises. In the Type B category, Babble, Airport/Station, Cafe and AC/Vacuum noises are available. Each type of noise has 10 and 8 different audio streams in the training and testing sets, respectively. The total duration of the noise data is around 2h. During testing, 120 randomly chosen sub-files from the test-clean set of LibriSpeech are used, as per the standard procedure for testing on this dataset. In addition, these test files come pre-mixed with noises at different SNRs between 0 and 20 dB, which ultimately makes up 4200 instances of noisy test data. The noise data and noisy test sets can be downloaded from the website1. Footnote 1: [https://github.com/archiki/Robust-E2E-ASR](https://github.com/archiki/Robust-E2E-ASR) ### Model Pre-training We perform continual pre-training, utilizing the weights provided by the Fairseq toolkit, for 250k steps. In our implementation, we construct the final projection blocks with output dimensions \(d=2048\) for CC and \(d=4096\) for SC. In contrast to [13], we observed that performance is concave in the dimensionality of the projector network. Additionally, we sampled \(n=640\); we found that adopting a smaller sample size benefits early-stage learning, as it introduces a slightly higher estimation error that excites the network and allows the model to escape from local minima. However, this requires a smaller \(\lambda=0.005\) to limit the adverse effect of the estimation error. Finally, we also found that applying a smaller learning rate of 7e-5 leads to better model pre-training. ### Model Fine-tuning We used the best checkpoint from the pre-training and followed the typical base setup for 100h, 10h, 1h and 10m. The ASR finetuning involves only the HuBERT component. Additionally, we employed multi-condition training with training noise at 0 to 20 dB. Finally, we report final evaluations with the best checkpoint according to the validation WER. ## 4 Experimental Results We compare our results without a language model against an off-the-shelf HuBERT as the baseline, to determine the efficiency of our model in learning noise-robust ASR with limited finetuning data. Also, we include results from the HuBERT base model that undergoes multi-condition pre-training, to give a more complete picture. Table 1 shows the ASR performance in WER based on the subset test-clean audio pre-mixed with the individual noise types at SNRs between 0 and 20 dB. 
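Before turning to the numbers, a minimal PyTorch sketch of the correlation matrix of Eq. (1), the loss of Eq. (2), and the combined objective of Eq. (4) is given below. Shapes follow the settings reported above (\(n=640\), \(d=2048\) for CC, \(d=4096\) for SC); the HuBERT loss itself is left as a placeholder and the function names are ours.

```python
import torch

def correlation_matrix(y: torch.Tensor, y_tilde: torch.Tensor) -> torch.Tensor:
    """Eq. (1): column-normalized correlation between two (n, d) projections."""
    num = y.t() @ y_tilde                                       # sum_n y_{n,i} y~_{n,j}
    denom = y.norm(dim=0).unsqueeze(1) * y_tilde.norm(dim=0).unsqueeze(0)
    return num / denom.clamp_min(1e-8)

def correlation_loss(c: torch.Tensor, lam: float = 0.005) -> torch.Tensor:
    """Eq. (2): push the correlation matrix towards the identity."""
    invariance = ((1.0 - c.diagonal()) ** 2).sum()
    off_diag = (c - torch.diag(c.diagonal())) ** 2              # zero out the diagonal
    return invariance + lam * off_diag.sum()

n, d_cc, d_sc = 640, 2048, 4096
y, y_tilde = torch.randn(n, d_cc), torch.randn(n, d_cc)        # sampled frame projections
p_z = torch.randn(n, d_sc)                                     # bottleneck projections

l_cc = correlation_loss(correlation_matrix(y, y_tilde))        # cross-correlation loss
l_sc = correlation_loss(correlation_matrix(p_z, p_z))          # self-correlation loss
l_hubert = torch.tensor(0.0)                                   # placeholder for L_HB
loss = l_hubert + 0.5 * l_cc + 0.5 * l_sc                      # Eq. (4), alpha = beta = 0.5
```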
We observe that pre-training HuBERT with noise helps to improve the adaptability to noise on the downstream ASR, but this comes at the cost of degrading clean speech performance. Nonetheless, deHuBERT outperforms baseline HuBERT on both noisy and clean speech regardless of the pre-training condition. Additionally, the difference in performance becomes more apparent with the increasing scarcity of finetuning resources. Finally, we run the experiment with the typical 100h finetuning to compare our deHuBERT with existing models. On the complete test-clean and test-other sets, we achieved WERs of 6.3% and 13.2%, respectively. This score is comparable to the baseline performance despite using only noisy speech for finetuning. Additionally, deHuBERT achieves the best WER on the noisy data. To visualize the noise-agnostic properties of the deHuBERT embeddings, we plot the t-SNE of the bottleneck features of both HuBERT and deHuBERT in Fig. 2. Figure 2: t-SNE plots that compare the disentanglement and noise invariance of different networks with 0 dB noises. The features were obtained from 720 randomly selected audio samples of train-clean-100 mixed with 0 dB of Airport, Metro and Cafe noises. Before plotting, we performed a global mean pooling of all the bottleneck features in a sequence to get vector representations before applying the t-SNE algorithm. On the HuBERT Base plot (left), we can identify clusters consisting of samples with the same noise type, indicating the presence of noise information. In comparison, the deHuBERT plot (right) exhibits no clear clustering according to the type of noise. ### Post-methodology Study In this section, we stress-test our model to determine the robustness of its out-of-domain (OOD) performance. We use the TEDLIUM3 [17] dataset to explore the effect of domain shift with noisy ASR. Moreover, we introduce out-of-domain office noise from FSD50K [18] by selecting noise from the groups _Whispering, Writing, Typing, Typewriter, Telephone, Conversation, Laughter, Computer Keyboard_ and _Printer_. We filter those that are less than 10m, which led us to 385 files. Table 2 presents the performance of models fine-tuned on the selected clean audio set (10h), evaluated on the complete test set under three different conditions: (1) in-domain (ID) clean test set, (2) noise seen in pre-training (ID) but unseen during finetuning, and (3) noise unseen in both pre-training and finetuning (OOD office noise). Firstly, our pre-trained model is comparable to the base under condition (1). This is important as it indicates that our model remains robust and is unaffected by noisy pre-training. Secondly, even on noise unseen during finetuning, deHuBERT performs consistently better than HuBERT base in noisy environments under conditions (2) and (3). Lastly, although there is still a degradation in performance on ID and OOD noisy ASR, the percentage increase in WER is relatively lower for deHuBERT than for HuBERT base, especially under condition (3). ## 5 Conclusion In this paper, we proposed a novel pre-training framework that disentangles noise with the self- and cross-correlation losses for more robust speech recognition. Our model exhibits superiority in handling noisy ASR environments, including OOD noises, without compromising the performance on the clean audio test. The t-SNE plot of the contextual representations from deHuBERT offers a visual understanding of the improvement in noise robustness: the randomly scattered projections imply that little noise information remains embedded. 
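As a side note on reproducing the visualization behind Fig. 2, the pooling-then-t-SNE step can be sketched as follows; scikit-learn is an assumed tool here, and the feature dimension and sequence lengths below are arbitrary stand-ins.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in per-utterance sequences of bottleneck features, each of shape (T_i, d).
rng = np.random.default_rng(0)
feats = [rng.standard_normal((int(rng.integers(50, 100)), 768)) for _ in range(720)]

# Global mean pooling over time gives one vector per utterance ...
pooled = np.stack([f.mean(axis=0) for f in feats])

# ... which is then embedded in 2-D with t-SNE for plotting.
emb = TSNE(n_components=2, random_state=0).fit_transform(pooled)
print(emb.shape)  # (720, 2)
```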
\begin{table} \begin{tabular}{l|c|c c c c|c c c|c|c} \hline \multirow{3}{*}{Methods} & \multirow{3}{*}{Pre-train} & \multicolumn{9}{c}{WER (\%) under noisy (0 – 20 dB) SNR and clean environment \(\downarrow\)} \\ \cline{3-11} & & \multicolumn{4}{c|}{Type-B noise} & \multicolumn{3}{c|}{Type-A noise} & & \\ \cline{3-9} & & Babble & Airport/Station & AC/Vacuum & Cafe & Traffic & Metro & Car & Avg. (noisy) & Clean (subset) \\ \hline \multicolumn{11}{c}{Fine-tuning: 10-hours labeled (with additive FreeSound noise)} \\ \hline HuBERT Base & Clean & 33.71 & 26.85 & 23.82 & 20.19 & 19.05 & 18.26 & 12.91 & 22.11 & 13.5 \\ HuBERT Base & FreeSound & 27.93 & 22.33 & 20.77 & 17.58 & 17.08 & 17.30 & 13.05 & 19.43 & 13.7 \\ deHuBERT (Ours) & FreeSound & **26.58** & **21.23** & **20.14** & **16.83** & **16.05** & **15.74** & **11.95** & **18.36** & **12.8** \\ \hline \multicolumn{11}{c}{Fine-tuning: 1-hour labeled (with additive FreeSound noise)} \\ \hline HuBERT Base & Clean & 49.72 & 41.86 & 39.98 & 35.79 & 34.42 & 33.08 & 26.74 & 37.37 & **27.8** \\ HuBERT Base & FreeSound & 42.54 & 36.83 & 36.11 & 32.82 & 32.19 & 31.77 & 27.60 & 34.27 & 29.1 \\ deHuBERT (Ours) & FreeSound & **41.74** & **36.27** & **35.54** & **32.41** & **31.51** & **31.24** & **26.68** & **33.63** & 28.4 \\ \hline \multicolumn{11}{c}{Fine-tuning: 10-mins labeled (with additive FreeSound noise)} \\ \hline HuBERT Base & Clean & 70.25 & 63.62 & 61.89 & 57.68 & 55.41 & 54.66 & 47.95 & 58.78 & 48.4 \\ HuBERT Base & FreeSound & 60.53 & 56.31 & 56.00 & 52.92 & 53.16 & 52.58 & 49.56 & 54.44 & 50.7 \\ deHuBERT (Ours) & FreeSound & **58.59** & **53.82** & **53.88** & **50.66** & **49.67** & **49.71** & **45.80** & **51.73** & **47.1** \\ \hline \multicolumn{11}{c}{Fine-tuning: 100-hours labeled (with additive FreeSound noise)} \\ \hline DEMUCS [15] & FreeSound & 45.56 & 36.98 & 38.20 & 27.02 & 26.46 & 23.22 & 16.02 & 30.49 & 10.9 \\ AvT [15] & No & 43.42 & 35.32 & 36.62 & 27.06 & 27.88 & 24.28 & 17.76 & 30.33 & 13.1 \\ Wav2vec 2.0 [11] & Clean & 47.50 & 39.68 & 38.84 & 31.14 & 29.22 & 27.44 & 18.24 & 33.15 & 14.0 \\ Wav2vec 2.0 [11] & FreeSound & 39.56 & 32.50 & 34.94 & 25.22 & 24.52 & 22.48 & 16.24 & 27.92 & 13.5 \\ EW2 [11] & FreeSound & 33.88 & 27.36 & 27.94 & 22.08 & 20.94 & 19.84 & 14.88 & 23.85 & 12.3 \\ HuBERT Base & FreeSound & 22.52 & 16.91 & 15.94 & 12.79 & 12.43 & 12.20 & 8.39 & 14.45 & 9.4 \\ deHuBERT (Ours) & FreeSound & **21.25** & **16.02** & **14.93** & **11.94** & **11.66** & **11.21** & **7.62** & **13.52** & **8.6** \\ \hline \end{tabular} \end{table} Table 1: Experimental results on the synthesized noisy data for various noise types at SNRs of (0-20) dB without a LM. 
\begin{table} \begin{tabular}{l|c|c|c|c|c} \hline \multirow{3}{*}{Models} & \multirow{3}{*}{ \begin{tabular}{c} FT Data \\ (10hrs) \\ \end{tabular} } & \multicolumn{4}{c}{WER (\%) of testing data \(\downarrow\)} \\ \cline{3-6} & & \multicolumn{2}{c|}{LS (Test set)} & \multicolumn{2}{c}{TEDLIUM} \\ \cline{3-6} & & Clean & Other & Dev & Test \\ \hline \multicolumn{6}{c}{Testing set from the original data (Clean)} \\ \hline HuBERT Base & LibriSpeech & **9.8** & 18.2 & **25.4** & **23.6** \\ deHuBERT (Ours) & LibriSpeech & 10.1 & **18.1** & 25.5 & 23.8 \\ \hline HuBERT Base & TEDLIUM & **14.9** & 23.8 & **18.1** & **17.3** \\ deHuBERT (Ours) & TEDLIUM & 15.2 & **23.7** & 18.2 & 17.4 \\ \hline \multicolumn{6}{c}{Testing set with additive FreeSound noise (0-20 dB)} \\ \hline HuBERT Base & LibriSpeech & 20.3 & 36.4 & 35.8 & 36.4 \\ deHuBERT (Ours) & LibriSpeech & **13.4** & **26.0** & **30.1** & **30.3** \\ \hline HuBERT Base & TEDLIUM & 23.5 & 38.8 & 26.4 & 27.8 \\ deHuBERT (Ours) & TEDLIUM & **19.3** & **32.8** & **22.7** & **22.8** \\ \hline \multicolumn{6}{c}{Testing set with additive OOD, office noise (0–20 dB)} \\ \hline HuBERT Base & LibriSpeech & 26.6 & 44.5 & 42.2 & 43.9 \\ deHuBERT (Ours) & LibriSpeech & **17.0** & **32.0** & **33.7** & **35.5** \\ \hline HuBERT Base & TEDLIUM & 30.6 & 46.2 & 34.5 & 35.3 \\ deHuBERT (Ours) & TEDLIUM & **23.2** & **37.4** & **26.2** & **27.7** \\ \hline \end{tabular} \end{table} Table 2: Results on various out-of-domain noisy conditions. We finetuned our model with the respective 10h dataset.
2305.19772
A note on Serrin's type problem on Riemannian manifolds
In this paper, we deal with Serrin-type problems on Riemannian manifolds. First, we obtain a Heintze-Karcher inequality and a Soap Bubble result, with its respective rigidity, when the ambient space has a Ricci tensor bounded below. Afterwards, we approach a Serrin problem in bounded domains of manifolds endowed with a closed conformal vector field. Our primary tool, in this case, is a new Pohozaev identity, which depends on the scalar curvature of the manifold. Applications involve Einstein and constant scalar curvature spaces.
Allan Freitas, Alberto Roncoroni, Márcio Santos
2023-05-31T12:05:08Z
http://arxiv.org/abs/2305.19772v2
# A note on Serrin's type problem on Riemannian manifolds ###### Abstract. In this paper, we deal with Serrin-type problems on Riemannian manifolds. First, we obtain a Heintze-Karcher inequality and a Soap Bubble result, with its respective rigidity, when the ambient space has a Ricci tensor bounded below. Afterwards, we approach a Serrin problem in bounded domains of manifolds endowed with a closed conformal vector field. Our primary tool, in this case, is a new Pohozaev identity, which depends on the scalar curvature of the manifold. Applications involve Einstein and constant scalar curvature spaces. 2020 Mathematics Subject Classification: Primary 35R01, 35N25, 53C24; Secondary 35B50, 58J05, 58J32. ## 1. Introduction In [26], inspired by a question proposed by Fosdick concerning a problem in fluid dynamics, Serrin began to study the rigidity of the following Poisson equation with boundary constraints: \[\Delta u = -1\ \ \text{in}\ \ \ \Omega,\ \ \ u=0\ \ \ \text{on}\ \ \ \partial\Omega, \tag{1.1}\] \[u_{\nu} = -c\ \ \ \text{on}\ \ \ \partial\Omega, \tag{1.2}\] where \(\Omega\subset\mathbb{R}^{n}\) is a bounded domain, \(\nu\) is the outward unit normal, \(u_{\nu}\) is the normal derivative of \(u\) on \(\partial\Omega\) and \(c\) is a positive constant. That work proved that the problem (1.1)-(1.2) admits a solution if and only if \(\Omega\) is a ball and \(u\) is a radial function. The technique used to solve this problem relies on the so-called _method of moving planes_. We refer to the original paper [26] for further details. In the same journal issue, Weinberger [30] provided a simpler proof based on integral identities; this method is nowadays known as the _\(P\)-function approach_. In short, Weinberger defines a sub-harmonic function \(P(u)\) associated with the solution of (1.1)-(1.2) and applies the classical strong maximum principle to prove that this function is constant, which, in turn, implies the rigidity result. One main step to prove this constancy is the classical Pohozaev identity for Euclidean domains (we refer to Remark 3 below and to the paper [19] for further details). After Serrin's and Weinberger's papers, several alternative proofs and generalizations of Serrin's result in \(\mathbb{R}^{n}\) appeared in the literature (see, e.g., [1, 2, 3, 5, 4, 10, 11, 13, 14, 15, 21, 27, 28] and the references therein). It is no coincidence that these two methods to prove the rigidity of the problem (1.1)-(1.2), i.e., the method of moving planes and the integral technique, recall the two classical approaches to the Alexandrov Soap Bubble Theorem; the integral approach, due to Ros, rests on the classical Heintze-Karcher inequality \[\frac{n-1}{n}\int_{\partial\Omega}\frac{1}{H}\geq Vol(\Omega), \tag{1.3}\] valid for bounded Euclidean domains whose boundary has positive mean curvature \(H\). In this paper, we first consider an \(n\)-dimensional Riemannian manifold \((M,g)\) whose Ricci tensor satisfies \[Ric\geq(n-1)kg \tag{1.4}\] for some \(k\in\mathbb{R}\), and, on a bounded domain \(\Omega\subset M\), the problem \[\Delta u+nku=-1\ \ \text{in}\ \ \Omega,\qquad u>0\ \ \text{in}\ \ \Omega,\qquad u=0\ \ \text{on}\ \ \partial\Omega. \tag{1.5}\] Our first main result is the following Heintze-Karcher type inequality. **Theorem 1.**_Let \((M^{n},g)\) be a Riemannian manifold satisfying (1.4), let \(\Omega\subset M\) be a bounded domain whose boundary has positive mean curvature \(H\), and let \(u\) be a solution of (1.5). Then_ \[\frac{n-1}{n}\int_{\partial\Omega}\frac{1}{H}\geq Vol(\Omega)+nk\int_{\Omega}u,\] _and the equality occurs if and only if \(\Omega\) is a geodesic ball and \(u\) is a radial function._ We mention that when \(k=0\), we recover the classical Heintze-Karcher inequality (1.3). The essential tool here is Reilly's identity applied to the solution of (1.5), performing the analog of Ros' proof. In particular, as a Corollary of this Theorem, we obtain an Alexandrov Soap Bubble-type result (see Theorem 4 below). Turning to Weinberger's \(P\)-function approach, the two ingredients associated with his technique, that is, a maximum principle for a suitable \(P\)-function and a Pohozaev-type identity, have been reproduced to study overdetermined problems for domains in Riemannian manifolds. 
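Before proceeding, we recall (a standard computation, added here only for orientation and not taken from [26]) the explicit radial model in the Euclidean case: on the ball \(B_{R}(0)\subset\mathbb{R}^{n}\), the function \[u(x)=\frac{R^{2}-|x|^{2}}{2n}\] satisfies \(\Delta u=-1\) in \(B_{R}\) and \(u=0\) on \(\partial B_{R}\), while \(\nabla u(x)=-\frac{x}{n}\) gives \(u_{\nu}=-\frac{R}{n}\) on \(|x|=R\). Hence (1.1)-(1.2) holds with \(c=\frac{R}{n}\), and Serrin's theorem asserts that, up to translations, this is the only configuration.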
In particular, by studying Serrin's problem on simply connected manifolds with constant sectional curvature, the space forms, Ciraolo and Vezzoni [6] used the \(P\)-function approach to prove the analog of Serrin's theorem for the problem (1.5) with the overdetermined condition \[u_{\nu} = -c\quad\mbox{on}\quad\partial\Omega, \tag{1.6}\] where \(\Omega\subset M\) is a bounded domain and \(M=\mathbb{R}^{n}\) when \(k=0\), \(M=\mathbb{H}^{n}\) when \(k=-1\) and \(M=\mathbb{S}^{n}_{+}\) when \(k=1\). We also refer to [16] for the same result via the method of moving planes. Following the line in [6], Farina and the second author [12] (see also [24] for a previous partial result) studied this same problem on warped products, also obtaining rigidity results by considering such ambient spaces with Ricci curvature bounded below. On the other hand, it is known that Serrin's theorem is, in general, false on Riemannian manifolds; indeed, given a compact Riemannian manifold \((M,g)\) whose scalar curvature function has a non-degenerate critical point \(p\in M\), it is possible to construct a smooth foliation \((\partial\Omega_{\varepsilon})\) of a neighborhood of \(p\), where \(\Omega_{\varepsilon}\) is a domain in which (1.1)-(1.2) possesses a solution with \(c=\frac{\varepsilon}{n}\) (we refer to [9], and to [20] and [8] for related results for the eigenvalue problem). In this work, we deal with Serrin-type problems on Riemannian manifolds, providing geometric conditions on the ambient space that yield rigidity results. In particular, we study problem (1.5)-(1.6) in a class of Riemannian manifolds satisfying (1.4) that admit a closed conformal vector field. A vector field \(X\in\mathfrak{X}(M)\) is called _closed conformal_ if \[\nabla_{Y}X=\varphi Y,\quad\text{ for all }Y\in\mathfrak{X}(M),\] for some smooth function \(\varphi\) called the _conformal factor_. For such a class, we obtain a Pohozaev identity that we believe is of independent interest (see Lemma 3 below). This identity will be combined with the \(P\)-function approach to obtain geometric constraints that imply rigidity (see Theorem 5 below). Some interesting applications arise when the ambient space is, e.g., an Einstein manifold. **Theorem 2.**_Let \((M,g)\) be an Einstein manifold with \(Ric=(n-1)kg\), for some \(k\in\mathbb{R}\), endowed with a closed conformal vector field with a positive conformal factor \(\varphi\). If \(u\) is a positive solution of the problem (1.5)-(1.6), then \(\Omega\) is a metric ball and \(u\) is a radial function._ Finally, another important tool in Ros' proof of the Alexandrov Soap Bubble Theorem (besides the already mentioned Heintze-Karcher inequality) is the _Minkowski identity_ (see e.g. [23]): \[\int_{\partial\Omega}H\langle x-p,\nu\rangle dx=(n-1)|\partial\Omega|,\] where \(p\in\mathbb{R}^{n}\) and \(\Omega\) is a bounded domain in \(\mathbb{R}^{n}\). Motivated by this, we have the following result. **Theorem 3.**_Let \((M,g)\) be a manifold endowed with a closed conformal vector field \(X\) and constant scalar curvature \(R=n(n-1)k\). Suppose that \(u\) is a positive solution of the problem (1.5)-(1.6). 
Then,_ \[\int_{\partial\Omega}\langle X,\nu\rangle((n-1)-cnH)=0,\] _where \(H\) denotes the mean curvature of \(\partial\Omega\). In particular, if \(k=0\), \(\langle X,\nu\rangle>0\) and \(\partial\Omega\) has constant mean curvature, then \(H=\frac{(n-1)}{n}\frac{|\partial\Omega|}{|\Omega|}\)._ **Organization of the paper.** In Section 2, we prove Theorem 1 and apply it to establish an Alexandrov Soap Bubble Theorem. In Section 3, we consider the class of Riemannian manifolds that admit a closed conformal vector field; we prove a Pohozaev-type identity and obtain a general integral condition that implies rigidity, giving some applications (like Theorems 2 and 3). ## 2. Heintze-Karcher inequality and Soap Bubble Theorem We begin this section by recalling the well-known _Reilly identity_, proved in [22], \[\int_{\Omega}\Big{[}\frac{n-1}{n}(\Delta f)^{2}-|\mathring{\nabla}^{2}f|^{2}\Big{]}\\ =\int_{\partial\Omega}\Big{(}h(\bar{\nabla}z,\bar{\nabla}z)+2f_{\nu}\bar{\Delta}z+Hf_{\nu}^{2}\Big{)}+\int_{\Omega}Ric(\nabla f,\nabla f), \tag{2.1}\] which holds true for every domain \(\Omega\) in a Riemannian manifold \((M^{n},g)\) and for every \(f\in C^{\infty}(\overline{\Omega})\), where \(\mathring{\nabla}^{2}f\) denotes the traceless Hessian of \(f\), explicitly \[\mathring{\nabla}^{2}f=\nabla^{2}f-\frac{\Delta f}{n}g\,,\] \(\bar{\nabla}\) and \(\bar{\Delta}\) indicate the gradient and the Laplacian of the induced metric on \(\partial\Omega\), \(z=f|_{\partial\Omega}\), \(\nu\) is the unit outward normal of \(\partial\Omega\), and \(h(X,Y)=g(\nabla_{X}\nu,Y)\) and \(H=tr_{g}h\) are the second fundamental form and the mean curvature (with respect to \(\nu\)) of \(\partial\Omega\), respectively. In this section, we consider the following problem \[\left\{\begin{array}{rcll}\Delta u+nku&=&-1\quad\mbox{in}\quad\Omega\\ u&>&0\\ u&=&0\quad\mbox{on}\quad\partial\Omega,\end{array}\right. \tag{2.2}\] where \(k\in\mathbb{R}\), and the main purpose is to obtain the Heintze-Karcher type inequality announced in Theorem 1. The first step is to apply the Reilly identity (2.1) to the solution of (2.2). **Lemma 1.**_Let \(u\) be a solution of (2.2). Then_ \[\int_{\Omega}|\mathring{\nabla}^{2}u|^{2}+\int_{\Omega}[Ric-(n-1)kg](\nabla u,\nabla u)=-\frac{1}{n}\int_{\partial\Omega}u_{\nu}[(n-1)+nHu_{\nu}]. \tag{2.3}\] Proof.: Reilly's identity applied to the solution of (2.2) implies \[\int_{\Omega}|\mathring{\nabla}^{2}u|^{2}=-\int_{\partial\Omega}Hu_{\nu}^{2}-\int_{\Omega}\mbox{Ric}(\nabla u,\nabla u)+\frac{n-1}{n}\int_{\Omega}(\Delta u)^{2}. \tag{2.4}\] On the other hand, again using (2.2), \[\int_{\Omega}(\Delta u)^{2} = \int_{\Omega}\Delta u(-1-nku) = -\int_{\partial\Omega}u_{\nu}-nk\int_{\Omega}u\Delta u = -\int_{\partial\Omega}u_{\nu}+nk\int_{\Omega}|\nabla u|^{2}. \tag{2.5}\] Replacing (2.5) in (2.4), the result follows. An immediate consequence of the previous formula is the following **Corollary 1.**_Let \((M^{n},g)\) be a manifold with \(Ric\geq(n-1)kg\), for some \(k\in\mathbb{R}\). Let \(\Omega\subset M\) be a domain and \(u\) a solution of (2.2). If_ \[u_{\nu}(x)=-\frac{n-1}{nH(x)}\quad\mbox{on }\partial\Omega, \tag{2.6}\] _then \(\Omega\) is a ball and \(u\) is a radial function._ Proof.: From (2.6) and the condition on the Ricci curvature, we immediately get, from Lemma 1, that \[\mathring{\nabla}^{2}u=0\quad\mbox{in }\Omega.\] The conclusion follows immediately from the Obata-type theorem in [12, Lemma 6]. 
**Remark 1**.: _The technique introduced in this section, which is a rearrangement of a Reilly(-type) formula applied to a solution of a PDE, can be extended to study similar problems. In this sense, for example, the last result can be related (and indicates directions for extensions) to a recent work on Serrin's problem for the \(p\)-Laplacian (see [7, Theorem 1.6])._ Another consequence of Lemma 1 is Theorem 1. Proof of Theorem 1.: Since \[\frac{1}{nH}[(n-1)+nHu_{\nu}]^{2} = \frac{(n-1)^{2}}{nH}+u_{\nu}[(n-1)+nHu_{\nu}]+u_{\nu}(n-1),\] we have \[-\frac{1}{n}\int_{\partial\Omega}u_{\nu}[(n-1)+nHu_{\nu}] = -\int_{\partial\Omega}\frac{1}{n^{2}H}[(n-1)+nHu_{\nu}]^{2}+\left(\frac{n-1}{n}\right)^{2}\int_{\partial\Omega}\frac{1}{H}+\frac{n-1}{n}\int_{\partial\Omega}u_{\nu}.\] On the other hand, \[\int_{\partial\Omega}u_{\nu} = \int_{\Omega}\Delta u = -Vol(\Omega)-nk\int_{\Omega}u.\] Then, by Lemma 1, \[\int_{\Omega}|\mathring{\nabla}^{2}u|^{2}+\int_{\Omega}[\mbox{Ric}-(n-1)kg](\nabla u,\nabla u)+\frac{1}{n^{2}}\int_{\partial\Omega}\frac{1}{H}[(n-1)+nHu_{\nu}]^{2} = \left(\frac{n-1}{n}\right)^{2}\int_{\partial\Omega}\frac{1}{H}-\frac{n-1}{n}\mbox{Vol}(\Omega)-(n-1)k\int_{\Omega}u. \tag{2.7}\] Since the left-hand side of (2.7) is non-negative, we obtain the desired inequality. Furthermore, the equality holds if and only if all integrals on the left-hand side of (2.7) vanish (since each integral is non-negative). In particular, this implies \(\mathring{\nabla}^{2}u=0\); therefore, \(\Omega\) is a geodesic ball and \(u\) is a radial function. The last result that we prove in this section is an integral identity inspired by the ones in [17]: **Theorem 4** (Soap Bubble Theorem).: _Let \(\Omega\) be a domain in a Riemannian manifold \((M^{n},g)\) with \(\mbox{Ric}\geq(n-1)kg\), for some \(k\in\mathbb{R}\), and \(u\) be a solution of (2.2). Then_ \[\int_{\partial\Omega}(H_{0}-H)(u_{\nu})^{2}\geq 0,\] _where \(H\) is the mean curvature of \(\partial\Omega\) and \(H_{0}=\frac{1}{c}\) with \(c\) the constant given by (2.9). In particular, if \(H\geq H_{0}\) on \(\partial\Omega\), then \(\Omega\) is a ball (and, a fortiori, \(u_{\nu}=-c\))._ Proof.: We start from (2.3) and analyse its right-hand side: \[-\frac{1}{n}\int_{\partial\Omega}u_{\nu}[(n-1)+nHu_{\nu}]= \frac{n+1}{n}\int_{\partial\Omega}u_{\nu}+c|\partial\Omega|-\frac{1}{c}\int_{\partial\Omega}\left(u_{\nu}+c\right)^{2}+\int_{\partial\Omega}(H_{0}-H)u_{\nu}^{2}, \tag{2.8}\] where we used the following trivial identity \[\int_{\partial\Omega}Hu_{\nu}^{2} = H_{0}\int_{\partial\Omega}u_{\nu}^{2}+\int_{\partial\Omega}(H-H_{0})u_{\nu}^{2} = \frac{1}{c}\int_{\partial\Omega}\left(u_{\nu}+c\right)^{2}-2\int_{\partial\Omega}u_{\nu}-c|\partial\Omega|+\int_{\partial\Omega}(H-H_{0})u_{\nu}^{2}.\] We choose \[c := -\frac{n+1}{n|\partial\Omega|}\int_{\partial\Omega}u_{\nu} = \frac{(n+1)}{n|\partial\Omega|}\left(Vol(\Omega)+nk\int_{\Omega}u\right), \tag{2.9}\] so that the first two terms on the right-hand side of (2.8) cancel; combining with (2.3), we obtain \[\int_{\partial\Omega}(H_{0}-H)u_{\nu}^{2}=\int_{\Omega}|\mathring{\nabla}^{2}u|^{2}+\int_{\Omega}[\mbox{Ric}-(n-1)kg](\nabla u,\nabla u)+\frac{1}{c}\int_{\partial\Omega}(u_{\nu}+c)^{2}\geq 0.\] If, moreover, \(H\geq H_{0}\) on \(\partial\Omega\), the left-hand side is non-positive, so every term above vanishes; in particular \(\mathring{\nabla}^{2}u=0\), hence \(\Omega\) is a ball and \(u_{\nu}=-c\). ## 3. Serrin type problem in an ambient endowed with a closed conformal vector field Given a Riemannian manifold \((M,g)\), we recall that a vector field \(X\in\mathfrak{X}(M)\) is called _closed conformal_ if \[\nabla_{Y}X=\varphi Y, \tag{3.1}\] for every vector field \(Y\in\mathfrak{X}(M)\), where \(\varphi\) is a smooth function called the conformal factor. Throughout this section, \(M\) denotes a manifold endowed with a closed conformal vector field \(X\in\mathfrak{X}(M)\), and \(\varphi\) is its conformal factor. 
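A basic example, included here only for illustration (it is standard and not taken from the references), is the radial field in Euclidean space: for a fixed \(p\in\mathbb{R}^{n}\), the field \(X=x-p\) satisfies \[\nabla_{Y}X=Y\quad\text{for every }Y\in\mathfrak{X}(\mathbb{R}^{n}),\] so \(X\) is closed conformal with conformal factor \(\varphi\equiv 1>0\). Similarly, on a warped product \(I\times_{f}N\), the field \(X=f\partial_{t}\) is closed conformal with \(\varphi=f^{\prime}\), a fact recalled before Corollary 2 below.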
**Remark 2**.: _Manifolds endowed with a nontrivial closed conformal vector field are locally isometric to a warped product with a 1-dimensional factor (for details, we refer, e.g., to [18, Section 3]). In this sense, the results of this section extend, and can be compared with, those previously obtained in [12]. Despite this, we provide new geometric identities and conclusions (even in the case of warped products) that permit their application to Einstein and constant scalar curvature ambient spaces._ Our first result is the following Bochner-type identity. **Lemma 2**.: _Let \(f\) be a smooth function on \(M\). Then,_ \[\Delta\langle X,\nabla f\rangle=\langle\nabla\varphi,\nabla f\rangle(2-n)+2\varphi\Delta f+\langle\nabla\Delta f,X\rangle,\] _where \(X\) is the closed conformal field with conformal factor \(\varphi\)._ Proof.: First, a straightforward calculation shows that \[\nabla\langle X,\nabla f\rangle=\varphi\nabla f+\nabla^{2}f(X). \tag{3.2}\] We observe that \[div(\nabla^{2}f(X)) = \frac{1}{2}\langle\nabla^{2}f,\mathcal{L}_{X}g\rangle+\langle\nabla\Delta f,X\rangle+Ric(\nabla f,X),\] where \(\mathcal{L}\) denotes the Lie derivative, and we have used Ricci's identity \(div(\nabla^{2}f)=\nabla\Delta f+Ric(\nabla f)\). Then, by taking the divergence of (3.2), \[\Delta\langle X,\nabla f\rangle=\langle\nabla\varphi,\nabla f\rangle+\varphi\Delta f+\frac{1}{2}\langle\nabla^{2}f,\mathcal{L}_{X}g\rangle+\langle\nabla\Delta f,X\rangle+Ric(\nabla f,X). \tag{3.3}\] On the other hand, by (3.1), the curvature tensor satisfies \[R(u,v)X = \langle u,\nabla\varphi\rangle v-\langle v,\nabla\varphi\rangle u\] for all \(u,v\in\mathfrak{X}(M)\). Then \(Ric(X,\cdot)=-(n-1)\nabla\varphi\). Furthermore, again by (3.1), \(\mathcal{L}_{X}g=2\varphi g\). Therefore, replacing in (3.3), we get the desired result. \(\square\) We now devote our attention to the following overdetermined problem \[\left\{\begin{array}{rcl}\Delta u+nku&=&-1\quad\mbox{in}\quad\Omega\\ u&>&0\\ u&=&0\quad\mbox{on}\quad\partial\Omega,\\ |\nabla u|&=&c\quad\mbox{on}\quad\partial\Omega,\end{array}\right. \tag{3.4}\] where \(\Omega\) is a bounded domain in \((M,g)\). The following result provides a Pohozaev-type identity for such domains. **Lemma 3**.: _Let \(M\) be a manifold endowed with a closed conformal vector field \(X\). Let \(u\) be a solution of the problem (3.4). Then_ \[\frac{n+2}{n}\int_{\Omega}\varphi u=c^{2}\int_{\Omega}\varphi-\frac{n-2}{2n(n-1)}\int_{\Omega}u^{2}(\varphi R+\frac{1}{2}X(R))-2k\int_{\Omega}\varphi u^{2}, \tag{3.5}\] _where \(R\) is the scalar curvature of \(M\)._ Proof.: If \(u\) is a solution of the Serrin problem (3.4), Lemma 2 gives \[\Delta\langle X,\nabla u\rangle=\langle\nabla\varphi,\nabla u\rangle(2-n)+2\varphi(-1-nku)-nk\langle X,\nabla u\rangle.\] Thus, multiplying this identity by \(u\) and using again (3.4), we conclude that \[u\Delta\langle X,\nabla u\rangle-\langle X,\nabla u\rangle\Delta u=u\langle\nabla\varphi,\nabla u\rangle(2-n)-2\varphi u-2n\varphi ku^{2}+\langle X,\nabla u\rangle. \tag{3.6}\] 
Now, note that \(u\Delta\langle X,\nabla u\rangle-\langle X,\nabla u\rangle\Delta u=div(u\nabla\langle X,\nabla u\rangle-\langle X,\nabla u\rangle\nabla u)\). Since \(u=0\) along the boundary and \(\nu=-\frac{\nabla u}{|\nabla u|}\), we conclude from the divergence theorem \[\int_{\Omega}u\Delta\langle X,\nabla u\rangle-\langle X,\nabla u\rangle\Delta u=-\int_{\partial\Omega}\langle X,\nabla u\rangle\langle\nabla u,\nu\rangle=c\int_{\partial\Omega}\langle X,\nabla u\rangle.\] Again using the divergence theorem, \[\int_{\partial\Omega}\langle X,\nabla u\rangle=-c\int_{\Omega}divX=-cn\int_{\Omega}\varphi.\] Thus, \[\int_{\Omega}u\Delta\langle X,\nabla u\rangle-\langle X,\nabla u\rangle\Delta u=-c^{2}n\int_{\Omega}\varphi. \tag{3.7}\] We intend to study the integral of the right-hand side of (3.6). First, since \(u=0\) on \(\partial\Omega\) and \(divX=n\varphi\), we have \[\int_{\Omega}\langle X,\nabla u\rangle=-\int_{\Omega}u\,divX=-n\int_{\Omega}u\varphi. \tag{3.8}\] Moreover, using again that \(u=0\) on \(\partial\Omega\), \[\int_{\Omega}u\langle\nabla\varphi,\nabla u\rangle=-\frac{1}{2}\int_{\Omega}u^{2}\Delta\varphi. \tag{3.9}\] From (3.6), (3.7), (3.8), and (3.9) we get that \[\frac{n+2}{n}\int_{\Omega}\varphi u=c^{2}\int_{\Omega}\varphi+\frac{n-2}{2n}\int_{\Omega}u^{2}\Delta\varphi-2k\int_{\Omega}\varphi u^{2}. \tag{3.10}\] Finally, since \(Ric(X)=-(n-1)\nabla\varphi\), we conclude that \[-(n-1)\Delta\varphi = div(Ric(X)) = \frac{1}{2}\langle Ric,\mathcal{L}_{X}g\rangle+div(Ric)(X) = \varphi R+\frac{1}{2}X(R), \tag{3.11}\] where \(R\) denotes the scalar curvature of \(M\). From (3.10) and (3.11) we get the desired result. Now, we consider the following \(P\)-function \[P(u)=|\nabla u|^{2}+\frac{2}{n}u+ku^{2}. \tag{3.12}\] If \(u\) is a solution of (3.4), Bochner's formula applied to \(\nabla u\) can be used to prove (under a condition of Ricci curvature bounded below) the following sub-harmonicity of \(P(u)\), as well as a rigidity result (see Farina-Roncoroni [12, Lemma 5 and Lemma 6]). **Lemma 4** ([12]).: _Let \((M^{n},g)\) be an \(n\)-dimensional Riemannian manifold such that_ \[Ric\geq(n-1)kg.\] _Let \(\Omega\subset M\) be a domain and \(u\in C^{2}(\Omega)\) a solution of (3.4) in \(\Omega\). Then_ \[\Delta P(u)\geq 0.\] _Furthermore, \(\Delta P(u)=0\) if and only if \(\Omega\) is a metric ball and \(u\) is a radial function._ **Remark 3**.: _These two adapted tools, a Pohozaev-type identity and a suitable maximum principle, play a central role in the study of Serrin's problem via Weinberger's technique. We recall that to study (1.1)-(1.2) in Euclidean domains, Weinberger defined the \(P\)-function_ \[P(u)=|\nabla u|^{2}+\frac{2}{n}u.\] _This function is subharmonic by the Cauchy-Schwarz inequality,_ \[\Delta P(u)=2|\nabla^{2}u|^{2}-\frac{2}{n}\geq 0, \tag{3.13}\] _and, moreover,_ \[P(u)=c^{2}\quad\text{ on }\partial\Omega.\] _Hence, from the strong maximum principle, one gets that either \(P(u)\equiv c^{2}\) in \(\overline{\Omega}\) or \(P(u)<c^{2}\) in \(\Omega\). The second case would give a contradiction with the classical Pohozaev identity applied to (1.1)-(1.2) (see, e.g., Struwe [29, page 171] for this identity). This means that \(P(u)\equiv c^{2}\) in \(\overline{\Omega}\) and (3.13) is an equality, which implies that \(u\) is a radial function and \(\Omega\) is a ball._ From now on, the goal is to provide a condition for the \(P\)-function associated with the problem to be harmonic. 
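To spell out the standard computation behind (3.13) in Remark 3 (added here for completeness): in the Euclidean case, the Bochner formula applied to a solution of (1.1) gives \[\frac{1}{2}\Delta|\nabla u|^{2}=|\nabla^{2}u|^{2}+\langle\nabla u,\nabla\Delta u\rangle=|\nabla^{2}u|^{2},\] since \(\Delta u=-1\) is constant. Hence \[\Delta P(u)=2|\nabla^{2}u|^{2}+\frac{2}{n}\Delta u=2|\nabla^{2}u|^{2}-\frac{2}{n}\geq\frac{2}{n}(\Delta u)^{2}-\frac{2}{n}=0,\] where the inequality is Cauchy-Schwarz, \(|\nabla^{2}u|^{2}\geq\frac{(\Delta u)^{2}}{n}\), with equality precisely when \(\nabla^{2}u\) is a multiple of the identity.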
The idea, which follows Weinberger's original approach, is to apply the maximum principle to the sub-harmonic function \(P(u)\) to prove that it is constant, given an integral condition. The following result, mainly through its geometric consequences, provides such a condition. It is directly inspired by the Pohozaev-type identity obtained in Lemma 3. **Theorem 5**.: _Let \((M,g)\) be a Riemannian manifold endowed with a closed conformal vector field \(X\) such that \(Ric\geq(n-1)kg\). If \(u\) is a solution of (3.4), the conformal factor \(\varphi\) of \(X\) is positive and_ \[\int_{\Omega}u^{2}\left[\varphi(-R+n(n-1)k)-\frac{1}{2}X(R)\right]\geq 0, \tag{3.14}\] _then \(\Omega\) is a metric ball and \(u\) is a radial function._ Proof.: Consider the \(P\)-function given by (3.12). We observe that \(P(u)=c^{2}\) on \(\partial\Omega\). Suppose by contradiction that \(P(u)<c^{2}\) in \(\Omega\). Thus, \[\varphi(|\nabla u|^{2}+\frac{2}{n}u+ku^{2})<c^{2}\varphi. \tag{3.15}\] From Lemma 3, we get that \[\frac{2}{n}\int_{\Omega}\varphi u=-\int_{\Omega}\varphi u+c^{2}\int_{\Omega}\varphi-\frac{n-2}{2n(n-1)}\int_{\Omega}u^{2}(\varphi R+\frac{1}{2}X(R))-2k\int_{\Omega}\varphi u^{2}. \tag{3.16}\] On the other hand, taking into account that \(u=0\) on the boundary, we use the divergence theorem to get \[\int_{\Omega}\varphi\,div(u\nabla u)=\frac{1}{2}\int_{\Omega}u^{2}\Delta\varphi=-\frac{1}{2(n-1)}\int_{\Omega}u^{2}(\varphi R+\frac{1}{2}X(R)).\] Moreover, since \(\Delta u=-1-nku\), it is not hard to see that \[\int_{\Omega}\varphi\,div(u\nabla u)=\int_{\Omega}\varphi(|\nabla u|^{2}-u(1+nku)).\] From the above equations, we conclude that \[\int_{\Omega}\varphi|\nabla u|^{2}=-\frac{1}{2(n-1)}\int_{\Omega}u^{2}(\varphi R+\frac{1}{2}X(R))+\int_{\Omega}\varphi u(1+nku). \tag{3.17}\] Since \(\varphi(|\nabla u|^{2}+\frac{2}{n}u+ku^{2})<c^{2}\varphi\), from (3.16) and (3.17) we conclude that \[-\frac{1}{n-1}\Big{(}\frac{n-2}{2n}+\frac{1}{2}\Big{)}\int_{\Omega}u^{2}(\varphi R+\frac{1}{2}X(R))+(n-1)k\int_{\Omega}u^{2}\varphi<0.\] Thus, \[\int_{\Omega}u^{2}\left[\varphi(-R+n(n-1)k)-\frac{1}{2}X(R)\right]<0,\] which gives a contradiction with (3.14). Thus, using the maximum principle, \(P(u)\equiv c^{2}\) and, consequently, \(\Delta P(u)\equiv 0\). The rigidity conclusion follows from Lemma 4. In order to give applications of the last result, we look for geometric properties which imply the integral condition (3.14). The first obvious condition arises in the following result in the context of Einstein manifolds. Proof of Theorem 2.: The Einstein condition \(Ric=(n-1)kg\) implies that \(R=n(n-1)k\) and \(X(R)=0\); then (3.14) is satisfied. This last result can be used to give explicit examples in warped products. We recall that if \(M=I\times_{f}N\) is a warped product, the vector field \(X=f\partial_{t}\) is a closed conformal vector field with conformal factor \(\varphi=f^{\prime}\). Thus, we get the following results. **Corollary 2**.: _Let \(M\) be the warped product \(\mathbb{R}\times_{e^{t}}N\), where \(N\) is a Ricci-flat manifold. If \(u\) is a solution of the problem (3.4) with \(k=-1\), then \(\Omega\) is a metric ball and \(u\) is a radial function._ **Corollary 3**.: _Let \(M\) be the warped product \((\varepsilon,+\infty)\times_{\cosh t}N\), \(\varepsilon>0\), where \(N\) is an Einstein manifold with scalar curvature given by \(\widetilde{R}=-(n-1)(n-2)\). 
If \(u\) is a solution of the problem (3.4) with \(k=-1\), then \(\Omega\) is a metric ball and \(u\) is a radial function._ In order to weaken the Einstein manifold assumption, we exploit again the closed conformal property of \(X\). First, we notice that, using (3.11), \[\Delta\varphi+nk\varphi = -\frac{1}{n-1}(\varphi R+\frac{1}{2}X(R))+nk\varphi = \frac{\varphi}{n-1}[-R+nk(n-1)]-\frac{1}{2(n-1)}X(R), \tag{3.18}\] and, thus, the inequality (3.14) is equivalent to \[\int_{\Omega}u^{2}(\Delta\varphi+nk\varphi)\geq 0. \tag{3.19}\] With this in mind, we have the following result. **Corollary 4**.: _Let \(M\) be a manifold endowed with a closed conformal vector field \(X\) satisfying \(Ric\geq(n-1)kg\). In addition, suppose that \(u\) is a solution of the problem (3.4) and \(Ric(X,\nabla u)\geq(n-1)k\langle\nabla u,X\rangle\). If the conformal factor \(\varphi\) of \(X\) is positive, then \(\Omega\) is a metric ball and \(u\) is a radial function._ Proof.: Since \(X\) is a closed conformal vector field, \[-(n-1)u^{2}\Delta\varphi=u^{2}div(Ric(X))=div(u^{2}Ric(X))-\langle\nabla u^{2},Ric(X)\rangle \tag{3.20}\] and \[u^{2}nk\varphi=u^{2}k\,div(X)=k\,div(u^{2}X)-k\langle\nabla u^{2},X\rangle. \tag{3.21}\] Since \(u=0\) along the boundary, from (3.20) and (3.21) we obtain \[\int_{\Omega}u^{2}(\Delta\varphi+nk\varphi)=\frac{2}{n-1}\int_{\Omega}u\big{(}Ric(X,\nabla u)-(n-1)k\langle\nabla u,X\rangle\big{)}.\] Then the condition \(Ric(X,\nabla u)\geq(n-1)k\langle\nabla u,X\rangle\) implies (3.19), and finally we use Theorem 5 to guarantee the desired result. We note that this series of identities arising from the existence of a closed conformal field also allows us to deal with the case of manifolds with constant scalar curvature. The influence of the scalar curvature on the existence of solutions is an interesting topic that, to the knowledge of the authors, was treated for the first time in the work of Fall-Minlend [9]. By considering a non-degenerate critical point \(p\) of the scalar curvature function of a Riemannian manifold \((M,g)\), they construct a smooth foliation \((\partial\Omega_{\varepsilon})\) of a neighborhood of \(p\), where \(\Omega_{\varepsilon}\) is a domain in which (3.4) with \(k=0\) possesses a solution (here the constant is \(c=\frac{\varepsilon}{n}\)). The following result, which can be viewed as a Minkowski-type identity, provides a necessary condition for having a solution in the case of an ambient space with constant scalar curvature (and therefore with degenerate critical points of the scalar curvature). Proof of Theorem 3.: Since \(u\) is a solution of (3.4), we get \[-\int_{\Omega}\varphi(1+nku)-\int_{\Omega}u\Delta\varphi=\int_{\Omega}div(\varphi\nabla u-u\nabla\varphi)=\int_{\partial\Omega}\varphi\langle\nabla u,\nu\rangle.\] Thus, \[-\int_{\Omega}u(\Delta\varphi+nk\varphi)=\int_{\partial\Omega}\varphi\langle\nabla u,\nu\rangle+\int_{\Omega}\varphi.\] Now, since \(R=n(n-1)k\), we conclude from (3.18) that \[\int_{\partial\Omega}\varphi\langle\nabla u,\nu\rangle+\int_{\Omega}\varphi=0.\] On the other hand, since \(\nu=-\frac{\nabla u}{|\nabla u|}\) and \(n\varphi=divX\), we get that \[-c\int_{\partial\Omega}div(X)+\int_{\partial\Omega}\langle X,\nu\rangle=0.\] Now, a straightforward calculation shows that \(\frac{n-1}{n}div(X)=\widetilde{div}(X^{T})+H\langle X,\nu\rangle\), where \(\widetilde{div}\) denotes the divergence operator of \(\partial\Omega\) and \(X^{T}\) is the tangential part of the vector field \(X\). Since \(\partial\Omega\) is closed, we obtain the desired identity from the divergence theorem. 
Finally, if in particular \(k=0\), we integrate the identity \(\Delta u=-1\) to get that \(c|\partial\Omega|=|\Omega|\). Taking into account that \(\langle X,\nu\rangle>0\), we conclude the desired result. The previous theorem provides a nonexistence result. **Corollary 5**.: _Let \(M\) be a manifold endowed with a closed conformal vector field \(X\) and constant scalar curvature \(R=n(n-1)k\). Let \(\Omega\) be a bounded domain with \(\langle X,\nu\rangle>0\). If \((n-1)-cnH\) has a strict sign on \(\partial\Omega\), then there exists no solution of problem (3.4) on \(\Omega\)._ Theorem 3 also provides a rigidity result for Serrin's problem. The following result uses this Minkowski-type formula (combined with Reilly's technique explored in Section 2) to obtain rigidity for geodesic balls. It is interesting to note that this coupling of techniques was used by Reilly himself to give an alternative proof of the classical Alexandrov Theorem in Euclidean space (see, for example, [22] for this alternative proof, and also [17] for a comprehensive discussion relating this theorem to Serrin's problem in Euclidean domains). **Corollary 6**.: _Let \((M^{n},g)\) be an Einstein manifold with scalar curvature \(R=n(n-1)k\), endowed with a closed conformal vector field \(X\). Let \(\Omega\) be a bounded domain with \(\langle X,\nu\rangle>0\) on \(\partial\Omega\). If \(u\) is a solution of (3.4), then \(\Omega\) is a geodesic ball and \(u\) is a radial function._ Proof.: The \(P\)-function (3.12) associated to (3.4) satisfies \[P(u)_{\nu}=2\nabla^{2}u(\nabla u,\nu)+\frac{2}{n}u_{\nu}+2kuu_{\nu}. \tag{3.22}\] Furthermore, \[Hu_{\nu}=|\nabla u|\,div\frac{\nabla u}{|\nabla u|}=\Delta u-\nabla^{2}u(\nu,\nu)=-1-nku-\nabla^{2}u(\nu,\nu). \tag{3.23}\] Since \(u=0\) on \(\partial\Omega\) and \(\nabla u=u_{\nu}\nu\), we replace (3.23) in (3.22) to obtain that, along \(\partial\Omega\), \[P(u)_{\nu}=-\frac{2}{n}u_{\nu}((n-1)+nHu_{\nu}). \tag{3.24}\] Lemma 4 implies that \(P(u)\) is sub-harmonic. Given (3.24), Hopf's maximum principle applied to \(P(u)\) gives \[-\frac{2}{n}u_{\nu}((n-1)+nHu_{\nu})\geq 0\] on \(\partial\Omega\). Thus, since \(u_{\nu}=-c\) on \(\partial\Omega\), \[0\leq\int_{\partial\Omega}((n-1)+nHu_{\nu})\langle X,\nu\rangle.\] From Theorem 3 and the above inequality, we conclude that \((n-1)+nHu_{\nu}=0\) on \(\partial\Omega\). Thus, from Corollary 1, we get the desired result. To close the paper, we provide one more necessary condition to find solutions to Serrin's problem on manifolds with constant scalar curvature (here we restrict to the \(k=0\) case in (3.4), but a similar analysis can be done for any \(k\), with a suitable change in the bound for the scalar curvature below). **Theorem 6**.: _Let \(M\) be a manifold with constant scalar curvature \(R\), endowed with a closed conformal vector field whose conformal factor \(\varphi\) is positive. 
Suppose that \(\Omega\subset M\) is a domain where a solution exists for_ \[\left\{\begin{array}{rcll}\Delta u&=&-1,&\\ u&>&0&\mbox{in }\mbox{int}(\Omega),\\ u&=&0&\mbox{on }\partial\Omega,\\ |\nabla u|&=&c&\mbox{on }\partial\Omega.\end{array}\right.\] _Then, \(R>-\frac{|\partial\Omega|^{2}}{|\Omega|^{2}}\frac{(n-1)}{2(n-2)}\frac{(n+2)^{2}}{n}.\)_ Proof.: Suppose, by contradiction, that \(u\) is a solution of (3.4) on a bounded domain \(\Omega\subset M\) satisfying \(R\leq-\frac{|\partial\Omega|^{2}}{|\Omega|^{2}}\frac{(n-1)}{2(n-2)}\frac{(n+2)^{2}}{n}.\) Since the scalar curvature is constant, we get, from (3.5), that \[\int_{\Omega}\varphi\left(\frac{n+2}{n}u-c^{2}+\frac{(n-2)R}{2n(n-1)}u^{2}\right)=0. \tag{3.25}\] It is easy to see that, since \(\Delta u=-1\), the divergence theorem implies that \(c|\partial\Omega|=|\Omega|\). Thus, our constraint on the scalar curvature implies that \[R\leq-\frac{n(n-1)}{2c^{2}(n-2)}\frac{(n+2)^{2}}{n^{2}},\] which is equivalent to the inequality \(\frac{(n+2)^{2}}{n^{2}}+2c^{2}\frac{(n-2)R}{2n(n-1)}\cdot 2\leq 0,\) that is, \(\frac{(n+2)^{2}}{n^{2}}+2c^{2}\frac{(n-2)R}{n(n-1)}\leq 0.\) In particular, \(\frac{(n-2)R}{n(n-1)}<0.\) Now, let us consider the quadratic function \(y=\frac{n+2}{n}u-c^{2}+\frac{(n-2)R}{2n(n-1)}u^{2}.\) Note that the discriminant of this quadratic function is non-positive and, therefore, \(\frac{n+2}{n}u-c^{2}+\frac{(n-2)R}{2n(n-1)}u^{2}\leq 0\), with equality occurring only on a closed set. From (3.25), we reach a contradiction. ## Acknowledgements The authors would like to thank Alberto Farina and Luciano Mari for discussions about the subject of this paper and several valuable suggestions. The first author would like to thank the Mathematics Department of Universita degli Studi di Torino for its hospitality, where part of this work was carried out (he was supported by CNPq/Brazil Grant 200261/2022-3). The first and third authors have been partially supported by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) of the Ministry of Science, Technology and Innovation of Brazil, Grants 316080/2021-7 and 306524/2022-8, and supported by the Paraiba State Research Foundation (FAPESQ), Brazil, Grant 3025/2021. The second author is a member of GNAMPA, Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni of INdAM.
2309.07841
Two Timin': Repairing Smart Contracts With A Two-Layered Approach
Due to the modern relevance of blockchain technology, smart contracts present both substantial risks and benefits. Vulnerabilities within them can trigger a cascade of consequences, resulting in significant losses. Many current papers primarily focus on classifying smart contracts for malicious intent, often relying on limited contract characteristics, such as bytecode or opcode. This paper proposes a novel, two-layered framework: 1) classifying and 2) directly repairing malicious contracts. Slither's vulnerability report is combined with source code and passed through a pre-trained RandomForestClassifier (RFC) and Large Language Models (LLMs), classifying and repairing each suggested vulnerability. Experiments demonstrate the effectiveness of fine-tuned and prompt-engineered LLMs. The smart contract repair models, built from pre-trained GPT-3.5-Turbo and fine-tuned Llama-2-7B models, reduced the overall vulnerability count by 97.5% and 96.7% respectively. A manual inspection of repaired contracts shows that all retain functionality, indicating that the proposed method is appropriate for automatic batch classification and repair of vulnerabilities in smart contracts.
Abhinav Jain, Ehan Masud, Michelle Han, Rohan Dhillon, Sumukh Rao, Arya Joshi, Salar Cheema, Saurav Kumar
2023-09-14T16:37:23Z
http://arxiv.org/abs/2309.07841v1
# Two Timin': Repairing Smart Contracts With A Two-Layered Approach ###### Abstract Due to the modern relevance of blockchain technology, smart contracts present both substantial risks and benefits. Vulnerabilities within them can trigger a cascade of consequences, resulting in significant losses. Many current papers primarily focus on classifying smart contracts for malicious intent, often relying on limited contract characteristics, such as bytecode or opcode. This paper proposes a novel, two-layered framework: 1) classifying and 2) directly repairing malicious contracts. Slither's vulnerability report is combined with source code and passed through a pre-trained RandomForestClassifier (RFC) and Large Language Models (LLMs), classifying and repairing each suggested vulnerability. Experiments demonstrate the effectiveness of fine-tuned and prompt-engineered LLMs. The smart contract repair models, built from pre-trained GPT-3.5-Turbo and fine-tuned Llama-2-7B models, reduced the overall vulnerability count by 97.5% and 96.7% respectively. A manual inspection of repaired contracts shows that all retain functionality, indicating that the proposed method is appropriate for automatic batch classification and repair of vulnerabilities in smart contracts. Smart Contract, Vulnerability Detection, Slither, Large Language Model, Repair ## I Introduction As we delve into the crucial role smart contracts play in the global blockchain, it becomes increasingly imperative that we understand the severity of cyberattacks that exploit weak code. 2018 saw $23.5 million worth of cryptocurrencies stolen from the Bancor network due to the compromise of a wallet used to upgrade smart contracts, sparking controversy online over the safety of decentralized exchange and smart contract systems [16]. More recently, in 2020, a hacker drained Harvest Finance of $24 million by implementing a smart contract that manipulated the share values of the vaults [17]. The common theme across these hacks is that vulnerabilities within smart contracts were exploited to steal millions of dollars, highlighting the importance of strengthening smart contracts to prevent vulnerabilities from arising. Smart contracts provide a secure platform for transactions without the need for a trusted intermediary. For this reason, they have become increasingly common in blockchain applications. But because most blockchain applications prevent users from editing smart contracts after they have been deployed, there is a need for analysis tools that can accurately and precisely determine the vulnerabilities of smart contracts. Although most tools rely on expert-developed frameworks, recent research has begun developing deep learning models that can evaluate a smart contract's vulnerability. However, most existing deep learning models fail to provide helpful feedback on a smart contract's vulnerabilities -- instead, they only determine whether or not a smart contract is vulnerable. DLVA [1] introduces a three-step approach involving mapping bytecode to high-dimensional vectors, classifying vectors based on training data, and using neural networks to infer vulnerable contracts. However, a significant weakness in this approach was the high false positive rate during the prediction process. Similarly, MRN-GCN [5] utilizes deep learning with a nested contract graph capturing syntactic and semantic information, enabling the classification of vulnerable functions, but like [1], retained mixed recall percentages, ranging from 79.59% to 98.18%.
The authors of [3] take a different approach by proposing peer-to-peer voting and reward-and-slash mechanisms to mitigate and discourage malicious behavior in smart contracts. Large Language Models (LLMs) prove to be exceptional at performing complex tasks. The authors of [8] demonstrated the capabilities of various LLMs in identifying vulnerabilities in DeFi smart contracts, with F1-scores significantly higher than random baselines, which has the potential to be improved by the tool enhancement framework developed in [4]. Prompt engineering allows LLMs to be substantially enhanced. One powerful LLM prompt engineering method involves Chain of Thought (CoT) prompting [2], which significantly improves the ability of LLMs to perform complex reasoning. With eight CoT exemplars, [2] achieves an accuracy of 56.9 with PaLM-540B on the GSM8K benchmark, an improvement of 39 points. However, that paper relies solely on CoT, neglecting fine-tuning entirely. In a similar vein, the authors of [7] present a framework that improves upon CoT by transferring advanced reasoning abilities from large models to smaller ones through knowledge distillation, resulting in improved question-answering performance. In another scenario, [6] utilized prompt engineering by giving ChatGPT specific information, such as the translation's purpose and target audience, leading to industry-standard translation quality. A comprehensive survey [11] described the current landscape of smart contract security, identifying eight core defense methods across 133 models. This finding underscores the complexity of the field but also reveals limitations. One limitation is seen in applying automated smart contract tools to DeFi systems [12]. Surprisingly, these tools only detected 8% of attacks, indicating a challenge with intricate vulnerabilities. Addressing this, [13] evaluated five smart contract detection tools, focusing on three types of vulnerabilities. [13]'s analysis determined that different detection models have varying strengths and weaknesses, suggesting a combination of methods may be more effective. Furthermore, this notion is corroborated by [9] and [10], which both utilize Multi-Task Learning, a combination method that leverages concurrent learning and optimization of multiple tasks. Notably, [14] advances this methodology by using an approach that blends K-means clustering and LSTM networks with a universal sentence encoder. This approach captured the smart contract code's semantic meaning, outperforming baseline models. Moreover, current work on repairing smart contracts has been shown to be reliable. For example, [19] utilizes a framework called ContractFix to repair vulnerabilities with 94% accuracy. ContractFix was based around static code analyzers and focused on repairing broken patches. Similarly, [15] utilizes a tool, Elysium, to repair patches in bytecode for seven vulnerabilities. However, this paper improves on these frameworks in two main ways. First, our framework is built on LLMs, which allow for a more robust repair process that is adaptable to zero-day vulnerabilities. Second, we work directly with source code, which is a novel approach to repairing vulnerabilities. These existing methods have been shown to work well in vulnerability detection across various situations with relatively little statistical error.
However, we show that existing vulnerability detection methods face the following problems: 1) lack of a broad approach, 2) little detail on specific errors, 3) high false positive evaluations, and 4) lack of a direct repair framework. To address all these problems, we propose a novel pipeline. The pipeline first utilizes Slither and a RandomForestClassifier to detect and report specific vulnerabilities within smart contract source code. After filtering out non-malicious contracts, two LLMs, GPT-3.5-Turbo and a fine-tuned Llama-2-7B generation model, each repair the vulnerable smart contract source code. The repaired contract is then evaluated by Slither against its vulnerable counterpart, assessing the effectiveness of the repair. The rest of this paper is outlined as follows: Section II details our novel pipeline approach, which utilizes two layers for vulnerability detection, Slither and a RandomForestClassifier, to classify vulnerable smart contracts, and two LLMs (Llama-2-7B and GPT-3.5-Turbo) to repair them. Section III exhibits the results of our approach in comparison to existing methods. Section IV provides a conclusion. ## II Methods ### _Datasets_ To achieve high-quality results in training our framework, which utilizes a RandomForestClassifier and LLMs for classification and repair (Fig. 1), several essential features must be incorporated. A source code column ("contract_source") is necessary to run Slither and the LLMs. However, since the datasets consistently excluded source code, a web scraping algorithm that employed the "contract_address" column was necessary to obtain source code from Etherscan, supplemented by generated contracts (see subsection D). In order to account for source code that could not be scraped through Etherscan, the dataset (200,000 contracts) was reduced to 2,500 rows. Slither was then run on the newly acquired source code (see subsection B), adding the columns "vulnerability", "confidence", and "impact". Slither occasionally failed to produce a vulnerability report, totalling 474 failed contracts (an 80% success rate). To account for this, the dataset was reduced again to 2,000 smart contracts. Of the dataset, 400 were labeled malicious, and 1,600 were labeled non-malicious. Table I visualizes a segment of the finalized dataset. ### _Slither_ Slither is a static code analyzer, which checks smart contracts for vulnerabilities without executing the contract. Slither's initial input comes from the Solidity Abstract Syntax Tree (AST) generated by the Solidity compiler from the contract source code. The smart contract is then simplified into an intermediate representation called SlithIR. This intermediate representation is compared against current industry standards, and Slither outputs vulnerabilities. Slither leads the industry in smart contract vulnerability detection, outperforming other static code analyzers in almost every metric, as shown in Table II. This, coupled with our Random Forest Classifier, ensures high accuracy in detecting vulnerable smart contracts. After importing and running all 89 basic detectors provided by the API, we added each contract's vulnerabilities to the dataset as a list of Slither's natural-language detector names, with empty lists denoting contracts Slither deemed safe.
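As an illustration of this batch analysis, the following is a minimal Python sketch of how one contract can be passed through Slither and its report parsed. The `--json -` flag and the `check`/`impact`/`confidence` fields follow Slither's documented JSON output, but the exact schema should be verified against the installed Slither version; this is a sketch rather than the authors' exact tooling.

```python
import json
import subprocess

def run_slither(source_path):
    """Run Slither's detectors on one contract.

    Returns a list of {vulnerability, impact, confidence} dicts, an empty
    list if Slither reports nothing, or None if analysis fails entirely.
    """
    # "--json -" prints the machine-readable report on stdout; Slither's
    # exit code is nonzero whenever findings exist, so we parse the report
    # instead of checking the return code.
    proc = subprocess.run(
        ["slither", source_path, "--json", "-"],
        capture_output=True, text=True,
    )
    try:
        report = json.loads(proc.stdout)
    except json.JSONDecodeError:
        return None  # compilation/analysis failure, recorded as null
    if not report.get("success", False):
        return None
    return [
        {"vulnerability": d["check"],   # detector's short English name
         "impact": d["impact"],         # e.g. High / Medium / Low / Informational
         "confidence": d["confidence"]}
        for d in report.get("results", {}).get("detectors", [])
    ]
```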
### _Data Issues and Generation_ When it came to data collection, specific issues were encountered. Our biggest issue, extracting source code, proved to be a challenging task. For instance, in a dataset where only bytecode was given, we were unsuccessful in decompiling that code into analyzable source code, as we were unaware of the decompiler's limits. We also struggled to find additional malicious source code to train a model on, as our dataset only included 150 malicious contracts. To overcome this, we implemented OpenAI's GPT-3.5-Turbo to generate malicious source code. Initial attempts were barred by GPT-3.5's ethical limitations (Fig. 2). However, after jailbreaking GPT-3.5 with prompt engineering [18], GPT-3.5 would produce malicious source code that could be repaired by the model. The variability of the dataset made it difficult to generate Slither vulnerabilities for smart contracts, so a multi-step approach was used. The primary issue was the 100+ versions the contracts were written in, combined with the limited backward compatibility of Solidity -- i.e., version 0.4.11 could run on a compiler of version 0.4.26 but not on a compiler of version 0.5.0+. Addressing this required modifying each contract to read "pragma solidity \(\geq\){version}", creating five different scripts, and running each script on the entire dataset with one of the following five Solidity versions: 0.4.26, 0.5.17, 0.6.12, 0.7.6, or 0.8.21, with the Slither vulnerabilities of scripts that could not be compiled recorded as null, and those that could be compiled recorded with the English name of the vulnerability, obtained by parsing the returned JSON. Combining these lists resulted in the final list of Slither vulnerabilities for the 75% of smart contracts for which this method yielded results. Each detector class includes the detector's confidence and impact levels. After creating a key-value pair of each detector's English name and its confidence plus impact, this list was used to create confidence and impact lists for all vulnerabilities of each smart contract. ### _Classifier_ Various models were implemented to classify smart contract maliciousness. Ultimately, the RandomForestClassifier (RFC) provided the highest accuracy after pre-processing the finalized dataset. The RFC is unable to train on the dataset as provided by web-scraping, generation, and Slither processing due to the abundance of unnecessary string-based features. So, unnecessary features are dropped, and necessary features are processed for the RFC. For example, "confidence" and "vulnerability" retain a weaker correlation to "malicious" in comparison to "impact", so to avoid convoluting the model, both are dropped. Thus, "contract_source" and "impact" remain as the classifying features and "malicious" as the target label. As all columns are still either string or boolean data types, the RFC is still unable to train on the dataset. "contract_source" was tokenized using the CountVectorizer (CV) tool from the scikit-learn library. "malicious" and "impact" were encoded into usable numeric values by mapping dictionaries. Since "impact" contained more than two possible outputs, unlike "malicious", the outputs of "impact" were scaled from 0-4.

Fig. 1: The "Two Timin'" Framework

Fig. 2: GPT-3.5's ethical limitations with production of "vulnerable" source code

After the tokenized and encoded columns are concatenated, the RFC's numeric prerequisite is fulfilled. The data is then split into a train-test split of 0.6-0.4 and randomized before the RFC fits to the train set and predicts on the test set. Accuracy and confusion are evaluated in _Results_.
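A minimal sketch of this classification stage follows, assuming a hypothetical `contracts.csv` export of the finalized dataset with the columns discussed above. The particular 0-4 impact encoding shown is an illustrative assumption (the paper does not spell out its mapping dictionary), and "impact" is treated as a single label per contract for simplicity.

```python
import pandas as pd
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

df = pd.read_csv("contracts.csv")  # hypothetical dump of the finalized dataset

# Tokenize the Solidity source into a bag-of-words matrix.
X_source = CountVectorizer().fit_transform(df["contract_source"])

# Encode string labels into numeric values via mapping dictionaries;
# this 0-4 scale is an illustrative assumption.
impact_map = {"None": 0, "Informational": 1, "Low": 2, "Medium": 3, "High": 4}
X_impact = csr_matrix(df["impact"].map(impact_map).to_numpy().reshape(-1, 1))
y = df["malicious"].map({False: 0, True: 1})

# Concatenate the tokenized source with the encoded impact feature.
X = hstack([X_source, X_impact])

# 0.6-0.4 randomized train-test split, then fit and predict.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, shuffle=True, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
print(accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))
```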
### _Large Language Models (LLMs)_ #### Finetuning Llama-2-7B We incorporated multiple Large Language Models to repair the smart contracts after they had been identified as malicious by our two-layered framework. The best results came from the Llama-2-7B model, which can be found on Hugging Face. This model finished training in July 2023; our finetuning process took place about three weeks later. The Llama-2-7B model has become very popular due to its low number of parameters and its reliability, making it a less memory-intensive alternative to other LLMs in the industry. The finetuning process took place on Google Colab using the T4 chip, which carries 16 GB of VRAM. However, Llama-2-7B's weights alone nearly fill this limit (7B parameters \(\times\) 2 bytes \(\approx\) 14 GB), and this does not include any activations, optimizer states, or gradients. Thus, to run Llama-2-7B without memory restrictions on a platform like Google Colab, we use parameter-efficient fine-tuning (PEFT). Specifically, we use QLoRA (Efficient Finetuning of Quantized LLMs), with 4-bit precision instead of the normal 16-bit precision. This quantization process allows for finetuning on Colab while also ensuring that the precision of the model is adequate, because when saving the 4-bit model, we also save the QLoRA adapters, which can be used with the model. Moreover, Llama-2-7B is open source, meaning the model is available to be downloaded and used locally. Traditional data privacy concerns with LLMs are therefore nullified, because all data is processed on the local machine, not on a third-party server. This bodes well for smart contracts, as many execute agreements with sensitive information and large sums of money. Llama-2-7B provides the benefits and accuracy of an advanced LLM while also providing the security and versatility necessary for blockchain technology. The Llama-2-7B model was fine-tuned on fifty smart contracts that were once malicious and then repaired, using a supervised learning approach. These smart contracts were gathered during the data collection mentioned above. Specifically, the source code was tokenized and embedded, using the quantization outlined previously. The model was trained over 100 steps, with training loss consistently decreasing with every step (as shown in Fig. 3). The supervised fine-tuning process allowed the model to understand the relationship between malicious source code and the same source code after repair, and to emulate that with any other contract.

Fig. 3: Training Loss vs Step for Finetuning Llama-2-7B.
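A compressed sketch of such a QLoRA setup is given below, using the Hugging Face `transformers`/`peft`/`trl` stack that was current at the time of this work. Exact argument names vary across library versions, and the dataset file name, LoRA hyperparameters, and sequence length are illustrative assumptions rather than the authors' exact configuration; only the 4-bit quantization and the 100 training steps are taken from the text.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

base = "meta-llama/Llama-2-7b-hf"

# 4-bit NF4 quantization so the 7B base weights fit on a 16 GB T4.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# Only the low-rank LoRA adapters are trained; 4-bit weights stay frozen.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# Hypothetical file: one "text" field per example, pairing a vulnerable
# contract with its repaired version.
data = load_dataset("json", data_files="repair_pairs.json", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=data,
    peft_config=lora,
    dataset_text_field="text",
    max_seq_length=2048,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="llama2-repair", max_steps=100,
                           per_device_train_batch_size=1, fp16=True),
)
trainer.train()
```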
#### Prompt Engineering We also utilized OpenAI's API to use GPT-3.5-Turbo to repair vulnerabilities. OpenAI is one of the most well-known names in the industry, with applications such as DALL-E and ChatGPT. Specifically, while all GPT models are optimized to generate code, GPT-3.5-Turbo offers the best combination of performance and efficiency. Moreover, by utilizing a "chat bot", we were able to use prompt engineering to create a prompt with the best possible performance. Directly querying GPT-3.5-Turbo to repair malicious code was unsuccessful. Similar to the generation of malicious smart contracts, GPT-3.5-Turbo was reluctant to work with malicious source code (Fig. 4). Thus, prompt engineering was utilized to circumvent this problem. First, the use of the word "malicious" needed to be removed. While we were looking for our LLM to repair malicious smart contracts, GPT-3.5-Turbo was instead asked to help us "fix vulnerable smart contracts". We then used Chain of Thought techniques so that the model would elaborate on what changes it made and why. This led to more accurate source code output and more vulnerabilities repaired. Additionally, this provided more information for the user, as the specific vulnerabilities in the malicious smart contract were highlighted and explained. Ultimately, our prompt (Fig. 5) used Slither's source code and vulnerabilities to prompt GPT-3.5-Turbo to repair the smart contracts. While Slither also outputs the impact level of and confidence in those vulnerabilities, we found that incorporating these into the prompt hurt the model's ability to output repaired source code, or even source code that could be compiled. Essentially, using the other Slither outputs led to overfitting. This prompt was also used with the Llama-2-7B model outlined above in order to create uniformity across outputs. In both models, the prompt allowed for the generation of repaired source code along with details that explained and justified any changes. In conclusion, we ended with two primary models to repair source code: first, Llama-2-7B, which had been fine-tuned specifically for repairing smart contracts; second, GPT-3.5-Turbo, which learned to repair smart contracts through CoT prompt engineering.

Fig. 4: GPT-3.5-Turbo limitations on repairing smart contracts

Fig. 5: Prompt used with both LLMs to repair malicious smart contracts
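A minimal sketch of such a repair call is shown below, using the pre-1.0 `openai` SDK interface that was current when this work was done. The prompt text is a paraphrase of the strategy described above (avoiding the word "malicious" and eliciting step-by-step reasoning), not the authors' exact wording from Fig. 5.

```python
import openai  # pre-1.0 SDK interface (openai.ChatCompletion)

openai.api_key = "..."  # set via an environment variable in practice

def repair_contract(source, vulnerabilities):
    """Ask GPT-3.5-Turbo to repair a vulnerable contract."""
    # The wording deliberately avoids "malicious" and asks the model to
    # reason step by step (CoT) before emitting the fixed code.
    prompt = (
        "Help us fix a vulnerable smart contract. A static analyzer "
        f"reports these vulnerabilities: {', '.join(vulnerabilities)}.\n"
        "First explain, step by step, what changes you will make and why. "
        "Then output the fully repaired contract, preserving its original "
        "functionality.\n\n" + source
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]
```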
## III Results ### _Results from the RandomForestClassifier (RFC)_ Of the 2000 contracts used on the model, the RFC was tested on 800 (40%). 717 out of the 800 contracts were predicted accurately, for an accuracy of 89.6% and an F1 score of 0.76. The generated confusion matrix further detailed that for positive predictions ("True"), 133 were true positives and 23 were false positives; for negative predictions, 584 were true negatives and 60 were false negatives. The false positive rate was only 3.8%, successfully fulfilling our goal. This is a significant improvement over static analysis tools alone, such as Slither, which by itself has a false positive rate of 10.9% [20]. Furthermore, the RFC is able to examine the source code without being restricted to a fixed set of vulnerability detectors, making it more adaptable to syntax changes. ### _Results from the GPT-3.5-Turbo and Llama-2-7B Error Correction Models_ To test GPT-3.5-Turbo and the fine-tuned Llama-2-7B model with our prompt, we aimed to repair the vulnerabilities reported by Slither. The results are shown in Fig. 6. The results of Slither checks on GPT-corrected smart contracts are promising, with the prompt-engineered GPT-3.5-Turbo model able to repair 97.5% of vulnerabilities. Specifically, out of the 40 vulnerabilities encountered while running through the source code, only a single medium-level vulnerability remained. Meanwhile, the fine-tuned Llama-2 model was able to correct all but two errors across 60 vulnerabilities encountered, with one medium- and one low-impact vulnerability remaining. Thus, the Llama-2 model was able to decrease the number of vulnerabilities by 96.7%.

Fig. 6: The number of Slither vulnerabilities of each impact level within twenty and thirty randomly selected smart contracts from our dataset before and after our prompt-engineered GPT-3.5-Turbo and fine-tuned Llama-2-7B models attempted to correct them. The models were given a maximum of 5 attempts, and we ran Slither on each returned contract. The results show the models were able to eliminate all high-impact vulnerabilities from every smart contract, while between them being unable to remove a total of four vulnerabilities of lesser degrees.

We reviewed the repaired smart contracts and found that all of them had retained their previous functionality, with the models usually correcting syntax-level errors rather than changing underlying structures. The CoT GPT-3.5-Turbo prompts and the fine-tuning of the Llama-2-7B model were vital to the accuracy of these models. Upon initial testing, GPT-3.5-Turbo was able to repair fewer than 85% of smart contracts, and the Llama-2-7B model was unable to produce code that could be compiled. However, with the methods outlined above, the results demonstrate a reliable process to repair smart contracts. Indeed, these results demonstrate that the LLMs were able to successfully repair vulnerable smart contracts with near-perfect accuracy, with only three total vulnerabilities remaining. The error correction rate was well above that of any existing methods, making them state-of-the-art tools with impressive error reduction capabilities. Moreover, due to the "Two Timin'" framework described above, only malicious contracts were repaired, cutting down on computing time and maximizing the quantity of secure, reliable smart contracts available. Due to the tens of millions of smart contracts on blockchains such as Etherscan [21], minimizing computational complexity and cost in an already energy-intensive industry is beneficial to users, companies, and the environment. ## IV Conclusion In this paper, we used the Solidity source code of smart contracts to build a novel approach to identify and repair vulnerabilities. This approach utilized a two-tiered flow for identifying and repairing vulnerabilities. First, the Slither static code analyzer and a Random Forest Classifier were used to identify malicious smart contracts and their specific vulnerabilities. These malicious smart contracts and their vulnerabilities were used as parameters in a prompt on two separate LLMs, GPT-3.5-Turbo and Llama-2-7B. This prompt was a result of prompt engineering using Chain of Thought reasoning. The two smart contract repair models, one using pre-trained GPT-3.5-Turbo and the other a fine-tuned Llama-2-7B, reduced the overall vulnerability count by 97.5% and 96.7%, respectively. This novel approach, with state-of-the-art accuracy, allows smart contracts to be screened and repaired before being deployed, leaving cybercriminals far fewer vulnerabilities in the contracts to exploit. Indeed, this paper establishes a framework that is easy to use, with reliable results, increasing access to safe smart contracts for all. Using the "Two Timin'" framework, businesses and DAOs can utilize LLMs to repair smart contracts efficiently and effectively, an important step forward as the prevalence of blockchain continues to increase. ## Future Work Different classifiers powered by transformers or neural networks could be used to identify malicious smart contracts. These could learn across a broader spread of data with access to a larger proportion of malicious smart contracts. In addition, more fine-tuning could be completed on Llama-2-7B, with more hidden layers and a larger dataset, in order to raise its error correction rate above that of GPT-3.5-Turbo.
At the time of writing this paper, GPT-3.5-Turbo could not be fine-tuned; however, if fine-tuning capabilities were to be developed, further research could focus on fine-tuning GPT-3.5-Turbo for repairing smart contracts. Moreover, advances in PEFT and/or QLoRA could allow for a less memory-intensive but more accurate LLM for repairing smart contracts.
2309.16794
Electroweak Precision Measurements of a Nearly-Degenerate $Z^\prime$-$Z$ System
In this paper, we discuss the possibility to probe a nearly-degenerate $Z^{\prime}$-$Z$ system by analyzing the $Z$-lineshape at an electron-positron collider. Compared with the usual $Z^{\prime}$ in the literature well separated with the standard model (SM) $Z$ boson in mass, the nearly-degenerate $Z^{\prime}$-$Z$ mixing affects the observed effective ``oblique parameters'' $\tilde{S}$, $\tilde{T}$, $\tilde{U}$, and the effective deviation of ``number of neutrino species'' $\delta \tilde{N}_{\nu}$ in a more complicated way and cannot be simply computed perturbatively up to a particular order. Aiming at solving this problem, we write down a general simplified effective Lagrangian and enumerate some parameter spaces corresponding to some typical models, and suggest a method to extract the constraints by looking into the line-shape of the $Z$-like resonance at an electron-positron collider.
Dayun Qiu, Yi-Lei Tang
2023-09-28T18:44:48Z
http://arxiv.org/abs/2309.16794v2
# Electroweak Precision Measurements of a Nearly-Degenerate \(Z^{\prime}\)-\(Z\) System ###### Abstract In this paper, we discuss the possibility to probe a nearly-degenerate \(Z^{\prime}\)-\(Z\) system by analyzing the \(Z\)-lineshape at an electron-positron collider. Compared with the usual \(Z^{\prime}\) in the literature, well separated from the standard model (SM) \(Z\) boson in mass, the nearly-degenerate \(Z^{\prime}\)-\(Z\) mixing affects the observed effective "oblique parameters" \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\), and the effective deviation of the "number of neutrino species" \(\delta\tilde{N}_{\nu}\) in a more complicated way and cannot be simply computed perturbatively up to a particular order. Aiming at solving this problem, we write down a general simplified effective Lagrangian, enumerate some parameter spaces corresponding to some typical models, and suggest a method to extract the constraints by looking into the line-shape of the \(Z\)-like resonance at an electron-positron collider. ## I Introduction The standard model (SM) of particle physics can be extended with an additional gauge group, thus accommodating various exotic vector bosons. The electromagnetically neutral \(Z^{\prime}\) boson, accompanied by an extra \(U(1)^{\prime}\) gauge symmetry, is the simplest option[1; 2; 3; 4]. \(Z\)-\(Z^{\prime}\) mixing[5; 6; 7] can arise due to the absence of any unbroken symmetry below the electroweak scale, affecting the phenomenology relevant to different search proposals. Straightforward "bump" searches are effective for a \(Z^{\prime}\) with a measurable interaction with the SM particles (for a review, see Ref. [8]). In the literature, Tevatron[9; 10; 11; 12; 13; 14; 15] and the ATLAS and CMS collaborations at the LHC[16; 17; 18; 19; 20; 21; 22; 23; 24] published constraints for such a \(Z^{\prime}\) boson with its mass well separated from the SM \(Z\) boson. In contrast with this "direct" strategy, an "oblique" way is to look into the "oblique parameters", e.g., the Peskin-Takeuchi parameters \(S\), \(T\), and \(U\)[25; 26], and even \(W\), \(X\), \(Y\), \(Z\)[27], etc., to observe the hints imprinted on the electroweak precision measurement parameters by an off-shell \(Z^{\prime}\)[28; 29], which might be veiled by its faint coupling to, or low decay branching ratio into, the visible SM particles during a straightforward search. In the literature, most of the results utilizing the LEP data actually follow this route[30; 31; 32]. The recent \(W\) boson mass data published by the CDF collaboration[33] and its deviation from the SM-predicted value give rise to the possibility of the existence of an exotic vector boson contributing to the electroweak precision measurement values[34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47]. Theoretical predictions for both the "direct" and "oblique" ways to find a \(Z^{\prime}\) are based upon perturbative expansions up to a particular order. This is sufficient for the case with a significant mass difference between the \(Z^{\prime}\) and the SM \(Z\), where the mixing angle is usually extremely small. Yet in the literature, discussions of a nearly degenerate \(Z^{\prime}\)-\(Z\) system seem to be rare. Inspired by the famous \(K^{0}\)-\(\overline{K^{0}}\) system[48; 49], we have realized that besides the Hermitian squared-mass elements, the non-Hermitian widths might also play important roles in finding the "mass eigenstates" of two mixing states.
In the case when two vector bosons mix, diagonalizing a mass-squared matrix including the non-Hermitian width contributions is equivalent to a resummation over all possible "string diagrams" including the imaginary contributions from all possible one-loop self-energy diagrams. Then these two "mass eigenstates" both contribute to the line-shape of a \(Z\)-boson-like object at an electron-positron collider, which should not be naively regarded as a simple superposition of two resonances. Compared with a pure SM-\(Z\) line-shape, such an object might appear as a distorted "resonance" and affect the electroweak precision measurement results extracted from its appearance. In this paper, we try to compute the electroweak precision measurement distortions induced by a \(\hat{Z}^{\prime}\) field which is nearly degenerate with the SM \(\hat{Z}\) field, through diagonalizing their mass-squared matrix including their widths. In order to compare our results with the familiar parameters, we define observables \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\) and \(\delta\tilde{N}_{\nu}\), corresponding to the well-known Peskin-Takeuchi parameters \(S\), \(T\), \(U\), and the deviation of the neutrino species \(\delta N_{\nu}\). Unlike \(S\), \(T\) and \(U\), our \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\) only reflect the change of the line-shape, and cannot be attributed to some particular effective operator contributions. Since we are not able to find the complete original LEP data published or extracted[50], deriving a constraint on the \(Z^{\prime}\)-model parameter space is not viable. As a compromise, we adopt a set of events simulated under conditions similar to the LEP environment, dubbed "pseudo-LEP" data, to predict the sensitivity that would be attainable if the real original LEP data were utilized. Besides LEP, future leptonic colliders, e.g., ILC[51], CEPC[52], and FCC-ee[53], have been proposed with extremely large integrated luminosities. Usually at least a calibration run around the \(Z\) scale will be performed, updating the electroweak precision measurement data along the way. For example, the CEPC has the potential to produce \(\sim 10^{11}\)-\(10^{12}\)\(Z\)-bosons, so it can be regarded as a super \(Z\)-factory[54] that significantly improves the sensitivity to the oblique parameters. Simulating such a large "pseudo-CEPC" data set is beyond our current computational resources; however, its sensitivity can still be estimated by utilizing the pseudo-LEP results. This paper is organized as follows. In Sec. II, we introduce the effective Lagrangian for an exotic vector boson \(\hat{Z}^{\prime}\), and other basic concepts are elaborated. Then simulation details and settings are illustrated in Sec. III. In Sec. IV, the numerical results are presented and described in three scenarios. Finally, Sec. V summarizes this paper. ## II Effective Lagrangian In this paper, we rely on a simplified general effective Lagrangian introduced in Ref.
[44], which is rewritten as \[\mathcal{L} \supset -\frac{1}{4}\hat{Z}^{\prime}_{\mu\nu}\hat{Z}^{\prime\mu\nu}+\frac{1}{2}m^{2}_{\hat{Z}^{\prime}}\hat{Z}^{\prime}_{\mu}\hat{Z}^{\prime\mu}-\frac{\epsilon_{B}}{2}\hat{Z}^{\prime}_{\mu\nu}B^{\mu\nu}-\frac{1}{2\Lambda^{2}_{W}}\hat{Z}^{\prime}_{\mu\nu}W^{a\mu\nu}H^{\dagger}\sigma^{a}H \tag{1}\] \[- \frac{1}{2\Lambda^{2}_{BW}}B_{\mu\nu}W^{a\mu\nu}H^{\dagger}\sigma^{a}H-\frac{1}{4\Lambda^{4}_{WW}}W^{a\mu\nu}H^{\dagger}\sigma^{a}HW^{b}_{\mu\nu}H^{\dagger}\sigma^{b}H\] \[+ \frac{1}{\Lambda^{2}_{HD}}(H^{\dagger}D_{\mu}H)^{\dagger}(H^{\dagger}D^{\mu}H)+\hat{Z}^{\prime\mu}\left[i\lambda_{HZ^{\prime}}(D_{\mu}H)^{\dagger}H+\text{H.c.}\right].\] where \(\hat{Z}^{\prime\mu}\) is the exotic neutral vector boson \(\hat{Z}^{\prime}\), and \(\hat{Z}^{\prime}_{\mu\nu}\equiv\partial_{\mu}\hat{Z}^{\prime}_{\nu}-\partial_{\nu}\hat{Z}^{\prime}_{\mu}\). \(H\) indicates the SM Higgs doublet, \(W^{a\mu}\) and \(B^{\mu}\) are the \(SU(2)_{L}\) and \(U(1)_{Y}\) gauge fields respectively, and \(D_{\mu}=\partial_{\mu}-i\hat{g}^{\prime}B_{\mu}/2-i\hat{g}\sigma^{a}W^{a}_{\mu}/2\), where \(\hat{g}^{\prime}\) and \(\hat{g}\) are the "original" coupling constants. \(\epsilon_{B}\), \(\Lambda_{W}\), \(\Lambda_{BW}\), \(\Lambda_{WW}\), \(\Lambda_{HD}\), \(\lambda_{HZ^{\prime}}\) are the corresponding constants. Here the mass term \(m^{2}_{\hat{Z}^{\prime}}\hat{Z}^{\prime}_{\mu}\hat{Z}^{\prime\mu}\) is put in by hand, and might originate from an exotic Higgs carrying a \(U(1)^{\prime}\) charge corresponding to the \(Z^{\prime}\), or from the Stueckelberg mechanism. After \(H\) acquires the vacuum expectation value (VEV), \[H=\begin{pmatrix}i\phi^{+}\\ \frac{\hat{v}+h+i\phi^{0}}{\sqrt{2}}\end{pmatrix}, \tag{2}\] where \(\hat{v}\approx 246\)GeV, we therefore acquire the effective kinematic mixing terms \[\mathcal{L}_{\text{eff}}\supset-\frac{\epsilon_{B}}{2}\hat{Z}^{\prime}_{\mu\nu}B^{\mu\nu}-\frac{\epsilon_{W}}{2}\hat{Z}^{\prime}_{\mu\nu}W^{3\mu\nu}-\frac{\epsilon_{BW}}{2}B_{\mu\nu}W^{3\mu\nu}-\frac{\epsilon_{WW}}{4}W^{3}_{\mu\nu}W^{3\mu\nu}. \tag{3}\] where \(\epsilon_{W}\equiv-\hat{v}^{2}/(2\Lambda^{2}_{W})\), \(\epsilon_{BW}\equiv-\hat{v}^{2}/(2\Lambda^{2}_{BW})\), and \(\epsilon_{WW}\equiv\hat{v}^{4}/(4\Lambda^{4}_{WW})\). Then, the mass terms as well as the kinematic terms can be written in the form of matrices, \[\mathcal{L}_{mass} = \frac{1}{2}\left(\hat{Z}^{\prime}_{\mu},\ B_{\mu},\ W^{3}_{\mu}\right)\mathcal{M}^{2}_{V}\begin{pmatrix}\hat{Z}^{\prime\mu}\\ B^{\mu}\\ W^{3\mu}\end{pmatrix}, \tag{4}\] \[\mathcal{M}^{2}_{V} = \begin{pmatrix}m^{2}_{\hat{Z}^{\prime}}&\hat{g}^{\prime}\delta m^{2}&-\hat{g}\delta m^{2}\\ \hat{g}^{\prime}\delta m^{2}&\frac{\hat{g}^{\prime 2}}{4}(\hat{v}^{2}+\delta v^{2})&-\frac{\hat{g}^{\prime}\hat{g}}{4}(\hat{v}^{2}+\delta v^{2})\\ -\hat{g}\delta m^{2}&-\frac{\hat{g}^{\prime}\hat{g}}{4}(\hat{v}^{2}+\delta v^{2})&\frac{\hat{g}^{2}}{4}(\hat{v}^{2}+\delta v^{2})\end{pmatrix}. \tag{5}\] \[\mathcal{L}_{\rm kin} = -\frac{1}{4}\left(\hat{Z}^{\prime}_{\mu\nu},\ B_{\mu\nu},\ W^{3}_{\mu\nu}\right)\mathcal{K}_{V}\left(\begin{array}{c}\hat{Z}^{\prime\mu\nu}\\ B^{\mu\nu}\\ W^{3\mu\nu}\end{array}\right), \tag{6}\] \[\mathcal{K}_{V} = \left(\begin{array}{ccc}1&\epsilon_{B}&\epsilon_{W}\\ \epsilon_{B}&1&\epsilon_{BW}\\ \epsilon_{W}&\epsilon_{BW}&1+\epsilon_{WW}\end{array}\right), \tag{7}\] where \(\delta m^{2}\equiv-\lambda_{HZ^{\prime}}\hat{v}^{2}/2\) and \(\delta v^{2}\equiv\hat{v}^{4}/(4\Lambda_{HD}^{2})\). Ref.
[44] aims at diagonalizing (5) and (7), and perturbatively expands the results in the case that \(m_{Z^{\prime}}\) is well separated from \(m_{Z}\). The contributions from the self-energy diagrams are usually calculated in two ways: order by order perturbatively, or resummed to correct the mass term of every propagator. Near the resonance of each s-channel propagator, the imaginary part of its self-energy diagrams must be resummed prior to any other process, and contributes to the Breit-Wigner form of the propagator by adding an imaginary part in the denominator. Other contributions may be taken into account perturbatively order by order later, and behave as sub-leading corrections. Besides the self-energy of each particle, the self-energy among different types of particles, or the "cross terms", may also give rise to imaginary parts, as shown in Fig. 1 as an example. This is not a problem if \(Z\) and \(Z^{\prime}\) are well separated in the mass spectrum, since these contributions are suppressed by a factor \(\frac{1}{m_{Z^{\prime}}^{2}-m_{Z}^{2}}\), so they can be taken into account order by order perturbatively. However, when \(Z\) and \(Z^{\prime}\) are nearly degenerate, so that \(m_{Z^{\prime}}^{2}\approx m_{Z}^{2}\), such a suppression becomes non-viable. Denoting \(c_{XY}\equiv{\rm Im}\left\{\Pi_{X\leftrightarrow Y}(p^{2}\approx m_{X,Y}^{2})\right\}\), where \(\Pi_{X\leftrightarrow Y}(p^{2})\) is the \(g_{\mu\nu}\) coefficient of the self-energy of a particle \(X\) transforming into \(Y\), one has to correct each element of (5) with an additional \(ic_{XY}\) term before the diagonalization process. In fact, as we will illustrate in Appendix A, in order to keep the photon massless, we utilize the matrix below \[{\cal M}_{V}^{2}\,{}^{\prime} = {\cal M}_{V}^{2}+i\,(C_{X\leftrightarrow Y})_{3\times 3} \tag{8}\] \[= \begin{pmatrix}m_{\hat{Z}^{\prime}}^{2}+ic_{\hat{Z}^{\prime}\hat{Z}^{\prime}}&\hat{g}^{\prime}(\delta m^{2}-\frac{ic_{\hat{Z}^{\prime}\hat{Z}}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}})&-\hat{g}(\delta m^{2}-\frac{ic_{\hat{Z}^{\prime}\hat{Z}}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}})\\ \hat{g}^{\prime}(\delta m^{2}-\frac{ic_{\hat{Z}^{\prime}\hat{Z}}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}})&\frac{\hat{g}^{\prime 2}}{4}(\hat{v}^{2}+\delta v^{2}+\frac{4ic_{\hat{Z}\hat{Z}}}{\hat{g}^{\prime 2}+\hat{g}^{2}})&-\frac{\hat{g}^{\prime}\hat{g}}{4}(\hat{v}^{2}+\delta v^{2}+\frac{4ic_{\hat{Z}\hat{Z}}}{\hat{g}^{\prime 2}+\hat{g}^{2}})\\ -\hat{g}(\delta m^{2}-\frac{ic_{\hat{Z}^{\prime}\hat{Z}}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}})&-\frac{\hat{g}^{\prime}\hat{g}}{4}(\hat{v}^{2}+\delta v^{2}+\frac{4ic_{\hat{Z}\hat{Z}}}{\hat{g}^{\prime 2}+\hat{g}^{2}})&\frac{\hat{g}^{2}}{4}(\hat{v}^{2}+\delta v^{2}+\frac{4ic_{\hat{Z}\hat{Z}}}{\hat{g}^{\prime 2}+\hat{g}^{2}})\end{pmatrix},\] Now the mass terms are no longer Hermitian, and this can be understood by adding some non-conjugate corrections into the Lagrangian, or the Hamiltonian, just as in a \(K^{0}\)-\(\overline{K^{0}}\) system[48; 49]. We also have to note that the "hatted" \(\hat{Z}\) denotes the "SM-\(\hat{Z}\)" when all the mixing terms are switched off, that is to say, \[\hat{Z}^{\mu}=\frac{1}{\sqrt{\hat{g}^{2}+\hat{g}^{\prime 2}}}(-\hat{g}^{\prime}B^{\mu}+\hat{g}W^{3\mu}). \tag{9}\]
In the remainder of this paper, the hatless "\(Z\)" usually appears within symbols associated with the perspective of the experimentalists, who might be unaware that their observed resonance can accommodate exotic contributions. "\(Z\)" also appears within definitions from the literature or from simulation tools. The hatless "\(Z^{\prime}\)" is likewise used in general references to the "\(Z^{\prime}\)-model" or the "\(Z^{\prime}\)-\(Z\) system", while the hatted "\(\hat{Z}^{\prime}\)" particularly refers to the \(\hat{Z}^{\prime}\) field that we introduce in (1). Then we can follow Ref. [44] to diagonalize the kinetic matrix (7) beforehand: \[V_{C}=V_{1}V_{2}V_{3},\quad V_{C}^{\rm T}{\cal K}_{V}V_{C}=I_{3\times 3}, \tag{10}\] where \[V_{1} = \begin{pmatrix}1&-\epsilon_{B}&-\epsilon_{W}\\ 0&1&0\\ 0&0&1\end{pmatrix},\;V_{2}=\begin{pmatrix}1&0&0\\ 0&1&\frac{-\epsilon_{BW}+\epsilon_{B}\epsilon_{W}}{1-\epsilon_{B}^{2}}\\ 0&0&1\end{pmatrix},\] \[V_{3} = \begin{pmatrix}1&0&0\\ 0&\frac{1}{\sqrt{1-\epsilon_{B}^{2}}}&0\\ 0&0&\sqrt{\frac{1-\epsilon_{B}^{2}}{1+\epsilon_{WW}-\epsilon_{B}^{2}-\epsilon_{W}^{2}-\epsilon_{BW}^{2}-\epsilon_{B}^{2}\epsilon_{WW}+2\epsilon_{B}\epsilon_{W}\epsilon_{BW}}}\end{pmatrix}, \tag{11}\] and then the mass-squared matrix becomes \[(V_{1}V_{2}V_{3})^{\rm T}{\cal M}_{V}^{2}\,{}^{\prime}V_{1}V_{2}V_{3}, \tag{12}\] and diagonalizing this matrix gives \[V=V_{1}V_{2}V_{3}V_{\rm SM}V_{\rm f},\quad V^{\rm T}{\cal M}_{V}^{2}\,{}^{\prime}V={\rm diag}(m_{1}^{2}-{\rm i}m_{1}\Gamma_{1},\,m_{2}^{2}-{\rm i}m_{2}\Gamma_{2},\,0), \tag{13}\] where \(V_{\rm SM}\) is the familiar EW rotation matrix \[V_{\rm SM}=\begin{pmatrix}1&0&0\\ 0&-\frac{\hat{g}^{\prime}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}}&\frac{\hat{g}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}}\\ 0&\frac{\hat{g}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}}&\frac{\hat{g}^{\prime}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}}\end{pmatrix}. \tag{14}\] \(m_{1}\), \(\Gamma_{1}\), \(m_{2}\), \(\Gamma_{2}\) are the "masses" and "widths" of the two "mass eigenstates" of the \(Z^{\prime}\)-\(Z\) system. Since their masses are nearly degenerate, and their mixing angle might be large, they might altogether form a single SM-\(Z\)-like object, and which of them is identified as the \(Z\) or the \(Z^{\prime}\) is inessential. We have to note that although (8) is no longer Hermitian, one can still verify that it is symmetric, that is, \({\cal M}_{V}^{2}\,{}^{\prime}=({\cal M}_{V}^{2}\,{}^{\prime})^{\rm T}\). This guarantees the existence of \(V_{\rm f}\) in (13) with the condition \(V_{\rm f}^{\rm T}V_{\rm f}=V_{\rm f}V_{\rm f}^{\rm T}=I\). However, the elements of \(V_{\rm f}\) can be complex, which is difficult to interpret as "mixing" among real vector fields, since the mixed "eigenstate fields" are no longer real. This indicates that the usual perception of "mixing fields" is not viable here and should be replaced with the concept of the resummed propagators, as described in Appendix A.
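Numerically, the decomposition (13) can be carried out with a standard eigensolver, since for a complex *symmetric* matrix with distinct eigenvalues the right eigenvectors are automatically orthogonal under the bilinear product \(v^{\rm T}w\). A minimal numpy sketch follows; the entries of the toy \(2\times 2\) matrix are arbitrary illustrative numbers, not fitted values.

```python
import numpy as np

def diagonalize_complex_symmetric(M):
    """Find V with V^T M V = diag(...) and V^T V = I for complex M = M^T.

    For distinct eigenvalues, right eigenvectors of a complex symmetric
    matrix are orthogonal under the bilinear product v^T w, so it suffices
    to rescale each eigenvector by a complex square root of v^T v.
    """
    eigvals, V = np.linalg.eig(M)
    for k in range(V.shape[1]):
        V[:, k] /= np.sqrt(V[:, k] @ V[:, k])  # bilinear "norm", complex
    return eigvals, V

# Toy example: m^2 - i*m*Gamma on the diagonal plus a symmetric mixing term.
M = np.array([[91.2**2 - 2.5j * 91.2, 5.0 - 1.0j],
              [5.0 - 1.0j, 91.5**2 - 0.3j * 91.5]])
vals, V = diagonalize_complex_symmetric(M)
print(np.allclose(V.T @ M @ V, np.diag(vals)))  # True up to roundoff
```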
## III Details of the event generation and the extraction of the observables The standard way to extract the electroweak precision data is to compare the line-shape of the \(Z\)-resonance with parameterized functions considering the Breit-Wigner propagators, initial-state radiation (ISR) effects, and the momentum distribution of the beams (see Section 55 of "Reviews, Tables & Plots" in Ref. [49] for a review and the references therein). The photon-mediated s-channel diagrams and all the t- and u-channel contributions, with the interference effects, should also be considered. After finding the best-fitted parameterized function, \(m_{Z}\), \(\Gamma_{Z}\), \(R_{e,\mu,\tau}\), \(A_{\rm FB}^{0e,\mu,\tau}\), and \(N_{\nu}\), which are the mass of the \(Z\)-boson, the width of the \(Z\)-boson, the ratios of the \(Z\to\) hadrons over \(Z\to e/\mu/\tau\) branching ratios, the forward-backward asymmetry parameters, and the effective number of active neutrinos, respectively, are extracted for further comparison with the SM-predicted values. In this paper, we alternatively adopt what we call "SM templates" to replace the role of the parameterized functions. These are line-shape data acquired from the event generator based upon a "pseudo-SM" model file, which is a modified variation of the default SM model file provided by FeynRules[55], augmented with four additional parameters \(m_{Z}\), \(s_{l}^{2}\), \(\Gamma_{Z}\) and \(\xi\) as input values. Here \(s_{l}^{2}\) is the effective Weinberg angle affecting the weak coupling constants, independent of the \(s_{W}^{2}\) associated with the mass ratio of the \(W\) and \(Z\) bosons. \(\Gamma_{Z}\) can also be assigned an arbitrary value, which need not equal the SM-predicted one. \(\xi\) appears in \[B_{\mu} = -s_{W}\xi Z_{\mu}+c_{W}A_{\mu},\] \[W_{\mu}^{3} = c_{W}\xi Z_{\mu}+s_{W}A_{\mu}, \tag{15}\] modifying the definitions of \(B_{\mu}\) and \(W_{\mu}^{3}\) in the model file. This parameter aims at rescaling the height of the resonance while keeping its shape intact. The purpose of utilizing the event generator to generate the "pseudo-SM templates" is to avoid the complicated comparison between our generator results and the SM values with intricate loop-level contributions considered under various renormalization schemes. Since the real LEP data are absent, it is also reasonable to compare our simulated \(Z^{\prime}\)-model events with the SM templates generated by exactly the same event generator, automatically eliminating the errors from theoretical uncertainties. To compare the continuous line-shape curves, one should sample a set of discrete \(\sqrt{s}\)'s, defined as the invariant mass, or the total energy, of the colliding \(e^{+}e^{-}\) system. LEP published some of the early detailed data at each sampled \(\sqrt{s}\), while later, as the integrated luminosity accumulated, only the fitted results of the electroweak precision measurements were published. With this incomplete information recorded in Ref. [56], we adopt \(\sqrt{s}=[88.2,89.2,90.2,91.2,92.2,93.2,94.2]\)GeV as our reference samples, and the integrated luminosity is set to \(300~{\rm pb}^{-1}\) for each \(\sqrt{s}\) of the \(Z^{\prime}\)-model events. Since in Ref. [56] the total luminosity is \(60~{\rm pb}^{-1}\), we therefore multiply by a factor of \(\sqrt{5}\) when evaluating the statistical uncertainties. We call our data with such settings the "pseudo-LEP" results, and believe that they characterize the main features of LEP given the incomplete data. For the pseudo-SM templates, we generate \(10^{7}\) events for each \(\sqrt{s}=[88.2,\,89.2,\,90.2,\,91.2,\,92.2,\,93.2,\,94.2]\)GeV sample with various input values of \(m_{Z}\), \(s_{l}^{2}\), \(\Gamma_{Z}\), and \(\xi\).
Then we use the polynomial \[\begin{split}&\sigma_{t}^{\text{PSM}}(m_{Z},\Gamma_{Z},s_{l}^{2},\xi,\sqrt{s})=a_{0t}(\sqrt{s})+(m_{Z}-m_{Z0})^{2}a_{1t}(\sqrt{s})+(s_{l}^{2}-s_{l0}^{2})^{2}a_{2t}(\sqrt{s})+\\ &(\Gamma_{Z}-\Gamma_{Z0})^{2}a_{3t}(\sqrt{s})+(\xi-\xi_{0})^{2}a_{4t}(\sqrt{s})+(m_{Z}-m_{Z0})(s_{l}^{2}-s_{l0}^{2})a_{5t}(\sqrt{s})+\\ &(m_{Z}-m_{Z0})(\Gamma_{Z}-\Gamma_{Z0})a_{6t}(\sqrt{s})+(m_{Z}-m_{Z0})(\xi-\xi_{0})a_{7t}(\sqrt{s})+(s_{l}^{2}-s_{l0}^{2})(\Gamma_{Z}-\Gamma_{Z0})a_{8t}(\sqrt{s})\\ &+(s_{l}^{2}-s_{l0}^{2})(\xi-\xi_{0})a_{9t}(\sqrt{s})+(\Gamma_{Z}-\Gamma_{Z0})(\xi-\xi_{0})a_{10t}(\sqrt{s})+(m_{Z}-m_{Z0})a_{11t}(\sqrt{s})+\\ &(s_{l}^{2}-s_{l0}^{2})a_{12t}(\sqrt{s})+(\Gamma_{Z}-\Gamma_{Z0})a_{13t}(\sqrt{s})+(\xi-\xi_{0})a_{14t}(\sqrt{s}),\end{split} \tag{16}\] to fit the pseudo-SM template events by quadratic fitting, for each \(\sqrt{s}=\) [88.2, 89.2, 90.2, 91.2, 92.2, 93.2, 94.2]GeV and in each final product channel \(t\in\{(e^{+}e^{-})_{\text{F/B}},(\mu^{+}\mu^{-})_{\text{F/B}},\text{hadrons}\}\). Here the subscript "F" or "B" in "\((e^{+}e^{-})_{\text{F}}\)" or "\((e^{+}e^{-})_{\text{B}}\)" denotes the "forward" or "backward" direction, in which the positively charged final-state particle travels parallel or anti-parallel to the incoming positron beam. The \(a_{(0-14)t}\) are factors to be determined by the fitting process. The charge asymmetry of the quarks also affects the charge imbalance of the final hadrons; however, analyzing such an asymmetry is beyond our simulation capabilities, so we do not take it into account. In principle, the \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) channels should also be considered. \(\tau\)'s might decay into muons, electrons, or hadrons to fake the corresponding channels. The leptonic decay channels can be distinguished by an additional missing energy/momentum criterion during the event selection, while the hadronic decay channels might be problematic. At a lepton collider, the hadronic \(\tau\)-decay products can be well discriminated from hadronic-jet events[57]. The efficiency of "\(\tau\)-tagging" techniques at hadron colliders has been significantly improved in recent years[58; 59], and it is reasonable to expect that such technology might in turn contribute to future lepton collider programs. However, all these algorithms are too complicated for the fast simulation in this paper; considering the expected future improvements of this technology, we simply assume that all the \(\tau\) products can be well discriminated, and we neglect them in this paper. In this work, we apply WHIZARD [60; 61; 62] as our event generator. LHAPDF6 [63; 64], PYTHIA6 [65], FastJet [66; 67] and DELPHES [68; 69; 70] are connected for the detector-level data. In DELPHES, we utilize the CEPC card despite our purpose of a LEP-like prediction; not many differences are expected. The beam structure is chosen to be a Gaussian distribution. The WHIZARD parameter \(\sigma\) of both "gaussian_spread1" and "gaussian_spread2" is set to 250/90000. ISR is also switched on. During the event generation, the cuts are set to E \(\geq\) 100 MeV, all abs(cos(Theta)) \(\geq\) 0.8 and all M2 \(\geq\) 1 GeV\({}^{2}\) for final products, where "E", "Theta" and "M2" are the energy of the particle in the argument, its absolute polar angle in the lab frame, and the invariant mass squared of the particle in the argument, respectively. Then transverse momentum P\({}_{\rm T}\) \(>\) 7 GeV and pseudorapidity \(\eta<\) 2.4 are set for event selection.
We then compare the line-shape cross sections in the \(Z^{\prime}\)-model for each parameter point with the pseudo-SM template cross sections fitted in (16) to find the best-fitted \((m_{Z}^{*},s_{l}^{*2},\Gamma_{Z}^{*},\xi^{*})\) by minimizing the \(\chi_{F}^{2}\) defined as \[\chi_{F}^{2}(m_{Z},\Gamma_{Z},s_{l}^{2},\xi)=\sum_{t,\sqrt{s}}\frac{(\sigma_{t}^{Z^{\prime}}(\sqrt{s})-\sigma_{t}^{\rm PSM}(m_{Z},\Gamma_{Z},s_{l}^{2},\xi,\sqrt{s}))^{2}}{(\Delta_{t}^{Z^{\prime}}(\sqrt{s}))^{2}+(\Delta_{t}^{\rm PSM}(m_{Z},\Gamma_{Z},s_{l}^{2},\xi,\sqrt{s}))^{2}}, \tag{17}\] where \(\sqrt{s}=[88.2,\,89.2,\,90.2,\,91.2,\,92.2,\,93.2,\,94.2]\)GeV and \(\sigma_{t}^{\rm PSM}(m_{Z},\Gamma_{Z},s_{l}^{2},\xi,\sqrt{s})\) is defined in (16). \(\sigma_{t}^{Z^{\prime}}(\sqrt{s})\) are the line-shape sample cross sections for the \(Z^{\prime}\)-model with a particular set of parameters. \(\Delta_{t}^{Z^{\prime}}\) and \(\Delta_{t}^{\rm PSM}(m_{Z},\Gamma_{Z},s_{l}^{2},\xi,\sqrt{s})\) are the statistical uncertainties of the \(Z^{\prime}\)-model cross sections and the pseudo-SM template cross sections, respectively. The statistical uncertainty \(\Delta_{\rm X}\) for each cross section \(\sigma_{\rm X}\) in channel \(X\) is evaluated by \[\Delta_{\rm X}=\frac{1}{\sqrt{n_{\rm X}}}\sigma_{\rm X}, \tag{18}\] where \(n_{\rm X}\) is the number of events passing all selection criteria. The total \(\chi_{F}^{2}\) includes seven center-of-mass energy points and three channels, with the leptonic channels separated into forward and backward parts, so the total number of degrees of freedom is counted to be 35.
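For concreteness, a minimal sketch of the minimization of (17) is given below; `sigma_psm` and `delta_psm` stand for the fitted template (16) and its uncertainty, which are assumed to be provided as callables, and the starting values are rough illustrative numbers rather than our actual fit configuration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: sigma_zp[t, s] and delta_zp[t, s] hold the Z'-model
# cross sections and their statistical errors per channel t (incl. the F/B
# split) and per sqrt(s) point s.
def chi2_F(theta, sigma_zp, delta_zp, sigma_psm, delta_psm):
    """Eq. (17); theta = (m_Z, Gamma_Z, s_l^2, xi)."""
    total = 0.0
    for t in range(sigma_zp.shape[0]):       # the five channels
        for s in range(sigma_zp.shape[1]):   # the seven sqrt(s) samples
            num = (sigma_zp[t, s] - sigma_psm(theta, t, s)) ** 2
            den = delta_zp[t, s] ** 2 + delta_psm(theta, t, s) ** 2
            total += num / den
    return total

theta0 = np.array([91.19, 2.50, 0.2312, 1.0])  # rough starting values
# best = minimize(chi2_F, theta0, method="Nelder-Mead",
#                 args=(sigma_zp, delta_zp, sigma_psm, delta_psm))
```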
Then the best-fitted \((m_{Z}^{*},s_{l}^{*2},\Gamma_{Z}^{*},\xi^{*})\) will be converted into the effective oblique parameters \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\), and \(\delta\tilde{N}_{\nu}\), which are the effective Peskin-Takeuchi oblique parameters and the effective deviation of the number of neutrino species, respectively. Refs. [25; 26] derived the \(S\), \(T\), \(U\) expressions depending on \(m_{W},s_{l},\Gamma_{Z}\). Here we reverse these formulas as the definitions of the effective \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\), so that the constraints on \(S\), \(T\), \(U\) in the literature can be straightforwardly cast here. One subtle point is that in the usual electroweak precision measurement discussions, \(m_{Z}\) is the input parameter, so \(\delta m_{W}\) is given by \[\delta m_{W}=-\frac{\alpha m_{W}^{\rm SM}}{4(c_{\rm W}^{2}-s_{\rm W}^{2})}(S-2c_{\rm W}^{2}T-\frac{c_{\rm W}^{2}-s_{\rm W}^{2}}{2s_{\rm W}^{2}}U), \tag{19}\] where \(\delta m_{W}=m_{W}-m_{W}^{\rm SM}\). However, in our pseudo model, \(m_{Z}\) is no longer the "measured" \(Z\) boson mass. The correction on \(m_{W}\) is equivalent to a correction on \(m_{Z}\) in the opposite direction, so according to \(m_{W}^{\rm SM}=m_{Z}c_{\rm W}\), \(m_{W}=m_{Z}^{\rm SM}c_{\rm W}\) and Refs. [71; 72], we define \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\), as well as the deviation of the neutrino species number \(\delta\tilde{N}_{\nu}\), by solving the following equations. \[m_{Z}^{*}-m_{Z}^{\rm SM} = \frac{\alpha m_{Z}^{\rm SM}}{4(c_{\rm W}^{2}-s_{\rm W}^{2})}(\tilde{S}-2c_{\rm W}^{2}\tilde{T}-\frac{c_{\rm W}^{2}-s_{\rm W}^{2}}{2s_{\rm W}^{2}}\tilde{U}),\] \[s_{l}^{*2}-(s_{l}^{2})^{\rm SM} = \frac{\alpha}{4(c_{\rm W}^{2}-s_{\rm W}^{2})}(\tilde{S}-4s_{\rm W}^{2}c_{\rm W}^{2}\tilde{T}),\] \[\Gamma_{Z}^{*}-(\Gamma_{Z})^{\rm SM} = \frac{\alpha^{2}m_{Z}^{\rm SM}}{72s_{\rm W}^{2}c_{\rm W}^{2}(c_{\rm W}^{2}-s_{\rm W}^{2})}(-10(3-8s_{\rm W}^{2})\tilde{S}+(63-126s_{\rm W}^{2}-40s_{\rm W}^{4})\tilde{T}),\] \[\delta\tilde{N}_{\nu} = \frac{\Gamma_{Z}^{*}-(\Gamma_{Z})^{\rm pSM}}{(\Gamma_{\nu\nu})^{\rm pSM}}, \tag{20}\] where \(m_{Z}^{\rm SM}\) is the \(\hat{Z}\)-mass encoded in the \(Z^{\prime}\)-model file when all the vector bosons' mixing terms are switched off. \((s_{l}^{2})^{\rm SM}\) and \((\Gamma_{Z})^{\rm SM}\) are the calculated values of \(s_{l}^{2}\) and \(\Gamma_{Z}\) in this case. \((\Gamma_{Z})^{\rm pSM}\) and \((\Gamma_{\nu\nu})^{\rm pSM}\) are calculated according to the pseudo-SM model files with the best-fitted \(m_{Z}^{*}\), \(s_{l}^{*2}\) and \(\xi^{*}\) as the input parameters. Solving (20), we obtain \[\tilde{S} = 367.29\delta s_{l}^{2}+37.92\delta\Gamma_{Z},\] \[\tilde{T} = 132.35\delta s_{l}^{2}+52.94\delta\Gamma_{Z},\] \[\tilde{U} = -2.62\delta m_{Z}+144.31\delta s_{l}^{2}-37.92\delta\Gamma_{Z}, \tag{21}\] where \(\delta m_{Z}=m_{Z}^{*}-m_{Z}^{\rm SM}\), \(\delta s_{l}^{2}=s_{l}^{*2}-(s_{l}^{2})^{\rm SM}\) and \(\delta\Gamma_{Z}=\Gamma_{Z}^{*}-(\Gamma_{Z})^{\rm SM}\). The \(\chi^{2}\) comparing \(\tilde{S}\), \(\tilde{T}\) and \(\tilde{U}\) with the globally fitted data is defined by \[\chi^{2}=\left(\tilde{S}-S_{0},\ \tilde{T}-T_{0},\ \tilde{U}-U_{0}\right)C^{-1}\begin{pmatrix}\tilde{S}-S_{0}\\ \tilde{T}-T_{0}\\ \tilde{U}-U_{0}\end{pmatrix}, \tag{22}\] where \(S_{0}\), \(T_{0}\) and \(U_{0}\) are the best globally fitted values, and \(C^{-1}\) represents the inverse of the covariance matrix of \(S_{0}\), \(T_{0}\) and \(U_{0}\). In this paper, the \(S_{0}\), \(T_{0}\), \(U_{0}\) and \(C^{-1}\) are adopted from Tab. 1. \begin{table} \begin{tabular}{|c|c|c c c|} \hline & Result & \multicolumn{3}{c|}{Correlation} \\ \hline \(S\) & \(0.06\pm 0.1\) & \(1.00\) & & \\ \(T\) & \(0.11\pm 0.12\) & \(0.90\) & \(1.00\) & \\ \(U\) & \(-0.02\pm 0.09\) & \(-0.57\) & \(-0.82\) & \(1.00\) \\ \hline \end{tabular} \end{table} Table 1: Global fit results of the oblique parameters \(S\), \(T\), and \(U\), adopted from Ref. [38].
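The conversion (21) and the \(\chi^{2}\) of (22) are simple enough to spell out in code; the covariance matrix is built from the errors and correlations of Tab. 1 in the standard way.

```python
import numpy as np

def stu_tilde(dmZ, dsl2, dGammaZ):
    """Eq. (21): map the best-fitted shifts onto (S~, T~, U~)."""
    return np.array([
        367.29 * dsl2 + 37.92 * dGammaZ,
        132.35 * dsl2 + 52.94 * dGammaZ,
        -2.62 * dmZ + 144.31 * dsl2 - 37.92 * dGammaZ,
    ])

# Central values, errors and correlations of Tab. 1.
x0 = np.array([0.06, 0.11, -0.02])
err = np.array([0.10, 0.12, 0.09])
corr = np.array([[1.00, 0.90, -0.57],
                 [0.90, 1.00, -0.82],
                 [-0.57, -0.82, 1.00]])
C_inv = np.linalg.inv(corr * np.outer(err, err))  # inverse covariance

def chi2_stu(dmZ, dsl2, dGammaZ):
    """Eq. (22)."""
    d = stu_tilde(dmZ, dsl2, dGammaZ) - x0
    return d @ C_inv @ d
```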
In this paper, based upon some typical models (see an enumeration of the \(Z^{\prime}\)-models in Ref. [73] and the references therein), we are going to show our calculated \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\), \(\delta\tilde{N}_{\nu}\), as well as the estimated \(e^{+}e^{-}\) collider sensitivity, in three scenarios. They are * Scenario I: \(\hat{Z}^{\prime}\) only couples to invisible particles. This scenario is inspired by the dark matter models associated with a \(Z^{\prime}\), in which the \(Z^{\prime}\) plays a crucial role in connecting the visible world with the dark sector. The \(Z^{\prime}\) might couple with the dark matter, and the dark matter particles it decays into are invisible at a collider. Besides the dark matter, the \(Z^{\prime}\) might also couple with sterile neutrinos, which are likewise invisible if they are long-lived enough to decay outside the detector. This scenario is implemented by straightforwardly assigning a \(c_{\hat{Z}^{\prime}\hat{Z}^{\prime}}\) value in (8) for convenience, rather than by introducing invisible fields for the \(\hat{Z}^{\prime}\) to decay into. * Scenario II: \(\hat{Z}^{\prime}\) couples with the SM fermions universally among all three generations. Since all three generations are charged universally under the \(U(1)^{\prime}\) gauge symmetry, the coupling constants are stringently constrained by the off-shell \(\hat{Z}^{\prime}\)-mediated processes, leading to a particularly small \(c_{\hat{Z}^{\prime}\hat{Z}^{\prime}}\) that gives a narrow but sharp valley-like structure imposed on the resonance. The initial momentum distribution in the beams and the ISR effect smear this structure into a relatively "smooth" curve. Since the LEP collaborations only published the final electroweak precision measurement results, which were acquired by comparing the experimental data with a trivial resonance structure, we follow the same principle and compare our pseudo-LEP results with the pseudo-SM templates to find the best-fitted electroweak observables, regardless of the non-standard resonance shape of the pseudo-LEP data. Here we define the coupling constants as \[\begin{split} g_{u_{R}}\hat{Z}^{\prime}_{\mu}\bar{u}_{R}\gamma^{\mu}u_{R}+g_{u_{L}}\hat{Z}^{\prime}_{\mu}\bar{u}_{L}\gamma^{\mu}u_{L} &= \hat{Z}^{\prime}_{\mu}\bar{u}\gamma^{\mu}\Big{(}\frac{g_{u_{L}}+g_{u_{R}}}{2}-\frac{g_{u_{L}}-g_{u_{R}}}{2}\gamma^{5}\Big{)}u,\\ g_{d_{R}}\hat{Z}^{\prime}_{\mu}\bar{d}_{R}\gamma^{\mu}d_{R}+g_{d_{L}}\hat{Z}^{\prime}_{\mu}\bar{d}_{L}\gamma^{\mu}d_{L} &= \hat{Z}^{\prime}_{\mu}\bar{d}\gamma^{\mu}\Big{(}\frac{g_{d_{L}}+g_{d_{R}}}{2}-\frac{g_{d_{L}}-g_{d_{R}}}{2}\gamma^{5}\Big{)}d,\\ g_{l_{R}}\hat{Z}^{\prime}_{\mu}\bar{l}_{R}\gamma^{\mu}l_{R}+g_{l_{L}}\hat{Z}^{\prime}_{\mu}\bar{l}_{L}\gamma^{\mu}l_{L} &= \hat{Z}^{\prime}_{\mu}\bar{l}\gamma^{\mu}\Big{(}\frac{g_{l_{L}}+g_{l_{R}}}{2}-\frac{g_{l_{L}}-g_{l_{R}}}{2}\gamma^{5}\Big{)}l,\\ g_{\nu_{L}}\hat{Z}^{\prime}_{\mu}\bar{\nu}_{L}\gamma^{\mu}\nu_{L} &= \hat{Z}^{\prime}_{\mu}\bar{\nu}\gamma^{\mu}\Big{(}\frac{g_{\nu_{L}}}{2}-\frac{g_{\nu_{L}}}{2}\gamma^{5}\Big{)}\nu.\end{split} \tag{23}\] * Scenario III: \(\hat{Z}^{\prime}\) couples with the SM fermions depending on their generations. For some particular models (Ref. [74] enumerated such models, and references can be found therein), a particular generation of particles might be charged under the \(U(1)^{\prime}\) group, or two generations of particles take a particular combination of the \(U(1)^{\prime}\) charges (for example, the \(U(1)_{L_{e}-L_{\mu}}\) models [75; 76; 77; 78; 79]). This not only affects the shape of the \(Z\)-resonance, but also breaks the universality of the \(l^{+}l^{-}\) branching ratios, where \(l=e,\mu,\tau\). In this paper, we only discuss the \(e\)-\(\mu\) asymmetry, and utilize the ratio \[\frac{R_{e}}{R_{\mu}}, \tag{24}\] where \[R_{l}\equiv\frac{\Gamma_{\rm had}}{\Gamma_{l^{+}l^{-}}}, \tag{25}\] to observe such an asymmetry. ## IV Numerical results The effective Lagrangian (1) takes six parameters, \(m_{Z^{\prime}}\), \(\epsilon_{B}\), \(\Lambda_{W}\), \(\Lambda_{BW}\), \(\Lambda_{WW}\), \(\lambda_{BD}\), which are not convenient for a relatively intuitive presentation. Equivalently, \(\epsilon_{B}\), \(\epsilon_{W}\), \(\epsilon_{BW}\), \(\delta v^{2}\), \(\delta m^{2}\) and \(m_{\hat{Z}^{\prime}}\), appearing in (8) and (11), can be treated as the free parameters for further discussions. Among them, \(\epsilon_{B}\), \(\epsilon_{W}\) and \(m_{\hat{Z}^{\prime}}\) are the most important.
In fact, \(\delta v^{2}\) and \(\epsilon_{BW}\) have nothing to do with the \(Z^{\prime}\) sector, so their contributions are perturbatively calculable by previous algorithms and we do not discuss them. \(\delta m^{2}\) can also give rise to non-perturbative mixings; however, in this paper we focus on the kinetic mixing effects. In the remainder of this section, we plot \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\) and \(\delta\tilde{N}_{\nu}\) for Scenarios I-III on the continuous \(\epsilon_{B}\)-\(\epsilon_{W}\) plane, with some discrete \(\delta m_{Z^{\prime}}=m_{Z^{\prime}}-m_{Z}^{\rm SM}\) values adopted. For Scenario III, where universality among generations is broken, \(\frac{R_{e}}{R_{\mu}}-1\) is also plotted. Here the definition of \(m_{Z}^{\rm SM}\) is the same as in (20). The Monte-Carlo algorithm that we utilize inevitably introduces statistical fluctuations in our results, so smoothness is lost in the plotted figures. We therefore fit each of the fluctuating \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\), \(\delta\tilde{N}_{\nu}\) and \(\frac{R_{e}}{R_{\mu}}-1\) with a second-order polynomial in \(\epsilon_{B}\) and \(\epsilon_{W}\) by the least-squares method to smooth the results. The constant terms of the second-order polynomials are set to zero, since all new-physics corrections must vanish at the \(\epsilon_{B}=\epsilon_{W}=0\) origin. Similar results from several independent runs with different random seeds verify the reliability of this fitting algorithm, so in this paper we show the fitted results in the figures.
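The smoothing step just described amounts to an ordinary least-squares fit; a minimal sketch (the variable names are ours) reads:

```python
import numpy as np

def quad_basis(eps_B, eps_W):
    # second-order polynomial basis without the constant term,
    # since all new-physics corrections vanish at eps_B = eps_W = 0
    return np.column_stack([eps_B, eps_W,
                            eps_B**2, eps_B*eps_W, eps_W**2])

def smooth_observable(eps_B, eps_W, obs):
    """Least-squares fit of a fluctuating observable (S-tilde, T-tilde,
    U-tilde, dN_nu or R_e/R_mu - 1) on the (eps_B, eps_W) plane."""
    A = quad_basis(eps_B, eps_W)
    coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
    return A @ coef          # smoothed values at the sampled points
```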
In principle, we should compare our results with the globally fitted results of all the \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\), \(\delta\tilde{N}_{\nu}\), and \(\frac{R_{e}}{R_{\mu}}-1\) parameters. Unfortunately, in the literature \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\) are fitted with the assumptions of universality and \(\delta\tilde{N}_{\nu}=0\), while \(\delta\tilde{N}_{\nu}\) is fitted with the assumption that the visible \(Z\)-propagator is not distorted, that is to say, \(\tilde{S}=0\), \(\tilde{T}=0\), \(\tilde{U}=0\). A complete global fit including all these parameters is far beyond our target, and recall that, lacking the published data, we are only able to show the "sensitivity" of a LEP-like electron-positron collider without performing a real fitting process. Therefore, in this paper we still show the corresponding "STU-\((1,2)\)-\(\sigma\)", "\(\delta\tilde{N}_{\nu}\)-\((1,2)\)-\(\sigma\)", "\(R_{e}\)/\(R_{\mu}\)-\((1,2)\)-\(\sigma\)" contours in each of the figures. The prefixes "STU-", "\(\delta\tilde{N}_{\nu}\)-", "\(R_{e}\)/\(R_{\mu}\)-" indicate that the 1-\(\sigma\) and 2-\(\sigma\) fitted results originate, respectively, from the globally fitted oblique parameters in Tab. 1, the indirect \(N_{\nu}\) results from Ref. [49], and the \(R_{e}\) and \(R_{\mu}\) data with their uncertainties adopted from Ref. [49]. We should note, however, that these contours only characterize the sensitivity of the collider to this model, and should not be regarded as real constraints. Besides, people are more interested in the sensitivity of some proposed future colliders. According to Ref. [52], the CEPC is expected to significantly improve the uncertainties of the electroweak precision measurements. With the expected sensitivities published in Ref. [52] around the \(Z\)-pole, we also plot the "CEPC-ST-\((1,2)\)-\(\sigma\)", "CEPC-\(\delta\tilde{N}_{\nu}\)-\((1,2)\)-\(\sigma\)", "CEPC-\(R_{e}\)/\(R_{\mu}\)-\((1,2)\)-\(\sigma\)" contours within one plot for each \(\epsilon_{B}\)-\(\epsilon_{W}\) plane. Note that Ref. [52] only shows the expected \(S\)-\(T\) results in Figure 11.18, so we are forced to give up \(U\). Although in principle the \(U\)-sensitivity can be extracted from Tab. 11.16 of Ref. [52], the complete covariance matrix needed for a full fit is missing. Since we only target the collider's potential sensitivity, neglecting \(U\) is not expected to affect the final results significantly. Another subtlety is that Ref. [52] only gives \(R_{\mu}\), which is insufficient to estimate the \(R_{e}\)/\(R_{\mu}\) uncertainties. In fact, at a lepton collider, the \(e^{+}e^{-}\to e^{+}e^{-}\) channel data are slightly less precise than the corresponding \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) data because of the lower accuracy of the electron/positron trajectory measurements. It is also reasonable to assume that the CEPC \(R_{e}\) uncertainty decreases synchronously with the \(R_{\mu}\) uncertainty as the integrated luminosity increases, so the \(R_{e}\) uncertainty at the CEPC can be reasonably estimated by assuming that the ratio of the \(R_{e}\) and \(R_{\mu}\) uncertainties remains similar to the LEP results recorded in Ref. [52]. We now present our results for the three scenarios. ### Scenario I: \(\hat{Z}^{\prime}\) couples with invisible matter In this scenario, the width \(\Gamma_{\hat{Z}^{\prime}}\) for \(\hat{Z}^{\prime}\) decaying exclusively into invisible matter is regarded as an input parameter, so \(c_{\hat{Z}^{\prime}\hat{Z}^{\prime}}\) and \(c_{\hat{Z}\hat{Z}}\) are given by \[c_{\hat{Z}^{\prime}\hat{Z}^{\prime}}=-m_{\hat{Z}^{\prime}}\Gamma_{\hat{Z}^{\prime}},\quad c_{\hat{Z}\hat{Z}}=-m_{Z}^{\rm SM}\Gamma_{\hat{Z}}^{\rm SM}. \tag{26}\] Since the \(\hat{Z}^{\prime}\) has no coupling to SM fermions, there is no \(\hat{Z}^{\prime}\)-\(\hat{Z}\) self-energy diagram. Therefore we have \[c_{\hat{Z}^{\prime}\hat{Z}}=0. \tag{27}\] If \(\Gamma_{\hat{Z}^{\prime}}\) is close to \(\Gamma_{\hat{Z}}^{\rm SM}\), and \(m_{\hat{Z}^{\prime}}\) is also close to \(m_{Z}^{\rm SM}\), maximal \(Z^{\prime}\)-\(Z\) mixing might arise; however, the overlapping and interfering peaks still look like a single \(Z\)-pole. For convenience we define \[\lambda_{Z^{\prime}}=\frac{\Gamma_{\hat{Z}^{\prime}}}{\Gamma_{\hat{Z}}^{\rm SM}}, \tag{28}\] which is the ratio of the widths of the two "interaction eigenstates". We have tried several combinations of parameters, such as \(\delta m_{Z^{\prime}}=0,-0.1\) GeV and \(\lambda_{Z^{\prime}}=0.5,0.9\), and found that the results are quite similar as long as \(\hat{Z}^{\prime}\) and \(\hat{Z}\) are nearly degenerate. Therefore, we choose to plot the results for \(\lambda_{Z^{\prime}}=0.9\) and \(\delta m_{Z^{\prime}}=-0.1\) GeV in Fig. 2 as a representative example. ### Scenario II: \(\hat{Z}^{\prime}\) couples with the SM fermions universally among all three generations In this scenario, \(\hat{Z}^{\prime}\) couples directly with the SM quarks and leptons. General definitions of the coupling constants have been listed in (23). To compute \({\rm Im}\,\Pi_{\hat{Z}^{\prime}\hat{Z}}\), the fermions' couplings with the \(\hat{Z}_{\mu}\) fields are required.
\[\mathcal{L}\supset\hat{Z}_{\mu}\bar{\psi}^{f}\gamma^{\mu}(V_{\hat{Z}}^{f}-A_{\hat{Z}}^{f}\gamma^{5})\psi^{f},\quad f=u,d,l,\nu, \tag{29}\] where, according to the SM, \[\begin{split}&V_{\hat{Z}}^{u}=\tfrac{1}{4c_{\rm W}}-\tfrac{2s_{\rm W}^{2}}{3c_{\rm W}},\quad A_{\hat{Z}}^{u}=\tfrac{1}{4c_{\rm W}},\quad V_{\hat{Z}}^{d}=-\tfrac{1}{4c_{\rm W}}+\tfrac{s_{\rm W}^{2}}{3c_{\rm W}},\quad A_{\hat{Z}}^{d}=-\tfrac{1}{4c_{\rm W}},\\ &V_{\hat{Z}}^{l}=-\tfrac{1}{4c_{\rm W}}+\tfrac{s_{\rm W}^{2}}{c_{\rm W}},\quad A_{\hat{Z}}^{l}=-\tfrac{1}{4c_{\rm W}},\quad V_{\hat{Z}}^{\nu}=\tfrac{1}{4c_{\rm W}},\quad A_{\hat{Z}}^{\nu}=\tfrac{1}{4c_{\rm W}}.\end{split} \tag{30}\] Denoting \(V^{f}_{\hat{Z}^{\prime}}\equiv\frac{g_{f_{L}}+g_{f_{R}}}{2}\) and \(A^{f}_{\hat{Z}^{\prime}}\equiv\frac{g_{f_{L}}-g_{f_{R}}}{2}\), under the approximation that the decay products are massless, \(c_{\hat{Z}\hat{Z}}\), \(c_{\hat{Z}^{\prime}\hat{Z}}\) and \(c_{\hat{Z}^{\prime}\hat{Z}^{\prime}}\) are calculated to be \[\begin{split} c_{\hat{Z}\hat{Z}} &= -m_{Z}^{\rm SM}\Gamma^{\rm SM}_{\hat{Z}},\\ c_{\hat{Z}^{\prime}\hat{Z}^{\prime}} &= -\frac{(m_{Z}^{\rm SM})^{2}}{12\pi}\sum_{f\neq t}(V^{f}_{\hat{Z}^{\prime}}V^{f}_{\hat{Z}^{\prime}}+A^{f}_{\hat{Z}^{\prime}}A^{f}_{\hat{Z}^{\prime}}),\\ c_{\hat{Z}^{\prime}\hat{Z}} &= -\frac{(m_{Z}^{\rm SM})^{2}}{12\pi}\sum_{f\neq t}(V^{f}_{\hat{Z}^{\prime}}V^{f}_{\hat{Z}}+A^{f}_{\hat{Z}^{\prime}}A^{f}_{\hat{Z}}).\end{split} \tag{31}\]
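The sums in Eq. (31) are straightforward to evaluate numerically from the couplings of Eq. (30). In the sketch below, the colour factor 3 for quarks and the counting of generations (with the top quark excluded, as in the \(f\neq t\) sums) are made explicit as our own assumptions about how the sum over fermion species is carried out:

```python
import numpy as np

mZ_SM = 91.19                      # GeV, illustrative input
sW2 = 0.2312
cW = np.sqrt(1.0 - sW2)

# (V_Z, A_Z, colour factor, number of generations entering the f != t sum)
fermions = {'u':  ( 1/(4*cW) - 2*sW2/(3*cW),  1/(4*cW), 3, 2),
            'd':  (-1/(4*cW) +   sW2/(3*cW), -1/(4*cW), 3, 3),
            'l':  (-1/(4*cW) +   sW2/cW,     -1/(4*cW), 1, 3),
            'nu': ( 1/(4*cW),                 1/(4*cW), 1, 3)}

def c_coefficients(VA_Zp):
    """Eq. (31) for Z' couplings VA_Zp = {species: (V, A)}."""
    czpzp = czpz = 0.0
    for f, (V, A, Nc, ngen) in fermions.items():
        Vp, Ap = VA_Zp.get(f, (0.0, 0.0))
        czpzp += ngen * Nc * (Vp**2 + Ap**2)
        czpz += ngen * Nc * (Vp*V + Ap*A)
    pref = -mZ_SM**2 / (12*np.pi)
    return pref * czpzp, pref * czpz

# example: leptophilic vector couplings g_lL = g_lR = 0.001, i.e. V = 0.001
print(c_coefficients({'l': (0.001, 0.0)}))
```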
In this scenario, the direct coupling constants between \(\hat{Z}^{\prime}\) and the SM particles should be stringently constrained. Therefore \(c_{\hat{Z}^{\prime}\hat{Z}^{\prime}}\) and \(c_{\hat{Z}^{\prime}\hat{Z}}\) become much smaller than \(c_{\hat{Z}\hat{Z}}\). This actually suppresses the mixing angle, and in this case the \(\hat{Z}^{\prime}\)-like object cleaves a deep but narrow valley within the resonance induced by the \(Z\)-like object, which is later smeared by the ISR and beam momentum distribution effects, as plotted in Fig. 3 as an example. Figure 3: \(Z\) line-shape predicted by an example model in which \(\hat{Z}^{\prime}\) couples with the SM fermions. The left panel shows the results for "bare" electrons/positrons, while the right panel includes the ISR and beam momentum distribution effects. It is practically impossible for a lepton collider to scan every \(\sqrt{s}\) with sufficient resolution to resolve such an inconspicuous structure, so we still adopt the scattered \(\sqrt{s}=[88.2,\,89.2,\,90.2,\,91.2,\,92.2,\,93.2,\,94.2]\) GeV samples to extract the \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\), \(\delta\tilde{N}_{\nu}\) parameters. In this paper, we only consider two cases: in the first, \(\hat{Z}^{\prime}\) couples universally to all the leptons; in the second, inspired by the \(U(1)_{B-L}\) model, \(\hat{Z}^{\prime}\) couples to leptons and quarks with coupling constants in the ratio \(3:1\). For the first case, since we find that the chirality of the interaction terms defined in (23) does not affect the results to a significant extent, we only adopt the vector-type interaction pattern \(g_{l_{L}}=g_{l_{R}}\), and show an example with \(g_{l_{L}}=g_{l_{R}}=0.001\) in Fig. 4. Inspired by the \(U(1)_{B-L}\) models, we plot Fig. 5 as an example with \(g_{l_{L}}=g_{l_{R}}=-3g_{u_{L}}=-3g_{u_{R}}=-3g_{d_{L}}=-3g_{d_{R}}=0.0015\). One can again evaluate the LEP sensitivity as well as the estimated CEPC sensitivity on the \(\epsilon_{B}\)-\(\epsilon_{W}\) plane. ### Scenario III: \(\hat{Z}^{\prime}\) couples with the SM fermions depending on the generations The \(Z^{\prime}\) couplings to fermions might be generation-dependent, in which case the coefficients in (23) need to be changed into their generation-dependent version. Constrained by our current computational capacity and inspired by some prominent models, we only consider the case where \(\hat{Z}^{\prime}\) couples only with \(\mu\) and/or \(\tau\). The interaction terms are parametrized by \[\mathcal{L}_{\rm eff}\supset\hat{Z}^{\prime}_{\mu}\bar{\mu}\gamma^{\mu}\Big{(}\frac{g_{\mu_{L}}+g_{\mu_{R}}}{2}-\frac{g_{\mu_{L}}-g_{\mu_{R}}}{2}\gamma^{5}\Big{)}\mu+\hat{Z}^{\prime}_{\mu}\bar{\tau}\gamma^{\mu}\Big{(}\frac{g_{\tau_{L}}+g_{\tau_{R}}}{2}-\frac{g_{\tau_{L}}-g_{\tau_{R}}}{2}\gamma^{5}\Big{)}\tau. \tag{32}\] Since \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) might be affected by (32), we first delete all the \(\mu^{+}\mu^{-}\) terms in (17) to find the best-fitted point in our pseudo-SM template, and then \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\), \(\delta\tilde{N}_{\nu}\) are extracted again. To translate the definition in (24) into an expression in terms of our simulated data, we adopt \[\frac{R_{e}}{R_{\mu}}=\frac{\sigma^{\rm NP}_{\mu^{+}\mu^{-}}(\sqrt{s}=m_{Z})}{\sigma^{\rm NP}_{se^{+}e^{-}}(\sqrt{s}=m_{Z})}, \tag{33}\] where the subscript "\(s\)" in \(\sigma^{\rm NP}_{se^{+}e^{-}}(m_{Z})\) indicates the \(s\)-channel cross section of the \(e^{+}e^{-}\to e^{+}e^{-}\) channel. Certainly, \(\sigma^{\rm NP}_{se^{+}e^{-}}(m_{Z})\) is impossible to acquire straightforwardly due to the contamination from the \(t\)-, \(u\)-channels. Fortunately, in the pseudo-SM template the universality is preserved, and \(\sigma^{\rm pSM}_{\mu^{+}\mu^{-}}\) does not include the \(t\)-, \(u\)-channel contributions. Therefore, \(\sigma^{\rm pSM}_{\mu^{+}\mu^{-}}(\sqrt{s}=91.2\ {\rm GeV})\) adopted from the pseudo-SM template can be utilized in place of \(\sigma^{\rm NP}_{se^{+}e^{-}}(m_{Z})\), and we also take \(\sqrt{s}=91.2\) GeV to replace \(m_{Z}\), so \[\frac{R_{e}}{R_{\mu}}=\frac{\sigma^{\rm NP}_{\mu^{+}\mu^{-}}(\sqrt{s}=91.2\ {\rm GeV})}{\sigma^{\rm pSM}_{\mu^{+}\mu^{-}}(\sqrt{s}=91.2\ {\rm GeV})}. \tag{34}\] We extract the current \(\frac{R_{e}}{R_{\mu}}\) data from Ref. [49], and plot the 1-\(\sigma\) and 2-\(\sigma\) contours in the \(\epsilon_{B}\)-\(\epsilon_{W}\) plane. The CEPC sensitivity is extracted from Ref. [52]. Notice that, according to Ref. [52], the sensitivity of the "\(R_{l}\)" denoted there is actually that of \(R_{\mu}\). Usually the detector's sensitivities to electrons/positrons are a bit lower than those to muons, and it is reasonable to expect that the uncertainties of \(R_{e}\) and \(R_{\mu}\) at the same collider will decrease synchronously as the integrated luminosity rises. Therefore, we estimate the expected uncertainty of \(R_{e}\) at the CEPC by multiplying the expected uncertainty of \(R_{\mu}\) by the ratio of the uncertainties of \(R_{e}\) and \(R_{\mu}\) recorded in Ref. [49]; the expected uncertainty of \(\frac{R_{e}}{R_{\mu}}\) at the CEPC then follows. Inspired by the models in which \(Z^{\prime}\) couples with a particular generation of leptons, we show an example where \(\hat{Z}^{\prime}\) only couples with the second family of leptons in Fig. 6. Inspired by the \(U(1)_{L_{e}-L_{\mu}}\) models [75; 76; 77; 78; 79], we show an example in Fig. 7.
## V Summary

In a nearly-degenerate \(Z^{\prime}\)-\(Z\) system, not only the widths, or equivalently the imaginary parts of the self-energy diagrams of the individual fields, but also the indispensable "cross terms", that is to say, the imaginary parts of the self-energy diagrams connecting two different fields, play important roles in calculating the line-shape observables. After diagonalizing the "mass matrix" with the imaginary contributions included, the "SM-like \(Z\)" and the "\(Z^{\prime}\)" sometimes cannot be well discriminated, and they overlap coherently to form a single resonance-like object that might be recognized as a single particle. Relying on the effective field theory model in which many of the \(Z^{\prime}\) models can be accommodated, we simulate the shape of this resonance-like object, and follow the usual literature in utilizing the "oblique parameters" \(\tilde{S}\), \(\tilde{T}\), \(\tilde{U}\) and the "neutrino species deviation" \(\delta\tilde{N}_{\nu}\) to describe its shape. Comparing the results with the current data, mainly contributed by the LEP, one can estimate the sensitivity of the LEP in the \(\epsilon_{B}\)-\(\epsilon_{W}\) plane if the LEP data can be reanalyzed. As a paradigm of the future high-luminosity lepton colliders, the predicted sensitivity to this model at the CEPC is also evaluated. Besides, we also estimate \(R_{e}/R_{\mu}-1\) for the non-universality models, and compare it with the LEP and CEPC sensitivities. ###### Acknowledgements. We thank Chengfeng Cai, Zhao-Huan Yu, and Hong-Hao Zhang for helpful discussions. This work is supported in part by the National Natural Science Foundation of China under Grant No. 12005312, the Guangzhou Science and Technology Program under Grant No. 202201011556, and the Sun Yat-Sen University Science Foundation. ## Appendix A Resummation of the imaginary parts of the self-energy diagrams Let us start from a group of real scalar particles for simplicity, e.g., \(\phi_{i}\), \(i=1,2,\ldots,n\), with the mass matrix \({\cal M}_{s}^{2}=\{m_{ij}^{2}\}\), where \(m_{ij}^{2}=m_{ji}^{2}\) so that \(({\cal M}_{s}^{2})^{T}={\cal M}_{s}^{2}\), and the mass terms being \({\cal L}_{m}=-\frac{1}{2}\phi_{i}m_{ij}^{2}\phi_{j}\). The complete propagator of these scalar particles can be written in the form of a matrix \[\frac{i}{p^{2}I_{n\times n}-{\cal M}_{s}^{2}}\equiv i(p^{2}I_{n\times n}-{\cal M}_{s}^{2})^{-1}. \tag{A1}\] The usual diagonalization process to find the mass eigenstates of the \(\phi_{i}\)'s is equivalent to finding an orthogonal matrix \(V\) that diagonalizes the propagator, \[iV(p^{2}I_{n\times n}-{\cal M}_{s}^{2})^{-1}V^{-1}={\rm diag}\left[\frac{i}{p^{2}-m_{1}^{2}},\frac{i}{p^{2}-m_{2}^{2}},\ldots,\frac{i}{p^{2}-m_{n}^{2}}\right], \tag{A2}\] where \(V\) is a real matrix satisfying \(VV^{T}=V^{T}V=I\). Expanding the left-hand side of (A2) in powers of \({\cal M}_{s}^{2}\), we acquire \[iV(p^{2}I_{n\times n}-{\cal M}_{s}^{2})^{-1}V^{-1}=iV\frac{1}{p^{2}}\left(I+\sum_{t=1}^{\infty}\Big{(}\frac{{\cal M}_{s}^{2}}{p^{2}}\Big{)}^{t}\right)V^{-1}=i\frac{1}{p^{2}}\left[I+\sum_{t=1}^{\infty}\left(\frac{V{\cal M}_{s}^{2}V^{-1}}{p^{2}}\right)^{t}\right]=i\left(p^{2}I_{n\times n}-V{\cal M}_{s}^{2}V^{-1}\right)^{-1}. \tag{A3}\] It is then clear that \({\cal M}_{s}^{2}\) and the propagator (A1) must be diagonalized at the same time.
Including all the imaginary parts of the \(\phi_{i}\)-\(\phi_{j}\) self-energy diagrams introduces a resummation \[\sum_{t=0}^{\infty}\frac{i}{p^{2}I_{n\times n}-{\cal M}_{s}^{2}}\left[{\rm Im}\big{(}\Pi(p^{2})\big{)}\frac{i}{p^{2}I_{n\times n}-{\cal M}_{s}^{2}}\right]^{t}=\frac{i}{p^{2}I_{n\times n}-{\cal M}_{s}^{2}-i\,{\rm Im}(\Pi(p^{2}))}, \tag{A4}\] where \(-i\Pi(p^{2})\) is an \(n\times n\) symmetric matrix whose elements \(-i\Pi_{jk}(p^{2})\) are the (cross) self-energy diagrams connecting \(\phi_{j}\) and \(\phi_{k}\). Therefore, the complete diagonalization process described in (A3) should be replaced by \[iV\big{(}p^{2}I_{n\times n}-{\cal M}_{s}^{2}-i\,{\rm Im}(\Pi(p^{2}))\big{)}^{-1}V^{-1}=i\left\{p^{2}I_{n\times n}-V[{\cal M}_{s}^{2}+i\,{\rm Im}(\Pi(p^{2}))]V^{-1}\right\}^{-1}. \tag{A5}\] It is now clear that for the complete diagonalization process including all the width information, one should in turn diagonalize \({\cal M}_{s}^{2}+i\,{\rm Im}(\Pi(p^{2}))\) at each momentum \(p\) instead of the mere \({\cal M}_{s}^{2}\). Again, it is easy to verify that \({\cal M}_{s}^{2}+i\,{\rm Im}(\Pi(p^{2}))\) is a complex symmetric matrix, which guarantees the possibility of a successful diagonalization by a complex orthogonal matrix \(V=(V^{T})^{-1}\). If all the scalars are nearly degenerate around the mass \(m_{d}\), a good approximation is to set \(p^{2}=m_{d}^{2}\), preserving the accuracy of the near-shell behaviour of the propagators. This is a generalization of the Breit-Wigner propagator from a single particle to a nearly-degenerate group of particles. Since we are discussing real scalars, and the complex orthogonal \(V\) usually contains not only real numbers, if we treat \(V\) as the matrix recombining \(\phi_{1,2,\ldots,n}\) into "mass eigenstates" \(\phi^{\prime}_{j}=V_{jk}\phi_{k}\) as usual, then \(\phi^{\prime}_{j}\) is something "complex" that cannot be regarded as a "complex scalar field". Therefore, we remind the reader that (A5) cannot be understood as equivalent to diagonalizing the scalar fields, although we sometimes still apply this less rigorous terminology for brevity. For the \(W^{3}\)-\(B\)-\(Z^{\prime}\) system, the propagators are accompanied by a Lorentz structure, \[\frac{ig_{\mu\nu}}{p^{2}I_{3\times 3}-{\cal M}_{V}^{2}}\equiv ig_{\mu\nu}(p^{2}I_{3\times 3}-{\cal M}_{V}^{2})^{-1}. \tag{A6}\] Here we adopt the Feynman gauge. In principle, Goldstone propagators should also be considered. However, around the \(Z\)-pole, only light leptons and quarks can be produced on-shell. Even the heaviest of these, the \(b\) quark, has a mass of only \(m_{b}\sim 3\)-\(5\) GeV, so the Goldstone/Higgs contributions are suppressed by a factor \(\left(\frac{m_{b}}{v}\right)^{2}\ll 1\) and can be safely neglected. Due to Lorentz covariance, the one-loop self-energy diagram of the vector bosons can be decomposed and parametrized as \[i\Pi^{\mu\nu}_{3\times 3}(p^{2})=i\Pi_{3\times 3}(p^{2})g^{\mu\nu}+i\Pi^{\prime}_{3\times 3}(p^{2})p^{\mu}p^{\nu}. \tag{A7}\] Then we can follow (A4) to resum the vector bosons' propagators. Notice that the masses of our initial and final state particles are negligible, since these particles are ultra-relativistic, so any \(p^{\mu}\) appearing in (A7) eventually contracts with the fermionic propagators of the external legs. The Ward-Takahashi identity in the broken phase [80] transmutes this into a Goldstone propagator, whose contribution is again suppressed by the smallness of the Yukawa couplings.
Therefore, we are able to neglect the \(p^{\mu}p^{\nu}\) terms in (A7), and finally write down the resummed propagators, when only the imaginary parts of (A7) are considered, as \[ig^{\mu\nu}\left[p^{2}I_{3\times 3}-{\cal M}_{V}^{2}-i\,{\rm Im}(\Pi_{3\times 3}(p^{2}))\right]^{-1}. \tag{A8}\] In principle, \({\rm Im}(\Pi_{3\times 3}(p^{2}))\) should be computed for each \(p\). However, practical event generators are not designed for such propagators. If we simply fix \(p^{2}\approx(m_{Z}^{\rm SM})^{2}\) in (A8), the off-shell \(s\)-channel photon decay contributions are included, but the softer \(t\)-channel photon propagators might also become massive. To avoid a massive photon, we tentatively switch off the mixing terms, and diagonalize \[\mathcal{M}_{V}^{\rm SM2} = \begin{pmatrix}m_{\hat{Z}^{\prime}}^{2}&0&0\\ 0&\frac{\hat{g}^{\prime 2}}{4}\hat{v}^{2}&-\frac{\hat{g}^{\prime}\hat{g}}{4}\hat{v}^{2}\\ 0&-\frac{\hat{g}^{\prime}\hat{g}}{4}\hat{v}^{2}&\frac{\hat{g}^{2}}{4}\hat{v}^{2}\end{pmatrix}. \tag{A9}\] Switching off the mixing terms does not affect the self-energy calculations up to the one-loop order. Diagonalizing (A9) requires the matrix \(V_{\rm SM}\) defined in (14); \(V_{\rm SM}^{T}\mathcal{M}_{V}^{\rm SM2}V_{\rm SM}\) gives \[{\rm diag}[m_{\hat{Z}^{\prime}}^{2},m_{\hat{Z}}^{2},0], \tag{A10}\] corresponding to the masses of the "\(\hat{Z}^{\prime}\)", the "SM-like \(\hat{Z}\)" (with the "hat" symbols distinguishing them from the true SM mass eigenstates), and the photon "eigenstate", respectively. Here \(m_{\hat{Z}}^{2}=\frac{\hat{g}^{2}+\hat{g}^{\prime 2}}{4}\hat{v}^{2}\). It is actually \(\hat{Z}^{\prime}\) and \(\hat{Z}\) that are nearly degenerate and might strongly mix, while the photon lies far from the \(\hat{Z}^{\prime}\)-\(\hat{Z}\) mass spectrum, so we only have to consider \(\Pi_{\hat{Z}\hat{Z}}(p^{2})\), \(\Pi_{\hat{Z}^{\prime}\hat{Z}^{\prime}}(p^{2})\) and \(\Pi_{\hat{Z}^{\prime}\hat{Z}}(p^{2})\), whose contributions cannot be treated perturbatively. With the definition \(c_{XY}\equiv{\rm Im}\left\{\Pi_{XY}(p^{2}\approx m_{X,Y}^{2})\right\}\), (A10) becomes \[V_{\rm SM}^{T}\mathcal{M}_{V}^{\rm SM2}V_{\rm SM}+iC_{\hat{Z}/\hat{Z}^{\prime}}=\begin{pmatrix}m_{\hat{Z}^{\prime}}^{2}+ic_{\hat{Z}^{\prime}\hat{Z}^{\prime}}&ic_{\hat{Z}^{\prime}\hat{Z}}&0\\ ic_{\hat{Z}^{\prime}\hat{Z}}&m_{\hat{Z}}^{2}+ic_{\hat{Z}\hat{Z}}&0\\ 0&0&0\end{pmatrix}, \tag{A11}\] where \(iC_{\hat{Z}/\hat{Z}^{\prime}}\) denotes the corrections from the imaginary parts of the self-energy diagrams.
Then we utilize \(V_{\rm SM}^{-1}=V_{\rm SM}^{T}\) to restore (A11) to its form in the "interaction-eigenstate basis", \[V_{\rm SM}(V_{\rm SM}^{T}\mathcal{M}_{V}^{\rm SM2}V_{\rm SM}+iC_{\hat{Z}/\hat{Z}^{\prime}})V_{\rm SM}^{T}=\begin{pmatrix}m_{\hat{Z}^{\prime}}^{2}+ic_{\hat{Z}^{\prime}\hat{Z}^{\prime}}&-i\frac{\hat{g}^{\prime}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}}c_{\hat{Z}^{\prime}\hat{Z}}&i\frac{\hat{g}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}}c_{\hat{Z}^{\prime}\hat{Z}}\\ -i\frac{\hat{g}^{\prime}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}}c_{\hat{Z}^{\prime}\hat{Z}}&\frac{\hat{g}^{\prime 2}}{4}\left(\hat{v}^{2}+\frac{4ic_{\hat{Z}\hat{Z}}}{\hat{g}^{\prime 2}+\hat{g}^{2}}\right)&-\frac{\hat{g}^{\prime}\hat{g}}{4}\left(\hat{v}^{2}+\frac{4ic_{\hat{Z}\hat{Z}}}{\hat{g}^{\prime 2}+\hat{g}^{2}}\right)\\ i\frac{\hat{g}}{\sqrt{\hat{g}^{\prime 2}+\hat{g}^{2}}}c_{\hat{Z}^{\prime}\hat{Z}}&-\frac{\hat{g}^{\prime}\hat{g}}{4}\left(\hat{v}^{2}+\frac{4ic_{\hat{Z}\hat{Z}}}{\hat{g}^{\prime 2}+\hat{g}^{2}}\right)&\frac{\hat{g}^{2}}{4}\left(\hat{v}^{2}+\frac{4ic_{\hat{Z}\hat{Z}}}{\hat{g}^{\prime 2}+\hat{g}^{2}}\right)\end{pmatrix}. \tag{A12}\] Supplementing (A12) with the mixing terms \(\delta m^{2}\) and \(\delta v^{2}\) appearing in (5), we then acquire (8). As we have mentioned, we include the imaginary parts of the diagrams in the first panel of Fig. 8, while neglecting the rest of them, which involve photon external leg(s). Since the mass of the photon is far from the nearly-degenerate \(Z^{\prime}\)-\(Z\) system, one finds it possible to treat these photon-involved terms perturbatively in (A11) and (A12), in contrast with the non-perturbative \(\hat{Z}\)-\(\hat{Z}^{\prime}\) mixing terms. Neglecting the photon-involved terms gives an error of \(\sim\left[{\rm Im}\big{(}\Pi_{\gamma\hat{Z}^{(\prime)}}\big{)}/m_{\hat{Z}}^{2}\right]^{2}\), which can be ignored at our current stage of theoretical discussion. In the practical event simulations that we have performed, we first switch off all the mixing terms and calculate the width of the SM \(Z\) boson with the event generator, from which we extract \(c_{\hat{Z}\hat{Z}}\). Then we compute \(c_{\hat{Z}^{\prime}\hat{Z}}\) and \(c_{\hat{Z}^{\prime}\hat{Z}^{\prime}}\) by hand according to the different model setups. After diagonalizing (8) with our own programs, we acquire the "masses" and "widths" of the "mass eigenstates", as well as the rotated "coupling constants", to be input into the event generator for further simulations; this is equivalent to diagonalizing the propagator matrix (A8) to calculate the amplitudes.
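As a numerical illustration of the appendix procedure, the sketch below diagonalizes a \(2\times 2\) complex symmetric matrix of the form \(\mathcal{M}^{2}+iC\) by a complex orthogonal matrix and reads off the "masses" and "widths"; the input numbers are illustrative placeholders, not a specific model point.

```python
import numpy as np

M2 = np.array([[8320.0,   10.0],
               [  10.0, 8315.0]])      # mass-squared block, GeV^2
C = np.array([[-205.0,  -30.0],
              [ -30.0, -227.0]])       # c_XY = Im Pi_XY, with c = -m*Gamma

A = M2 + 1j*C                          # complex symmetric matrix
w, V = np.linalg.eig(A)
# rescale eigenvectors so that V^T V = 1: for a generic complex symmetric
# matrix the eigenvectors are orthogonal under the bilinear form v^T v
V = V / np.sqrt(np.diag(V.T @ V))
assert np.allclose(V.T @ V, np.eye(2))
assert np.allclose(V.T @ A @ V, np.diag(w))

masses = np.sqrt(w.real)               # "masses" of the mass eigenstates
widths = -w.imag / masses              # from Im(eigenvalue) = -m*Gamma
print(masses, widths)
```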
2310.00095
Theory of angular momentum transfer from light to molecules
We present a theory describing interaction of structured light, such as light carrying orbital angular momentum, with molecules. The light-matter interaction Hamiltonian we derive is expressed through couplings between spherical gradients of the electric field and the (transition) multipole moments of a particle of any non-trivial rotation point group. Our model can therefore accommodate for an arbitrary complexity of the molecular and electric field structure, and can be straightforwardly extended to atoms or nanostructures. Applying this framework to ro-vibrational spectroscopy of molecules, we uncover the general mechanism of angular momentum exchange between the spin and orbital angular momenta of light, molecular rotation and its center-of-mass motion. We show that the non-zero vorticity of Laguerre-Gaussian beams can strongly enhance certain ro-vibrational transitions that are considered forbidden in the case of non-helical light. We discuss the experimental requirements for the observation of these forbidden transitions in state-of-the-art spatially-resolved spectroscopy measurements.
Mikhail Maslov, Georgios M. Koutentakis, Mateja Hrast, Oliver H. Heckl, Mikhail Lemeshko
2023-09-29T19:08:06Z
http://arxiv.org/abs/2310.00095v2
# Charge density model for the interaction of molecules with vortex beams ###### Abstract The interaction of molecules with the orbital angular momentum of light has long been argued to benefit structural studies and quantum control of molecular ensembles. We derive a general description of the light-matter interaction in terms of the coupling between spherical gradients of the electric field and an effective molecular charge density that exactly reproduces molecular multipole moments. Our model can accommodate an arbitrary complexity of the molecular structure and is applicable to any electric field, with the exception of tightly focused beams. Within this framework, we derive the general mechanism of angular momentum exchange between the spin and orbital angular momenta of light, molecular rotation and its center-of-mass motion. We demonstrate that vortex beams strongly enhance certain ro-vibrational transitions that are considered forbidden in the case of non-helical light. Finally, we discuss the experimental requirements for the observation of novel transitions in state-of-the-art spatially-resolved spectroscopy measurements. ## I Introduction Ro-vibrational spectroscopy is one of the most precise non-invasive methods for studying the properties of molecules [1; 2; 3]. It is used in biochemistry [4; 5; 6], medicine [7; 8], and studies of the atmosphere [9; 10] and the interstellar medium [11]. In molecular physics, it has been applied, e.g., to identify possible isomer structures of water [12; 13], and to study van der Waals interactions [14; 15] and hydrogen bonds [16; 17]. In a typical infrared spectrum, molecular rotational transitions are constrained by selection rules that stem from angular momentum (AM) conservation. Each photon of the probe light carries at most one quantum of the _spin_ AM [18; 19], which limits the range of transitions that can be observed using single-photon absorption spectroscopy. This range can be extended either by employing multi-photon schemes [20] or by increasing the AM carried by the single photon. In particular, in addition to the spin AM, light can also have _orbital_ angular momentum (OAM) in the form of a helical beam phase [18; 19]. Twisted photons with large values of OAM can be generated by combining fundamental laser modes [21; 22]: using a diffraction grating [23], a spiral phase plate [24] or a metasurface [25]. Outside the regime where the spin and OAM are strongly coupled [26], they represent two conceptually different characteristics of light. The former is related to the polarization of the electric field, while the latter is induced by its spatial gradient [18; 19]. Molecular spectroscopy with non-twisted (spin) light is a highly developed research field [1; 2; 3], but the interaction of molecules with the OAM is a subject of only recent theoretical [27; 28; 29; 30; 31; 32; 33] and experimental [34; 35] research. Aside from non-trivially modifying selection rules and extending the capabilities of ro-vibrational spectroscopy, vortex beams may also be a promising tool for quantum control and metrology. For instance, they could enable the creation of complex rotational molecular states using a single laser instead of a convoluted multiple-beam setup.
Relatively small gas-phase molecules offer several advantages for such applications, compared to readily available condensed-matter systems based, e.g., on liquid crystals [36], micro- and nanoparticles [37; 35], metamaterials [38; 39], molecular excitons [40] and microwave plasmonic resonators [41]. First, as opposed to fabricated systems, all molecules of the same kind are fundamentally identical, which offers an advantage in using them as qubits [42]. On the other hand, gas-phase molecules are almost entirely decoupled from their environment, which paves the way to cool them to ultracold temperatures while maintaining full control over their individual quantum states [43; 44], as well as to make use of very narrow spectroscopic linewidths for precision measurements and fundamental tests [45; 46]. Many of the previous theories on the interaction of molecules with vortex beams [27; 28; 29; 30] consider a system of point charges that emulates the molecule. There are two major drawbacks to this approach. First, the required number of effective charges and their amplitudes are difficult to justify. In particular, one needs to account for the redistribution of the electron density resulting from chemical bonding [47; 48], as well as the shielding of the nuclear charge by core electrons [49; 50; 51]. The second drawback is the mathematical complexity and narrow applicability of the resulting approach. Various implementations of point-charge models [27; 28; 29; 30] cannot be straightforwardly extended to more complex molecules, e.g., triatomic molecules like water, which is argued to have rich physics associated with its bending mode [52; 53]. To overcome these limitations, we model the molecule with a _continuous_ charge distribution. Similar to the Wigner-Eckart theorem in atomic spectroscopy [54], our model absorbs most of the theoretical complexity of the molecular structure into macroscopic molecular multipole moments (dipole, quadrupole, etc.). They can be measured experimentally [55; 56; 57; 58; 59] or extracted from quantum chemistry calculations [60; 61; 62; 63; 64]. The charge density is set to exactly reproduce the multipole moments, while embedding various internal molecular degrees of freedom. For simplicity, we consider only molecular rotations and a single vibrational mode, which corresponds to the case of a diatomic molecule. Nevertheless, our framework can scale to more complicated molecular geometries. It may also be extended to describe molecular chirality, thus being relevant for the recent studies on resolving enantiomers using twisted light [32; 33; 34; 35; 65]. Finally, our model clearly illustrates the transfer of the spin and OAM of light to the internal and external (center-of-mass) molecular degrees of freedom. The paper is organized as follows. In Sec. II, we introduce the molecular charge density model. We explain how various degrees of freedom can be embedded into the model. In Sec. III, we derive the general expression for the light-matter interaction that, provided the validity of the multipole expansion, is valid for _any molecular structure_ and _any electric field profile_, with the exception of tightly focused beams. Our analysis of this expression reveals general selection rules for the angular momentum exchange. In Sec. IV, we consider the Laguerre-Gaussian electric field profile, and calculate the contributions of various molecular rotational transitions to the total Hamiltonian.
We infer scenarios in which vortex beams offer an advantage over the non-twisted light. In Sec. V, we calculate the rates of ro-vibrational transitions. In Sec. VI, we suggest an experimental scheme that may be capable of measuring the enhancement of ro-vibrational transitions induced by the OAM of light. We conclude the paper with a discussion of possible applications and extensions of our model in Sec. VII. ## II Charge density model Reasonably sized molecules behave like point sources when interacting with infrared or optical radiation. This results from the separation of the characteristic length scales of the electric field and the molecular charge density. For instance, in the case of carbon monosulfide (CS), the wavelength corresponding to the lowest vibrational transition (\(\nu=0\to 1\)) is \(\lambda_{\rm vib}\approx 8\,\mu\)m, while the bond length is \(a\approx 1.5\,\)Å [59]. This large difference enables a well-defined multipole expansion [66; 67]. In particular, one can assume that the electric field interacts with multipole moments of the whole molecule rather than with individual constituent charges. This allows us to pay less attention to microscopic details of the molecule, which have a minor effect on its interaction with radiation. Instead, we develop an effective description of the molecule by defining the dependence of its multipole moments on the internal molecular degrees of freedom. We consider a reference frame, co-rotating with the molecule, with the origin at its center of mass (herewith called the _molecular frame_). In this frame, we suggest an effective model for the physical molecular charge density \(\rho_{\rm mol}(\mathbf{r}^{\prime})\). For simplicity, our implementation describes only the dependence of \(\rho_{\rm mol}(\mathbf{r}^{\prime})\) on vibrations and rotations of the molecule and omits other field-induced displacements of intramolecular charges, e.g., it is adiabatic with respect to electronic transitions. Nevertheless, other degrees of freedom can be embedded into the model following our example. For instance, in Sec. VII, we discuss a possible extension of our model to describe molecular chirality. We consider a charge distribution \(\rho(\mathbf{r}^{\prime})\) on a sphere of an infinitesimally small radius \(\chi\to 0\), centered at the molecular frame origin (see Fig. 1(a)). One can write down the expansion of \(\rho(\mathbf{r}^{\prime})\) in terms of real-valued spherical harmonics, \[\rho_{\rm mol}(\mathbf{r}^{\prime})\sim\rho(\mathbf{r}^{\prime})=\sum_{\lambda,\mu}\alpha_{\lambda,\mu}\mathcal{Y}_{\lambda,\mu}(\Omega^{\prime})\delta(r^{\prime}-\chi)/\chi^{\lambda+2}\,, \tag{1}\] where the coordinates \(\mathbf{r}^{\prime}=\{r^{\prime},\Omega^{\prime}\}=\{r^{\prime},\theta^{\prime},\phi^{\prime}\}\) and the real-valued harmonics \(\mathcal{Y}_{\lambda,\mu}(\Omega^{\prime})\) are defined as \[\mathcal{Y}_{\lambda,\mu}(\Omega^{\prime})=\begin{cases}i\big{(}Y_{\lambda,\mu}(\Omega^{\prime})-Y_{\lambda,\mu}^{*}(\Omega^{\prime})\big{)}/\sqrt{2}&\mu<0\\ Y_{\lambda,0}(\Omega^{\prime})&\mu=0\\ \big{(}Y_{\lambda,-\mu}(\Omega^{\prime})+Y_{\lambda,-\mu}^{*}(\Omega^{\prime})\big{)}/\sqrt{2}&\mu>0\end{cases}\,, \tag{2}\] where \(Y_{\lambda,\mu}(\Omega^{\prime})\) are complex-valued harmonics with the Condon-Shortley phase [68].
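The real-valued harmonics of Eq. (2) can be assembled directly from the complex ones, which carry the Condon-Shortley phase. A minimal sketch using SciPy (note SciPy's argument order, with the azimuthal angle before the polar one):

```python
import numpy as np
from scipy.special import sph_harm

def Y_real(lam, mu, theta, phi):
    """Real spherical harmonics of Eq. (2); theta is the polar and phi the
    azimuthal angle. SciPy's sph_harm takes (order, degree, azimuth, polar)
    and includes the Condon-Shortley phase."""
    if mu < 0:
        Y = sph_harm(mu, lam, phi, theta)
        return (1j * (Y - np.conj(Y)) / np.sqrt(2)).real
    if mu == 0:
        return sph_harm(0, lam, phi, theta).real
    Y = sph_harm(-mu, lam, phi, theta)
    return ((Y + np.conj(Y)) / np.sqrt(2)).real
```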
The choice of the charge distribution (1) is not accidental. The harmonics \(\mathcal{Y}_{\lambda,\mu}(\Omega^{\prime})\) of degree \(\lambda\) provide a complete basis for the irreducible representation of the SO(3) group of dimension \((2\lambda+1)\) [68; 69; 70]. Therefore, the expansion coefficients \(\alpha_{\lambda,\mu}\), called _spherical_ multipole moments of order \(\lambda\) [66], reflect the point-group symmetry of the molecule [71]. In an experiment, it is more practical to define _axial_ multipole moments, like \(d_{x}\), \(d_{y}\) and \(d_{z}\). They can be mapped to \(\alpha_{\lambda,\mu}\) by integrating the charge density \(\rho(\mathbf{r}^{\prime})\) with the Cartesian tensor of rank \(\lambda\). For instance, for a given dipole moment vector \(\mathbf{d}=\{d_{x},d_{y},d_{z}\}\), the condition \[\begin{pmatrix}d_{x}\\ d_{y}\\ d_{z}\end{pmatrix}=\lim_{\chi\to 0}\int\rho(\mathbf{r}^{\prime})\begin{pmatrix}x^{\prime}\equiv r^{\prime}\sin\theta^{\prime}\cos\phi^{\prime}\\ y^{\prime}\equiv r^{\prime}\sin\theta^{\prime}\sin\phi^{\prime}\\ z^{\prime}\equiv r^{\prime}\cos\theta^{\prime}\end{pmatrix}\mathrm{d}^{3}\mathbf{r}^{\prime}\,, \tag{3}\] yields \(\{\alpha_{1,-1},\alpha_{1,0},\alpha_{1,1}\}=\sqrt{\frac{3}{4\pi}}\{d_{y},d_{z},d_{x}\}\). Similar mappings can be derived for all higher-order multipole moment tensors, e.g., the quadrupole, as shown in Fig. 1(b). When calculating the corresponding integrals, which are similar to Eq. (3), the denominator \(1/\chi^{\lambda+2}\) in Eq. (1) balances out the Cartesian tensor and the Jacobian and assures that the integrals are finite in the limit \(\chi\to 0\). Besides describing the molecular point-group symmetry [71], multipole moments also reflect the longitudinal geometry of the molecule [60], including its vibrations. In particular, numerous studies have shown that the dipole [55; 56] and quadrupole [64] moments are different in the ground and excited vibrational states. Unlike the dependence of multipoles on the molecule's rotational symmetry, their vibrational dependence is difficult to formalize in the case of an arbitrary polyatomic molecule. One possible approach is to model vibrations with the modes of the quantum harmonic oscillator and define transition multipole moments between vibrational states. If one considers a molecule with a single vibrational degree of freedom, e.g., a diatomic molecule, the multipole moments can be expanded in a Taylor series with respect to the vibrational coordinate \(\hat{q}\). For instance, for the quadrupole moment matrix \(Q_{i,j}\), the expansion reads \[Q_{i,j}(\hat{q})=Q_{i,j}(\hat{q}=0)+\left[\frac{\partial Q_{i,j}}{\partial q}\right]_{\hat{q}=0}\hat{q}+\ldots=\sum_{\nu,\nu^{\prime}}Q^{\nu,\nu^{\prime}}_{i,j}(\hat{a}^{\dagger})^{\nu^{\prime}}(\hat{a})^{\nu}\,, \tag{4}\] where \(i,j\in\{x,y,z\}\), \(\nu^{(\prime)}\in\mathbb{Z}^{+}\) and \(\hat{q}=q_{0}(\hat{a}^{\dagger}+\hat{a})\), where \(\hat{a}^{(\dagger)}\) is the annihilation (creation) operator of the vibrational mode and \(q_{0}\) is the characteristic distance. The coefficient \(Q^{\nu,\nu^{\prime}}_{i,j}\) is the quadrupole moment matrix associated with the transition from the vibrational state \(|\nu\rangle\) to the state \(|\nu^{\prime}\rangle\). This approach can be straightforwardly generalized to the case of multiple contributing vibrational modes \(\hat{q}_{n}\).
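The mapping between spherical and axial moments below Eq. (3) can be verified numerically: placing the \(\lambda=1\) density on the sphere and integrating it against \((x^{\prime},y^{\prime},z^{\prime})\) must return the assumed dipole vector. A small self-contained sketch (the dipole values are arbitrary):

```python
import numpy as np

d = np.array([0.3, -0.1, 1.2])                 # assumed (d_x, d_y, d_z)
# alpha_{1,-1}, alpha_{1,0}, alpha_{1,1} from the mapping below Eq. (3)
alpha = np.sqrt(3/(4*np.pi)) * d[[1, 2, 0]]

th = np.linspace(0, np.pi, 400)
ph = np.linspace(0, 2*np.pi, 800)
TH, PH = np.meshgrid(th, ph, indexing='ij')
dOm = np.sin(TH) * (th[1] - th[0]) * (ph[1] - ph[0])

unit = np.stack([np.sin(TH)*np.cos(PH),        # x/r
                 np.sin(TH)*np.sin(PH),        # y/r
                 np.cos(TH)])                  # z/r
# explicit lambda = 1 real harmonics: Y_{1,(-1,0,1)} ~ (y, z, x)/r
Yreal = np.sqrt(3/(4*np.pi)) * unit[[1, 2, 0]]
rho = np.tensordot(alpha, Yreal, axes=1)

# Eq. (3) on the sphere (the radial delta and 1/chi^3 factors cancel)
d_rec = (rho * unit * dOm).sum(axis=(1, 2))
print(d_rec)                                   # ~ [0.3, -0.1, 1.2]
```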
Another approach to obtaining the transition multipole moments is to use various numerical _ab-initio_ methods [61; 62]. In Fig. 1(c), we present our calculation for the CS molecule based on the StoBe-deMon implementation [72] of density functional theory [73]. By using the local Perdew-Wang exchange-correlation potential of Ref. [74] and constraining the Kohn-Sham orbitals [73; 75] to be rotationally invariant around the interatomic axis, we obtain the adiabatic potential energy curve \(E^{\rm vib}(q)\) (markers). Here the vibrational coordinate \(q\) is the shift of the internuclear distance from its equilibrium value \(q_{\rm eq}\approx 1.552\,\text{\AA}\). The resulting curve can be reasonably approximated by the anharmonic Morse potential (solid line). We also calculate the vibrational levels \(E^{\rm vib}_{\nu}\) (dashed lines) and the corresponding wave functions \(\psi_{\nu}(q)\) (shades), which are asymmetric with respect to the equilibrium coordinate \(q_{\rm eq}\). In the inset of Fig. 1(c), we plot the dependence of the quadrupole moment \(Q_{zz}\) on the vibrational coordinate \(q\) (markers), which has a non-linear profile (cf. solid line). The transition quadrupole moment matrix can be straightforwardly obtained by calculating the integral \[Q^{\nu,\nu^{\prime}}_{i,j}=\int\psi^{*}_{\nu^{\prime}}(q)Q_{i,j}(q)\psi_{\nu}(q)\,{\rm d}q\,. \tag{5}\] In the case of a harmonic energy spectrum \(E^{\rm vib}(q)\) and symmetric wave functions \(\psi_{\nu(\nu^{\prime})}(q)\), the transition quadrupole moment can be approximated, to first order, by the derivative: \(Q_{i,j}^{\nu,\nu^{\prime}}\approx\left[\frac{\partial^{\nu+\nu^{\prime}}Q_{i,j}}{\partial q^{\nu+\nu^{\prime}}}\right]_{q=0}\). Figure 1: Summary of the effective charge density model. (a) The physical molecular charge distribution \(\rho_{\rm mol}(\mathbf{r}^{\prime})\) can be emulated by the charge density \(\rho(\mathbf{r}^{\prime})\) on a sphere with radius \(\chi\to 0\) in the molecular frame (left). The corresponding laboratory-frame charge density \(\rho(\mathbf{r},\hat{\Omega}_{\rm mol})\) can be obtained using the rotational operator \(\hat{\mathcal{D}}(\hat{\Omega}_{\rm mol})\), where \(\hat{\Omega}_{\rm mol}\) is the molecular orientation operator (right). (b) Spherical multipole moments \(\alpha_{\lambda,\mu}\), the coefficients in the expansion of \(\rho(\mathbf{r}^{\prime})\) over real-valued spherical harmonics \(\mathcal{Y}_{\lambda,\mu}(\Omega^{\prime})\), can be directly mapped to axial multipole moments, as exemplified here by the quadrupole moment matrix \(Q_{i,j}\), where \(i,j\in\{x,y,z\}\). (c) Transition quadrupole moments \(Q^{\nu,\nu^{\prime}}_{i,j}\) are calculated numerically using density functional theory. For the CS molecule, we obtain the adiabatic potential energy curve \(E^{\rm vib}(q)\) (markers) and the associated eigenenergies \(E^{\rm vib}_{\nu}\) (dashed lines) and eigenstates \(\psi_{\nu}(q)\) (shades), where \(q\) is the vibrational coordinate (main panel). Both the spectrum and eigenstates are asymmetric with respect to the equilibrium internuclear distance \(q_{\rm eq}\). We calculate the coordinate dependence of the quadrupole moment \(Q_{zz}(q)\) (inset). To obtain the transition quadrupole moment, we calculate the expectation value of \(Q_{zz}(q)\) with respect to the eigenstates \(\psi_{\nu(\nu^{\prime})}(q)\) (see Eq. (5)). If the potential energy curve \(E^{\rm vib}(q)\) is only slightly anharmonic, \(Q^{\nu,\nu^{\prime}}_{zz}\) can be approximated with the \((\nu+\nu^{\prime})\)'th derivative of \(Q_{zz}(q)\) at \(q=0\).
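The integral of Eq. (5) is easy to reproduce with a generic one-dimensional solver. A minimal sketch with a Morse potential and a model \(Q_{zz}(q)\) (all parameters are illustrative stand-ins, not the CS values obtained from DFT):

```python
import numpy as np

hbar2_2m = 0.5                        # hbar^2/(2m) in model units
D, a = 12.0, 1.0                      # Morse depth and range (assumed)
q = np.linspace(-2.0, 8.0, 1200)      # vibrational coordinate grid
dq = q[1] - q[0]

V = D * (1.0 - np.exp(-a*q))**2       # Morse potential, minimum at q = 0

# finite-difference Hamiltonian H = -hbar^2/(2m) d^2/dq^2 + V(q)
off = -hbar2_2m / dq**2 * np.ones(len(q) - 1)
H = np.diag(V + 2*hbar2_2m/dq**2) + np.diag(off, 1) + np.diag(off, -1)
E, psi = np.linalg.eigh(H)
psi /= np.sqrt(dq)                    # normalize so that sum |psi|^2 dq = 1

Qzz = 1.0 + 0.4*q - 0.03*q**2         # model Q_zz(q), assumed form

def Q_transition(nu, nu_p):
    """Transition quadrupole moment Q^{nu,nu'} of Eq. (5)."""
    return np.sum(psi[:, nu_p] * Qzz * psi[:, nu]) * dq

print(E[:3], Q_transition(0, 1))
```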
The dependence of the charge density \(\rho(\mathbf{r}^{\prime})\) on the molecular rotation becomes apparent after casting the molecular-frame spherical harmonics (with angle \(\Omega^{\prime}\)) in terms of the laboratory-frame angles \(\Omega\), \[Y_{\lambda,\mu}(\Omega^{\prime})=\hat{\mathcal{D}}\big{(}\hat{\Omega}_{\rm mol}\big{)}Y_{\lambda,\mu}(\Omega)=\sum_{\zeta}D_{\zeta,\mu}^{\lambda}\big{(}\hat{\Omega}_{\rm mol}\big{)}Y_{\lambda,\zeta}(\Omega)\,, \tag{6}\] where \(\hat{\mathcal{D}}\big{(}\hat{\Omega}_{\rm mol}\big{)}\) is the rotation operator (see Fig. 1(a)), \(D_{\zeta,\mu}^{\lambda}\big{(}\hat{\Omega}_{\rm mol}\big{)}\) are Wigner \(D\)-matrices [68] and \(\hat{\Omega}_{\rm mol}\) is the molecular orientation operator. Finally, to illustrate our model better, we consider a general diatomic heteronuclear molecule with the dipole moment vector \(\mathbf{d}=\{0,0,\hat{d}_{z}^{\rm vib}\}\) directed along the axis of the molecule. Nevertheless, tight focusing was shown to aid the OAM transfer, as discussed in Sec. VII, rendering the description of such light sources an important future extension to our theory. Without loss of generality, we consider a beam propagating along the \(z\)-axis. For such a beam, \(E_{0,z}=0\), which yields \((\mathbf{r}\cdot\mathbf{E}_{0})=\sqrt{4\pi/3}\,r\sum_{\sigma=\pm 1}\epsilon_{\sigma}Y_{1\sigma}(\Omega_{\mathbf{r}})\), where \(\epsilon_{\sigma}=(E_{0,x}+i\sigma E_{0,y})/\sqrt{2}\) expresses \(\mathbf{E}_{0}\) in terms of the circular polarization \(\sigma=\pm 1\). To proceed, we expand the spatial electric field profile \(E(\mathbf{R}+\eta\mathbf{r})\) around the center-of-mass position \(\mathbf{R}\), where \(r\ll R\). Unlike the previous studies [27; 28; 29; 30], which considered the Cartesian Taylor expansion, we employ the _spherical expansion_ [81; 82; 83] \[E(\mathbf{R}+\eta\mathbf{r})=\exp(\eta\mathbf{r}\cdot\mathbf{\nabla}_{\mathbf{R}})E(\mathbf{R})=\sum_{nlm}c_{n,l}(\eta r)^{2n+l}Y_{l,m}^{*}(\Omega_{\mathbf{r}})\big{[}\mathcal{R}_{l,m}(\mathbf{\nabla}_{\mathbf{R}})E(\mathbf{R})\big{]}\,, \tag{13}\] where \(c_{n,l}=\frac{2^{l+2}\pi\,\kappa^{2n}(l+n)!}{n!(2l+2n+1)!}\), \(n,l\geq 0\), \(m\in[-l,l]\) and \(\kappa\) is the wavenumber of the electric field. \(\mathcal{R}_{l,m}(\mathbf{\nabla}_{\mathbf{R}})\) are the solid harmonics of the gradient operator \(\mathbf{\nabla}_{\mathbf{R}}\), also known as spherical tensor gradient operators. Their detailed overview is the subject of Refs. [81; 82; 83]. For the sake of simplicity, we refer to the term \(\big{[}\mathcal{R}_{l,m}(\mathbf{\nabla}_{\mathbf{R}})E(\mathbf{R})\big{]}\) as the _spherical gradient_. For differentiable electric field profiles \(E(\mathbf{R})\) and specific values of \(l\) and \(m\), it can be calculated analytically, after expanding the solid harmonics in Cartesian coordinates, \[\mathcal{R}_{l,m}(\mathbf{\nabla}_{\mathbf{R}})=\sqrt{\frac{(2l+1)(l+m)!(l-m)!}{4\pi}}\times\sum_{k}\frac{(-\partial_{X}-i\partial_{Y})^{m+k}(\partial_{X}-i\partial_{Y})^{k}\partial_{Z}^{l-m-2k}}{2^{m+2k}(m+k)!k!(l-m-2k)!}\,, \tag{14}\] where \(\partial_{X}\), \(\partial_{Y}\) and \(\partial_{Z}\) are the Cartesian components of the gradient vector \(\mathbf{\nabla}_{\mathbf{R}}\), and \(\max(-m,0)\leq k\leq\lfloor\frac{l-m}{2}\rfloor\), with \(\lfloor x\rfloor\) being the floor function yielding the largest integer not exceeding \(x\).
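Eq. (14) can be implemented verbatim with a computer algebra system, which is convenient for checking spherical gradients of analytic beam profiles. A minimal SymPy sketch (the test field is an arbitrary example of ours):

```python
import sympy as sp

X, Y, Z = sp.symbols('X Y Z', real=True)

def spherical_gradient(l, m, E):
    """R_{l,m}(grad) applied to a SymPy expression E(X, Y, Z), Eq. (14)."""
    total = 0
    for k in range(max(-m, 0), (l - m)//2 + 1):
        term = sp.diff(E, Z, l - m - 2*k)        # dZ^{l-m-2k}
        for _ in range(k):                       # (dX - i dY)^k
            term = sp.diff(term, X) - sp.I*sp.diff(term, Y)
        for _ in range(m + k):                   # (-dX - i dY)^{m+k}
            term = -sp.diff(term, X) - sp.I*sp.diff(term, Y)
        total += term / (sp.Integer(2)**(m + 2*k) * sp.factorial(m + k)
                         * sp.factorial(k) * sp.factorial(l - m - 2*k))
    pref = sp.sqrt((2*l + 1)*sp.factorial(l + m)*sp.factorial(l - m)
                   / (4*sp.pi))
    return sp.simplify(pref * total)

# example: the l = 1, m = +1 gradient of a Gaussian profile
print(spherical_gradient(1, 1, sp.exp(-(X**2 + Y**2))))
```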
We further substitute the electric field expansion (13) into the Hamiltonian (12). We consider the molecular-frame charge density, given by Eq. (1), and apply the rotation rule (6) to obtain the charge density \(\rho(\mathbf{r})\) in the laboratory coordinates. After integrating over \(\eta\) and \(\mathbf{r}\) and taking the limit \(\chi\to 0\), we obtain the final relation for the interaction Hamiltonian \[\mathcal{H}_{\text{int}}=\sum_{\begin{subarray}{c}nlm\\ \sigma\mu\end{subarray}}\gamma_{n,l,m,\sigma}\hat{\alpha}_{\lambda,\mu}\epsilon_{\sigma}\big{[}\mathcal{R}_{l,m}(\mathbf{\nabla}_{\mathbf{R}})E(\mathbf{R})\big{]}\Big{|}_{\mathbf{R}=\hat{\mathbf{R}}}\begin{cases}D_{m-\sigma,\mu}^{\lambda}\big{(}\hat{\Omega}\big{)}-(-1)^{\mu}D_{m-\sigma,-\mu}^{\lambda}\big{(}\hat{\Omega}\big{)}&\mu<0\\ \sqrt{2}D_{m-\sigma,0}^{\lambda}\big{(}\hat{\Omega}\big{)}&\mu=0\\ D_{m-\sigma,-\mu}^{\lambda}\big{(}\hat{\Omega}\big{)}+(-1)^{-\mu}D_{m-\sigma,\mu}^{\lambda}\big{(}\hat{\Omega}\big{)}&\mu>0\end{cases}\;+\;\text{H.c.}\,, \tag{15}\] where \(\gamma_{n,l,m,\sigma}=c_{n,l}C_{l,0;1,0}^{\lambda,0}C_{\lambda,m-\sigma;1,\sigma}^{l,m}/(\lambda\sqrt{2})\), the multipole order is \(\lambda=2n+l+1\), and \(C_{l,m;l^{\prime},m^{\prime}}^{L,M}\) are the Clebsch-Gordan coefficients [68]. The operators \(\hat{\mathbf{R}}\), \(\hat{\alpha}_{\lambda,\mu}\) and \(\hat{\Omega}\!\equiv\!\hat{\Omega}_{\text{mol}}\) act, respectively, on the center-of-mass, molecular vibrational and rotational states. The Hamiltonian (15) is the major result of our model. Provided the validity of the multipole expansion, it holds for _any molecular charge density_ and _any profile of the electric field_, with the exception of tightly focused beams. \(\mathcal{H}_{\text{int}}\) already reveals the angular momentum exchange and the corresponding selection rules. To illustrate this fact, we consider cylindrical coordinates \(\mathbf{R}=\{R,\Phi,Z\}\) and expand the electric field profile into a Fourier series with respect to the polar angle \(\Phi\): \(E(\mathbf{R})=\sum_{M}E_{M}(R,Z)e^{iM\Phi}\). Each Fourier component \(E_{M}(R,Z)e^{iM\Phi}\) is an eigenfunction of the \(\hat{L}_{z,\Phi}=-i\hbar\,\partial/\partial\Phi\) operator and can be associated with the _magnetic quantum number_ \(M\). The spherical gradient can be straightforwardly calculated using Eq. (14), \[\big{[}\mathcal{R}_{l,m}(\mathbf{\nabla}_{\mathbf{R}})E(\mathbf{R})\big{]}=\sum_{M}\tilde{E}_{M,l}(R,Z)e^{i(M+m)\Phi}\,. \tag{16}\] As a result of applying the spherical gradient operator, the Fourier amplitudes change, \(E_{M}(R,Z)\!\to\!\tilde{E}_{M,l}(R,Z)\), and the magnetic quantum numbers shift, \(M\to M+m\). A non-zero spherical gradient for given \(l\) and \(m\) represents the transfer of \(M+m\) quanta of angular momentum from the beam to the center-of-mass motion (\(\hat{\mathbf{R}}\)). The ability to directly associate a derivative from the Taylor series with the angular momentum transfer is the main benefit of using the spherical expansion in Eq. (13) instead of the Cartesian one. Similarly, the Wigner D-matrices \(D_{m-\sigma,\pm\mu}^{\lambda}\big{(}\hat{\Omega}_{\rm mol}\big{)}\) describe the transfer of \(\sigma-m\) quanta of AM from the electric field to the molecular rotation (\(\hat{\Omega}_{\rm mol}\)). Together, these transfers satisfy the angular momentum conservation: \(M+\sigma\) quanta of AM of the electric field get redistributed into \(M+m\) quanta of center-of-mass AM and \(\sigma-m\) quanta of molecular AM.
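The coefficients \(\gamma_{n,l,m,\sigma}\) entering Eq. (15) involve only standard Clebsch-Gordan algebra and can be tabulated symbolically. A small sketch, with the prefactor \(c_{n,l}\) of Eq. (13) passed in separately:

```python
import sympy as sp
from sympy.physics.quantum.cg import CG

def gamma(n, l, m, sigma, c_nl=1):
    """gamma_{n,l,m,sigma} of Eq. (15), up to the supplied prefactor c_{n,l}."""
    lam = 2*n + l + 1                               # multipole order
    cg1 = CG(l, 0, 1, 0, lam, 0).doit()             # C^{lam,0}_{l,0;1,0}
    cg2 = CG(lam, m - sigma, 1, sigma, l, m).doit() # C^{l,m}_{lam,m-sig;1,sig}
    return sp.simplify(c_nl * cg1 * cg2 / (lam * sp.sqrt(2)))

# quadrupole sector (lambda = 2, i.e. n = 0, l = 1), sigma = +1:
for m in (-1, 0, 1):
    print(m, gamma(0, 1, m, 1))
```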
Apart from the angular momentum exchange, Eq. (15) implies two major corollaries. If one considers only the electric-dipole component of the light-matter interaction, the only remaining terms in Eq. (15) are those with \(\lambda=1\), thus \(n=l=m=0\). The spherical gradient \(\big{[}\mathcal{R}_{0,0}(\mathbf{\nabla}_{\mathbf{R}})E(\mathbf{R})\big{]}\propto E(\mathbf{R})\) is independent of the summation indices and can be extracted outside the sum. This means that the spatial profile of the electric field \(E(\mathbf{R})\) is simply an amplitude modifier to the interaction Hamiltonian. It is unable to affect molecular rotational transitions. In this case, molecular transitions are controlled only by the circular polarization \(\sigma\), which was shown to be associated with the _spin angular momentum_ of light [18; 19]. This result is in agreement with the previous studies [27; 29]. In particular, for a diatomic molecule, described by the charge density (7), the contribution to the light-matter interaction from the electric dipole reads \[\mathcal{H}_{\rm int}^{\rm dip}=-\sqrt{\frac{4\pi}{3}}\tilde{d}_{z}^{\rm vib}E(\hat{\mathbf{R}})\sum_{\sigma}\epsilon_{\sigma}Y_{1,\sigma}(\hat{\Omega})\,, \tag{17}\] which is identical to the result of Ref. [29]. In contrast to this, if one considers the quadrupole interaction term, with \(\lambda=2\), thus \(n=0\) and \(l=1\), the Hamiltonian (15) couples the spherical gradient \(\big{[}\mathcal{R}_{1,m}(\mathbf{\nabla}_{\mathbf{R}})E(\mathbf{R})\big{]}\) to the D-matrix of the molecular rotation angle \(\hat{\Omega}\). For a diatomic molecule, the electric-quadrupole interaction term is given by \[\mathcal{H}_{\rm int}^{\rm quad}=\pi\sqrt{2/3}\,\hat{Q}_{z,z}^{\rm vib}\sum_{m\sigma}\epsilon_{\sigma}C_{2,m-\sigma;1,\sigma}^{1,m}\big{[}\mathcal{R}_{1,m}(\mathbf{\nabla}_{\mathbf{R}})E(\mathbf{R})\big{]}\Big{|}_{\mathbf{R}=\hat{\mathbf{R}}}Y_{2,m-\sigma}^{*}(\hat{\Omega})\,. \tag{18}\] It clearly illustrates the coupling between the center-of-mass coordinate \(\hat{\mathbf{R}}\) and the molecular rotation angle \(\hat{\Omega}\), mediated by the electric field (via the summation index \(m\)). ## IV Laguerre-Gaussian beams and enhancement of rotational transitions To analyze the angular momentum exchange revealed by the Hamiltonian (15) in more detail, one needs to specify the spatial profile of the electric field \(E(\mathbf{R})\). We consider an electric field that can be expressed as a combination of Laguerre-Gaussian (LG) modes. These modes are solutions to the paraxial Helmholtz equation in cylindrical coordinates \(\mathbf{R}=\{R,\Phi,Z\}\). The magnitude of the electric field in a LG mode is given by [22] \[E_{PM}(\mathbf{R})=\gamma_{PM}\bigg{(}\frac{R}{\omega_{Z}}\bigg{)}^{|M|}\exp\!\left[-\frac{R^{2}}{\omega_{Z}^{2}}\right]\!e^{-iM\Phi}e^{-i\kappa Z}\times\Bigg{\{}\frac{\omega_{0}}{\omega_{Z}}\exp\!\left[-i\bigg{(}\frac{\kappa R^{2}}{2\mathbb{R}_{Z}}-\psi_{Z}\bigg{)}\right]\!\mathcal{L}_{P}^{|M|}\bigg{(}\frac{2R^{2}}{\omega_{Z}^{2}}\bigg{)}\Bigg{\}}\,, \tag{19}\] where \(\kappa=2\pi n/\lambda_{\rm beam}\) is the wavenumber, \(n\) is the refractive index, \(\lambda_{\rm beam}\) is the wavelength of light, and \(\mathcal{L}_{i}^{j}(x)\) with \(i\in\mathbb{Z}^{+},\,j\in\mathbb{R}\) are the generalized Laguerre polynomials. The beam is focused at \(Z=0\), with the waist function \(\omega_{Z}=\omega_{0}\sqrt{1+(Z/Z_{R})^{2}}\) along the \(z\)-axis, where \(\omega_{0}\) is the waist at the focus and \(Z_{R}=\pi\omega_{0}^{2}n/\lambda_{\rm beam}\) is the Rayleigh length. The radius of the wavefront curvature is \(\mathbb{R}_{Z}=Z(1+(Z_{R}/Z)^{2})\) and the Gouy phase is \(\psi_{Z}=(2P+|M|+1)\,{\rm atan}(Z/Z_{R})\). The electric field (19) is normalized in the sense of the Dirac delta-function with respect to the axial coordinate \(Z\), and to unity with respect to the coordinates \(R\) and \(\Phi\), with the normalization constant \(\gamma_{PM}=\sqrt{2^{|M|+1}P!/(\pi(P+|M|)!)}\).
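For numerical work, Eq. (19) is convenient to code up directly; a minimal sketch (the default parameters are arbitrary examples of ours):

```python
import numpy as np
from scipy.special import eval_genlaguerre, factorial

def E_LG(R, Phi, Z, P=0, M=1, w0=1e-6, lam_beam=532e-9, n_ref=1.0):
    """Laguerre-Gaussian mode of Eq. (19)."""
    kappa = 2*np.pi*n_ref/lam_beam
    ZR = np.pi*w0**2*n_ref/lam_beam                  # Rayleigh length
    wZ = w0*np.sqrt(1 + (Z/ZR)**2)                   # waist function
    inv_RZ = Z/(Z**2 + ZR**2)                        # 1/R_Z, finite at Z = 0
    psiZ = (2*P + abs(M) + 1)*np.arctan(Z/ZR)        # Gouy phase
    gam = np.sqrt(2**(abs(M) + 1)*factorial(P)
                  / (np.pi*factorial(P + abs(M))))
    return (gam*(R/wZ)**abs(M)*np.exp(-R**2/wZ**2)
            *np.exp(-1j*(M*Phi + kappa*Z))*(w0/wZ)
            *np.exp(-1j*(kappa*R**2*inv_RZ/2 - psiZ))
            *eval_genlaguerre(P, abs(M), 2*R**2/wZ**2))
```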
The radius of the wavefront curvature is \(\mathbb{R}_{Z}=Z(1+(Z_{R}/Z)^{2})\), and the Gouy phase is \(\psi_{Z}=(2P+|M|+1)\,{\rm atan}(Z/Z_{R})\). The electric field (19) is normalized in the sense of the Dirac delta-function with respect to the axial coordinate \(Z\), and to unity with respect to the coordinates \(R\) and \(\Phi\), with the normalization constant \(\gamma_{PM}=\sqrt{2^{|M|+1}P!/(\pi(P+|M|)!)}\). The electric field \(E_{PM}(\mathbf{R})\) is an eigenfunction of the \(\hat{L}_{z,\Phi}\) operator, defined in Sec. III, with the magnetic quantum number \(M\). Eq. (19) with \(M=0\) describes a Gaussian beam without OAM and serves within this manuscript as a benchmark representation of non-helical light. The action of the spherical gradient on the electric field \(E_{PM}(\mathbf{R})\) is readily given by Eq. (16), where the transformed Fourier amplitude can be calculated using Eq. (14). However, the resulting expression is cumbersome and is therefore omitted here. Instead, to illustrate our findings in a simple way, we consider an approximation to the electric field (19), \[E_{M}^{\rm foc}(\mathbf{R})=\gamma_{0M}\left(R/\omega_{0}\right)^{|M|}e^{-R^{2}/\omega_{0}^{2}}e^{-iM\Phi}e^{-i\kappa Z}\,. \tag{20}\] This corresponds to an LG beam with \(P=0\) within the _in-focus approximation_, which is valid when \(Z\ll Z_{R}\), so that \(\omega_{Z}\to\omega_{0}\), \(1/\mathbb{R}_{Z}\to 0\) and \(\psi_{Z}\to 0\). Previous studies [28; 29] have already considered a similar spatial profile, \[E_{M}^{\rm foc,cent}(\mathbf{R})=\frac{1}{\sqrt{|M|!}}\left(\frac{R}{\omega_{0}}\right)^{|M|}e^{-iM\Phi}e^{-i\kappa Z}\,, \tag{21}\] which corresponds to Eq. (20) with the additional _in-center approximation_, valid when \(R\ll\omega_{0}\). The major drawback of the approximation (21) is the divergence of the electric field at large distances \(R\). While this problem can be solved by introducing a cutoff \(\omega_{\rm cut}\sim\omega_{0}\), the value of this cutoff was argued to affect the result of numerical calculations [84]. Moreover, the light-matter interaction was shown to depend on the average position of the molecule's center of mass with respect to the cutoff distance \(\omega_{\rm cut}\) [29]. Therefore, to ensure the validity of our findings, we choose the cutoff-free approximation (20) to the electric field. Using the approximation \(E_{M}^{\rm foc}(\mathbf{R})\), we examine the OAM transfer to the molecule. In the Hamiltonian (15), we choose the lowest-multipole term that couples the gradient of the electric field to the molecular rotation. Namely, we consider the electric-quadrupole interaction, i.e., the general case of Eq. (18). We calculate the spherical gradients that describe the OAM transfer, i.e., the gradients with \(m=\pm 1\), since the \(m=0\) gradient describes purely the spin transfer. The gradients read \[\big{[}\mathcal{R}_{1,\pm 1}(\mathbf{\nabla}_{\mathbf{R}})E_{M}^{\rm foc}(\mathbf{R})\big{]}=-\sqrt{\frac{3}{8\pi}}e^{\pm i\Phi}\bigg{[}\frac{M}{R}\mp\bigg{(}\frac{|M|}{R}-\frac{2R}{\omega_{0}^{2}}\bigg{)}\bigg{]}E_{M}^{\rm foc}(\mathbf{R})\,. \tag{22}\] In what follows, the absolute value of the spherical gradient (22) serves as a quantitative measure of the OAM transfer. The three terms in the square brackets of Eq. (22) correspond, respectively, to derivatives of the spiral beam phase (\(\sim e^{iM\Phi}\)), of the radial distribution for \(R\ll\omega_{0}\) (\(\sim(R/\omega_{0})^{|M|}\)) and of the exponential decay for \(R\gg\omega_{0}\) (\(\sim e^{-R^{2}/\omega_{0}^{2}}\)).
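To make the competition between the transfer channels in Eq. (22) concrete, one can evaluate the square-bracket factor per unit field for both gradients \(m=\pm 1\). The snippet below is a minimal sketch; the waist \(\omega_{0}=300\) µm is only an illustrative choice.

```python
# Square-bracket factor of Eq. (22), i.e. the OAM-transfer amplitude per unit
# field, for the in-focus profile (20). Only one of the m = +-1 channels picks
# up the vortex term ~2M/R when M > 0.
import numpy as np

w0 = 300e-6                                   # beam waist [m], illustrative
R = np.linspace(1e-6, 2 * w0, 5)              # radial distances from the axis

def bracket(R, M, m):
    """[M/R -+ (|M|/R - 2R/w0^2)] with the upper sign for m = +1."""
    return M / R - m * (abs(M) / R - 2 * R / w0**2)

for M in (0, 3):
    for m in (+1, -1):
        print(f"M={M}, m={m:+d}:", np.abs(bracket(R, M, m))[:3])
```

For \(M=0\) both channels reduce to the Gaussian contribution \(\sim 2R/\omega_{0}^{2}\), while for \(M=3\) the \(m=-1\) channel acquires the \(\sim 6/R\) term that dominates near the beam axis.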
One can also employ the approximation \(E_{M}^{\rm foc,cent}(\mathbf{R})\) to calculate the spherical gradient. In this case, the term in Eq. (22) associated with the exponential decay is missing. Hence, for the beam with \(M=0\), the amplitude of the OAM transfer is zero, as also shown in Ref. [29]. The possible conclusion is that a non-helical beam cannot transfer any additional angular momentum, apart from the spin, to the molecule. This conclusion, however, does not hold when a more accurate profile, like \(E_{M}^{\rm foc}(\mathbf{R})\), is taken into account. As revealed by Eq. (22) with \(M=0\), even a non-vortex Gaussian beam has a small impact on the molecular rotations, in agreement with Ref. [27]. This effect has a simple explanation: in the case of a diatomic molecule, for instance, the two nuclei in general experience slightly different electric fields, due to the spatial gradient of the field, which creates an effective disbalance among different rotational states. Having shown that the OAM transfer occurs both for non-twisted and twisted light, we examine how vortex beams can enhance the transfer compared to non-vortex beams. As revealed by Eq. (22), there are two contributions to the OAM transfer: the derivative of the spiral beam phase (first term in the square brackets) and the derivative of the radial profile (round brackets). While the latter contribution is present for a Laguerre-Gaussian beam with any value of \(M\), the former contribution describes exclusively the OAM transfer stimulated by non-zero vorticity. This transfer channel becomes the dominant one in the \(R\to 0\) regime. In particular, the amplitude of the OAM transfer for a beam with \(M\neq 0\), which is \(\sim M/R\), can be significantly larger than the transfer amplitude for light with \(M=0\), which is \(\sim R/\omega_{0}^{2}\). This indicates that the experimental observation of such an enhancement would require the molecules to be placed around the center of the beam, as further discussed in Sec. VI. Our numerical analysis also reveals that, in the _out-of-focus_ regime, i.e., when \(Z\gg Z_{R}\), besides the angular momentum exchange arising from the radial and phase variation of the electric field, there is an AM exchange induced by the non-zero curvature of the wavefront. This can be intuitively understood as follows: the curved wavefront effectively deforms the radial distribution of the electric field, which, in turn, affects the magnitude of the AM transfer.

## V Ro-Vibrational transition rates

To verify the major argument of Sec. IV, namely that the OAM of light can significantly enhance certain rotational transitions, we calculate the rates of ro-vibrational transitions. We assume that the molecular state is the direct product \(\left|\Psi\right\rangle=\left|\psi_{\nu}^{\rm vib}\right\rangle\otimes\left|\psi_{J,N_{1},N_{2}}^{\rm rot}\right\rangle\otimes\left|\psi_{\mathbf{R}_{\rm CM}}^{\rm CM}\right\rangle\), i.e., we neglect weak correlation effects, e.g., correlations stemming from the ro-vibrational coupling. We leave the vibrational state \(\left|\psi_{\nu}^{\rm vib}\right\rangle\) implicit, since we directly calculate the transition multipole moments using numerical methods, as discussed in Sec. II. The rotational state \(\left|\psi_{J,N_{1},N_{2}}^{\rm rot}\right\rangle\) is an eigenstate of the kinetic energy of a rigid rotor, \(\mathcal{H}_{0,\rm rot}=B_{x}\hat{J}_{x}^{2}+B_{y}\hat{J}_{y}^{2}+B_{z}\hat{J}_{z}^{2}\), where \(\hat{\mathbf{J}}\) is the angular momentum of the molecular rotation.
The corresponding wave function reads \[\left\langle\Omega_{\rm mol}\middle|\psi_{J,N_{1},N_{2}}^{\rm rot}\right\rangle=D_{N_{1},N_{2}}^{J}(\Omega_{\rm mol})\,, \tag{23}\] where \(\left|\Omega_{\rm mol}\right\rangle\) are eigenstates of the molecular angle operator \(\hat{\Omega}_{\rm mol}\). For most molecules at relevant experimental temperatures, the thermal de Broglie wavelength is significantly smaller than the wavelength of the probe light. For instance, for a CS molecule at \(T=20\,\)K, the de Broglie wavelength is \(\lambda_{\rm dB}=h\sqrt{\beta/(2\pi m)}\approx 0.6\) Å, where \(m\) is the molecular mass and \(\beta=1/(k_{\rm B}T)\) is the inverse temperature, with the Boltzmann constant \(k_{\rm B}\). At the same time, the wavelength corresponding to the lowest vibrational transition is \(\lambda_{\rm beam}\sim 8\) µm. Provided that, for a non-tightly focused beam, the beam waist is larger than the wavelength, we obtain \(\omega_{0}>\lambda_{\rm beam}\gg\lambda_{\rm dB}\). Therefore, the wave function that describes the center-of-mass position inside the beam can be approximated by the three-dimensional \(\delta\)-function \[\left\langle\mathbf{R}\middle|\psi_{\mathbf{R}_{\rm CM}}^{\rm CM}\right\rangle=\delta^{(3)}(\mathbf{R}-\mathbf{R}_{\rm CM})\,, \tag{24}\] where \(\left|\mathbf{R}\right\rangle\) is an eigenstate of the center-of-mass position operator \(\hat{\mathbf{R}}\). The transition amplitude between the initial \(\left|\Psi_{i}\right\rangle\) and final \(\left|\Psi_{f}\right\rangle\) molecular states with respect to the full light-matter interaction Hamiltonian (15) reads \[\mathcal{M}_{i\to f}=\left\langle\Psi_{f}\right|\mathcal{H}_{\rm int}\left|\Psi_{i}\right\rangle=\sum_{nlm\mu}\mathcal{I}_{\nu,\nu^{\prime}}^{\rm vib;\,n,l,\mu}\mathcal{I}_{\mathbf{R}_{\rm CM},\mathbf{R}_{\rm CM}^{\prime}}^{\rm CM;\,l,m}\mathcal{I}_{J,N_{1},N_{2},J^{\prime},N_{1}^{\prime},N_{2}^{\prime}}^{\rm rot;\,n,l,m,\mu}\,, \tag{25}\] where the states \(\left|\Psi_{i(f)}\right\rangle\) are characterised by the sets of quantum numbers \(\{\nu^{(\prime)},\mathbf{R}_{\rm CM}^{(\prime)},J^{(\prime)},N_{1}^{(\prime)},N_{2}^{(\prime)}\}\), and the vibrational integral, \(\mathcal{I}_{\nu,\nu^{\prime}}^{\rm vib;\,n,l,\mu}=\left\langle\psi_{\nu^{\prime}}^{\rm vib}\right|\alpha_{\lambda,\mu}\left|\psi_{\nu}^{\rm vib}\right\rangle\) with \(\lambda=2n+l+1\), can be expressed via the transition multipole moments, as discussed in Sec. II.
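The hierarchy of length scales invoked here is easy to check numerically; a quick sketch with the CS values quoted in the text:

```python
# Thermal de Broglie wavelength of CS at T = 20 K, to be compared with the
# ~8 um wavelength of the lowest vibrational transition quoted in the text.
import math
from scipy.constants import h, k as kB, atomic_mass

m = 44.076 * atomic_mass            # CS mass [kg]
T = 20.0                            # temperature [K]
beta = 1.0 / (kB * T)

lam_dB = h * math.sqrt(beta / (2 * math.pi * m))
print(f"lambda_dB = {lam_dB * 1e10:.2f} Angstrom")   # ~0.6 Angstrom
```

This confirms \(\lambda_{\rm dB}\approx 0.6\) Å, five orders of magnitude below \(\lambda_{\rm beam}\), which justifies the \(\delta\)-function approximation (24).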
The rotational and center-of-mass integrals are \[\mathcal{I}^{\text{rot};\,n,l,m,\mu}_{J,N_{1},N_{2},J^{\prime},N_{1}^{\prime},N_{2}^{\prime}}=\bigg{[}\sum_{\sigma}\frac{8\pi^{2}}{2\lambda+1}\gamma_{n,l,m,\sigma}\epsilon_{\sigma}C^{J^{\prime},N_{1}^{\prime}}_{\lambda,m-\sigma;J,N_{1}}\bigg{]}\begin{cases}C^{J^{\prime},N_{2}^{\prime}}_{\lambda,\mu;J,N_{2}}-(-1)^{\mu}C^{J^{\prime},N_{2}^{\prime}}_{\lambda,-\mu;J,N_{2}}&\mu<0\\ \sqrt{2}C^{J^{\prime},N_{2}^{\prime}}_{\lambda,0;J,N_{2}}&\mu=0\\ C^{J^{\prime},N_{2}^{\prime}}_{\lambda,-\mu;J,N_{2}}+(-1)^{-\mu}C^{J^{\prime},N_{2}^{\prime}}_{\lambda,\mu;J,N_{2}}&\mu>0\end{cases}, \tag{26}\] \[\mathcal{I}^{\text{CM};\,l,m}_{\mathbf{R}_{\text{CM}},\mathbf{R}_{\text{CM}}^{\prime}}=\Big{\langle}\psi^{\text{CM}}_{\mathbf{R}_{\text{CM}}^{\prime}}\Big{|}\,\big{[}\mathcal{R}_{l,m}(\mathbf{\nabla}_{\mathbf{R}})E(\mathbf{R})\big{]}\Big{|}_{\mathbf{R}=\hat{\mathbf{R}}}\,\Big{|}\psi^{\text{CM}}_{\mathbf{R}_{\text{CM}}}\Big{\rangle}=\delta^{(3)}(\mathbf{R}_{\text{CM}}-\mathbf{R}_{\text{CM}}^{\prime})\big{[}\mathcal{R}_{l,m}(\mathbf{\nabla}_{\mathbf{R}_{\text{CM}}})E(\mathbf{R}_{\text{CM}})\big{]}\,. \tag{27}\] Given the transition amplitude \(\mathcal{M}_{i\to f}\), we obtain the transition rate using Fermi's golden rule, \[\Gamma_{i\to f}(\omega)=\frac{2\pi}{\hbar}\big{|}\mathcal{M}_{i\to f}\big{|}^{2}\delta_{E}(\hbar\omega-\Delta E_{fi})\,, \tag{28}\] where \(\omega\) is the angular frequency of the photon that drives the transition and \(\Delta E_{fi}=E_{f}-E_{i}\) is the energy difference between the initial and final states. The \(\delta\)-function in Eq. (28) denotes the density of energy states and therefore has units of inverse energy.

## VI Possible experimental scheme

Having defined the rates of molecular ro-vibrational transitions, we proceed with a suggestion for a generic proof-of-principle experimental scheme. The proposed setup may be capable of revealing the enhancement of rotational transition amplitudes induced by the non-zero vorticity of the probe light. We consider a vacuum chamber of characteristic length \(L\) (along the \(z\)-axis) that contains molecules in the gas phase at pressure \(P\) and temperature \(T\). As discussed in Sec. IV, the OAM-induced rotational enhancement depends on the position of the molecular center of mass with respect to the beam axis. For this reason, we suggest measuring the absorbance of light in a spatially resolved manner. In particular, we propose measuring the ratio between the absorbed and incident power for the molecules that reside within the optical path of the light collected by an _adjustable_ circular aperture \(\tilde{R}_{0}\), placed outside the chamber and centered on the beam axis. For small chambers (\(L<Z_{R}\)), such molecules are approximately confined to the cylinder described by the effective aperture \(R_{0}\) (see Fig. 2(a)). Note that due to the small average thermal center-of-mass velocity \(\langle v_{T}\rangle\) of the molecules, their motion within the chamber is negligible on the time scale of the photon propagation (\(\approx L/c\)). For instance, \(\langle v_{T}\rangle\,L/c\approx 20\) nm for \(L=12\) cm and \(T=20\) K.
The absorbance \(\mathcal{A}(\omega)\) of photons with angular frequency \(\omega\) can be calculated using the Beer-Lambert law, \(\mathcal{A}(\omega)=1-\exp(-\chi_{\text{tot}}(\omega))\), where the total attenuation is \(\chi_{\text{tot}}(\omega)=\sum_{i,f}\chi_{i\to f}(\omega)\) and \(\chi_{i\to f}(\omega)\) is the attenuation associated with the single ro-vibrational transition \(i\to f\). Note that here, unlike in Sec. V, the indices \(i\) and \(f\) denote only the vibrational and rotational quantum numbers, i.e., \(i(f)\equiv\{\nu^{(\prime)},J^{(\prime)},N_{1}^{(\prime)},N_{2}^{(\prime)}\}\). The single-channel attenuation is \(\chi_{i\to f}(\omega)=\int_{-L/2}^{L/2}\tilde{\chi}_{i\to f}(\omega,Z)\,\text{d}Z\). The local attenuation at position \(Z\) is the ratio \(\tilde{\chi}_{i\to f}(\omega,Z)=\mathbb{P}_{\text{abs}}^{i\to f}(Z)/\mathbb{I}_{\text{inc}}(Z)\), where \(\mathbb{I}_{\text{inc}}(Z)\) is the energy transfer rate of the incident electric field, given by \[\mathbb{I}_{\text{inc}}(Z)=\frac{c\varepsilon_{0}}{2}\int\limits_{0}^{R_{0}}|E(\mathbf{R})|^{2}\,\text{d}^{2}\mathbf{R}\,, \tag{29}\] and \(\mathbb{P}_{\text{abs}}^{i\to f}(Z)\) is the energy flux, i.e., the energy transfer rate per unit length, absorbed by molecules undergoing the transition \(i\to f\) and residing in a thin section of the chamber (\(\text{d}Z\)). The absorbed energy flux is given by \[\mathbb{P}_{\text{abs}}^{i\to f}(Z)=\rho_{0}\frac{\rho_{\text{B}}(E_{i},T)}{\mathcal{Z}}\int\limits_{0}^{R_{0}}\text{d}^{2}\mathbf{R}_{\text{CM}}^{i}\int\limits_{0}^{\infty}\text{d}^{3}\mathbf{R}_{\text{CM}}^{f}\Big{(}\hbar\omega\Gamma_{i\to f}(\omega)\Big{)}\,, \tag{30}\] where \(\rho_{0}=\beta P\) is the equilibrium density of molecules, \(\rho_{\text{B}}(E_{i},T)=\exp(-\beta E_{i})\) is the Boltzmann probability of occupying the initial state, and \(\mathcal{Z}\!=\!\sum_{i}\rho_{\text{B}}(E_{i},T)\) is the canonical partition function. The integral over the initial center-of-mass state is two-dimensional and covers the effective aperture \(R_{0}\) (see Fig. 2(a)). The integral over the final state is three-dimensional and covers the whole space. At a finite temperature \(T\), spectral lines are broadened, so instead of the \(\delta\)-function used in the definition of the transition rate (28), we employ the Doppler-broadened line profile \[\delta_{E}(\hbar\omega-\Delta E_{fi})\to\rho_{\text{DB}}(\omega,\Delta E_{fi},T)=\sqrt{\frac{\beta mc^{2}}{2\pi(\Delta E_{fi})^{2}}}\exp\!\left(-\frac{\beta mc^{2}(\hbar\omega-\Delta E_{fi})^{2}}{2(\Delta E_{fi})^{2}}\right). \tag{31}\] In our numerical calculation, we consider CS molecules with the mass \(\mathbb{M}=44.076\) amu at pressure \(P=1\) mbar and temperature \(T=20\) K. The rotational state of a linear molecule like CS can be described by two quantum numbers \(\{l,m\}\) instead of the three \(\{J,N_{1},N_{2}\}\) required in the general case (see Sec. V). For the molecular state defined in Sec. V, the energy difference between the initial and final states, \(\Delta E_{fi}\), is the difference in rotational and vibrational energies. The combined rotational-vibrational energy of a linear molecule can be calculated using the Dunham expansion [85], with coefficients for CS provided in Ref. [86].
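A minimal sketch of this bookkeeping, assuming a single absorption channel with a hypothetical weight in place of the full transition amplitude (25), illustrates how the Doppler profile (31) enters the Beer-Lambert absorbance:

```python
# Doppler-broadened profile of Eq. (31) and the resulting Beer-Lambert
# absorbance; 'channels' holds (Delta E, weight) pairs, where the weight is a
# hypothetical stand-in for the attenuation prefactor (|M_{i->f}|^2 etc.).
import numpy as np
from scipy.constants import c, hbar, k as kB, atomic_mass

def rho_doppler(omega, dE, m, T):
    """Line profile rho_DB of Eq. (31); returns inverse energy [1/J]."""
    beta = 1.0 / (kB * T)
    pref = np.sqrt(beta * m * c**2 / (2 * np.pi * dE**2))
    return pref * np.exp(-beta * m * c**2 * (hbar * omega - dE) ** 2 / (2 * dE**2))

def absorbance(omega, channels, m, T):
    """A(omega) = 1 - exp(-chi_tot), chi_tot = sum of weighted line profiles."""
    chi_tot = sum(w * rho_doppler(omega, dE, m, T) for dE, w in channels)
    return 1.0 - np.exp(-chi_tot)

m = 44.076 * atomic_mass
dE = hbar * 2 * np.pi * c * 2488e2            # 2488 cm^-1 converted to joules
omega = np.linspace(1 - 3e-7, 1 + 3e-7, 5) * dE / hbar
print(absorbance(omega, [(dE, 1e-25)], m, 20.0))
```

The profile is extremely narrow (a relative width of order \(10^{-7}\) at 20 K); its width sets the scale that the Stark splitting discussed below must exceed.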
We consider a Laguerre-Gaussian beam (19) with a width \(\omega_{0}=300\) µm that is tuned to the second vibrational overtone of CS, i.e., it excites the vibrational transition \(\nu=0\to\nu^{\prime}=2\) with the characteristic angular frequency \(\omega_{\rm vib}=2488\,\mathrm{cm}^{-1}\). Note that, compared to Eq. (19), the Rayleigh range in a real experimental setup is given by \(Z_{R}=\pi\omega_{0}^{2}/(\lambda_{\rm beam}\mathfrak{M}^{2})\), where \(\mathfrak{M}^{2}\) is the beam quality factor [87]. For a Laguerre-Gaussian beam, \(\mathfrak{M}^{2}\sim 2|M|+1\). We calculate the corresponding transition dipole \(d_{z}^{0,2}=9.14\cdot 10^{-3}\) D and quadrupole \(Q_{zz}^{0,2}=-7.83\cdot 10^{-3}\) D·Å moments using density functional theory, as described in Sec. II. In a real experiment, the transition multipole moments can also be measured directly. Our numerical analysis has shown that the chamber length \(L=12\) cm and the effective aperture \(R_{0}=140\) µm maximize the overall transition amplitudes. We plot the single-channel attenuation matrices \(\chi_{i\to f}(\omega)\) for LG beams with \(M=0\) and \(M=3\) in Fig. 2(b). To visually distinguish "forbidden" and "allowed" transitions, we set a cutoff for the attenuation of \(10^{-6}\). Despite the fact that the matrices for non-helical and helical light have a very similar structure, our calculation reveals additional "forbidden" rotational transitions enabled by the OAM of light. To examine the effect in more detail, we calculate the attenuation for different \(\Delta m\)-subchannels within the \(\Delta l=2\) rotational transition (see Fig. 2(c)). In this case, we do not impose any numerical cutoff and reveal that the transitions with \(|\Delta m|=2\) are substantially enhanced by the OAM of light, confirming the prediction of Sec. IV. From Fig. 2(b), one can notice that the selection rules on the azimuthal quantum number \(l\) remain the same for \(M=0\) and \(M=3\). In other words, \(\Delta l\) transitions are not modified by the OAM of light. This can be explained from a semiclassical perspective. The electric field in a vortex beam is an eigenstate of the \(\hat{L}_{z}\) operator and thus has _cylindrical_ symmetry, whereas the quantum number \(l\) labels eigenvalues of the \(\hat{L}^{2}\) operator and describes the _spherical_ symmetry of the molecular state. Since \([\hat{L}_{z},\hat{L}^{2}]=0\), the beam is incapable of modifying the spherical symmetry of a molecular state. The inability of the OAM to affect the selection rules on the azimuthal quantum number \(l\) imposes an additional requirement on the experiment. In particular, in the aforementioned setup, the molecular ro-vibrational energies are degenerate with respect to the magnetic quantum number \(m\). This means that in absorption spectroscopy one can only measure a weighted sum of \(\chi_{i\to f}(\omega)\) over all possible \(\Delta m\). The transition with \(|\Delta m|=1\), which corresponds to the electric-dipole interaction with light, is much stronger than the higher-order quadrupole transitions. Thus, the degeneracy renders it impossible to distinguish between \(M=0\) and \(M=3\) beams. The straightforward solution is to lift the degeneracy of the molecular energies with respect to \(m\). For instance, this can be done by applying a strong static electric field along the beam axis.
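For orientation, the Doppler width that such a Stark-induced splitting must exceed can be estimated directly (a rough sketch, using the \(2488\) cm\(^{-1}\) transition frequency quoted above):

```python
# Rough Doppler FWHM of the nu = 0 -> nu' = 2 overtone of CS at T = 20 K,
# i.e. the scale any Stark-induced line splitting must exceed to be resolved.
import math
from scipy.constants import c, k as kB, atomic_mass

m, T = 44.076 * atomic_mass, 20.0
nu0 = c * 2488e2                       # transition frequency [Hz] from 2488 cm^-1
fwhm = nu0 * math.sqrt(8 * math.log(2) * kB * T / (m * c**2))
print(f"Doppler FWHM ~ {fwhm / 1e6:.0f} MHz")    # a few tens of MHz
```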
Our numerical calculation indicates that a Stark field \(E_{\mathrm{Stark}}\sim 8\,\mathrm{kV/cm}\) is necessary to resolve the Doppler-broadened spectral lines in absorption spectroscopy of CS.

Figure 2: (a) Scheme of the generic proof-of-principle experiment. A probe beam with the width profile \(\omega_{Z}\) propagates through the molecular chamber of characteristic length \(L\). The transmitted light is collected on the aperture \(\tilde{R}_{0}\) outside the cavity. For small cavities (\(L<Z_{R}\)), the domain of molecules that interact with the light collected on the aperture can be approximated by the cylinder defined by the effective aperture \(R_{0}\) (dotted line). (b) Single-channel attenuation \(\chi_{i\to f}(\omega)\) for rotational transitions \(|l,m\rangle\to|l^{\prime},m^{\prime}\rangle\) within the vibrational overtone (\(\nu=0\to\nu^{\prime}=2\)) of the CS molecule. Compared to non-helical light (\(M=0\)), twisted light (\(M=3\)) enables more transitions with \(\chi_{i\to f}(\omega)>10^{-6}\). (c) Single-channel attenuation \(\chi_{i\to f}(\omega)\) for different \(\Delta m\)-subchannels within the \(\Delta l=2\) rotational transition, which corresponds to the electric-quadrupole interaction. The OAM of light substantially enhances \(|\Delta m|=2\) transitions, which can be resolved in the presence of a Stark shift in the system.

## VII Conclusion and outlook

Motivated by the potential benefits of using vortex beams for structural studies and quantum control of molecules, we developed a general analytical framework to describe the interaction of molecules with twisted light. Unlike previous studies, which involved various point-charge models [27; 28; 29; 30], we modeled the molecule using a continuous charge distribution. We considered molecules interacting with long-wavelength, infrared or optical light, which allowed us to describe the light-matter interaction in terms of multipole moments and avoid describing the details of the molecular structure. Using the prototypical case of a single vibrational degree of freedom coupled to the molecular rotations, we showed how various internal molecular characteristics can be embedded into the analytical definition of the multipole moments. We used the multipolar QED Hamiltonian, also known as the PZW Hamiltonian [76; 77; 78], which has proven practical in molecular spectroscopy applications [79]. We applied the spherical Taylor expansion [81; 82; 83] and derived the general light-matter interaction Hamiltonian that describes the coupling between spherical gradients of the electric field and molecular multipole moments. This Hamiltonian is applicable to _any molecular structure_, provided the validity of the multipole expansion, and to _any electric field_, with the notable exception of tightly focused beams. From the ground up, our analytical framework is built around the notion of angular momentum: we make use of spherical coordinates and rotational point symmetry. Therefore, our theory unambiguously describes the angular momentum exchange between the spin and OAM of light, the molecular rotation and its center-of-mass motion. In particular, we confirm the well-known result [27; 29] that, within the electric-dipole approximation, the OAM of light cannot couple to the molecular rotation, irrespective of the beam profile.
At the quadrupole order, however, we demonstrate that vortex beams strongly enhance certain ro-vibrational transitions that are considered "forbidden" in the case of non-helical light. The enhancement strongly depends on the mean position of the molecular center of mass with respect to the beam axis. Recent years have seen rapid development in the field of optical trapping and laser cooling of molecules [88; 89; 90; 91; 92]. Therefore, it is realistic to suggest measuring the ro-vibrational enhancement in these new setups. Another pathway is to study gas-phase absorption spectroscopy in a spatially and frequency-resolved manner. We provide proof-of-principle calculations by suggesting an experiment that may be capable of probing the enhancement. Based on our findings, we discuss the experimental requirements for the observation of the enhanced ro-vibrational transitions. Our analytical framework is rather general and should be applicable to a broad range of problems that concern angular momentum exchange induced by the light-matter interaction. Nevertheless, it relies on a few approximations, and lifting some of them will be an important future step. In particular, one could extend the theory to beams in the regime of tight focusing, which was shown to alter and enhance the OAM interaction with chiral molecules and nanostructures [65]. In such electric fields, the spin and OAM of light are strongly coupled [26], which makes the analysis of the OAM transfer to the molecule more complicated. Another important extension concerns adapting our model to account for more complex molecular structures. In particular, polyatomic molecules of different point groups can be addressed by generalizations of the charge distribution, allowing for the coupling of multiple degrees of freedom to twisted light. A very interesting perspective along this direction is the extension of our approach to chiral molecules. In this context, our model can address the minimal requirements for the generation of helical dichroism [31; 32; 33], which is a promising experimental technique for the detection of chirality. Recent studies have shown that an important aspect of chirality-related physics is the interplay of magnetic multipole transitions with electric multipole ones. Such transitions can also be expressed in terms of a model Hamiltonian, similar to the electric ones, by incorporating an effective magnetization density for the molecule.

## Acknowledgements

We are grateful to Emilio Pisanty and Philipp Lunt for valuable discussions. G.M.K. gratefully acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 101034413. M.L. acknowledges support by the European Research Council (ERC) Starting Grant No. 801770 (ANGULON). O.H.H. acknowledges support by the Austrian Science Fund (FWF) [P 36040, M 2561]. Furthermore, the financial support by the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association is gratefully acknowledged.
2310.20345
Variational formalism for the Klein-Gordon oscillon
The variational method employing the amplitude and width as collective coordinates of the Klein-Gordon oscillon leads to a dynamical system with unstable periodic orbits that blow up when perturbed. We propose a multiscale variational approach free from the blow-up singularities. An essential feature of the proposed trial function is the inclusion of the third collective variable: a correction for the nonuniform phase growth. In addition to determining the parameters of the oscillon, our approach detects the onset of its instability.
I. V. Barashenkov, N. V. Alexeeva
2023-10-31T10:35:13Z
http://arxiv.org/abs/2310.20345v1
# Variational formalism for the Klein-Gordon oscillon ###### Abstract The variational method employing the amplitude and width as collective coordinates of the Klein-Gordon oscillon leads to a dynamical system with unstable periodic orbits that blow up when perturbed. We propose a multiscale variational approach free from the blow-up singularities. An essential feature of the proposed trial function is the inclusion of the third collective variable: a correction for the nonuniform phase growth. In addition to determining the parameters of the oscillon, our approach detects the onset of its instability.

## I Introduction

An oscillon is a classical solution describing a long-lived localised pulsating structure of finite amplitude. Oscillons play a role in the dynamics of inflationary reheating, symmetry-breaking phase transitions, and false vacuum decay [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. They occur in the Einstein-Klein-Gordon equations [19; 20; 21; 22; 23; 24], axion models [25; 26; 27; 28; 29; 30], string phenomenology [31; 32; 33] and the bosonic sector of the standard model [34; 35; 36; 37]. The (2+1)-dimensional oscillons have been studied in the context of the planar Abelian Higgs theory [38; 39]. Oscillons were discovered [40; 41; 42; 43] in the (3+1)-dimensional \(\Phi^{4}\) model, \[\Phi_{tt}-\Delta\Phi-\Phi+\Phi^{3}=0. \tag{1}\] The model, together with its (1+1)-dimensional counterpart, remains a workhorse of quantum field theory [44; 45; 46; 47; 48; 49; 50; 51; 52; 53] and cosmology [54]. Despite the apparent simplicity of equation (1), many properties of its oscillon solution have still not been fully understood [55]. Most of the mathematical analysis of oscillons has been carried out using asymptotic [55; 56; 57] and numerical techniques [1; 42; 43; 55; 58; 59; 60; 61], while qualitative insights have relied on variational arguments. In Ref [1], the \(\Phi^{4}\) oscillon was approximated by a localised waveform \[\Phi=1+Ae^{-(r/b)^{2}}, \tag{2}\] where \(A(t)\) is an unknown oscillating amplitude and \(b\) is an arbitrarily chosen value of the width. (Ref [62] followed a similar strategy when dealing with the two-dimensional sine-Gordon equation.) Once the ansatz (2) has been substituted in the Lagrangian and the \(r\)-dependence integrated away, the variation of the action produces a second-order equation for \(A(t)\). The variational method does not suggest any optimisation strategies for \(b\). Making \(b(t)\) another collective coordinate -- as is done in the studies of the nonlinear Schrödinger solitons [63; 64] -- gives rise to an ill-posed dynamical system not amenable to numerical simulations. (See section II below.) With an obstacle encountered in (3+1) dimensions, one turns to a (1+1)-dimensional version of the model for guidance. The analysis can be further simplified by considering oscillons approaching a symmetric vacuum as \(x\to\pm\infty\). A physically relevant model of this kind was considered by Kosevich and Kovalev [66]: \[\phi_{tt}-\phi_{xx}+4\phi-2\phi^{3}=0. \tag{3}\] Unlike its \(\Phi^{4}\) counterpart, the oscillon in the Kosevich-Kovalev model satisfies \(\phi\to 0\) as \(x\to\pm\infty\) and oscillates, symmetrically, between positive and negative values.
The asymptotic representation of this solution is \[\phi=\frac{2\epsilon}{\sqrt{3}}\,\cos(\omega t)\,{\rm sech}(\epsilon x)-\frac{\epsilon^{3}}{24\sqrt{3}}\,\cos(3\omega t)\,{\rm sech}^{3}(\epsilon x)+O(\epsilon^{5}), \tag{4}\] where \(\omega^{2}=4-\epsilon^{2}\) and \(\epsilon\to 0\) [66]. Despite the difference in the vacuum symmetry, equations (1) and (3) belong to the same Klein-Gordon variety and share a number of analytical properties. The purpose of the present study is to identify a set of collective coordinates and formulate a variational description of the Klein-Gordon oscillon. A consistent variational formulation would determine the stability range of the oscillon, uncover its instability mechanism and explain some of its properties such as the amplitude-frequency relationship. Using the (1+1)-dimensional Kosevich-Kovalev equation (3) as a prototype system, we transplant the idea of multiple time scales to the collective-coordinate Lagrangian method. With some modifications, our approach should remain applicable to oscillons in the (3+1)-dimensional \(\Phi^{4}\) theory and other Klein-Gordon models. Before outlining the paper, three remarks are in order. First, equation (3) can be seen as a truncation of the sine-Gordon model. The fundamental difference between the Kosevich-Kovalev oscillon and the sine-Gordon breather is that the latter solution is exactly periodic while the amplitude of the former one decreases due to the third-harmonic radiation. (When the amplitude of the oscillations is small, the radiation is exponentially weak though; hence the decay is slow.) Second, it is appropriate to mention an alternative variational procedure [67] where one not only chooses the spatial part but also imposes the time dependence of the trial function. For instance, one may set \[\phi=A_{0}\cos(\omega t)e^{-(r/b)^{2}}.\] For a fixed \(\omega\), the action becomes a function of two time-independent parameters, \(A_{0}\) and \(b\). The shortcoming of this technique is that it does not allow one to examine the stability of the Klein-Gordon oscillon. Neither would it capture a slow modulation of the oscillation frequency -- such as the one observed in numerical simulations of the \(\Phi^{4}\) model [42; 58; 60]. Our last remark concerns a closely related system, the nonlinear Schrödinger equation. The variational method has been highly successful in the studies of the Schrödinger solitons -- scalar and vector ones, with a variety of nonlinearities, perturbations, and in various dimensions [63]. Several sets of collective coordinates for the Schrödinger solitons have been identified. It is the remarkable simplicity and versatility of the variational method demonstrated in the nonlinear Schrödinger domain that motivate our search for its Klein-Gordon counterpart. The outline of the paper is as follows. In the next section we show that choosing the collective coordinates similarly to the way they are chosen for the nonlinear Schrödinger soliton leads to singular finite-dimensional dynamics. A consistent variational procedure involving fast and slow temporal scales is formulated in section III. We assess the approximation by comparing the variational solution to the "true" oscillon obtained numerically. Section IV adds remarks on the role of the third collective coordinate and the choice of the trial function, while an explicit construction of the oscillon with adiabatically changing parameters has been relegated to Appendix A.
Finally, section V summarises the conclusions of this study.

## II Singular amplitude-width dynamics

### Two-mode variational approximation

The variational approach to equation (3) makes use of its Lagrangian, \[L=\frac{1}{2}\int\left(\phi_{t}^{2}-\phi_{x}^{2}-4\phi^{2}+\phi^{4}\right)dx. \tag{5}\] Following the nonlinear Schrödinger construction [63; 64], we choose the amplitude and width of the oscillon as two collective variables: \[\phi=A\,\text{sech}\left(\frac{x}{b}\right). \tag{6}\] The amplitude \(A(t)\) is expected to oscillate between positive and negative values while the width ("breadth") \(b(t)\) should remain positive at all times. Substituting the Ansatz (6) in (5) gives the Lagrangian of a system with two degrees of freedom: \[L=\dot{A}^{2}b+\left(\frac{1}{3}+\frac{\pi^{2}}{36}\right)\frac{\dot{b}^{2}A^{2}}{b}+\dot{A}\dot{b}A-\frac{A^{2}}{3b}+b\left(\frac{2}{3}A^{4}-4A^{2}\right). \tag{7}\] In (7), the overdot stands for the derivative with respect to \(t\). The equations of motion are \[\ddot{A}+4A-\left(\frac{1}{3}+\frac{\pi^{2}}{36}\right)\frac{\dot{b}^{2}}{b^{2}}A=\left(2\sigma+\frac{4}{3}\right)A^{3}-\left(2\sigma+\frac{1}{3}\right)\frac{A}{b^{2}} \tag{8a}\] and \[\ddot{b}+2\frac{\dot{A}}{A}\dot{b}=4\sigma\left(\frac{1}{b^{2}}-A^{2}\right)b, \tag{8b}\] where we have introduced a short-hand notation for a numerical factor, \[\sigma=\frac{1}{1+\pi^{2}/3}.\]

### Asymptotic solution

The system (8) has a family of periodic solutions. For reasons that will become clear in what follows, these solutions are difficult to obtain by means of numerical simulations of equations (8). However, the family can be constructed as a multiscale perturbation expansion -- in the limit of small \(A\) and large \(b\). To this end, we let \[A=\epsilon A_{1}+\epsilon^{3}A_{3}+...,\quad b=\frac{1}{\epsilon}+\epsilon b_{1}+..., \tag{9}\] where \(A_{1},A_{3},...\) and \(b_{1},b_{3},...\) are functions of a sequence of temporal variables \(\mathcal{T}_{0},\mathcal{T}_{2},...\), with \(\mathcal{T}_{2n}=\epsilon^{2n}t\) and \(\epsilon\to 0\). Writing \(d/dt=\partial/\partial\mathcal{T}_{0}+\epsilon^{2}\partial/\partial\mathcal{T}_{2}+...\) and substituting the expansions (9) in (8a), we set the coefficients of like powers of \(\epsilon\) to zero. The order \(\epsilon^{1}\) gives a linear equation, \[\frac{\partial^{2}A_{1}}{\partial\mathcal{T}_{0}^{2}}+4A_{1}=0.\] Without loss of generality we can take a solution in the form \[A_{1}=\psi e^{2i\mathcal{T}_{0}}+c.c.=2|\psi|\cos 2(\mathcal{T}_{0}-\theta), \tag{10}\] where \(\psi=\psi(\mathcal{T}_{2},...)=|\psi|e^{-2i\theta}\) is a complex-valued function of "slow" variables. The next order, \(\epsilon^{3}\), gives \[\frac{\partial^{2}A_{3}}{\partial\mathcal{T}_{0}^{2}}+4A_{3}=-2\frac{\partial^{2}A_{1}}{\partial\mathcal{T}_{0}\partial\mathcal{T}_{2}}-\left(2\sigma+\frac{1}{3}\right)A_{1}+\left(2\sigma+\frac{4}{3}\right)A_{1}^{3}. \tag{11}\] Substituting for \(A_{1}\) from (10) and imposing the nonsecularity condition \[4i\frac{\partial\psi}{\partial\mathcal{T}_{2}}+\left(2\sigma+\frac{1}{3}\right)\psi-(6\sigma+4)\psi|\psi|^{2}=0, \tag{12}\] we determine a solution of (11): \[A_{3}=-\frac{1}{8}\left(\sigma+\frac{2}{3}\right)|\psi|^{3}\cos 6(\mathcal{T}_{0}-\theta). \tag{13}\] Turning to equation (8b), the leading order is \[\frac{\partial^{2}b_{1}}{\partial\mathcal{T}_{0}^{2}}+\frac{2}{A_{1}}\frac{\partial A_{1}}{\partial\mathcal{T}_{0}}\frac{\partial b_{1}}{\partial\mathcal{T}_{0}}=\sigma(1-A_{1}^{2}). \tag{14}\]
The general solution of this linear equation is given by \[b_{1}=\frac{\sigma}{4}(1-3|\psi|^{2})\tau\tan 2\tau+\frac{\sigma}{16}|\psi|^{2}\cos 4\tau+\frac{C_{1}}{2}\tan 2\tau, \tag{15}\] where \(\tau=\mathcal{T}_{0}-\theta\) and \(C_{1}\) is an arbitrary constant in front of a homogeneous solution. (The second homogeneous solution was absorbed in the term \(1/\epsilon\) in the expansion (9).) Letting \(C_{1}=0\) and imposing the constraint \[1-3|\psi|^{2}=0 \tag{16}\] selects a regular solution: \[b_{1}=\frac{\sigma}{48}\cos 4\tau. \tag{17}\] Finally, the phase of the complex variable \(\psi\) is determined by equation (12). Substituting \(|\psi|\) from (16) we obtain \[\theta=\frac{1}{8}\mathcal{T}_{2}.\] Thus, the asymptotic solution of equations (8) has the form \[A=\frac{2}{\sqrt{3}}\epsilon\cos\omega t-\frac{3\sigma+2}{72\sqrt{3}}\epsilon^{3}\cos 3\omega t+O(\epsilon^{5}), \tag{18a}\] \[b=\frac{1}{\epsilon}+\frac{\sigma}{48}\epsilon\cos 2\omega t+O(\epsilon^{3}), \tag{18b}\] where \(\epsilon\to 0\) and \[\omega=2-\epsilon^{2}/4+O(\epsilon^{4}). \tag{18c}\] This solution describes a closed orbit in the phase space of the system (8). See Fig 1.

### Singular dynamics

It is not difficult to realise that the asymptotic solution (18) is unstable. Indeed, the bounded solution (17) of equation (14) is selected by the initial condition \(\partial b_{1}/\partial\mathcal{T}_{0}=0\) at \(\mathcal{T}_{0}=\theta\). If we, instead, let \(\partial b_{1}/\partial\mathcal{T}_{0}=\delta\) with a small \(\delta\), the \(\tan 2\tau\) component will be turned on in the expression (15) and \(b_{1}\) will blow up at \(\mathcal{T}_{0}=\theta+\pi/4\). Fig 1 illustrates the evolution of a small perturbation of the periodic orbit. The numerical analysis of the system (8) indicates that periodic solutions with \(A(t)\) oscillating about zero are unstable for any value of the oscillation amplitude -- and not only in the small-\(A\) asymptotic regime. The instability originates from the topology of the four-dimensional phase space of the system, which features a singularity at \(A=0\). Indeed, had the system not had a singularity and had the periodic orbit been stable, a small perturbation about it would have been oscillating, quasiperiodically, between positive and negative \(A\). The corresponding trajectory would be winding on a torus in the four-dimensional phase space, with the points where the trajectory passes through \(A=0\) filling a finite interval on the \(\dot{b}\)-axis. In the presence of the singularity, however, such a torus cannot form because any trajectory crossing through \(A=0\) at time \(t_{*}\) has to satisfy \(\dot{b}=0\) at the same time. Trajectories that do not pass through the plane \(A=\dot{b}=0\) follow one of two scenarios. In the "spreading" scenario, the width \(b(t)\) escapes to infinity (Fig 2(a)); the corresponding \(A(t)\) approaches zero but remains on one side of it at all times. In the alternative scenario, the amplitude \(A(t)\) blows up while the width shrinks to zero (Fig 2(b)).

Figure 1: Trajectories of the four-dimensional system (8) projected on the \((A,\dot{b})\) plane. The \(\infty\)-shaped curve describes the periodic solution (18) with \(\epsilon=0.1\). The magenta curve depicts a solution evolving from the initial conditions taken slightly off the periodic trajectory. The initial values \(A(0)\), \(b(0)\) and \(\dot{A}(0)\) for this perturbation are given by the first two terms in equations (18a) and (18b), with \(\epsilon=0.1\) and \(t=t_{0}=0.55\pi/\omega\). The initial condition for \(\dot{b}\) is \(\dot{b}(0)=-\frac{\sigma}{24}\epsilon\omega\sin(2\omega t_{0})+10^{-4}\), with the same \(\epsilon,\omega\) and \(t_{0}\).
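The singular dynamics described above is straightforward to reproduce. The following sketch (not the authors' code) integrates the system (8) from the leading-order periodic orbit (18), nudged by a small \(\delta\) in \(\dot{b}\), in the spirit of Fig 1:

```python
# Direct integration of the two-mode system (8). A small perturbation of the
# periodic orbit (18) triggers the instability: the run either stalls near a
# finite-time singularity or ends with b(t) having run away.
import numpy as np
from scipy.integrate import solve_ivp

sigma = 1.0 / (1.0 + np.pi**2 / 3.0)
c1 = 1.0 / 3.0 + np.pi**2 / 36.0

def rhs(t, y):
    A, dA, b, db = y
    ddA = (-4*A + c1*(db/b)**2 * A
           + (2*sigma + 4/3)*A**3 - (2*sigma + 1/3)*A/b**2)   # Eq. (8a)
    ddb = -2*(dA/A)*db + 4*sigma*(1/b**2 - A**2)*b            # Eq. (8b)
    return [dA, ddA, db, ddb]

eps, delta = 0.1, 1e-4
y0 = [2*eps/np.sqrt(3), 0.0, 1/eps + sigma*eps/48, delta]     # orbit (18) + nudge

sol = solve_ivp(rhs, (0.0, 80.0), y0, rtol=1e-10, atol=1e-12)
print("integration reached t =", sol.t[-1])    # typically far short of t = 80
print("final A, b =", sol.y[0, -1], sol.y[2, -1])
```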
Due to the singularity of solutions emerging from generic initial conditions, the system (8) is not amenable to numerical simulations beyond a few oscillation cycles. Even more importantly, the universal, all-\(\omega\) instability of periodic solutions of this four-dimensional system does not match the behaviour of the oscillon solutions of the full partial differential equation (3). Contrary to the predictions of the two-mode approximation, the simulations of equation (3) demonstrate that the nearly-periodic oscillons with frequencies in the range \(\sqrt{2}\lesssim\omega<2\) are stable. The amplitude and frequency of such oscillons do change due to the third-harmonic radiation; however, these changes are slow and may only be noticeable over long temporal intervals. (See Fig 3(a).) We note that an ill-posed system similar to (8) was encountered in the variational studies of the sine-Gordon breathers [65]. The spurious instability of periodic trajectories of the system (8) disqualifies the two-variable Ansatz (6) and prompts one to look for suitable alternatives.

## III Multiscale variational method

### Amplitude, width and phase correction

To rectify the flaws of the "naive" variational algorithm, we consider \(\phi\) to be a function of two time variables, \(\mathcal{T}_{0}=t\) and \(\mathcal{T}_{1}=\epsilon t\). The rate of change is assumed to be \(O(1)\) on either scale: \(\partial\phi/\partial\mathcal{T}_{0},\partial\phi/\partial\mathcal{T}_{1}\sim 1\). We require \(\phi\) to be periodic in \(\mathcal{T}_{0}\), with a period of \(T\): \[\phi(\mathcal{T}_{0}+T;\,\mathcal{T}_{1})=\phi(\mathcal{T}_{0};\,\mathcal{T}_{1}).\] As \(\epsilon\to 0\), the variables \(\mathcal{T}_{0}\) and \(\mathcal{T}_{1}\) become independent and the Lagrangian (5) transforms to \[L=\int\left[\left(\frac{\partial\phi}{\partial\mathcal{T}_{0}}+\epsilon\frac{\partial\phi}{\partial\mathcal{T}_{1}}\right)^{2}-\phi_{x}^{2}-4\phi^{2}+\phi^{4}\right]dx. \tag{19}\] The action \(\int Ldt\) is replaced with \[S=\int_{0}^{T}d\mathcal{T}_{0}\int d\mathcal{T}_{1}\,L\left(\phi,\frac{\partial\phi}{\partial\mathcal{T}_{0}},\frac{\partial\phi}{\partial\mathcal{T}_{1}}\right). \tag{20}\] We choose the trial function in the form \[\phi=A\,\cos(\omega\mathcal{T}_{0}+\theta)\,\mathrm{sech}\left(\frac{x}{b}\right), \tag{21}\] where \(A,b\) and \(\theta\) are functions of the "slow" time variable \(\mathcal{T}_{1}\) while \(\omega=2\pi/T\). (Note that \(\phi\) does not have to be assumed small.) The interpretation of the width \(b\) is the same as in the Ansatz (6), while \(A\) represents the maximum of the oscillon's amplitude rather than the amplitude itself. Unlike in the previous trial function (6), the variable \(A\) in (21) is assumed to remain positive at all times. The phase correction \(\theta\) is a new addition to the set of collective coordinates; its significance will be elucidated later (section IV.1). The choice of the spatial part of the Ansatz will also be discussed below (section IV.2).
Once the explicit dependence on \(x\) and \(\mathcal{T}_{0}\) has been integrated away, equations (19) and (20) give an effective action \[S=T\int d\mathcal{T}_{1}\,\mathcal{L}\] with \[\mathcal{L}=(DA)^{2}b+\left(\frac{1}{3}+\frac{\pi^{2}}{36}\right)\frac{(Db)^{2}A^{2}}{b}+A\,DA\,Db+(\omega+D\theta)^{2}bA^{2}-\frac{A^{2}}{3b}-4bA^{2}+\frac{1}{2}bA^{4} \tag{22}\] and \(D=\epsilon\frac{\partial}{\partial\mathcal{T}_{1}}\). Two of the Euler-Lagrange equations are \[D^{2}A+4A-(\omega+D\theta)^{2}A-\left(\frac{1}{3}+\frac{\pi^{2}}{36}\right)\frac{(Db)^{2}}{b^{2}}A=\left(1+\frac{3}{2}\sigma\right)A^{3}-\left(2\sigma+\frac{1}{3}\right)\frac{A}{b^{2}} \tag{23}\] and \[D\left[(\omega+D\theta)bA^{2}\right]=0.\] The last equation can be integrated to give \[(\omega+D\theta)bA^{2}=\ell, \tag{24}\] where \(\ell\) is a constant of integration. Eliminating the cyclic variable \(\theta\) between (23) and (24) we arrive at \[D^{2}A-\left(\frac{1}{3}+\frac{\pi^{2}}{36}\right)\frac{(Db)^{2}}{b^{2}}A+4A-\frac{\ell^{2}}{b^{2}A^{3}}=\left(1+\frac{3}{2}\sigma\right)A^{3}-\left(2\sigma+\frac{1}{3}\right)\frac{A}{b^{2}}. \tag{25a}\] The third Euler-Lagrange equation for the Lagrangian (22) does not involve \(\theta\): \[D^{2}b+2\frac{DA}{A}Db=4\sigma\left(\frac{1}{b^{2}}-\frac{3}{4}A^{2}\right)b. \tag{25b}\] Equations (25) constitute a four-dimensional conservative system with a single control parameter \(\ell^{2}\).

Figure 2: Two types of unstable evolution in equations (8). (a) \(A(t)\) approaches zero while \(b(t)\) grows exponentially. (b) \(A(t)\) grows to infinity (negative infinity in this simulation) while \(b(t)\) shrinks to zero.

### Slow dynamics and stationary points

The oscillon corresponds to a fixed-point solution of the system (25). There are two coexisting fixed points for each \(\ell^{2}\) in the interval \((0,\frac{64}{9})\). We denote their components by \((A_{+},b_{+})\) and \((A_{-},b_{-})\), respectively. Here \[A_{\pm}^{2}=\frac{8}{3}\pm\sqrt{\frac{64}{9}-\ell^{2}},\quad b_{\pm}^{2}=\frac{4}{3}\frac{1}{A_{\pm}^{2}}. \tag{26}\] Turning to the stability of these, we note that all derivatives in equations (25) carry a small factor \(\epsilon\). Accordingly, most of the time-dependent solutions of that system evolve on a short scale \(\mathcal{T}_{1}\sim\epsilon\). This is inconsistent with our original assumption that \(\partial\phi/\partial\mathcal{T}_{1}=O(1)\). There is, however, a particular \(\ell\)-regime where solutions change slowly and the system (25) is consistent. Specifically, slowly evolving nonstationary solutions can be explicitly constructed in the vicinity of the value \(\ell_{c}^{2}=\frac{64}{9}\); see Appendix A. This value proves to be a saddle-centre bifurcation point separating a branch of stable equilibria, namely \((A_{-},b_{-})\), from an unstable branch, \((A_{+},b_{+})\). Since the asymptotic construction presented in the Appendix is limited to the neighbourhood of the bifurcation value \(\ell_{c}\), we do not have access to the oscillon perturbations outside that parameter region. Nevertheless, it is not difficult to realise that the two fixed points maintain their stability properties over their entire domain of existence, \(0\leq\ell^{2}<\ell_{c}^{2}\). Indeed, the stability may only change as \(\ell\) passes through a value \(\ell_{0}\) given by a root of \(\det M=0\), where \(M\) is the linearisation matrix. (The evolution is slow and the system (25) is consistent in the vicinity of that point.)
There happens to be only one such root, and it is given exactly by \(\ell_{c}\); see Appendix A. In order to compare the variational results to the conclusions of the direct numerical simulations of equation (3), we return to the oscillon Ansatz (21). Switching from the parametrisation by \(\ell\) to the frequency parameter \(\omega\), the two branches of fixed points (26) can be characterised in a uniform way: \[A=\frac{2}{\sqrt{3}}\sqrt{4-\omega^{2}},\quad b=\frac{1}{\sqrt{4-\omega^{2}}}. \tag{27}\] (The relations (27) are obtained by letting \(\ell=\omega bA^{2}\) in (26).) The frequencies \(\omega_{c}\leq\omega<2\) correspond to stable oscillons and those in the interval \(0\leq\omega<\omega_{c}\) to unstable ones. Here \[\omega_{c}=\sqrt{2}. \tag{28}\] The third collective coordinate in (21) -- the phase correction \(\theta\) -- can be assigned an arbitrary constant value. Note that the expressions (27) agree with the asymptotic result (4) in the \(A,b^{-1}\to 0\) limit.

### Numerical verification

We simulated the partial differential equation (3) using a pseudospectral numerical scheme with \(2^{13}\) Fourier modes. The scheme imposes periodic boundary conditions \(\phi(L)=\phi(-L)\) and \(\phi_{x}(L)=\phi_{x}(-L)\), where the interval should be chosen long enough to prevent any radiation re-entry. (Our \(L\) was pegged to the estimated width of the oscillon, varying between \(L=20\) and \(L=100\).) Using the initial data in the form \[\phi(x,0)=A_{0}\operatorname{sech}\left(\frac{x}{b_{0}}\right),\quad\phi_{t}(x,0)=0\] with \(b_{0}=(2/\sqrt{3})A_{0}^{-1}\) and varying \(A_{0}\), we were able to create stable oscillons with frequencies ranging from \(\omega=1.03\,\omega_{c}\) to \(\omega=2\). (Here \(\omega=2\pi/T\), where \(T\) is the observed period of the localised periodic solution.) This "experimental" stability domain is in good agreement with the variational result \(\omega_{c}\leq\omega<2\). The 3% discrepancy between the two lower threshold values can be attributed to the emission of radiation and the deformation of the oscillon's core due to the third-harmonic excitation. (The presence of the third harmonic in the oscillon's core is manifest already in the asymptotic solution (4).) The radiation intensifies and the deformation becomes more significant as the oscillon's amplitude grows (Fig 3(a)); yet the variational approximation disregards both effects (see Fig 3(b)). Once the evolution had settled to an oscillon with a period \(T\), we measured its amplitude \[A=\max_{T}\left|\phi(x,t)\right|_{x=0} \tag{29}\] and evaluated its width, which we define by \[b=\frac{1}{2A^{2}}\max_{T}\int_{-L}^{L}\phi^{2}(x,t)dx. \tag{30}\] In (29)-(30), the maximum is evaluated over the time interval \(t_{0}\leq t<t_{0}+T\), where \(t_{0}\) was typically chosen as the position of the third peak of \(\phi(0,t)\). Figure 4 compares the amplitude and width of the numerically generated oscillon with their variational approximations (27). The difference between the numerical and variational results grows as \(\omega\) approaches \(1.03\,\omega_{c}\) -- yet the relative error in the amplitude remains below 8% and the error in the width does not exceed 12.5%.

## IV Two remarks on the method

### Modulation, instability and significance of \(\theta\)

The inclusion of the cyclic coordinate \(\theta(\mathcal{T}_{1})\) is crucial for our variational approach. To show that, we compare the system (25), which implicitly incorporates three degrees of freedom, with its two-degree (\(A\) and \(b\)) counterpart.
Linearising equations (25) about the fixed point (27) and considering small perturbations with the time dependence \(e^{(\lambda/\epsilon)\mathcal{T}_{1}}\), we obtain a characteristic equation \[\lambda^{4}+(16-5A^{2}+3\sigma A^{2})\lambda^{2}-18\sigma A^{2}\left(A^{2}-\frac{8}{3}\right)=0. \tag{31}\] When \(A^{2}\) is away from \(0\) or \(8/3\), all eigenvalues \(\lambda\) are of order \(1\). This means that, contrary to the assumption under which the system (25) was derived, small perturbations evolve on a short scale \(\mathcal{T}_{1}\sim\epsilon\) rather than \(\mathcal{T}_{1}\sim 1\). The variational method cannot provide trustworthy information on the stability or modulation frequency of the oscillons with those \(A\). There are two regions where a pair of \(O(\epsilon)\)-eigenvalues occurs and, consequently, our approach is consistent. One region consists of small \(A\sim\epsilon\); this range accounts for the asymptotic regime (4). The second region is defined by \(|A^{2}-8/3|=O(\epsilon^{2})\) or, equivalently, by \(|\omega-\omega_{c}|\sim\epsilon^{2}\). As \(\omega\) is reduced through \(\omega_{c}\), a pair of opposite imaginary eigenvalues converges at the origin and moves onto the positive and negative real axis: \[\lambda^{2}=-\frac{16\sqrt{2}\sigma}{\sigma+1/3}(\omega-\omega_{c})+O\left((\omega-\omega_{c})^{2}\right).\] At this point, a slow modulation of the principal harmonic \(\cos(\omega_{c}t)\) with the modulation frequency \(\sim(\omega-\omega_{c})^{1/2}\) gives way to an exponential growth of the perturbation. (For an explicit construction of the time-dependent solutions of the system (25), see Appendix A.) Had we not included \(\theta(\mathcal{T}_{1})\) in our trial function -- that is, had we set \(\theta=0\) in equation (21) -- we would have ended up with the same fixed point (27) but a different characteristic equation: \[\lambda^{4}+(3\sigma-2)A^{2}\lambda^{2}-9\sigma A^{4}=0. \tag{32}\]

Figure 3: Top panel: the Kosevich-Kovalev oscillon with \(\omega=1.06\,\omega_{c}\) (where \(\omega_{c}=\sqrt{2}\)). The oscillon is stable: despite the energy loss to radiation waves, any changes in its period and amplitude are hardly visible. This figure is obtained by the numerical simulation of equation (3). Bottom panel: the variational approximation (21) with the matching \(\omega\). Here \(A\) and \(b\) are as in (27) with \(\omega=1.06\,\omega_{c}\), and \(\theta=0\). Except for the absence of the radiation waves, the variational pattern is seen to be a good fit for the true oscillon.

Figure 4: The amplitude and width of the oscillon as functions of its frequency. The solid curves depict results of the numerical simulations of the partial differential equation (3). The blue curve traces the amplitude-frequency and the brown one gives the width-frequency dependence. The nearby dashed lines describe the corresponding variational approximations (27).
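The stability threshold encoded in Eq. (31) is easy to verify numerically: solving the quadratic in \(\lambda^{2}\) for a few frequencies (a short sketch, with \(A^{2}\) taken from (27)) shows a positive real eigenvalue appearing exactly as \(\omega\) drops below \(\omega_{c}=\sqrt{2}\):

```python
# Roots of the characteristic equation (31) as a function of omega; a positive
# real eigenvalue signals instability of the variational oscillon.
import numpy as np

sigma = 1.0 / (1.0 + np.pi**2 / 3.0)

def max_growth_rate(omega):
    A2 = (4.0 / 3.0) * (4.0 - omega**2)          # A^2 from Eq. (27)
    lam2 = np.roots([1.0, 16 - 5*A2 + 3*sigma*A2,
                     -18*sigma*A2*(A2 - 8/3)])   # quadratic in lambda^2
    lam = np.concatenate([np.sqrt(lam2 + 0j), -np.sqrt(lam2 + 0j)])
    return lam.real.max()

for omega in (1.9, np.sqrt(2) + 0.01, np.sqrt(2) - 0.01, 1.0):
    print(f"omega = {omega:.3f}: max Re(lambda) = {max_growth_rate(omega):.4f}")
```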
As for the global behaviour of the \(A(\omega)\) curve, the variation of the spatial profile of the trial function has little effect on it -- as long as the function remains localised. To exemplify this insensitivity to the Ansatz variations, we replace the exponentially localised trial function (21) with a gaussian: \[\phi=A\,\cos(\omega\mathcal{T}_{0}+\theta)\,e^{-(x/b)^{2}}. \tag{33}\] As in (21), the amplitude \(A\), width \(b\) and phase shift \(\theta\) are assumed to be functions of the slow time variable \(\mathcal{T}_{1}=\epsilon t\). Substituting in (20) gives an effective action with the Lagrangian \[\mathcal{L}=(DA)^{2}b+\frac{3}{4}\frac{(Db)^{2}A^{2}}{b}+ADADb\] \[+(\omega+D\theta)^{2}bA^{2}-\frac{A^{2}}{b}-4bA^{2}+\frac{3\sqrt {2}}{8}bA^{4}. \tag{34}\] (Here, as before, \(D=\epsilon\partial/\partial T_{1}\)). Equation (34) has the same form as (22) with the only difference residing in the value of some of the coefficients. The Euler-Lagrange equations resulting from (34) have a fixed-point solution \[A=\frac{2^{7/4}}{3}\sqrt{4-\omega^{2}},\quad b=\frac{1}{\sqrt{3}}\frac{1}{ \sqrt{4-\omega^{2}}}. \tag{35}\] Note that the gaussian amplitude and width are related to \(\omega\) by exactly same laws as the amplitude and width of the secant-shaped approximation (equations (27)). If \(A_{g}\) stands for the amplitude (35) and \(A_{s}\) for the secant-based result (27), the ratio \(A_{g}(\omega)/A_{s}(\omega)\) is given by \(\sqrt[4]{8/9}\approx 0.971\). Thus the gaussian-based amplitude-frequency curve reproduces the qualitative behaviour of the curve (27), with the gaussian amplitude being only 3%-different from the amplitude of the secant-shaped variational oscillon. Linearising the Euler-Lagrange equations about the fixed point (35) we obtain a gaussian analog of the characteristic equation (31): \[\lambda^{4}+\left(16-\frac{27\sqrt{2}}{8}A^{2}\right)\lambda^{2}-\frac{27}{8 }A^{2}\left(A^{2}-\frac{32}{9\sqrt{2}}\right)=0. \tag{36}\] The critical value of \(A^{2}\) above which a pair of opposite eigenvalues moves onto the real axis is \(32/9\sqrt{2}\). Remarkably, the corresponding threshold frequency \(\omega_{c}=\sqrt{2}\) coincides with the value (28) afforded by the secant Ansatz. ## V Conclusions This study was motivated by the numerous links and similarities between the Klein-Gordon oscillons and solitons of the nonlinear Schrodinger equations. A simple yet powerful approach to the Schrodinger solitons exploits the variation of action. By contrast, the variational analysis of the Klein-Gordon oscillons has not been nearly as successful. One obstacle to the straightforward ("naive") variational treatment of the oscillon is that its width proves to be unsuitable as a collective coordinate in that approach. The soliton's amplitude and width comprise a standard choice of variables in the Schrodinger domain, but making a similar choice in the Klein-Gordon Lagrangian results in a singular four-dimensional system. This paper presents a variational method free from singularities. The method aims at determining the oscillon's parameters, domain of existence and stability-instability transition points. The proposed formulation is based on a fast harmonic Ansatz supplemented by the adiabatic evolution of the oscillon's collective coordinates. An essential component of the set of collective coordinates is the "lazy phase": a cyclic variable accounting for nonuniform phase acquisitions. We employed the Kosevich-Kovalev model as a prototype equation exhibiting oscillon solutions. 
Our variational method establishes the oscillon's domain of existence (\(0<\omega<2\)) and identifies the frequency \(\omega_{c}\) at which the oscillon loses its stability (\(\omega_{c}=\sqrt{2}\)). The predicted stability domain is in good agreement with numerical simulations of the partial differential equation (3), which yield stable oscillons with frequencies \(1.03\,\omega_{c}\leq\omega<2\). The variational amplitude-frequency and width-frequency curves are consistent with the characteristics of the numerical solutions. ###### Acknowledgements. Discussions with Alexander Kovalev are gratefully acknowledged. This study was supported by a collaboration grant from the National Research Foundation of South Africa and the Joint Institute for Nuclear Research (NRF grant No. 120467). ## Appendix A Slow evolution near the onset of instability The aim of this Appendix is to construct a slowly changing solution of the system (25) consistent with the assumption used in the derivation of that system. The construction is carried out in the vicinity of the parameter value signifying the onset of instability of the fixed point. We let \[\ell^{2}=\ell_{0}^{2}-\epsilon^{4}, \tag{10}\] where \(\ell_{0}\) is the parameter value to be determined. The unknowns are expanded as \[A=A_{0}+\epsilon^{2}A_{1}+\epsilon^{4}A_{2}+...,\] \[b=b_{0}+\epsilon^{2}b_{1}+\epsilon^{4}b_{2}+.... \tag{11}\] Here \((A_{0},b_{0})\) is either of the two fixed points (26) corresponding to \(\ell=\ell_{0}\). Substituting (10)-(11) in (25) we equate coefficients of like powers of \(\epsilon\). The order \(\epsilon^{2}\) gives \[M\vec{Y}_{1}=0,\] where the matrix \(M\) has the form \[\left(\begin{array}{cc}4+\frac{9\ell_{0}^{2}}{4A_{0}^{2}}-\frac{11+12\sigma}{4}A_{0}^{2}&\left[\frac{3}{2}\frac{\ell_{0}^{2}}{A_{0}^{2}}-\frac{1+6\sigma}{2}A_{0}^{2}\right]\frac{A_{0}}{b_{0}}\\ 6\sigma A_{0}b_{0}&6\sigma A_{0}^{2}\end{array}\right) \tag{12}\] and the vector \(\vec{Y}_{1}\) consists of the linearised perturbations of the fixed point: \[\vec{Y}_{1}=\left(\begin{array}{c}A_{1}\\ b_{1}\end{array}\right).\] Setting \(\det M=0\) determines the value of \(\ell_{0}^{2}\). This value turns out to coincide with \(\ell_{c}^{2}\), the endpoint of the interval of existence of the fixed points: \[\ell_{0}^{2}=\ell_{c}^{2}=\frac{64}{9}. \tag{13}\] As \(\ell\) approaches \(\ell_{c}\), the fixed points \((A_{+},b_{+})\) and \((A_{-},b_{-})\) join to become \((A_{0},b_{0})\). Here \[A_{0}=\sqrt{\frac{8}{3}},\quad b_{0}=\frac{1}{\sqrt{2}}. \tag{14}\] The components of the null eigenvector \(\vec{Y}_{1}\) are readily identified: \[A_{1}=A_{0}y,\quad b_{1}=-b_{0}y.\] Here \(y=y({\cal T}_{1})\) is an arbitrary scalar function that will be determined at the next order of the expansion. At the order \(\epsilon^{4}\) we obtain \[M\vec{Y}_{2}=\vec{F}_{2}, \tag{15}\] where \[\vec{F}_{2}=\left(\begin{array}{c}f_{2}\\ g_{2}\end{array}\right)\] with \[f_{2}=-A_{0}\partial_{1}^{2}y+8A_{0}\left(\frac{4}{3}-\sigma\right)y^{2}-\frac{1}{A_{0}^{3}b_{0}^{2}},\] \[g_{2}=b_{0}\partial_{1}^{2}y+16\sigma b_{0}y^{2}.\] The solvability condition for equation (15) is \[\vec{Z}\cdot\vec{F}_{2}=0, \tag{16}\] where \[\vec{Z}=\left(\begin{array}{c}A_{0}b_{0}\\ \frac{4}{3}(1-\frac{1}{3\sigma})\end{array}\right)\] is the adjoint null eigenvector of the matrix \(M\). Substituting for \(A_{0}\) and \(b_{0}\) from (14), equation (16) yields \[\left(1+\frac{1}{3\sigma}\right)\partial_{1}^{2}y=16y^{2}-\frac{9}{16}. \tag{17}\]
The amplitude equation (17) has the form of Newton's second law for a classical particle moving in the potential \[U(y)=\frac{9}{16}y-\frac{16}{3}y^{3}.\] The potential has two equilibria: a minimum at \(y_{-}=-\frac{3}{16}\) and a maximum at \(y_{+}=\frac{3}{16}\). These correspond to the two fixed points of the system (25): the minimum pertains to \((A_{-},b_{-})\) and the maximum to \((A_{+},b_{+})\). Accordingly, the point \((A_{-},b_{-})\) is stable and \((A_{+},b_{+})\) unstable. The stable fixed point is surrounded by a family of closed orbits. The corresponding periodic solutions of equation (17) are expressible in terms of Jacobi elliptic functions: \[y({\cal T}_{1})=-\frac{k^{2}+1}{3}\mu+k^{2}\mu\,{\rm sn}^{2}\left(\sqrt{\frac{8\sigma\mu}{1+3\sigma}}\,{\cal T}_{1},k\right),\] where \[\mu=\frac{9}{16}\sqrt{\frac{k^{2}+1}{k^{6}+1}}.\] The elliptic modulus \(k\), \(0\leq k\leq 1\), serves as the parameter of the family.
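For reference, these orbits are straightforward to evaluate numerically; a minimal Python sketch follows (scipy's `ellipj` takes the parameter \(m=k^{2}\); the values of \(k\) and \(\sigma\) below are purely illustrative):

```python
import numpy as np
from scipy.special import ellipj

def y_periodic(T1, k, sigma):
    """Periodic orbit of the amplitude equation (17):
       y = -((k^2+1)/3) mu + k^2 mu sn^2( sqrt(8 sigma mu/(1+3 sigma)) T1, k )."""
    mu = (9.0 / 16.0) * np.sqrt((k**2 + 1.0) / (k**6 + 1.0))
    u = np.sqrt(8.0 * sigma * mu / (1.0 + 3.0 * sigma)) * T1
    sn, _, _, _ = ellipj(u, k**2)  # scipy parameterizes by m = k^2
    return -((k**2 + 1.0) / 3.0) * mu + k**2 * mu * sn**2

T1 = np.linspace(0.0, 20.0, 400)
y = y_periodic(T1, k=0.5, sigma=1.0)
print(y.min(), y.max())  # the orbit oscillates around the stable equilibrium y_-
```

At \(k=0\) the sketch reduces to the constant \(y\equiv-3/16\), i.e. the stable equilibrium, which provides a quick consistency check.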
2309.13834
Prior Bilinear Based Models for Knowledge Graph Completion
Bilinear based models are powerful and widely used approaches for Knowledge Graphs Completion (KGC). Although bilinear based models have achieved significant advances, these studies mainly concentrate on posterior properties (based on evidence, e.g. symmetry pattern) while neglecting the prior properties. In this paper, we find a prior property named "the law of identity" that cannot be captured by bilinear based models, which hinders them from comprehensively modeling the characteristics of KGs. To address this issue, we introduce a solution called Unit Ball Bilinear Model (UniBi). This model not only achieves theoretical superiority but also offers enhanced interpretability and performance by minimizing ineffective learning through minimal constraints. Experiments demonstrate that UniBi models the prior property and verify its interpretability and performance.
Jiayi Li, Ruilin Luo, Jiaqi Sun, Jing Xiao, Yujiu Yang
2023-09-25T02:44:33Z
http://arxiv.org/abs/2309.13834v1
# Prior Bilinear Based Models for Knowledge Graph Completion ###### Abstract Bilinear based models are powerful and widely used approaches for Knowledge Graph Completion (KGC). Although bilinear based models have achieved significant advances, these studies mainly concentrate on posterior properties (based on evidence, e.g. the symmetry pattern) while neglecting prior properties. In this paper, we find a prior property named "the law of identity" that cannot be captured by bilinear based models, which hinders them from comprehensively modeling the characteristics of KGs. To address this issue, we introduce a solution called Unit Ball Bilinear Model (UniBi). This model not only achieves theoretical superiority but also offers enhanced interpretability and performance by minimizing ineffective learning through minimal constraints. Experiments demonstrate that UniBi models the prior property and verify its interpretability and performance. ## 1 Introduction Knowledge Graphs (KGs) store human knowledge in the form of triples \((h,r,t)\), each representing a relation \(r\) between a head entity \(h\) and a tail entity \(t\) (Ji et al., 2021). KGs benefit a wide range of downstream tasks and applications, e.g., recommender systems (Zhang et al., 2016), dialogue systems (He et al., 2017) and question answering (Mohammed et al., 2018). Since actual KGs are usually incomplete, researchers are interested in predicting the missing links to complete them, a task termed Knowledge Graph Completion (KGC). As a common solution, Knowledge Graph Embedding (KGE) completes KGs by learning low-dimensional representations of entities and relations. As one typical category of KGE, bilinear based models have achieved great advances (Trouillon et al., 2016; Hitchcock, 1927; Liu et al., 2017; Nickel et al., 2011). However, these works focus only on posterior properties, which are based on the evidence of triples, such as relational patterns (Xu and Li, 2019; Liu et al., 2017) and complex relations (Gao et al., 2021; Tang et al., 2020). For example, we treat a relation as symmetric based on the observation that \((h,r,t)\) and \((t,r,h)\) co-occur. Here, **we ask: does a prior property exist?** Our answer is **the law of identity** in Logic (Wang, 2016), which states that everything is identical only to itself. In KGs, this rule implies not only that the representations of entities are different, but also that the representation of _identity_ should be unique, so that it can be decided without any facts, i.e., _a priori_. However, we find that the uniqueness of _identity_ has not been captured by previous bilinear based models, which prevents them from fully capturing the properties of KGs. To present the problem more clearly, we first need to introduce some notation. Saying that a model with a score function \(s(h,r,t)\) models the uniqueness of _identity_ means that \(\forall h\neq t,\ s(h,r,h)>s(h,r,t)\) holds if and only if \(r\) is _identity_, and that its universal representation is unique. In addition, the score function \(s(\cdot)\) of a bilinear based model is \(\mathbf{h}^{\top}\mathbf{R}\mathbf{t}\), where \(\mathbf{h},\mathbf{R},\mathbf{t}\) are the representations of \(h\), \(r\), and \(t\). In terms of such uniqueness, bilinear based models have two flaws. On the one hand, Fig. 1(a) demonstrates \(\mathbf{e}_{1}^{\top}\mathbf{I}\mathbf{e}_{1}<\mathbf{e}_{1}^{\top}\mathbf{I}\mathbf{e}_{2}\), which means that the relation matrices per se do not model _identity_ perfectly. On the other hand, Fig. 1(b) shows that even if a matrix, e.g. \(\mathbf{I}\), models _identity_ perfectly, the uniqueness can still fail.
Indeed, its scaled version \(k\mathbf{I}\) also models _identity_ and thus breaks the uniqueness. Obviously, modeling this property requires both entities and relations to be restricted, which reduces expressiveness. Yet, we make this cost negligible by minimizing the constraints, one per entity or relation, while modeling the desired property. To be specific, we normalize the vectors of the entities and the spectral radius of the matrices of the relations to \(1\). Since the model captures entities in a unit ball as shown in Fig. 1(c), we name it the Unit Ball Bilinear Model (UniBi). In addition to the theoretical superiority, UniBi is more powerful and interpretable, since modeling _identity_ uniquely requires normalizing the scales, which barely contain any useful knowledge. On the one hand, scale normalization prevents ineffective learning on scales and makes UniBi focus more on useful knowledge. On the other hand, it reveals the relationship between the complexity of relations and the singular values of the matrices used to represent them. Experiments verify that UniBi models the law of identity, with improvements in performance and interpretability. Therefore, UniBi is a prior KGE model and a potential paradigm for bilinear based models. Figure 1: The flaws of bilinear based models and our solution in terms of modeling the uniqueness of _identity_. (a) The identity matrix fails to model _identity_. (b) A scaled identity matrix could also model _identity_. (c) An illustration of UniBi. All entities are embedded in the unit sphere and stay in the unit ball after relation-specific transformations. ## 2 Preliminaries ### Background A Knowledge Graph \(\mathcal{K}\) is a set that contains the facts about sets of entities \(\mathcal{E}\) and relations \(\mathcal{R}\). Each fact is stored as a triple \((e_{i},r_{j},e_{k})\in\mathcal{E}\times\mathcal{R}\times\mathcal{E}\), where \(e_{i}\) and \(r_{j}\) denote the \(i\)-th entity and the \(j\)-th relation, respectively. KGE learns embeddings for each entity and relation via a score function \(s:\mathcal{E}\times\mathcal{R}\times\mathcal{E}\rightarrow\mathbb{R}\). To verify the performance of a KGE method, \(\mathcal{K}\) is first divided into \(\mathcal{K}_{train}\) and \(\mathcal{K}_{test}\). Then, the method is trained on \(\mathcal{K}_{train}\) to learn the embeddings \(\mathbf{e}\) and \(\mathbf{r}\) (or \(\mathbf{R}\)) for each entity \(e\in\mathcal{E}\) and relation \(r\in\mathcal{R}\). Finally, for each query \((e_{i},r_{j},?)\) from \(\mathcal{K}_{test}\) and each candidate entity \(e_{k}\in\mathcal{E}\), the model is expected to give a higher rank if \((e_{i},r_{j},e_{k})\in\mathcal{K}\) and a lower rank if \((e_{i},r_{j},e_{k})\notin\mathcal{K}\). In addition to the above tail prediction, models are also evaluated on head prediction; we transform it to tail prediction by introducing reciprocal relations, following Lacroix et al. (2018). In logic, posterior and prior knowledge are distinguished by whether they hold without evidence. In KGs, we say relational patterns and complex relations are posterior because we cannot assign them to any of the relations without a triple or other information. For example, we need to observe that \((h,r,t)\) and \((t,r,h)\) co-occur to decide that a relation \(r\) has the symmetry pattern, or observe \((h,r,t_{1})\) and \((h,r,t_{2})\) to conclude that \(r\) is a 1-N relation. We claim that the law of identity is a prior property that is true for all entities in any KG.
To be specific, the law of identity is one of the basic prior rule in Logic Wang (2016), it means that everything is identical to itself, or \(\forall x,x=x\). And this law corresponds to the definition of entity2. Footnote 2: Cambridge Dictionary defines it as ”Something that exists apart from other things, having its own independent existence”. ### Other Notations We utilize \(\hat{\mathbb{E}}\) and \(\hat{\mathbb{R}}\) to denote the set of all possible representations of entities and relations. And we use \(\mathbf{e}\in\hat{\mathbb{E}}\) and \(\mathbf{R}\in\hat{\mathbb{R}}\) to denote the embedding vector of the entity \(e\) and the transformation matrix specific to the relation \(\mathbf{R}\). Furthermore, we use \(\|\cdot\|\) to denote the L2 norm of the vectors, \(\|\cdot\|_{F}\) and \(\rho(\cdot)\) to represent the Frobenius norm and the spectral radius of a matrix. In this paper, we focus on \(n\)-dimensional real space \(\mathbb{R}^{n}\), which means \(\hat{\mathbb{E}}\subseteq\mathbb{R}^{n}\) and \(\hat{\mathbb{R}}\subseteq\mathbb{R}^{n\times n}\). We also consider real vector spaces whose vectors are complex \(\mathbb{C}^{n}\) or hypercomplex space \(\mathbb{H}^{n}\), since they are isomorphic to \(\mathbb{R}^{2n}\) or \(\mathbb{R}^{4n}\). ## 3 Related Work Previous work mainly handle two kind of posterior properties, namely relational patterns and complex relations. On the one hand, relational patterns are the intrinsic properties of relations, and is formally introduced by ComplEx Trouillon et al. (2016). Base on this, RotatE Sun et al. (2019) proposes composition pattern, Analogy Liu et al. (2017) introduces analogy pattern, or commutative pattern, and Dihedral Xu and Li (2019) adds non-commutative pattern. On the other hand, complex relations are the extrinsic properties of relation, and is introduced in TransH Wang et al. (2014) to denote the relations that are not 1-1, or 1-N, N-1, N-N. Beyond posterior properties, previous work of KGE can be roughly divided into the following three categories: distance, bilinear and others. Distance based models choose Euclidean distance for their score functions. TransE (Bordes et al., 2013) inspired by Word2Vec (Mikolov et al., 2013) in Natural Language Processing proposes the first distance based model, which uses translation as the linear transformation \(s(h,r,t)=-\|\mathbf{h}+\mathbf{r}-\mathbf{t}\|\). TransH (Lin et al., 2015) and TransR (Lin et al., 2015) find that TransE difficult to handle complex relations and thus apply linear projections before translation. Apart from translation, RotatE (Sun et al., 2019) first introduces rotation as the transformation. RotE (Chami et al., 2020) further combines translation and rotation. Some works also introduce hyperbolic spaces (Balazevic et al., 2019, Chami et al., 2020, Wang et al., 2021). In contrast, bilinear based models have score functions in the bilinear form \(s(h,r,t)=\mathbf{h}^{\top}\mathbf{R}\mathbf{t}\). RESCAL (Nickel et al., 2011) is the first bilinear based model whose relation matrices are unconstrained. Although RESCAL is expressive, it contains too many parameters and tends to overfitting. DistMult (Yang et al., 2015) simplifies these matrices into diagonal ones. ComplEx (Trouillon et al., 2016) further introduces complex values to model the skew-symmetry pattern. Analogy (Liu et al., 2017) uses block-diagonal to model the analogical pattern and subsumes DistMult, ComplEx, and HoIE (Nickel et al., 2016). 
Moreover, QuatE (Zhang et al., 2019) extends complex values to quaternion and GeomE (Xu et al., 2020) utilizes geometric algebra to subsume all these models. In addition, other works using black-box networks (Dettmers et al., 2018, Nguyen et al., 2018, Yao et al., 2019, Schlichtkrull et al., 2018, Zhang et al., 2020) or additional information (An et al., 2018, Ren et al., 2016) are beyond the scope of this paper. ## 4 Method In this section, we first discuss prior property, and give the condition for bilinear based models to model the law of identity in Section 4.1. We then propose a model named UniBi that satisfies it with least constraint in Section 4.2 and an efficient modeling for UniBi in Section 4.3. In addition, we also discuss its improvement on performance and interpretability via scale normalization in Section 4.4. ### Prior Property and Identity Relation In Section 2, we have shown that the law of identity means \(\forall x,x=x\). We consider how entities and _identity_ relation, i.e. \(x\) and \(=\), are embedded, and the fact the embedding of _identity_ should be specified _a priori_, we find that this is equivalent to the following definition: **Definition 1**.: _A KG model can model the law of identity, which means that the embeddings of entities are different and the embeddings of the identity relation is unique._ We notice the differences in the embedding of entities are met effortlessly, yet the uniqueness of identity cannot be modeled by bilinear based models. We demonstrate two cases in which bilinear based models violate it. On the one hand, Fig. 1(a) demonstrates \(\mathbf{e}_{1}^{\top}\mathbf{I}\mathbf{e}_{1}<\mathbf{e}_{1}^{\top}\mathbf{I} \mathbf{e}_{2}\), which means that the matrix of a relation per se is not guaranteed for modeling _identity_. On the other hand, Fig. 1(b) shows even if a matrix, e.g., \(\mathbf{I}\), does. Its scaled one \(k\mathbf{I}\) where \(k>0,k\neq 1\) can also model _identity_, which contradicts the quantification of uniqueness. Therefore, we give a formal definition based on definition 1 as following to investigate how to modify bilinear based models to model this uniqueness and the law of identity. **Definition 2**.: _A bilinear model can model the law of identity means:_ \[\exists!\ \mathbf{R}\in\hat{\mathbb{R}},\forall\mathbf{h},\mathbf{t}\in \hat{\mathbb{E}},\mathbf{h}\neq\mathbf{t},\ \mathbf{h}^{\top}\mathbf{R}\mathbf{h}>\mathbf{h}^{\top}\mathbf{R}\mathbf{t}, \tag{1}\] _where \(\exists!\) is the uniqueness quantification._ ### Unit Ball Bilinear Model From the above examples, it is easy to see that modeling the law of identity requires both entities and relations be restricted, which will reduce expressiveness. To solve this dilemma, we make the cost negligible by minimizing the constraints, one per entity or relation, while modeling the desired property. To be specific, we normalize the vectors of the entities and the spectral radius of the matrices of the relations to \(1\) by setting \(\hat{\mathbb{E}}=\{\mathbf{e}\,|\,\|\mathbf{e}\|=1,\mathbf{e}\in\mathbb{R}^{n}\}\) and \(\hat{\mathbb{R}}=\{\mathbf{R}\,|\,\rho(\mathbf{R})=1,\mathbf{R}\in\mathbb{R}^ {n\times n}\}\). We name the proposed model as Unit Ball Bilinear Model (UniBi), since it captures entities in a unit ball as shown in Fig. 1(c). 
The score function of UniBi is3: Footnote 3: We present in more detail the similarities and differences between our constraints and those of our predecessors in the Appendix D \[s(h,r,t)=\mathbf{h}^{\top}\mathbf{R}\mathbf{t},\ \|\mathbf{h}\|,\|\mathbf{t}\|=1,\rho(\mathbf{R})=1. \tag{2}\] We then have the following theorem. **Theorem 1**.: _UniBi is capable to model the law of identity in terms of definition 2._ Proof.: Please refer to the Appendix A.2 ### Efficient Modeling #### 4.3.1 Efficient Modeling for Spectral Radius Although the proposed model has been proven to model the law of identity, it still has a practical disadvantage, since it is difficult to directly represent all matrices whose spectral raidus are \(1\). In addition, it is also time-consuming to calculate the spectral radius \(\rho(\cdot)\) via singular values decomposition (SVD). To avoid unnecessary decomposition, we divide a relation matrix into three parts \(\mathbf{R}=\mathbf{R}_{h}\mathbf{\Sigma}\mathbf{R}_{t}\) where \(\mathbf{R}_{h},\mathbf{R}_{t}\) are orthogonal matrices and \(\mathbf{\Sigma}=\text{Diag}[\sigma_{1},\dots,\sigma_{n}]\) is a positive semidefinite diagonal matrix. And we maintain the independence of these three components during training. Therefore, it becomes simple to obtain matrices whose spectral radius is 1, that is, \(\frac{\mathbf{R}_{h}\Sigma\mathbf{R}_{t}}{\sigma_{max}}\). And we transform the score function Eq. 2 into the following form. \[s(h,r,t)=\frac{\mathbf{h}^{\top}\mathbf{R}_{h}\mathbf{\Sigma}\mathbf{R}_{t} \mathbf{t}}{\sigma_{max}\|\mathbf{h}\|\|\mathbf{t}\|}, \tag{3}\] where \(\sigma_{max}\) is the maximum among \(\sigma_{i}\). #### 4.3.2 Efficient Modeling for Orthogonal Matrix In addition, we find that the calculation of the orthogonal matrix is still time-consuming (Tang et al., 2020). To this end, we only consider the diagonal orthogonal block matrix, where each block is a low-dimensional orthogonal matrix. Specifically, we use \(k\)-dimensional rotation matrices to build \(\mathbf{R}_{h}\) and \(\mathbf{R}_{t}\). Taking \(\mathbf{R}_{h}\) as an example \(\mathbf{R}_{h}=\text{Diag}[\mathbf{SO}(k)_{1},\dots,\mathbf{SO}(k)_{\frac{n}{ k}}]\), where \(\mathbf{SO}(k)_{i}\) denotes the \(i\)-th special orthogonal matrix, that is, the rotation matrix. The rotation matrix only represents the orthogonal matrices whose determinant are \(1\) and does not represent the ones whose determinant are \(-1\). To this end, we introduce two diagonal sign matrices of \(n\)-th order \(\mathbf{S}_{h},\mathbf{S}_{t}\in\mathbb{S}\) where \[\mathbb{S}=\{\mathbf{S}\mid\mathbf{S}_{ij}=\begin{cases}\pm 1,&if\ i=j,\\ 0,&if\ i\neq j.\end{cases}\}. \tag{4}\] Thus, we could rewrite the score function Eq. 3 to \[s(h,r,t)=\frac{\mathbf{h}^{\top}\mathbf{R}_{h}\mathbf{S}_{h}\mathbf{\Sigma} \mathbf{S}_{t}\mathbf{R}_{t}\mathbf{t}}{\sigma_{max}\|\mathbf{h}\|\|\mathbf{t} \|}. \tag{5}\] However, the sign matrix \(\mathbf{S}_{h}\) and \(\mathbf{S}_{t}\) are discrete. To address this problem, we notice that \(\mathbf{S}_{h}\), \(\mathbf{\Sigma}\), \(\mathbf{S}_{t}\) can be merged into a matrix \(\mathbf{\Xi}\) that \[\mathbf{\Xi}_{ij}=\begin{cases}s_{i}s_{j}\sigma_{i},&if\ i=j,\\ 0&if\ i\neq j.\end{cases} \tag{6}\] where \(s_{i}=(\mathbf{S}_{h})_{ii}\), \(s_{j}=(\mathbf{S}_{t})_{jj}\), \(i,j=1,\dots,n\) and \(\mathbf{\Xi}=\text{Diag}[\xi_{1},\dots,\xi_{n}]\). Thus, we incorporate the discrete matrices \(\mathbf{\bar{S}}_{h},\mathbf{S}_{t}\) into the continuous matrix \(\mathbf{\Xi}\). 
\[s(h,r,t)=\frac{\mathbf{h}^{\top}\mathbf{R}_{h}\mathbf{\Xi}\mathbf{R}_{t} \mathbf{t}}{|\xi_{max}|\|\mathbf{h}\|\|\mathbf{t}\|}, \tag{7}\] where \(|\xi_{max}|\) is the maximum among \(|\xi_{i}|\). ### Other benefits from scale normalization In addition to theoretical superiority, UniBi is more powerful and interpretable, since modeling the law of identity requires normalizing the scales that barely contain any useful knowledge. On the one hand, it is obvious that modeling the law of identity needs to avoid the cases in Fig. 1(a) and Fig. 1(b), which requires normalizing the scales of entities and relations. On the other hand, it is counter-intuitive that the scale information is useless for bilinear based models. Scale information is treated as useless because what really matters is not the absolute values but the relative ranks of scores. And scale contributes nothing to the ranks, since they remain the same after we multiply these scores by a positive factor: \[s^{\prime}(h,r,t)=(k_{e}\mathbf{h})^{\top}(k_{r}\mathbf{R})(k_{e}\mathbf{t}) =k_{e}^{2}k_{r}(\mathbf{h}^{\top}\mathbf{R}\mathbf{t})=k_{e}^{2}k_{r}\cdot s (h,r,t), \tag{8}\] where \(k_{e},k_{r}>0\). Therefore, we treat learning on scales as ineffective4. Footnote 4: Further discussion in Appendix E. #### 4.4.1 Performance As illustrated in Fig. 11, UniBi has better performance, since it prevents ineffective learning with the least constraints. On the one hand, by preventing ineffective learning, UniBi focuses more on learning useful knowledge, which helps improve performance. On the other hand, it pays a negligible cost of expressiveness, since it adds only one equality constraint to each entity or relation, which is ignorable when the dimension \(n\) is high. In other words, although our scale normalization is a double-edged sword, its negative effect is negligible, and thus leads to a better performance. It should be noticed that the loss on expressiveness may outweighs the gain on learning, if scale normalization is replaced by a sticker one. For example, if we constrain the matrix to be orthogonal, the cost of expressiveness is no longer negligible, since an orthogonal matrix requires that each of its singular values be \(1\), which is \(n\) equality constraints. #### 4.4.2 Interpretability In addition to performance, scale normalization also helps us to understand complex relations. Complex relations are defined by whither hptr (_h_ead _p_er tail of a _r_elation) or tphr (_tail per head of a _r_elation) is higher than a special threshold 1.5 [Wang et al., 2014]. Therefore, all relations are divided into 4 types, i.e. 1-1, 1-N, N-1, and N-N. However, we think this division is too coarse-grained and suggest a fine-grained continuing metric complexity instead. To better demonstrate this idea, we gave an example in Fig. 2(a) and the definition of complexity as follows. **Definition 3**.: _The complexity of a relation is the sum of its hptr and tphr._ Intuitively, complex relations are handled by aggregating entities through projection [Wang et al., 2014, Lin et al., 2015], which implies that the higher the complexity of a relation, the stronger its aggregation effect, and vice versa. We note that this aggregation effect can be well characterized by the relative ratio, or imbalance degree, of singular values of the matrices of relations. 
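A quick numerical check makes this concrete. The sketch below (illustrative Python, not the authors' released code) assembles the score (7) from the block-rotation construction of Section 4.3 and confirms that the orthogonal factors leave the singular values, and hence the aggregation, entirely to \(\mathbf{\Xi}\):

```python
import numpy as np

def block_rotation(thetas):
    """Block-diagonal matrix of 2x2 rotations (the UniBi-O(2) construction)."""
    n = 2 * len(thetas)
    R = np.zeros((n, n))
    for i, t in enumerate(thetas):
        c, s = np.cos(t), np.sin(t)
        R[2*i:2*i+2, 2*i:2*i+2] = [[c, -s], [s, c]]
    return R

def unibi_score(h, t, thetas_h, xi, thetas_t):
    """Score (7): a bilinear form normalized to spectral radius 1."""
    M = block_rotation(thetas_h) @ np.diag(xi) @ block_rotation(thetas_t)
    return (h @ M @ t) / (np.abs(xi).max()
                          * np.linalg.norm(h) * np.linalg.norm(t))

rng = np.random.default_rng(0)
n = 8
h, t, xi = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
th_h, th_t = rng.uniform(0, 2 * np.pi, size=(2, n // 2))
M = block_rotation(th_h) @ np.diag(xi) @ block_rotation(th_t)
print(np.allclose(np.sort(np.linalg.svd(M, compute_uv=False)),
                  np.sort(np.abs(xi))))             # True: only Xi scales
print(abs(unibi_score(h, t, th_h, xi, th_t)) <= 1)  # bounded by construction
```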
For any relation matrix \(\mathbf{R}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\), both \(\mathbf{U}\) and \(\mathbf{V}\) are isometry, and only the singular values of the scaling matrix \(\mathbf{\Sigma}\) contribute to the aggregation. Moreover, the singular values of UniBi are less than or equal to \(1\)5, since the spectral radius, i.e. the maximum singular value, are normalized. This shows a promising correspondence between the singular values of our model and the aggregation and further to the complexity, as demonstrated in Fig. 2(b). Therefore, we could use singular values to represent the complexity of relations, which increases the interpretability of UniBi. Footnote 5: We discuss this characteristic in more depth from the perspective of group in Appendix F. It is worth mentioning that this interpretabity can be transferred to other bilinear based models if they normalize the spectral radius of their relation matrix as UniBi does. ## 5 Experiment In this section, we give the experiment settings in Section 5.1. We verify that UniBi is capable to model the law of identity, while previous bilinear based models are not in Section 5.2. UniBi is comparable to previous SOTA bilinear models in the link prediction task, as shown in Section 5.3. Figure 2: Complexity and contraction. (a) A toy example to show how to calculate complexity. (2) the aggregation corresponds to the singular values less than 1. In addition, we demonstrate the robustness of UniBi in Section 5.4 and the interpretability about complexity in Section 5.5. ### Experiment Settings DatasetWe evaluate models on three commonly used benchmarks, i.e. WN18RR (Dettmers et al., 2018), FB15k-237 (Toutanova and Chen, 2015) and YAGO3-10-DR (Akrami et al., 2020). They are proposed by removing the reciprocal triples that cause data leakage in WN18, FB15K and YAGO3-10 respectively. Their statistics are listed in Tbl. 1. Evaluation metricsWe use Mean Reciprocal Rank (MRR) and Hits@k (k = 1, 3, 10) as the evaluation metrics. MRR is the average inverse rank of the correct entities that are insensitive to outliers. Hits@k denotes the proportion of correct entities ranked above k. BaselinesHere we consider two specific versions of UniBi: UniBi-O(2), UniBi-O(3), which use rotation matrices in \(2\) and \(3\) dimensions to construct the orthogonal matrix. To be specific, we use the unit complex value and the unit quaternion to model the 2D and 3D rotations using the \(2\times 2\) and \(4\times 4\) matrices, respectively. For more details, see the Appendix G.1. UniBi is compared with these bilinear models: RESCAL (Nickel et al., 2011), CP (Hitchcock, 1927), ComplEx (Trouillon et al., 2016), and QuatE (Zhang et al., 2019). In addition, it also compared to other models: RotatE (Sun et al., 2019), MurE (Balazevic et al., 2019) and RotE (Chami et al., 2020), Turcker (Balazevic et al., 2019) and ConvE (Dettmers et al., 2018), PairRE Chao et al. (2021), TripleRE. OptimizationWe adopt the reciprocal setting (Lacroix et al., 2018), which creates a reciprocal relation \(r^{\prime}\) for each \(r\) and a new triple \((e_{k},r^{\prime}_{j},e_{i})\) for each \((e_{i},r_{j},e_{k})\in\mathcal{K}\). Instead of using Cross Entropy directly (Lacroix et al., 2018; Zhang et al., 2020, 2019), we add an extra scalar \(\gamma>0\) before softmax function. Since UniBi is bounded, it brings an upper bound to loss that makes the model difficult to optimize as discussed by Wang et al. (2017). 
\[L=-\sum_{(h,r,t)\in\mathcal{K}_{train}}\log\left(\frac{\exp(\gamma\cdot s(h,r, t))}{\sum_{t^{\prime}\in\mathcal{E}}\exp(\gamma\cdot s(h,r,t^{\prime}))} \right)+\lambda\cdot Reg(h,r,t), \tag{9}\] where \(Reg(h,r,t)\) is the regularization term and \(\lambda>0\) is its factor. Specifically, we only take \(Reg(h,r,t)\) as DURA (Zhang et al., 2020) in experiments, since it significantly outperforms other regularization terms. In addition, \(\gamma\) is set to \(1\) for previous methods and greater than \(1\) for UniBi. And we set the dimension \(n\) to \(500\). For other details on the implementation, see Appendix G.2. ### Modeling Prior Property In this part, we verify that 1) UniBi is capable to model the law of identity while previous models fail, and 2) both constraints on the embedding of entities and relations are indispensable. We explicitly add _identity_ as a new relation to benchmarks and use its corresponding matrix to determine whether the uniqueness is modeled. In particular, since entities are different, modeling the law means model the uniqueness of _identity_, which requires the matrix of _identity_ relation is supposed to converge to the identity matrix \(\mathbf{I}\) or a scaled one. To evaluate it, we introduce a new metric imbalance degree \(\Delta=(\sum_{i}\frac{\sigma_{i}}{\sigma_{max}}-1)^{2}\). We first compare UniBi with CP (Hitchcock, 1927) and RESCAL (Nickel et al., 2011), the least and most expressive bilinear model on FB15k-237. Besides, we also apply DURA (Zhang et al., 2020) to models to explore whether these methods are able to model the law of identity under extra regularization. As demonstrated in Fig. 3(a), the imbalance degree \(\Delta\) of UniBi converges to \begin{table} \begin{tabular}{c c c c c} \hline \hline Dataset & \(|\mathcal{E}|\) & \(|\mathcal{R}|\) & Training & Validation & Test \\ \hline WN18RR & 40,943 & 11 & 86,835 & 3,034 & 3,134 \\ FB15k-237 & 14,541 & 237 & 272,115 & 17,535 & 20,466 \\ YAGO3-10-DR & 122,873 & 36 & 732,556 & 3,390 & 3,359 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the benchmark datasets. while others fails, which verifies that UniBi is capable to uniquely model _identity_. In addition, the imbalance of other models decreases to some extent when using DURA, yet they are still unable to uniquely model _identity_. Then, to show that UniBi can uniquely converge to _identity_, we use two matrices \(\mathbf{R}_{1}\) and \(\mathbf{R}_{2}\) to model it independently. As shown in Fig. 3(b), the error between \(\mathbf{R}_{1}\) and \(\mathbf{R}_{2}\) also converges to \(0\), which means that they all converge to \(\mathbf{I}\). We then perform an ablation study to verify that both the entity constraint (EC) and the relation constraint (RC) are needed to model _identity_ uniquely. The experiments show that only using either constraint is not enough to model the uniqueness of _identity_, as illustrated in Fig. 3(c). And this verify the existence of problems shown in Fig. 1(a) and Fig. 1(b). ### Main Results In this part, we demonstrate that the constraints helps UniBi to achieve better performance. We mainly compared our model with previous SOTA models, i.e. CP (Hitchcock, 1927), ComplEx (Trouillon et al., 2016) and RESCAL (Nickel et al., 2011) using DURA regularization. Although these models have been implemented by Zhang et al. (2020), the dimensions of CP and ComplEx are very high and have not been tested on YAGO3-10-DR, so we reimplement them in this paper. 
In addition, we further remove the constraint of UniBi-O(n) as ablations to eliminate the influence of other factors. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**WN18RR**} & \multicolumn{3}{c}{**FFB15k-237**} & \multicolumn{3}{c}{**YAGO3-10-DR**} \\ Model & MRR & Hist\(\pm\)1 & Hist\(\pm\)10 & MRR & Hist\(\pm\)10 & MRR & Hist\(\pm\)1 & Hist\(\pm\)10 \\ \hline DistMehl & 0.43 & 0.39 & 0.49 & 0.241 & 0.155 & 0.419 & 0.192 & 0.133 & 0.307 \\ ConvE & 0.43 & 0.40 & 0.52 & 0.325 & 0.237 & 0.501 & 0.204 & 0.147 & 0.315 \\ TuckerR & 0.470 & 0.443 & 0.526 & 0.358 & 0.266 & 0.544 & 0.207 & 0.148 & 0.320 \\ QuatE & 0.488 & 0.438 & 0.582 & 0.348 & 0.248 & 0.550 & - & - & - \\ RetAE & 0.476 & 0.428 & 0.571 & 0.338 & 0.241 & 0.533 & 0.214 & 0.153 & 0.332 \\ MurP & 0.481 & 0.440 & 0.566 & 0.335 & 0.243 & 0.518 & - & - & - \\ RetE & 0.494 & 0.446 & **0.585** & 0.346 & 0.251 & 0.538 & - & - & - \\ PairRe & - & - & - & 0.351 & 0.256 & 0.544 & - & - & - \\ TripleRE & - & - & - & 0.251 & 0.251 & 0.552 & - & - & - \\ CPT & \(0.457_{0.001}\) & \(0.441_{0.002}\) & \(0.549_{0.003}\) & \(0.361_{0.001}\) & \(0.266_{0.001}\) & \(0.551_{0.001}\) & \(0.241_{0.001}\) & \(0.175_{0.002}\) & \(0.370_{0.003}\) \\ ComplEx+ & \(0.457_{0.002}\) & \(0.445_{0.001}\) & \(0.571_{0.002}\) & \(0.363_{0.001}\) & \(0.209_{0.001}\) & \(0.552_{0.002}\) & \(0.238_{0.001}\) & \(0.174_{0.002}\) & \(0.360_{0.004}\) \\ RESCAL+ & \(\mathbf{0.495_{0.001}}\) & \(\mathbf{0.452_{0.002}}\) & \(0.575_{0.002}\) & \(0.364_{0.003}\) & \(0.272_{0.003}\) & \(0.547_{0.002}\) & \(0.233_{0.003}\) & \(0.168_{0.004}\) & \(0.360_{0.004}\) \\ \hline UniBi-O(2) & 0.4872000 & 0.469000 & 0.560000 & 0.470000 & 0.4724000 & 0.5610000 & **0.2473000** & 0.12790000 & 0.1276000 \\ \hline -w/o constraint & \(0.458_{0.001}\) & \(0.441_{0.002}\) & \(0.568_{0.003}\) & \(0.361_{0.001}\) & \(0.267_{0.001}\) & \(0.501_{0.001}\) & \(0.242_{0.001}\) & \(0.176_{0.001}\) & \(0.371_{0.002}\) \\ \hline UniBi-O(3) & 0.4924000 & **0.452** & 0.571 & 0.3690000 & **0.2745** & 0.0581000 & 0.290000 & **0.1280000** & **0.3724000** \\ \hline -w/o constraint & \(0.458_{0.001}\) & \(0.446_{0.002}\) & \(0.567_{0.003}\) & \(0.361_{0.001}\) & \(0.265_{0.001}\) & \(0.530_{0.001}\) & \(0.241_{0.001}\) & \(0.175_{0.002}\) & \(0.370_{0.003}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results on WN18RR, FB15k-237 and YAGO3-10-DR datasets. We reimplement RotE, CP, RESCAL, ComplEx with \(n=500\) and denoted by \(\dagger\), while we take results on WN18RR and FB15k-237 from the origin papers and YAGO3-10-DR from Akrami et al. (2020). Best results are in **bold** while the seconds are underlined. Figure 3: UniBi is capable to uniquely model _identity_. (a) the imbalance degree (\(\Delta\)) of UniBi converges to \(0\) while others diverge. (b) The errors between different matrices modeling _identity_ converge to \(0\) on different datasets. (c) Both entity constrain (EC) and relation constrain (RC) are indispensable for UniBi to model _identity_. In Tbl. 2, UniBi achieves comparable results to previous bilinear based models and the unconstrained versions. UniBi is only slightly and justifiably below RESCAL on WN18RR, since RESCAL needs require much more time and space6. Footnote 6: Detailed in Appendix H. ### UniBi prevents ineffective learning In this part, we verify the superiority of UniBi comes from preventing ineffective learning. We conduct further comparisons without regularization. 
In addition, we also adopt EC and RC from Section 5.2 to study the effect of both constraints. All experiments are implemented on WN18RR. On the one hand, the performance of UniBi decreases only slightly, while that of the other models drops significantly when the regularization term is removed, as demonstrated in Fig. 4(a). This shows that the learning of UniBi is less dependent on extra regularization, since preventing ineffective learning allows it to concentrate on the useful part. On the other hand, we illustrate the MRR metric of UniBi and its ablation models on the validation set as the epoch grows in Fig. 4(b). It shows that either constraint alone alleviates overfitting to some extent but fails to prevent the subsequent decline, since the scale of the unconstrained part may diverge. Thus, both constraints are verified to be indispensable for preventing ineffective learning and improving the performance of UniBi. ### Correlation to Complexity To verify the statement in Section 4.4.2, we study the connection between singular values and the complexity of each relation on three benchmarks, where complexity is calculated following Definition 3. Furthermore, we measure the singular values of a relation by the imbalance degree \(\Delta\). To differentiate the \(\Delta\) of a relation \(r\) and that of its reciprocal relation \(r^{\prime}\), we use \(\Delta_{r}\) and \(\Delta_{r^{\prime}}\) to denote them. As demonstrated in Fig. 5, we find that the singular values are highly correlated with the complexity of a relation. Furthermore, we notice that \(\Delta_{r}\) and \(\Delta_{r^{\prime}}\) are very close even if a relation is unbalanced (1-N or N-1), which shows that complexity is handled by aggregation regardless of direction. ## 6 Conclusion In this paper, we propose a new perspective, prior properties, for analyzing and modeling KGs beyond posterior properties. We discover a prior property named the law of identity. Moreover, we notice that bilinear based models fail to model this property, and we propose a provably correct solution named UniBi. Specifically, UniBi applies a well-designed normalization to the embeddings of entities and relations with minimal constraints. Figure 4: UniBi benefits from preventing ineffective learning. (a) UniBi relies less on regularization than the other models. (b) Neither the entity constraint (EC) nor the relation constraint (RC) alone stops the decline in performance. Furthermore, UniBi gains other advantages through the normalization. On the one hand, the normalization prevents ineffective learning and leads to better performance; on the other hand, it reveals that the relative ratio of singular values corresponds to the complexity of relations, which improves interpretability. In summary, we believe the question of prior properties and the paradigm of UniBi can provide interesting and useful directions for the study of bilinear based models.
2309.13820
Strongly Efficient Rare-Event Simulation for Regularly Varying Lévy Processes with Infinite Activities
In this paper, we address rare-event simulation for heavy-tailed L\'evy processes with infinite activities. The presence of infinite activities poses a critical challenge, making it impractical to simulate or store the precise sample path of the L\'evy process. We present a rare-event simulation algorithm that incorporates an importance sampling strategy based on heavy-tailed large deviations, the stick-breaking approximation for the extrema of L\'evy processes, the Asmussen-Rosi\'nski approximation, and the randomized debiasing technique. By establishing a novel characterization for the Lipschitz continuity of the law of L\'evy processes, we show that the proposed algorithm is unbiased and strongly efficient under mild conditions, and hence applicable to a broad class of L\'evy processes. In numerical experiments, our algorithm demonstrates significant improvements in efficiency compared to the crude Monte-Carlo approach.
Xingyu Wang, Chang-Han Rhee
2023-09-25T02:02:56Z
http://arxiv.org/abs/2309.13820v2
Strongly Efficient Rare-Event Simulation for Multiple-Jump Events in Regularly Varying Levy Processes with Infinite Activities ###### Abstract In this paper, we address rare-event simulation for heavy-tailed Levy processes with infinite activities. Specifically, the presence of infinite activities poses a significant computational challenge, making it impractical to simulate or store the sample path of the Levy process. Building upon the importance sampling scheme in [14], we present a rare-event simulation algorithm that incorporates the sample path large deviations for heavy-tailed Levy processes, the stick-breaking approximation for the extrema of Levy processes, the Asmussen-Rosinski approximation for small-jump Levy processes, and the randomized debiasing Monte-Carlo scheme. By establishing a novel characterization for the Lipschitz continuity of the law of Levy processes, we show that the proposed algorithm is unbiased and strongly efficient under mild conditions, and hence applicable to a broad class of Levy processes. In numerical experiments, our algorithm demonstrates significant improvements in efficiency when compared to crude Monte-Carlo method. ## 1 Introduction In this paper, we propose an importance sampling scheme designed to efficiently estimate the probability of rare events in heavy-tailed Levy processes with infinite activities. In particular, the heavy-tailedness in the increments of the Levy processes is captured by the notion of regular variation (see Definition 1). The prevalence of heavy-tailed phenomena extends across a diverse range of stochastic dynamics and systems, manifesting in crucial areas such as the spread of COVID-19 (see, e.g., [15]), traffic in computer and communication networks (see, e.g., [40, 26]), financial assets and risk processes (see, e.g., [24, 10]), and the training of deep neural networks (see, e.g., [33]). The estimation of the probability of rare events in heavy-tailed Levy processes also holds considerable significance in many applications, including risk management [1], mathematical finances [50], and queueing systems [37]. More formally speaking, the objective of this paper is to estimate the probability of form \(\mathbf{P}(A_{n})\) where \(A_{n}=\{\bar{X}_{n}\in A\}\), and \(\bar{X}_{n}=\{\bar{X}_{n}(t)=\frac{1}{n}X(nt):\ t\in[0,1]\}\) is a scaled version of some regularly varying Levy process \(X\). Here, \(A\) is subset of \(\mathbb{D}\), the space of the real-valued RCLL functions over \([0,1]\). Two key features characterize the problem setup. First, the occurrence of the rare events \(A_{n}\) typically necessitates multiple large jumps in \(\bar{X}_{n}\). To facilitate exposition, we focus on the scenario with \[A=\Big{\{}\xi\in\mathbb{D}:\sup_{t\in[0,1]}\xi(t)\geq a;\sup_{t \in(0,1]}\xi(t)-\xi(t-)<b\Big{\}}. \tag{1.1}\] Intuitively speaking, this characterization indicates that the supremum of the path \(\xi\) over \([0,1]\) exceeds the threshold \(a\), even though no upward jump in \(\xi\) is larger than \(b\). One can easily see that, for a step function (initialized at the origin) to fall into set \(A\), the minimum number of jumps required is dictated by the ratio between \(a\) and \(b\). Nevertheless, it is worth noticing that the framework developed in this paper can be effortlessly adapted to other multiple-jump rare events. Second, the regularly varying Levy process \(X\) also exhibits infinite activities; see Section 2.4 for the rigorous definition. 
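For intuition, membership in the set \(A\) of (1.1) is mechanical to check on a step-function skeleton of the path; a small sketch follows (the function name and the piecewise-constant representation are illustrative assumptions):

```python
import numpy as np

def in_A(levels, a, b):
    """Membership in the set A of (1.1) for a step path started at 0 whose
       successive post-jump levels are `levels`: the running supremum must
       reach a, while every upward jump stays strictly below b."""
    x = np.concatenate(([0.0], np.asarray(levels, dtype=float)))
    return x.max() >= a and np.max(np.diff(x), initial=0.0) < b

# reaching a = 2 with upward jumps < b = 1.5 needs at least ceil(a/b) = 2 jumps
print(in_A([1.2, 2.3], a=2.0, b=1.5))   # True
print(in_A([2.3],      a=2.0, b=1.5))   # False: the single jump is >= b
```

Returning to the second feature, the infinite activity of \(X\):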
In simple terms, the existence of either Brownian motion components or infinitely many jumps in \(X\) makes it computationally infeasible to simulate or store the entire sample path of \(X\) within any given interval. We discuss the three main challenges that arise in the current setup. Firstly, the nature of the rare events makes the crude Monte-Carlo simulation method extremely inefficient. For instance, when estimating \(p_{n}=\mathbf{P}(\bar{X}_{n}\in A)\), the crude Monte-Carlo estimator \(\mathbb{1}\{\bar{X}_{n}\in A\}\) attains a standard error of order \(\sqrt{p_{n}}\). Consequently, the number of samples required to estimate the target probability to a given relative accuracy is of order \(\sqrt{1/p_{n}}\). As \(n\to\infty\), the crude Monte-Carlo method becomes prohibitively expensive due to \(p_{n}\to 0\). To resolve this issue, we aim to design estimators \(L_{n}\) for probabilities \(\mathbf{P}(\bar{X}_{n}\in A)\) that attains strong efficiency (see Definition 2). Such strongly efficient estimators maintain a uniformly bounded (w.r.t. \(n\)) relative error, thus maintaining the same level of efficiency (regarding the required number of simulation trials) regardless of the rarity of the target events. To achieve this goal, we employ importance sampling, a frequently used variance reduction technique in Monte-Carlo simulation; see [2] for a standard treatment on this topic. The essence of importance sampling lies in the use of an alternative sampling distribution \(\mathbf{Q}\) (instead of the nominal distribution \(\mathbf{P}\)), under which the rare event occurs more frequently. Then we correct the estimation by incorporating the likelihood ratio \(d\mathbf{P}/d\mathbf{Q}\). Nevertheless, the performance of importance sampling technique is particularly sensitive to the choice of the importance sampling distribution \(\mathbf{Q}\). Seemingly plausible yet theoretically unjustified choices of \(\mathbf{Q}\) often fail to reduce the variance or even result in infinite variance during estimation; see, e.g., [29, 28]. Therefore, principled approaches are required to achieve theoretical guarantee in the designing of importance sampling estimators. In light-tailed settings, importance sampling schemes guided by large deviation theories have proven to be highly effective. Typically, the law of the light-tailed system, when conditioning on the rare events, would (asymptotically) coincide with a modulated version of the dynamics where the law of all the increments is exponentially tilted. An importance sampling distribution is then crafted by applying the exponential change of measure to the increments (see, for instance, [11, 51]). Furthermore, in the context of queueing networks, [19] unveiled and capitalized the connections between importance sampling, large deviations, and differential games. This leads to an adaptive and state-dependent importance sampling algorithm, which is asymptotically optimal for rare-event simulation in queueing networks. Interested readers can find theoretical foundations and additional algorithmic developments for this approach in references such as [20, 21, 22]. In comparison, the conventional exponential tilting approach falls short in providing a principled and provably efficient design of the importance sampling estimators in heavy-tailed settings, as noted in references such as [5]. 
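To make the contrast concrete, the light-tailed recipe amounts to a few lines; a toy sketch for a Gaussian random walk (illustrative only, not one of the cited algorithms):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)

def tilted_estimate(n, a, trials=50_000):
    """Exponential tilting for a light-tailed random walk: estimate
       P(S_n >= n*a) for iid N(0,1) steps by sampling the steps from N(a,1)
       and reweighting with the likelihood ratio exp(-a*S_n + n*a^2/2)."""
    S = rng.normal(loc=a, size=(trials, n)).sum(axis=1)
    return np.mean(np.exp(-a * S + n * a * a / 2.0) * (S >= n * a))

n, a = 50, 0.5
print(tilted_estimate(n, a), 0.5 * erfc(a * sqrt(n / 2.0)))  # estimate vs exact
```

No such tilting is available in our setting, since regularly varying increments do not possess finite exponential moments.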
Variance reduction techniques, such as conditional Monte Carlo (e.g., [3, 36]) and Markov Chain Monte Carlo (e.g., [32]), have proven to be valuable when addressing certain types of heavy-tailed rare event simulation tasks. Moreover, different approaches have been explored to apply importance sampling in heavy-tailed systems. For example, [18] proposes a state-dependent importance sampling algorithm for the boundary-crossing probability of a random walk with regularly varying increments by progressively modifying the probability of observing a large jump at each step; in [6], the estimation of the first-passage probability of heavy-tailed random walks is carried out using a state-dependent importance sampling algorithm based on Doob's \(h\)-transform and Lyapunov inequalities (see also [7, 8, 9] for applications of the same technique in multidimensional and queueing contexts). Nevertheless, it is worth noticing that the previously mentioned works are often tailored for specific types of processes or specific rare events, and their generalizations (such as the identification of Lyapunov functions) can be highly non-trivial beyond the simple settings. Fortunately, the recent developments of heavy-tailed large deviations, as exemplified by [47] and [53], have laid the foundation of an efficient and universal importance sampling estimators for heavy-tailed systems. At the core of theory is the catastrophe principle that characterizes the rarity and identifies the most likely causes for rare events in heavy-tailed systems. In heavy-tailed dynamics, rare event typically arise due to the catastrophic failures of a few components in the system. The exact number of such catastrophic failures dictates the asymptotic rate of decay and the most likely causes of the rare events, thus establishing a discrete hierarchy for rare events in heavy-tailed systems. More specifically, the results in [47] show that the asymptotics of form \(\mathbf{P}(\bar{X}_{n}\in E)=\boldsymbol{O}\big{(}(n\nu[n,\infty))^{l^{*}(E)} \big{)}\) (as \(n\to\infty\)) hold for all sets \(E\subset\mathbb{D}\) satisfying a mild topological condition. Here, \(\nu\) is the Levy measure of the Levy process \(X\) and \(l^{*}(E)\) is the minimum number of jumps required for a step function to fall into set \(E\). Consequently, for a given event \(B\subseteq\mathbb{D}\) such that \(l^{*}(B)=l^{*}(A)\) (recall that the target rare events \(A_{n}=\{\bar{X}_{n}\in A\}\) are defined based on some \(A\subseteq\mathbb{D}\)), we can expect \(\mathbf{P}(\bar{X}_{n}\in A)\) and \(\mathbf{P}(\bar{X}_{n}\in B)\) to decay at a similar rate as \(n\to\infty\), suggesting that the event \(\{\bar{X}_{n}\in B\}\) can provide a reliable approximation to the target event \(\{\bar{X}_{n}\in A\}\) when designing the importance sampling distribution. Building upon these ideas and employing defensive importance sampling with a carefully chosen set \(B\), [14] has demonstrated a readily implementable and strongly efficient importance sampling scheme applicable to wide class of rare events in regularly varying random walks and compound Poisson processes. In this work, we adopt and extend this framework to encompass Levy processes with infinite activities. The specifics of the set \(B\) (and hence the importance sampling distribution) are detailed in Section 3.1. Naturally, a crucial aspect in the implementation of the algorithm is the sampling from \(\mathbf{P}(\ \cdot\ |\bar{X}_{n}\in B)\). This is addressed by Algorithm 1 in Section 3.4. 
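Generically, the resulting defensive importance sampling estimator takes the following shape; a one-dimensional toy sketch (a Pareto tail standing in for the path-space problem; the choice of the proxy set \(B\) relative to \(A\) is precisely what the large-deviations analysis optimizes):

```python
import numpy as np

rng = np.random.default_rng(5)

alpha, a, b0, w = 1.5, 50.0, 25.0, 0.9   # Pareto index; target level a, proxy level b0
pB = b0 ** (-alpha)                      # P(B) = P(X > b0), known in closed form

def sample_P(size):                      # Pareto(alpha) on [1, inf): P(X > x) = x^(-alpha)
    return rng.uniform(size=size) ** (-1.0 / alpha)

def estimator(size):
    """Defensive mixture Q = w * P(.|B) + (1-w) * P and the unbiased
       estimator 1{X in A} dP/dQ of P(A) = P(X > a)."""
    from_B = rng.uniform(size=size) < w
    x = np.where(from_B, b0 * sample_P(size), sample_P(size))  # X | X > b0 = b0 * Pareto
    lr = 1.0 / (w * (x > b0) / pB + (1.0 - w))                 # dP/dQ at x
    return (x > a) * lr

L = estimator(200_000)
print(L.mean(), a ** (-alpha))                 # estimate vs exact P(X > a)
print(L.std() / (L.mean() * np.sqrt(len(L))))  # relative error of the average
```

The algorithm of Section 3 instantiates this template, with the set \(B\) dictated by the catastrophe principle.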
The second challenge lies in the simulation of the Levy process \(X\) with infinite activities, which sets our work apart from existing ones such as [14] and introduces new technical obstacles in the design of the algorithm. As mentioned earlier, the presence of infinite activities makes it computationally infeasible to simulate or store the entire sample path of \(X\). Our algorithm proposes a potential solution by first identifying all upward jumps with sizes exceeding a threshold \(\gamma\in(0,b)\) and then simulating the supremum of the process \(X\) between the arrival times of these large jumps. Note that this still requires the simulation the suprema of Levy processes with infinite activities. However, the law of the suprema of Levy processes is generally unknown (see [43]). Besides, methods for the exact (or even \(\epsilon\)-exact) simulation of these suprema or the value of the process at the first passage of a given threshold are only available for a few special cases; see, e.g., [31], [17], and [12]. Another direction that has been explored in existing literature utilizes the Winener-Hopf factorization in the fluctuation theory of Levy processes; see [38, 25]. The drawback, however, is that this approach requires the capability of simulating \(X(\boldsymbol{e}_{\lambda})\), where \(\boldsymbol{e}_{\lambda}\) is an exponential random variable with rate \(\lambda\) that is independent of the Levy process \(X\) (i.e., simulating the Levy process at a random exponential time). Unfortunately, an algorithm for the simulation \(X(\boldsymbol{e}_{\lambda})\) is also not available for most Levy processes. To overcome this challenge, we first construct a series of progressively more accurate approximations to the suprema of Levy processes, and then remove the bias in the approximations. Specifically, we employ stick-breaking approximations (SBA) algorithms in [30] for the extrema of Levy processes with infinite activities. This algorithm is built upon the theory of the convex minorants and concave majorants of Levy processes in [45], which will be reviewed in Section 2.4. In simple terms, given a Levy process \(X(t)\) and its running suprema \(M(t)=\sup_{s\in[0,t]}X(s)\), the joint law of \((X(t),M(t))\) admits the representation \[\big{(}X(t),M(t)\big{)}\stackrel{{ d}}{{=}}\Big{(}\sum_{j\geq 1 }\xi_{j},\ \sum_{j\geq 1}\max\{\xi_{j},0\}\Big{)}. \tag{1.2}\] Here, \((l_{j})_{j\geq 1}\) is a sequence of non-negative RVs satisfying \(\sum_{j\geq 1}l_{j}=1\) and \(\mathbf{E}l_{j}=t/2^{j}\ \forall j\geq 1\), and \(\xi_{j}\stackrel{{ d}}{{=}}X(l_{j})\) are independently generated when conditioning on the values of \((l_{j})_{j\geq 1}\). While it is computationally infeasible to generate the entire sequence of \((l_{j})_{j\geq 1}\) or \((\xi_{j})_{j\geq 1}\), by simulating \(l_{j},\xi_{j}\) up to step \(m\) we obtain \(\big{(}\sum_{j=1}^{m}\xi_{j},\sum_{j=1}^{m}\max\{\xi_{j},0\}\big{)}\) as an approximation to \((X(1),M(1))\). In particular, the geometric rate of decay in \(\mathbf{E}l_{j}\) ensures that, in expectation, the error in our \(m\)-step approximation decays exponentially fast as \(m\to\infty\). Such an approximation naturally lend itself to the unbiased estimation technique in [48], which allows us to construct an unbiased estimator from a sequence of approximations that becomes progressively more accurate. Further details on the unbiased estimation technique are available in Section 2.3. 
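To fix ideas, the following sketch (illustrative Python) combines the representation (1.2) with single-term randomized debiasing for a Brownian motion with drift, for which \(X(s)\) is exactly simulable:

```python
import numpy as np

rng = np.random.default_rng(6)

def sticks_and_increments(t, mu_d, sig, m):
    """First m stick-breaking terms of (1.2) for X = Brownian motion with
       drift mu_d, where X(s) ~ Normal(mu_d*s, sig^2*s):
       l_j = U_j * (remaining stick), so that E l_j = t / 2^j."""
    remaining, xi = t, np.empty(m)
    for j in range(m):
        l = remaining * rng.uniform()
        remaining -= l
        xi[j] = rng.normal(mu_d * l, sig * np.sqrt(l))
    return xi

def debiased(f, r=0.6, t=1.0, mu_d=0.3, sig=1.0):
    """Single-term randomized debiasing of the truncation error: draw a
       geometric level M, couple the M- and (M+1)-term approximations on the
       same sticks, and reweight the correction by 1/P(M = m).  Matching r
       to the geometric decay of E l_j is what keeps the variance finite."""
    M = rng.geometric(1.0 - r)                # P(M = m) = (1 - r) r^(m-1)
    xi = sticks_and_increments(t, mu_d, sig, M + 1)
    def level(m):                             # f at the m-term approximation
        return f(xi[:m].sum(), np.maximum(xi[:m], 0.0).sum())
    return level(1) + (level(M + 1) - level(M)) / ((1.0 - r) * r ** (M - 1))

# unbiased (under suitable moment conditions) for E f(X(1), M(1));
# here we estimate P(M(1) > 1):
vals = [debiased(lambda xv, mv: float(mv > 1.0)) for _ in range(20_000)]
print(np.mean(vals))
```

Replacing the Gaussian increments by draws from the law of \(X(l_{j})\) is the only Lévy-specific ingredient, and controlling the resulting bias and variance is exactly the role of the unbiased estimation technique.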
This unbiased estimation technique can also be interpreted as a randomized version of the multilevel Monte Carlo scheme [34, 27]. For more information on the connection between unbiased estimators and multilevel Monte Carlo, refer to [52]. Combining the unbiased estimator with SBA, we are able to design an estimation algorithm that is unbiased for \(\mathbf{P}(A_{n})\) and terminates after generating finitely many \(l_{j},\xi_{j}\). In practice, implementing the unbiased estimator using the stick-break approximation based on representation (1.2) requires the ability to sample \(X(t)\) for arbitrary \(t>0\), which is generally not feasible for Levy processes with infinite activities. Therefore, we appeal to the classical Asmussen-Rosinski approximation (ARA) in [4]. This approximation involves replacing the small jump martingale in the Levy process \(X\) with a Brownian motion term of the same variance. Thus, we approximate \(X\) with a mixture of a Brownian motion term and a compound Poisson process (with drift). To maintain the unbiasedness of our estimator and remove the errors introduced in ARA, we also incorporate ARA into the unbiased estimation technique. The construction of these approximators will be detailed in Section 3.3. It is worth noticing that the combination of unbiased estimation and stick-breaking approximations has been mentioned in section 2.5 of [30]. However, our work is the first to carefully orchestrate SBA, ARA, and the unbiased estimation technique and we demonstrate the effectiveness of this combination in the contexts of rare-event simulation. Our efforts culminates in Theorem 3.4, which establishes the unbiasedness and strong efficiency of our proposed Algorithm 2. Finally, the third major challenge pertains to the continuity of the law of the running supremum \(M(t)=\sup_{s\in[0,t]}X(s)\). For the set \(A\) defined in (1.1), the indicator function \(\mathbb{1}_{A}(\xi)\) is discontinuous at any \(\xi\in\mathbb{D}\) with \(\sup_{t\in[0,1]}\xi(t)=a\). As a result, when analyzing the estimation error of \(\mathbf{P}(\bar{X}_{n}\in A)\) one must evaluate the probability that the supremum of \(X(t)\), denoted by \(M(t)=\sup_{s\in[0,t]}X(s)\), concentrates around \(na\). Furthermore, to achieve strong efficiency we need to establish an upper bound on the relative error that is uniform w.r.t. all \(n\). From the perspective of Holder continuity, this roughly translates into such conditions: there exist some \(C\in(0,\infty)\) and \(\theta\in(0,1]\) such that \(\mathbf{P}(M(n)\in[na-\delta,na+\delta])<C\delta^{\theta}\)\(\forall\delta\in(0,1]\) holds uniformly for all \(n\) large enough. Nevertheless, the continuity of the law of the supremum \(M(t)\) remains an active area of study, with many essential questions left open. Recent developments regarding the law of \(M(t)\) are mostly qualitative or focusing on bounding the cumulative distribution function (cdf); see, e.g., [13, 16, 39, 41, 44, 42]. Most of these work do not provide quantitative characterization of the continuity, nor do they establish a version of Holder continuity that is uniform with respect to time \(t\) or location \(x\) in \(\mathbf{P}(M(t)\in[x-\delta,x+\delta])\). In short, tackling this aspect of the challenge requires establishing novel and useful quantitative characterizations of the law of supremum \(M(t)\). Again, the stick-breaking representation in (1.2) proves to be a valuable tool when analyzing the continuity of \(M(t)\). 
Our strategy involves passing the continuity of \(X(t)\) onto \(M(t)\) through the convolution structure of \(M(t)\stackrel{{ d}}{{=}}\sum_{j\geq 1}\max\{X_{j}(l_{j}),0\}\), where \(X_{j}\) are iid copies of the process \(X\). In particular, it turns out that a sufficient condition for our rare-event simulation setup is \[\mathbf{P}\Big{(}X^{<z}(t)\in[x,x+\delta]\Big{)}\leq\frac{C}{t^{\lambda}\wedge 1 }\delta^{\theta}\qquad\forall z\geq z_{0},\ t>0,\ x\in\mathbb{R},\ \delta\in[0,1]. \tag{1.3}\] Here, \(X^{<z}\) is a modulated version of the process \(X\) where all the upward jumps with sizes larger than \(z\) are removed. Rigorous definitions can be found in Section 3. At first glance, the condition (1.3) may seem restrictive, as it claims a version of Holder continuity that holds uniformly in \(t\), \(x\), and the truncation threshold \(z\). Fortunately, in Section 4 we establish several sets of sufficient conditions for (1.3) that are easy to verify. The general idea is to examine the characteristic function associated with the Levy measure \(\nu\) and establish its close resemblance to that of some \(\alpha\)-stable or semi-stable process that possesses the required continuity properties. In particular, we show that (1.3) is a mild condition for Levy processes with infinite activities, as it only requires the mass of \(\nu\) to approach \(\infty\) (hence endowing \(X\) with infinite activities) at a pace that is not too slow. This observation highlights the wide applicability of our approach to Levy processes with infinite activities. The paper is structured as follows. Section 2 provides a review of the theoretical foundations of our algorithms, including the heavy-tailed large deviations theory (Section 2.2), the debiasing technique (Section 2.3), and the stick-breaking approximations (Section 2.4). Section 3 presents the importance sampling algorithm and establishes its strong efficiency. Section 4 investigates the continuity of the law of \(X^{<z}(t)\) and provides sufficient conditions for (1.3), a critical assumption behind our strongly efficient importance sampling scheme. Numerical experiments are reported in Section 5. The proofs of our technical results are collected in Section 6. ## 2 Preliminaries In this section we first introduce a series of notations that will be frequently used in the remainder of the paper, and then review several important results that serve as the building blocks for our strongly efficient rare-event simulation algorithm. ### Notations For any positive integer \(k\), let \([k]=\{1,2,\cdots,k\}\). For any \(x,y\in\mathbb{R}\), let \(x\wedge y\triangleq\min\{x,y\}\) and \(x\lor y\triangleq\max\{x,y\}\). For any \(x\in\mathbb{R}\), we define \((x)^{+}\triangleq x\lor 0\) as the positive part of \(x\), and \[\lfloor x\rfloor\triangleq\max\{n\in\mathbb{Z}:\ n\leq x\},\qquad\lceil x \rceil\triangleq\min\{n\in\mathbb{Z}:\ n\geq x\}\] as the floor and ceiling functions of \(x\). Given a measure space \((\mathcal{X},\mathcal{F},\mu)\) and any \(A\in\mathcal{F}\), we use \(\mu|_{A}\) to denote the restriction of the measure \(\mu\) to \(A\), which is defined as \[\mu|_{A}(\cdot)\triangleq\mu(A\cap\cdot).\] For any random variable \(X\) and any Borel measurable set \(A\), let \(\mathscr{L}(X)\) be the law of \(X\), and \(\mathscr{L}(X|A)\) be the law of \(X\) conditioned on event \(A\). 
Let \((\mathbb{D}_{[0,1],\mathbb{R}},\mathbf{d})\) be the metric space where \(\mathbb{D}=\mathbb{D}_{[0,1],\mathbb{R}}\) is the space of all real-valued RCLL functions with domain \([0,1]\), equipped with the Skorokhod \(J_{1}\) metric \(\mathbf{d}\). Here the metric is defined by \[\mathbf{d}(x,y)\triangleq\inf_{\lambda\in\Lambda}\sup_{t\in[0,1]}|\lambda(t)-t| \vee|x(\lambda(t))-y(t)| \tag{2.1}\] with \(\Lambda\) being the set of all increasing homeomorphisms from \([0,1]\) to itself. We introduce a few definitions. First, we start with the concept of regular variation, which is commonly used to capture heavy-tailed phenomena. **Definition 1**.: _For any measurable function \(\phi:(0,\infty)\to(0,\infty)\), we say that \(\phi\) is regularly varying as \(x\to\infty\) with index \(\beta\) (denoted as \(\phi\in\mathcal{RV}_{\beta}\)) if \(\lim_{x\to\infty}\phi(tx)/\phi(x)=t^{\beta}\) for all \(t>0\). We also say that a measurable function \(\phi(\eta)\) is regularly varying as \(\eta\downarrow 0\) with index \(\beta\) if \(\lim_{\eta\downarrow 0}\phi(t\eta)/\phi(\eta)=t^{\beta}\) for any \(t>0\). We denote this as \(\phi(\eta)\in\mathcal{RV}_{\beta}(\eta)\)._ For details on the definition and properties of regularly varying functions, see, for example, chapter 2 of [46]. Next, we discuss the Levy-Ito decomposition of one-dimensional Levy processes. The law of any one-dimensional Levy process \(\{X(t):t\geq 0\}\) can be completely characterized by its generating triplet \((c,\sigma,\nu)\), where \(c\in\mathbb{R}\) represents the constant drift, \(\sigma\geq 0\) is the magnitude of the Brownian motion term, and the Levy measure \(\nu\) characterizes the intensity of the jumps. More precisely, we have \[X(t)\stackrel{{ d}}{{=}}ct+\sigma B(t)+\int_{|x|\leq 1}x[N([0,t] \times dx)-t\nu(dx)]+\int_{|x|>1}xN([0,t]\times dx) \tag{2.2}\] where \(B\) is a standard Brownian motion, the measure \(\nu\) satisfies \(\int(|x|^{2}\wedge 1)\nu(dx)<\infty\), and \(N\) is a Poisson random measure independent of \(B\) with intensity measure \(\text{Leb}([0,\infty))\times\nu\). For standard references on this topic, see chapter 4 of [49]. Lastly, given two sequences of non-negative real numbers \(x_{n},y_{n}\), we say \(x_{n}=\boldsymbol{O}(y_{n})\) if there exists some \(C\in[0,\infty)\) such that \(x_{n}\leq Cy_{n}\ \forall n\geq 1\), and we say \(x_{n}=\boldsymbol{o}(y_{n})\) if \(\lim_{n\to\infty}x_{n}/y_{n}=0\). The goal of this paper is described in the following definition of strong efficiency. **Definition 2**.: _Given a probability space \((\Omega,\mathcal{F},\mathbf{P})\), let \(A_{n}\) be a sequence of events and \(L_{n}\) be a sequence of random variables. We say that \((L_{n})_{n\geq 1}\) are **unbiased and strongly efficient** estimators of \((A_{n})_{n\geq 1}\) if_ \[\mathbf{E}L_{n}=\mathbf{P}(A_{n})\ \forall n\geq 1;\qquad\mathbf{E}L_{n}^{2}= \boldsymbol{O}\big{(}\mathbf{P}^{2}(A_{n})\big{)}\ \ \text{as}\ n\to\infty.\] We stress that strongly efficient estimators \((L_{n})_{n\geq 1}\) achieve uniformly bounded relative errors for all \(n\geq 1\). In other words, the number of Monte Carlo simulation runs required to estimate the target probability \(\mathbf{P}(A_{n})\) to a given relative accuracy is uniformly bounded w.r.t. all \(n\). ### Sample-Path Large Deviations of Regularly Varying Levy Processes The main ingredient of the importance sampling distribution in our algorithm is the recent development of sample-path large deviations for Levy processes with regularly varying increments in [47]. 
We first familiarize the reader with the one-sided version of the results and then review the more involved two-sided version. First, consider a Levy process \(X\) that is centered (i.e., \(\mathbf{E}X(t)=0\)) and has generating triplet \((c,\sigma,\nu)\) such that the Levy measure \(\nu\) is supported on \((0,\infty)\). In other words, all the jumps (discontinuities) in \(X\) are positive, hence one-sided. Moreover, we are interested in the heavy-tailed setting where the function \(H_{+}(x)=\nu[x,\infty)\) is regularly varying as \(x\to\infty\) with index \(-\alpha\) where \(\alpha>1\). We define a scaled version of \(X\) as \(\bar{X}_{n}(t)\triangleq\frac{1}{n}X(nt)\), and let \(\bar{X}_{n}\triangleq\{\bar{X}_{n}(t):\ t\in[0,1]\}\). Note that \(\bar{X}_{n}\) is a random element taking values in \(\mathbb{D}\). To describe the sample-path large deviations of \(\bar{X}_{n}\) in this one-sided setting, we introduce some notation. For all \(l\geq 1\), let \(\mathbb{D}_{l}\) be the subset of \(\mathbb{D}\) containing all the non-decreasing step functions that have \(l\) jumps and vanish at the origin. Let \(\mathbb{D}_{0}=\{\boldsymbol{0}\}\) where \(\boldsymbol{0}(t)\equiv 0\) is the zero function. Let \(\mathbb{D}_{<l}=\cup_{j=0,1,\cdots,l-1}\mathbb{D}_{j}\). For any \(\beta>0\), let \(\nu_{\beta}\) be the measure concentrated on \((0,\infty)\) with \(\nu_{\beta}(x,\infty)=x^{-\beta}\). For any positive integer \(l\), use \(\nu_{\beta}^{l}\) to denote the \(l\)-fold product measure of \(\nu_{\beta}\) restricted to \(\{y\in(0,\infty)^{l}:\ y_{1}\geq y_{2}\geq\cdots\geq y_{l}\}\), and define the measure \[\mathbf{C}_{\beta}^{l}(\cdot)\triangleq\mathbf{E}\Bigg{[}\nu_{\beta}^{l}\big{\{} y\in(0,\infty)^{l}:\ \sum_{j=1}^{l}y_{j}\mathbbm{1}_{[U_{j},1]}\in\cdot\big{\}}\Bigg{]}\] where \((U_{j})_{j\geq 1}\) is an iid sequence of \(\mathrm{Unif}(0,1)\) RVs; while for \(l=0\), let \(\mathbf{C}_{\beta}^{0}\) be the Dirac measure on \(\boldsymbol{0}\). The following large deviation results will be useful when designing rare-event simulation algorithms for \(\bar{X}_{n}\). Throughout the rest of this paper, all measurable sets are understood to be Borel measurable. **Result 1** (Theorem 3.1 of [47]).: _Let \(A\subset\mathbb{D}\) be measurable. Let \(l^{*}\triangleq\min\{l\in\mathbb{N}:\ \mathbb{D}_{l}\cap A\neq\emptyset\}\). If \(A\) is bounded away from \(\mathbb{D}_{<l^{*}}\) in the sense that \(\boldsymbol{d}(A,\mathbb{D}_{<l^{*}})>0\), then_ \[\mathbf{C}_{\alpha}^{l^{*}}(A^{\circ})\leq\liminf_{n\to\infty}\frac{\mathbf{P} (\bar{X}_{n}\in A)}{(n\nu[n,\infty))^{l^{*}}}\leq\limsup_{n\to\infty}\frac{ \mathbf{P}(\bar{X}_{n}\in A)}{(n\nu[n,\infty))^{l^{*}}}\leq\mathbf{C}_{\alpha}^ {l^{*}}(A^{-})<\infty\] _where \(A^{\circ},A^{-}\) are the interior and closure of \(A\), respectively._ A similar large deviation result, albeit admitting a slightly more sophisticated form, can be developed for the two-sided case where the Levy process \(X(t)\) has both positive and negative jumps. Now let \(X(t)\) be a centered Levy process such that for \(H_{+}(x)=\nu[x,\infty)\) and \(H_{-}(x)=\nu(-\infty,-x]\), we have \(H_{+}\in\mathcal{RV}_{-\alpha}\) and \(H_{-}\in\mathcal{RV}_{-\alpha^{\prime}}\) as \(x\to\infty\) for some \(\alpha,\alpha^{\prime}>1\). Let \(\mathbb{D}_{j,k}\) be the set containing all step functions in \(\mathbb{D}\) vanishing at the origin that have exactly \(j\) upward jumps and \(k\) downward jumps. As a convention, let \(\mathbb{D}_{0,0}=\{\boldsymbol{0}\}\). 
Given \(\alpha,\alpha^{\prime}>1\), let \[\mathbb{D}_{<j,k}\triangleq\bigcup_{(l,m)\in\mathbb{I}_{<j,k}}\mathbb{D}_{l,m},\qquad\text{where }\mathbb{I}_{<j,k}\triangleq\big{\{}(l,m)\in\mathbb{N}^{2}\backslash\{(j,k)\}:\ l( \alpha-1)+m(\alpha^{\prime}-1)\leq j(\alpha-1)+k(\alpha^{\prime}-1)\big{\}}.\] Next, let \(\mathbf{C}_{0,0}\) be the Dirac measure on \(\mathbf{0}\), and for any \((j,k)\in\mathbb{N}^{2}\backslash\{(0,0)\}\) let \[\mathbf{C}_{j,k}(\cdot)\triangleq\mathbf{E}\Bigg{[}\nu_{\alpha}^{j}\times\nu_{ \alpha^{\prime}}^{k}\bigg{\{}(x,y)\in(0,\infty)^{j}\times(0,\infty)^{k}:\ \sum_{l=1}^{j}x_{l} \mathbb{1}_{[U_{l},1]}-\sum_{m=1}^{k}y_{m}\mathbb{1}_{[V_{m},1]}\in\cdot\bigg{\}} \Bigg{]} \tag{2.3}\] where \(U_{l},V_{m}\) are two independent sequences of iid \(\text{Unif}(0,1)\) RVs. Now we are ready to state the two-sided result. **Result 2** (Theorem 3.4 of [47]).: _Let \(A\subset\mathbb{D}\) be measurable. Let_ \[\big{(}\mathcal{J}(A),\mathcal{K}(A)\big{)}\in\underset{(j,k)\in\mathbb{N}^{2 },\ \mathbb{D}_{j,k}\cap A\neq\emptyset}{\text{argmin}}j(\alpha-1)+k(\alpha^{\prime }-1).\] _If \(A\) is bounded away from \(\mathbb{D}_{<\mathcal{J}(A),\mathcal{K}(A)}\), then_ \[\mathbf{C}_{\mathcal{J}(A),\mathcal{K}(A)}(A^{\circ}) \leq\liminf_{n\to\infty}\frac{\mathbf{P}(\bar{X}_{n}\in A)}{(n \nu[n,\infty))^{\mathcal{J}(A)}(n\nu(-\infty,-n])^{\mathcal{K}(A)}}\] \[\leq\limsup_{n\to\infty}\frac{\mathbf{P}(\bar{X}_{n}\in A)}{(n \nu[n,\infty))^{\mathcal{J}(A)}(n\nu(-\infty,-n])^{\mathcal{K}(A)}}\leq \mathbf{C}_{\mathcal{J}(A),\mathcal{K}(A)}(A^{-})<\infty\] _where \(A^{\circ},A^{-}\) are the interior and closure of \(A\), respectively._ ### Unbiased Estimators As will be demonstrated soon, when designing the strongly efficient rare-event simulation algorithm we will first identify a series of fine approximations to the Levy process \(X\) with infinite activities, and then remove the errors in the approximations so that the proposed estimator is indeed unbiased. To achieve unbiasedness in our algorithm, we apply the debiasing technique in [48]. In particular, since \(\tau\) is finite (almost surely) in Result 3 below, it is possible to simulate the estimator \(Z\), as it only requires the algorithm to generate \(Y_{0},Y_{1},\cdots,Y_{\tau}\) instead of the infinite sequence \((Y_{m})_{m\geq 0}\). **Result 3** (Theorem 1 in [48]).: _Given a random variable \(Y\), a sequence of random variables \((Y_{m})_{m\geq 0}\) such that \(\lim_{m\to\infty}\mathbf{E}Y_{m}=\mathbf{E}Y\), and a positive integer-valued random variable \(\tau\) with unbounded support such that \(\tau\) is independent of \((Y_{m})_{m\geq 0}\) and \(Y\), if_ \[\sum_{m\geq 1}\mathbf{E}|Y_{m-1}-Y|^{2}\Big{/}\mathbf{P}(\tau\geq m)<\infty,\] _then \(Z\triangleq\sum_{m=0}^{\tau}(Y_{m}-Y_{m-1})\Big{/}\mathbf{P}(\tau\geq m)\) (with the convention \(Y_{-1}=0\)) satisfies_ \[\mathbf{E}Z=\mathbf{E}Y,\qquad\mathbf{E}Z^{2}=\sum_{m\geq 0}\bar{v}_{m}\Big{/} \mathbf{P}(\tau\geq m)\] _where \(\bar{v}_{m}=\mathbf{E}|Y_{m-1}-Y|^{2}-\mathbf{E}|Y_{m}-Y|^{2}\)._ ### Stick-Breaking Approximations for Levy Processes Next, we review the distribution of the concave majorant of a Levy process with infinite activities [45], which paves the way to the stick-breaking approximation algorithm proposed in [30] for the joint law of the endpoint and the maximum value of a Levy process. For any Levy process \(X(t)\) with generating triplet \((c,\sigma,\nu)\), we say that \(X\) has **infinite activities** if \(\sigma>0\) or \(\nu(\mathbb{R})=\infty\). Let \(M(t)\triangleq\sup_{s\leq t}X(s)\) be the running supremum of \(X(t)\). 
The results in [45] provide an intriguing representation of the joint law of \(X(t)\) and \(M(t)\). Specifically, fix some \(T>0\). Let \((V_{i})_{i\geq 1}\) be a sequence of iid \(\mathrm{Unif}(0,1)\) RVs, and recursively define \[l_{1}=TV_{1},\qquad l_{i}=V_{i}(T-l_{1}-\cdots-l_{i-1})\ \forall i\geq 2. \tag{2.4}\] Conditionally on the values of \((l_{i})_{i\geq 1}\), let the \(\xi_{i}\) be generated independently, with each \(\xi_{i}\) a random copy of \(X(l_{i})\). **Result 4** (Theorem 1 in [45]).: _If the Levy process \(X\) has infinite activities, then (with \((x)^{+}=\max\{x,0\}\))_ \[\big{(}X(T),M(T)\big{)}\stackrel{{ d}}{{=}}\big{(}\sum_{i\geq 1 }\xi_{i},\sum_{i\geq 1}(\xi_{i})^{+}\big{)}. \tag{2.5}\] By generating finitely many \(\xi_{i}\) instead of the entire sequence \((\xi_{i})_{i\geq 1}\), we obtain the stick-breaking approximation to \(\big{(}X(T),M(T)\big{)}\); see [30] for details. In particular, the geometric convergence rate of the approximation comes from the fact that \(T-\mathbf{E}[l_{1}+\cdots+l_{n}]=T/2^{n}\), that is, the part that has not been simulated up until step \(n\) is decaying geometrically fast in expectation. This stick-breaking approximation will be at the core of our algorithm when approximating the running supremum of a Levy process with infinite activities. In particular, we utilize a coupling between multiple Levy processes that is based on the stick-breaking representation above. For clarity of our description, here we focus on the case of two Levy processes \(X\) and \(\widetilde{X}\) with generating triplets \((c,\sigma,\nu)\) and \((\widetilde{c},\widetilde{\sigma},\widetilde{\nu})\), respectively. Suppose that both \(X\) and \(\widetilde{X}\) have infinite activities. One can first generate \(l_{i}\) as described in (2.4). Conditionally on the values of \((l_{i})_{i\geq 1}\), we can then independently generate \(\xi_{i}\) and \(\widetilde{\xi}_{i}\), where \(\xi_{i}\) is a random copy of \(X(l_{i})\) and \(\widetilde{\xi}_{i}\) is a random copy of \(\widetilde{X}(l_{i})\). Let \(\widetilde{M}(t)\triangleq\sup_{s\leq t}\widetilde{X}(s)\). It then follows from Result 4 that we have identified a coupling between \(X(T),M(T),\widetilde{X}(T),\widetilde{M}(T)\) where \[\big{(}X(T),M(T),\widetilde{X}(T),\widetilde{M}(T)\big{)}\stackrel{{ d}}{{=}}\big{(}\sum_{i\geq 1}\xi_{i},\sum_{i\geq 1}(\xi_{i})^{+}, \sum_{i\geq 1}\widetilde{\xi}_{i},\sum_{i\geq 1}(\widetilde{\xi}_{i})^{+} \big{)}. \tag{2.6}\] **Remark 1**.: _Even though for the purpose of this paper we only need the information of \(X(T)\), \(M(T)\), \(\widetilde{X}(T)\), and \(\widetilde{M}(T)\), it is worth noticing that the method described above in fact allows us to construct a probability space \((\Omega,\mathcal{F},\mathbf{P})\) that supports the entire sample paths \(X,\widetilde{X}\) whose endpoint values \(X(T),\widetilde{X}(T)\) and suprema \(M(T),\widetilde{M}(T)\) admit the joint law in (2.6). In particular, once we obtain the \(l_{i}\) based on (2.4), one can generate \(\Xi_{i}\) that are iid copies of the path of \(X\). That is, we generate a piece of sample path \(\Xi_{i}\) on the stick \(l_{i}\), and the quantities \(\xi_{i}\) introduced earlier can be obtained by setting \(\xi_{i}=\Xi_{i}(l_{i})\). To recover the sample path of \(X\) based on the pieces \(\Xi_{i}\), it suffices to apply the Vervaat transform to each \(\Xi_{i}\) and then reorder the pieces based on their slopes. We omit the details here and refer the readers to theorem 4 in [45]. 
The takeaway is that, whenever we apply the coupling described above, one can safely assume the existence of underlying Levy processes \(X\) and \(\widetilde{X}\) supported on the same probability space such that the law in (2.6) holds._ ## 3 Algorithm In this section, we describe the structure of the rare events we are interested in and propose a strongly efficient simulation algorithm for such rare events. Throughout the rest of this paper, let \(X(t)\) be a Levy process with generating triplet \((c_{X},\sigma,\nu)\) satisfying the following assumption on the heavy-tailedness of \(X(t)\). **Assumption 1**.: \(\mathbf{E}X(1)=0\)_. Regarding the Levy measure \(\nu\), the Blumenthal-Getoor index \(\beta\triangleq\inf\{p>0:\int_{(-1,1)}|x|^{p}\nu(dx)<\infty\}\) satisfies \(\beta<2\). Besides, one of the two claims below holds._ * _(One-sided case)_ \(\nu\) _is supported on_ \((0,\infty)\)_, and the function_ \(H_{+}(x)=\nu[x,\infty)\) _is regularly varying as_ \(x\to\infty\) _with index_ \(-\alpha\) _where_ \(\alpha>1\)_._ * _(Two-sided case) There exist_ \(\alpha,\alpha^{\prime}>1\) _such that_ \(H_{+}(x)=\nu[x,\infty)\) _is regularly varying as_ \(x\to\infty\) _with index_ \(-\alpha\) _and_ \(H_{-}(x)=\nu(-\infty,-x]\) _is regularly varying as_ \(x\to\infty\) _with index_ \(-\alpha^{\prime}\)_._ We impose another key assumption that revolves around the continuity of the law of \(X\). Specifically, for any \(z>0\) let \(X^{<z}\) be the Levy process with generating triplet \((c_{X},\sigma,\nu|_{(-\infty,z)})\). That is, one can consider \(X^{<z}\) as a modulated version of \(X\) where all the upward jumps with size larger than \(z\) are removed. **Assumption 2**.: _There exist \(z_{0},C,\lambda>0\) and \(\theta\in(0,1]\) such that_ \[\mathbf{P}\Big{(}X^{<z}(t)\in[x,x+\delta]\Big{)}\leq\frac{C}{t^{\lambda}\wedge 1 }\delta^{\theta}\qquad\forall z\geq z_{0},\ t>0,\ x\in\mathbb{R},\ \delta\in[0,1].\] This assumption can be interpreted as a strengthened version of Holder continuity of the law of \(X^{<z}(t)\). An in-depth discussion of sufficient conditions for Assumption 2 will be given in Section 4, where we show that Assumption 2 is a mild condition for Levy processes with infinite activities and is easy to verify. Let \(\bar{X}_{n}(t)=\frac{1}{n}X(nt)\) and \(\bar{X}_{n}=\{\bar{X}_{n}(t):\ t\in[0,1]\}\) be the scaled version of the process. Next, define the events \[A\triangleq\{\xi\in\mathbb{D}:\sup_{t\in[0,1]}\xi(t)\geq a;\sup_{t\in(0,1]} \xi(t)-\xi(t-)<b\},\qquad A_{n}\triangleq\{\bar{X}_{n}\in A\}. \tag{3.1}\] For technical reasons, we impose a mild condition on the values of the constants \(a,b>0\). **Assumption 3**.: \(a,b>0\) _and \(a/b\notin\mathbb{Z}\)._ Note that if a RCLL path \(\xi\) belongs to the set \(A\), then the path \(\xi\) crosses the barrier \(a\) even though none of its upward jumps reaches size \(b\). Under the scaling \(\bar{X}_{n}(t)=\frac{1}{n}X(nt)\), it is worth noticing that \(\bar{X}_{n}\) typically resembles the zero function \(\mathbf{0}\) for large \(n\). Therefore, it is rather unlikely to observe the barrier crossing phenomenon characterized by the events \(A_{n}=\{\bar{X}_{n}\in A\}\) for large \(n\). Due to the involved nature of our strongly efficient rare-event simulation algorithm, we will take one step at a time and scrutinize one component of the algorithm in each of the following subsections. The analysis culminates in Theorem 3.4, where we show that the proposed importance sampling algorithm (i.e., Algorithm 2) is unbiased and strongly efficient. 
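As a sanity check on the event just defined, note that \(\mathbb{1}_{A}\) can be evaluated exactly for piecewise-linear paths with jumps, since for such paths the supremum over \([0,1]\) is attained at an endpoint or at a jump epoch. The short Python sketch below does this for a toy drift-plus-jumps skeleton; the function name and the path parametrization are our illustrative choices, and this is not the evaluation scheme used in Algorithm 2 (which never has access to the full path of \(X\)).

```python
import numpy as np

def indicator_A(c, jump_times, jump_sizes, a, b):
    """1_A(xi) for xi(t) = c*t + sum_i z_i * 1[t >= t_i] on [0, 1].
    For such paths, sup xi is attained at t = 0, t = 1, or at a jump
    epoch (left or right limit), so it can be computed exactly."""
    order = np.argsort(jump_times)
    ts = np.asarray(jump_times, dtype=float)[order]
    zs = np.asarray(jump_sizes, dtype=float)[order]
    if np.any(zs >= b):                                  # an upward jump of size >= b
        return 0
    cum = np.cumsum(zs)
    pre = c * ts + np.concatenate(([0.0], cum[:-1]))     # xi(t_i-) at each jump epoch
    post = pre + zs                                      # xi(t_i) at each jump epoch
    endpoint = c + (cum[-1] if len(zs) else 0.0)         # xi(1)
    sup = max(0.0, endpoint, pre.max(initial=-np.inf), post.max(initial=-np.inf))
    return int(sup >= a)

# Two jumps of sizes 1.1 and 1.0 (each below b = 1.15) jointly cross a = 2:
print(indicator_A(-0.1, [0.2, 0.6], [1.1, 1.0], a=2.0, b=1.15))  # -> 1
```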
### Importance Sampling Distributions \(\mathbf{Q}_{n}\) At the core of our strongly efficient algorithm is the importance sampling strategy designed under the guidance of the large deviations theory. The framework we describe here can be viewed as an extension of the one in [14]. First, note that \[l^{*}\triangleq\lceil a/b\rceil \tag{3.2}\] characterizes the number of jumps required to cross the barrier \(a\) starting from the origin if every jump must be smaller than \(b\). For any \(\gamma\in(0,b)\), define the sets \(B_{n}^{\gamma}\triangleq\{\bar{X}_{n}\in B^{\gamma}\}\) where \[B^{\gamma}\triangleq\{\xi\in\mathbb{D}:\#\{t\in[0,1]:\xi(t)-\xi(t-)\geq\gamma \}\geq l^{*}\}. \tag{3.3}\] Intuitively speaking, the parameter \(\gamma\in(0,b)\) in the algorithm is the threshold for _large_ jumps, and any path \(\xi\) belonging to the set \(B^{\gamma}\) has at least \(l^{*}\) upward jumps that are large. We then apply importance sampling with a defensive mixture (see [35]) and propose (for some \(w\in(0,1)\)) \[\mathbf{Q}_{n}(\cdot)\triangleq w\mathbf{P}(\cdot)+(1-w)\mathbf{P}(\ \cdot\ |B_{n}^{\gamma}). \tag{3.4}\] The sampling from \(\mathbf{P}(\ \cdot\ |B_{n}^{\gamma})\), and hence \(\mathbf{Q}_{n}(\cdot)\), will be addressed in Section 3.4. Now that the design of the importance sampling distribution \(\mathbf{Q}_{n}\) is clear, a natural choice of an estimator for \(\mathbf{P}(A_{n})\) is of the form \(\mathbb{1}_{A_{n}}\cdot\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\) since \[\mathbf{E}^{\mathbf{Q}_{n}}\bigg{[}\mathbb{1}_{A_{n}}\frac{d\mathbf{P}}{d \mathbf{Q}_{n}}\bigg{]}=\mathbf{E}[\mathbb{1}_{A_{n}}]=\mathbf{P}(A_{n}).\] Here we use \(\mathbf{E}^{\mathbf{Q}_{n}}\) to denote the expectation operator under the law \(\mathbf{Q}_{n}\) and \(\mathbf{E}\) for the expectation under \(\mathbf{P}\). Indeed, this is the importance sampling estimator proposed in [14]. However, in that paper the object of study is a compound Poisson process with constant drift, whose entire sample path over a fixed time horizon \([0,T]\) can be simulated exactly. In other words, the exact evaluation of \(\mathbb{1}_{A_{n}}\) is computationally possible. We instead deal with Levy processes \(X\) with infinite activities. The entirety of the sample path of \(X\) cannot be simulated exactly with finite computational resources, and the evaluation of \(\mathbb{1}_{A_{n}}\) (i.e., verification of the barrier crossing with jumps bounded by \(b\)) is in general not computationally feasible. To sidestep this issue, one possibility is to consider estimators \[L_{n}\triangleq Z_{n}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}=\frac{Z_{n}}{w+ \frac{1-w}{\mathbf{P}(B_{n}^{\gamma})}\mathbb{1}_{B_{n}^{\gamma}}} \tag{3.5}\] such that \(Z_{n}\) can be simulated with finite computational resources and recovers the expectation of the indicator \(\mathbb{1}_{A_{n}}\) under the importance sampling distribution \(\mathbf{Q}_{n}\). In the remainder of this section we elaborate on the design of the estimators \(Z_{n}\). ### Estimators \(Z_{n}\) Given \(n\geq 1\), consider the following decomposition of the Levy process \(X\). Recall that \(\gamma\in(0,b)\) is the parameter in (3.3) functioning as the threshold for _large_ jumps. Let \[\begin{split}& J_{n}(t)\triangleq\sum_{s\leq t}\Delta X(s) \mathbb{1}\big{(}\Delta X(s)\geq n\gamma\big{)},\\ &\Xi_{n}(t)\triangleq X(t)-J_{n}(t)=X(t)-\sum_{s\leq t}\Delta X( s)\mathbb{1}\big{(}\Delta X(s)\geq n\gamma\big{)}.\end{split} \tag{3.6}\] We highlight a few useful facts regarding this decomposition. 
* \(\mathbf{Q}_{n}\) only alters the law of \(J_{n}\), so the law of \(\Xi_{n}\) stays the same under either \(\mathbf{Q}_{n}\) or \(\mathbf{P}\), and it coincides with that of \(X^{<n\gamma}\), i.e., a Levy process with generating triplet \((c_{X},\sigma,\nu|_{(-\infty,n\gamma)})\); * Under \(\mathbf{P}\), the process \(J_{n}\) is a Levy process with generating triplet \((0,0,\nu|_{[n\gamma,\infty)})\) (more precisely, it is a compound Poisson process); * Under \(\mathbf{Q}_{n}\), the process \(\{J_{n}(t):\ t\in[0,n]\}\) has the same law as that of a Levy process with generating triplet \((0,0,\nu|_{[n\gamma,\infty)})\) conditioned on the event that this process has at least \(l^{*}\) jumps on \([0,n]\); * Under either \(\mathbf{P}\) or \(\mathbf{Q}_{n}\), the two processes \(J_{n}\) and \(\Xi_{n}\) are independent. Meanwhile, let \(\bar{J}_{n}(t)=\frac{1}{n}J_{n}(nt),\bar{J}_{n}=\{\bar{J}_{n}(t):\ t\in[0,1]\}\) and \(\bar{\Xi}_{n}(t)=\frac{1}{n}\Xi_{n}(nt),\bar{\Xi}_{n}=\{\bar{\Xi}_{n}(t):t\in[0,1]\}\). Due to \(\gamma\in(0,b)\), in the definition of the events \(A_{n}=\{\bar{X}_{n}\in A\}\) in (3.1) the condition \(\sup_{t\in(0,1]}\xi(t)-\xi(t-)<b\) only concerns the large jump process \(\bar{J}_{n}\), since any upward jump in \(\bar{\Xi}_{n}\) is bounded by \(\gamma<b\). Therefore, we define \[E\triangleq\{\xi\in\mathbb{D}:\ \sup_{t\in(0,1]}\xi(t)-\xi(t-)<b\},\qquad E_{n} \triangleq\{\bar{J}_{n}\in E\} \tag{3.7}\] and use the indicator \(\mathbbm{1}_{E_{n}}\) in our estimator to detect the condition \(\sup_{t\in(0,1]}\xi(t)-\xi(t-)<b\) in \(A_{n}\). Next, let \[M(t)\triangleq\sup_{s\leq t}X(s),\qquad Y_{n}^{*}\triangleq\mathbbm{1}\big{(}M(n )\geq na\big{)}.\] As discussed above, exact evaluation of \(Y_{n}^{*}\) is in general not computationally possible. Instead, suppose that we have a sequence of random variables \((\hat{Y}_{n}^{m})_{m\geq 0}\) that only take values in \(\{0,1\}\) and gradually approximate \(Y_{n}^{*}\) as \(m\to\infty\). In light of the debiasing technique in Result 3, it is natural to consider the design of \(Z_{n}\) as (under the convention that \(\hat{Y}_{n}^{-1}\equiv 0\)) \[Z_{n}=\sum_{m=0}^{\tau}\frac{\hat{Y}_{n}^{m}-\hat{Y}_{n}^{m-1}}{\mathbf{P}(\tau \geq m)}\mathbbm{1}_{E_{n}} \tag{3.8}\] where \(\tau\) is \(\mathrm{Geom}(\rho)\) for some \(\rho\in(0,1)\) and is independent of everything else. This construction of \(Z_{n}\) is justified by the following proposition. The proof will be provided in Section 6.1. **Proposition 3.1**.: _Suppose there exist \(C_{0}>0\), \(\rho_{0}\in(0,1)\), \(\mu>2l^{*}(\alpha-1)\), and \(\bar{m}\geq 0\) such that_ \[\mathbf{P}\Big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m}\Bigm{|}\mathcal{D}(\bar{J}_{n})= k\Big{)}\leq C_{0}\rho_{0}^{m}\cdot(k+1)\qquad\forall k\geq 0,n\geq 1,m\geq\bar{m} \tag{3.9}\] _where \(\mathcal{D}(\xi)\) counts the number of discontinuities of \(\xi\) for any \(\xi\in\mathbb{D}\). Besides, suppose that for all \(\Delta\in(0,1)\),_ \[\mathbf{P}\Big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m},\ \bar{X}_{n}\notin A^{\Delta} \Bigm{|}\mathcal{D}(\bar{J}_{n})=k\Big{)}\leq\frac{C_{0}\rho_{0}^{m}}{\Delta^ {2}n^{\mu}}\qquad\forall n\geq 1,m\geq 0,k=0,1,\cdots,l^{*}-1 \tag{3.10}\] _where \(A^{\Delta}=\big{\{}\xi\in\mathbb{D}:\sup_{t\in[0,1]}\xi(t)\geq a-\Delta\big{\}}\)._ 
_Then for all \(\gamma\in(0,b)\) small enough and all \(\rho\in(\rho_{0},1)\), the estimators \(L_{n}\) specified in (3.5) and (3.8) are **unbiased and strongly efficient** for \(\mathbf{P}(A_{n})=\mathbf{P}(\bar{X}_{n}\in A)\) under the importance sampling distribution \(\mathbf{Q}_{n}\) in (3.4)._ ### Construction of the Approximations \(\hat{Y}_{n}^{m}\) In light of Proposition 3.1, our next goal is to design approximations \(\hat{Y}_{n}^{m}\) that meet conditions (3.9) and (3.10). To this end, we discuss how to generate the sequence \(\hat{Y}_{n}^{m}\) that approximates \(Y_{n}^{*}\triangleq\mathbbm{1}\big{(}M(n)\geq na\big{)}\). Recall that \(a,b>0\) are the parameters defining the multiple jump barrier crossing rare events \(A_{n}\) (see (3.1)) and \(\gamma\in(0,b)\) is the parameter of the algorithm that determines the large jump threshold in \(B_{n}^{\gamma}\) (see (3.3)) in our construction of the importance sampling distribution \(\mathbf{Q}_{n}\). Now consider the decomposition \(X(t)=\Xi_{n}(t)+J_{n}(t)\) in (3.6). Under both \(\mathbf{Q}_{n}\) and \(\mathbf{P}\), we have that \(\Xi_{n}\) and \(J_{n}\) are independent and the law of \(\Xi_{n}\) coincides with that of \(X^{<n\gamma}\), i.e., a Levy process with generating triplet \((c_{X},\sigma,\nu|_{(-\infty,n\gamma)})\). Therefore, one can first simulate \(J_{n}(t)=\sum_{s\leq t}\Delta X(s)\mathbbm{1}\big{(}\Delta X(s)\geq n\gamma \big{)}\) under \(\mathbf{Q}_{n}\) and then sample \(\Xi_{n}\) (or at least approximate its supremum). Suppose that the process \(J_{n}\) has made \(k\) jumps over \([0,n]\) (i.e., \(\mathcal{D}(\bar{J}_{n})=k\)) and admits the form \[J_{n}(t)=\sum_{i=1}^{k}z_{i}\mathbbm{1}_{[u_{i},n]}(t)\qquad\forall t\in[0,n] \tag{3.11}\] for some \(z_{i}\in[n\gamma,\infty)\) and \(u_{i}\in[0,n]\). Without loss of generality we assume the sequence \(u_{1}<u_{2}<\cdots<u_{k}\) is ordered. This allows us to partition the timeline \([0,n]\) into \(k+1\) disjoint intervals \([0,u_{1}),[u_{1},u_{2}),\cdots,[u_{k-1},u_{k}),[u_{k},n]\). We adopt the convention \(u_{0}=0,u_{k+1}=n\) and set \(I_{i}=[u_{i-1},u_{i})\) for \(i\in[k]\) and \(I_{k+1}=[u_{k},n]\). Now let \(\zeta(t)=\sum_{i=1}^{k}z_{i}\mathbbm{1}_{[u_{i},n]}(t)\). As a result, we can define \[M_{n}^{(i)}(\zeta)\triangleq\sup_{t\in I_{i}}\Xi_{n}(t)-\Xi_{n}(u_{i-1}). \tag{3.12}\] Then for the random function \[Y_{n}^{*}(\zeta)\triangleq\max_{i\in[k+1]}\mathbb{1}\Big{(}\Xi_{n}(u_{i-1})+ \zeta(u_{i-1})+M_{n}^{(i)}(\zeta)\geq na\Big{)}, \tag{3.13}\] we have \(Y_{n}^{*}(J_{n})=\mathbb{1}\big{(}\sup_{t\in[0,n]}X(t)\geq na\big{)}\). Naturally, to approximate such \(M_{n}^{(i)}(\zeta)\) one would consider applying the stick-breaking approximation (SBA) introduced in Section 2.4. However, approximating the supremum of \(X^{<n\gamma}\) on \([0,u]\) using SBA requires the capability to simulate \(X^{<n\gamma}(t)\) for any \(t>0\). In general, the law of a Levy process with infinite activities does not admit an explicit form, and it is unclear how to perform the exact simulation of a Levy process with infinite activities. To overcome this issue, we incorporate the Asmussen-Rosinski approximation (ARA) proposed in [4]. The idea is to pick some small threshold level \(\kappa\in(0,1)\) and substitute the jump martingale constituted by all jumps bounded by \(\kappa\) with a Brownian motion of the same variance. Here we state the naming conventions for the definitions below. 
We use \(n\) to denote the scaling parameter, and \(m\) for the approximation level in ARA and SBA. The index \(i\) indicates which interval \([u_{i-1},u_{i}]\) is concerned. For instance, \(\Xi_{n}\) has the law of the Levy process \(X^{<n\gamma}\), and the law of \(\tilde{\Xi}_{n}^{m}\) approximates that of \(X^{<n\gamma}\) by substituting part of the small jump martingale with a Brownian motion component of the same variance; \((l_{j}^{(i)})_{j\geq 1},(\xi_{j}^{(i)})_{j\geq 1}\) form a representation of type (2.5) on \([u_{i-1},u_{i}]\) for \(\Xi_{n}\), while \((\xi_{j}^{(i),m})_{j\geq 1}\) constitutes the stick-breaking representation of \(\tilde{\Xi}_{n}^{m}\) on \([u_{i-1},u_{i}]\). Specifically, let \[\kappa_{n,m}=\frac{\kappa^{m}}{n^{r}}\qquad\forall n\geq 1,m\geq 0 \tag{3.14}\] where \(\kappa\in(0,1)\) and \(r>0\) are two other parameters in our algorithm. As a convention we set \(\kappa_{n,-1}=1\). Without loss of generality, we focus on \(n\) large enough such that \(n\gamma>1=\kappa_{n,-1}\). For the Levy process \(\Xi_{n}=X^{<n\gamma}\) with the generating triplet \((c_{X},\sigma,\nu|_{(-\infty,n\gamma)})\), consider the following decomposition (with \(B(t)\) being a standard Brownian motion) \[\Xi_{n}(t)=c_{X}t+\sigma B(t)+\underbrace{\sum_{s\leq t}\Delta X(s) \mathbb{1}\Big{(}\Delta X(s)\in(-\infty,-1]\cup[1,n\gamma)\Big{)}}_{\triangleq J _{n,-1}(t)}+\sum_{m\geq 0}J_{n,m}(t), \tag{3.15}\] where, for each \(m\geq 0\), \[J_{n,m}(t)\triangleq\sum_{s\leq t}\Delta X(s)\mathbb{1}\Big{(}|\Delta X(s)|\in[\kappa_{n,m},\kappa_{n,m-1})\Big{)}-t\int_{\kappa_{n,m}\leq|x|<\kappa_{n,m-1}}x\,\nu(dx)\] is the compensated sum of all jumps with magnitudes in \([\kappa_{n,m},\kappa_{n,m-1})\). In particular, for any \(m\geq 0\) we can see that \(J_{n,m}\) is a martingale, and the variance of \(J_{n,m}(1)\) is \(\bar{\sigma}^{2}(\kappa_{n,m-1})-\bar{\sigma}^{2}(\kappa_{n,m})\) where \[\bar{\sigma}^{2}(c)\triangleq\int_{(-c,c)}x^{2}\nu(dx)\qquad\forall c\in(0,1]. \tag{3.16}\] To apply ARA, let \((W^{(j)})_{j\geq 1}\) be a sequence of iid copies of a standard Brownian motion, independent of \(B(t)\). Define \[\tilde{\Xi}_{n}^{m}(t)=c_{X}t+\sigma B(t)+J_{n,-1}(t)+\sum_{j=0}^{m}J_{n,j}(t) +\sum_{j\geq m+1}\sqrt{\bar{\sigma}^{2}(\kappa_{n,j-1})-\bar{\sigma}^{2}( \kappa_{n,j})}\cdot W^{(j)}(t). \tag{3.17}\] Here the process \(\tilde{\Xi}_{n}^{m}\) can be interpreted as an approximation to \(\Xi_{n}\) where the jump martingales of jumps bounded by \(\kappa_{n,m}\) are substituted by Brownian motion terms with matching variances. Note that (i) for all \(m\geq 1\), the random variable \(\tilde{\Xi}_{n}^{m}(t)\) is exactly simulatable as it is a convolution of a compound Poisson distribution (plus a constant drift) with a Gaussian random variable; (ii) as \(m\to\infty\), the law of \(\tilde{\Xi}_{n}^{m}\) approaches that of \(\Xi_{n}\). Utilizing \(\tilde{\Xi}_{n}^{m}\) constructed under ARA, we are able to apply the SBA technique as follows. Given any step function \(\zeta\), we define random functions \(\hat{Y}_{n}^{m}(\zeta)\) below. For the estimators \(Z_{n}\) in (3.8) we simply plug in \(\hat{Y}_{n}^{m}(J_{n})\). Specifically, consider a step function \(\zeta(t)=\sum_{i=1}^{k}z_{i}\mathbb{1}_{[u_{i},n]}(t)\) with \(z_{i}\in[n\gamma,\infty)\) and \(u_{i}\in[0,n]\) such that \(u_{1}<u_{2}<\cdots<u_{k}\). Recall the definition of \(M_{n}^{(i)}(\zeta)\) in (3.12) with \(I_{i}=[u_{i-1},u_{i})\) for \(i\in[k]\) and \(I_{k+1}=[u_{k},n]\). We will abuse the notation a bit and omit the index \(n\) in the subscripts or superscripts of \(l_{j}^{(i)},\xi_{j}^{(i),m},\xi_{j}^{(i)}\) defined below when there is no ambiguity. 
Now define \[l_{1}^{(i)} =V_{1}^{(i)}(u_{i}-u_{i-1}); \tag{3.18}\] \[l_{j}^{(i)} =V_{j}^{(i)}(u_{i}-u_{i-1}-l_{1}^{(i)}-l_{2}^{(i)}-\cdots-l_{j-1} ^{(i)})\qquad\forall j\geq 2 \tag{3.19}\] where \((V_{j}^{(i)})_{j\geq 1}\) is an iid sequence of \(\text{Unif}(0,1)\) RVs. Next, conditionally on \((l_{j}^{(i)})_{j\geq 1}\), one can sample \(\xi_{j}^{(i),m},\xi_{j}^{(i)}\) as \[\big{(}\xi_{j}^{(i)},\xi_{j}^{(i),1},\xi_{j}^{(i),2},\xi_{j}^{(i),3},\cdots \big{)}\overset{d}{=}\Big{(}\Xi_{n}(l_{j}^{(i)}),\tilde{\Xi}_{n}^{1}(l_{j}^{ (i)}),\tilde{\Xi}_{n}^{2}(l_{j}^{(i)}),\tilde{\Xi}_{n}^{3}(l_{j}^{(i)}),\cdots \Big{)}. \tag{3.20}\] The coupling in (2.6) then implies \[\begin{split}&\big{(}\Xi_{n}(u_{i})-\Xi_{n}(u_{i-1}),M_{n}^{(i)} (\zeta),\tilde{\Xi}_{n}^{1}(u_{i})-\tilde{\Xi}_{n}^{1}(u_{i-1}),\tilde{M}_{n} ^{(i),1}(\zeta),\tilde{\Xi}_{n}^{2}(u_{i})-\tilde{\Xi}_{n}^{2}(u_{i-1}),\tilde {M}_{n}^{(i),2}(\zeta),\cdots\big{)}\\ &\overset{d}{=}\Big{(}\sum_{j\geq 1}\xi_{j}^{(i)},\sum_{j \geq 1}(\xi_{j}^{(i)})^{+},\sum_{j\geq 1}\xi_{j}^{(i),1},\sum_{j\geq 1}(\xi_{j}^{(i ),1})^{+},\sum_{j\geq 1}\xi_{j}^{(i),2},\sum_{j\geq 1}(\xi_{j}^{(i),2})^{+}, \cdots\Big{)}.\end{split} \tag{3.21}\] Lastly, by terminating the summation after \(m+\lceil\log_{2}(n^{d})\rceil\) steps, we get \[\hat{M}_{n}^{(i),m}(\zeta)=\sum_{j=1}^{m+\lceil\log_{2}(n^{d})\rceil}(\xi_{j} ^{(i),m})^{+} \tag{3.22}\] which can be simulated exactly and approximates \(\sum_{j\geq 1}(\xi_{j}^{(i),m})^{+}\overset{d}{=}\sup_{t\in I_{i}}\tilde{\Xi}_{n }^{m}(t)-\tilde{\Xi}_{n}^{m}(u_{i-1})\) as \(m\to\infty\). Intuitively speaking, the extra \(\lceil\log_{2}(n^{d})\rceil\) term is introduced to guarantee the accuracy of the SBA as \(n\to\infty\) without significantly increasing the computational cost. Here \(d>0\) is another parameter of the algorithm. In summary, we define the random function \[\hat{Y}_{n}^{m}(\zeta)=\max_{i\in[k+1]}\mathbb{1}\Big{(}\sum_{q=1}^{i-1}\sum_{ j\geq 1}\xi_{j}^{(q),m}+\sum_{q=1}^{i-1}z_{q}+\hat{M}_{n}^{(i),m}(\zeta)\geq na\Big{)}; \tag{3.23}\] here \(\sum_{q=1}^{i-1}z_{q}=\zeta(u_{i-1})\) due to \(\zeta(t)=\sum_{i=1}^{k}z_{i}\mathbb{1}_{[u_{i},n]}(t)\), and \(\sum_{q=1}^{i-1}\sum_{j\geq 1}\xi_{j}^{(q),m}\stackrel{{ d}}{{=}}\tilde{\Xi}_{n}^{m}(u_{i-1})\) due to the coupling in (3.21). For \(Z_{n}\) in (3.8), we plug in \(\hat{Y}_{n}^{m}(J_{n})\). At first glance, one may get the impression that the simulation of \(\hat{Y}_{n}^{m}\) is still computationally challenging due to the existence of infinite sequences. For example, in (3.23) we have \(\sum_{q=1}^{i-1}\sum_{j\geq 1}\xi_{j}^{(q),m}\), and the definition of \(\tilde{\Xi}_{n}^{m}\) in (3.17) involves infinitely many iid Brownian motions. Fortunately, the a.s. finite truncation index \(\tau\) in \(Z_{n}\) guarantees that, when simulating \(Z_{n}\), once \(\tau\) is decided there is no need to simulate \(\hat{Y}_{n}^{m}\) for any \(m>\tau\). As a result, in (3.17) one can always combine \(\sum_{j\geq\tau+1}\sqrt{\bar{\sigma}^{2}(\kappa_{n,j-1})-\bar{\sigma}^{2}(\kappa_ {n,j})}\cdot W^{(j)}(t)\) into a single Brownian motion term. Similarly, to simulate \(\hat{Y}_{n}^{m}\) for all \(m\leq\tau\), when generating \(\hat{M}_{n}^{(i),m}\) in (3.22) we only need \(\xi_{j}^{(i),m}\) for all \(j\leq\tau+\lceil\log_{2}(n^{d})\rceil\). Therefore, for the infinite summation \(\sum_{q=1}^{i-1}\sum_{j\geq 1}\xi_{j}^{(q),m}\) in (3.23), it is safe to combine the tail \(\sum_{j>\tau+\lceil\log_{2}(n^{d})\rceil}\xi_{j}^{(q),m}\) into the increment over the remaining stick and generate it in one shot instead of individually sampling all the pieces \(\xi_{j}^{(q),m}\). 
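To make the ARA sampling primitive concrete, the following Python sketch draws one increment over a stick of length \(l\) for the specific one-sided Levy measure \(\nu(dx)=x^{-1-\alpha}dx\) on \((0,1)\), with zero drift and \(\sigma=0\): jumps of size at least a cutoff `kappa` are generated exactly as a compensated compound Poisson sum via inverse transform, while the jumps below `kappa` are replaced by a centered Gaussian with variance \(\bar{\sigma}^{2}(\kappa)\cdot l\), in the spirit of (3.17). The concrete measure, the single cutoff level in place of the cascade \(\kappa_{n,m}\), and all names are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
ALPHA, C_NU = 0.8, 1.0     # nu(dx) = C_NU * x^(-1 - ALPHA) dx on (0, 1)

def nu_tail(x):
    """nu[x, 1) for 0 < x < 1."""
    return C_NU / ALPHA * (x ** -ALPHA - 1.0)

def ara_increment(l, kappa):
    """One sample of the ARA increment over a stick of length l: jumps in
    [kappa, 1) are generated exactly (and compensated), while the jumps in
    (0, kappa) are replaced by a Gaussian of variance sigma_bar^2(kappa) * l."""
    lam = nu_tail(kappa)
    n_jumps = rng.poisson(l * lam)
    u = rng.uniform(size=n_jumps)
    # inverse transform on [kappa, 1): solve nu[x, 1) = u * lam for x
    jumps = (kappa ** -ALPHA - u * lam * ALPHA / C_NU) ** (-1.0 / ALPHA)
    comp = l * C_NU * (1.0 - kappa ** (1.0 - ALPHA)) / (1.0 - ALPHA)  # l * int_kappa^1 x nu(dx), ALPHA != 1
    var_small = C_NU * kappa ** (2.0 - ALPHA) / (2.0 - ALPHA)         # sigma_bar^2(kappa)
    return jumps.sum() - comp + np.sqrt(l * var_small) * rng.normal()

# The increment is a centered martingale, so the sample mean should be near 0:
print(np.mean([ara_increment(1.0, 0.01) for _ in range(10_000)]))
```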
In short, the estimator \(Z_{n}\) (and hence \(L_{n}\)) can be generated using finite computational resources; the steps are detailed in Algorithm 2. As shown in the next two results, with appropriate parameterization, the algorithm proposed above fulfills conditions (3.9) and (3.10), thus achieving strong efficiency as guaranteed by Proposition 3.1. We defer the proofs to Section 6.2. **Proposition 3.2**.: _Let \(\beta_{+}\in(\beta,2)\) where \(\beta<2\) is the Blumenthal-Getoor index (see Assumption 1). Given any \(\kappa\in(0,1)\) close enough to \(0\) and any \(r,d>0\) large enough such that_ \[\kappa^{2-\beta_{+}}<\frac{1}{2},\qquad d\geq 2,\qquad 2(r-\beta_{+})\geq 2,\] _there exists \(C_{0}>0\) such that for all \(\rho_{0}\in(0,1)\) close enough to 1 and all \(\gamma\in(0,b)\),_ \[\mathbf{P}\Big{(}Y_{n}^{*}(J_{n})\neq\hat{Y}_{n}^{m}(J_{n})\ \Big{|}\ \mathcal{D}(\bar{J}_{n})=k\Big{)}\leq C_{0}\rho_{0}^{m}\cdot(k+1) \qquad\forall n\geq 1,m,k\geq 0.\] **Proposition 3.3**.: _Let \(\mu>0\). Let \(\beta_{+}\in(\beta,2)\) where \(\beta<2\) is the Blumenthal-Getoor index (see Assumption 1). Given any \(\kappa\in(0,1)\) and any \(r,d>0\) large enough such that_ \[2(r-\beta_{+})+1>\mu,\qquad\frac{d+1}{2}>\mu,\] _there exists \(C_{0}<\infty\) such that for all \(\rho_{0}\in(0,1)\) close enough to 1 and all \(\gamma\in(0,b)\),_ \[\mathbf{P}\Big{(}Y_{n}^{*}(J_{n})\neq\hat{Y}_{n}^{m}(J_{n}),\ \bar{X}_{n}\notin A^{\Delta}\ \Big{|}\ \mathcal{D}(\bar{J}_{n})=k\Big{)}\leq\frac{C_{0}\rho_{0}^{m}}{\Delta^{2}n^{ \mu}}\quad\forall n\geq 1,m\geq 0,\Delta\in(0,1),k=0,1,\cdots,l^{*}-1\] _where \(A^{\Delta}=\big{\{}\xi\in\mathbb{D}:\sup_{t\in[0,1]}\xi(t)\geq a-\Delta\big{\}}\)._ ### Sampling from \(\mathbf{P}(\ \cdot\ \big{|}B_{n}^{\gamma})\) Now, we revisit the sampling of \(J_{n}\) from \(\mathbf{P}(\ \cdot\ \big{|}B_{n}^{\gamma})\), which is at the core of the implementation of the importance sampling distribution (with defensive mixture) \(\mathbf{Q}_{n}\) in (3.4). First, recall that under \(\mathbf{P}\), the process \(J_{n}\) is a compound Poisson process with generating triplet \((0,0,\nu|_{[n\gamma,\infty)})\). More precisely, let \(\widetilde{N}_{n}(\cdot)\) be a Poisson process with rate \(\nu[n\gamma,\infty)\), and let \((S_{i})_{i\geq 1}\) be the arrival times of jumps in \(\widetilde{N}_{n}(\cdot)\). Let \((W_{i})_{i\geq 1}\) be a sequence of iid random variables from the distribution \(\nu_{n}^{\text{normalized}}\), which is defined as \[\nu_{n}^{\text{normalized}}(\cdot)=\frac{\nu_{n}(\cdot)}{\nu[n\gamma,\infty)},\qquad\nu_{n}(\cdot)=\nu\big{(}\cdot\cap[n\gamma,\infty)\big{)}.\] Under the law \(\mathbf{P}\), we have (for all \(t\geq 0\)) \[J_{n}(t)=\sum_{i=1}^{\widetilde{N}_{n}(t)}W_{i}=\sum_{i\geq 1}W_{i}\mathbb{1}_{ [S_{i},\infty)}(t).\] Furthermore, for each \(k\geq 0\), conditioned on \(\{\widetilde{N}_{n}(n)=k\}\), the law of \(S_{1},\cdots,S_{k}\) is that of the order statistics of \(k\) iid samples from \(\operatorname{Unif}(0,n)\), and the \(W_{i}\) remain independent of the \(S_{i}\) with their law unaltered. Therefore, to sample \(J_{n}\) from \(\mathbf{P}(\ \cdot\ \big{|}B_{n}^{\gamma})\), it suffices to first sample \(k\) from \(\operatorname{Poisson}(n\cdot\nu_{n}[n\gamma,\infty))\) conditioned on \(k\geq l^{*}\), and then independently generate \(S_{1},\cdots,S_{k}\) and \(W_{1},\cdots,W_{k}\) under the law of \(\mathbf{P}(\cdot|\{\widetilde{N}_{n}(n)=k\})\). 
It is worth mentioning that the sampling of \(W_{i}\) (i.e., under the law \(\nu_{n}^{\text{normalized}}\)) can be addressed with the help of the inverse of the measure. Define \(Q_{n}^{\leftarrow}(y)\triangleq\ \inf\{s>0:\nu_{n}[s,\infty)<y\}\) as the inverse of \(\nu_{n}\) and observe that \[y\leq\nu_{n}[s,\infty)\qquad\Longleftrightarrow\qquad Q_{n}^{\leftarrow}(y) \geq s.\] More importantly, for \(U\sim\text{Unif}(0,\nu_{n}[n\gamma,\infty))\), we have \(Q_{n}^{\leftarrow}(U)\sim\nu_{n}^{\text{normalized}}\). See Algorithm 1 for the detailed steps. ``` 0: \(n\in\mathbb{N},l^{*}\in\mathbb{N},\gamma>0\), the Levy measure \(\nu\). 1: Sample \(k\) from a Poisson distribution with rate \(n\cdot\nu_{n}[n\gamma,\infty)\) conditioned on \(k\geq l^{*}\) 2: Simulate \(\Gamma_{1},\cdots,\Gamma_{k}\stackrel{{ iid}}{{\sim}}Unif(0,\nu_{n}[n \gamma,\infty))\) 3: Simulate \(U_{1},\cdots,U_{k}\stackrel{{ iid}}{{\sim}}Unif(0,n)\) 4: Return \(J_{n}=\sum_{i=1}^{k}Q_{n}^{\leftarrow}(\Gamma_{i})\mathbb{1}_{[U_{i},n]}\) ``` **Algorithm 1** Simulation of \(J_{n}\) from \(\mathbf{P}(\ \cdot\ |B_{n}^{\gamma})\) ### Strong Efficiency and Computational Complexity ``` 0: \(w\in(0,1),\gamma\in(0,b),r>0,d>0,\kappa\in(0,1),\rho\in(0,1)\) as parameters of the algorithm; \(a,b>0\) as characterization of the set \(A\); \((c_{X},\sigma,\nu)\) is the generating triplet of \(X\); \(\bar{\sigma}^{2}(\cdot)\) is defined in (3.16). 1: Set \(t_{n}=\lceil\log_{2}(n^{d})\rceil\) and \(\kappa_{n,m}=\kappa^{m}/n^{r}\) for any \(n\geq 1,m\geq 0\). 2: Sample \(U\sim Unif(0,1)\)\(\triangleright\) Sample \(J_{n}\) from \(\mathbf{Q}_{n}\) 3: if \(U<w\) then 4: Sample \(J_{n}=\sum_{i=1}^{k}z_{i}\mathbb{1}_{[u_{i},n]}\) as a compound Poisson process over \([0,n]\) with jump intensity measure \(\nu|_{[n\gamma,\infty)}\) 5: else 6: Sample \(J_{n}=\sum_{i=1}^{k}z_{i}\mathbb{1}_{[u_{i},n]}\) using Algorithm 1 7: Let \(u_{0}=0,u_{k+1}=n\). 
8: Sample \(\tau\sim Geom(\rho)\)\(\triangleright\) Decide the truncation index \(\tau\) 9: for \(i=1,\cdots,k+1\) do \(\triangleright\) Stick-breaking procedure 10: Sample \(V_{1}^{(i)}\sim Unif(0,1)\), and let \(l_{1}^{(i)}=V_{1}^{(i)}(u_{i}-u_{i-1})\) 11: for \(j=2,3,\cdots,t_{n}+\tau\) do 12: Sample \(V_{j}^{(i)}\sim Unif(0,1)\), and let \(l_{j}^{(i)}=V_{j}^{(i)}(u_{i}-u_{i-1}-l_{1}^{(i)}-l_{2}^{(i)}-\cdots-l_{j-1}^{ (i)})\) 13: Set \(l_{t_{n}+\tau+1}^{(i)}=u_{i}-u_{i-1}-l_{1}^{(i)}-l_{2}^{(i)}-\cdots-l_{t_{n}+\tau}^{(i)}\) 14: for \(i=1,\cdots,k+1\) do \(\triangleright\) Decide \(\xi_{m}^{(i),j}\), the increment on stick \(l_{j}^{(i)}\) under ARA at level \(m\) 15: for \(j=1,2,\cdots,t_{n}+\tau+1\) do 16: Sample \(B^{(i),j}\sim N(0,\sigma^{2}\cdot l_{j}^{(i)})\) 17: Sample \(J_{-1}^{(i),j}\sim F(l_{j}^{(i)},\nu|_{(-\infty,n\gamma)\setminus(-\kappa_{n,0},\kappa_{n,0})})\) 18: for \(m=0,1,\cdots,\tau-1\) do 19: Sample \(J_{m}^{(i),j}\sim F(l_{j}^{(i)},\nu|_{(-\kappa_{n,m},\kappa_{n,m})\setminus(- \kappa_{n,m+1},\kappa_{n,m+1})})\) 20: Sample \(W_{m}^{(i),j}\sim N\big{(}0,(\bar{\sigma}^{2}(\kappa_{n,m})-\bar{\sigma}^{2}(\kappa_{n,m+1}))\cdot l_{j}^{(i)}\big{)}\) 21: Sample \(W_{\tau}^{(i),j}\sim N(0,\bar{\sigma}^{2}(\kappa_{n,\tau})\cdot l_{j}^{(i)})\) 22: for \(m=0,\cdots,\tau\) do 23: Let \(\xi_{m}^{(i),j}=c_{X}\cdot l_{j}^{(i)}+B^{(i),j}+J_{-1}^{(i),j}+\sum_{q=0}^{m-1}J_{q}^{(i),j }+\sum_{q=m}^{\tau}W_{q}^{(i),j}\) 24: for \(m=0,1,\cdots,\tau\) do \(\triangleright\) Evaluate \(\hat{Y}_{n}^{m}\) 25: for \(i=1,2,\cdots,k+1\) do 26: Let \(\hat{M}_{n}^{(i)}=\sum_{q=1}^{i-1}\sum_{j=1}^{t_{n}+\tau+1}\xi_{m}^{(q),j}+\sum_{j= 1}^{t_{n}+\tau}(\xi_{m}^{(i),j})^{+}\) 27: Let \(\hat{Y}_{n}^{m}=\mathbb{1}\big{\{}\max_{i=1,\cdots,k+1}[\hat{M}_{n}^{(i)}+J_{n}(u_{i-1})] \geq na\big{\}}\) 28: Let \(Z_{n}=\hat{Y}_{n}^{0}+\sum_{m=1}^{\tau}(\hat{Y}_{n}^{m}-\hat{Y}_{n}^{m-1})/ \rho^{m-1}\)\(\triangleright\) Return the estimator \(L_{n}\) 29: if \(\max_{i=1,\cdots,k}z_{i}\geq nb\) then 30: Return \(L_{n}=0\). 31: else 32: Let \(\lambda_{n}=n\nu[n\gamma,\infty),\ p_{n}=1-\sum_{l=0}^{l^{*}-1}e^{-\lambda_{n} }\frac{\lambda_{n}^{l}}{l!},\ I_{n}=\mathbb{1}\big{\{}\bar{J}_{n}\in B^{\gamma}\big{\}}\) 33: Return \(L_{n}=Z_{n}\big{/}\big{(}w+\frac{1-w}{p_{n}}I_{n}\big{)}\) ``` **Algorithm 2** Strongly Efficient Estimation of \(\mathbf{P}(A_{n})\) All the discussions above lead to the importance sampling algorithm in Algorithm 2. Here, for any \(t>0\) and any Borel measure \(\mu\) with \(\mu(\mathbb{R})<\infty\), we use \(F(t,\mu)\) to denote the law of the compound Poisson process at time \(t\) with jump intensity measure \(\mu\); that is, the arrival rate of jumps is \(\mu(\mathbb{R})\) and the law of the jump sizes is \(\mu(\cdot)/\mu(\mathbb{R})\). Below is a summary of all the parameters in the algorithm. * \(\gamma\in(0,b)\): the threshold in \(B^{\gamma}\) defined in (3.3) * \(w\in(0,1)\): the weight of the defensive mixture in \(\mathbf{Q}_{n}\); see (3.4) * \(\rho\in(0,1)\): the geometric rate of decay for \(\mathbf{P}(\tau\geq m)\) in (3.8) * \(\kappa\in(0,1),\ r>0\): determining the truncation thresholds \(\kappa_{n,m}\); see (3.14) * \(d>0\): determining the \(\lceil\log_{2}(n^{d})\rceil\) term in (3.22) As shown in Theorem 3.4 below, Algorithm 2 is unbiased and strongly efficient when properly parametrized. 
**Theorem 3.4**.: _There exist \(\bar{\kappa}\in(0,1),\ \bar{r}>0\) and \(\bar{d}>0\) such that the following claim is valid: given any \(\kappa\in(0,\bar{\kappa}],\ r\geq\bar{r},\ d\geq\bar{d}\) and any \(w\in(0,1)\), there is \(\bar{\rho}\in(0,1)\) such that Algorithm 2 is unbiased and strongly efficient under any \(\gamma\in(0,b)\) small enough and any \(\rho\in(\bar{\rho},1)\)._ Proof.: Fix some \(\mu>2l^{*}(\alpha-1)\). Fix some \(\beta_{+}\in(\beta,2)\) where \(\beta<2\) is the Blumenthal-Getoor index (see Assumption 1). Set \(\bar{\kappa}\in(0,1)\) small enough and \(\bar{r},\bar{d}\) large enough such that \[\bar{\kappa}^{2-\beta_{+}}<\frac{1}{2},\qquad 2(\bar{r}-\beta_{+})>2\vee(\mu-1),\qquad\bar{d}\geq 2,\qquad\bar{d}>(2\mu-1). \tag{3.24}\] Now pick \(\kappa\in(0,\bar{\kappa}],\ r\geq\bar{r},\ d\geq\bar{d}\) and \(w\in(0,1)\). Thanks to Propositions 3.2 and 3.3, we can find some \(C_{0}>0,\ \bar{m}\geq 0\) and \(\rho_{0}\in(0,1)\) such that conditions (3.9) and (3.10) hold for \(\hat{Y}_{n}^{m}=\hat{Y}_{n}^{m}(J_{n})\) (for its definition, see (3.23)) under the parameters specified above. It then follows immediately from Proposition 3.1 that the estimator \(L_{n}\) (and hence Algorithm 2) is unbiased and strongly efficient under any \(\gamma\in(0,b)\) small enough and any \(\rho\in(\rho_{0},1)\). To conclude the proof, one only needs to set \(\bar{\rho}=\rho_{0}\). We stress that the exact range of the parameters for strong efficiency to hold in Theorem 3.4 is readily available. First, note that the choice of \(w\in(0,1)\) does not affect the unbiasedness and strong efficiency of the algorithm. Next, \(\bar{\kappa},\bar{r}\) and \(\bar{d}\) can be determined using (3.24). After picking \(\kappa\in(0,\bar{\kappa}],\ r\geq\bar{r},\ d\geq\bar{d}\) and any \(w\in(0,1)\), it is shown in the proof of Propositions 3.2 and 3.3 (specifically, in (6.24)-(6.36) and (6.43)) that the value of \(\bar{\rho}\) can be determined as follows. First pick \(\alpha_{3}\in(0,\frac{\theta}{\lambda}),\ \alpha_{4}\in(0,\frac{\theta}{2\lambda})\) where \(\lambda>0\) and \(\theta\in(0,1]\) are the constants in Assumption 2. Next, pick \(\alpha_{2}\in(0,\frac{\alpha_{3}}{2}\wedge 1)\) and \(\alpha_{1}\in(0,\frac{\theta\alpha_{2}}{\lambda})\). Also, fix \(\delta\in(1/\sqrt{2},1)\). Since we require \(\alpha_{2}\) to be strictly less than \(1\), it is easy to see the existence of some integer \(\bar{m}\) such that \(\delta^{m\alpha_{2}}-\delta^{m}\geq\frac{\delta^{m\alpha_{2}}}{2}\) and \(\delta^{m\alpha_{2}}<a\ \forall m\geq\bar{m}\). Here \(a>0\) is the parameter in the set \(A\); see Assumption 3. This allows us to pick \(\rho_{1}\in(0,1)\) such that \[\rho_{1}>\max\bigg{\{}\delta^{\alpha_{1}},\ \frac{\kappa^{2-\beta_{+}}}{\delta^{2} },\ \frac{1}{\sqrt{2}\delta},\ \delta^{\theta\alpha_{2}-\lambda\alpha_{1}},\ \delta^{\theta-\lambda\alpha_{3}},\delta^{-\alpha_{2}+\frac{ \alpha_{3}}{2}}\bigg{\}}.\] Now we can fix some \[\bar{\rho}\in\bigg{(}\max\Big{\{}\frac{1}{\sqrt{2}},\ \kappa^{2-\beta_{+}},\ \rho_{1} \Big{\}},1\bigg{)}\] and pick a larger \(\bar{m}\) if necessary to make sure that \(m^{2}\rho_{1}^{m}\leq\bar{\rho}^{m}\ \forall m\geq\bar{m}\). Lastly, the choice of \(\gamma\) is detailed in the proof of Proposition 3.1. Specifically, after picking \(\rho\in(\bar{\rho},1)\), one can find some \(q>1\) such that \(\bar{\rho}^{1/q}<\rho\). Let \(p>1\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). 
For the algorithm to be unbiased and strongly efficient, one can pick any \(\gamma>0\) small enough such that \[\frac{a-\Delta-(l^{*}-1)b}{\gamma}+l^{*}-1>2l^{*}p.\] ## 4 Lipschitz Continuity of the Distribution of \(X^{<z}(t)\) This section investigates sufficient conditions for Assumption 2. In particular, we focus on the case where \(\theta=1\), so that Assumption 2 can be viewed as a strengthened and uniform version of Lipschitz continuity of the law of \(X^{<z}(t)\). To demonstrate the key technique in our approach, we start with a simple case where the Levy process \(X(t)\) has generating triplet \((c_{X},\sigma,\nu)\) with \(\sigma>0\). This allows us to decompose the process into \[X^{<z}(t)\stackrel{{ d}}{{=}}\sigma B(t)+Y^{<z}(t)\qquad\forall t,z>0\] where \(B\) is a standard Brownian motion, \(Y^{<z}\) is a Levy process with generating triplet \((c_{X},0,\nu|_{(-\infty,z)})\), and the two processes are independent. Now for any \(x\in\mathbb{R},\ t>0\) and \(\delta\in(0,1)\), \[\begin{split}\mathbf{P}(X^{<z}(t)\in[x,x+\delta])& =\int_{\mathbb{R}}\mathbf{P}(\sigma B(t)\in[x-y,x-y+\delta])\cdot \mathbf{P}(Y^{<z}(t)\in dy)\\ &=\int_{\mathbb{R}}\mathbf{P}\bigg{(}\frac{B(t)}{\sqrt{t}}\in \Big{[}\frac{x-y}{\sigma\sqrt{t}},\frac{x-y+\delta}{\sigma\sqrt{t}}\Big{]} \bigg{)}\cdot\mathbf{P}(Y^{<z}(t)\in dy)\\ &\leq\frac{1}{\sigma\sqrt{2\pi}}\cdot\frac{\delta}{\sqrt{t}}\end{split} \tag{4.1}\] since the density of a standard Normal distribution is bounded by \(1/\sqrt{2\pi}\). This immediately verifies Assumption 2 under \(\theta=1,\ \lambda=1/2,\ C=\frac{1}{\sigma\sqrt{2\pi}}\), and any \(z_{0}>0\). Now we consider the case where \(\sigma=0\). For any two (Borel) measures \(\mu_{1},\mu_{2}\) on \(\mathbb{R}\), their difference \(\mu=\mu_{1}-\mu_{2}\) can be considered as a signed measure. For any Borel set \(A\subset\mathbb{R}\), we say that \(\mu_{1}\) **majorizes \(\mu_{2}\) when restricted on** \(A\) (denoted as \((\mu_{1}-\mu_{2})|_{A}\geq 0\)) if \(\mu(B\cap A)=\mu_{1}(B\cap A)-\mu_{2}(B\cap A)\geq 0\) for any Borel set \(B\subset\mathbb{R}\); in other words, \(\mu|_{A}=(\mu_{1}-\mu_{2})|_{A}\) is a **positive** measure. When \(A=\mathbb{R}\) we simply write \(\mu_{1}-\mu_{2}\geq 0\). Returning to the Levy measure \(\nu\) of the process \(X(t)\), if we can find some \(z_{0}>0\) and some (positive) Borel measure \(\mu\) such that \((\nu-\mu)|_{(-z_{0},z_{0})}\geq 0\), then by splitting the underlying Poisson random measure for the jumps in the process \(X(t)\), for any \(z\geq z_{0}\) we have the decomposition \[X^{<z}(t)\stackrel{{ d}}{{=}}Y(t)+\widetilde{X}^{z_{0},z}(t)\] where \(\mu_{z_{0}}=\mu|_{(-z_{0},z_{0})}\), \(Y\) is a Levy process with generating triplet \((0,0,\mu_{z_{0}})\), \(\widetilde{X}^{z_{0},z}\) is a Levy process with generating triplet \((c_{X},0,\nu|_{(-\infty,z)}-\mu_{z_{0}})\), and the two processes are independent. Furthermore, if we can show that Assumption 2 holds for the process \(Y(t)\) with generating triplet \((0,0,\mu_{z_{0}})\), then analogously to the arguments in (4.1) we can pass the continuity conditions in Assumption 2 from \(Y(t)\) to \(X^{<z}(t)\) through the convolution structure of \(X^{<z}(t)\stackrel{{ d}}{{=}}Y(t)+\widetilde{X}^{z_{0},z}(t)\). The success of this technique hinges on the identification of the majorized measure \(\mu\), which naturally depends on the properties of the majorizing measure \(\nu\). Recall the definition of regularly varying functions in Definition 1. 
The following result provides sufficient conditions for Assumption 2 when \(\nu[x,\infty)\) or \(\nu(-\infty,-x]\) is regularly varying as \(x\downarrow 0\). **Proposition 4.1**.: _Let \(\alpha\in(0,2),z_{0}>0\), and \(\epsilon\in(0,(2-\alpha)/2)\). Let \(\nu\) be a Borel measure supported on \((0,\infty)\) such that \(\nu[x,\infty)\) is regularly varying as \(x\to 0\) with index \(\alpha+2\epsilon\). There exists a constant \(C<\infty\) such that for the Levy process \(\{Y(t):t\geq 0\}\) with generating triplet \((0,0,\nu|_{(-z_{0},z_{0})})\),_ \[\big{\|}f_{Y(t)}\big{\|}_{\infty}\leq\frac{C}{t^{1/\alpha}\wedge 1}\qquad\forall t >0,\] _where the \(L_{\infty}\) norm of a density \(f\) is defined as \(\|f\|_{\infty}=\sup_{x\in\mathbb{R}}|f(x)|\), and \(f_{Y(t)}\) is the density function of the distribution of \(Y(t)\)._ We give the detailed proof in Section 6.3. The main idea is that when \(\nu\) is regularly varying with index \(\alpha+2\epsilon\) at the origin, it eventually dominates the measure \(\nu_{\alpha}\) with \(\nu_{\alpha}[x,\infty)=1/x^{\alpha}\) as \(x\downarrow 0\). This allows us to factor out a polynomial component from \(\nu\) around the origin and associate \(Y(t)\) with an \(\alpha\)-stable Levy process. By comparing the characteristic functions of the two processes, we are able to transfer the continuity of the \(\alpha\)-stable Levy process to the process \(Y(t)\). Equipped with Proposition 4.1, we establish the following set of sufficient conditions for Assumption 2. **Theorem 4.2**.: _Let \((c_{X},\sigma,\nu)\) be the generating triplet of the Levy process \(X\)._ 1. _If_ \(\sigma>0\)_, then Assumption_ 2 _holds for_ \(\theta=1,\ \lambda=1/2\)_, and any_ \(z_{0}>0\)_._ 2. _If there exist a Borel measure_ \(\mu\)_, some_ \(z_{0}>0\)_, and some_ \(\alpha^{\prime}\in(0,2)\) _such that_ \((\nu-\mu)|_{(0,z_{0})}\geq 0\) _(resp.,_ \((\nu-\mu)|_{(-z_{0},0)}\geq 0\)_) and_ \(\mu[x,\infty)\) _(resp.,_ \(\mu(-\infty,-x]\)_) is regularly varying with index_ \(\alpha^{\prime}\) _as_ \(x\downarrow 0\)_, then Assumption_ 2 _holds with_ \(\theta=1\) _and_ \(\lambda=1/\alpha\) _for any_ \(\alpha\in(0,\alpha^{\prime})\)_._ Proof.: Part (i) follows immediately from the calculations in (4.1). To prove part (ii), we fix some \(\alpha\in(0,\alpha^{\prime})\), and without loss of generality assume that \((\nu-\mu)|_{(0,z_{0})}\geq 0\) and \(\mu[x,\infty)\) is regularly varying with index \(\alpha^{\prime}\) as \(x\downarrow 0\). This allows us to fix some \(\epsilon=(\alpha^{\prime}-\alpha)/2\in\big{(}0,(2-\alpha)/2\big{)}\). For any \(z\geq z_{0}\), consider the decomposition \[X^{<z}(t)\stackrel{{ d}}{{=}}Y(t)+\widetilde{X}^{z_{0},z}(t)\] where \(\mu_{z_{0}}=\mu|_{(-z_{0},z_{0})}\), \(Y\) is a Levy process with generating triplet \((0,0,\mu_{z_{0}})\), \(\widetilde{X}^{z_{0},z}\) is a Levy process with generating triplet \((c_{X},\sigma,\nu|_{(-\infty,z)}-\mu_{z_{0}})\), and the two processes are independent. First of all, applying Proposition 4.1, we can find \(C>0\) such that \(\big{\|}f_{Y(t)}\big{\|}_{\infty}\leq\frac{C}{t^{1/\alpha}\wedge 1}\ \forall t>0.\) Next, due to the independence between \(Y\) and \(\widetilde{X}^{z_{0},z}\), it then holds for any \(x\in\mathbb{R},\delta\geq 0\), and \(t>0\) that \[\mathbf{P}(X^{<z}(t)\in[x,x+\delta]) =\int_{\mathbb{R}}\mathbf{P}(Y(t)\in[x-y,x-y+\delta])\cdot \mathbf{P}(\widetilde{X}^{z_{0},z}(t)\in dy)\] \[\leq\frac{C}{t^{1/\alpha}\wedge 1}\cdot\delta\qquad\text{ due to }\left\|f_{Y(t)}\right\|_{\infty}\leq\frac{C}{t^{1/\alpha}\wedge 1}.\] This concludes the proof. 
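As a quick numerical sanity check of part (i), one can estimate the left-hand side of the bound by Monte Carlo for a toy version of \(X^{<z}\) consisting of a Brownian part plus a bounded-jump compound Poisson component; the jump rate, the jump law, and all numerical values in the Python sketch below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Check of Theorem 4.2(i): P(X^{<z}(t) in [x, x+delta]) <= delta / (sigma*sqrt(2*pi*t)).
sigma, t, x, delta, n = 0.5, 0.25, 0.1, 0.05, 10**5
counts = rng.poisson(2.0 * t, size=n)                 # toy jump counts: rate-2 CPP
cpp = np.array([rng.uniform(-1.0, 1.0, size=k).sum() for k in counts])
xs = sigma * np.sqrt(t) * rng.normal(size=n) + cpp    # samples of the toy X^{<z}(t)
print(((xs >= x) & (xs <= x + delta)).mean())         # estimated interval mass
print(delta / (sigma * np.sqrt(2.0 * np.pi * t)))     # theoretical upper bound ~ 0.0798
```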
**Remark 2**.: _To understand why the conditions stated in Theorem 4.2 are considered mild, let us recall our underlying assumption that \(X\) exhibits infinite activity, implying either \(\sigma>0\) or \(\nu(\mathbb{R})=\infty\). Theorem 4.2 (i) deals with the scenario where \(\sigma>0\). On the other hand, when \(\sigma=0\) we must have either \(\lim_{\epsilon\downarrow 0}\nu[\epsilon,\infty)=\infty\) or \(\lim_{\epsilon\downarrow 0}\nu(-\infty,-\epsilon]=\infty\). To satisfy the conditions in part (ii) of Theorem 4.2, it is only required that \(\nu[\epsilon,\infty)\) (or \(\nu(-\infty,-\epsilon]\)) approaches infinity, as \(\epsilon\downarrow 0\), at a rate that matches or exceeds some polynomial function._

Aside from the regularly varying structure exploited in Theorem 4.2 (ii), we discuss another type of self-similarity structure in the Levy measure \(\nu\) that can be utilized to verify Assumption 2. Given \(\alpha\in(0,2)\) and \(b>1\), we say that the process \(X\) is \(\alpha\)**-semi-stable with span** \(b\) if its Levy measure satisfies \[\nu=b^{-\alpha}T_{b}\nu \tag{4.2}\] where the transformation \(T_{r}\) (for \(r>0\)) of a Borel measure \(\rho\) on \(\mathbb{R}\) is given by \((T_{r}\rho)(B)=\rho(r^{-1}B)\). As a special case, note that \(X\) is \(\alpha\)**-stable** if \[\nu(dx)=\begin{cases}c_{1}\frac{dx}{x^{1+\alpha}}&\forall x>0\\ c_{2}\frac{dx}{|x|^{1+\alpha}}&\forall x<0\end{cases}\] where \(c_{1},c_{2}\geq 0,c_{1}+c_{2}>0.\) See Theorem 14.3 in [49] for details.

To see how \(\alpha\)-semi-stability of \(\nu\) differs from the concept of regular variation, consider the following examples. Given a Borel measure \(\nu\), suppose that \(f(x)=\nu\big{(}(-\infty,-x]\cup[x,\infty)\big{)}\) is regularly varying at \(0\) with index \(\alpha>0\). Even if \(\nu\) satisfies the scaling-invariant property in (4.2) for some \(b>1\), we can fix a sequence of points \(\{x_{n}=\frac{1}{b^{n}}\}_{n\geq 1}\) and assign an extra mass of \(\ln n\) onto \(\nu\) at each point \(x_{n}\). In doing so, we break the scaling-invariant property but still maintain the regular variation of \(\nu\). On the other hand, to show that semi-stable processes may not have regularly varying Levy measure (when restricted on some neighborhood of the origin), let us consider a simple example. For some \(b>1\) and \(\alpha\in(0,2)\), define the following measure: \[\nu(\{b^{-n}\})=b^{n\alpha}\ \ \forall n\geq 0;\qquad\nu\big{(}\mathbb{R}\backslash\{b^{-n}:\ n\in\mathbb{N}\}\big{)}=0.\] Clearly, \(\nu\) coincides, on a neighborhood of the origin, with the restriction of the Levy measure of some \(\alpha\)-semi-stable process. Now define the function \(f(x)=\nu[x,\infty)\) on \((0,\infty)\). For any \(t>0\), \[\frac{f(tx)}{f(x)}=\frac{\sum_{n=0}^{\lfloor\log_{b}(1/tx)\rfloor}b^{n\alpha}}{\sum_{n=0}^{\lfloor\log_{b}(1/x)\rfloor}b^{n\alpha}}=\frac{b^{\alpha(\lfloor\log_{b}(1/tx)\rfloor+1)}-1}{b^{\alpha(\lfloor\log_{b}(1/x)\rfloor+1)}-1}.\] As \(x\to 0\), we see that \(f(tx)/f(x)\) will be very close to \[b^{\alpha(\lfloor\log_{b}(1/tx)\rfloor-\lfloor\log_{b}(1/x)\rfloor)}.\] As long as \(t\neq b^{k}\) for every \(k\in\mathbb{Z}\), the difference \(\lfloor\log_{b}(1/tx)\rfloor-\lfloor\log_{b}(1/x)\rfloor\) keeps alternating between \(\lfloor\log_{b}(1/t)\rfloor\) and \(\lfloor\log_{b}(1/t)\rfloor+1\) as \(x\downarrow 0\), so asymptotically the value of \(f(tx)/f(x)\) repeatedly cycles through the two distinct values \[\{b^{\alpha\lfloor\log_{b}(1/t)\rfloor},\ b^{\alpha\lfloor\log_{b}(1/t)\rfloor+\alpha}\},\] thus implying that \(f(tx)/f(x)\) does not have a limit as \(x\) approaches \(0\). This confirms that \(\nu[x,\infty)\) is not regularly varying as \(x\downarrow 0\).
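The oscillation in this example is easy to see numerically. The following sketch (the choices \(b=2\), \(\alpha=1\), \(t=0.3\) are illustrative; any \(t\) that is not a power of \(b\) works) evaluates the exact ratio \(f(tx)/f(x)\) along a sequence \(x\downarrow 0\) and shows that it keeps alternating between two cluster points instead of converging:

```python
import numpy as np

b, alpha, t = 2.0, 1.0, 0.3   # illustrative; t must not be a power of b

def f(x):
    # f(x) = nu[x, infinity) for nu({b^-n}) = b^{n*alpha}, n >= 0:
    # geometric sum (b^{alpha(N+1)} - 1) / (b^alpha - 1), N = floor(log_b(1/x)).
    N = int(np.floor(np.log(1.0 / x) / np.log(b)))
    return (b ** (alpha * (N + 1)) - 1.0) / (b ** alpha - 1.0)

for k in range(1, 13):
    x = 1e-2 * 0.9 ** (3 * k)   # a sequence tending to 0, sweeping fractional parts
    print(f"x = {x:.3e}:  f(tx)/f(x) = {f(t * x) / f(x):.4f}")
```

For these parameters the printed ratios hover around \(2\) and \(4\) (that is, \(b^{\alpha\lfloor\log_{b}(1/t)\rfloor}\) and \(b^{\alpha\lfloor\log_{b}(1/t)\rfloor+\alpha}\)), confirming the absence of a limit.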
The usefulness of the concept of semi-stability is demonstrated in Proposition 4.3 below. The proof is detailed in Section 6.3, and the key idea is to argue the similarity between the density of the non-trivial \(\alpha\)-semi-stable process \(Y(t)\) and that of the truncated process \(Y^{(-z_{0},z_{0})}(t)\) by comparing their characteristic functions.

**Proposition 4.3**.: _Let \(\alpha\in(0,2)\) and \(\nu\) be the Levy measure of a non-trivial \(\alpha\)-semi-stable process \(Y(t)\) of span \(b>1\). Let \(N\in\mathbb{Z}\). There exists some \(C\in(0,\infty)\) such that, under \(z_{0}=b^{N}\),_ \[\Big{\|}f_{Y^{(-z_{0},z_{0})}(t)}\Big{\|}_{\infty}\leq\frac{C}{t^{1/\alpha}\wedge 1}\qquad\forall t>0\] _where \(\{Y^{(-z_{0},z_{0})}(t):\ t>0\}\) is the Levy process with generating triplet \((0,0,\nu|_{(-z_{0},z_{0})})\) and \(f_{Y^{(-z_{0},z_{0})}(t)}\) is the density of the distribution of \(Y^{(-z_{0},z_{0})}(t)\)._

Lastly, by applying Proposition 4.3, we obtain another set of sufficient conditions for Assumption 2.

**Theorem 4.4**.: _Let \((c_{X},\sigma,\nu)\) be the generating triplet of the Levy process \(X\). If there exist some Borel measure \(\mu\) and some \(z_{0}>0,\alpha\in(0,2)\) such that \((\nu-\mu)|_{(-z_{0},z_{0})}\geq 0\) and \(\mu\) is the Levy measure of some \(\alpha\)-semi-stable process, then Assumption 2 holds with \(\theta=1\) and \(\lambda=1/\alpha\)._

Proof.: Let \(b>1\) be the span of the \(\alpha\)-semi-stable process. Fix some \(N\in\mathbb{Z}\) such that \(b^{N}\leq z_{0}\). For any \(z\geq z_{0}\), consider the decomposition \[X^{<z}(t)\stackrel{{ d}}{{=}}Y_{*}(t)+\widetilde{X}^{z}(t)\] where \(\mu_{*}=\mu|_{(-b^{N},b^{N})}\), \(Y_{*}\) is a Levy process with generating triplet \((0,0,\mu_{*})\), \(\widetilde{X}^{z}\) is a Levy process with generating triplet \((c_{X},\sigma,\nu-\mu_{*})\), and the two processes are independent. First of all, applying Proposition 4.3, we can find \(C>0\) such that \(\left\|f_{Y_{*}(t)}\right\|_{\infty}\leq\frac{C}{t^{1/\alpha}\wedge 1}\,\forall t>0.\) Next, due to the independence between \(Y_{*}\) and \(\widetilde{X}^{z}\), it then holds for any \(x\in\mathbb{R},\delta\geq 0\), and \(t>0\) that \[\mathbf{P}(X^{<z}(t)\in[x,x+\delta])=\int_{\mathbb{R}}\mathbf{P}(Y_{*}(t)\in[x-y,x-y+\delta])\cdot\mathbf{P}(\widetilde{X}^{z}(t)\in dy)\leq\frac{C}{t^{1/\alpha}\wedge 1}\cdot\delta\qquad\text{due to }\left\|f_{Y_{*}(t)}\right\|_{\infty}\leq\frac{C}{t^{1/\alpha}\wedge 1}.\] This concludes the proof.

## 5 Numerical Experiments

In this section, we apply the importance sampling strategy outlined in Algorithm 2 to conduct numerical experiments, showcasing (i) the performance of the importance sampling estimator under varying scaling factors and tail distributions, and (ii) the efficiency of the algorithm when compared to crude Monte Carlo methods. We consider a Levy process given by \(X(t)=B(t)+\sum_{i=1}^{N(t)}W_{i}\), where \(B(t)\) is a standard Brownian motion, \(N\) is a Poisson process with arrival rate \(\lambda=0.1\), and \(\{W_{i}\}_{i\geq 1}\) is a sequence of iid samples from a Pareto distribution with tail index \(\alpha>1\), i.e., \[\mathbf{P}(W_{1}>x)=\frac{1}{\max\{x,1\}^{\alpha}}.\] For each \(n\geq 1\), we define the scaled process \(X_{n}(t)=\frac{X(nt)}{n}\). The objective is to estimate the probability of the events \(A_{n}=\{X_{n}\in A\}\), where \[A=\left\{\xi\in\mathbb{D}:\sup_{t\in[0,1]}\xi(t)-\xi(t-)<b,\ \sup_{t\in[0,1]}\xi(t)\geq a\right\}\] with \(a=2\) and \(b=1.15\).
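For reference, the crude Monte Carlo baseline used below amounts to nothing more than simulating the Brownian-plus-compound-Poisson path on \([0,n]\) and checking membership in \(A\). The following is a minimal sketch of that baseline, not of Algorithm 2 itself; the Brownian supremum is approximated on a grid, and the modest \(n\) and sample size are chosen only so the event is still visible to plain simulation (all numerical choices are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup following the experiment: rate lam, Pareto(alpha) jumps,
# a = 2, b = 1.15.
lam, alpha, a, b = 0.1, 1.45, 2.0, 1.15
n, n_trials, n_grid = 100, 200_000, 400

grid = np.linspace(0.0, n, n_grid + 1)
hits = 0
for _ in range(n_trials):
    k = rng.poisson(lam * n)                    # number of jumps on [0, n]
    w = rng.uniform(size=k) ** (-1.0 / alpha)   # Pareto: P(W > x) = x^{-alpha}, x >= 1
    if k and w.max() >= b * n:
        continue                                # a jump >= b*n already excludes A
    s = rng.uniform(0.0, n, size=k)             # jump times, uniform given k
    B = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n_grid)) * np.sqrt(n / n_grid)))
    J = (s[None, :] <= grid[:, None]).astype(float) @ w
    if (B + J).max() >= a * n:                  # grid approximation of the supremum
        hits += 1

print(f"crude Monte Carlo estimate of P(A_n): {hits / n_trials:.2e}")
```

Since \(\mathbf{P}(A_{n})\) decays roughly like \((n\nu[n,\infty))^{l^{*}}\), the number of hits collapses quickly as \(n\) grows, which is precisely the regime where the importance sampling scheme below is needed.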
To evaluate the performance of the importance sampling estimator under different scaling factors and tail distributions, we conduct experiments using \(\alpha\in\{1.45,1.6,1.75\}\) and \(n\in\{1000,2000,\cdots,10000\}\). The efficiency of an estimator is quantified by its _relative error_, namely the ratio of the standard deviation estimated from all samples to the estimated mean. For the parameters in Algorithm 2, we set \(\gamma=0.2\), \(w=0.05\), \(\rho=0.95\), and \(\bar{d}=2\). Note that \(l^{*}=\lceil a/b\rceil=2\) in this case. We generate 500,000 independent samples for each combination of \(\alpha\in\{1.45,1.6,1.75\}\) and \(n\in\{1000,2000,\cdots,10000\}\). We also compare the efficiency of the importance sampling estimator against crude Monte Carlo methods. For the number of simulation trials in crude Monte Carlo estimation, we ensure that at least \(64/\hat{p}_{\alpha,n}\) samples are generated, where \(\hat{p}_{\alpha,n}\) is the estimated value of \(\mathbf{P}(A_{n})\) using Algorithm 2.

The results are summarized in Table 5.1 and Figure 5.1. In Table 5.1, it is evident that for fixed \(\alpha\) the relative error of the importance sampling estimator remains essentially constant regardless of the magnitude of \(n\), demonstrating the strong efficiency established in Theorem 3.4. Figure 5.1 presents a comparison of relative errors between the two methods, highlighting that our importance sampling strategy outperforms crude Monte Carlo methods by several orders of magnitude. In summary, when the importance sampling algorithm is appropriately parameterized, the efficiency of our importance sampling estimator becomes increasingly evident against the vanilla Monte Carlo approach as the scaling factor \(n\) (and thus the rarity of the event \(A_{n}\)) grows.

\begin{table}
\begin{tabular}{c c c c c c} \hline n & 2000 & 4000 & 6000 & 8000 & 10000 \\ \hline \multirow{2}{*}{\(\alpha=1.45\)} & \(3.53\times 10^{-6}\) & \(1.85\times 10^{-6}\) & \(1.28\times 10^{-6}\) & \(9.76\times 10^{-7}\) & \(7.96\times 10^{-7}\) \\ & 12.84 & 13.02 & 13.06 & 13.16 & 13.19 \\ \hline \multirow{2}{*}{\(\alpha=1.6\)} & \(3.34\times 10^{-7}\) & \(1.45\times 10^{-7}\) & \(8.84\times 10^{-8}\) & \(5.89\times 10^{-8}\) & \(4.60\times 10^{-8}\) \\ & 17.13 & 17.16 & 17.26 & 17.80 & 17.63 \\ \hline \multirow{2}{*}{\(\alpha=1.75\)} & \(3.46\times 10^{-8}\) & \(1.14\times 10^{-8}\) & \(6.21\times 10^{-9}\) & \(4.17\times 10^{-9}\) & \(2.92\times 10^{-9}\) \\ & 21.74 & 22.50 & 22.53 & 22.16 & 22.40 \\ \hline \end{tabular}
\end{table}
Table 5.1: Rare-event simulation results using Algorithm 2. First row of each block: estimated value of \(\mathbf{P}(A_{n})\); second row: the relative error.

Figure 5.1: Relative errors of the proposed importance-sampling estimator (dashed lines) and crude Monte-Carlo methods (solid lines).

## 6 Proofs

**Lemma 6.1**.: _There exist \(c_{A},C_{A}\in(0,\infty)\) such that_ \[c_{A}\leq\liminf_{n\to\infty}\frac{\mathbf{P}(\bar{X}_{n}\in A)}{(n\nu[n,\infty))^{l^{*}}}\leq\limsup_{n\to\infty}\frac{\mathbf{P}(\bar{X}_{n}\in A)}{(n\nu[n,\infty))^{l^{*}}}\leq C_{A}.\]

Proof.: In this proof we focus on the two-sided case in Assumption 1; the analysis of the one-sided case is almost identical, with the only major difference being that we apply Result 1 (i.e., the one-sided version of the large deviations of \(\bar{X}_{n}\)) instead of Result 2 (i.e., the two-sided version). We claim that

1. \(\mathcal{J}(A)=l^{*},\mathcal{K}(A)=0\) is the only feasible solution to \(\big{(}\mathcal{J}(A),\mathcal{K}(A)\big{)}\in\operatorname*{argmin}_{(j,k)\in\mathbb{N}^{2},\ \mathbb{D}_{j,k}\cap A\neq\emptyset}j(\alpha-1)+k(\alpha^{\prime}-1)\); here \(\mathbb{D}_{j,k}\) is the set containing all step functions in \(\mathbb{D}\) vanishing at the origin that have exactly \(j\) upward jumps and \(k\) downward jumps;
2. \(\mathbf{C}_{l^{*},0}(A^{\circ})>0\);
3. the set \(A\) is bounded away from \(\mathbb{D}_{<l^{*},0}=\bigcup_{j\leq l^{*}-1}\mathbb{D}_{j,0}\); recall that \(\mathbb{D}_{<j,k}\triangleq\bigcup_{(l,m)\in\mathbb{L}_{<j,k}}\mathbb{D}_{l,m}\) where \(\mathbb{L}_{<j,k}\triangleq\big{\{}(l,m)\in\mathbb{N}^{2}\backslash\{(j,k)\}:\ l(\alpha-1)+m(\alpha^{\prime}-1)\leq j(\alpha-1)+k(\alpha^{\prime}-1)\big{\}}\).
Then one can apply Result 2 to conclude the proof. In particular, since the set \(A\) is bounded away from \(\mathbb{D}_{<l^{*},0}\) and \(\big{(}\mathcal{J}(A),\mathcal{K}(A)\big{)}=(l^{*},0)\), we obtain \[0<\underbrace{\mathbf{C}_{l^{*},0}(A^{\circ})}_{\triangleq c_{A}}\leq\liminf_{n\to\infty}\frac{\mathbf{P}(\bar{X}_{n}\in A)}{(n\nu[n,\infty))^{l^{*}}}\leq\limsup_{n\to\infty}\frac{\mathbf{P}(\bar{X}_{n}\in A)}{(n\nu[n,\infty))^{l^{*}}}\leq\underbrace{\mathbf{C}_{l^{*},0}(A^{-})}_{\triangleq C_{A}}<\infty.\] Now it only remains to prove claims (i), (ii), and (iii).

We start from claim (i). By definition of \(\mathbb{D}_{j,k}\), given any \(\xi\in\mathbb{D}_{j,k}\) there exist \((u_{i})_{i=1}^{j}\in(0,\infty)^{j},(t_{i})_{i=1}^{j}\in(0,1)^{j}\) and \((v_{i})_{i=1}^{k}\in(0,\infty)^{k},(s_{i})_{i=1}^{k}\in(0,1)^{k}\) such that \[\xi(t)=\sum_{i=1}^{j}u_{i}\mathbb{1}_{[t_{i},1]}(t)-\sum_{i=1}^{k}v_{i}\mathbb{1}_{[s_{i},1]}(t). \tag{6.1}\] We first show that \(\mathbb{D}_{l^{*},0}\cap A\neq\emptyset\). Recall the definition of \(A=\{\xi\in\mathbb{D}:\sup_{t\in[0,1]}\xi(t)\geq a;\sup_{t\in(0,1]}\xi(t)-\xi(t-)<b\}\) in (3.1). Also, recall that \(l^{*}=\lceil a/b\rceil\) and \(a/b\notin\mathbb{Z}\); see Assumption 3. Therefore, \((l^{*}-1)b<a<l^{*}b\). This allows us to pick some \(\epsilon>0\) small enough such that \(l^{*}(b-\epsilon)>a\). By setting \(j=l^{*},k=0\), and \(u_{i}=b-\epsilon\) for all \(i\in[l^{*}]\) in (6.1), we can see that \(\sup_{t\in[0,1]}\xi(t)=\sum_{i=1}^{l^{*}}u_{i}=l^{*}(b-\epsilon)>a\). This establishes \(\mathbb{D}_{l^{*},0}\cap A\neq\emptyset\). To conclude the proof of claim (i), it thus suffices to show that \(j\geq l^{*}\) is a necessary condition for \(\mathbb{D}_{j,k}\cap A\neq\emptyset\). Indeed, given this necessary condition, one can see that \[\big{\{}(j,k)\in\mathbb{N}^{2}:\ \mathbb{D}_{j,k}\cap A\neq\emptyset\big{\}}\subseteq\big{\{}(j,k)\in\mathbb{N}^{2}:\ j\geq l^{*},\ k\geq 0\big{\}},\] and, together with \(\mathbb{D}_{l^{*},0}\cap A\neq\emptyset\) and \(\alpha,\alpha^{\prime}>1\) (see Assumption 1), this yields \(\underset{(j,k)\in\mathbb{N}^{2},\ \mathbb{D}_{j,k}\cap A\neq\emptyset}{\operatorname{argmin}}j(\alpha-1)+k(\alpha^{\prime}-1)=\{(l^{*},0)\}\), which is claim (i). To show the necessity of \(j\geq l^{*}\), we proceed with a proof by contradiction.
Suppose there is some \(j\leq l^{*}-1,k\geq 0\) and some \(\xi\in\mathbb{D}_{j,k}\cap A\). Then by definition of the set \(A\), in the representation (6.1) of \(\xi\) we must have \(u_{i}<b\) for all \(i\in[j]\). As a result, \[\sup_{t\in[0,1]}\xi(t)\leq\sum_{i=1}^{j}u_{i}<jb\leq(l^{*}-1)b<a.\] This contradicts \(\xi\in A\) and allows us to conclude the proof of claim (i).

To prove claim (ii), recall that we can pick some \(\epsilon>0\) small enough such that \(l^{*}(b-\epsilon)>a\). Therefore, given any \(u_{i}\in(b-\epsilon,b)\) and \(0<t_{1}<t_{2}<\cdots<t_{l^{*}}<1\), for the step function \(\xi(t)=\sum_{i=1}^{l^{*}}u_{i}\mathbb{1}_{[t_{i},1]}(t)\) we must have \(\sup_{t\in[0,1]}\xi(t)\geq\xi(1)>a\), thus implying \(\xi\in A\). This observation leads to (for the definition of \(\mathbf{C}_{j,k}\), see (2.3)) \[\mathbf{C}_{l^{*},0}(A^{\circ})\geq\nu_{\alpha}^{l^{*}}\Big{(}(b-\epsilon,b)^{l^{*}}\Big{)}=\frac{1}{l^{*}!}\bigg{[}\frac{1}{(b-\epsilon)^{\alpha}}-\frac{1}{b^{\alpha}}\bigg{]}^{l^{*}}>0.\] This establishes claim (ii).

Moving on to claim (iii), recall the bound \(a>(l^{*}-1)b\). It then holds for any \(\epsilon>0\) small enough that \(a-\epsilon>(l^{*}-1)(b+\epsilon)\). Fix such an \(\epsilon>0\). It suffices to show that \[\boldsymbol{d}(\xi,\xi^{\prime})\geq\epsilon\qquad\forall j=0,1,\cdots,l^{*}-1,\ \xi\in\mathbb{D}_{j,0},\ \xi^{\prime}\in A. \tag{6.2}\] Here \(\boldsymbol{d}\) is the Skorokhod \(J_{1}\) metric; see (2.1) for the definition. Now fix some \(j=0,1,\cdots,l^{*}-1,\ \xi\in\mathbb{D}_{j,0}\), and \(\xi^{\prime}\in A\). Also, fix some \(\lambda\) that is an increasing homeomorphism from \([0,1]\) to itself. Let \(\widetilde{\xi}(t)=\xi\big{(}\lambda(t)\big{)}\). It suffices to show that \(\sup_{t\in[0,1]}|\xi^{\prime}(t)-\widetilde{\xi}(t)|\geq\epsilon\). First, using the representation in (6.1), we know there is some \((u_{i})_{i\leq j}\in(0,\infty)^{j}\) and some \(0<\widetilde{t}_{1}<\dots<\widetilde{t}_{j}<1\) such that \[\widetilde{\xi}(t)=\sum_{i=1}^{j}u_{i}\mathbb{1}_{[\widetilde{t}_{i},1]}(t).\] Next, we consider two different cases. If \(u_{i}>b+\epsilon\) for some \(i\in[j]\), then by definition of \(A\) (in particular, \(\xi^{\prime}(t)-\xi^{\prime}(t-)<b\) for all \(t\in[0,1]\) and \(\xi^{\prime}\in A\)), we have \(\sup_{t\in[0,1]}|\xi^{\prime}(t)-\widetilde{\xi}(t)|\geq\epsilon\). On the other hand, if \(u_{i}\in(0,b+\epsilon]\) for all \(i\in[j]\), then \[\sup_{t\in[0,1]}\widetilde{\xi}(t)=\sum_{i=1}^{j}u_{i}\leq j(b+\epsilon)\leq(l^{*}-1)(b+\epsilon)<a-\epsilon.\] Due to \(\sup_{t\in[0,1]}\xi^{\prime}(t)\geq a\) for \(\xi^{\prime}\in A\), we again obtain \(\sup_{t\in[0,1]}|\xi^{\prime}(t)-\widetilde{\xi}(t)|\geq\epsilon\). This establishes (6.2), and hence claim (iii).

**Lemma 6.2**.: _Let \(p>1\). Let \(\Delta>0\) be such that \(a-\Delta>(l^{*}-1)b\) and \(\big{[}a-\Delta-(l^{*}-1)b\big{]}/\gamma\notin\mathbb{Z}\).
Let_ \[J_{\gamma}=\Big{\lceil}\frac{a-\Delta-(l^{*}-1)b}{\gamma}\Big{\rceil}.\] _If \((J_{\gamma}+l^{*}-1)/p>2l^{*},\) then_ \[\mathbf{P}\big{(}\bar{X}_{n}\in A^{\Delta},\ \mathcal{D}(\bar{J}_{n})\leq l^{*}-1\big{)}=\boldsymbol{o}\Big{(}\big{(}n\nu[n,\infty)\big{)}^{2pl^{*}}\Big{)}\] _where \(A^{\Delta}=\big{\{}\xi\in\mathbb{D}:\sup_{t\in[0,1]}\xi(t)\geq a-\Delta;\ \sup_{t\in(0,1]}\xi(t)-\xi(t-)<b\big{\}}\) and the function \(\mathcal{D}(\xi)\) counts the number of discontinuities of \(\xi\) for any \(\xi\in\mathbb{D}\)._

Proof.: Similar to the proof of Lemma 6.1, we focus on the two-sided case in Assumption 1; the analysis of the one-sided case is almost identical, with the only major difference being that we apply Result 1 (i.e., the one-sided version of the large deviations of \(\bar{X}_{n}\)) instead of Result 2 (i.e., the two-sided version). First, recall that \(\gamma\in(0,b)\) is the parameter in the definition of the set \(B^{\gamma}\) in (3.3). Also, recall that \(J_{n}(t)=\sum_{s\leq t}\Delta X(s)\mathbb{1}\big{(}\Delta X(s)\geq n\gamma\big{)}\) and \(\bar{J}_{n}=\{\frac{1}{n}J_{n}(nt):\ t\in[0,1]\},\bar{X}_{n}=\{\frac{1}{n}X(nt):\ t\in[0,1]\}\). Therefore, \(\mathbf{P}\big{(}\bar{X}_{n}\in A^{\Delta},\ \mathcal{D}(\bar{J}_{n})\leq l^{*}-1\big{)}=\mathbf{P}\big{(}\bar{X}_{n}\in E(\Delta)\big{)}\) where \[E(\Delta)=\Big{\{}\xi\in\mathbb{D}:\ \sup_{t\in[0,1]}\xi(t)\geq a-\Delta;\ \sup_{t\in(0,1]}\xi(t)-\xi(t-)<b,\] \[\#\big{\{}t\in[0,1]:\ \xi(t)-\xi(t-)\geq\gamma\big{\}}\leq l^{*}-1\Big{\}}.\] Furthermore, we claim that

1. \(\mathcal{J}\big{(}E(\Delta)\big{)}=l^{*}-1+J_{\gamma},\mathcal{K}\big{(}E(\Delta)\big{)}=0\) is the only feasible solution to \[\Big{(}\mathcal{J}\big{(}E(\Delta)\big{)},\mathcal{K}\big{(}E(\Delta)\big{)}\Big{)}\in\operatorname*{argmin}_{(j,k)\in\mathbb{N}^{2},\ \mathbb{D}_{j,k}\cap E(\Delta)\neq\emptyset}j(\alpha-1)+k(\alpha^{\prime}-1)\] where \(\mathbb{D}_{j,k}\) is the set containing all step functions in \(\mathbb{D}\) vanishing at the origin that have exactly \(j\) upward jumps and \(k\) downward jumps;
2. The set \(E(\Delta)\) is bounded away from \(\mathbb{D}_{<l^{*}-1+J_{\gamma},0}=\bigcup_{j\leq l^{*}+J_{\gamma}-2}\mathbb{D}_{j,0}\); recall that \(\mathbb{D}_{<j,k}\triangleq\bigcup_{(l,m)\in\mathbb{L}_{<j,k}}\mathbb{D}_{l,m}\) where \(\mathbb{L}_{<j,k}\triangleq\big{\{}(l,m)\in\mathbb{N}^{2}\backslash\{(j,k)\}:\ l(\alpha-1)+m(\alpha^{\prime}-1)\leq j(\alpha-1)+k(\alpha^{\prime}-1)\big{\}}\).

These two claims allow us to apply Result 2 and immediately get \[\mathbf{P}\big{(}\bar{X}_{n}\in A^{\Delta},\ \mathcal{D}(\bar{J}_{n})\leq l^{*}-1\big{)}=\mathbf{P}\big{(}\bar{X}_{n}\in E(\Delta)\big{)}=\boldsymbol{O}\Big{(}\big{(}n\nu[n,\infty)\big{)}^{l^{*}-1+J_{\gamma}}\Big{)}\] as \(n\to\infty\). To conclude, recall that we have imposed the condition \((J_{\gamma}+l^{*}-1)/p>2l^{*}\), which implies \(l^{*}-1+J_{\gamma}>2l^{*}p\) and hence \(\big{(}n\nu[n,\infty)\big{)}^{l^{*}-1+J_{\gamma}}=\boldsymbol{o}\Big{(}\big{(}n\nu[n,\infty)\big{)}^{2l^{*}p}\Big{)}\), due to \(n\nu[n,\infty)\in\mathcal{RV}_{-(\alpha-1)}(n)\) as \(n\to\infty\) with \(\alpha>1\). Now it only remains to prove claims (i) and (ii). In order to prove claim (i), we first show that \((l^{*}-1+J_{\gamma},0)\) belongs to \(\{(j,k)\in\mathbb{N}^{2}:\ \mathbb{D}_{j,k}\cap E(\Delta)\neq\emptyset\}\).
By definition of \(\mathbb{D}_{j,k}\), given any \(\xi\in\mathbb{D}_{j,k}\) there exist \((u_{i})_{i=1}^{j}\in(0,\infty)^{j},(t_{i})_{i=1}^{j}\in(0,1)^{j}\) and \((v_{i})_{i=1}^{k}\in(0,\infty)^{k},(s_{i})_{i=1}^{k}\in(0,1)^{k}\) such that \[\xi(t)=\sum_{i=1}^{j}u_{i}\mathbb{1}_{[t_{i},1]}(t)-\sum_{i=1}^{k}v_{i}\mathbb{1}_{[s_{i},1]}(t). \tag{6.3}\] Since \(\big{[}a-\Delta-(l^{*}-1)b\big{]}/\gamma\notin\mathbb{Z}\), we must have \(J_{\gamma}\cdot\gamma>a-\Delta-(l^{*}-1)b\). It then holds for all \(\epsilon>0\) small enough that \(a-\Delta<J_{\gamma}(\gamma-\epsilon)+(l^{*}-1)(b-\epsilon)\). By setting \(j=l^{*}-1+J_{\gamma}\), \(k=0\), \(u_{i}=b-\epsilon\) for all \(i\in[l^{*}-1]\), and \(u_{i}=\gamma-\epsilon\) for all \(i=l^{*},l^{*}+1,\cdots,l^{*}-1+J_{\gamma}\) in (6.3), we get \(\xi\in\mathbb{D}_{l^{*}-1+J_{\gamma},0}\cap E(\Delta)\). This proves that \((l^{*}-1+J_{\gamma},0)\) belongs to \(\{(j,k)\in\mathbb{N}^{2}:\ \mathbb{D}_{j,k}\cap E(\Delta)\neq\emptyset\}\). Furthermore, if we can show that \(j\geq l^{*}-1+J_{\gamma}\) is a necessary condition for \(\mathbb{D}_{j,k}\cap E(\Delta)\neq\emptyset\), then we get \[\{(j,k)\in\mathbb{N}^{2}:\ \mathbb{D}_{j,k}\cap E(\Delta)\neq\emptyset\}\subseteq\{(j,k)\in\mathbb{N}^{2}:\ j\geq l^{*}-1+J_{\gamma},\ k\geq 0\}.\] Because \(\alpha,\alpha^{\prime}>1\) (see Assumption 1), we can then conclude that \(\underset{(j,k)\in\mathbb{N}^{2},\ \mathbb{D}_{j,k}\cap E(\Delta)\neq\emptyset}{\operatorname{argmin}}j(\alpha-1)+k(\alpha^{\prime}-1)=\{(l^{*}-1+J_{\gamma},0)\}.\) In other words, regarding claim (i) the only remaining task is to show that \(j\geq l^{*}-1+J_{\gamma}\) is a necessary condition for \(\mathbb{D}_{j,k}\cap E(\Delta)\neq\emptyset\). We proceed with a proof by contradiction. Suppose there is some \(\xi\in\mathbb{D}_{j,k}\cap E(\Delta)\) with \(j\leq l^{*}+J_{\gamma}-2\). Then by definition of \(E(\Delta)\), all elements of \(\{u_{i}:\ i\in[j]\}\) are bounded by \(b\), and at most \(l^{*}-1\) of them are larger than or equal to \(\gamma\). As a result, \[\sup_{t\in[0,1]}\xi(t)\leq\sum_{i=1}^{j}u_{i}\leq(l^{*}-1)b+(J_{\gamma}-1)\gamma.\] However, since \(\big{[}a-\Delta-(l^{*}-1)b\big{]}/\gamma\notin\mathbb{Z}\), we have \((J_{\gamma}-1)\gamma<a-\Delta-(l^{*}-1)b<J_{\gamma}\gamma\), which implies \(\sup_{t\in[0,1]}\xi(t)\leq(l^{*}-1)b+(J_{\gamma}-1)\gamma<a-\Delta\) and hence \(\xi\notin E(\Delta)\). Having obtained this contradiction, we conclude the proof of claim (i).

Moving on to claim (ii), recall the bound \((J_{\gamma}-1)\gamma<a-\Delta-(l^{*}-1)b\) we have just applied. It then holds for all \(\epsilon>0\) small enough that \[a-\Delta-\epsilon>(l^{*}-1)(b+\epsilon)+(J_{\gamma}-1)(\gamma+\epsilon).\] Fix such an \(\epsilon>0\). By establishing \[\boldsymbol{d}(\xi,\xi^{\prime})\geq\epsilon\qquad\forall j=0,1,\cdots,\ l^{*}+J_{\gamma}-2,\ \xi\in\mathbb{D}_{j,0},\ \xi^{\prime}\in E(\Delta) \tag{6.4}\] (here \(\boldsymbol{d}\) is the Skorokhod \(J_{1}\) metric; see (2.1) for the definition), we get \(\boldsymbol{d}\Big{(}\bigcup_{j\leq l^{*}+J_{\gamma}-2}\mathbb{D}_{j,0},E(\Delta)\Big{)}\geq\epsilon>0\) and conclude the proof of claim (ii). To prove (6.4), we fix some \(j=0,1,\cdots,\ l^{*}+J_{\gamma}-2,\ \xi\in\mathbb{D}_{j,0}\), and \(\xi^{\prime}\in E(\Delta)\).
First, it suffices to show that for any \(\lambda\) that is an increasing homeomorphism from \([0,1]\) to itself, we have \(\sup_{t\in[0,1]}\big{|}\xi^{\prime}(t)-\xi\big{(}\lambda(t)\big{)}\big{|}\geq\epsilon\). Specifically, we can fix some \(\lambda\) and set \(\widetilde{\xi}(t)=\xi\big{(}\lambda(t)\big{)}\). Using the representation in (6.3), we know there is some \((u_{i})_{i\leq j}\in(0,\infty)^{j}\) and some \(0<\widetilde{t}_{1}<\cdots<\widetilde{t}_{j}<1\) such that \[\widetilde{\xi}(t)=\sum_{i=1}^{j}u_{i}\mathbb{1}_{[\widetilde{t}_{i},1]}(t).\] We proceed by studying the three cases that exhaust all the possibilities for the jumps \(u_{i}\) in \(\widetilde{\xi}\). First, suppose there is some \(u_{i}\geq b+\epsilon\). Then from the definition of \(E(\Delta)\), we know that \(\xi^{\prime}(t)-\xi^{\prime}(t-)<b\) for all \(t\in[0,1]\), and hence \(\sup_{t\in[0,1]}|\xi^{\prime}(t)-\widetilde{\xi}(t)|\geq\epsilon\). Next, suppose that \(u_{i}\in(0,b+\epsilon)\) for all \(i\in[j]\), but there are at least \(l^{*}\) elements in \(\{u_{i}:\ i\in[j]\}\) that are larger than or equal to \(\gamma+\epsilon\). Then from the definition of \(E(\Delta)\) again, we know \(\sup_{t\in[0,1]}|\xi^{\prime}(t)-\widetilde{\xi}(t)|\geq\epsilon\). Lastly, suppose that \(u_{i}\in(0,b+\epsilon)\) for all \(i\in[j]\), and there are at most \(l^{*}-1\) elements in \(\{u_{i}:\ i\in[j]\}\) that are larger than or equal to \(\gamma+\epsilon\). Then observe that \[\sup_{t\in[0,1]}\widetilde{\xi}(t)\leq\sum_{i=1}^{j}u_{i}\leq(l^{*}-1)(b+\epsilon)+\big{[}j-(l^{*}-1)\big{]}(\gamma+\epsilon)\] \[\leq(l^{*}-1)(b+\epsilon)+(J_{\gamma}-1)(\gamma+\epsilon)\qquad\text{due to }j\leq l^{*}-2+J_{\gamma}\] \[<a-\Delta-\epsilon\qquad\text{due to our choice of }\epsilon>0.\] Again, from the definition of \(E(\Delta)\), we have \(\sup_{t\in[0,1]}|\xi^{\prime}(t)-\widetilde{\xi}(t)|\geq\epsilon\). In summary, we have shown that \(\sup_{t\in[0,1]}|\xi^{\prime}(t)-\xi\big{(}\lambda(t)\big{)}|\geq\epsilon\) for any \(j=0,1,\cdots,\ l^{*}+J_{\gamma}-2,\ \xi\in\mathbb{D}_{j,0},\ \xi^{\prime}\in E(\Delta)\) and any \(\lambda\) that is an increasing homeomorphism from \([0,1]\) to itself. This establishes (6.4) and completes the proof of claim (ii).

Now we are ready to prove Proposition 3.1.

Proof of Proposition 3.1.: We start by proving the unbiasedness of the importance sampling estimator \(L_{n}\). Note that \[L_{n}=Z_{n}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}=\sum_{m=0}^{\tau}\frac{\hat{Y}_{n}^{m}\mathbb{1}_{E_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}-\hat{Y}_{n}^{m-1}\mathbb{1}_{E_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}}{\mathbf{P}(\tau\geq m)}.\] Recall that \(\tau\sim\text{Geom}(\rho)\) under \(\mathbf{Q}_{n}\) (that is, the importance sampling distribution \(\mathbf{Q}_{n}\) does not alter the law of \(\tau\)) and \(\tau\) is independent of everything else. Furthermore, we claim that (for any \(n\geq 1\) and \(\rho\in(\rho_{0},1)\)) \[\sum_{m\geq 1}\mathbf{E}^{\mathbf{Q}_{n}}\bigg{|}\hat{Y}_{n}^{m-1}\mathbb{1}_{E_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}-Y_{n}^{*}\mathbb{1}_{E_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\bigg{|}^{2}\bigg{/}\mathbf{P}(\tau\geq m)<\infty. \tag{6.5}\] Then due to \(\mathbf{P}(\tau\geq m)=\rho^{m-1}\), we have \(\mathbf{E}^{\mathbf{Q}_{n}}\big{|}\hat{Y}_{n}^{m-1}\mathbb{1}_{E_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}-Y_{n}^{*}\mathbb{1}_{E_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\big{|}^{2}\to 0\) as \(m\to\infty\).
This \(\mathcal{L}_{2}\) convergence then implies the \(\mathcal{L}_{1}\) convergence \(\mathbf{E}^{\mathbf{Q}_{n}}\big{|}\hat{Y}_{n}^{m-1}\mathbb{1}_{E_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}-Y_{n}^{*}\mathbb{1}_{E_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\big{|}\to 0\) as \(m\to\infty\). Applying Result 3, we conclude the proof of the unbiasedness of \(L_{n}\) under \(\mathbf{Q}_{n}\) (for any \(n\geq 1\)). Returning to claim (6.5), observe that \[\mathbf{E}^{\mathbf{Q}_{n}}\bigg{|}\hat{Y}_{n}^{m-1}\mathbb{1}_{E_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}-Y_{n}^{*}\mathbb{1}_{E_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\bigg{|}^{2}=\mathbf{E}^{\mathbf{Q}_{n}}\bigg{[}|\hat{Y}_{n}^{m-1}-Y_{n}^{*}|^{2}\cdot\mathbb{1}_{E_{n}}\cdot\bigg{(}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\bigg{)}^{2}\bigg{]}\leq\mathbf{E}^{\mathbf{Q}_{n}}\bigg{[}|\hat{Y}_{n}^{m-1}-Y_{n}^{*}|^{2}\cdot\bigg{(}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\bigg{)}^{2}\bigg{]}\] \[=\mathbf{E}\bigg{[}|\hat{Y}_{n}^{m-1}-Y_{n}^{*}|^{2}\cdot\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\bigg{]}\leq\frac{1}{w}\mathbf{E}|\hat{Y}_{n}^{m-1}-Y_{n}^{*}|^{2}\qquad\text{due to }\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\leq\frac{1}{w};\text{ see (3.5)}.\] Therefore, given any \(n\geq 1\), it suffices to show that \(\mathbf{E}|\hat{Y}_{n}^{m}-Y_{n}^{*}|^{2}=\boldsymbol{o}(\rho^{m})\) as \(m\to\infty\). In particular, since \(\hat{Y}_{n}^{m}\) and \(Y_{n}^{*}\) only take values in \(\{0,1\}\), we have \(\mathbf{E}|\hat{Y}_{n}^{m}-Y_{n}^{*}|^{2}=\mathbf{P}(\hat{Y}_{n}^{m}\neq Y_{n}^{*}).\) Now observe that (for any \(m\geq\bar{m}\)) \[\mathbf{P}(\hat{Y}_{n}^{m}\neq Y_{n}^{*})=\sum_{k\geq 0}\mathbf{P}\Big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m}\ \Big{|}\ \mathcal{D}(\bar{J}_{n})=k\Big{)}\mathbf{P}\Big{(}\mathcal{D}(\bar{J}_{n})=k\Big{)}\] \[\leq\sum_{k\geq 0}C_{0}\rho_{0}^{m}\cdot(k+1)\cdot\mathbf{P}\Big{(}\mathcal{D}(\bar{J}_{n})=k\Big{)}\qquad\text{by the geometric convergence bound on }\mathbf{P}\big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m}\,\big{|}\,\mathcal{D}(\bar{J}_{n})=k\big{)}\] \[=C_{0}\rho_{0}^{m}\sum_{k\geq 0}(k+1)\cdot\exp\big{(}-n\nu[n\gamma,\infty)\big{)}\frac{\big{(}n\nu[n\gamma,\infty)\big{)}^{k}}{k!}\] \[=C_{0}\rho_{0}^{m}\cdot\mathbf{E}\Big{[}1+\text{Poisson}\big{(}n\nu[n\gamma,\infty)\big{)}\Big{]}=C_{0}\rho_{0}^{m}\cdot\big{(}1+n\nu[n\gamma,\infty)\big{)}.\] In particular, due to \(\nu[x,\infty)\in\mathcal{RV}_{-\alpha}(x)\) as \(x\to\infty\) with \(\alpha>1\), we have \(n\nu[n\gamma,\infty)\in\mathcal{RV}_{-(\alpha-1)}(n)\) as \(n\to\infty\), thus implying \(\lim_{n\to\infty}n\nu[n\gamma,\infty)=0\). As a result, by setting \(C_{0}^{*}=C_{0}\cdot\sup_{n\geq 1}\big{(}1+n\nu[n\gamma,\infty)\big{)}<\infty\), we get \(\mathbf{P}(\hat{Y}_{n}^{m}\neq Y_{n}^{*})\leq C_{0}^{*}\rho_{0}^{m}\) for any \(n\geq 1\) and \(m\geq\bar{m}\). In light of the choice \(\rho\in(\rho_{0},1)\), we get \(\mathbf{P}(\hat{Y}_{n}^{m}\neq Y_{n}^{*})=\boldsymbol{o}(\rho^{m})\) as \(m\to\infty\). This concludes the proof of claim (6.5), and hence the unbiasedness of \(L_{n}\).
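The debiasing device behind \(L_{n}\) — summing the increments \(\hat{Y}_{n}^{m}-\hat{Y}_{n}^{m-1}\) up to an independent geometric time \(\tau\), each weighted by \(1/\mathbf{P}(\tau\geq m)\) — is the standard randomized-truncation construction, and its unbiasedness is easy to verify on a toy sequence. In the minimal sketch below, the deterministic sequence \(Y(m)\to Y^{*}=2\) is artificial and stands in for the coupled estimates \(\hat{Y}_{n}^{m}\):

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.95  # P(tau >= m) = rho^{m-1} for m >= 1, matching the text

def Y(m):
    # Artificial approximation sequence converging geometrically to Y* = 2.
    return 2.0 - 0.5 ** m

def randomized_truncation_estimate():
    tau = rng.geometric(1.0 - rho)          # tau in {1, 2, ...}
    total = Y(0)                            # m = 0 term; P(tau >= 0) = 1
    for m in range(1, tau + 1):
        total += (Y(m) - Y(m - 1)) / rho ** (m - 1)
    return total

samples = [randomized_truncation_estimate() for _ in range(100_000)]
print(f"mean = {np.mean(samples):.4f} (target Y* = 2), std = {np.std(samples):.4f}")
```

The summability condition (6.5) is exactly what guarantees that this construction has finite variance once the increments \(\hat{Y}_{n}^{m}-\hat{Y}_{n}^{m-1}\) decay geometrically faster than \(1/\mathbf{P}(\tau\geq m)\) grows.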
The rest of the proof is devoted to establishing the strong efficiency of \(L_{n}\). Observe that \[\mathbf{E}^{\mathbf{Q}_{n}}[L_{n}^{2}]=\int Z_{n}^{2}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}d\mathbf{Q}_{n}=\int Z_{n}^{2}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}d\mathbf{P}=\int Z_{n}^{2}\mathbb{1}_{B^{\gamma}_{n}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}d\mathbf{P}+\int Z_{n}^{2}\mathbb{1}_{(B^{\gamma}_{n})^{c}}\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}d\mathbf{P}.\] Meanwhile, from (3.5) one can see that \(\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\leq\frac{1}{w}\) on the event \((B^{\gamma}_{n})^{c}\) and \(\frac{d\mathbf{P}}{d\mathbf{Q}_{n}}\leq\frac{\mathbf{P}(B^{\gamma}_{n})}{1-w}\) on the event \(B^{\gamma}_{n}\), which leads to \[\mathbf{E}^{\mathbf{Q}_{n}}[L_{n}^{2}]\leq\frac{\mathbf{P}(B^{\gamma}_{n})}{1-w}\mathbf{E}[Z_{n}^{2}\mathbb{1}_{B^{\gamma}_{n}}]+\frac{1}{w}\mathbf{E}[Z_{n}^{2}\mathbb{1}_{(B^{\gamma}_{n})^{c}}]. \tag{6.6}\] Now we claim the existence of some \(c_{A},C_{A},C_{B}\in(0,\infty)\) such that \[c_{A}\leq\liminf_{n\to\infty}\frac{\mathbf{P}(A_{n})}{(n\nu[n,\infty))^{l^{*}}}\leq\limsup_{n\to\infty}\frac{\mathbf{P}(A_{n})}{(n\nu[n,\infty))^{l^{*}}}\leq C_{A}, \tag{6.7}\] \[\mathbf{P}(B^{\gamma}_{n})\leq C_{B}\cdot(n\nu[n,\infty))^{l^{*}}\qquad\forall n\geq 1. \tag{6.8}\] Let \(Z_{n,1}\triangleq Z_{n}\mathbb{1}_{B^{\gamma}_{n}}\) and \(Z_{n,2}\triangleq Z_{n}\mathbb{1}_{(B^{\gamma}_{n})^{c}}\). Then it suffices to prove that for any \(\gamma\in(0,b)\) close enough to \(0\) and any \(\rho\in(\rho_{0},1)\), the following claims hold (as \(n\to\infty\)) \[\mathbf{E}[Z_{n,1}^{2}]=\boldsymbol{O}\big{(}(n\nu[n,\infty))^{l^{*}}\big{)}, \tag{6.9}\] \[\mathbf{E}[Z_{n,2}^{2}]=\boldsymbol{o}\big{(}(n\nu[n,\infty))^{2l^{*}}\big{)}. \tag{6.10}\] Indeed, using (6.8) and (6.9) we get \(\mathbf{P}(B^{\gamma}_{n})\mathbf{E}[Z_{n}^{2}\mathbb{1}_{B^{\gamma}_{n}}]=\boldsymbol{O}\big{(}(n\nu[n,\infty))^{l^{*}}\big{)}\cdot\boldsymbol{O}\big{(}(n\nu[n,\infty))^{l^{*}}\big{)}=\boldsymbol{O}\big{(}(n\nu[n,\infty))^{2l^{*}}\big{)}=\boldsymbol{O}\big{(}\mathbf{P}^{2}(A_{n})\big{)},\) where the last equality follows from (6.7). Similarly, from (6.7) and (6.10) we get \(\mathbf{E}[Z_{n}^{2}\mathbb{1}_{(B^{\gamma}_{n})^{c}}]=\boldsymbol{o}\big{(}(n\nu[n,\infty))^{2l^{*}}\big{)}=\boldsymbol{o}\big{(}\mathbf{P}^{2}(A_{n})\big{)}.\) Therefore, in (6.6) we have \(\mathbf{E}^{\mathbf{Q}_{n}}[L_{n}^{2}]=\boldsymbol{O}\big{(}\mathbf{P}^{2}(A_{n})\big{)}\), thus establishing the strong efficiency. It now remains to prove claims (6.7)–(6.10).

**Proof of Claim (6.7)**: It follows directly from Lemma 6.1.

**Proof of Claim (6.8)**: We will make use of the following preliminary result: for any \(c>0\) and \(k\in\mathbb{N}\), \[\mathbf{P}\Big{(}\text{Poisson}(c)\geq k\Big{)}=\sum_{j\geq k}\exp(-c)\frac{c^{j}}{j!}=c^{k}\sum_{j\geq k}\exp(-c)\frac{c^{j-k}}{j!}\leq c^{k}\sum_{j\geq k}\exp(-c)\frac{c^{j-k}}{(j-k)!}=c^{k}. \tag{6.11}\]
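The bound (6.11) is crude but holds for every \(c>0\); a direct numerical check with scipy's Poisson survival function:

```python
from scipy.stats import poisson

for c in [0.05, 0.2, 0.5]:
    for k in [1, 2, 3]:
        # P(Poisson(c) >= k) = 1 - CDF(k - 1), compared against the bound c^k.
        tail = poisson.sf(k - 1, c)
        print(f"c = {c:4.2f}, k = {k}:  P(Poisson >= k) = {tail:.3e}  <=  c^k = {c ** k:.3e}")
```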
Recall that \(B_{n}^{\gamma}=\{\bar{X}_{n}\in B^{\gamma}\}\) and \(B^{\gamma}\triangleq\{\xi\in\mathbb{D}:\#\{t\in[0,1]:\xi(t)-\xi(t-)\geq\gamma\}\geq l^{*}\}.\) Therefore, \[\mathbf{P}(B_{n}^{\gamma})=\mathbf{P}\Big{(}\#\big{\{}t\in[0,n]:\ X(t)-X(t-)\geq n\gamma\big{\}}\geq l^{*}\Big{)}\qquad\text{due to }\bar{X}_{n}(t)=X(nt)/n\] \[=\sum_{k\geq l^{*}}\exp\big{(}-n\nu[n\gamma,\infty)\big{)}\frac{\big{(}n\nu[n\gamma,\infty)\big{)}^{k}}{k!}\leq\big{(}n\nu[n\gamma,\infty)\big{)}^{l^{*}}\qquad\text{due to (6.11)}.\] Lastly, since the function \(H_{+}(x)=\nu[x,\infty)\) is regularly varying as \(x\to\infty\) with index \(-\alpha\) (see Assumption 1), we have \(\lim_{n\to\infty}\frac{\big{(}n\nu[n\gamma,\infty)\big{)}^{l^{*}}}{\big{(}n\nu[n,\infty)\big{)}^{l^{*}}}=1/\gamma^{\alpha l^{*}}\in(0,\infty)\), and hence \(\mathbf{P}(B_{n}^{\gamma})=\boldsymbol{O}\big{(}(n\nu[n,\infty))^{l^{*}}\big{)}\).

**Proof of Claim (6.9)**: By definition of \(Z_{n}\) in (3.8), \[Z_{n,1}\triangleq Z_{n}\mathbb{1}_{B_{n}^{\gamma}}=\sum_{m=0}^{\tau}\frac{\hat{Y}_{n}^{m}\mathbb{1}_{E_{n}\cap B_{n}^{\gamma}}-\hat{Y}_{n}^{m-1}\mathbb{1}_{E_{n}\cap B_{n}^{\gamma}}}{\mathbf{P}(\tau\geq m)}.\] We make one more observation. For any \(k=0,1,\cdots,l^{*}-1\), on the event \(\{\mathcal{D}(\bar{J}_{n})=k\}=\big{\{}\#\{t\in[0,1]:\bar{X}_{n}(t)-\bar{X}_{n}(t-)\geq\gamma\}=k\big{\}}\) there are fewer than \(l^{*}\) jumps of size at least \(\gamma\) in \(\bar{X}_{n}\), so we must have \(\mathbb{1}_{B_{n}^{\gamma}}=0\), and hence \(Z_{n,1}=0\). By applying Result 3, we obtain \[\mathbf{E}Z_{n,1}^{2}\leq\sum_{m\geq 1}\frac{\mathbf{E}\Big{[}\big{|}Y_{n}^{*}\mathbb{1}_{E_{n}\cap B_{n}^{\gamma}}-\hat{Y}_{n}^{m-1}\mathbb{1}_{E_{n}\cap B_{n}^{\gamma}}\big{|}^{2}\Big{]}}{\mathbf{P}(\tau\geq m)}=\sum_{m\geq 1}\frac{\mathbf{E}\Big{[}\big{|}Y_{n}^{*}-\hat{Y}_{n}^{m-1}\big{|}^{2}\mathbb{1}_{E_{n}\cap B_{n}^{\gamma}}\Big{]}}{\mathbf{P}(\tau\geq m)}\leq\sum_{m\geq 1}\frac{\mathbf{E}\Big{[}\big{|}Y_{n}^{*}-\hat{Y}_{n}^{m-1}\big{|}^{2}\mathbb{1}_{B_{n}^{\gamma}}\Big{]}}{\mathbf{P}(\tau\geq m)}\] \[=\sum_{m\geq 1}\frac{\mathbf{E}\Big{[}\mathbb{1}\big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1}\big{)}\cdot\mathbb{1}_{B_{n}^{\gamma}}\Big{]}}{\mathbf{P}(\tau\geq m)}\qquad\text{because }\hat{Y}_{n}^{m}\text{ and }Y_{n}^{*}\text{ only take values in }\{0,1\}\] \[=\sum_{m\geq 1}\sum_{k\geq l^{*}}\frac{\mathbf{E}\Big{[}\mathbb{1}\big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1}\big{)}\cdot\mathbb{1}_{B_{n}^{\gamma}}\Big{|}\ \{\mathcal{D}(\bar{J}_{n})=k\}\Big{]}}{\mathbf{P}(\tau\geq m)}\cdot\mathbf{P}\big{(}\mathcal{D}(\bar{J}_{n})=k\big{)}\qquad\text{due to }\mathbb{1}_{B_{n}^{\gamma}}=0\text{ on }\{\mathcal{D}(\bar{J}_{n})<l^{*}\}\] \[\leq\sum_{k\geq l^{*}}\mathbf{P}\big{(}\mathcal{D}(\bar{J}_{n})=k\big{)}\cdot\sum_{m\geq 1}\frac{\mathbf{P}\Big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1}\ \Big{|}\ \{\mathcal{D}(\bar{J}_{n})=k\}\Big{)}}{\mathbf{P}(\tau\geq m)}\] \[\leq\sum_{k\geq l^{*}}\mathbf{P}\big{(}\mathcal{D}(\bar{J}_{n})=k\big{)}\cdot\bigg{[}\sum_{m=1}^{\bar{m}}\frac{1}{\rho^{m-1}}+\sum_{m\geq\bar{m}+1}\frac{C_{0}\rho_{0}^{m-1}\cdot(k+1)}{\rho^{m-1}}\bigg{]}\qquad\text{by the geometric convergence bound and }\tau\sim\text{Geom}(\rho)\]
\[\leq\sum_{k\geq l^{*}}\mathbf{P}\big{(}\mathcal{D}(\bar{J}_{n})=k\big{)}\cdot(k+1)\cdot\bigg{[}\underbrace{\sum_{m=1}^{\bar{m}}\frac{1}{\rho^{m-1}}+\sum_{m\geq\bar{m}+1}\frac{C_{0}\rho_{0}^{m-1}}{\rho^{m-1}}}_{\triangleq\widetilde{C}_{\rho,1}}\bigg{]}.\] In particular, for any \(\rho\in(\rho_{0},1)\), we have \(\widetilde{C}_{\rho,1}<\infty\), and hence \[\mathbf{E}Z_{n,1}^{2}\leq\widetilde{C}_{\rho,1}\sum_{k\geq l^{*}}(k+1)\cdot\mathbf{P}\big{(}\mathcal{D}(\bar{J}_{n})=k\big{)}=\widetilde{C}_{\rho,1}\sum_{k\geq l^{*}}(k+1)\cdot\exp\big{(}-n\nu[n\gamma,\infty)\big{)}\frac{\big{(}n\nu[n\gamma,\infty)\big{)}^{k}}{k!}\] \[=\widetilde{C}_{\rho,1}\sum_{k\geq l^{*}}\frac{k+1}{k}\cdot k\cdot\exp\big{(}-n\nu[n\gamma,\infty)\big{)}\frac{\big{(}n\nu[n\gamma,\infty)\big{)}^{k}}{k!}\leq 2\widetilde{C}_{\rho,1}\sum_{k\geq l^{*}}k\cdot\exp\big{(}-n\nu[n\gamma,\infty)\big{)}\frac{\big{(}n\nu[n\gamma,\infty)\big{)}^{k}}{k!}\qquad\text{due to }l^{*}\geq 1\implies\frac{k+1}{k}\leq 2\ \forall k\geq l^{*}\] \[=2\widetilde{C}_{\rho,1}\sum_{k\geq l^{*}}\exp\big{(}-n\nu[n\gamma,\infty)\big{)}\frac{\big{(}n\nu[n\gamma,\infty)\big{)}^{k}}{(k-1)!}=2\widetilde{C}_{\rho,1}\cdot\big{(}n\nu[n\gamma,\infty)\big{)}^{l^{*}}\sum_{k\geq l^{*}}\exp\big{(}-n\nu[n\gamma,\infty)\big{)}\frac{\big{(}n\nu[n\gamma,\infty)\big{)}^{k-l^{*}}}{(k-1)!}\] \[\leq 2\widetilde{C}_{\rho,1}\cdot\big{(}n\nu[n\gamma,\infty)\big{)}^{l^{*}}\qquad\text{due to }l^{*}\geq 1\implies(k-1)!\geq(k-l^{*})!.\] Lastly, due to the regularly varying nature of \(H_{+}(x)=\nu[x,\infty)\), we can conclude that \(\mathbf{E}Z_{n,1}^{2}=\boldsymbol{O}\big{(}(n\nu[n,\infty))^{l^{*}}\big{)}\) as \(n\to\infty\).

**Proof of Claim (6.10)**: We start by fixing some \(\rho\in(\rho_{0},1)\) and specifying a few constants. Due to \(\rho_{0}<\rho<1\), one can find some \(q>1\) such that \(\rho_{0}^{1/q}<\rho\). Let \(p>1\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). Due to Assumption 3, we have \(a>(l^{*}-1)b\). By picking any \(\gamma\in(0,b)\) small enough, we have \((\hat{J}_{\gamma}+l^{*}-1)/p>2l^{*}\) where \[\hat{J}_{\gamma}=\frac{a-(l^{*}-1)b}{\gamma}.\] Then one can pick some \(\Delta>0\) small enough such that \(\big{[}a-\Delta-(l^{*}-1)b\big{]}/\gamma\notin\mathbb{Z}\), \(a-\Delta>(l^{*}-1)b\), and \((J_{\gamma}+l^{*}-1)/p>2l^{*}\) where \[J_{\gamma}=\Big{\lceil}\frac{a-\Delta-(l^{*}-1)b}{\gamma}\Big{\rceil}.\] Recall that \(A^{\Delta}=\big{\{}\xi\in\mathbb{D}:\sup_{t\in[0,1]}\xi(t)\geq a-\Delta;\ \sup_{t\in(0,1]}\xi(t)-\xi(t-)<b\big{\}}\) from Lemma 6.2, and let \(A^{\Delta}_{n}=\{\bar{X}_{n}\in A^{\Delta}\}\).
Note that \[Z_{n,2}\triangleq Z_{n}\mathbb{1}_{(B^{\gamma}_{n})^{c}}=\underbrace{Z_{n}\mathbb{1}_{A^{\Delta}_{n}\cap(B^{\gamma}_{n})^{c}}}_{\triangleq Z_{n,3}}+\underbrace{Z_{n}\mathbb{1}_{(A^{\Delta}_{n})^{c}\cap(B^{\gamma}_{n})^{c}}}_{\triangleq Z_{n,4}}.\] For the term \(Z_{n,3}=\sum_{m=0}^{\tau}\frac{\hat{Y}_{n}^{m}\mathbb{1}_{A_{n}^{\Delta}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}-\hat{Y}_{n}^{m-1}\mathbb{1}_{A_{n}^{\Delta}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}}{\mathbf{P}(\tau\geq m)}\), applying Result 3 we get \[\mathbf{E}Z_{n,3}^{2}\leq\sum_{m\geq 1}\frac{\mathbf{E}\Big{[}\big{|}Y_{n}^{*}\mathbb{1}_{A_{n}^{\Delta}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}-\hat{Y}_{n}^{m-1}\mathbb{1}_{A_{n}^{\Delta}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}\big{|}^{2}\Big{]}}{\mathbf{P}(\tau\geq m)}=\sum_{m\geq 1}\frac{\mathbf{E}\Big{[}\big{|}Y_{n}^{*}-\hat{Y}_{n}^{m-1}\big{|}^{2}\mathbb{1}_{A_{n}^{\Delta}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}\Big{]}}{\mathbf{P}(\tau\geq m)}\] \[=\sum_{m\geq 1}\frac{\mathbf{E}\Big{[}\mathbb{1}\big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1}\big{)}\cdot\mathbb{1}_{A_{n}^{\Delta}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}\Big{]}}{\mathbf{P}(\tau\geq m)}\qquad\text{because }\hat{Y}_{n}^{m}\text{ and }Y_{n}^{*}\text{ only take values in }\{0,1\}\] \[\leq\sum_{m\geq 1}\frac{\Big{(}\mathbf{P}\big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1}\big{)}\Big{)}^{1/q}\cdot\Big{(}\mathbf{P}\big{(}A_{n}^{\Delta}\cap E_{n}\cap(B_{n}^{\gamma})^{c}\big{)}\Big{)}^{1/p}}{\mathbf{P}(\tau\geq m)}\qquad\text{by H\"{o}lder's inequality}.\] Applying Lemma 6.2, we get \(\mathbf{P}\big{(}A_{n}^{\Delta}\cap E_{n}\cap(B_{n}^{\gamma})^{c}\big{)}=\boldsymbol{o}\Big{(}\big{(}n\nu[n,\infty)\big{)}^{2pl^{*}}\Big{)}\) as \(n\to\infty\). As a result, \(\Big{(}\mathbf{P}\big{(}A_{n}^{\Delta}\cap E_{n}\cap(B_{n}^{\gamma})^{c}\big{)}\Big{)}^{1/p}=\boldsymbol{o}\Big{(}\big{(}n\nu[n,\infty)\big{)}^{2l^{*}}\Big{)}\) as \(n\to\infty\).
On the other hand, for any \(n\geq 1\) and \(m\geq\bar{m}\), \[\mathbf{P}\big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1}\big{)}=\sum_{k\geq 0}\mathbf{P}\big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1}\big{|}\ \mathcal{D}(\bar{J}_{n})=k\big{)}\mathbf{P}\big{(}\mathcal{D}(\bar{J}_{n})=k\big{)}\] \[\leq\sum_{k\geq 0}C_{0}\rho_{0}^{m-1}\cdot(k+1)\cdot\exp\big{(}-n\nu[n\gamma,\infty)\big{)}\frac{\big{(}n\nu[n\gamma,\infty)\big{)}^{k}}{k!}\qquad\text{by the geometric convergence bound}\] \[=C_{0}\rho_{0}^{m-1}\cdot\big{(}1+n\nu[n\gamma,\infty)\big{)}\leq C_{0}^{*}\rho_{0}^{m-1},\] with \(C_{0}^{*}=C_{0}\cdot\sup_{n\geq 1}\big{(}1+n\nu[n\gamma,\infty)\big{)}<\infty\) as before. Therefore, by our choice of \(q\) with \(\rho_{0}^{1/q}<\rho\), we have \(\sum_{m\geq 1}\big{(}\mathbf{P}(Y_{n}^{*}\neq\hat{Y}_{n}^{m-1})\big{)}^{1/q}\big{/}\mathbf{P}(\tau\geq m)<\infty\) uniformly in \(n\), and hence \[\mathbf{E}Z_{n,3}^{2}=\boldsymbol{o}\Big{(}\big{(}n\nu[n,\infty)\big{)}^{2l^{*}}\Big{)}\qquad\text{as }n\to\infty. \tag{6.12}\] Similarly, we can bound the second-order moment of the term \(Z_{n,4}=\sum_{m=0}^{\tau}\frac{\hat{Y}_{n}^{m}\mathbb{1}_{(A_{n}^{\Delta})^{c}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}-\hat{Y}_{n}^{m-1}\mathbb{1}_{(A_{n}^{\Delta})^{c}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}}{\mathbf{P}(\tau\geq m)}\) with \[\mathbf{E}Z_{n,4}^{2}\leq\sum_{m\geq 1}\frac{\mathbf{E}\Big{[}\big{|}Y_{n}^{*}\mathbb{1}_{(A_{n}^{\Delta})^{c}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}-\hat{Y}_{n}^{m-1}\mathbb{1}_{(A_{n}^{\Delta})^{c}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}\big{|}^{2}\Big{]}}{\mathbf{P}(\tau\geq m)}=\sum_{m\geq 1}\frac{\mathbf{E}\Big{[}\big{|}Y_{n}^{*}-\hat{Y}_{n}^{m-1}\big{|}^{2}\mathbb{1}_{(A_{n}^{\Delta})^{c}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}\Big{]}}{\mathbf{P}(\tau\geq m)}\] \[=\sum_{m\geq 1}\frac{\mathbf{E}\Big{[}\mathbb{1}\big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1}\big{)}\cdot\mathbb{1}_{(A_{n}^{\Delta})^{c}\cap E_{n}\cap(B_{n}^{\gamma})^{c}}\Big{]}}{\mathbf{P}(\tau\geq m)}\qquad\text{because }\hat{Y}_{n}^{m}\text{ and }Y_{n}^{*}\text{ only take values in }\{0,1\}\] \[\leq\sum_{m\geq 1}\frac{\mathbf{P}\Big{(}\big{\{}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1},\ \bar{X}_{n}\notin A^{\Delta}\big{\}}\cap(B_{n}^{\gamma})^{c}\Big{)}}{\mathbf{P}(\tau\geq m)}\qquad\text{due to }A_{n}^{\Delta}=\{\bar{X}_{n}\in A^{\Delta}\}\] \[=\sum_{m\geq 1}\frac{\mathbf{P}\Big{(}\big{\{}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1},\ \bar{X}_{n}\notin A^{\Delta}\big{\}}\cap\{\mathcal{D}(\bar{J}_{n})<l^{*}\}\Big{)}}{\mathbf{P}(\tau\geq m)}\qquad\text{due to }B_{n}^{\gamma}=\{\bar{X}_{n}\in B^{\gamma}\}=\{\bar{J}_{n}\in B^{\gamma}\}=\{\mathcal{D}(\bar{J}_{n})\geq l^{*}\}\] \[=\sum_{m\geq 1}\sum_{k=0}^{l^{*}-1}\frac{\mathbf{P}\big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1},\ \bar{X}_{n}\notin A^{\Delta}\big{|}\ \{\mathcal{D}(\bar{J}_{n})=k\}\big{)}}{\mathbf{P}(\tau\geq m)}\cdot\mathbf{P}\big{(}\mathcal{D}(\bar{J}_{n})=k\big{)}\] \[\leq\sum_{m\geq 1}\sum_{k=0}^{l^{*}-1}\frac{C_{0}\rho_{0}^{m-1}}{\Delta^{2}n^{\mu}\cdot\rho^{m-1}}\qquad\text{by the assumed conditional bound on }\mathbf{P}\big{(}Y_{n}^{*}\neq\hat{Y}_{n}^{m-1},\ \bar{X}_{n}\notin A^{\Delta}\,\big{|}\,\mathcal{D}(\bar{J}_{n})=k\big{)}\] \[=l^{*}\sum_{m\geq 1}\frac{C_{0}\rho_{0}^{m-1}}{\Delta^{2}n^{\mu}\cdot\rho^{m-1}}=\frac{C_{0}l^{*}}{\Delta^{2}\cdot(1-\frac{\rho_{0}}{\rho})}\cdot\frac{1}{n^{\mu}}.\] Lastly, recall the condition that \(\mu>2l^{*}(\alpha-1)\). Since \(n\nu[n,\infty)\in\mathcal{RV}_{-(\alpha-1)}\) as \(n\to\infty\), we have \(\big{(}n\nu[n,\infty)\big{)}^{2l^{*}}\in\mathcal{RV}_{-2l^{*}(\alpha-1)}\). As a result, \(1/n^{\mu}=\boldsymbol{o}\Big{(}\big{(}n\nu[n,\infty)\big{)}^{2l^{*}}\Big{)}\) and hence \[\mathbf{E}Z_{n,4}^{2}=\boldsymbol{o}\Big{(}\big{(}n\nu[n,\infty)\big{)}^{2l^{*}}\Big{)}\qquad\text{as }n\to\infty. \tag{6.13}\]
Combining (6.12) and (6.13) with the elementary bound \((x+y)^{2}\leq 2x^{2}+2y^{2}\), we obtain \(\mathbf{E}Z_{n,2}^{2}\leq 2\mathbf{E}Z_{n,3}^{2}+2\mathbf{E}Z_{n,4}^{2}=\boldsymbol{o}\Big{(}\big{(}n\nu[n,\infty)\big{)}^{2l^{*}}\Big{)}\) and conclude the proof of (6.10).

### Proofs of Propositions 3.2 and 3.3

The proofs of Propositions 3.2 and 3.3 are based on the technical tools developed below. First, we collect two useful results.

**Result 5** (Lemma 9 in [30]).: _Let \(\nu\) be the Levy measure of a Levy process \(X\). Let \(I_{0}^{p}(\nu)\triangleq\int_{(-1,1)}|x|^{p}\nu(dx)\). Suppose that for the Blumenthal-Getoor index \(\beta\triangleq\inf\{p>0:\ I_{0}^{p}(\nu)<\infty\}\) we have \(\beta<2\). Then for any \(\beta_{+}\in(\beta,2)\),_ \[\int_{(-\kappa,\kappa)}x^{2}\nu(dx)\leq\kappa^{2-\beta_{+}}I_{0}^{\beta_{+}}(\nu)\qquad\forall\kappa\in(0,1].\]

The next result can be obtained by setting \(T=t\) in Lemma 10 in [30].

**Result 6** (Lemma 10 in [30]).: _Let \(X\) be a Levy process with generating triplet \((c,\sigma,\nu)\). Suppose that \(I^{1}_{+}(\nu)\triangleq\int_{[1,\infty)}x\nu(dx)<\infty\) and for the Blumenthal-Getoor index \(\beta\triangleq\inf\{p>0:\ I^{p}_{0}(\nu)<\infty\}\) we have \(\beta<2\). For any \(t>0,\beta_{+}\in(1\vee\beta,2)\),_ \[\mathbf{E}\big{[}M(t)\big{]}\leq\Big{(}|\sigma|\sqrt{\frac{2}{\pi}}+2\sqrt{I^{\beta_{+}}_{0}(\nu)}\Big{)}\sqrt{t}+\Big{(}c^{+}+I^{1}_{+}(\nu)+2I^{\beta_{+}}_{0}(\nu)\Big{)}t\] _where \(M(t)\triangleq\sup_{s\leq t}X(s)\) and \(I^{p}_{0}(\nu)\triangleq\int_{(-1,1)}|x|^{p}\nu(dx)\)._

Next, we study the expectations regarding the supremum of \(\Xi_{n}\) (see (3.6) for the definition) and the difference between \(\Xi_{n}\) and \(\tilde{\Xi}_{n}^{m}\) (see (3.17)).

**Lemma 6.3**.: _There exists a constant \(C_{X}<\infty\), depending only on the law of the Levy process \(X\), such that_ \[\mathbf{E}\bigg{[}\sup_{s\in[0,t]}\Xi_{n}(s)\bigg{]}\leq C_{X}(\sqrt{t}+t)\qquad\forall t>0,\ n\geq 1.\]

Proof.: Recall that the generating triplet of \(X\) is \((c_{X},\sigma,\nu)\) and the Blumenthal-Getoor index \(\beta\triangleq\inf\{p>0:\int_{(-1,1)}|x|^{p}\nu(dx)<\infty\}\) satisfies \(\beta<2\); see Assumption 1. Fix some \(\beta_{+}\in(1\vee\beta,2)\), and let \[C_{X}\triangleq\max\Big{\{}|\sigma|\sqrt{\frac{2}{\pi}}+2\sqrt{I^{\beta_{+}}_{0}(\nu)},\ c_{X}^{+}+I^{1}_{+}(\nu)+2I^{\beta_{+}}_{0}(\nu)\Big{\}}\] where \(x^{+}=x\lor 0\) for any \(x\in\mathbb{R}\) and \(I^{1}_{+}(\nu)\triangleq\int_{[1,\infty)}x\nu(dx),I^{p}_{0}(\nu)\triangleq\int_{(-1,1)}|x|^{p}\nu(dx).\) Meanwhile, recall that for any \(n\geq 1,\gamma>0\), the process \(\Xi_{n}\) is a Levy process with generating triplet \((c_{X},\sigma,\nu|_{(-\infty,n\gamma)})\). Let \(\nu_{n}\triangleq\nu|_{(-\infty,n\gamma)}\). It follows from Result 6 that \(\mathbf{E}\sup_{s\in[0,t]}\Xi_{n}(s)\leq\Big{(}|\sigma|\sqrt{\frac{2}{\pi}}+2\sqrt{I^{\beta_{+}}_{0}(\nu_{n})}\Big{)}\sqrt{t}+\Big{(}c_{X}^{+}+I^{1}_{+}(\nu_{n})+2I^{\beta_{+}}_{0}(\nu_{n})\Big{)}t\) holds for any \(t>0,n\geq 1\). In particular, since \(I^{\beta_{+}}_{0}(\nu_{n})=\int_{(-1,1)}|x|^{\beta_{+}}\nu_{n}(dx)=\int_{(-1,1)\cap(-\infty,n\gamma)}|x|^{\beta_{+}}\nu(dx)\leq I^{\beta_{+}}_{0}(\nu)\) and \(I^{1}_{+}(\nu_{n})=\int_{[1,\infty)}x\nu_{n}(dx)=\int_{[1,\infty)\cap(-\infty,n\gamma)}x\nu(dx)=I^{1}_{+}(\nu)\), we obtain \[\mathbf{E}\bigg{[}\sup_{s\in[0,t]}\Xi_{n}(s)\bigg{]}\leq C_{X}(\sqrt{t}+t)\qquad\forall t>0,\ n\geq 1\] and conclude the proof.
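The Brownian contribution to Lemma 6.3 is sharp: by the reflection principle, \(\mathbf{E}\sup_{s\leq t}\sigma B(s)=|\sigma|\sqrt{2t/\pi}\), which is exactly the \(\sqrt{t}\) term in Result 6. A quick simulation check (illustrative parameters; the discrete grid introduces a slight downward bias):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, t = 0.7, 2.0
n_steps, n_paths = 1_000, 10_000

# Discretized Brownian paths on [0, t]; sup over the grid, including s = 0.
dW = sigma * np.sqrt(t / n_steps) * rng.standard_normal((n_paths, n_steps))
sup_vals = np.maximum(np.cumsum(dW, axis=1).max(axis=1), 0.0)
print(f"E sup_s sigma*B(s) ~ {sup_vals.mean():.4f}  "
      f"vs  sigma*sqrt(2t/pi) = {sigma * np.sqrt(2 * t / np.pi):.4f}")
```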
**Lemma 6.4**.: _Let \(\beta_{+}\in(\beta,2)\) where \(\beta<2\) is the Blumenthal-Getoor index (see Assumption 1). There exists some \(C_{\beta_{+}}\in(0,\infty)\), depending only on \(\beta_{+}\) and the law of the Levy process \(X\), such that_ \[\mathbf{P}\Big{(}\sup_{t\in[0,n]}\Big{|}\Xi_{n}(t)-\tilde{\Xi}_{n}^{m}(t)\Big{|}>c\Big{)}\leq\frac{C_{\beta_{+}}\kappa^{m(2-\beta_{+})}}{c^{2}\cdot n^{r(2-\beta_{+})-1}}\qquad\forall c>0,n\geq 1\] _where \(r\) is the parameter in the truncation threshold \(\kappa_{n,m}=\kappa^{m}/n^{r}\) (see (3.14))._

Proof.: From the definitions of \(\Xi_{n}\) and \(\tilde{\Xi}_{n}^{m}\) in (3.15)(3.17), \[\Xi_{n}(t)-\tilde{\Xi}_{n}^{m}(t)\stackrel{{ d}}{{=}}X^{(-\kappa_{n,m},\kappa_{n,m})}(t)-\bar{\sigma}(\kappa_{n,m})B(t)\] where (for any \(c\in(0,1]\)) \(X^{(-c,c)}\) is the Levy process with generating triplet \((0,0,\nu|_{(-c,c)})\), \(\kappa_{n,m}=\kappa^{m}/n^{r}\), and \(B\) is a standard Brownian motion independent of \(X^{(-\kappa_{n,m},\kappa_{n,m})}\). In particular, \(X^{(-\kappa_{n,m},\kappa_{n,m})}\) is a martingale with variance \(\mathbf{Var}\,X^{(-\kappa_{n,m},\kappa_{n,m})}(1)=\bar{\sigma}^{2}(\kappa_{n,m})\); here \(\bar{\sigma}^{2}(c)\triangleq\int_{(-c,c)}x^{2}\nu(dx)\) is defined in (3.16). The difference is therefore a martingale with second moment \(2n\bar{\sigma}^{2}(\kappa_{n,m})\) at time \(n\), so by Doob's maximal inequality, \[\mathbf{P}\Big{(}\sup_{t\in[0,n]}\Big{|}\Xi_{n}(t)-\tilde{\Xi}_{n}^{m}(t)\Big{|}>c\Big{)}\leq\frac{2n\bar{\sigma}^{2}(\kappa_{n,m})}{c^{2}}\leq\frac{2n}{c^{2}}\cdot\kappa_{n,m}^{2-\beta_{+}}I_{0}^{\beta_{+}}(\nu)\qquad\text{using Result 5}\] \[=\frac{2I_{0}^{\beta_{+}}(\nu)}{c^{2}}\cdot\frac{n\kappa^{m(2-\beta_{+})}}{n^{r(2-\beta_{+})}}=\frac{2I_{0}^{\beta_{+}}(\nu)}{c^{2}}\cdot\frac{\kappa^{m(2-\beta_{+})}}{n^{r(2-\beta_{+})-1}}\qquad\text{due to }\kappa_{n,m}=\kappa^{m}/n^{r}.\] To conclude the proof, note that the constant \(C_{\beta_{+}}=2I_{0}^{\beta_{+}}(\nu)=2\int_{(-1,1)}|x|^{\beta_{+}}\nu(dx)\) only depends on \(\beta_{+}\) and the law of \(X\).

To present the next few lemmas we introduce a slightly more general version of the stick-breaking procedure described in (3.18)(3.19)(3.20). For any \(l>0\), let \[l_{1}(l)=V_{1}\cdot l, \tag{6.14}\] \[l_{j}(l)=V_{j}\cdot(l-l_{1}-l_{2}-\cdots-l_{j-1})\ \forall j\geq 2 \tag{6.15}\] where \(\{V_{j}\}_{j\geq 1}\) is an iid sequence of \(\mathrm{Unif}(0,1)\) random variables. For any \(n\geq 1\), let \(\Xi_{n},\tilde{\Xi}_{n}^{m}\) be Levy processes with joint law specified in (3.15)(3.17) and let them be independent of the sequence \(\{V_{j}\}_{j\geq 1}\). Conditioning on the values of \(l_{j}(l)\), define \(\xi_{j}^{[n]}(l),\xi_{j}^{[n],m}(l)\) using \[\big{(}\xi_{j}^{[n]}(l),\xi_{j}^{[n],1}(l),\xi_{j}^{[n],2}(l),\xi_{j}^{[n],3}(l),\cdots\big{)}=\Big{(}\Xi_{n}\big{(}l_{j}(l)\big{)},\tilde{\Xi}_{n}^{1}\big{(}l_{j}(l)\big{)},\tilde{\Xi}_{n}^{2}\big{(}l_{j}(l)\big{)},\tilde{\Xi}_{n}^{3}\big{(}l_{j}(l)\big{)},\cdots\Big{)}\ \forall j\geq 1. \tag{6.16}\]
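A fact about the stick-breaking lengths (6.14)–(6.15) that is used repeatedly below is \(\mathbf{E}\,l_{j}(l)=l/2^{j}\), which follows by induction since each \(V_{j}\) halves the remaining length in expectation. A quick simulation check (the values of \(l\) and the number of sticks are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
l, J, n_samples = 1.0, 8, 200_000

V = rng.uniform(size=(n_samples, J))
lengths = np.empty((n_samples, J))
remaining = np.full(n_samples, l)
for j in range(J):
    lengths[:, j] = V[:, j] * remaining   # l_j = V_j * (l - l_1 - ... - l_{j-1})
    remaining -= lengths[:, j]

for j in range(J):
    print(f"E l_{j+1} ~ {lengths[:, j].mean():.5f}   vs   l / 2^{j+1} = {l / 2 ** (j + 1):.5f}")
```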
**Lemma 6.5**.: _Let \(n\in\mathbb{Z}_{+}\) and \(l\in[0,n]\). Let \(l_{j}(l)\) and \(\xi_{j}^{[n]}(l),\xi_{j}^{[n],m}(l)\) be defined as in (6.14)-(6.16). Let \(\beta_{+}\in(\beta,2)\) where \(\beta<2\) is the Blumenthal-Getoor index (see Assumption 1). There exists some \(C_{\beta_{+}}\in(0,\infty)\), depending only on \(\beta_{+}\) and the law of the Levy process \(X\), such that_ \[\mathbf{P}\Big{(}\Big{|}\sum_{j=1}^{m+\lceil\log_{2}(n^{d})\rceil}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}-\sum_{j=1}^{m+\lceil\log_{2}(n^{d})\rceil}\big{(}\xi_{j}^{[n],m}(l)\big{)}^{+}\Big{|}>c\Big{)}\leq\frac{C_{\beta_{+}}\kappa^{m(2-\beta_{+})}}{c^{2}\cdot n^{r(2-\beta_{+})-1}}\qquad\forall c,d>0,m\geq 0,n\geq 1\] _where \(r\) is the parameter in the truncation threshold \(\kappa_{n,m}=\kappa^{m}/n^{r}\) (see (3.14))._

Proof.: Let \(k(n)=\lceil\log_{2}(n^{d})\rceil\). First, due to \(|x^{+}-y^{+}|\leq|x-y|\), \[\mathbf{P}\Big{(}\Big{|}\sum_{j=1}^{m+k(n)}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}-\sum_{j=1}^{m+k(n)}\big{(}\xi_{j}^{[n],m}(l)\big{)}^{+}\Big{|}>c\Big{)}\leq\mathbf{P}\Big{(}\sum_{j=1}^{m+k(n)}\Big{|}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}-\big{(}\xi_{j}^{[n],m}(l)\big{)}^{+}\Big{|}>c\Big{)}\leq\mathbf{P}\Big{(}\sum_{j=1}^{m+k(n)}\Big{|}\underbrace{\xi_{j}^{[n]}(l)-\xi_{j}^{[n],m}(l)}_{\triangleq q_{j}}\Big{|}>c\Big{)}. \tag{6.17}\] Let \(\chi=2^{1/4}\). Due to \(\frac{1}{\chi}+\frac{1}{\chi^{2}}+\cdots+\frac{1}{\chi^{m+k(n)}}\leq\frac{1}{\chi-1}\), we have \[1\geq(\chi-1)\Big{(}\frac{1}{\chi}+\frac{1}{\chi^{2}}+\cdots+\frac{1}{\chi^{m+k(n)}}\Big{)}.\] As a result, by a union bound followed by Chebyshev's inequality (conditionally on \(l_{j}(l)\), the increment \(q_{j}\) has variance \(2\bar{\sigma}^{2}(\kappa_{n,m})l_{j}(l)\), and \(\mathbf{E}l_{j}(l)=l/2^{j}\)), \[\mathbf{P}\Big{(}\sum_{j=1}^{k(n)+m}|q_{j}|>c\Big{)}\leq\mathbf{P}\Big{(}\sum_{j=1}^{k(n)+m}|q_{j}|>c(\chi-1)\sum_{j=1}^{k(n)+m}\frac{1}{\chi^{j}}\Big{)}\leq\sum_{j=1}^{k(n)+m}\mathbf{P}\Big{(}|q_{j}|>\frac{c(\chi-1)}{\chi^{j}}\Big{)}\leq\sum_{j=1}^{k(n)+m}\frac{2n}{c^{2}(2^{1/4}-1)^{2}\sqrt{2^{j}}}\cdot\bar{\sigma}^{2}(\kappa_{n,m})\qquad\text{due to }l\leq n.\] Therefore, in (6.17) we obtain \[\mathbf{P}\Big{(}\Big{|}\sum_{j=1}^{m+k(n)}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}-\sum_{j=1}^{m+k(n)}\big{(}\xi_{j}^{[n],m}(l)\big{)}^{+}\Big{|}>c\Big{)}\leq\bar{\sigma}^{2}(\kappa_{n,m})\cdot\sum_{j=1}^{m+k(n)}\frac{2n}{c^{2}(2^{1/4}-1)^{2}\sqrt{2^{j}}}\leq\bar{\sigma}^{2}(\kappa_{n,m})\cdot\sum_{j\geq 0}\frac{2n}{c^{2}(2^{1/4}-1)^{2}\sqrt{2^{j}}}=n\bar{\sigma}^{2}(\kappa_{n,m})\cdot\frac{2\sqrt{2}}{c^{2}(2^{1/4}-1)^{2}(\sqrt{2}-1)}.\] Lastly, using Result 5, we have \[n\bar{\sigma}^{2}(\kappa_{n,m})\leq n\cdot\kappa_{n,m}^{2-\beta_{+}}I_{0}^{\beta_{+}}(\nu)=\frac{n\kappa^{m(2-\beta_{+})}}{n^{r(2-\beta_{+})}}\cdot I_{0}^{\beta_{+}}(\nu)=\frac{\kappa^{m(2-\beta_{+})}}{n^{r(2-\beta_{+})-1}}\cdot I_{0}^{\beta_{+}}(\nu)\] where \(I_{0}^{\beta_{+}}(\nu)=\int_{(-1,1)}|x|^{\beta_{+}}\nu(dx)\). To conclude the proof, note that \(I_{0}^{\beta_{+}}(\nu)\) only depends on \(\beta_{+}\) and the law of \(X\).

**Lemma 6.6**.: _Let \(n\in\mathbb{Z}_{+}\) and \(l\in[0,n]\). Let \(C_{X}<\infty\) be the constant characterized in Lemma 6.3 that only depends on the law of the Levy process \(X\).
The inequality_ \[\mathbf{P}\Big{(}\sum_{j>m+\lceil\log_{2}(n^{d})\rceil}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}>c\Big{)}\leq\frac{C_{X}}{c}\bigg{[}\sqrt{\frac{1}{n^{d-1}\cdot 2^{m}}}+\frac{1}{n^{d-1}\cdot 2^{m}}\bigg{]}\] _holds for all \(c,d>0\), \(n\geq 1\), and \(m\geq 0\)._

Proof.: Conditioning on \(l_{m+\lceil\log_{2}(n^{d})\rceil}(l)=t\), it follows directly from Result 4 that \[\mathscr{L}\bigg{(}\sum_{j>m+\lceil\log_{2}(n^{d})\rceil}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}\ \bigg{|}\ l_{m+\lceil\log_{2}(n^{d})\rceil}(l)=t\bigg{)}=\mathscr{L}\Big{(}\sup_{s\in[0,t]}\Xi_{n}(s)\Big{)}.\] It then follows from Markov's inequality and Lemma 6.3 that \[\mathbf{P}\bigg{(}\sum_{j>m+\lceil\log_{2}(n^{d})\rceil}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}>c\ \bigg{|}\ l_{m+\lceil\log_{2}(n^{d})\rceil}(l)=t\bigg{)}\leq\frac{C_{X}}{c}(\sqrt{t}+t)\qquad\forall c,t>0.\] Therefore, unconditionally, \[\mathbf{P}\bigg{(}\sum_{j>m+\lceil\log_{2}(n^{d})\rceil}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}>c\bigg{)}\leq\frac{C_{X}}{c}\mathbf{E}\Big{[}\sqrt{l_{m+\lceil\log_{2}(n^{d})\rceil}(l)}+l_{m+\lceil\log_{2}(n^{d})\rceil}(l)\Big{]}\leq\frac{C_{X}}{c}\Big{[}\sqrt{\mathbf{E}l_{m+\lceil\log_{2}(n^{d})\rceil}(l)}+\mathbf{E}l_{m+\lceil\log_{2}(n^{d})\rceil}(l)\Big{]}\] because of Jensen's inequality, i.e., \(\mathbf{E}\sqrt{W}\leq\sqrt{\mathbf{E}W}\) for any non-negative random variable \(W\). Lastly, by definition of \(l_{j}(l)\) in (6.14)(6.15), \(\mathbf{E}l_{m+\lceil\log_{2}(n^{d})\rceil}(l)\leq\frac{l}{2^{m}\cdot n^{d}}\leq\frac{n}{2^{m}\cdot n^{d}}=\frac{1}{n^{d-1}\cdot 2^{m}}\); here we applied \(l\leq n\). This concludes the proof.

**Lemma 6.7**.: _Let \(n\in\mathbb{Z}_{+}\) and \(l\in[0,n]\). Let \(C,z_{0},\lambda>0,\theta\in(0,1]\) be the constants in Assumption 2. Let \(C_{X}<\infty\) be the constant characterized in Lemma 6.3 that only depends on the law of the Levy process \(X\). Given \(y_{0},\delta,\alpha_{3},\alpha_{4}>0\), the inequality_ \[\mathbf{P}\Big{(}\sum_{j=1}^{m+\lceil\log_{2}(n^{d})\rceil}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}\in[y,y+c]\Big{)}\leq C\frac{\big{(}m+\lceil\log_{2}(n^{d})\rceil\big{)}n^{\alpha_{4}\lambda}}{\delta^{m\alpha_{3}\lambda}}c^{\theta}+4C_{X}\big{(}m^{2}+(\lceil\log_{2}(n^{d})\rceil)^{2}\big{)}\frac{\delta^{m\alpha_{3}/2}}{y_{0}\cdot n^{\alpha_{4}/2}}\] _holds for all \(y\geq y_{0}\), \(m\geq 0\), and \(c,d>0\)._

Proof.: To simplify notation, let \(k(n)\triangleq\lceil\log_{2}(n^{d})\rceil\) and write \(l_{j}\triangleq l_{j}(l)\) when there is no ambiguity. For the sequence of random variables \((l_{1},\cdots,l_{m+k(n)})\), let \(\tilde{l}_{1}\geq\tilde{l}_{2}\geq\cdots\geq\tilde{l}_{m+k(n)}\) be the order statistics. For any \(t_{1}\geq t_{2}\geq\cdots\geq t_{m+k(n)}>0\), by conditioning on \(\tilde{l}_{j}=t_{j}\ \forall j\in[m+k(n)]\), it follows from (6.16) that \[\mathscr{L}\Big{(}\sum_{j=1}^{m+k(n)}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}\ \Big{|}\ \tilde{l}_{j}=t_{j}\ \forall j\in[m+k(n)]\Big{)}=\mathscr{L}\Big{(}\sum_{j=1}^{m+k(n)}\big{(}\Xi_{n,j}^{\prime}(t_{j})\big{)}^{+}\Big{)}\] where the \(\Xi_{n,j}^{\prime}\) are iid copies of the Levy process \(\Xi_{n}=X^{<n\gamma}\). Next, fix \[\eta=\delta^{m\alpha_{3}}/n^{\alpha_{4}}.\] Also, given the ordered sequence \(t_{j}\), we define \(J\triangleq\#\big{\{}j\in[m+k(n)]:\ t_{j}\geq\eta\big{\}}\) as the number of elements in \(t_{1}\geq t_{2}\geq\cdots\geq t_{m+k(n)}\) that are no less than \(\eta\). Note that if \(t_{1}<\eta\) we have \(J=0\).
By considering a decomposition of event based on the first \(j\) such that \(\Xi_{n,j}^{\prime}(t_{j})>0\) (and hence \(\big{(}\Xi_{n,j}^{\prime}(t_{j})\big{)}^{+}>0\)), we get \[\mathbf{P}\Big{(}\sum_{j=1}^{m+k(n)}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+} \in[y,y+c]\ \Big{|}\ \tilde{l}_{j}=t_{j}\ \forall j\in[m+k(n)]\Big{)}\] \[=\sum_{j=1}^{J}\underbrace{\mathbf{P}\Big{(}\Xi_{n,i}^{\prime}(t_ {i})\leq 0\ \forall i\in[j-1];\ \Xi_{n,j}^{\prime}(t_{j})>0;\ \sum_{i=j}^{m+k(n)}\big{(}\Xi_{n,i}^{\prime}(t_{i})\big{)}^{+}\in[y,y+c] \Big{)}}_{\stackrel{{\Delta}}{{=}}p_{j}} \tag{6.19}\] \[+\underbrace{\mathbf{P}\Big{(}\Xi_{n,i}^{\prime}(t_{i})\leq 0\ \forall i\in[J];\sum_{j=J+1}^{m+k(n)}\big{(}\Xi_{n,j}^{\prime}(t_{j})\big{)}^{+ }\in[y,y+c]\Big{)}}_{\stackrel{{\Delta}}{{=}}p_{*}}.\] We first bound the term \(p_{j}\). For any \(j\in[J]\), observe that \[p_{j} \leq\mathbf{P}\Big{(}\Xi_{n,j}^{\prime}(t_{j})>0;\ \sum_{i=j}^{m+k(n)}\big{(}\Xi_{n,i}^{\prime}(t_{i})\big{)}^{+}\in[y,y+c] \Big{)}\] \[=\int_{\mathbb{R}}\mathbf{P}\Big{(}\Xi_{n,j}^{\prime}(t_{j})\in[y -x,y-x+c]\cap(0,\infty)\Big{)}\mathbf{P}\Big{(}\sum_{i=j+1}^{m+k(n)}\big{(} \Xi_{n,i}^{\prime}(t_{i})\big{)}^{+}\in dx\Big{)}\] \[\leq\frac{C}{t_{j}^{\lambda}\wedge 1}c^{\theta}\qquad\text{ due to Assumption \ref{eq:prop}}\] \[\leq\frac{Cn^{\alpha_{4}\lambda}}{\delta^{m\alpha_{3}\lambda}}c^ {\theta}\qquad\text{ due to }j\leq J\ \Longrightarrow\ t_{j}\geq\eta=\delta^{m\alpha_{3}}/n^{\alpha_{4}}. \tag{6.20}\] Moving on, for term \(p_{*}\), we have \[p_{*} \leq\mathbf{P}\Big{(}\sum_{j=J+1}^{m+k(n)}\big{(}\Xi_{n,j}^{ \prime}(t_{j})\big{)}^{+}\in[y,y+c]\Big{)}\leq\mathbf{P}\Big{(}\sum_{j=J+1}^{m +k(n)}\big{(}\Xi_{n,j}^{\prime}(t_{j})\big{)}^{+}\geq y_{0}\Big{)}\qquad\text{ due to }y\geq y_{0}>0\] \[\leq\sum_{j=J+1}^{m+k(n)}\mathbf{P}\Big{(}\Xi_{n,j}^{\prime}(t_{j })\geq y_{0}/N\Big{)}\qquad\text{ where }N\stackrel{{\Delta}}{{=}}k(m)+m-J\] \[\leq\sum_{j=J+1}^{m+k(n)}\frac{C_{X}(\sqrt{t_{j}}+t_{j})\cdot N}{ y_{0}}\qquad\text{ using Markov's inequality and Lemma \ref{eq:prop}}\] \[\leq\sum_{j=J+1}^{m+k(n)}\frac{C_{X}(\sqrt{\eta}+\eta)\cdot N}{y_{ 0}}\qquad\text{ due to }j>J\ \Longrightarrow\ t_{j}<\eta=\delta^{\alpha_{3}}/n^{\alpha_{4}}\] \[=N^{2}\cdot\frac{C_{X}(\sqrt{\eta}+\eta)}{y_{0}}\leq\big{(}m+k(n )\big{)}^{2}\frac{C_{X}(\sqrt{\eta}+\eta)}{y_{0}}\qquad\text{ due to }N\leq m+k(n)\] \[\leq 2C_{X}\big{(}m^{2}+(\lceil\log_{2}(n^{d})\rceil)^{2}\big{)} \frac{\sqrt{\eta}+\eta}{y_{0}}\qquad\text{ using }(u+v)^{2}\leq 2(u^{2}+v^{2})\] \[\leq 4C_{X}\big{(}m^{2}+(\lceil\log_{2}(n^{d})\rceil)^{2}\big{)} \frac{\sqrt{\eta}}{y_{0}}=4C_{X}\big{(}m^{2}+(\lceil\log_{2}(n^{d})\rceil)^{ 2}\big{)}\frac{\delta^{m\alpha_{3}/2}}{y_{0}\cdot n^{\alpha_{4}/2}}. 
\tag{6.21}\] In the last line of the display above, we applied \(\eta=\delta^{m\alpha_{3}}/n^{\alpha_{4}}\in(0,1).\) Plugging (6.20)(6.21) into (6.19), we yield \[\mathbf{P}\Big{(}\sum_{j=1}^{m+k(n)}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+ }\in[y,y+c]\ \Big{|}\ \tilde{l}_{j}=t_{j}\ \forall j\in[m+k(n)]\Big{)}\] \[\leq J\cdot\frac{Cn^{\alpha_{4}\lambda}}{\delta^{m\alpha_{3} \lambda}}c^{\theta}+4C_{X}\big{(}m^{2}+(\lceil\log_{2}(n^{d})\rceil)^{2}\big{)} \frac{\delta^{m\alpha_{3}/2}}{y_{0}\cdot n^{\alpha_{4}/2}}\] \[\leq C\frac{(m+(\lceil\log_{2}(n^{d})\rceil)n^{\alpha_{4}\lambda} }{\delta^{m\alpha_{3}\lambda}}c^{\theta}+4C_{X}\big{(}m^{2}+(\lceil\log_{2}(n^ {d})\rceil)^{2}\big{)}\frac{m\delta^{\alpha_{3}/2}}{y_{0}\cdot n^{\alpha_{4}/ 2}}\qquad\text{ due to }J\leq m+\lceil\log_{2}(n^{d})\rceil.\] In particular, in the last line of the display above, all quantities involved do not depend on the value of sequence \(t_{j}\), so unconditionally we have \[\mathbf{P}\Big{(}\sum_{j=1}^{m+k(n)}\big{(}\xi_{j}^{[n]}(l)\big{)}^{+}\in[y,y+ c]\Big{)}\leq C\frac{(m+(\lceil\log_{2}(n^{d})\rceil)n^{\alpha_{4}\lambda}}{ \delta^{m\alpha_{3}\lambda}}c^{\theta}+4C_{X}\big{(}m^{2}+(\lceil\log_{2}(n^{ d})\rceil)^{2}\big{)}\frac{\delta^{m\alpha_{3}/2}}{y_{0}\cdot n^{\alpha_{4}/2}}.\] This completes the proof. Now we are ready to provide the proof of Propositions 3.2 and 3.3. Proof of Proposition 3.2.: We start by specifying parameters \(\kappa,r,d,C_{0},\rho_{0}\) and \(\bar{m}\). First, let \(\beta\in[0,2)\) be the Blumenthal-Getoor index of \(X\); see Assumption 1. Fix some \[\beta_{+}\in(\beta,2). \tag{6.22}\] This allows us to pick \(d,r\) large enough such that \[d\geq 2,\qquad 2(r-\beta_{+})\geq 2. \tag{6.23}\] Let \(\lambda>0,\theta\in(0,1]\) be the constants in Assumption 2. First, choose \[\alpha_{3}\in(0,\frac{\theta}{\lambda}),\ \alpha_{4}\in(0,\frac{\theta}{2 \lambda}). \tag{6.24}\] Next, fix \[\alpha_{2}\in(0,\frac{\alpha_{3}}{2}\wedge 1), \tag{6.25}\] and based on the chosen value of \(\alpha_{2}\), fix \[\alpha_{1}\in(0,\frac{\theta\alpha_{2}}{\lambda}). \tag{6.26}\] Fix \[\delta\in(1/\sqrt{2},1). \tag{6.27}\] Since we require \(\alpha_{2}\) to be strictly less than \(1\), it is easy to see the existence of some integer \(\bar{m}\) such that \[\delta^{m\alpha_{2}}-\delta^{m}\geq\frac{\delta^{m\alpha_{2}}}{2}\text{ and }\delta^{m\alpha_{2}}<a\qquad\forall m\geq\bar{m} \tag{6.28}\] where \(a>0\) is the parameter in set \(A\); see Assumption 3. Based on the values of \(\delta,\beta_{+}\), it holds for all \(\kappa\in(0,1)\) close enough to \(0\) that \[\kappa^{2-\beta_{+}}<\frac{1}{2}<\delta^{2} \tag{6.29}\] Lastly, based on all previous choices, it holds for all \(\rho_{1}\in(0,1)\) close enough to \(1\) such that \[\delta^{\alpha_{1}} <\rho_{1}, \tag{6.30}\] \[\frac{\kappa^{2-\beta_{+}}}{\delta^{2}} <\rho_{1}\] (6.31) \[\frac{1}{\sqrt{2\delta}} <\rho_{1}\] (6.32) \[\delta^{\theta\alpha_{2}-\lambda\alpha_{1}} <\rho_{1}\] (6.33) \[\delta^{\theta-\lambda\alpha_{3}} <\rho_{1}\] (6.34) \[\delta^{-\alpha_{2}+\frac{\alpha_{3}}{2}} <\rho_{1}. \tag{6.35}\] Lastly, for any \(\rho_{0}\in(\rho_{1},1)\), by picking a larger \(\bar{m}\) if necessary, we can ensure that \[m^{2}\rho_{1}^{m}\leq\rho_{0}^{m}\qquad\forall m\geq\bar{m}. \tag{6.36}\] We kick off the analysis by characterizing the law of \(J_{n}\) under \(\mathbf{P}\) conditioning on event \(\{\mathcal{D}(\bar{J}_{n})=k\}\), i.e., there are \(k\) jumps over \([0,n]\) in the process \(J_{n}\). 
Note that under \(\mathbf{P}\), the process \(J_{n}\) is a Levy process with generating triplet \((0,0,\nu|_{[n\gamma,\infty)})\). Therefore, \(\mathcal{L}(J_{n}|\{\mathcal{D}(\bar{J}_{n})=k\})=\mathcal{L}(\zeta_{k})\) where \[\zeta_{k}(t)\triangleq\sum_{i=1}^{k}z_{i}\mathbb{1}_{[u_{i},n]}(t)\qquad \forall t\in[0,n],\] \(0<u_{1}<u_{2}<\cdots<u_{k}\) are the order statistics of \(k\) iid copies of \(\text{Unif}(0,n)\), and \(z_{i}\) are iid with law \(\nu\big{(}\cdot\cap[n\gamma,\infty)\big{)}\big{/}\nu[n\gamma,\infty)\). Recall the random functions \(Y_{n}^{*}(\cdot)\) and \(\hat{Y}_{n}^{m}(\cdot)\) in (3.13)(3.23). We now have (for all \(n\geq 1,k,m\geq 0\)) \[\mathbf{P}\Big{(}Y_{n}^{*}(J_{n})\neq\hat{Y}_{n}^{m}(J_{n})\ \Big{|}\ \mathcal{D}(\bar{J}_{n})=k\Big{)} =\mathbf{P}\Big{(}Y_{n}^{*}(\zeta_{k})\neq\hat{Y}_{n}^{m}(\zeta_{k })\Big{)}\] \[\leq\mathbf{P}(u_{1}<n\delta^{m\alpha_{1}})+\mathbf{P}\Big{(}Y_{ n}^{*}(\zeta_{k})\neq\hat{Y}_{n}^{m}(\zeta_{k}),\ u_{1}\geq n\delta^{m\alpha_{1}}\Big{)}.\] For term \(\mathbf{P}(u_{1}<n\delta^{m\alpha_{1}})\), suppose that \(0<u_{1}<u_{2}<\cdots<u_{k}\) are the order statistics of \((v_{i})_{i=1}^{k}\) that are iid copies of \(\text{Unif}(0,n)\), then \[\mathbf{P}(u_{1}<n\delta^{m\alpha_{1}}) =\mathbf{P}(v_{i}<n\delta^{m\alpha_{1}}\text{ for some }i\in[k])\leq k\cdot\mathbf{P}\big{(}\text{Unif}(0,n)<n\delta^{m\alpha_{1}} \big{)}\] \[=k\cdot\delta^{m\alpha_{1}}<k\cdot\rho_{0}^{m}\qquad\text{ due to \eqref{eq:p_n}}.\] Therefore, it suffices to find \(C_{0}\) such that \[\mathbf{P}\Big{(}Y_{n}^{*}(\zeta_{k})\neq\hat{Y}_{n}^{m}(\zeta_{k}),\ u_{1} \geq n\delta^{m\alpha_{1}}\Big{)}\leq C_{0}\rho_{0}^{m}\cdot(k+1)\qquad \forall n\geq 1,m,k\geq 0.\] For notation simplicity, let \(t(n)=\lceil\log_{2}(n^{d})\rceil\). Due to the coupling between \(\xi_{j}^{(i)},\xi_{j}^{(i),m}\) in (3.20)(3.21) and the definitions of \(Y_{n}^{*}(\cdot)\) and \(\hat{Y}_{n}^{m}(\cdot)\) in (3.13)(3.23), we have \[Y_{n}^{*}(\zeta_{k}) =\max_{i\in[k+1]}\underbrace{\mathbb{1}\Big{\{}\sum_{q=1}^{i-1} \sum_{j\geq 0}\xi_{j}^{(q)}+\sum_{q=1}^{i-1}z_{q}+\sum_{j\geq 1}(\xi_{j}^{(i)})^{+} \geq na\Big{\}}}_{\triangleq\hat{Y}_{n}^{(i),m}(\zeta_{k})},\] where (under the convention \(u_{0}=0,u_{k+1}=n\)) \[l_{1}^{(i)}=V_{1}^{(i)}(u_{i+1}-u_{i}),\qquad l_{j}^{(i)}=V_{j}^{(i )}(u_{i+1}-u_{i}-l_{1}^{(i)}-l_{2}^{(i)}-\cdots-l_{j-1}^{(i)})\ \ \forall j\geq 2,\] \[\big{(}\xi_{j}^{(i)},\xi_{j}^{(i),1},\xi_{j}^{(i),2},\xi_{j}^{(i),3},\cdots\big{)}\stackrel{{ d}}{{=}}\Big{(}\Xi_{n}(l_{j}^{(i)} ),\ddot{\Xi}_{n}^{1}(l_{j}^{(i)}),\ddot{\Xi}_{n}^{2}(l_{j}^{(i)}),\ddot{\Xi}_{n }^{3}(l_{j}^{(i)}),\cdots\Big{)}.\] with \(V_{j}^{(i)}\) all being iid copies of \(\text{Unif}(0,1)\). Therefore, \(\mathbf{P}\big{(}Y_{n}^{*}(\zeta_{k})\neq\hat{Y}_{n}^{m}(\zeta_{k}),\ u_{1}\geq n \delta^{m\alpha_{1}}\big{)}\leq\sum_{i=1}^{k+1}\mathbf{P}\big{(}Y_{n}^{(i),*}( \zeta_{k})\neq\hat{Y}_{n}^{(i),m}(\zeta_{k}),\ u_{1}\geq n\delta^{m\alpha_{1}} \big{)}.\) Next, define events \[E_{n,1}^{(i),m}\] \[E_{n,2}^{(i),m}\] \[E_{n,3}^{(i),m}\] Now we observe a few important facts on \(E_{n,1}^{(i),m}\cap E_{n,2}^{(i),m}\cap E_{n,3}^{(i),m}\cap\big{\{}Y_{n}^{(i),* }(\zeta_{k})\neq\hat{Y}_{n}^{(i),m}(\zeta_{k}),\ u_{1}\geq n\delta^{m\alpha_{1}} \big{\}}\). 
First, let \[W_{n}^{(i),*} \triangleq\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}+\sum_{q=1}^ {i-1}z_{q}+\sum_{j\geq 1}(\xi_{j}^{(i)})^{+},\] \[\widetilde{W}_{n}^{(i),m} \triangleq\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}+\sum_{q=1}^ {i-1}z_{q}+\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i)})^{+},\] \[\hat{W}_{n}^{(i),m} \triangleq\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q),m}+\sum_{q=1}^ {i-1}z_{q}+\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i),m})^{+}.\] On \(E_{n,1}^{(i),m}\cap E_{n,2}^{(i),m}\cap E_{n,3}^{(i),m}\), we have \[|W_{n}^{(i),*}-\widetilde{W}_{n}^{(i),m}|\vee|W_{n}^{(i),*}-\hat{W}_{n}^{(i), m}|\vee|\widetilde{W}_{n}^{(i),m}-\hat{W}_{n}^{(i),m}|\leq\delta^{m}/\sqrt{n}.\] However, due to \(Y_{n}^{(i),*}(\zeta_{k})\neq\hat{Y}_{n}^{(i),m}(\zeta_{k})\), we must have \(\widetilde{W}_{n}^{(i),m}\in[na-\frac{\delta^{m}}{\sqrt{n}},na+\frac{\delta^{m }}{\sqrt{n}}]\): otherwise, when combined with \(|W_{n}^{(i),*}-\widetilde{W}_{n}^{(i),m}|\vee|\widetilde{W}_{n}^{(i),m}-\hat{ W}_{n}^{(i),m}|\leq\delta^{m}/\sqrt{n}\), the fact that \(\widetilde{W}_{n}^{(i),m}<na-\frac{\delta^{m}}{\sqrt{n}}\) (resp. \(\widetilde{W}_{n}^{(i),m}>na+\frac{\delta^{m}}{\sqrt{n}}\)) immediately implies that \(\hat{W}_{n}^{(i),m}<na,W_{n}^{(i),*}<na\) (resp. \(\hat{W}_{n}^{(i),m}>na,W_{n}^{(i),*}>na\)), and hence \(Y_{n}^{(i),*}(\zeta_{k})=\hat{Y}_{n}^{(i),m}(\zeta_{k})=0\) (resp. \(Y_{n}^{(i),*}(\zeta_{k})=\hat{Y}_{n}^{(i),m}(\zeta_{k})=1\)). In short, we have shown that \[E_{n,1}^{(i),m}\cap E_{n,2}^{(i),m}\cap E_{n,3}^{(i),m}\cap\big{\{}Y_{n}^{(i),* }(\zeta_{k})\neq\hat{Y}_{n}^{(i),m}(\zeta_{k}),\ u_{1}\geq n\delta^{m\alpha_{1}} \big{\}}\] \[\qquad\subseteq\Big{\{}\widetilde{W}_{n}^{(i),m}\in[na-\frac{ \delta^{m}}{\sqrt{n}},na+\frac{\delta^{m}}{\sqrt{n}}],\ u_{1}\geq n\delta^{m \alpha_{1}}\Big{\}}.\] This decomposition of events leads to \[\mathbf{P}\Big{(}Y_{n}^{(i),*}(\zeta_{k})\neq\hat{Y}_{n}^{(i),m}( \zeta_{k}),\ u_{1}\geq n\delta^{m\alpha_{1}}\Big{)}\] \[\leq\mathbf{P}\Big{(}\big{(}E_{n,1}^{(i),m}\big{)}^{c}\Big{)}+ \mathbf{P}\Big{(}\big{(}E_{n,2}^{(i),m}\big{)}^{c}\Big{)}+\mathbf{P}\Big{(} \big{(}E_{n,3}^{(i),m}\big{)}^{c}\Big{)}+\mathbf{P}\bigg{(}\widetilde{W}_{n}^{ (i),m}\in[na-\frac{\delta^{m}}{\sqrt{n}},na+\frac{\delta^{m}}{\sqrt{n}}],\ u_{1} \geq n\delta^{m\alpha_{1}}\Big{)}. 
\tag{6.37}\] Furthermore, \[\mathbf{P}\bigg{(}\widetilde{W}_{n}^{(i),m}\in[na-\frac{\delta^{m}}{ \sqrt{n}},na+\frac{\delta^{m}}{\sqrt{n}}],\ u_{1}\geq n\delta^{m\alpha_{1}}\bigg{)}\] \[=\mathbf{P}\bigg{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}+ \sum_{q=1}^{i-1}z_{q}\in[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}],\ \widetilde{W}_{n}^{(i),m}\in[na-\frac{\delta^{m}}{\sqrt{n}},na+\frac{\delta^{m}} {\sqrt{n}}],\ u_{1}\geq n\delta^{m\alpha_{1}}\bigg{)}\] \[\qquad+\mathbf{P}\bigg{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}+ \sum_{q=1}^{i-1}z_{q}\notin[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}], \ \widetilde{W}_{n}^{(i),m}\in[na-\frac{\delta^{m}}{\sqrt{n}},na+\frac{\delta^{m}} {\sqrt{n}}],\ u_{1}\geq n\delta^{m\alpha_{1}}\bigg{)}\] \[\leq\mathbf{P}\bigg{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}+ \sum_{q=1}^{i-1}z_{q}\in[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}],\ u_{1}\geq n\delta^{m\alpha_{1}}\bigg{)}\] \[\qquad+\mathbf{P}\bigg{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)} +\sum_{q=1}^{i-1}z_{q}\notin[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}], \ \widetilde{W}_{n}^{(i),m}\in[na-\frac{\delta^{m}}{\sqrt{n}},na+\frac{\delta^{m}} {\sqrt{n}}]\bigg{)}\] \[\leq\mathbf{P}\Big{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)} +\sum_{q=1}^{i-1}z_{q}\in[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}],\ u_{1}\geq n\delta^{m\alpha_{1}}\Big{)}\] \[\qquad+\int_{\mathbb{R}\setminus[na-\delta^{m\alpha_{2}},na+ \delta^{m\alpha_{2}}]}\mathbf{P}\bigg{(}\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i)})^{+ }\in[na-x-\frac{\delta^{m}}{\sqrt{n}},na-x+\frac{\delta^{m}}{\sqrt{n}}]\bigg{)} \mathbf{P}\Big{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}+\sum_{q=1}^{i-1}z_{q}\in dx \Big{)}\] \[=\mathbf{P}\Big{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)} +\sum_{q=1}^{i-1}z_{q}\in[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}], \ u_{1}\geq n\delta^{m\alpha_{1}}\Big{)}\] \[\qquad+\int_{(-\infty,na-\delta^{m\alpha_{2}}]}\mathbf{P}\bigg{(} \sum_{j=1}^{m+t(n)}(\xi_{j}^{(i)})^{+}\in[na-x-\frac{\delta^{m}}{\sqrt{n}},na- x+\frac{\delta^{m}}{\sqrt{n}}]\bigg{)}\mathbf{P}\Big{(}\sum_{q=1}^{i-1}\sum_{j \geq 0}\xi_{j}^{(q)}+\sum_{q=1}^{i-1}z_{q}\in dx\Big{)}.\] In the last equality of the display above, we applied the simple fact that \(\mathbf{P}\big{(}\sum_{j\geq 1}(\xi_{j}^{(i)})^{+}\geq 0\big{)}=1\). 
Furthermore, we claim that for all \(i\in[k+1]\), \(n\geq 1\), and \(m\geq\bar{m}\), \[\mathbf{P}\Big{(}\big{(}E_{n,1}^{(i),m}\big{)}^{c}\Big{)}=\mathbf{P} \bigg{(}\Big{|}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}-\sum_{q=1}^{i-1}\sum_{j \geq 0}\xi_{j}^{(q),m}\Big{|}>\frac{\delta^{m}}{\sqrt{n}}\bigg{)} \leq C_{1}\rho_{0}^{m}, \tag{6.38}\] \[\mathbf{P}\Big{(}\big{(}E_{n,2}^{(i),m}\big{)}^{c}\Big{)}=\mathbf{ P}\bigg{(}\Big{|}\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i)})^{+}-\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i),m})^{+}\Big{|}>\frac{\delta^{m}}{3\sqrt{n}}\bigg{)} \leq C_{2}\rho_{0}^{m},\] (6.39) \[\mathbf{P}\Big{(}\big{(}E_{n,3}^{(i),m}\big{)}^{c}\Big{)}=\mathbf{ P}\bigg{(}\sum_{j\geq m+t(n)+1}(\xi_{j}^{(i)})^{+}>\frac{\delta^{m}}{3\sqrt{n}}\bigg{)} \leq C_{3}\rho_{0}^{m},\] (6.40) \[\mathbf{P}\Big{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}+ \sum_{q=1}^{i-1}z_{q}\in[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}],\ u_{1}\geq n\delta^{m\alpha_{1}}\Big{)} \leq C_{4}\rho_{0}^{m},\] (6.41) \[\mathbf{P}\bigg{(}\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i)})^{+}\in[y,y+ \frac{2\delta^{m}}{\sqrt{n}}]\bigg{)} \leq C_{5}\rho_{0}^{m}\qquad\forall y\geq\delta^{m\alpha_{2}}- \frac{\delta^{m}}{\sqrt{n}} \tag{6.42}\] where the values of constants \(C_{1},\cdots,C_{5}\) do not depend on \(n,m,k,i\). Then by setting \(C_{0}=\sum_{j=1}^{5}C_{j}\), we have in (6.37) that \(\mathbf{P}\big{(}Y_{n}^{(i),*}(\zeta_{k})\neq\hat{Y}_{n}^{(i),m}(\zeta_{k}),\ u_{1}\geq n \delta^{m\alpha_{1}}\big{)}\leq C_{0}\rho^{m}\) for all \(n\geq 1,m\geq\bar{m},i\in[n-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}]\). \([k+1]\). In conclusion, we yield (for all \(n\geq 1,\ m\geq\bar{m}\)) \[\mathbf{P}\Big{(}Y_{n}^{*}(\zeta_{k})\neq\hat{Y}_{n}^{m}(\zeta_{k} ),\ u_{1}\geq n\delta^{m\alpha_{1}}\Big{)}\] \[\leq\sum_{i=1}^{k+1}\mathbf{P}\big{(}Y_{n}^{(i),*}(\zeta_{k})\neq \hat{Y}_{n}^{(i),m}(\zeta_{k}),\ u_{1}\geq n\delta^{m\alpha_{1}}\big{)}\leq C_{ 0}\rho^{m}\cdot(k+1)\] and conclude the proof. Now, it only remains to prove claims (6.38)(6.39)(6.40)(6.41)(6.42). **Proof of Claim** (6.38) From the coupling between \(\xi_{j}^{(i)},\xi_{j}^{(i),m}\) in (3.20)(3.21), we have \[\Big{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)},\sum_{q=1}^{i-1}\sum_{j \geq 0}\xi_{j}^{(q),m}\Big{)}\stackrel{{ d}}{{=}}\big{(}\Xi_{n}(u_{i-1}), \bar{\Xi}_{n}^{m}(u_{i-1})\big{)}\] where the laws of processes \(\Xi_{n},\bar{\Xi}_{n}^{m}\) are stated in (3.15)(3.17). Applying Lemma 6.4, we yield (for all \(n\geq 1,m,k\geq 0,i\in[k+1]\)) \[\mathbf{P}\Big{(}\Big{|}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{( q)}-\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q),m}\Big{|}>\frac{\delta^{m}}{ 3\sqrt{n}}\Big{)}\] \[=\mathbf{P}\Big{(}\big{|}\Xi_{n}(u_{i-1})-\bar{\Xi}_{n}^{m}(u_{i -1})|>\frac{\delta^{m}}{3\sqrt{n}}\Big{)}\leq\mathbf{P}\Big{(}\sup_{t\in[0,n]} |\Xi_{n}(t)-\bar{\Xi}_{n}^{m}(t)|>\frac{\delta^{m}}{3\sqrt{n}}\Big{)}\] \[\leq\frac{9C_{\beta_{+}}n}{\delta^{2m}\cdot n^{2(r-\beta_{+})-1} }\cdot\kappa^{m(2-\beta_{+})}=9C_{\beta_{+}}\cdot\frac{1}{n^{2(r-\beta_{+})-2 }}\cdot\left(\frac{\kappa^{2-\beta_{+}}}{\delta^{2}}\right)^{m}\] \[\leq 9C_{\beta_{+}}\cdot\Big{(}\frac{\kappa^{2-\beta_{+}}}{ \delta^{2}}\Big{)}^{m}\qquad\text{ due to \eqref{eq:C_1} and }\rho_{0}\in(\rho_{1},1)\] where \(C_{\beta_{+}}<\infty\) is the constant characterized in Lemma 6.4 that only depends on \(\beta_{+}\) and the law of the Levy process \(X\). To conclude the proof of claim (6.38), we pick \(C_{1}=9C_{\beta_{+}}\). 
**Proof of Claim** (6.39) It follows directly from Lemma 6.5 that \[\mathbf{P}\Big{(}\Big{|}\sum_{j=1}^{m+t}(\xi_{j}^{(i)})^{+}-\sum _{j=1}^{m+t}(\xi_{j}^{(i),m})^{+}\Big{|}>\frac{\delta^{m}}{3\sqrt{n}}\Big{)}\] \[\leq\frac{9C_{\beta_{+}}n}{\delta^{2m}\cdot n^{2(r-\beta_{+})-1} }\cdot\kappa^{m(2-\beta_{+})}=9C_{\beta_{+}}\cdot\frac{1}{n^{2(r-\beta_{+})-2 }}\cdot\left(\frac{\kappa^{2-\beta_{+}}}{\delta^{2}}\right)^{m}\] \[\leq 9C_{\beta_{+}}\cdot\left(\frac{\kappa^{2-\beta_{+}}}{ \delta^{2}}\right)^{m}\qquad\text{ due to \eqref{eq:C_1} and }\rho_{0}\in(\rho_{1},1)\] where \(C_{\beta_{+}}<\infty\) is the constant characterized in Lemma 6.5 that only depends on \(\beta_{+}\) and the law of the Levy process \(X\). To conclude the proof of claim (6.39), we pick \(C_{2}=9C_{\beta_{+}}\). **Proof of Claim** (6.40) Using Lemma 6.6, we get \[\mathbf{P}\Big{(}\sum_{j\geq m+t(n)+1}(\xi_{j}^{(i)})^{+}>\frac{\delta^{m}}{3 \sqrt{n}}\Big{)}\leq\frac{3C_{X}\sqrt{n}}{\delta^{m}}\cdot\left[\sqrt{\frac{1} {n^{d-1}\cdot 2^{m}}}+\frac{1}{n^{d-1}\cdot 2^{m}}\right]\leq\frac{6C_{X}\sqrt{n}}{ \delta^{m}}\cdot\sqrt{\frac{1}{n^{d-1}\cdot 2^{m}}}\] \[=6C_{X}\cdot\sqrt{\frac{1}{n^{d-2}}\cdot\left(\frac{1}{\sqrt{2}\delta} \right)^{m}}\] \[\leq 6C_{X}\cdot\left(\frac{1}{\sqrt{2}\delta}\right)^{m}\qquad \text{due to \eqref{eq:C_X}}\] \[\leq 6C_{X}\cdot\rho_{0}^{m}\qquad\text{due to \eqref{eq:C_X} and }\rho_{0}\in(\rho_{1},1)\] where \(C_{X}<\infty\) is the constant characterized in Lemma 6.6 that only depends on the law of the Levy process \(X\). By setting \(C_{3}=6C_{X}\), we conclude the proof of claim (6.40). **Proof of Claim** (6.41) Due to the independence between \(z_{i}\) and \(\xi_{j}^{(i)}\), \[\mathbf{P}\Big{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}+ \sum_{q=1}^{i-1}z_{q}\in[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}],\ u_{1}\geq n\delta^{m\alpha_{1}}\Big{)}\] \[=\int_{\mathbb{R}}\mathbf{P}\Big{(}\sum_{q=1}^{i-1}\sum_{j\geq 0 }\xi_{j}^{(q)}\in[na-x-\delta^{m\alpha_{2}},na-x+\delta^{m\alpha_{2}}],\ u_{1}\geq n \delta^{m\alpha_{1}}\Big{)}\mathbf{P}(\sum_{q=1}^{i-1}z_{q}\in dx)\] \[\leq\int_{\mathbb{R}}\mathbf{P}\Big{(}\sum_{q=1}^{i-1}\sum_{j \geq 0}\xi_{j}^{(q)}\in[na-x-\delta^{m\alpha_{2}},na-x+\delta^{m\alpha_{2}}] \Big{|}\ u_{1}\geq n\delta^{m\alpha_{1}}\Big{)}\mathbf{P}(\sum_{q=1}^{i-1}z_{q }\in dx)\] \[=\int_{\mathbb{R}}\mathbf{P}\Big{(}X^{<n\gamma}(u_{i-1})\in[na-x- \delta^{m\alpha_{2}},na-x+\delta^{m\alpha_{2}}]\Big{|}\ u_{1}\geq n\delta^{m \alpha_{1}}\Big{)}\mathbf{P}(\sum_{q=1}^{i-1}z_{q}\in dx)\] where \((u_{i})_{i=1}^{k}\) are independent of the Levy process \(X^{<n\gamma}\); see the law of \(\xi_{j}^{(i)}\) in (3.20). Now let us consider two different cases depending on the value of \(i\in[k+1]\). If \(i\geq 2\), then due to the nature of the order statistics \(0=u_{0}<u_{1}<u_{2}<\cdots<u_{k}<u_{k+1}=1\), on event \(\{u_{1}\geq n\delta^{m\alpha_{1}}\}\) we must have \(u_{i-1}\geq u_{1}\geq n\delta^{m\alpha_{1}}\). 
It then follows directly from Assumption 2 that \[\mathbf{P}\Big{(}X^{<n\gamma}(u_{i-1})\in[na-x-\delta^{m\alpha_{ 2}},na-x+\delta^{m\alpha_{2}}]\Big{|}\ u_{1}\geq n\delta^{m\alpha_{1}}\Big{)}\] \[\leq\frac{C}{n^{\lambda}\cdot\delta^{m\alpha_{1}\lambda}}\cdot(2 \delta^{m\alpha_{2}})^{\theta}\leq 2^{\theta}C\cdot\left(\frac{\delta^{ \theta\alpha_{2}}}{\delta^{\lambda\alpha_{1}}}\right)^{m}\] \[\leq 2^{\theta}C\cdot\rho_{0}^{m}\qquad\text{due to \eqref{eq:C_X} and }\rho_{0}\in(\rho_{1},1).\] In case that \(i=1\), we get \[\mathbf{P}\Big{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}+ \sum_{q=1}^{i-1}z_{q}\in[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}],\ u_{1}\geq n\delta^{m\alpha_{1}}\Big{)}= 1\Big{(}0\in[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}]\Big{)}.\] For any \(m\geq\bar{m}\), due to (6.28) we must have \(na-\delta^{m\alpha_{2}}\geq a-\delta^{m\alpha_{2}}0\) for all \(n\geq 1,m\geq\bar{m}\), thus implying \(1\big{(}0\in[na-\delta^{m\alpha_{2}},na+\delta^{m\alpha_{2}}]\big{)}=0.\) To conclude, it suffices to pick \(C_{4}=2^{\theta}C\) and note that \(C<\infty,\theta\in(0,1]\) are constants in Assumption 2 that only depend on the law of the Levy process \(X\). **Proof of Claim** (6.42) Applying Lemma 6.7 with \(y_{0}=\delta^{m\alpha_{2}}-\frac{\delta^{m}}{\sqrt{n}}\) and \(c=\frac{2\delta^{m}}{\sqrt{n}}\), we get (for all \(n\geq 1,m\geq\bar{m},y\geq y_{0}\)) \[\mathbf{P}\bigg{(}\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i)})^{+}\in[y,y+ \frac{2\delta^{m}}{\sqrt{n}}]\bigg{)}\] \[\leq C\frac{(m+(\lceil\log_{2}(n^{d})\rceil)n^{\alpha_{4}\lambda} }{\delta^{m\alpha_{3}\lambda}}\cdot\frac{2^{\theta}\delta^{m\theta}}{n^{\theta/2 }}+4C_{X}\big{(}m^{2}+(\lceil\log_{2}(n^{d})\rceil)^{2}\big{)}\frac{\delta^{m \alpha_{3}/2}}{(\delta^{m\alpha_{2}}-\frac{\delta^{m}}{\sqrt{n}})\cdot n^{ \alpha_{4}/2}}\] \[\leq C\frac{(m+(\lceil\log_{2}(n^{d})\rceil)n^{\alpha_{4}\lambda}}{ \delta^{m\alpha_{3}\lambda}}\cdot\frac{2^{\theta}\delta^{m\theta}}{n^{\theta/2}}+ 8C_{X}\big{(}m^{2}+(\lceil\log_{2}(n^{d})\rceil)^{2}\big{)}\frac{\delta^{m\alpha _{3}/2}}{\delta^{m\alpha_{2}}\cdot n^{\alpha_{4}/2}}\] \[\qquad\text{due to (\ref{eq:C3})}\] \[=\underbrace{2^{\theta}C\cdot\frac{m}{n^{\frac{\theta}{2}- \lambda\alpha_{4}}}\cdot\left(\frac{\delta^{\theta}}{\delta^{\lambda\alpha_{3}} }\right)^{m}}_{\triangleq_{p_{n,m,1}}}+\underbrace{2^{\theta}C\cdot\frac{ \lceil\log_{2}(n^{d})\rceil}{n^{\frac{\theta}{2}-\lambda\alpha_{4}}}\cdot \left(\frac{\delta^{\theta}}{\delta^{\lambda\alpha_{3}}}\right)^{m}}_{ \triangleq_{p_{n,m,2}}}\] \[\qquad+\underbrace{8C_{X}\cdot\frac{m^{2}}{n^{\alpha_{4}/2}} \cdot\left(\frac{\delta^{\alpha_{3}/2}}{\delta^{\alpha_{2}}}\right)^{m}}_{ \triangleq_{p_{n,m,3}}}+\underbrace{8C_{X}\cdot\frac{\left(\lceil\log_{2}(n^{d })\rceil\right)^{2}}{n^{\alpha_{4}/2}}\cdot\left(\frac{\delta^{\alpha_{3}/2}} {\delta^{\alpha_{2}}}\right)^{m}}_{\triangleq_{p_{n,m,4}}}\] where \(C_{X}<\infty\) is the constant characterized in Lemma 6.3 that only depends on the law of Levy process \(X\), and \(C\in(0,\infty),\lambda>0,\theta\in(0,1]\) be the constants in Assumption 2. 
For term \(p_{n,m,1}\), note that for any \(n\geq 1,m\geq\bar{m}\), \[p_{n,m,1} \leq 2^{\theta}C\cdot m\bigg{(}\frac{\delta^{\theta}}{\delta^{ \lambda\alpha_{3}}}\bigg{)}^{m}\qquad\text{due to }\frac{\theta}{2}>\lambda\alpha_{4};\, \text{see (\ref{eq:C3})}\] \[\leq 2^{\theta}C\cdot m\rho_{1}^{m}\qquad\text{due to (\ref{eq:C3})}\] \[\leq 2^{\theta}C\cdot\rho_{0}^{m}\qquad\text{due to (\ref{eq:C3})}.\] For term \(p_{n,m,2}\), note that \(\frac{\lceil\log_{2}(n^{d})\rceil}{n^{\frac{\theta}{2}-\lambda\alpha_{4}}}\to 0\) as \(n\to\infty\) due to, again, \(\frac{\theta}{2}>\lambda\alpha_{4}\). This allows us to fix some \(C_{d,1}<\infty\) such that \(\sup_{n=1,2,\cdots}\frac{\lceil\log_{2}(n^{d})\rceil}{n^{\frac{\theta}{2}- \lambda\alpha_{4}}}\leq C_{d,1}\). As a result, for any \(n\geq 1,m\geq 0\), \[p_{n,m,2}\leq 2^{\theta}CC_{d,1}\cdot\left(\frac{\delta^{\theta}}{\delta^{ \lambda\alpha_{3}}}\right)^{m}\leq 2^{\theta}CC_{d,1}\cdot\rho_{0}^{m}\qquad\text{ due to (\ref{eq:C3}) and }\rho_{0}\in(\rho_{1},1).\] Similarly, for all \(n\geq 1,m\geq\bar{m}\), \[p_{n,m,3} \leq 8C_{X}\cdot m^{2}\bigg{(}\frac{\delta^{\alpha_{3}/2}}{\delta ^{\alpha_{2}}}\bigg{)}^{m}\leq 8C_{X}\cdot m^{2}\rho_{1}^{m}\qquad\text{due to (\ref{eq:C3})}\] \[\leq 8C_{X}\cdot\rho_{0}^{m}\qquad\text{due to (\ref{eq:C3})}.\] Besides, due to \(\frac{(\lceil\log_{2}(n^{d})\rceil)^{2}}{n^{\alpha_{4}/2}}\to 0\) as \(n\to\infty\), we can find \(C_{d,2}<\infty\) such that \(\sup_{n=1,2,\cdots}\frac{(\lceil\log_{2}(n^{d})\rceil)^{2}}{n^{\alpha_{4}/2}} \leq C_{d,2}\). This leads to (for all \(n\geq 1,m\geq 0\)) \[p_{n,m,4}\leq 8C_{X}C_{d,2}\cdot\left(\frac{\delta^{\alpha_{3}/2}}{\delta^{ \alpha_{2}}}\right)^{m}\leq 8C_{X}C_{d,2}\cdot\rho_{0}^{m}\qquad\text{ due to (\ref{eq:C3}) and }\rho_{0}\in(\rho_{1},1).\] To conclude the proof, one can simply set \(C_{5}=\max\{2^{\theta}C,\ 2^{\theta}CC_{d,1},\ 8C_{X},\ 8C_{X}C_{d,2}\}\). Proof of Proposition 3.3.: Throughout this proof, we only consider \(\rho_{0}\in(0,1)\) large enough such that \[\rho_{0}>\frac{1}{\sqrt{2}},\qquad\rho_{0}>\kappa^{2-\beta_{+}}. \tag{6.43}\] Under \(\mathbf{P}\), the process \(J_{n}\) is a Levy process with generating triplet \((0,0,\nu|_{[n\gamma,\infty)})\). Therefore, \(\mathcal{L}(J_{n}|\{\mathcal{D}(J_{n})=k\})=\mathcal{L}(\zeta_{k})\) where \[\zeta_{k}(t)\triangleq\sum_{i=1}^{k}z_{i}\mathbbm{1}_{[u_{i},n]}(t)\qquad\forall t \in[0,n],\] \(0<u_{1}<u_{2}<\cdots<u_{k}\) are the order statistics of \(k\) iid copies of \(\mathrm{Unif}(0,n)\), and \(z_{i}\) are iid with law \(\nu\big{(}\,\cdot\,\cap[n\gamma,\infty)\big{)}/\nu[n\gamma,\infty)\). For notation simplicity, let \(t(n)=\lceil\log_{2}(n^{d})\rceil\). Due to the coupling between \(\xi_{j}^{(i)}\), \(\xi_{j}^{(i),m}\) in (3.20)(3.21) and the definitions of \(Y_{n}^{*}(\cdot)\) and \(\hat{Y}_{n}^{m}(\cdot)\) in (3.13)(3.23), we have \[Y_{n}^{*}(\zeta_{k})=\max_{i\in[k+1]}\mathbbm{1}\big{\{}W_{n}^{(i),*}\geq na \big{\}},\qquad\hat{Y}_{n}^{m}(\zeta_{k})=\max_{i\in[k+1]}\mathbbm{1}\big{\{} \hat{W}_{n}^{(i),m}\geq na\big{\}}\] where \[W_{n}^{(i),*} \triangleq\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}+\sum_{q=1}^ {i-1}z_{q}+\sum_{j\geq 1}(\xi_{j}^{(i)})^{+},\] \[\hat{W}_{n}^{(i),m} \triangleq\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q),m}+\sum_{q=1} ^{i-1}z_{q}+\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i),m})^{+}.\] In particular, on event \(\{\mathcal{D}(\bar{J}_{n})=k\}\) (i.e., \(J_{n}\) admits the representation of \(\zeta_{k}\) over \([0,n]\)), we have \(\sup_{t\in[0,n]}X(t)=\max_{i=1}^{k+1}W_{n}^{(i),*}\). 
As a result, for all \(n\geq 1\), \(m\geq 0\), and \(k=0,1,\cdots,l^{*}-1\), \[\mathbf{P}\Big{(}Y_{n}^{*}(J_{n})\neq\hat{Y}_{n}^{m}(J_{n}),\ \bar{X}_{n}\notin A^{\Delta}\ \Big{|}\ \mathcal{D}(\bar{J}_{n})=k\Big{)} =\mathbf{P}\Big{(}Y_{n}^{*}(J_{n})\neq\hat{Y}_{n}^{m}(J_{n}),\ \sup_{t\in[0,n]}X(t)<n(a-\Delta)\ \Big{|}\ \mathcal{D}(\bar{J}_{n})=k\Big{)}\] \[=\mathbf{P}\Big{(}\max_{i\in[k+1]}\hat{W}_{n}^{(i),m}\geq na,\ \max_{i\in[k+1]}W_{n}^{(i),*}<n(a-\Delta)\Big{)}\] \[\leq\sum_{i\in[k+1]}\mathbf{P}\Big{(}\big{|}\hat{W}_{n}^{(i),m}-W _{n}^{(i),*}\big{|}>n\Delta\Big{)}. \tag{6.44}\] To proceed, define events \[E_{n,1}^{(i),m} \triangleq\bigg{\{}\Big{|}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q )}-\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q),m}\Big{|}\leq n\Delta/3\bigg{\}},\] \[E_{n,2}^{(i),m} \triangleq\bigg{\{}\Big{|}\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i)})^{+} -\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i),m})^{+}\Big{|}\leq n\Delta/3\bigg{\}},\] \[E_{n,3}^{(i),m} \triangleq\bigg{\{}\sum_{j\geq m+t(n)+1}(\xi_{j}^{(i)})^{+}\leq n \Delta/3\bigg{\}}.\] Note that on \(E_{n,1}^{(i),m}\cap E_{n,2}^{(i),m}\cap E_{n,3}^{(i),m}\) we must have \(\big{|}\hat{W}_{n}^{(i),m}-W_{n}^{(i),*}\big{|}\leq n\Delta\), which implies \(\{|\hat{W}_{n}^{(i),m}-W_{n}^{(i),*}|>n\Delta\}\cap\big{(}E_{n,1}^{(i),m}\cap E _{n,2}^{(i),m}\cap E_{n,3}^{(i),m}\big{)}=\emptyset\) and hence \[\{|\hat{W}_{n}^{(i),m}-W_{n}^{(i),*}|>n\Delta\}\subseteq\big{(}E_{n,1}^{(i),m }\cap E_{n,2}^{(i),m}\cap E_{n,3}^{(i),m}\big{)}^{c}=\big{(}E_{n,1}^{(i),m} \big{)}^{c}\cup\big{(}E_{n,2}^{(i),m}\big{)}^{c}\cup\big{(}E_{n,3}^{(i),m} \big{)}^{c}. \tag{6.45}\] Furthermore, we claim that for all \(n\geq 1,m\geq 0,k=0,1,\cdots,l^{*}-1\) and \(i\in[k+1]\), \[\mathbf{P}\Big{(}\big{(}E_{n,1}^{(i),m}\big{)}^{c}\Big{)}=\mathbf{P}\Big{(} \Big{|}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)}-\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q),m} \Big{|}>n\Delta/3\Big{)}\leq\frac{C_{1}\rho_{0}^{m}}{n^{\mu}\Delta^{2}}, \tag{6.46}\] \[\mathbf{P}\Big{(}\big{(}E_{n,2}^{(i),m}\big{)}^{c}\Big{)}=\mathbf{P}\Big{(} \Big{|}\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i)})^{+}-\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i),m })^{+}\Big{|}>n\Delta/3\Big{)}\leq\frac{C_{2}\rho_{0}^{m}}{n^{\mu}\Delta^{2}}, \tag{6.47}\] \[\mathbf{P}\Big{(}\big{(}E_{n,3}^{(i),m}\big{)}^{c}\Big{)}=\mathbf{P}\Big{(} \sum_{j\geq m+t(n)+1}(\xi_{j}^{(i)})^{+}>n\Delta/3\Big{)}\leq\frac{C_{3}\rho_ {0}^{m}}{n^{\mu}\Delta^{2}} \tag{6.48}\] where the constants \(C_{1},C_{2},C_{3}\) do not depend on \(n,m,k,i\). By combining these claims with the decomposition of events (6.45), we have in (6.44) that (with \(C^{\prime}=C_{1}+C_{2}+C_{3}\)) \[\mathbf{P}\Big{(}Y_{n}^{*}(J_{n})\neq\hat{Y}_{n}^{m}(J_{n}),\ \bar{X}_{n}\notin A^{\Delta}\ \Big{|}\ \mathcal{D}(\bar{J}_{n})=k\Big{)}\leq(k+1)\frac{C^{\prime}\rho_{0}^{m}}{n^{ \mu}\Delta^{2}}\leq l^{*}\cdot\frac{C^{\prime}\rho_{0}^{m}}{n^{\mu}\Delta^{2}}\] for all \(n\geq 1,m\geq 0,k=0,1,\cdots,l^{*}-1\). By setting \(C_{0}=l^{*}C^{\prime}\) we conclude the proof. Now it only remains to prove (6.46)(6.47)(6.48). **Proof of Claim** (6.46) From the coupling between \(\xi_{j}^{(i)},\xi_{j}^{(i),m}\) in (3.20)(3.21), we have \[\Big{(}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q)},\sum_{q=1}^{i-1}\sum_{j \geq 0}\xi_{j}^{(q),m}\Big{)}\stackrel{{ d}}{{=}}\big{(}\Xi_{n}(u_{i-1}), \tilde{\Xi}_{n}^{m}(u_{i-1})\big{)}\] where the laws of processes \(\Xi_{n},\tilde{\Xi}_{n}^{m}\) are stated in (3.15)(3.17). 
Applying Lemma 6.4, we yield (for all \(n\geq 1,m,k\geq 0,i\in[k+1]\)) \[\mathbf{P}\Big{(}\Big{|}\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q )}-\sum_{q=1}^{i-1}\sum_{j\geq 0}\xi_{j}^{(q),m}\Big{|}>n\Delta/3\Big{)}\] \[=\mathbf{P}\Big{(}|\Xi_{n}(u_{i-1})-\tilde{\Xi}_{n}^{m}(u_{i-1})| >n\Delta/3\Big{)}\leq\mathbf{P}\Big{(}\sup_{t\in[0,n]}|\Xi_{n}(t)-\tilde{\Xi} _{n}^{m}(t)|>n\Delta/3\Big{)}\] \[\leq\frac{9C_{\beta_{+}}}{\Delta^{2}\cdot n^{2(r-\beta_{+})+1}} \cdot\kappa^{m(2-\beta_{+})}\] \[\leq\frac{9C_{\beta_{+}}}{\Delta^{2}}\cdot\frac{\rho_{0}^{m}}{n^ {\mu}}\qquad\text{ due to }2(r-\beta_{+})+1>\mu\text{ and }\kappa^{2-\beta_{+}}<\rho_{0}\] where \(C_{\beta_{+}}<\infty\) is the constant characterized in Lemma 6.4 that only depends on \(\beta_{+}\) and the law of the Levy process \(X\). To conclude the proof of claim (6.46), we pick \(C_{1}=9C_{\beta_{+}}\). **Proof of Claim** (6.47) It follows directly from Lemma 6.5 that \[\mathbf{P}\Big{(}\Big{|}\sum_{j=1}^{m+t(n)}(\xi_{j}^{(i)})^{+}- \sum_{j=1}^{m+t(n)}(\xi_{j}^{(i),m})^{+}\Big{|}>n\Delta/3\Big{)} \leq\frac{9C_{\beta_{+}}\kappa^{m(2-\beta_{+})}}{\Delta^{2}\cdot n ^{2(r-\beta_{+})-1}}=\frac{9C_{\beta_{+}}\kappa^{m(2-\beta_{+})}}{\Delta^{2} \cdot n^{2(r-\beta_{+})+1}}\] \[\leq\frac{9C_{\beta_{+}}}{\Delta^{2}}\cdot\frac{\rho_{0}^{m}}{n^ {\mu}}.\] In the last line of the display above, we again applied \(2(r-\beta_{+})+1>\mu\) and \(\kappa^{2-\beta_{+}}<\rho_{0}\); here \(C_{\beta_{+}}<\infty\) is the constant characterized in Lemma 6.5 that only depends on \(\beta_{+}\) and the law of the Levy process \(X\). To conclude the proof of claim (6.47), we pick \(C_{2}=9C_{\beta_{+}}\). **Proof of Claim** (6.48) Using Lemma 6.6, we get \[\mathbf{P}\Big{(}\sum_{j\geq m+t(n)+1}(\xi_{j}^{(i)})^{+}>n\Delta /3\Big{)} \leq\frac{3C_{X}}{n\Delta}\cdot\left[\sqrt{\frac{1}{n^{d-1}\cdot 2^{m}}}+ \frac{1}{n^{d-1}\cdot 2^{m}}\right]\leq\frac{6C_{X}}{n\Delta}\cdot\sqrt{\frac{1}{n^{d-1} \cdot 2^{m}}}\] \[=\frac{6C_{X}}{\Delta}\cdot\frac{(\sqrt{1/2})^{m}}{n^{\frac{d+1}{ 2}}}\] \[\leq\frac{6C_{X}}{\Delta}\cdot\frac{\rho_{0}^{m}}{n^{\mu}}\qquad \text{ due to }\frac{d+1}{2}>\mu\text{ and }\frac{1}{\sqrt{2}}<\rho_{0}\] \[<\frac{6C_{X}}{\Delta^{2}}\cdot\frac{\rho_{0}^{m}}{n^{\mu}}\qquad\text{ due to }\Delta\in(0,1)\] where \(C_{X}<\infty\) is the constant characterized in Lemma 6.6 that only depends on the law of the Levy process \(X\). By setting \(C_{3}=6C_{X}\), we conclude the proof of claim (6.48). ### Proofs of Propositions 4.1 and 4.3 The proof of Proposition 4.1 is based on the inversion formula of the characteristic functions (see, e.g., theorem 3.3.5. of [23]). Specifically, we compare the characteristic function of \(Y(t)\) with an \(\alpha\)-stable process and establish the similarities between their laws. Proof of Proposition 4.1.: Applying Levy-Khintchine formula, we yield the following formula for the characteristic function of \(\varphi_{t}(z)=\mathbf{E}\exp(izY(t))\): \[\varphi_{t}(z)=\exp\Big{(}t\int_{(0,z_{0})}\big{[}\underbrace{\exp(izx)-1-izx \mathbbm{1}_{(0,1]}(x)}_{\stackrel{{\Delta}}{{=}}\phi(z,x)} \big{]}\nu(dx)\Big{)}\qquad\forall z\in\mathbb{R},\ t>0.\] Note that for \(\phi(z,x)\) and its complex conjugate \(\overline{\phi(z,x)}\), we have \[\phi(z,x) =\cos(zx)-1+i\big{(}\sin(zx)-zx\mathbbm{1}_{(0,1]}(x)\big{)},\] \[\overline{\phi(z,x)} =\cos(zx)-1-i\big{(}\sin(zx)-zx\mathbbm{1}_{(0,1]}(x)\big{)}.\] As a result, \[|\varphi_{t}(z)|=\exp\Big{(}-t\int_{(0,z_{0})}\big{(}1-\cos(zx)\big{)}\nu(dx) \Big{)}\qquad\forall z\in\mathbb{R},\ t>0. 
\tag{6.49}\] Furthermore, we claim the existence of some \(\widetilde{M},\widetilde{C}\in(0,\infty)\) such that \[\int_{(0,z_{0})}\big{(}1-\cos(zx)\big{)}\nu(dx)\geq\widetilde{C}|z|^{\alpha} \qquad\forall|z|\geq\widetilde{M}. \tag{6.50}\] By plugging (6.50) in (6.49), we obtain that for all \(|z|\geq\widetilde{M}\) and \(t>0\), \(|\varphi_{t}(z)|\leq\exp(-t\widetilde{C}|z|^{\alpha}).\) It then follows directly from the Inversion Formula (see theorem 3.3.5. of [23]) that for all \(t>0\), \[\|f_{Y_{t}}\|_{\infty} \leq\frac{1}{2\pi}\int|\varphi_{t}(z)|dz\] \[\leq\frac{1}{2\pi}\Big{(}2\widetilde{M}+\int_{|z|\geq\widetilde{ M}}\exp\big{(}-t\widetilde{C}|z|^{\alpha}\big{)}dz\Big{)}\] \[=\frac{1}{2\pi}\Big{(}2\widetilde{M}+\frac{1}{t^{1/\alpha}}\int \exp(-\widetilde{C}|x|^{\alpha})dx\Big{)}\qquad\text{by letting }x=zt^{1/\alpha}\] \[=\frac{\widetilde{M}}{\pi}+\frac{C_{1}}{t^{1/\alpha}}\qquad\text{ where }C_{1}=\frac{1}{2\pi}\int\exp(-\widetilde{C}|x|^{\alpha})dx<\infty.\] To conclude the proof, one only needs to pick \(C=\frac{\widetilde{M}}{\pi}+C_{1}\). Now, it only remains to prove (6.50). **Proof of Claim** (6.50) We start by fixing some constants. Let \[C_{0}=\int_{0}^{\infty}(1-\cos y)\frac{dy}{y^{1+\alpha}}. \tag{6.51}\] On \(y\in(0,1]\), note that \(1-\cos y\leq y^{2}/2\), and hence \(\frac{|1-\cos y|}{y^{1+\alpha}}\leq\frac{1}{2y^{\alpha-1}}\). On \(y\in(1,\infty)\), note that \(1-\cos y\in[0,1]\) so \(\frac{|1-\cos y|}{y^{1+\alpha}}\leq 1/y^{\alpha+1}\). Due to \(\alpha\in(0,2)\), we have \(C_{0}=\int_{0}^{\infty}(1-\cos y)\frac{dy}{y^{1+\alpha}}\in(0,\infty)\). Next, choose positive real numbers \(\theta,\delta\) such that: \[\frac{\theta^{2-\alpha}}{2(2-\alpha)} \leq\frac{C_{0}}{8}; \tag{6.52}\] \[\frac{\delta}{\theta^{\alpha}} \leq\frac{C_{0}}{8}. \tag{6.53}\] For any \(M>0\) and \(z\neq 0\), observe that \[\frac{\int_{x\geq\frac{M}{|z|}}\big{(}1-\cos(zx)\big{)}\frac{dx}{ x^{1+\alpha}}}{|z|^{\alpha}} =\frac{\int_{x\geq\frac{M}{|z|}}\big{(}1-\cos(|z|x)\big{)}\frac{ dx}{x^{1+\alpha}}}{|z|^{\alpha}}\] \[=\int_{M}^{\infty}\big{(}1-\cos y\big{)}\frac{dy}{y^{1+\alpha}} \qquad\text{by letting }y=|z|x.\] Therefore, by fixing some \(M>\theta\) large enough, we must have \[\frac{1}{|z|^{\alpha}}\int_{x\geq M/|z|}\big{(}1-\cos(zx)\big{)}\frac{dx}{x^{1 +\alpha}}\leq C_{0}/4\qquad\forall z\neq 0. \tag{6.54}\] Moving on, we fix some \(c>0\) and consider the difference between \(\int_{(0,z_{0})}\big{(}1-\cos(zx)\big{)}\nu(dx)\) and \(\int_{0}^{M/z}\big{(}1-\cos(zx)\big{)}\frac{dx}{x^{1+\alpha}}\). 
For any \(z\) such that \(|z|>M/z_{0}\), \[\frac{1}{|z|^{\alpha}}\bigg{[}\int_{(0,z_{0})}\Big{(}1-\cos(zx) \Big{)}\nu(dx)-\int_{0}^{M/|z|}\Big{(}1-\cos(zx)\Big{)}c\frac{dx}{x^{1+\alpha}} \bigg{]}\] \[\geq-\underbrace{\frac{1}{|z|^{\alpha}}\int_{0}^{\theta/|z|} \Big{(}1-\cos(zx)\Big{)}c\frac{dx}{x^{1+\alpha}}}_{\triangleq I_{1}(z)}\qquad \text{due to our choice of }M>\theta \tag{6.55}\] \[\qquad+\underbrace{\frac{1}{|z|^{\alpha}}\bigg{[}\int_{[\theta/ |z|,M/|z|)}\Big{(}1-\cos(zx)\Big{)}\nu(dx)-\int_{[\theta/|z|,M/|z|)}\Big{(}1- \cos(zx)\Big{)}c\frac{dx}{x^{1+\alpha}}\bigg{]}}_{\triangleq I_{2}(z)}.\] First, for any \(z\neq 0\), \[I_{1}(z) \leq\frac{c}{|z|^{\alpha}}\int_{0}^{\theta/|z|}\frac{z^{2}x^{2}}{ 2}\frac{dx}{x^{1+\alpha}}\qquad\text{due to }1-\cos w\leq\frac{w^{2}}{2}\ \forall w\in\mathbb{R}\] \[=\frac{c}{2}\int_{0}^{\theta}y^{1-\alpha}dy\qquad\text{by setting }y=|z|x\] \[=\frac{c}{2}\cdot\frac{\theta^{2-\alpha}}{2-\alpha}\leq c\cdot \frac{C_{0}}{8}\quad\text{due to }\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq Since \(h(z)\) is uniformly continuous on \([\theta,M]\), we can find some \(N\in\mathbb{N},t_{0}>1\), and a sequence of real numbers \(M=x_{0}>x_{1}>\cdots>x_{N}=\theta\) such that \[\begin{split}&\frac{x_{j-1}}{x_{j}}=t_{0}\qquad\forall j=1,2, \cdots,N,\\ &|h(x)-h(y)|<\delta\qquad\forall j=1,2,\cdots,N,\ x,y\in[x_{j},x _{j-1}].\end{split} \tag{6.57}\] In other words, we use a geometric sequence \(\{x_{0},x_{1},\cdots,x_{N}\}\) to partition \([\theta,M]\) into \(N+1\) intervals, on any of which the value of \(h(z)=1-\cos z\) fluctuates within range \(\delta\) in (6.53). Now fix some \(\Delta>0\) such that \[(1-\Delta)t_{0}^{\alpha+\epsilon}>1. \tag{6.58}\] Recall that \(\nu[x,\infty)\) is regularly varying as \(x\to 0\) with index \(\alpha+2\epsilon\). In other words, for \(g(y)=\nu[1/y,\infty)\), we have \(g\in RV_{\alpha+2\epsilon}\). By Potter's bounds (see proposition 2.6 in [46]), we know the existence of some \(\bar{y}_{1}>0\) such that \[\frac{g(ty)}{g(y)}\geq(1-\Delta)t^{\alpha+\epsilon}\qquad\forall y\geq\bar{y} _{1},\ t\geq 1. \tag{6.59}\] On the other hand, define \[\widetilde{g}(y)=cy^{\alpha},\qquad\nu_{c}(dx)=c\mathbb{1}_{(0,\infty)}(x) \frac{dx}{x^{1+\alpha}}.\] Note that \(\widetilde{g}(y)=\nu_{c}(1/y,\infty)\). Due to \(g\in RV_{\alpha+2\epsilon}\), we can find some \(\bar{y}_{2}>0\) such that \[g(y)\geq\frac{t_{0}^{\alpha}-1}{(1-\Delta)t_{0}^{\alpha+\epsilon}-1}\cdot \widetilde{g}(y)\quad\forall y\geq\bar{y}_{2}. \tag{6.60}\] Let \(\widetilde{M}=\max\{M/z_{0},M\bar{y}_{1},M\bar{y}_{2}\}\). For any \(|z|\geq\widetilde{M}\), we have \(|z|\geq M/z_{0}\) and \(\frac{|z|}{x_{j}}\geq\frac{|z|}{M}\geq\bar{y}_{1}\vee\bar{y}_{2}.\) for any \(j=0,1,\cdots,N\). 
As a result, for \(z\in\mathbb{R}\) with \(|z|\geq\widetilde{M}\) and any \(j=1,2,\cdots,N\), the mass of \(\nu\) on \([x_{j},x_{j-1})\) satisfies \[\nu[x_{j}/|z|,x_{j-1}/|z|) =g(|z|/x_{j})-g(|z|/x_{j-1})\qquad\text{ due to }g(y)=\nu[1/y,\infty)\] \[=g(t_{0}|z|/x_{j-1})-g(|z|/x_{j-1})\qquad\text{ due to }x_{j-1}=t_{0}x_{j},\,\text{see \eqref{eq:11}}\] \[\geq g(|z|/x_{j-1})\cdot\Big{(}(1-\Delta)t_{0}^{\alpha+\epsilon}- 1\Big{)}\qquad\text{ due to }\frac{|z|}{x_{j}}\geq\bar{y}_{1}\vee\bar{y}_{2}\text{ and \eqref{eq:12}}\] \[\geq\widetilde{g}(|z|/x_{j-1})\cdot(t_{0}^{\alpha}-1)\qquad\text{ due to \eqref{eq:12}},\] whereas the mass of \(\nu_{c}\) on \([x_{j},x_{j-1})\) is \[\nu_{c}[x_{j}/|z|,x_{j-1}/|z|)=\widetilde{g}(|z|/x_{j})-\widetilde{g}(|z|/x_{j -1})=\widetilde{g}(|z|/x_{j-1})\cdot(t_{0}^{\alpha}-1).\] Therefore, given any \(z\in\mathbb{R}\) such that \(|z|\geq\widetilde{M}\), we have \(\nu\big{(}E_{j}(z)\big{)}\geq\nu_{c}\big{(}E_{j}(z)\big{)}\) for all \(j\in[N]\) where \(E_{j}(z)=[x_{j}/|z|,x_{j-1}/|z|)\). This leads to \[I_{2}(z)\] \[=\frac{1}{|z|^{\alpha}}\sum_{j=1}^{N}\bigg{[}\int_{E_{j}(z)} \Big{(}1-\cos(zx)\Big{)}\nu(dx)-\int_{E_{j}(z)}\Big{(}1-\cos(zx)\Big{)}c\frac{ dx}{x^{1+\alpha}}\bigg{]}\] \[\geq\frac{1}{|z|^{\alpha}}\sum_{j=1}^{N}\Big{[}\underline{m}_{j} \cdot\nu\big{(}E_{j}(z)\big{)}-\bar{m}_{j}\cdot\nu_{c}\big{(}E_{j}(z)\big{)} \Big{]}\] with \(\bar{m}_{j}=\max\{h(z):\ z\in[x_{j},x_{j-1}]\}\), \(\underline{m}_{j}=\min\{h(z):\ z\in[x_{j},x_{j-1}]\}\) \[=\frac{1}{|z|^{\alpha}}\sum_{j=1}^{N}\Big{[}\underline{m}_{j}\cdot \nu\big{(}E_{j}(z)\big{)}-\underline{m}_{j}\cdot\nu_{c}\big{(}E_{j}(z)\big{)} \Big{]}+\frac{1}{|z|^{\alpha}}\sum_{j=1}^{N}\Big{[}\underline{m}_{j}\cdot\nu_{ c}\big{(}E_{j}(z)\big{)}-\bar{m}_{j}\cdot\nu_{c}\big{(}E_{j}(z)\big{)}\Big{]}\] \[\geq 0+\frac{1}{|z|^{\alpha}}\sum_{j=1}^{N}\Big{[}\underline{m}_{j} \cdot\nu_{c}\big{(}E_{j}(z)\big{)}-\bar{m}_{j}\cdot\nu_{c}\big{(}E_{j}(z) \big{)}\Big{]}\qquad\text{due to }\nu\big{(}E_{j}(z)\big{)}\geq\nu_{c}\big{(}E_{j}(z)\big{)}\] \[\geq-\frac{\delta}{|z|^{\alpha}}\sum_{j=1}^{N}\nu_{c}\big{(}E_{j} (z)\big{)}=-\frac{\delta}{|z|^{\alpha}}\nu_{c}[\theta/|z|,M/|z|]\qquad\text{ due to }\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: 
eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eqe
q:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeqeq:eqeq:eq:eqeq:eqeq:eqeq:eq:eq:eq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeqeq:eqeq:eqeq:eq:eqeqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eq:eqeqeq:eqeq:eqeq:eqeq:eq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq: Furthermore, we claim that there is some \(c>0\) such that \[\eta(z)\geq c\qquad\forall z\in[1,b]. \tag{6.63}\] Then due to the self-similarity (i.e., \(\eta(bz)=\eta(z)\)), we have \(\eta(z)\geq c\) for all \(z\neq 0\). In the meantime, note that \[\frac{1}{|z|^{\alpha}}\int_{|x|\geq b^{N}}\big{(}1-\cos(zx)\big{)}\nu(dx)\leq \frac{\nu(|x|\geq b^{N})}{|z|^{\alpha}}\] and \(b,N\) are fixed. By picking \(M>0\) large enough, it holds for any \(|z|\geq M\) that \[\frac{1}{|z|^{\alpha}}\int_{|x|\geq b^{N}}\big{(}1-\cos(zx)\big{)}\nu(dx)\leq \frac{c}{2}. \tag{6.64}\] Therefore, for any \(|z|\geq M\), \[\int_{|x|<b^{N}}\big{(}1-\cos(zx)\big{)}\nu(dx) =\int_{x\in\mathbb{R}}\big{(}1-\cos(zx)\big{)}\nu(dx)-\int_{|x|\geq b ^{N}}\big{(}1-\cos(zx)\big{)}\nu(dx)\] \[=\eta(z)\cdot|z|^{\alpha}-\int_{|x|\geq b^{N}}\big{(}1-\cos(zx) \big{)}\nu(dx)\] \[\geq c|z|^{\alpha}-\frac{c}{2}|z|^{\alpha}=\frac{c}{2}|z|^{\alpha }.\qquad\text{using (\ref{eq:1}) and (\ref{eq:2})},\] and hence \(|\widetilde{\varphi}_{t}(z)|\leq\exp\big{(}-\frac{c}{2}t|z|^{\alpha}\big{)}\) for all \(|z|\geq M\). Applying Inversion Formula (see theorem 3.3.5. of [23]), we get (for any \(t>0\)) \[\Big{\|}f_{\widetilde{Y}(t)}\Big{\|}_{\infty} \leq\frac{1}{2\pi}\int|\widetilde{\varphi}_{t}(z)|dz\] \[\leq\frac{M}{\pi}+\frac{1}{2\pi}\int\exp\bigg{(}-\frac{c}{2}|x|^{ \alpha}\bigg{)}dx\] \[=\frac{M}{\pi}+\frac{1}{2\pi}\cdot\frac{1}{t^{1/\alpha}}\int\exp \bigg{(}-\frac{c}{2}|x|^{\alpha}\bigg{)}dx\qquad\text{using }x=t^{1/\alpha}\cdot z\] \[\leq\frac{M}{\pi}+\frac{C_{1}}{t^{1/\alpha}}\qquad\text{where }C_{1}=\frac{1}{2\pi}\int\exp\bigg{(}-\frac{c}{2}|x|^{\alpha}\bigg{)}dx.\] To conclude the proof, we set \(C=\frac{M}{\pi}+C_{1}\). Now it only remains to prove claim (6.63). **Proof of Claim** (6.63) We proceed with a proof by contradiction. Suppose there exists some \(z\in[1,b]\) such that \[\int_{\mathbb{R}}\big{(}1-\cos(zx)\big{)}\nu(dx)=0.\] Now for any \(\epsilon>0\), define the following sets: \[S =\{x\in\mathbb{R}:\ 1-\cos(zx)>0\}=\mathbb{R}\backslash\{\frac{2 \pi}{z}k:\ k\in\mathbb{Z}\};\] \[S_{\epsilon} =\{x\in\mathbb{R}:\ 1-\cos(zx)\geq\epsilon\}.\] Observe that * For any \(\epsilon>0\), we have \(\epsilon\cdot\nu(S_{\epsilon})\leq\int_{S_{\epsilon}}\big{(}1-\cos(zx)\big{)}\nu( dx)\leq\int_{\mathbb{R}}\big{(}1-\cos(zx)\big{)}\nu(dx)=0\), which implies \(\nu(S_{\epsilon})=0\); * As a result, \(\lim_{\epsilon\to 0}\nu(S_{\epsilon})=\nu(S)\). 
Together with the fact that \(\nu(\mathbb{R})>0\) (so that the process is non-trivial), we know the existence of some \(m\in\mathbb{Z},\delta>0\) such that \[\nu(\{\frac{2\pi}{z}m\})=\delta>0.\] Besides, from \(\nu(S)=0\) we know that \(\nu(-\frac{2\pi}{z},\frac{2\pi}{z})=0\). However, due to (4.2) we know that \(\nu=b^{-\alpha}T_{b}\nu\) where the transformation \(T_{r}\) (\(\forall r>0\)) onto a Borel measure \(\rho\) on \(\mathbb{R}\) is defined as \((T_{r}\rho)(B)=\rho(r^{-1}B)\). This implies \[\nu(\{\frac{2\pi m}{z}b^{-k}\})>0\ \ \forall k=1,2,3,\cdots\] which contradicts \(\nu(-\frac{2\pi}{z},\frac{2\pi}{z})=0\) since, eventually for all \(b>0\) large enough, we have \(\frac{2\pi m}{z}b^{-k}\in(-\frac{2\pi}{z},\frac{2\pi}{z}).\) This concludes the proof of \(\eta(z)>0\) for all \(z\in[1,b]\). Lastly, since \(\eta\) is continuous on \(z\in[1,b]\), we can find a strictly positive lower bound \(c>0\) such that \(\eta(z)\geq c>0\) for all \(z\in[1,b]\).
2310.20351
Long-range interaction of hydrogen atoms at finite temperatures
In this study, we reexamine the long-range interaction between two atoms placed in an equilibrium thermal radiation environment. Employing the formalism of quantum electrodynamics at finite temperatures, we derive an expression for the thermal correction to the interaction potential and explore various asymptotic behaviors. The numerical calculations of temperature-dependent dispersion coefficients for both the ground and highly excited states of the hydrogen atom are performed. We proceed from the first principles of the theory to derive the dipole-dipole interaction at finite temperature. The analysis presented in this work reveals that the expressions established earlier in the context of phenomenological extrapolation from zero- to finite-temperature scenarios exhibit disparate asymptotic behavior and lead to overestimated results to those of the rigorous quantum electrodynamics approach.
T. Zalialiutdinov, D. Solovyev
2023-10-31T10:46:35Z
http://arxiv.org/abs/2310.20351v2
# Long-range interaction of hydrogen atoms at finite temperatures ###### Abstract In this study, we reexamine the long-range interaction between two atoms placed in an equilibrium thermal radiation environment. Employing the formalism of quantum electrodynamics at finite temperatures, we derive an expression for the thermal correction to the interaction potential and explore various asymptotic behaviors. The numerical calculations of temperature-dependent dispersion coefficients for both the ground and highly excited states of the hydrogen atom are performed. We proceed from the first principles of the theory to derive the dipole-dipole interaction at finite temperature. The analysis presented in this work reveals that the expressions established earlier in the context of phenomenological extrapolation from zero- to finite-temperature scenarios exhibit disparate asymptotic behavior and lead to overestimated results to those of the rigorous quantum electrodynamics approach. ## I Introduction The long-range forces between two stationary atoms or molecules characterized by polarizabilities were initially explored in pioneering studies by Casimir and Polder [1]. In 1956, Lifshitz was arguably among the first to contemplate induced forces between two dipoles at non-zero temperatures [2]. Since then, sporadic yet enduring interest has been exhibited in dipole-dipole interactions at finite temperatures [3; 4; 5]. Over the past decades various approaches to the theoretical description of this phenomenon have been considered [6; 7; 8]. Furthermore, recent experimental investigations of interactions among atoms at long distances exposed to heated environment have sparked additional interest in this problem [9; 10]. Typically, the transition to finite temperatures is accomplished by a phenomenological generalization of the well-established result at \(T=0\) through the introduction of a relevant induced term into the expression for the interaction potential. In the scenario where two interacting atoms are placed within an equilibrium thermal radiation field described by the Planck distribution, this generalization results in the replacement of the term characterizing the vacuum zero-point expectation energy, represented by the term \(1/2\), onto \(1/2+n_{\beta}(\omega)\). Here \(n_{\beta}(\omega)\) is the Bose-Einstein frequency distribution defined as \(n_{\beta}(\omega)=(\exp(\omega/(k_{B}T))-1)^{-1}\), where \(k_{B}\) represents the Boltzmann constant and \(T\) is the temperature in Kelvin. This approach has been employed, notably, in the studies [11; 12; 13]. However, a comprehensive derivation of the interaction potential between two atoms within the formalism of quantum electrodynamics at finite temperatures (TQED) has hitherto been limited to the study presented in [3]. This work primarily offers parametric estimates devoid of specific calculations for particular systems. It is pertinent to highlight that the computations conducted within the mentioned investigation do not readily enable a direct comparison between the TQED approach and the phenomenological extension applied to thermal scenarios as witnessed in works [11; 12; 13]. In the present paper, within the framework of TQED utilizing the real-time formalism, we reexamine the derivation of the interaction potential between two atoms at long distances and compare it with findings from prior research. 
Specifically, we perform a numerical computation of thermal corrections to the dispersion coefficients for two interacting hydrogen atoms in excited states. Calculations are carried out for various asymptotics of interatomic distances and temperature regimes. The paper is organized as follows. In section II we derive the long-range interaction potential between two atoms within the \(S\)-matrix formalism and discuss its generalization to the finite temperature case. A modification of the resulting expression for two identical atoms, but in different states, is also discussed there. In sections III and IV we consider the short and long-range limit of the obtained potential, respectively. Details of numerical calculations with analysis of results are presented in section V. All additional algebraic calculations are located in the appendixes A and B. The relativistic units (r.u.) are used throughout the paper \(\hbar=c=m=1\) (\(\hbar\) is the Planck constant, \(m\) is the electron mass and \(c\) is the speed of light), in which the fine structure constant can be expressed in terms of the electron charge as \(\alpha=e^{2}\). The Boltzmann constant in these units is \(k_{B}=m\alpha^{2}k_{B}^{\mathrm{a.u.}}\), where \(k_{B}^{\mathrm{a.u.}}=3.16681\times 10^{-6}\) is given in atomic units. ## II Long-range interaction between two atoms: \(S\)-matrix approach Within the framework of Quantum Electrodynamics (QED) and perturbation theory, the interaction between two atoms, designated as \(A\) and \(B\), is described by the fourth-order \(S\)-matrix. The complete set of Feynman diagrams is shown in Fig. 1. We restrict ourselves to the description of the interaction between two one-electron atoms as the main application of the approach developed below. A generalization to many-electron atomic systems can be made in the framework of the general theory, see, e.g., [14]. Then, for the ladder (L) diagram, Fig. 1 (a), the corresponding \(S\)-matrix element can be written as follows: \[S_{AB}^{(4),\,L} = (-ie)^{4}\int dx_{1}dx_{2}dx_{3}dx_{4}\overline{\psi}_{a^{\prime} }^{A}(x_{1})\overline{\psi}_{b^{\prime}}^{B}(x_{3})\] \[\times\gamma_{A}^{\mu_{1}}D_{\mu_{1}\mu_{3}}(x_{1},x_{3})\gamma_{ B}^{\mu_{3}}S_{A}(x_{1},x_{2})\gamma_{A}^{\mu_{2}}\] \[\times D_{\mu_{2}\mu_{4}}(x_{2},x_{4})\gamma_{B}^{\mu_{4}}S_{B}( x_{3},x_{4})\psi_{a}^{A}(x_{2})\psi_{b}^{B}(x_{4}),\] where \(\psi_{a}^{A}(x)=e^{-ie^{A}_{a}t}\psi(\mathbf{x})\) is the solution of Dirac equation for bound electron in the state \(a\) of the atom \(A\), \(\overline{\psi}_{a}=\psi_{a}^{+}\gamma_{0}\) is the Dirac conjugated wave function with \(\psi_{a}^{+}\) being its Hermitian conjugate, \(\gamma_{A}^{\mu}=(\gamma_{0},\mathbf{\gamma})\) are the Dirac matrices (indexes \(A\) and \(B\) refers to the corresponding atoms), \(\varepsilon_{n}\) is the Dirac energy, and \[S_{A}(x_{1},x_{2}) = \frac{i}{2\pi}\int\limits_{-\infty}^{+\infty}d\Omega\,e^{-i \Omega(t_{1}-t_{2})}\] \[\times\sum_{n}\frac{\psi_{n}^{A}(\mathbf{r}_{1})\overline{\psi}_{n}^ {A}(\mathbf{r}_{2})}{\Omega-\varepsilon_{n}^{A}(1-i0)}\] is the eigenstate decomposition of the electron propagator for atom \(A\). The components of the photon propagator \(D_{\mu\nu}\) can be expressed as the sum of two contributions: the zero-temperature part \(D_{\mu\nu}^{0}\) and the thermal one \(D_{\mu\nu}^{\beta}\), which accounts for the Planck frequency distribution associated with photons in the thermal reservoir [15]. 
The latter allows us to investigate and elucidate the influence of blackbody radiation on the interaction of two hydrogen atoms separated by a distance \(R\). To explore the effects resulting from the incorporation of the distribution function, we turn to the finite-temperature Quantum Electrodynamics (TQED) approach formulated by Donoghue and Holstein (DH) in [16]. There, the interaction between a free-electron gas (in the absence of an external field) and a photon gas is considered under thermal equilibrium conditions. This interaction is described using a grand canonical statistical operator, which alters both the electron and photon propagators. Our objective is to investigate the impact of blackbody radiation (BBR) on atomic levels. Therefore, we retain the electron propagator in the standard (zero-temperature) form (its thermal part is exponentially suppressed in temperature) and employ QED perturbation theory to account for the influence of BBR. This involves considering the thermal photon propagator only. According to [16], the photon propagator in the Feynman (F) gauge and momentum space reads \[D_{\mu\nu}^{\rm DH}(k)=D_{\mu\nu}^{0,F}+D_{\mu\nu}^{\beta,F} \tag{3}\] \[= -4\pi g_{\mu\nu}\left[\frac{i}{k^{2}+i0}+2\pi\delta(k^{2})n_{ \beta}(|\mathbf{k}|)\right]\,,\] where \(g_{\mu\nu}\) is the metric tensor of Minkowski space, \(k=(k_{0},\mathbf{k})\) is the four-dimensional momentum, \(k^{2}=k_{0}^{2}-\mathbf{k}^{2}\), and \(n_{\beta}\) is defined as follows \[n_{\beta}(x)=\frac{1}{\exp{(\frac{x}{k_{B}T})}-1}. \tag{4}\] Here \(k_{B}\) is the Boltzmann constant in relativistic units and \(T\) is the temperature in Kelvin. Note that Eq. (3) differs from the corresponding expression (4) in [16] by the factor of \(4\pi\), which reflects the definition of the charge units \(e^{2}=\alpha\) used in this paper. In the coordinate space the photon propagator can be found by a 4-dimensional Fourier transform \[D_{\mu\nu}^{\rm DH}(x_{1},x_{2}) = -4\pi g_{\mu\nu}\int\frac{d^{4}k}{(2\pi)^{4}}e^{-ik(x_{1}-x_{2})} \tag{5}\] \[\times\left[\frac{i}{k^{2}+i0}+2\pi\delta(k^{2})n_{\beta}(|\mathbf{ k}|)\right]\,.\] In the nonrelativistic problem we are considering, it is convenient to use the temporal gauge (also known as the Weyl gauge) [17]. Then the components of the zero-temperature part of the photon propagator are \[D_{00}^{0}(k)=D_{0i}^{0}(k)=0\,, \tag{6}\] \[D_{ij}^{0}(k)=\frac{4\pi i}{k^{2}}\left(\delta_{ij}-\frac{k_{i}k_{j}}{k_{0}^{ 2}}\right)\,. \tag{7}\] Similarly, for the finite temperature part we have [18; 19]: \[D_{00}^{\beta}(k)=D_{0i}^{\beta}(k)=0\,, \tag{8}\] \[D^{\beta}_{ij}(k)=8\pi^{2}\delta(k^{2})\left(\delta_{ij}-\frac{k_{i}k_{j}}{k_{0}^{ 2}}\right)n_{\beta}(|\mathbf{k}|)\,, \tag{9}\] where \(i,\,j=1,\,2,\,3\). Figure 1: Ladder (a) and crossed-ladder (b) Feynman diagrams describing the long-range interaction of two atoms. Each solid line denotes a particular atom, while wavy lines correspond to the exchange of photons. The states of the atoms are denoted by \(a\), \(a^{\prime}\), \(b\), \(b^{\prime}\) for atoms \(A\) and \(B\), respectively; the photon frequencies are denoted by \(k_{0}\), \(k_{0}^{\prime}\). 
The evaluation of the corresponding coordinate representation of the zero-temperature and thermal propagators in the temporal gauge is considered in Appendix A, with the final result given by \[D^{0,\beta}_{ij}(x_{1},x_{2})=\frac{i}{2\pi}\int\limits_{-\infty}^{\infty}dk_{ 0}e^{-ik_{0}(t_{1}-t_{2})}D^{0,\beta}_{ij}(k_{0},R), \tag{10}\] where \(R=|\mathbf{r}_{1}-\mathbf{r}_{2}|\) is the interatomic distance and \[D^{0}_{ij}(k_{0},R)=\left(\delta_{ij}+\frac{\nabla_{i}\nabla_{j}}{k_{0}^{2}} \right)\left\{-\frac{e^{i|k_{0}|R}}{R}\right\}\,, \tag{11}\] \[D^{\beta}_{ij}(k_{0},R)=\left(\delta_{ij}+\frac{\nabla_{i}\nabla_{j}}{k_{0}^{ 2}}\right) \tag{12}\] \[\times\left\{-\frac{e^{i|k_{0}|R}}{R}+\frac{e^{-i|k_{0}|R}}{R}\right\}n_{ \beta}(|k_{0}|).\] Substituting Eq. (10) into Eq. (1) and performing the integration over the time variables, we find \[S^{(4),\,L}_{AB}=-2\pi i\delta(\varepsilon^{A}_{a^{\prime}}-\varepsilon^{A}_{ a}+\varepsilon^{B}_{b^{\prime}}-\varepsilon^{B}_{b})U^{(4),\,L}_{AB}(R)\,. \tag{13}\] Here the amplitude of the process can be expressed as \[U^{(4),\,L}_{AB}(R)=i\int\limits_{-\infty}^{+\infty}\frac{dk_{0} }{2\pi}D_{il}(k_{0},R)D_{km}(k_{0},R) \tag{14}\] \[\times\sum_{nn^{\prime}}V^{(-)ik}_{a^{\prime}nna}(k_{0})V^{(+)lm}_ {b^{\prime}n^{\prime}n^{\prime}b}(k_{0})\,,\] where \[V^{(-)ik}_{a^{\prime}nna}(k_{0})=\frac{\langle\psi^{A}_{a^{\prime}}|\mathbf{ \alpha}_{i}|\psi^{A}_{n}\rangle\langle\psi^{A}_{n}|\mathbf{\alpha}_{k}|\psi^{A}_{ a}\rangle}{\varepsilon^{A}_{a^{\prime}}-k_{0}-\varepsilon^{A}_{n}(1-i0)}\,, \tag{15}\] \[V^{(+)lm}_{b^{\prime}n^{\prime}n^{\prime}b}(k_{0})=\frac{\langle\psi^{B}_{b^{ \prime}}|\mathbf{\alpha}_{l}|\psi^{B}_{n^{\prime}}\rangle\langle\psi^{B}_{n^{ \prime}}|\mathbf{\alpha}_{m}|\psi^{B}_{b}\rangle}{\varepsilon^{B}_{b^{\prime}}+k_ {0}-\varepsilon^{B}_{n^{\prime}}(1-i0)}\,. \tag{16}\] Each factor \(D_{ij}(k_{0},R)=D^{0}_{ij}(k_{0},R)+D^{\beta}_{ij}(k_{0},R)\) is the sum of the zero- and finite-temperature contributions to the photon propagator, specified in the 'mixed' representation (i.e. in the frequency-space domain, see [17]) by Eqs. (11) and (12), respectively. Similarly, for the crossed-ladder (CL) diagram given by Fig. 1 (b) the corresponding \(S\)-matrix element is \[S^{(4),\,CL}_{AB}=(-ie)^{4}\int dx_{1}dx_{2}dx_{3}dx_{4}\overline{\psi}_{a^{ \prime}}(x_{1})\overline{\psi}_{b^{\prime}}(x_{3}) \tag{17}\] \[\times\gamma^{\mu_{1}}_{A}D_{\mu_{1}\mu_{4}}(x_{1},x_{4})\gamma^{\mu_{3}}_{B}D_ {\mu_{2}\mu_{3}}(x_{2},x_{3})\gamma^{\mu_{4}}_{B}\] \[\times S_{A}(x_{1},x_{2})\gamma^{\mu_{2}}_{A}S_{B}(x_{3},x_{4})\psi_{a}(x_{2} )\psi_{b}(x_{4})\,.\] Integration over the time variables in Eq. (17) yields \[S^{(4),\,CL}_{AB}=-2\pi i\delta(\varepsilon^{A}_{a^{\prime}}-\varepsilon^{A}_ {a}+\varepsilon^{B}_{b^{\prime}}-\varepsilon^{B}_{b})U^{(4),\,CL}_{AB}(R), \tag{18}\] where \[U^{(4),\,CL}_{AB}(R)=i\int\limits_{-\infty}^{+\infty}\frac{dk_{0} }{2\pi}D_{il}(k_{0},R)D_{km}(k_{0},R) \tag{19}\] \[\times\sum_{nn^{\prime}}V^{(-)ik}_{a^{\prime}nna}(k_{0})V^{(-)lm}_ {b^{\prime}n^{\prime}b}(k_{0})\,.\] According to the Feynman rules, for each of these two basic contributions we need to consider three more graphs with exchanged indices in Fig. 1: 1) \(a^{\prime}\leftrightarrow b^{\prime}\); 2) \(a\leftrightarrow b\); and 3) the simultaneous replacement of both \(a^{\prime}\leftrightarrow b^{\prime}\) and \(a\leftrightarrow b\). 
Moreover, to all these four graphs it is also necessary to add exactly the same set but with permuted \(k_{0}\) and \(k^{\prime}_{0}\) (this permutation simply leads to additional terms given by Eqs. (14) and (19) in which \(k_{0}\) is replaced by \(-k_{0}\)). Thus the total number of contributions is eight. For further consideration of the long-range potential, it is convenient to assume that the initial and final states of both atoms \(A\) and \(B\) remain unchanged, i.e. we set \(a^{\prime}=a\) and \(b^{\prime}=b\). Collecting all contributions together and going to the nonrelativistic limit with the Foldy–Wouthuysen transformation, which in the leading order implies \(\psi^{+}\mathbf{\alpha}\psi\approx\phi^{+}\frac{\mathbf{p}}{m}\phi\), where \(\phi\) is the solution of the Schrödinger equation and \(\mathbf{\hat{p}}\) is the electron momentum operator, we find for the total interaction amplitude the following expression in the velocity form: \[U^{(4),\,\mathrm{tot}}_{AB}(R)=i\int\limits_{-\infty}^{+\infty} \frac{dk_{0}}{2\pi}D_{il}(k_{0},R)D_{km}(k_{0},R)\sum_{nn^{\prime}} \tag{20}\] \[\times\left\{\left[\frac{\langle\phi^{A}_{a}|\mathbf{p}_{i}|\phi^{A}_{n}\rangle\langle\phi^{A}_{n}|\mathbf{p}_{k}|\phi^{A}_{a}\rangle}{E^{A}_{a}-k_{0}-E^{A}_{n}(1-i0)}+ \frac{\langle\phi^{A}_{a}|\mathbf{p}_{i}|\phi^{A}_{n}\rangle\langle\phi^{A}_{n}| \mathbf{p}_{k}|\phi^{A}_{a}\rangle}{E^{A}_{a}+k_{0}-E^{A}_{n}(1-i0)}\right]\right.\] \[\times\left[\frac{\langle\phi^{B}_{b}|\mathbf{p}_{l}|\phi^{B}_{n^{ \prime}}\rangle\langle\phi^{B}_{n^{\prime}}|\mathbf{p}_{m}|\phi^{B}_{b}\rangle}{E^{ B}_{b}-k_{0}-E^{B}_{n^{\prime}}(1-i0)}+\frac{\langle\phi^{B}_{b}|\mathbf{p}_{l}|\phi^{B}_{n^{ \prime}}\rangle\langle\phi^{B}_{n^{\prime}}|\mathbf{p}_{m}|\phi^{B}_{b}\rangle}{E^{ B}_{b}+k_{0}-E^{B}_{n^{\prime}}(1-i0)}\right]\] \[\pm\left[\frac{\langle\phi^{A}_{a}|\mathbf{p}_{i}|\phi^{A}_{n}\rangle \langle\phi^{A}_{n}|\mathbf{p}_{k}|\phi^{A}_{b}\rangle}{E^{A}_{a}-k_{0}-E^{A}_{n}(1-i0)}+ \frac{\langle\phi^{A}_{a}|\mathbf{p}_{i}|\phi^{A}_{n}\rangle\langle\phi^{A}_{n}| \mathbf{p}_{k}|\phi^{A}_{b}\rangle}{E^{A}_{a}+k_{0}-E^{A}_{n}(1-i0)}\right]\] \[\left.\times\left[\frac{\langle\phi^{B}_{a}|\mathbf{p}_{l}|\phi^{B}_{n^{ \prime}}\rangle\langle\phi^{B}_{n^{\prime}}|\mathbf{p}_{m}|\phi^{B}_{b}\rangle}{E^{ B}_{b}-k_{0}-E^{B}_{n^{\prime}}(1-i0)}+\frac{\langle\phi^{B}_{a}|\mathbf{p}_{l}|\phi^{B}_{n^{ \prime}}\rangle\langle\phi^{B}_{n^{\prime}}|\mathbf{p}_{m}|\phi^{B}_{b}\rangle}{E^{ B}_{b}+k_{0}-E^{B}_{n^{\prime}}(1-i0)}\right]\right\}\,.\] Here \(E_{n}\), in contrast to \(\varepsilon_{n}\), is the eigenvalue of the nonrelativistic Hamiltonian \(H_{S}\) related to the atomic state \(n\). Note that the second contribution in curly brackets comes with a \(\pm\) sign and has off-diagonal matrix elements in the numerator. This problem was covered in detail in [20; 21]. For further evaluation it is convenient to pass to the length form of the matrix elements in the above equation. This can be done via the well-known commutation relation \(p_{i}=i[H_{S},r_{i}]\) and some algebra. 
Then the interaction potential can be conveniently written in the length form in terms of atomic polarizability tensors \(\alpha_{ij}\) as follows \[U^{(4),\,\mathrm{tot}}_{AB}(R)=\frac{i}{2\pi}\int\limits_{-\infty }^{+\infty}dk_{0}k_{0}^{4}D_{il}(k_{0},R)D_{km}(k_{0},R) \tag{21}\] \[\times\left[\alpha^{A}_{ik}(k_{0})\alpha^{B}_{lm}(k_{0})\pm\alpha ^{\overline{A}B}_{ik}(k_{0})\alpha^{A\overline{B}}_{lm}(k_{0}) \right]\,,\] where, according to [20; 21], the notations for the diagonal \(\alpha^{A}_{ik}\) and off-diagonal \(\alpha^{AB}_{ik}\) contributions are introduced. For atoms in \(s\)-states both tensors can be written in terms of scalar polarizabilities \[\alpha^{A}_{ik}=\delta_{ik}\alpha_{A}\,, \tag{22}\] where \[\alpha_{A}(k_{0})=\frac{e^{2}}{3}\sum\limits_{\pm}\sum\limits_{n}\frac{\langle \phi_{a}|\mathbf{r}|\phi_{n}\rangle\langle\phi_{n}|\mathbf{r}|\phi_{a}\rangle}{E_{n}(1 -i0)-E_{a}\pm k_{0}}\,, \tag{23}\] \[\alpha_{\overline{A}B}(k_{0})=\frac{e^{2}}{3}\sum\limits_{\pm}\sum\limits_{n} \frac{\langle\phi_{a}|\mathbf{r}|\phi_{n}\rangle\langle\phi_{n}|\mathbf{r}|\phi_{b} \rangle}{E_{n}(1-i0)-E_{a}\pm k_{0}}\,, \tag{24}\] \[\alpha_{A\overline{B}}(k_{0})=\frac{e^{2}}{3}\sum\limits_{\pm}\sum\limits_{n} \frac{\langle\phi_{a}|\mathbf{r}|\phi_{n}\rangle\langle\phi_{n}|\mathbf{r}|\phi_{b} \rangle}{E_{n}(1-i0)-E_{b}\pm k_{0}}\,. \tag{25}\] For further calculations, it is necessary to work out the explicit form of the functions \(D_{il}(k_{0},R)\) and \(D_{km}(k_{0},R)\), given by the sum of the zero-temperature, Eq. (11), and finite-temperature, Eq. (12), contributions. This can be done by noting that \[\left(\delta_{ik}+\frac{\nabla_{i}\nabla_{k}}{k_{0}^{2}}\right) \left(-\frac{e^{i|k_{0}|R}}{R}\right)G_{0,\beta} \tag{26}\] \[=\left(\delta_{ik}\left(1+\frac{i}{|k_{0}|R}-\frac{1}{k_{0}^{2}R ^{2}}\right)\right.\] \[\left.+\frac{x_{i}x_{k}}{R^{2}}\left(\frac{3}{k_{0}^{2}R^{2}}- \frac{3i}{|k_{0}|R}-1\right)\right)\left(-\frac{e^{i|k_{0}|R}}{R}\right)G_{0, \beta}\,,\] and \[\left(\delta_{ik}+\frac{\nabla_{i}\nabla_{k}}{k_{0}^{2}}\right) \left(+\frac{e^{-i|k_{0}|R}}{R}\right)G_{0,\beta} \tag{27}\] \[=\left(\delta_{ik}\left(1-\frac{i}{|k_{0}|R}-\frac{1}{k_{0}^{2}R^ {2}}\right)\right.\] \[\left.+\frac{x_{i}x_{k}}{R^{2}}\left(\frac{3}{k_{0}^{2}R^{2}}+ \frac{3i}{|k_{0}|R}-1\right)\right)\left(+\frac{e^{-i|k_{0}|R}}{R}\right)G_{0, \beta}\,,\] where \(G_{0}=1\) and \(G_{\beta}=n_{\beta}(|k_{0}|)\). Substituting Eqs. (22), (23) and the zero-temperature part of \(D_{il}(k_{0},R)\) given by Eq. (11) into Eq. (21), and taking into account Eq. (26), we obtain the following well-known expression for the interaction energy of two identical atoms (\(A=B\)) in the ground state at \(T=0\): \[U^{0}(R)=\frac{i}{2\pi R^{2}}\int\limits_{-\infty}^{\infty}dk_{0}k_{0}^{4} \alpha_{A}(k_{0})\alpha_{A}(k_{0}) \tag{28}\] \[\times e^{2i|k_{0}|R}F_{1}(|k_{0}|,R),\] where \[F_{1}(k_{0},R)=1+\frac{2i}{k_{0}R}-\frac{5}{(k_{0}R)^{2}}-\frac{6i}{(k_{0}R)^ {3}}+\frac{3}{(k_{0}R)^{4}}. \tag{29}\] When one of the atoms is in an excited state (\(A\neq B\)) it is necessary to make the substitution \[\alpha_{A}\alpha_{A}\rightarrow\alpha_{A}\alpha_{B}\pm\alpha_{\overline{A}B} \alpha_{A\overline{B}}. \tag{30}\] This generalization was recently obtained for the long-range interaction of two hydrogen atoms in the \(1s\) and \(2s\) states in [20]. A similar equation can be obtained for the finite temperature part by substituting Eq. (12) and taking into account Eqs. (26) and (27). 
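The tensor algebra behind Eq. (29), and behind the function \(F_{2}\) that appears in Eq. (32) below, can be checked symbolically. The following sympy sketch (an illustration, not taken from the original) contracts two propagator tensors of the form \(A\,\delta_{ik}+B\,x_{i}x_{k}/R^{2}\) read off from Eqs. (26)-(27), with \(u=1/(k_{0}R)\); overall prefactors and diagram multiplicities are handled separately in the text.

```python
import sympy as sp

u = sp.symbols('u', positive=True)  # u = 1/(k0*R)

# Tensor structures from Eqs. (26)-(27): T_ik = A*delta_ik + B*x_i*x_k/R^2,
# for the outgoing (e^{+i|k0|R}) and incoming (e^{-i|k0|R}) branches.
A_p, B_p = 1 + sp.I*u - u**2, -1 - 3*sp.I*u + 3*u**2
A_m, B_m = 1 - sp.I*u - u**2, -1 + 3*sp.I*u + 3*u**2

def contract(A1, B1, A2, B2):
    # sum_{i,k} T1_ik * T2_ik with unit vector x/R:
    # equals 3*A1*A2 + A1*B2 + A2*B1 + B1*B2
    return sp.expand(3*A1*A2 + A1*B2 + A2*B1 + B1*B2)

F1 = 1 + 2*sp.I*u - 5*u**2 - 6*sp.I*u**3 + 3*u**4   # Eq. (29)
F2 = 1 + u**2 + 3*u**4                              # Eq. (32) below

print(sp.simplify(contract(A_p, B_p, A_p, B_p) - 2*F1))  # -> 0
print(sp.simplify(contract(A_p, B_p, A_m, B_m) - 2*F2))  # -> 0
```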
Finally, we find the thermal correction to the long-range interaction as \[U^{\beta}(R)=\frac{i}{\pi R^{2}}\int\limits_{-\infty}^{\infty}dk_{0}k_{0}^{4} \alpha_{A}(k_{0})\alpha_{A}(k_{0}) \tag{31}\] \[\times e^{2i|k_{0}|R}F_{1}(|k_{0}|,R)n_{\beta}(|k_{0}|)\] \[-\frac{i}{\pi R^{2}}\int\limits_{-\infty}^{\infty}dk_{0}k_{0}^{4} \alpha_{A}(k_{0})\alpha_{A}(k_{0})F_{2}(|k_{0}|,R)n_{\beta}(|k_{0}|)\,,\] where \(F_{1}\) is given by Eq. (29) and \(F_{2}\) is \[F_{2}(k_{0},R)=1+\frac{1}{(k_{0}R)^{2}}+\frac{3}{(k_{0}R)^{4}}\,. \tag{32}\] Again, when one of the atoms is in an excited state, it is necessary to make the substitution given by Eq. (30). Below we consider various limits of the thermal potential defined by Eq. (31). Note that the second term in Eq. (31) arises due to the presence of the additional term \(\frac{e^{-i|k_{0}|R}}{R}\) in the equation for \(D^{\beta}_{ij}\), see Eq. (12). In contrast to the previous considerations [11; 12; 13], the obtained finite temperature potential consists of these two contributions. It is interesting to note that the second term, with the same power expansion as in Eq. (32), was recently obtained in [22] for the long-range interaction between two atoms embedded in an external electromagnetic field. The crucial point is that, according to the expression (31), both summands contain a divergence at \(k_{0}=0\) arising from the \(\frac{3}{(k_{0}R)^{4}}\) term in Eqs. (29) and (32), and only the total expression is infrared finite. Furthermore, it is worth highlighting that the sum of two exponentials (see Eq. (12)), which arises specifically in the thermal case, yields the correct result when calculating the temperature-dependent self-energy of a bound electron [15]. In the non-relativistic limit, the expression found in [15] exactly coincides with the formula for the thermal Stark shift as obtained within the framework of quantum mechanical perturbation theory [23]. ## III Short-range limit of interatomic interaction at finite temperature Before proceeding to the analysis of the asymptotic behavior of the temperature correction to the long-range potential, we consider various limits for the leading non-thermal contribution. In the short-range limit (\(a_{0}\ll R\ll\lambda_{0}\), where \(a_{0}\sim 1/(m\alpha Z)\) is the Bohr radius and \(\lambda_{0}\sim 1/(m(\alpha Z)^{2})\) is the typical atomic wavelength in relativistic units) the expression (28) can be reduced to \[U^{0}(R)=-\frac{C_{6}}{R^{6}}, \tag{33}\] where the coefficient \(C_{6}\) is defined by \[C_{6}=-\frac{3i}{\pi}\int\limits_{0}^{\infty}dk_{0}\alpha_{A}(k_{0})\alpha_{A} (k_{0})\,. \tag{34}\] The integration over \(k_{0}\) in Eq. (34) can be carried out both analytically (with the use of the residue theorem) and numerically. The summation over the entire spectrum in Eqs. (23)-(25) is commonly performed numerically using, for example, the B-splines approach for solutions of the Schrödinger equation [24]. In the numerical results below, we also treat hydrogen atoms in the limit of infinite nucleus mass. Note that the integral in Eq. (34) is purely real despite the fact that the polarizability in the general case is a complex quantity (see the proof of this statement in Appendix B). As a consequence, the imaginary part of the interaction potential appears only in the following orders of the expansion over powers of \(k_{0}R\) in Eq. (28), see, e.g., [21; 25]. For two identical hydrogen atoms in states \(A=B=1s\) we arrive at the known result \(C_{6}=6.499\) in atomic units (hereafter a.u.). 
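As a rough sanity check on this number (and only that: the paper's value comes from the full frequency integral in Eq. (34) with B-spline sums over the spectrum), one can use the single-oscillator London approximation \(C_{6}\approx\tfrac{3}{4}E_{\mathrm{eff}}\,\alpha^{2}(0)\) with the well-known static value \(\alpha_{1s}(0)=4.5\) a.u. and the ionization energy \(0.5\) a.u. as the effective energy; these inputs are standard hydrogen facts, not quantities taken from the text.

```python
# London (single-oscillator) order-of-magnitude estimate of C6 for H(1s)-H(1s).
alpha0 = 4.5   # static dipole polarizability of H(1s), a.u. (textbook value)
E_eff = 0.5    # effective excitation energy, taken as the 1s ionization energy, a.u.
C6_london = 0.75 * E_eff * alpha0**2
print(C6_london)   # ~7.6 a.u., same order as the exact 6.499 a.u. quoted above
```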
For the \(1s-2s\) interaction the resulting \(C_{6}\) constant becomes symmetry dependent according to the substitution given by Eq. (30). This leads to the dispersion constant \(C_{6}=176.735\pm 27.98\) a.u., which is in perfect agreement with the results given in [20; 21]. As shown in these works, in the case of quasi-degenerate states (e.g., levels \(2s\) and \(2p\) in hydrogen) it is also necessary to consider separately the interval \(\lambda_{0}\ll R\ll\lambda_{L}\) with \(\lambda_{L}=\frac{1}{m(\alpha Z)^{4}}\) (the wavelength of the Lamb shift in relativistic units), in which the general dependence \(R^{-6}\) still remains. Then in this range \(C_{6}=121.489\pm 46.61\) a.u. Let us proceed to the analysis of various asymptotics of the finite-temperature contribution to the interaction potential given by expression (31). As was emphasized in [3], in the thermal case the interatomic distance parameter \(R\) correlates with the temperature \(T\). Understanding this phenomenon is straightforward: the main contribution to the integral is provided by the poles of the function \(\alpha_{A}(k_{0})\alpha_{B}(k_{0})\), which lie within the region bounded by the function \(k_{0}^{4}n_{\beta}(k_{0})\). The latter has a maximum at \(k_{0}\sim k_{B}T\) and exponentially decreasing wings. Thus, in the thermal case, the asymptotic behavior of the potential is determined by the behavior of the oscillating exponential factor \(e^{2ik_{0}R}\) in two different regions: 1) the short-range (SR) limit \(a_{0}\ll R\ll\frac{1}{k_{0}}\), which also implies the thermal condition \(a_{0}\ll R\ll\frac{1}{k_{B}T}\), and 2) the long-range (LR) limit \(R\gg\frac{1}{k_{0}}\) with temperatures satisfying the inequality \(R\gg\frac{1}{k_{B}T}\). In the short-range limit we can set \(k_{0}R\ll 1\). Then, decomposing the exponential factor into a Taylor series in the vicinity of a small argument up to terms of order \(O((k_{0}R)^{6})\), we have \[U_{SR}^{\beta}(R)\approx\frac{2i}{\pi R^{2}}\int\limits_{0}^{ \infty}dk_{0}k_{0}^{4}\alpha_{A}\left(k_{0}\right)\alpha_{B}\left(k_{0}\right) \tag{35}\] \[\times\left[\frac{4}{15}i(k_{0}R)^{5}+\frac{2}{3}(k_{0}R)^{4}- \frac{4}{3}i(k_{0}R)^{3}-2(k_{0}R)^{2}+2i(k_{0}R)+1+O((k_{0}R)^{6})\right]F_{1 }(k_{0},R)n_{\beta}\left(k_{0}\right)\] \[-\frac{2i}{\pi R^{2}}\int\limits_{0}^{\infty}dk_{0}k_{0}^{4} \alpha_{A}\left(k_{0}\right)\alpha_{B}\left(k_{0}\right)F_{2}(k_{0},R)n_{ \beta}\left(k_{0}\right)\,.\] Substitution of \(F_{1}\), \(F_{2}\) leads to \[U_{SR}^{\beta}(R)=\frac{2i}{\pi R^{2}}\int\limits_{0}^{\infty}dk_{0}k_{0}^{4} \alpha_{A}\left(k_{0}\right)\alpha_{B}\left(k_{0}\right)\left[1+\frac{1}{(k_{0} R)^{2}}+\frac{3}{(k_{0}R)^{4}}+\frac{22i}{15}(k_{0}R)-\frac{16}{15}(k_{0}R)^{2}+O((k_{0} R)^{3})\right]n_{\beta}\left(k_{0}\right) \tag{36}\] \[-\frac{2i}{\pi R^{2}}\int\limits_{0}^{\infty}dk_{0}k_{0}^{4} \alpha_{A}\left(k_{0}\right)\alpha_{B}\left(k_{0}\right)\left[1+\frac{1}{(k_{0 }R)^{2}}+\frac{3}{(k_{0}R)^{4}}\right]n_{\beta}\left(k_{0}\right)\] \[=-\frac{44}{15\pi R}\int\limits_{0}^{\infty}dk_{0}k_{0}^{5}\alpha _{A}\left(k_{0}\right)\alpha_{B}\left(k_{0}\right)n_{\beta}\left(k_{0}\right) -\frac{32i}{15}\int\limits_{0}^{\infty}dk_{0}k_{0}^{6}\alpha_{A}\left(k_{0} \right)\alpha_{B}\left(k_{0}\right)n_{\beta}(k_{0})\,.\] From Eq. (36) it is clearly seen that the terms proportional to \(1+\frac{1}{(k_{0}R)^{2}}+\frac{3}{(k_{0}R)^{4}}\) (see the second line of the equation) cancel out in the total result. 
Thus, the final expression represents the Coulomb behaviour plus a constant contribution. The latter arises from the second integral in the last line of Eq. (36) and depends only on the particular atomic polarizabilities. The resulting leading term confirms the conclusion reached earlier in [3], but with the caveat that both contributions under consideration, as will be shown below, may be complex. In addition, by going over to atomic units in Eq. (36), we find that the temperature dependence is of order \(\alpha^{5}\), which is consistent with the result of [26]. The final expression for the short-range limit can be written in compact form with the dispersion constant \(C_{1}^{\beta}\) as follows: \[U_{SR}^{\beta}(R)=-\frac{C_{1}^{\beta}}{R}+C_{0}^{\beta}, \tag{37}\] where \[C_{1}^{\beta}=\frac{44}{15\pi}\int\limits_{0}^{\infty}dk_{0}k_{0}^{5}\alpha_{ A}(k_{0})\alpha_{B}(k_{0})n_{\beta}(k_{0})\,, \tag{38}\] \[C_{0}^{\beta}=-\frac{32\,i}{15}\int\limits_{0}^{\infty}dk_{0}k_{0}^{6}\alpha_ {A}\left(k_{0}\right)\alpha_{B}\left(k_{0}\right)n_{\beta}(k_{0})\,. \tag{39}\] It is important to note here that, in contrast to the zero temperature case, both constants of the leading order of the short-range limit are complex due to the definitions (23)-(25). A detailed analysis of the origin of the imaginary contributions to \(C_{1}^{\beta}\) and \(C_{0}^{\beta}\) is presented in Appendix B. From a physical standpoint, this signifies the manifestation of line broadening due to induced transitions in the presence of the external field of blackbody radiation. This effect is well known and can be obtained by examining the imaginary part of the thermal self-energy correction of the bound electron [23; 27; 15; 28]. Additionally, one can consider the asymptotics of Eq. (37) when \(k_{B}T\ll m(\alpha Z)^{2}\) (i.e. the temperature is much less than the binding energy). For two hydrogen atoms (\(Z=1\)) in the ground state \(A=B=1s\) this inequality is valid up to temperatures \(T\sim 10^{4}\) K. This implies that the dynamic polarizability \(\alpha_{1s}(k_{0})\) can be replaced by its static value \(\alpha_{1s}(0)\), which is purely real. Performing the integration over the frequency, we obtain \[C_{1}^{\beta}=-\frac{352\pi^{5}}{945}(k_{B}T)^{6}\alpha_{1s}^{2}(0)\,, \tag{40}\] \[C_{0}^{\beta}=-1536\,i\,\zeta(7)(k_{B}T)^{7}\alpha_{1s}^{2}(0)\,, \tag{41}\] where \(\zeta(s)\) is the Riemann zeta function. At \(T=300\) K one finds \(C_{1}^{\beta}=-3.51\times 10^{-26}\) a.u. and \(C_{0}^{\beta}=-i\,3.3\times 10^{-30}\) a.u. These estimates lead to an energy shift and a level broadening on the order of \(10^{-10}\) Hz and \(10^{-14}\) Hz, respectively, which are negligible. If one or both atoms are in an excited state, this estimate is no longer applicable. In this case it is necessary to take into account the quasi-degenerate states available in the sum over the entire spectrum in the expression for the polarizabilities of the atoms. In particular, one should consider states of opposite parity separated by the Lamb shift or a fine-structure interval, which are of order \(m\alpha(\alpha Z)^{4}\) and \(m(\alpha Z)^{4}\), respectively. ## IV Long-range limit of interatomic interaction at finite temperature In the long-range (LR) limit, \(k_{0}R\gg 1\) (this also implies \(R\gg 1/(k_{B}T)\)), only the first terms in Eqs. (29) and (32) remain important, i.e. we can set \(F_{1}=F_{2}=1\). 
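Before turning to the long-range limit, the quoted \(T=300\) K values can be reproduced numerically from the static-limit closed forms (40)-(41). In the sketch below (not part of the original calculation) we assume that these closed forms are written in relativistic units, so that the conversion to atomic units brings factors \(\alpha^{5}\) and \(\alpha^{6}\), respectively; this bookkeeping is our reading, consistent with the order-\(\alpha^{5}\) scaling noted after Eq. (36), and \(\alpha_{1s}(0)=4.5\) a.u. is the standard hydrogen value.

```python
import math

ALPHA = 1 / 137.035999
KB_AU = 3.16681e-6        # Boltzmann constant in a.u., as quoted in the text
alpha0 = 4.5              # static polarizability of H(1s), a.u.
zeta7 = 1.0083493         # Riemann zeta(7)

kT = KB_AU * 300          # k_B * T at T = 300 K, in a.u.

# Eqs. (40)-(41); the factors ALPHA**5 and ALPHA**6 below are our assumed
# relativistic-to-atomic-unit conversion, not stated explicitly in the text.
C1 = -(352 * math.pi**5 / 945) * kT**6 * alpha0**2 * ALPHA**5
C0_im = -1536 * zeta7 * kT**7 * alpha0**2 * ALPHA**6

print(C1)      # ~ -3.5e-26 a.u., matching the quoted -3.51e-26
print(C0_im)   # ~ -3.3e-30 a.u., matching the quoted -i*3.3e-30
```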
This leads to the expression: \[U_{LR}^{\beta}(R)\approx\frac{2i}{\pi R^{2}}\int\limits_{0}^{\infty}dk_{0}k_{0}^ {4}\alpha_{A}\left(k_{0}\right)\alpha_{B}\left(k_{0}\right)e^{2ik_{0}R}n_{ \beta}\left(k_{0}\right) \tag{42}\] \[-\frac{2i}{\pi R^{2}}\int\limits_{0}^{\infty}dk_{0}k_{0}^{4}\alpha_{A}\left(k_ {0}\right)\alpha_{B}\left(k_{0}\right)n_{\beta}\left(k_{0}\right)\,.\] In Eq. (42), the second term obviously falls off more slowly with increasing \(R\) than the first term (due to the absence of the highly oscillating exponential under the integral). Therefore, in the long-range limit for the leading contribution, we find \[U_{LR}^{\beta}(R)\approx\frac{B_{2}^{\beta}}{R^{2}}, \tag{43}\] where \[B_{2}^{\beta}=-\frac{2i}{\pi}\int\limits_{0}^{\infty}dk_{0}k_{0}^{4}\alpha_{A} \left(k_{0}\right)\alpha_{B}\left(k_{0}\right)n_{\beta}\left(k_{0}\right)\,. \tag{44}\] In complete analogy with the reasoning in section III, this contribution also yields a complex number (see Appendix B). The energy shift in the LR limit defined by Eq. (43) corresponds to the \(R^{-2}\) dependence. Numerical evaluation for two \(1s-1s\) atoms at \(T=300\) K gives \(B_{2}^{\beta}=7.75\times 10^{-43}-i\,7.04\times 10^{-22}\) a.u., which is also insignificant compared to the present level of experimental accuracy. ## V Results and discussion In this study, a rigorous quantum electrodynamics derivation of the long-range potential between two atoms placed in an equilibrium thermal radiation environment is presented. We explore two asymptotic behaviors of the resulting expression (the short-range and long-range limits). The ultimate outcome for the temperature correction is expressed by Eq. (31), differing from the result obtained through a phenomenological generalization of the known zero-temperature result [11; 29]. The latter approach results in a different behavior in the short-range limit, leading to a significant overestimation of the interaction induced by the thermal environment compared with the result presented in this work. To analyze the statement above, we briefly compare our results with the estimates given in [13; 30]. In the short-range limit, the following expression, see Eq. (10) in [13], for the total potential (zero + finite temperature) was found: \[\tilde{U}^{0+\beta}(R) =-\frac{3\pi}{R^{6}}\int\limits_{0}^{\infty}dk_{0}\alpha_{A} \left(k_{0}\right)\alpha_{B}\left(k_{0}\right) \tag{45}\] \[\times\coth\left(\frac{k_{0}}{2k_{B}T}\right)\sin\left(2k_{0}R \right)\,.\] Assuming that \(k_{0}R\ll 1\), the leading thermal contribution of Eq. (45) can be estimated as \[\tilde{U}^{\beta}(R)\approx-\frac{12\pi}{R^{5}}\int\limits_{0}^{\infty}dk_{0} k_{0}\alpha_{A}\left(k_{0}\right)\alpha_{B}\left(k_{0}\right)n_{\beta}(k_{0})\,, \tag{46}\] where we took into account the equality \(\coth\left(\frac{x}{2k_{B}T}\right)=1+2n_{\beta}(x)\). This equation is also infrared finite and can be evaluated numerically. For the two hydrogen atoms in their ground states and \(T=300\) K, we find \(\tilde{U}^{\beta}(R)=-\frac{8.38\times 10^{-7}}{R^{5}}\) a.u. At \(R=10\) a.u., this leads to a thermal shift of the order of \(8.38\times 10^{-12}\) a.u., which is much larger than the same shift defined by Eq. (37) with the calculated coefficient \(C_{1}^{\beta}\) given in Table 1. 
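The quoted imaginary part of \(B_{2}^{\beta}\) can likewise be cross-checked in the static approximation, where Eq. (44) reduces to the standard Bose integral \(\int_{0}^{\infty}k_{0}^{4}n_{\beta}(k_{0})\,dk_{0}=\Gamma(5)\zeta(5)(k_{B}T)^{5}\). The sketch below uses the same assumed unit bookkeeping as for \(C_{1}^{\beta}\) and \(C_{0}^{\beta}\) above, here with a factor \(\alpha^{4}\); this conversion is an assumption on our part.

```python
import math

ALPHA = 1 / 137.035999
KB_AU = 3.16681e-6
alpha0 = 4.5           # static polarizability of H(1s), a.u.
zeta5 = 1.0369278      # Riemann zeta(5)

kT = KB_AU * 300       # k_B * T at 300 K, a.u.

# Im B2 from Eq. (44) in the static limit:
#   -(2/pi) * Gamma(5) * zeta(5) * (kT)^5 * alpha0^2,
# times an assumed alpha^4 relativistic-to-atomic-unit conversion factor.
B2_im = -(2 / math.pi) * 24 * zeta5 * kT**5 * alpha0**2 * ALPHA**4
print(B2_im)   # ~ -7.0e-22 a.u., matching the quoted -i*7.04e-22
```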
Based on the fundamentals of the theory used in this study (thermal radiation is treated rigorously in the framework of QED at finite temperature rather than phenomenologically), we conclude that the previous results on the thermal interaction between two atoms, Eq. (45), are significantly overestimated. Numerical calculations presented in Tables 1 and 2 show that only for highly excited states does the corresponding energy shift barely reach a value on the order of 1 Hz (for interatomic distances on the order of ten Bohr radii) at room temperature. However, even this value is far beyond the accuracy achievable in modern experiments measuring transition frequencies involving Rydberg states. The approach developed in this work also permits the generalization of thermal corrections to the interaction of an atom with a wall, taking into account the interaction of multiple distributed atoms. For an ensemble of atoms, the resultant contribution should be notably greater. This is confirmed experimentally in the case of a Bose-Einstein condensate of \({}^{87}\)Rb atoms located a few microns from a dielectric substrate [9]. Furthermore, there exists a discrepancy between theory and experiment, notably concerning the interaction of an atom with a graphene layer [31]. It is important to emphasize that the result obtained here quantifies the disparity between a rigorous QED derivation and the phenomenological approach. \begin{table} \begin{tabular}{c c c c} \hline States \(a-b\) & T=300 & T=1000 & T=\(10^{4}\) \\ \hline \multicolumn{4}{c}{\(\mathrm{Re}\,C_{1}^{\beta}\)} \\ 1s-1s & \(-3.51\times 10^{-26}\) & \(-4.84\times 10^{-23}\) & \(-8.77\times 10^{-17}\) \\ 2s-2s & \(-2.53\times 10^{-23}\) & \(-4.08\times 10^{-20}\) & \(5.72\times 10^{-15}\) \\ 3s-3s & \(-2.02\times 10^{-21}\) & \(-1.37\times 10^{-18}\) & \(-5.48\times 10^{-15}\) \\ 4s-4s & \(-8.04\times 10^{-20}\) & \(-3.91\times 10^{-17}\) & \(-1.66\times 10^{-14}\) \\ 8s-8s & \(2.11\times 10^{-17}\) & \(-1.21\times 10^{-15}\) & \(-1.98\times 10^{-13}\) \\ & \(\mathrm{Im}\,C_{1}^{\beta}\) & & \\ 2s-2s & \(-1.03\times 10^{-42}\) & \(-8.39\times 10^{-23}\) & \(4.42\times 10^{-15}\) \\ 3s-3s & \(-3.76\times 10^{-25}\) & \(-5.00\times 10^{-18}\) & \(1.56\times 10^{-14}\) \\ 4s-4s & \(-3.62\times 10^{-20}\) & \(-1.37\times 10^{-18}\) & \(1.42\times 10^{-14}\) \\ 8s-8s & \(-2.14\times 10^{-16}\) & \(-9.87\times 10^{-16}\) & \(-3.06\times 10^{-15}\) \\ \hline \end{tabular} \end{table} Table 1: The real and imaginary parts of the dispersion coefficient \(C_{1}^{\beta}\) (see Eq. (38)) for two hydrogen atoms in states \(a\) and \(b\) (first column) at different temperatures \(T\) in Kelvin. All values are given in atomic units. The imaginary part of \(C_{1}^{\beta}\) for two atoms in the ground state (\(1s-1s\)) is completely insignificant and therefore is not presented. ## VI Acknowledgements This work was supported by the foundation for the advancement of mathematics and theoretical physics "BASIS" (grant No. 23-1-3-31-1) and President grant MK-4796.2022.1.2. Evaluation of the long-range contribution (section IV) was supported by the Russian Science Foundation under grant No. 22-12-00043. ## Appendix A Fourier transform of photon propagators Following [17] we define the 4-dimensional Fourier transform of a function \(f(k)\) as follows \[f(x)=\int\frac{d^{4}k}{(2\pi)^{4}}e^{-ikx}f(k)\,. \tag{A1}\]
To find the corresponding coordinate representation of the total photon propagator, one should apply the above transformation to the sum of the zero- and finite-temperature parts given in momentum space: \[D_{\mu\nu}(k)=D^{0}_{\mu\nu}(k)+D^{\beta}_{\mu\nu}(k)\,. \tag{A2}\] Then, in the temporal gauge the corresponding evaluation for the first term in Eq. (A2) yields \[D^{0}_{ij}(x_{1},x_{2})=\int\frac{d^{4}k}{(2\pi)^{4}}e^{-ik(x_{1} -x_{2})}D^{0}_{ij}(k)= \tag{A3}\] \[-4\pi i\int\frac{dk_{0}}{2\pi}e^{-ik_{0}(t_{1}-t_{2})}\int\frac{ d^{3}k}{(2\pi)^{3}}\frac{e^{i\mathbf{k}(\mathbf{r}_{1}-\mathbf{r}_{2})}}{k_{0}^{2}-\mathbf{k}^{2}} \left(\delta_{ij}-\frac{k_{i}k_{j}}{k_{0}^{2}}\right)\] \[=-\frac{i}{2\pi}\int\limits_{-\infty}^{\infty}dk_{0}e^{-ik_{0}(t _{1}-t_{2})}\left(\delta_{ij}+\frac{\nabla_{i}\nabla_{j}}{k_{0}^{2}}\right) \frac{e^{i|k_{0}|r_{12}}}{r_{12}}.\] Finally, we can write \[D^{0}_{ij}(x_{1},x_{2})=\frac{i}{2\pi}\int\limits_{-\infty}^{\infty}dk_{0}e^{ -ik_{0}(t_{1}-t_{2})}D^{0}_{ij}(k_{0},r_{12}), \tag{A4}\] where we introduced the notation \[D^{0}_{ij}(k_{0},r)=-\left(\delta_{ij}+\frac{\nabla_{i}\nabla_{j}}{k_{0}^{2}} \right)\frac{e^{i|k_{0}|r}}{r}. \tag{A5}\] The temperature-dependent part in Eq. (A2) is \[D^{\beta}_{ij}(x_{1},x_{2})=\int\frac{d^{4}k}{(2\pi)^{4} }e^{-ik(x_{1}-x_{2})}D^{\beta}_{ij}(k) \tag{A6}\] \[=8\pi^{2}\int\frac{dk_{0}}{2\pi}e^{-ik_{0}(t_{1}-t_{2})}\int \frac{d^{3}k}{(2\pi)^{3}}e^{i\mathbf{k}(\mathbf{r}_{1}-\mathbf{r}_{2})}\frac{n_{\beta}(|\mathbf{k} |)}{2|\mathbf{k}|}\] \[\qquad\times(\delta(k_{0}-|\mathbf{k}|)+\delta(k_{0}+|\mathbf{k}|))\left( \delta_{ij}-\frac{k_{i}k_{j}}{k_{0}^{2}}\right)=\] \[\int\limits_{-\infty}^{+\infty}dk_{0}e^{-ik_{0}(t_{1}-t_{2})} \left(\delta_{ij}+\frac{\nabla_{i}\nabla_{j}}{k_{0}^{2}}\right)\frac{\sin(|k_ {0}|r_{12})}{\pi r_{12}}n_{\beta}(|k_{0}|).\] For our purposes it is convenient to rewrite Eq. (A6) in terms of two contributions: \[D^{\beta}_{ij}(x_{1},x_{2})=\frac{i}{2\pi}\int\limits_{-\infty}^{+\infty}dk_ {0}e^{-ik_{0}(t_{1}-t_{2})} \tag{A7}\] \[\times\left(\delta_{ij}+\frac{\nabla_{i}\nabla_{j}}{k_{0}^{2}}\right)\left\{- \frac{e^{i|k_{0}|r_{12}}}{r_{12}}+\frac{e^{-i|k_{0}|r_{12}}}{r_{12}}\right\}n _{\beta}(|k_{0}|).\] Similar to Eq. (A4), the result of the Fourier transform of the thermal part can be written as follows: \[D^{\beta}_{ij}(x_{1},x_{2})=\frac{i}{2\pi}\int\limits_{-\infty}^{\infty}dk_{0} e^{-ik_{0}(t_{1}-t_{2})}D^{\beta}_{ij}(k_{0},r_{12}), \tag{A8}\] together with the notation \[D^{\beta}_{ij}(k_{0},r)=\left(\delta_{ij}+\frac{\nabla_{i}\nabla _{j}}{k_{0}^{2}}\right) \tag{A9}\] \[\times\left\{-\frac{e^{i|k_{0}|r}}{r}+\frac{e^{-i|k_{0} |r}}{r}\right\}n_{\beta}(|k_{0}|).\] The sum of the two equations (A5) and (A9) (often called the 'mixed' representation of the photon propagator [17]), \[D_{ij}(k_{0},r)=D^{0}_{ij}(k_{0},r)+D^{\beta}_{ij}(k_{0},r), \tag{A10}\] is then used in the evaluation of the interaction potential given by Eq. (21).
## Appendix B Imaginary part of integrals with squared polarizability In this Appendix we describe how the imaginary part of the integrals with the squared atomic polarizability contributes to the dispersion coefficients in the zero and finite temperature cases. \begin{table} \begin{tabular}{c c c c} \hline States \(a-b\) & T=300 & T=1000 & T=\(10^{4}\) \\ \hline \multicolumn{4}{c}{\(\mathrm{Re}\,C^{\beta}_{0}\)} \\ 2s-2s & \(1.34\times 10^{-46}\) & \(9.24\times 10^{-26}\) & \(1.24\times 10^{-17}\) \\ 3s-3s & \(1.46\times 10^{-28}\) & \(1.62\times 10^{-21}\) & \(-1.55\times 10^{-17}\) \\ 4s-4s & \(6.02\times 10^{-24}\) & \(-4.14\times 10^{-21}\) & \(-9.44\times 10^{-18}\) \\ 8s-8s & \(7.52\times 10^{-21}\) & \(-3.29\times 10^{-20}\) & \(2.97\times 10^{-22}\) \\ & \(\mathrm{Im}\,C^{\beta}_{0}\) & & \\ 1s-1s & \(-3.31\times 10^{-30}\) & \(-1.52\times 10^{-26}\) & \(-2.88\times 10^{-19}\) \\ 2s-2s & \(-2.40\times 10^{-27}\) & \(-1.37\times 10^{-23}\) & \(5.35\times 10^{-18}\) \\ 3s-3s & \(-1.99\times 10^{-25}\) & \(2.34\times 10^{-22}\) & \(-1.46\times 10^{-17}\) \\ 4s-4s & \(-7.60\times 10^{-24}\) & \(7.62\times 10^{-21}\) & \(-2.16\times 10^{-17}\) \\ 8s-8s & \(-8.38\times 10^{-22}\) & \(-1.71\times 10^{-19}\) & \(-1.54\times 10^{-16}\) \\ \hline \end{tabular} \end{table} Table 2: The real and imaginary parts of the dispersion coefficient \(C^{\beta}_{0}\) (see Eq. (39)) for two hydrogen atoms in states \(a\) and \(b\) (first column) at different temperatures \(T\) in Kelvin. All values are in atomic units. The real part of \(C^{\beta}_{0}\) for the two atoms in the ground state (\(1s-1s\)) is completely negligible and therefore is not presented. Although in the present work all integrations are performed completely numerically, such an analysis is nevertheless instructive. For this purpose we consider the following model integral arising in Eqs. (38), (39) and (44) for the dispersion coefficients: \[J=\int_{0}^{\infty}\frac{f(k_{0})dk_{0}}{(a-k_{0}-i0)^{2}}, \tag{B1}\] where \(a\) is a real positive number and \(f(k_{0})\) is a function that is analytic on the real semi-axis and guarantees the convergence of \(J\). With the use of the Dirac prescription \[\frac{1}{a-k_{0}-i0}=\mathrm{P}\frac{1}{a-k_{0}}+i\pi\delta(k_{0}-a), \tag{B2}\] (\(\mathrm{P}\) stands for the integral in the sense of the principal value), Eq. (B1) can be simplified to \[J=\int_{0}^{\infty}\frac{f(k_{0})dk_{0}}{(a-k_{0}-i0)^{2}}=-\frac {\partial}{\partial a}\int_{0}^{\infty}\frac{f(k_{0})dk_{0}}{a-k_{0}-i0} \tag{B3}\] \[=-\frac{\partial}{\partial a}\int_{0}^{\infty}\left(\mathrm{P} \frac{1}{a-k_{0}}+i\pi\delta(k_{0}-a)\right)f(k_{0})dk_{0}\] \[=-\frac{\partial}{\partial a}\mathrm{P}\int_{0}^{\infty}\frac{f(k_{0})dk _{0}}{a-k_{0}}-i\pi\frac{\partial f(a)}{\partial a}.\] The first term in the last line of Eq. (B3) is purely real, while the second is imaginary (for real \(f\)) and nonzero for \(f(k_{0})\neq\mathrm{const}\). Since in the finite temperature case \(f(k_{0})=k_{0}^{N}n_{\beta}(k_{0})\) (here \(N>1\) is an integer), the corresponding dispersion coefficients \(C_{1}^{\beta}\) and \(C_{0}^{\beta}\), see Eqs. (38) and (39), become complex. The same holds for \(B_{2}^{\beta}\) given by Eq. (44).
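The identity (B3) is easy to probe numerically by keeping a small but finite \(i\epsilon\) in the denominator. The following sketch (an illustration with a hypothetical test function, not from the original) compares the imaginary part of the regularized integral with \(-\pi f^{\prime}(a)\).

```python
import math
from scipy.integrate import quad

a, eps = 1.0, 1e-2

def f(k0):
    return k0**2 * math.exp(-k0)   # a smooth test function, analytic on [0, inf)

# Im[1/(a - k0 - i*eps)^2] = 2*eps*(a - k0) / ((a - k0)^2 + eps^2)^2
def im_kernel(k0):
    d = a - k0
    return 2 * eps * d / (d * d + eps * eps) ** 2

# Truncate at 50 since f decays exponentially; flag the near-pole point for quad.
val, _ = quad(lambda k0: f(k0) * im_kernel(k0), 0, 50, points=[a], limit=200)
fprime = (2 * a - a**2) * math.exp(-a)    # f'(a) for the test function
print(val, -math.pi * fprime)             # both ~ -1.1557, as Eq. (B3) predicts
```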
2309.14197
Resilience for Loose Hamilton Cycles
We study the emergence of loose Hamilton cycles in subgraphs of random hypergraphs. Our main result states that the minimum $d$-degree threshold for loose Hamiltonicity relative to the random $k$-uniform hypergraph $H_k(n,p)$ coincides with its dense analogue whenever $p \geq n^{- (k-1)/2+o(1)}$. The value of $p$ is approximately tight for $d>(k+1)/2$. This is particularly interesting because the dense threshold itself is not known beyond the cases when $d \geq k-2$.
José D. Alvarado, Yoshiharu Kohayakawa, Richard Lang, Guilherme O. Mota, Henrique Stagni
2023-09-25T14:57:39Z
http://arxiv.org/abs/2309.14197v1
# Resilience for loose Hamilton cycles ###### Abstract. We study the emergence of loose Hamilton cycles in subgraphs of random hypergraphs. Our main result states that the minimum \(d\)-degree threshold for loose Hamiltonicity relative to the random \(k\)-uniform hypergraph \(\mathrm{H}_{k}(n,p)\) coincides with its dense analogue whenever \(p\geq n^{-(k-1)/2+o(1)}\). The value of \(p\) is approximately tight for \(d>(k+1)/2\). This is particularly interesting because the dense threshold itself is not known beyond the cases when \(d\geq k-2\). This research was partly supported by DFG (450397222), H2020-MSCA (101018431), FAPESP (2021/11020-9, 2018/04876-1, 2019/13364-7), CNPq (311412/2018-1, 406248/2021-4, 306620/2020-0, 406248/2021-4) and CAPES (Finance Code 001). CAPES is the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. CNPq is the National Council for Scientific and Technological Development of Brazil. FAPESP is the São Paulo Research Foundation. ## 1. Introduction 
It has been shown that for \(k=3\) it suffices to have \(p\geq C\max\left\{n^{-3/2},\,n^{d-3}\right\}\log n\) for some \(C>0\). However, for all we know, the threshold for the property described in Theorem 1.2 could even be of order \(n^{d-k}\log n\) for the whole range of \(d\). Resilience for Hamiltonicity has also been investigated for other types of hypercycles. In particular, Clemens, Ehrenmüller and Person [10] studied Hamilton Berge cycles (which are less restrictive than loose cycles) in \(3\)-graphs, and Allen, Parczyk and Pfenninger [3] studied tight Hamilton cycles (which are more restrictive than loose cycles) for \(d=k-1\). Moreover, Ferber and Kwan [17] proved an analogous result to Theorem 1.2 for perfect matchings, which gives approximately tight bounds for \(p\) whenever \(d>k/2\). We also note that Hamiltonicity has been studied in other random graph models such as 'random perturbation' [7, 30] and 'random robustness' [25, 27, 31]. We return to the latter in Section 10. Our proof is based on the absorption method in combination with embedding results for the (Weak) Hypergraph Regularity Lemma. We also benefit from a framework of Ferber and Kwan [17], which was introduced to tackle resilience problems for matchings. The main difference to matchings is that Hamilton cycles come with a notion of connectivity. Hence, to prove Theorem 1.2 we need to extend this framework with further ideas in essentially every step. An important constraint for our proof is that the value of \(\mu_{d}(k)\) is unknown in almost all cases. In contrast to the aforementioned results for Hamiltonicity [3, 10, 40], we therefore cannot rely on any structural insights of past work regarding \(\mu_{d}(k)\). Our strategy thus uses only the mere existence of loose Hamilton cycles to extract certain characteristic properties, such as connectivity and covering most vertices, and to relate them to the random setting. Unfortunately, this is not sufficient in certain critical situations, where multiple Hamilton cycles have to be combined. We overcome the issues arising in this situation by showing that the threshold \(\mu_{d}(k)\) actually allows us to find a Hamilton cycle with additional properties, such as certain vertices being far apart from each other. Hence, in order to find a Hamilton cycle in the sparse setting, we develop in parallel a way to find an 'enhanced' Hamilton cycle in the dense setting. This process is illustrated in Figure 1. The rest of the paper is organised as follows. In the next section, we present two main lemmas from which we derive Theorem 1.2. 
In Section 3, we introduce a series of auxiliary results and machinery that we deploy throughout the proofs. The rest of the paper, Sections 4-8, is dedicated to the proofs of the above mentioned two lemmas. ## 2. Proof of Theorem 1.2 (Main result) We apply the method of absorption to find a Hamilton cycle in the proof of Theorem 1.2. Informally, this technique separates the argument into two parts. First we find a special path \(A\) that allows us to integrate any small set of vertices into a larger path. Then we cover all but few vertices with a loose cycle that contains \(A\) as a subpath. We conclude the proof by using the property of \(A\). A _loose path_\(P\) in a \(k\)-graph is a sequence of edges such that each two consecutive edges overlap in exactly one vertex, and no pair of non-consecutive edges have vertices in common. If the context is clear, we simply speak of a _path_. The _order_ of \(P\) is the number of its vertices. For vertices \(u\) and \(v\), we say that \(P\) is a _loose \((u,v)\)-path_ if \(u\) is in the first edge, \(v\) is in the last edge and no other edge contains \(u\) or \(v\). For convenience, the constant hierarchies are expressed in standard \(\ll\)-notation in the remainder of the paper. Moreover, given an eventually positive function \(f(n)\) of \(n\), the expression \(\omega(f(n))\) denotes a function \(g(n)\) such that \(g(n)/f(n)\to\infty\) as \(n\to\infty\). **Lemma 2.1** (Sparse Absorption Lemma).: _Let \(\eta\ll\alpha\ll 1/k\), \(1/d\), \(\gamma\) and \(1/C\ll 1/k,\gamma\) with \(k\geq 3\) and \(p\geq\max\{n^{-(k-1)/2+\gamma},Cn^{-(k-d)}\log n\}\). Then w.h.p. \(G\sim\mathrm{H}_{k}(n,p)\) has the following property._ _For any spanning subgraph \(G^{\prime}\subseteq G\) with \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p{n-d\choose k-d}\), there is a set \(A\subseteq V(G^{\prime})\) with \(|A|\leq\alpha n\) and two vertices \(u,v\in A\) such that for any subset \(W\subseteq V(G^{\prime})\setminus A\) with \(|W|\leq\eta n\) divisible by \(k-1\), the induced graph \(G^{\prime}[A\cup W]\) has a loose \((u,v)\)-path covering \(A\cup W\)._ The next lemma allows us to cover most vertices with a single loose path. **Lemma 2.2** (Sparse Cover Lemma).: _Let \(1\leq d\leq k-1\) with \(k\geq 3\), \(\eta>0\), \(\alpha\ll 1/k\), \(1/d\), \(\gamma\) and \(1/C\ll 1/k,\gamma\), and let \(p\geq\max\{Cn^{-(k-d)}\log n,\,Cn^{-(k-2)}\log n\}\). Then w.h.p. \(G\sim\mathrm{H}_{k}(n,p)\) has the following property._ _For any spanning subgraph \(G^{\prime}\subseteq G\) with \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p{n-d\choose k-d}\), \(Q\subseteq V(G^{\prime})\) with \(|Q|\leq\alpha n\) and \(u,v\in V(G^{\prime})\setminus Q\), there is a loose \((u,v)\)-path \(P\) in \(G^{\prime}-Q\) that covers all but \(\eta n\) vertices of \(G^{\prime}-Q\)._ These two lemmas combine easily to a proof of our main result. Proof of Theorem 1.2.: The case when \(k=2\) is covered by the work of Lee and Sudakov [36]. So in the following we assume that \(k\geq 3\). Consider \(\alpha\) and \(\eta\) with \(\eta\ll\alpha\ll\) 
By assumption on \(G\), there is a loose \((u,v)\)-path \(P\) in \(G^{\prime}-Q\) that covers all but \(\eta n\) vertices of \(G^{\prime}-Q\). Let \(W\) be the set of uncovered vertices. Then \(|W|\leq\eta n\) and a simple argument shows that \(|W|\) is divisible by \(k-1\). To finish the proof, we use the property of \(A\) to find a loose \((u,v)\)-path \(P^{\prime}\) covering \(A\cup W\). It follows that \(P\cup P^{\prime}\) is a loose Hamilton cycle. The remaining sections are dedicated to the proofs of Lemmas 2.1 and 2.2. ## 3. Preliminaries In this section, we introduce a series of tools and technical facts that will be used throughout the rest of the paper. Much of the exposition closely follows the work of Ferber and Kwan [17]. ### The Sparse Regularity Lemma The Sparse Regularity Lemma allows us to approximately encode the local edge densities of sparse graphs, provided that its edges are nowhere too concentrated. This can be formalised in terms of regular partitions and upper-uniformity. For the following definitions, suppose \(\varepsilon\), \(\eta>0\), \(D>1\) and \(0<p\leq 1\). * **Density:** Given non-empty disjoint vertex sets \(X_{1},\ldots,X_{k}\) in a \(k\)-graph \(G\), we write \(e\left(X_{1},\ldots,X_{k}\right)\) for the number of edges with a vertex in each \(X_{i}\). The _density_\(d(X_{1},\ldots,X_{k})\) is defined by \[d\left(X_{1},\ldots,X_{k}\right)=\frac{e\left(X_{1},\ldots,X_{k}\right)}{|X_{ 1}|\ldots|X_{k}|}.\] * **Regular tuple:** A \(k\)-partite \(k\)-graph with parts \(V_{1},\ldots,V_{k}\) is \((\varepsilon,p)\)_-regular_ if, for every \(X_{1}\subseteq V_{1},\ldots,X_{k}\subseteq V_{k}\) with \(|X_{i}|\geq\varepsilon\,|V_{i}|\) for \(i\in[k]\), we have \[|d\left(X_{1},\ldots,X_{k}\right)-d\left(V_{1},\ldots,V_{k}\right)|\leq \varepsilon p.\] * **Regular partition:** A partition \(\mathcal{V}=\{V_{1},\ldots,V_{t}\}\) of the vertex set of a \(k\)-graph is said to be \((\varepsilon,p)\)_-regular_ if it is an equipartition (meaning that the sizes of the parts differ by at most one), and for all but \(\varepsilon\binom{t}{k}\) of the \(k\)-sets \(\{V_{i_{1}},\ldots,V_{i_{k}}\}\) from \(\mathcal{V}\), the \(k\)-partite \(k\)-graph induced by \(V_{i_{1}},\ldots,V_{i_{k}}\) in \(G\) is \((\varepsilon,p)\)-regular. * **Upper-uniformity:** A \(k\)-graph \(G\) is \((\lambda,p,D)\)_-upper-uniform_ if for any disjoint subsets of vertices \(X_{1},\ldots,X_{k}\) each of size at least \(\lambda\,|V(G)|\), we have \(d\left(X_{1},\ldots,X_{k}\right)\leq Dp\). The (Weak) Sparse (Hypergraph) Regularity Lemma then states that every upper-uniform hypergraph admits a regular partition of constant size. **Lemma 3.1** (Sparse Regularity Lemma [17, Lemma 4.2]).: _Let \(1/n\ll\lambda\ll 1/r_{1}\ll 1/r_{0},\,\varepsilon,\,1/k,\,1/D\) and \(p\in(0,1]\). Then any \((\lambda,p,D)\)-upper-uniform \(n\)-vertex \(k\)-graph \(G\) admits an \((\varepsilon,p)\)-regular partition \(V_{1},\ldots,V_{r}\) of its vertex set into \(r_{0}\leq r\leq r_{1}\) parts._ By using the notion of a _reduced graph_, we can track the parts where a regular partition is (relatively) dense and regular. 
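To make the notions of density and \((\varepsilon,p)\)-regularity above concrete, the following small Python sketch (an illustration with hypothetical parameters \(k=3\), \(n=40\), \(p=0.3\), \(\varepsilon=0.3\), not from the original) computes \(d(X_{1},\ldots,X_{k})\) for a random \(k\)-partite \(k\)-graph and probes the regularity condition by random subsampling; sampling only lower-bounds the defect, so this is a heuristic check rather than a certificate.

```python
import itertools, random

random.seed(0)
k, n, p = 3, 40, 0.3
parts = [list(range(i * n, (i + 1) * n)) for i in range(k)]

# A random k-partite k-graph: each transversal k-set is an edge with probability p.
edges = {e for e in itertools.product(*parts) if random.random() < p}

def density(*X):
    """d(X_1,...,X_k) = e(X_1,...,X_k) / (|X_1| * ... * |X_k|)."""
    cnt = sum(1 for e in itertools.product(*X) if e in edges)
    denom = 1
    for Xi in X:
        denom *= len(Xi)
    return cnt / denom

d_full = density(*parts)
# Sample subsets X_i with |X_i| >= eps*|V_i| and record the largest observed
# deviation |d(X_1,...,X_k) - d(V_1,...,V_k)| / p, as in the regularity definition.
eps, worst = 0.3, 0.0
for _ in range(200):
    Xs = [random.sample(P, int(eps * n)) for P in parts]
    worst = max(worst, abs(density(*Xs) - d_full) / p)
print(d_full, worst)   # a truly random k-graph keeps `worst` small (w.h.p.)
```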
**Definition 3.2** (Reduced graph).: Given an \((\varepsilon,p)\)-regular partition \(V_{1},\ldots,V_{r}\) of the vertex set of a \(k\)-graph \(G\), the associated _reduced hypergraph_ is the \(k\)-graph whose vertices are the clusters \(V_{1},\ldots,V_{r}\), and we put an edge \(\{V_{i_{1}},\ldots,V_{i_{k}}\}\) whenever \(d\left(V_{i_{1}},\ldots,V_{i_{k}}\right)>2\varepsilon p\) and the \(k\)-partite \(k\)-graph induced by \(V_{i_{1}},\ldots,V_{i_{k}}\) in \(G\) is \((\varepsilon,p)\)-regular. As it turns out, the reduced graph approximately inherits the degree conditions of the original graph, but in a dense form. This is a key fact to transition from the dense to the sparse setting. **Lemma 3.3** (Degree inheritance [17, Lemma 4.4]).: _Let \(1/n\ll 1/r_{1}\ll 1/r_{0},\)\(\varepsilon^{\prime}\ll\varepsilon\ll 1/k,\)\(1/d,\)\(\delta\) and \(p\in\left(0,1\right]\) be given. Let \(G\) be an \(\left(o(1),p,1+o(1)\right)\)-upper-uniform \(n\)-vertex \(k\)-graph. Let \(G^{\prime}\subseteq G\) be a spanning subgraph in which all but \(\varepsilon^{\prime}n^{d}\) of the \(d\)-sets of vertices have degree at least \(\delta{n-d\choose k-d}p\). Let \(\mathcal{R}\) be the \(r\)-vertex reduced \(k\)-graph obtained by applying Lemma 3.1 to \(G^{\prime}\) with parameters \(r_{0}\), \(p\) and \(\varepsilon\). Then all but \(\sqrt{\varepsilon}{r\choose d}\) of the \(d\)-sets of vertices of \(\mathcal{R}\) have degree at least \(\delta{r-d\choose k-d}-\left(4\sqrt{\varepsilon}+k/r_{0}\right)r^{k-d}\)._ ### Prepartitions and regularity The following are two technical versions of Lemma 3.1, which allow for prepartitions and more precise control of the degrees. **Lemma 3.4** (Prepartitioning regular partition [17, Lemma 4.5]).: _Suppose that a \(k\)-graph \(G\) has its vertices partitioned into sets \(P_{1},\ldots,P_{h}\). In the \(\left(\varepsilon,p\right)\)-regular partition \(V_{1},\ldots,V_{r}\) guaranteed by Lemma 3.1, we can assume that all but \(\varepsilon hr\) of the clusters \(V_{i}\) are contained in some \(P_{j}\)._ **Definition 3.5**.: Consider a \(k\)-graph \(G\), and let \(P_{1},\ldots,P_{h}\) be disjoint sets of vertices. Also, consider an \(\left(\varepsilon,p\right)\)-regular partition \(V_{1},\ldots,V_{r}\) of the vertices of \(G\). Then the _partitioned reduced graph_\(\mathcal{R}\) with _threshold_\(\tau\) is the \(k\)-graph defined as follows. The vertices of \(\mathcal{R}\) are the clusters \(V_{i}\) which are completely contained in some \(P_{j}\), with an edge \(\left\{V_{i_{1}},\ldots,V_{i_{k}}\right\}\) if \(d\left(V_{i_{1}},\ldots,V_{i_{k}}\right)>\tau p\) and the \(k\)-partite \(k\)-graph induced by \(V_{i_{1}},\ldots,V_{i_{k}}\) in \(G\) is \(\left(\varepsilon,p\right)\)-regular. **Lemma 3.6** (Sparse Regularity Lemma [17, Lemma 4.7]).: _Let \(1/n\ll\lambda\ll 1/r_{1}\ll 1/r_{0},\)\(\varepsilon,\)\(1/k,\)\(1/d,\)\(\delta\) and \(p\in\left(0,1\right]\). Let \(G\) be an \(\left(\lambda,p,1+\lambda\right)\)-upper-uniform \(n\)-vertex \(k\)-graph with a partition \(P_{1},\ldots,P_{h}\) of its vertices into parts of sizes \(n_{1},\ldots,n_{h}\), respectively. Let \(G^{\prime}\) be a spanning subgraph of \(G\) and let \(\mathcal{R}\) be the partitioned reduced \(r\)-vertex \(k\)-graph with threshold \(\tau\) obtained by applying Lemmas 3.1 and 3.4 to \(G^{\prime}\) with parameters \(r_{0}\), \(p\) and \(\varepsilon\)._ _For every \(1\leq i\leq h\), let \(\mathcal{P}_{i}\) be the set of clusters contained in \(P_{i}\), and let \(r_{i}=\left|\mathcal{P}_{i}\right|\). 
Also, for every \(J\subseteq\left\{1,\ldots,h\right\}\), write \(P_{J}:=\bigcup_{j\in J}P_{j}\), \(n_{J}=\left|P_{J}\right|\), \(\mathcal{P}_{J}=\bigcup_{j\in J}\mathcal{P}_{j}\) and \(r_{J}=\left|\mathcal{P}_{J}\right|\). Then the following properties hold._ 1. _For each_ \(i\)_, we have_ \(r_{i}\geq\left(n_{i}/n\right)r-\varepsilon hr\)_._ 2. _Consider some_ \(1\leq i\leq h\)_,_ \(1\leq d^{\prime}\leq d\) _and some_ \(J\subseteq\left\{1,\ldots,h\right\}\)_, and suppose that all but_ \(o(n^{d^{\prime}})\) _of the_ \(d^{\prime}\)_-sets of vertices_ \(X\subseteq P_{i}\) _satisfy_ \[\deg_{P_{J}}\left(X\right)\geq\delta^{\prime}p{n_{J}-d^{\prime}\choose k-d^{ \prime}}\] _for some_ \(\delta^{\prime}\geq\delta\)_. Then, in the reduced graph_ \(\mathcal{R}\)_, for all but at most_ \(\sqrt{\varepsilon}{r\choose d^{\prime}}\) _of the_ \(d^{\prime}\)_-sets of clusters_ \(\mathcal{X}\subseteq\mathcal{P}_{i}\)_, we have_ \[\deg_{\mathcal{P}_{J}}\left(\mathcal{X}\right)\geq\delta^{\prime}{r_{J}-d^{ \prime}\choose k-d^{\prime}}-\left(\tau+\varepsilon h+\sqrt{\varepsilon}+k/r_{0 }\right)r^{k-d^{\prime}}.\] Note that Lemma 3.3 is actually a special case of Lemma 3.6 (taking \(h=1\) and threshold \(\tau=2\varepsilon\)). ### The Embedding Lemma A hypergraph is _linear_ if every two edges intersect in at most one vertex. The following embedding lemma allows us to embed linear subgraphs of the reduced graph into the original graph. **Definition 3.7**.: Consider a \(k\)-graph \(H\) with vertex set \(\left\{1,\ldots,r\right\}\) and let \(\mathcal{G}\left(H,n,m,p,\varepsilon\right)\) be the collection of all \(k\)-graphs \(G\) obtained in the following way. The vertex set of \(G\) is a union of pairwise disjoint sets \(V_{1},\ldots,V_{r}\) each of size \(n\). For every edge \(\left\{i_{1},\ldots,i_{k}\right\}\in E\left(H\right)\), we add to \(G\) an \(\left(\varepsilon,p\right)\)-regular \(k\)-graph with \(m\) edges across \(V_{i_{1}},\ldots,V_{i_{k}}\). These are the only edges of \(G\). **Definition 3.8**.: For \(G\in\mathcal{G}\left(H,n,m,p,\varepsilon\right)\), let \(\#_{H}(G)\) be the number of "canonical copies" of \(H\) in \(G\), meaning that the copy of every vertex \(i\) from \(H\) must come from \(V_{i}\). **Definition 3.9**.: The \(k\)-_density_\(m_{k}\left(H\right)\) of a \(k\)-graph \(H\) with more than \(k\) vertices is defined as \[m_{k}\left(H\right)=\max\left\{\frac{e\left(H^{\prime}\right)-1}{v\left(H^{ \prime}\right)-k}\colon H^{\prime}\subseteq H\text{ with }v\left(H^{\prime}\right)>k\right\}.\] The following result appears in the work of Ferber and Kwan [17, Lemma 4.11] and can be proved using the methods of Conlon, Gowers, Samotij and Schacht [13]. **Lemma 3.10** (Sparse Embedding Lemma).: _For every linear \(k\)-graph \(H\) and every \(\tau>0\), there exist \(\varepsilon\), \(\zeta>0\) with the following property. For every \(\kappa>0\), there is \(C>0\) such that if \(p\geq CN^{-1/m_{k}\left(H\right)}\), then with probability \(1-e^{-\Omega\left(N^{k}p\right)}\) the following holds in \(G\sim\mathrm{H}_{k}(N,p)\). For every \(n\geq\kappa N\), \(m\geq\tau pn^{k}\) and every subgraph \(G^{\prime}\) of \(G\) in \(\mathcal{G}\left(H,n,m,p,\varepsilon\right)\), we have \(\#_{H}(G^{\prime})>\zeta p^{e(H)}n^{v(H)}\)._ ### Properties of random graphs and subgraphs In what follows we prove a simple consequence of the Chernoff bound and we list a series of technical statements from the work of Ferber and Kwan [17] that describe certain properties of random graphs. 
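Returning briefly to Definition 3.9: since \(m_{k}(H)\) governs the exponent \(N^{-1/m_{k}(H)}\) in Lemma 3.10, it is worth seeing it computed once. The sketch below (an illustration, not from the original) evaluates \(m_{k}(H)\) by brute force over edge subsets for a hypothetical small linear \(3\)-graph, a loose path with \(4\) edges; for any loose path one gets \(m_{k}=1/(k-1)\).

```python
from itertools import combinations

# m_k(H) from Definition 3.9, by brute force over edge subsets of a small
# linear k-graph. Here H is a loose 3-uniform path with 4 edges (9 vertices).
k = 3
H = [(0, 1, 2), (2, 3, 4), (4, 5, 6), (6, 7, 8)]

def m_k(edges, k):
    best = 0.0
    for r in range(1, len(edges) + 1):
        for sub in combinations(edges, r):
            v = len(set().union(*map(set, sub)))   # v(H') of the sub-hypergraph
            if v > k:
                best = max(best, (len(sub) - 1) / (v - k))
    return best

# Every connected subpath with e' edges has e'*(k-1)+1 vertices, so it attains
# (e'-1)/((e'-1)*(k-1)) = 1/(k-1); disconnected subsets only do worse.
print(m_k(H, k))   # 0.5 = 1/(k-1)
```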
For reals \(x,y,z\) we write \(x=y\pm z\) to mean that \(x\in[y-z,y+z]\).

**Lemma 3.11**.: _For every integer \(k\geq 3\) and every \(0<\lambda\), \(\eta<1\), there exists \(C>0\) such that if \(p\geq Cn^{-(k-2)}\log n\), then w.h.p. \(G\sim\mathrm{H}_{k}\left(n,p\right)\) has the following property. Let \(U_{1},\ldots,U_{k}\) be disjoint subsets of \(V(G)\) each of size at least \(\lambda n\). Let \(M\subseteq U_{k-1}\times U_{k}\) be of size at least \(C/(pn^{k-3})\). Then \(G\) has \((1\pm\eta)p|M|\prod_{j=1}^{k-2}|U_{j}|\) edges \(e=\{v_{1},\ldots,v_{k}\}\) with \(v_{i}\in U_{i}\) for \(i\in[k-2]\) and \((v_{k-1},v_{k})\in M\)._

Proof.: Fix \(M\) and \(U_{1},\ldots,U_{k}\) as in the statement. We denote by \(Z=Z(M;U_{1},\ldots,U_{k})\) the number of edges \(\{v_{1},\ldots,v_{k}\}\in E(G)\) with \(v_{i}\in U_{i}\) for \(i\in[k-2]\) and \((v_{k-1},v_{k})\in M\). Since \(Z\) has a binomial distribution with mean \(p|M|\prod_{j=1}^{k-2}|U_{j}|\geq p|M|(\lambda n)^{k-2}\), the Chernoff bound shows that with probability \(1-2\exp\left(-c\,|M|n^{k-2}p\right)\) we have \[Z=(1\pm\eta)p|M|\prod_{j=1}^{k-2}|U_{j}|,\] where \(c:=\eta^{2}\lambda^{k-2}/3\). Then, taking \(C:=4\log\left(k+1\right)c^{-1}\), by the union bound over the choices of \(U_{1},\ldots,U_{k}\) and \(M\), the probability that the property described in the statement fails is at most \[\sum_{(M;U_{1},\ldots,U_{k})}2\,\exp\left(-c\,|M|n^{k-2}p\right) \leq 2(k+1)^{n}\sum_{m\geq C/(pn^{k-3})}\binom{n^{2}}{m}\exp\left(-c\,mn^{k-2}p\right)\] \[\leq 2(k+1)^{n}\sum_{m\geq C/(pn^{k-3})}\exp\left(-m(cn^{k-2}p-2\log n)\right)\] \[\leq 2(k+1)^{n}\sum_{m\geq C/(pn^{k-3})}\exp\left(-m(cn^{k-2}p/2)\right)\] \[\leq 4(k+1)^{n}\exp\left(-2\log\left(k+1\right)n\right)\] \[=4(k+1)^{-n}=o(1).\] Here the second inequality uses \(\binom{n^{2}}{m}\leq\exp(2m\log n)\); the third uses \(cn^{k-2}p\geq Cc\log n=4\log(k+1)\log n\geq 4\log n\); and the fourth bounds the geometric series by twice its first term, whose exponent is at least \(\frac{C}{pn^{k-3}}\cdot\frac{cn^{k-2}p}{2}=\frac{Ccn}{2}=2\log(k+1)n\). This concludes the proof.

**Lemma 3.12** (Corollary 5.2 in [17]).: _Fix \(k\geq 2\), let \(p=\omega(n^{1-k}\log n)\), and consider \(G\sim\mathrm{H}_{k}\left(n,p\right)\). Then \(G\) is \(\left(o(1),p,1+o(1)\right)\)-upper-uniform with probability at least \(1-e^{-\omega\left(n\log n\right)}\)._

**Lemma 3.13** (Lemma 5.3 in [17]).: _For every \(\alpha>0\) and \(k\geq 3\), there is \(C>0\) such that if \(p\geq Cn^{2-k}\), then w.h.p. \(G\sim\mathrm{H}_{k}\left(n,p\right)\) has the following property. For every vertex \(w\) and every set \(X\subseteq V(G)\) of at most \(\alpha n\) vertices, there are at most \(2\alpha np\binom{n-2}{k-2}\) edges in \(G\) containing \(w\) and a vertex of \(X\setminus\{w\}\)._

For vertex sets \(S\) and \(X\) in a \(k\)-graph \(G\), let \(Z_{G}(S,X)\) be the number of edges \(e\in E(G)\) that contain \(S\) and have non-empty intersection with \(X\setminus S\).

**Lemma 3.14** (Lemma 5.4 in [17]).: _Fix \(\alpha>0\) and positive integers \(d<k\). For \(p=\omega(n^{d-k})\), w.h.p. \(G\sim\mathrm{H}_{k}\left(n,p\right)\) has the following property. For every subset \(X\subseteq V(G)\) of size \(|X|\leq\alpha n\), there are \(o(n^{d})\) \(d\)-sets \(S\subseteq V(G)\) such that \(Z_{G}(S,X)>2\binom{k}{d}\alpha np\binom{n-d-1}{k-d-1}\)._

**Lemma 3.15** (Lemma 5.6 in [17]).: _Suppose \(1\leq d<k\), \(\alpha\ll 1/k\), \(\gamma\) and \(0<\sigma\leq 1-\alpha\). If \(p=\omega(n^{d-k})\), then the following holds w.h.p. for \(G\sim\mathrm{H}_{k}\left(n,p\right)\)._

_Let \(G^{\prime}\) be a spanning subgraph of \(G\) with \(\delta_{d}(G^{\prime})\geq\left(\mu+\gamma\right)p\binom{n-d}{k-d}\), and let \(Q\subseteq V(G^{\prime})\) with \(|Q|\leq\alpha n\) be given. If \(Y\subseteq V(G^{\prime})\setminus Q\) with \(|Y|=\sigma n\) is chosen uniformly at random, then w.h.p.
all but \(o(n^{d})\) \(d\)-sets in \(V(G^{\prime})\setminus Q\) have degree at least \(\left(\mu+\gamma/2\right)p\binom{\sigma n-d}{k-d}\) into \(Y\)._

We remark that Lemma 5.6 in Ferber and Kwan [17] implies Lemma 3.15 above for \(\alpha=0\). The lemma above for \(\alpha>0\) can be proved following the proof of their Lemma 5.6, making use of Lemma 3.14.

**Lemma 3.16** (Lemma 3.4 in [17]).: _Let \(1\leq d<k\) and let \(c\ll 1/k\). Consider an \(n\)-vertex \(k\)-graph \(G\) where all but \(\delta\binom{n}{d}\) of the \(d\)-sets have degree at least \(\left(\mu+\eta\right)\binom{n-d}{k-d}\). Let \(S\) be a subset of \(s\geq 2d\) vertices of \(G\) chosen uniformly at random. Then with probability at least \(1-\binom{s}{d}\left(\delta+e^{-c\eta^{2}s}\right)\), the random induced subgraph \(G\left[S\right]\) has minimum \(d\)-degree at least \(\left(\mu+\eta/2\right)\binom{s-d}{k-d}\)._

## 4. Connecting vertices

In the proofs of Lemmas 2.1 and 2.2, we need to connect vertices with a short path of uniform order while avoiding a few other vertices. This is the purpose of the following lemma.

**Lemma 4.1** (Sparse Connection Lemma).: _Let \(k\geq 3\) and suppose \(1\leq d<k\), \(\alpha\ll 1/k\), \(\gamma\) and \(1/K\ll 1/k,\gamma\). Suppose further that \(0<\nu\leq 1-\alpha\) and \(\varrho\ll 1/k,\,\nu\). If \(p\geq\max\{Kn^{-(k-d)}\log n,\,\omega(n^{-(k-2)})\}\), then w.h.p. \(G\sim\mathrm{H}_{k}(n,p)\) has the following property._

_Let \(G^{\prime}\subseteq G\) be a spanning subgraph with \(\delta_{d}(G^{\prime})\geq\left(\mu_{d}(k)+\gamma\right)p\binom{n-d}{k-d}\) and \(Q\subseteq V(G^{\prime})\) with \(|Q|\leq\alpha n\). Let \(C\subseteq V(G^{\prime})\setminus Q\) be a \(\nu n\)-set taken uniformly at random. Then with probability at least \(2/3\) the following holds. For any \(R\subseteq C\) with \(|R|\leq\varrho n\) and distinct \(u,\,v\in V(G^{\prime})\setminus(Q\cup R)\), there is a loose \((u,v)\)-path \(P\) in \(G^{\prime}\) of order \(8(k-1)+1\) with \(V(P)\setminus\{u,v\}\subseteq C\setminus R\)._

Before we get to the details of the proof of Lemma 4.1, let us give an outline of the argument.

Sketch of the proof.: In this sketch we focus on the special case where \(Q=\emptyset\), \(C=V(G)\) and \(R=\emptyset\), since the general statement follows in a quite similar way. It is also instructive to verify first that the lemma holds in the dense setting when \(p=1\). Indeed, in this case \(G^{\prime}\) contains a loose Hamilton cycle, and therefore any two vertices \(u,\,v\in V(G)\) are trivially connected by a loose \((u,v)\)-path \(P\). However, we also need to control the order of \(P\), which will be \(4(k-1)+1\) when \(p=1\). To be precise, what matters in our applications of Lemma 4.1 is that the path \(P\) we obtain has the _same_ order for any choice of vertices \(u\) and \(v\). As it turns out, this can easily be satisfied in the dense setting: we first note that the neighbourhoods of \(u\) and \(v\) have a substantial intersection. When \(d\geq 2\), this is trivial. When \(d=1\), this follows from the simple fact that \(\mu_{1}(k)\geq 2^{-k+1}\). In either case, this provides enough room to construct a loose \((u,v)\)-path of order \(4(k-1)+1\) when \(p=1\). The argument in the sparse setting is more involved.
We apply the Weak Hypergraph Regularity Lemma to find a regular partition \(\mathcal{V}\) of \(V(G^{\prime})\) together with a reduced dense \(k\)-graph \(\mathcal{R}\) (whose vertices are the clusters of \(\mathcal{V}\)) that approximately captures the local edge densities of \(G^{\prime}\). Crucially, almost every \(d\)-set of clusters of \(\mathcal{R}\) has degree at least \((\mu_{d}(k)+\gamma/2)\binom{v(\mathcal{R})-d}{k-d}\). Using a prepartition, we may also assume that \(u\) has a short path to every vertex of a cluster \(W_{u}\), and likewise \(v\) has a short path to every vertex of a cluster \(W_{v}\). Together with a Hypergraph Embedding Lemma this reduces the problem of connecting \(u\) and \(v\) in \(G^{\prime}\) to finding a loose \((W_{u},W_{v})\)-path of order \(4(k-1)+1\) in \(\mathcal{R}\). To obtain such a path, we avoid \(d\)-sets of low degree by selecting a subgraph \(\mathcal{R}^{\prime}\subseteq\mathcal{R}\) with \(W_{u},\,W_{v}\in V(\mathcal{R}^{\prime})\) and \(\delta_{d}(\mathcal{R}^{\prime})\geq(\mu_{d}(k)+\gamma/4)\binom{v(\mathcal{R}^{\prime})-d}{k-d}\). This can be done by choosing a random induced subgraph whose order is small compared with the inverse of the proportion of \(d\)-sets of \(\mathcal{R}\) of low degree (cf. Lemma 3.16). Once such an \(\mathcal{R}^{\prime}\) is obtained, it follows by the above argument for the dense setting that \(\mathcal{R}^{\prime}\) has the desired loose \((W_{u},W_{v})\)-path.

### Details of the proof of Lemma 4.1

We begin by stating a result analogous to Lemma 4.1 for the dense setting, whose proof is deferred to Section 9.1.

**Lemma 4.2** (Dense Connection Lemma).: _Let \(1/n\ll 1/k,\,1/d,\,\gamma\). Suppose that \(G\) is an \(n\)-vertex \(k\)-graph with \(\delta_{d}(G)\geq(\mu_{d}(k)+\gamma)\binom{n-d}{k-d}\), and let \(u,\,v\in V(G)\) be distinct. Then there is a loose \((u,v)\)-path of order \(4(k-1)+1\) in \(G\)._

In what follows we prove Lemma 4.1. For that, recall that for vertex sets \(S\) and \(X\) in a \(k\)-graph \(G\), we let \(Z_{G}(S,X)\) be the number of edges of \(G\) that contain \(S\) and have a non-empty intersection with \(X\setminus S\). We also use the fact that \(\delta_{1}(G)/\binom{v(G)-1}{k-1}\geq\delta_{d}(G)/\binom{v(G)-d}{k-d}\).

Proof of Lemma 4.1.: We introduce the constants required in the proof in steps. Let \(1\leq d<k\) and \(\alpha\ll 1/k\), \(\gamma\) be as in Lemma 3.15. Suppose we have \(\nu\) with \(0<\nu\leq 1-\alpha\). In what follows, we shall apply Lemma 3.15 with \(\sigma=\nu\). Now let \[\frac{1}{r_{0}}\ll\frac{1}{s}\ll\tau\ll\eta\ll\frac{1}{k},\,\frac{1}{d},\,\gamma\] be such that in particular \(s\) can play the role of \(n\) when applying Lemma 4.2 with parameters \(k\) and \(d\), and with the \(\gamma\) of Lemma 4.2 equal to \(\eta\). With \(\tau\) at hand, we apply Lemma 3.10 to obtain \(\varepsilon\) and \(\zeta\) that allow us to embed any linear hypergraph with at most \(6(k-1)+1\) vertices in any suitable '\(\varepsilon\)-regular system'. Next, we apply Lemma 3.6 with \[\frac{1}{n}\ll\lambda\ll\frac{1}{r_{1}}\ll\frac{1}{r_{0}},\,\varepsilon,\,\frac{1}{k},\,\frac{1}{d},\,\delta=\mu_{d}(k)\text{ and }p.\] We shall apply Lemma 3.10 with \(\kappa=\nu/(2r_{1})\). Finally, let \(1/K\ll 1/k,\gamma\) and \(p\geq\max\{Kn^{-(k-d)}\log n,\,\omega(n^{-(k-2)})\}\), so that from this choice of \(p\) we may apply Lemma 3.10 with \(k\)-density \(1/(k-1)\) as well as Lemmas 3.12 to 3.15. In the following, we also assume that \(n\) is large enough to satisfy Lemma 3.6 with the above constants.
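For later reference, we record the bookkeeping behind the order \(8(k-1)+1\) in Lemma 4.1; this is merely a reading aid for the construction carried out below (the objects \(F_{1},F_{2}\) and \(M_{1},M_{2}\) are introduced in the course of the proof), using the fact that each edge appended to a loose path contributes \(k-1\) new vertices:
\[\underbrace{4(k-1)+1}_{\text{Lemma 4.2, in the reduced graph}}\ \xrightarrow{\ +F_{1},\,F_{2}\ }\ \underbrace{6(k-1)+1}_{\text{embedded via Lemma 3.10}}\ \xrightarrow{\ +\,\text{one edge of each }M_{i}\ }\ 8(k-1)+1.\]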
The next claim contains all properties that we require from the random \(k\)-graph.

**Claim 4.3**.: _W.h.p. \(G\sim\mathrm{H}_{k}(n,p)\) satisfies the following properties._

1. _For each \(\nu n\)-set \(C\subseteq V(G)\), the induced \(k\)-graph \(G[C]\) satisfies the conclusion of Lemma 3.10 for embedding all graphs \(H\) with \(v(H)\leq 6(k-1)+1\) and \(m_{k}(H)\leq 1/(k-1)\)._
2. _For each \(\nu n\)-set \(C\subseteq V(G)\), the induced \(k\)-graph \(G[C]\) is \((\lambda,p,1+\lambda)\)-upper-uniform._
3. _For every \(\beta\leq\alpha+\varrho^{1/2}\), \(\beta n\)-set \(X\subseteq V(G)\) and \(w\in V(G)\), there are at most \(2\beta pn^{k-1}\) edges in \(G\) containing \(w\) and a vertex of \(X\setminus\{w\}\)._
4. _For every \(\beta\leq\alpha+\varrho^{1/2}\) and \(X\subseteq V(G)\) with \(|X|\leq\beta n\), there are \(o(n^{d})\) \(d\)-sets \(S\) such that \(Z_{G}(S,X)>2\binom{k}{d}\beta np\binom{n-d}{k-d-1}\)._
5. _Let \(G^{\prime}\) be a spanning subgraph of \(G\) with \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p\binom{n-d}{k-d}\) and let \(Q\subseteq V(G^{\prime})\) with \(|Q|\leq\alpha n\) be given. Let \(C\subseteq V(G^{\prime})\setminus Q\) be a \(\nu n\)-set chosen uniformly at random. Then with probability at least \(5/6\) all but \(o(n^{d})\) \(d\)-sets of vertices in \(V(G^{\prime})\setminus Q\) have \(d\)-degree at least \((\mu_{d}(k)+\gamma/2)p\binom{\nu n-d}{k-d}\) into \(C\)._

Proof.: To begin, observe that for a fixed set \(C\subseteq V(G)\) of size \(\nu n\), the induced subgraph \(G[C]\) has probability distribution \(\mathrm{H}_{k}(\nu n,p)\). For parts (1) and (2), note that there are \(\binom{n}{\nu n}\leq 2^{n}\) ways to choose \(C\) and at most \(O(1)\) ways to choose \(H\). We apply Lemma 3.10 (with \(\nu n\) playing the role of \(N\)) and Lemma 3.12 and then we take the union bound over all choices for \(H\) and \(C\). Parts (3) and (4) follow from applying Lemmas 3.13 and 3.14, respectively, with \(\beta\) playing the role of \(\alpha\) in each case. Finally, part (5) follows by applying Lemma 3.15, where \(C\) and \(\nu\) play the roles of \(Y\) and \(\sigma\), respectively.

Let us fix a deterministic graph \(G\) that satisfies the properties stated in Claim 4.3. Let \(G^{\prime}\) be a spanning subgraph of \(G\) with \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p\binom{n-d}{k-d}\) and let \(Q\subseteq V(G^{\prime})\) with \(|Q|\leq\alpha n\) be fixed. Let \(G^{\prime\prime}=G^{\prime}-Q\). For the next claim, given a vertex \(x\in V(G^{\prime\prime})\), we write \(L_{G^{\prime\prime}}(x)\) for the _link graph_ of \(x\) in \(G^{\prime\prime}\), which is the \((k-1)\)-graph on \(V(G^{\prime\prime})\setminus\{x\}\) with a \((k-1)\)-edge \(e\) whenever \(e\cup\{x\}\) is an edge in \(G^{\prime\prime}\).

**Claim 4.4**.: _For any \(x\in V(G^{\prime\prime})\), there is a matching \(M_{x}\subseteq E(L_{G^{\prime\prime}}(x))\) of size \(10k\varrho\nu^{-(k-1)}n\)._

Proof.: Suppose that the claim is false, and let \(M\subseteq E(L_{G^{\prime\prime}}(x))\) be a matching of maximum size; by assumption, \(M\) has fewer than \(10k\varrho\nu^{-(k-1)}n\) edges and hence fewer than \((k-1)10k\varrho\nu^{-(k-1)}n\) vertices. By Claim 4.3(3) with \(V(M)\cup Q\) playing the role of \(X\), there are at most \(2k(\alpha+(k-1)10k\varrho\nu^{-(k-1)})pn^{k-1}\) edges in \(G^{\prime}\subseteq G\) containing \(x\) and a vertex of \(V(M)\cup Q\). On the other hand, \(\deg_{G^{\prime}}(x)\geq(\mu_{d}(k)+\gamma)p\binom{n-1}{k-1}\), which is larger; hence some edge of \(G^{\prime}\) through \(x\) avoids \(V(M)\cup Q\), and removing \(x\) from it gives an edge of \(L_{G^{\prime\prime}}(x)\) that could be added to \(M\), which contradicts the maximality of \(M\).
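To make the final comparison in the proof of Claim 4.4 explicit — a routine verification which we sketch here under the hierarchy of constants fixed at the start of the proof of Lemma 4.1 — note that \(\binom{n-1}{k-1}\geq(n/k)^{k-1}\) for \(n>k\), so the degree bound exceeds the edge count as soon as
\[\frac{\mu_{d}(k)+\gamma}{k^{k-1}}\;>\;2k\bigl(\alpha+(k-1)10k\varrho\nu^{-(k-1)}\bigr),\]
which holds since \(\alpha\ll 1/k,\gamma\) and \(\varrho\ll 1/k,\nu\) (so that \(\varrho\nu^{-(k-1)}\) may also be taken arbitrarily small).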
Next, let \(C\subseteq V(G^{\prime\prime})\) be a \(\nu n\)-set taken uniformly at random. For \(x\in V(G^{\prime\prime})\), denote by \(M^{\prime}_{x}\) the edges of \(M_{x}\) that are entirely in \(C\). Using the Chernoff bound, the union bound and Claim 4.3(5), each of the following holds with probability at least \(5/6\):

* (i) for every \(x\in V(G^{\prime\prime})\), we have \(|M^{\prime}_{x}|\geq 9k\varrho n\),
* (ii) all but \(o(n^{d})\) \(d\)-sets of vertices in \(V(G^{\prime})\setminus Q\) have \(d\)-degree at least \((\mu_{d}(k)+\gamma/2)p\binom{\nu n-d}{k-d}\) into \(C\).

Therefore, (i) and (ii) hold with probability at least \(2/3\). We proceed by fixing such a set \(C\). Now let \(R\subseteq C\) with \(|R|\leq\varrho n\) be given and consider distinct vertices \(x_{1}\), \(x_{2}\in V(G^{\prime})\setminus(Q\cup R)\). Our goal is to find a loose \((x_{1},x_{2})\)-path \(P\) of order \(8(k-1)+1\) with \(V(P)\setminus\{x_{1},x_{2}\}\subseteq C\setminus R\).

For \(i\in\{1,2\}\), let \(M_{i}\) be obtained from \(M^{\prime}_{x_{i}}\) by deleting all edges that contain vertices in \(R\). After deleting some additional edges if necessary, we may assume that \(\{x_{1}\}\cup V(M_{1})\) and \(\{x_{2}\}\cup V(M_{2})\) are disjoint and both \(M_{1}\) and \(M_{2}\) have cardinality \(3\varrho n\).

By Claim 4.3(2), \(G[C]\) is \((\lambda,p,1+\lambda)\)-upper-uniform. Thus, we may apply Lemma 3.1 to \(G^{\prime\prime}[C]\) with parameters \(\varepsilon\), \(r_{0}\) and \(r_{1}\). Using Lemma 3.6 with prepartition \(W_{1}=V(M_{1})\), \(W_{2}=V(M_{2})\), \(U=C\setminus(W_{1}\cup W_{2}\cup R\cup\{x_{1},x_{2}\})\), we obtain a reduced \(k\)-graph \(\mathcal{R}\) on \(r\geq r_{0}\) vertices. Let \(\mathcal{W}_{1}\), \(\mathcal{W}_{2}\), \(\mathcal{U}\subseteq V(\mathcal{R})\) be the sets of clusters contained in \(W_{1}\), \(W_{2}\) and \(U\), respectively.

**Claim 4.5**.: _We have that_

* _all but \(\sqrt{\varepsilon}\,r\) vertices \(X\in\mathcal{W}_{1}\cup\mathcal{W}_{2}\) of \(\mathcal{R}\) satisfy \(\deg_{\mathcal{U}}(X)\geq(\mu_{d}(k)+\gamma/4)\binom{|\mathcal{U}|-1}{k-1}\) and_
* _all but \(\sqrt{\varepsilon}\binom{r}{d}\) \(d\)-sets \(D\subseteq\mathcal{U}\) of \(\mathcal{R}\) satisfy \(\deg_{\mathcal{U}}(D)\geq(\mu_{d}(k)+\gamma/4)\binom{|\mathcal{U}|-d}{k-d}\)._

Proof.: The conclusions follow directly from Lemma 3.6, once we verify the degree hypothesis for \(G^{\prime\prime}[C]\) (as a spanning subgraph of \(G[C]\)). Since the proofs of both statements are similar, we only prove the second one. It is sufficient to show that all but \(o(n^{d})\) \(d\)-sets \(D\) in \(U\) satisfy \(\deg_{U}(D)\geq(\mu_{d}(k)+\gamma/3)p\binom{|U|-d}{k-d}\). Note that \(\deg_{U}(D)\geq\deg_{C}(D)-(Z_{G}(D,W_{1})+Z_{G}(D,W_{2})+Z_{G}(D,R))\). Therefore, if \(\deg_{U}(D)<(\mu_{d}(k)+\gamma/3)p\binom{|U|-d}{k-d}\), then either \(\deg_{C}(D)<(\mu_{d}(k)+\gamma/2)p\binom{|U|-d}{k-d}\), or \(Z_{G}(D,W_{1})>(\gamma/18)p\binom{|U|-d}{k-d}\), or \(Z_{G}(D,W_{2})>(\gamma/18)p\binom{|U|-d}{k-d}\), or \(Z_{G}(D,R)>(\gamma/18)p\binom{|U|-d}{k-d}\). In any case, by Claim 4.3(4) and (ii), it follows that the number of \(d\)-sets \(D\subseteq U\) which satisfy \(\deg_{U}(D)<(\mu_{d}(k)+\gamma/3)p\binom{|U|-d}{k-d}\) is \(o(n^{d})\).

Now we consider two vertices in \(\mathcal{R}\): fix \(X_{i}\in\mathcal{W}_{i}\) with \(\deg_{\mathcal{U}}(X_{i})\geq(\mu_{d}(k)+\gamma/4)\binom{|\mathcal{U}|-1}{k-1}\) for \(i\in\{1,2\}\).
We want to find a loose \((X_{1},X_{2})\)-path \(\mathcal{P}\) of order \(6(k-1)+1\) in \(\mathcal{R}\) such that \(V(\mathcal{P})\setminus\{X_{1},X_{2}\}\subseteq\mathcal{U}\). For that, let \(S\subseteq\mathcal{U}\) be an \(s\)-set chosen uniformly at random. By Claim 4.5, we can apply Lemma 3.16 to obtain that \(\delta_{d}(\mathcal{R}[S])\geq(\mu_{d}(k)+\eta)\binom{s-d}{k-d}\) with probability at least \(2/3\). Moreover, standard concentration inequalities show that \(\deg_{\mathcal{R}[S]}(X_{i})\geq(\mu_{d}(k)+\eta)\binom{s-1}{k-1}\) for \(i\in\{1,2\}\) with probability at least \(2/3\). Fix a set \(S\) that satisfies both of these conditions.

Let \(F_{1}\) and \(F_{2}\) be two disjoint edges in \(\mathcal{R}[S\cup\{X_{1},X_{2}\}]\) containing \(X_{1}\) and \(X_{2}\) respectively. Note that such edges can be found due to the large degree of the vertices \(X_{1}\) and \(X_{2}\) in \(\mathcal{R}[S]\). Select vertices \(Y_{1}\in F_{1}\setminus\{X_{1}\}\) and \(Y_{2}\in F_{2}\setminus\{X_{2}\}\), and let \(S^{\prime}\) be obtained from \(S\) by deleting the vertices of \(F_{1}\cup F_{2}\) except for \(Y_{1}\) and \(Y_{2}\). This deletion of vertices may reduce the minimum degree a little, but we are still guaranteed to be able to apply Lemma 4.2, which tells us that there is a loose \((Y_{1},Y_{2})\)-path in \(\mathcal{R}[S^{\prime}]\) of order \(4(k-1)+1\). We then extend this path by \(F_{1}\) and \(F_{2}\) to obtain the desired loose \((X_{1},X_{2})\)-path \(\mathcal{P}\) of order \(6(k-1)+1\).

Finally, by Claim 4.3(1), we may apply Lemma 3.10 to obtain a loose \((x^{\prime}_{1},x^{\prime}_{2})\)-path \(P\) in \(G^{\prime\prime}[C]\) with ends \(x^{\prime}_{1}\in X_{1}\) and \(x^{\prime}_{2}\in X_{2}\) of order \(6(k-1)+1\) avoiding the set \(R\). This is possible because the \(k\)-density of \(P\) is \(m_{k}(P)=1/(k-1)\). To finish the proof, we augment \(P\) with two edges from \(M_{1}\) and \(M_{2}\) to obtain the desired loose \((x_{1},x_{2})\)-path of order \(8(k-1)+1\).

## 5. Covering most vertices

This section is dedicated to the proof of Lemma 2.2. Let us begin with the following outline.

Sketch of the proof of Lemma 2.2 (Sparse Cover Lemma).: We begin by reserving a linear-sized subset \(C\subseteq V(G^{\prime})\setminus Q\) via a random choice. By Lemma 4.1, we can guarantee that any constant number of (arbitrary) vertices can be pairwise connected through loose paths whose inner vertices lie in \(C\) and avoid \(Q\). Next, we cover \(V(G^{\prime})\setminus(C\cup Q)\) with a constant number of loose paths. To this end, we apply the Weak Hypergraph Regularity Lemma to find a regular partition \(\mathcal{V}\) of \(V(G^{\prime})\) together with a reduced dense \(k\)-graph \(\mathcal{R}\) (whose vertices are the clusters of \(\mathcal{V}\)) of constant order \(r\) that approximately captures the local edge densities of \(G^{\prime}\). Crucially, most \(d\)-sets of vertices of \(\mathcal{R}\) have \(d\)-degree at least \((\mu_{d}(k)+\gamma/2)\binom{r-d}{k-d}\). To avoid \(d\)-sets of low degree altogether, we find a partition \(\mathcal{U}\) of \(\mathcal{R}\) into parts \(U\) of small size such that most parts \(U\in\mathcal{U}\) satisfy \(\delta_{d}(\mathcal{R}[U])\geq(\mu_{d}(k)+\gamma/4)\binom{|U|-d}{k-d}\). As \(|U|\) is chosen to be small with respect to the inverse of the fraction of \(d\)-tuples that fail to have large enough \(d\)-degree, this can be done by choosing \(\mathcal{U}\) randomly (see Lemma 3.16).
Let \(\mathcal{U}^{\prime}\subseteq\mathcal{U}\) be the set of parts \(U\in\mathcal{U}\) such that \(\delta_{d}(\mathcal{R}[U])\geq(\mu_{d}(k)+\gamma/4)\binom{|U|-d}{k-d}\). We may assume that the vertices of \(Q\) (that have to be avoided) are in the parts outside of \(\mathcal{U}^{\prime}\). The next task is to find in \(G^{\prime}\), for each \(U\in\mathcal{U}^{\prime}\), a loose path \(P_{U}\) that covers most of the clusters of \(U\). Then, it will remain to connect up the paths \(P_{U}\) for \(U\in\mathcal{U}^{\prime}\) into a long loose path \(P\) using the reserved vertex set \(C\).

We may suppose that, with the exception of possibly one part, every part in the partition \(\mathcal{U}\) has cardinality \(s\), which is chosen to be divisible by \(k-1\). Fix \(U\in\mathcal{U}^{\prime}\) and suppose \(|U|=s\). By the definition of \(\mu_{d}(k)\) and the fact that \(k-1\) divides \(s=|U|\), we have that the induced \(k\)-graph \(\mathcal{R}[U]\) contains a loose Hamilton cycle \(F\). Suppose that the clusters of \(U\) are labelled \(W_{1},\ldots,W_{s}\) according to the ordering of \(F\). To cover most vertices of \(G^{\prime}[\bigcup_{1\leq i\leq s}W_{i}]\), we build a path \(P_{U}\) edge by edge, choosing each new edge so that many possibilities remain for continuing the process. The nature of loose cycles supplies us with a good number of possible edge extensions at every stage, whose distribution can be captured by a dense (2-uniform) regular pair. Having observed this, we may extend the path without having to deal with any major technicalities until it covers most of the vertices of \(G^{\prime}[\bigcup_{1\leq i\leq s}W_{i}]\).

Finally, we connect up the paths \(P_{U}\) for \(U\in\mathcal{U}^{\prime}\) into a long loose path \(P\) using the reserved vertex set \(C\). Since the order of \(\mathcal{R}\) is constant, the number of paths to be connected up does not pose any problem. In the same way, we may extend \(P\) to become a loose \((u,v)\)-path. This gives the desired almost spanning loose path.

Now we are ready to give the complete argument of the proof.

Proof of Lemma 2.2.: We define the constants required in the proof in several steps. Given \(1\leq d\leq k-1\) with \(k\geq 3\) and positive \(\eta\) and \(\gamma\), we introduce constants \(\nu\), \(\varepsilon\), \(s\), \(r_{1}\), \(\lambda\), \(r_{0}\), \(\tau\) with \(s\) divisible by \(k-1\) such that \[\varepsilon\ll\frac{1}{s},\,\nu\ll\frac{1}{k},\,\gamma,\,\eta,\text{ and }\] \[\frac{1}{n}\ll\lambda\ll\frac{1}{r_{1}}\ll\frac{1}{r_{0}}\ll\varepsilon\ll\tau\ll\frac{1}{k},\,\gamma,\,\delta=\mu_{d}(k)\] and note that the hierarchy of constants in Lemma 3.6 is satisfied. Following the quantification in Lemma 4.1, we introduce \(\alpha\) and \(\varrho\) with \(\alpha\ll 1/k\), \(\gamma\) and \(\varrho\ll 1/k\), \(\nu\). Finally, let \(1/K\ll 1/k,\gamma\) and \(p\geq\max\{Kn^{-(k-d)}\log n,\,Kn^{-(k-2)}\log n\}\) so that we can apply Lemmas 3.11, 3.12 and 4.1.

The following claim contains all properties that we require from the random graph.

**Claim 5.1**.: _W.h.p. \(G\sim\mathrm{H}_{k}(n,p)\) satisfies the following properties._

1. _Let \(G^{\prime}\subseteq G\) be a spanning subgraph of \(G\) with \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p\binom{n-d}{k-d}\) and let \(Q\subseteq V(G^{\prime})\) with \(|Q|\leq\alpha n\) be given. Then there is a \(\nu n\)-set \(C\subseteq V(G^{\prime})\setminus Q\) with the following property._
_For any \(R\subseteq C\) with \(|R|\leq\varrho n\) and distinct \(u\), \(v\in V(G^{\prime})\setminus(Q\cup R)\), there is a loose \((u,v)\)-path \(P\) of order \(8(k-1)+1\) with \(V(P)\setminus\{u,v\}\subseteq C\setminus R\)._

2. _\(G\) is \((\lambda,p,1+\lambda)\)-upper-uniform._
3. _For every \(X\subseteq V(G)\) with \(|X|\leq(\alpha+\nu)n\), all but \(o(n^{d})\) \(d\)-tuples \(D\subseteq V(G)\setminus X\) have \(d\)-degree at least \((\mu_{d}(k)+\gamma/2)p\binom{|V(G)\setminus X|-d}{k-d}\) in \(G-X\)._
4. _Let \(U_{1},\ldots,U_{k}\) be disjoint subsets of \(V(G)\) each of size at least \(\varepsilon n/r_{1}\). Let \(M\subseteq U_{k-1}\times U_{k}\) be of size at most \((\tau-\sqrt{\varepsilon})|U_{k-1}||U_{k}|\). Then \(G\) has at most \((1+\varepsilon)(\tau-\sqrt{\varepsilon})p\prod_{j=1}^{k}|U_{j}|\) edges \(e=\{v_{1},\ldots,v_{k}\}\) with \(v_{i}\in U_{i}\) for \(i\in[k-2]\) and \((v_{k-1},v_{k})\in M\)._

Proof.: Properties (1) and (2) follow, respectively, from Lemmas 4.1 and 3.12. Property (3) follows from an application of Lemma 3.14.

To show that (4) holds, we start by applying Lemma 3.11 with \(\varepsilon/r_{1}\) and \(\varepsilon\) playing the roles of \(\lambda\) and \(\eta\) respectively. Then we know that w.h.p. \(G\) satisfies the conclusion of Lemma 3.11, which states that for any disjoint subsets \(U_{1},\ldots,U_{k}\) each of size at least \((\varepsilon/r_{1})n\) and any \(M\subseteq U_{k-1}\times U_{k}\) of size at least \(C/(pn^{k-3})\), we have \(Z(M;U_{1},\ldots,U_{k})=(1\pm\varepsilon)p|M|\prod_{j=1}^{k-2}|U_{j}|\), where we recall that \(Z(M;U_{1},\ldots,U_{k})\) denotes the number of edges \(e=\{v_{1},\ldots,v_{k}\}\) with \(v_{i}\in U_{i}\) for \(i\in[k-2]\) and \((v_{k-1},v_{k})\in M\).

Fix disjoint subsets \(U_{1},\ldots,U_{k}\) each of size at least \((\varepsilon/r_{1})n\) and \(M\subseteq U_{k-1}\times U_{k}\) such that \(M\) has size at most \((\tau-\sqrt{\varepsilon})|U_{k-1}||U_{k}|\). If \(|M|\geq C/(pn^{k-3})\), then from the conclusion of Lemma 3.11 we obtain \[Z(M;U_{1},\ldots,U_{k})\leq(1+\varepsilon)(\tau-\sqrt{\varepsilon})p\prod_{j=1}^{k}|U_{j}|,\] which finishes the proof in this case. On the other hand, if \(|M|<C/(pn^{k-3})\), then since \(1/(pn^{k-3})\ll n^{2}\) one can add pairs to \(M\) to obtain a set \(M^{+}\subseteq U_{k-1}\times U_{k}\) such that \(C/(pn^{k-3})\leq|M^{+}|\leq(\tau-\sqrt{\varepsilon})|U_{k-1}||U_{k}|\). Therefore, as before, since \(Z(M;U_{1},\ldots,U_{k})\leq Z(M^{+};U_{1},\ldots,U_{k})\), we obtain the desired bound on \(Z(M;U_{1},\ldots,U_{k})\), completing the proof of our claim.

Let us fix a deterministic graph \(G\) that satisfies the properties described in Claim 5.1. Let \(G^{\prime}\subseteq G\) be a spanning subgraph of \(G\) with \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p\binom{n-d}{k-d}\), and let \(Q\subseteq V(G^{\prime})\) with \(|Q|\leq\alpha n\) and \(u\), \(v\in V(G^{\prime})\setminus Q\) be given. We shall find a loose \((u,v)\)-path \(P\) in \(G^{\prime}-Q\) that covers all but \(\eta n\) vertices of \(G^{\prime}-Q\).

To begin, we reserve a set \(C\subseteq V(G^{\prime})\setminus Q\) of \(\nu n\) vertices satisfying the conclusion of Claim 5.1(1). We will use \(C\) for connections later on. By Claim 5.1(2), we may apply Lemma 3.1 to \(G^{\prime}\) with parameters \(\varepsilon\), \(r_{0}\) and \(r_{1}\).
We apply Lemmas 3.4 and 3.6 with threshold \(\tau\) and prepartition \(\{C\cup Q\), \(V(G)\setminus(C\cup Q)\}\) to obtain an \(r\)-vertex graph \(\mathcal{R}\) with \(r_{0}\leq r\leq r_{1}\). Denote by \(\mathcal{Q}\) the set of clusters that contain the vertices of \(C\cup Q\). We partition the vertex set of \(\mathcal{R}-\mathcal{Q}\) randomly into parts \(S\) of size \(s\) and a 'residue' part of size at most \(s\). We now invoke Claim 5.1(3) with \(X=Q\cup C\) and Lemma 3.6 to note that most \(d\)-tuples in \(\mathcal{R}-\mathcal{Q}\) have large degree, from which we deduce, by an application of Lemma 3.16, that \(\delta_{d}(\mathcal{R}[S])\geq(\mu_{d}(k)+\gamma/8)\binom{s-d}{k-d}\) for all but \((\eta/2)r/s\) parts \(S\) with \(|S|=s\). Let \(\mathcal{S}\) be the set of such parts \(S\).

**Claim 5.2**.: _For each \(S\in\mathcal{S}\), the induced graph \(G^{\prime}\big{[}\bigcup S\big{]}\) contains a loose path of order at least \((1-2\sqrt{\varepsilon})sn/r\)._

Proof.: By the definition of \(\mu_{d}(k)\), there is a loose Hamilton cycle \(L\) in \(\mathcal{R}[S]\), whose cyclically ordered vertex set we denote by \(V_{1},\ldots,V_{s}\). Without loss of generality, we can assume that \(V_{1}\) has degree \(2\) in \(L\). Let \(T=\{i\in[s]\colon i\equiv 1\pmod{k-1}\}\), and note that \(V_{i}\) has degree \(2\) in \(L\) if and only if \(i\in T\). In the following, all index computations are taken modulo \(s\).

To construct a loose path \(P\) of the desired order, we begin with some preliminaries. Suppose \(W\subseteq V_{1}\cup\cdots\cup V_{s}\) is such that \(|V_{i}\setminus W|\geq 2\sqrt{\varepsilon}|V_{i}|\) for every \(i\in[s]\). Let \(V^{\prime}_{i}=V_{i}\setminus W\). For every \(i\in T\), let \(H(i,W)\) be the bipartite graph with parts \(V^{\prime}_{i}\) and \(V^{\prime}_{i+k-1}\) that has an edge \(v_{i}v_{i+k-1}\) whenever there are vertices \(v_{i+1}\in V^{\prime}_{i+1}\), ..., \(v_{i+k-2}\in V^{\prime}_{i+k-2}\) such that \(\{v_{i},\ldots,v_{i+k-1}\}\) is an edge in \(G^{\prime}\). We claim that \[H(i,W)\text{ is $\sqrt{\varepsilon}$-lower regular with threshold density $\tau$},\] which means that for any \(A\subseteq V^{\prime}_{i}\) with \(|A|\geq\sqrt{\varepsilon}|V^{\prime}_{i}|\) and \(B\subseteq V^{\prime}_{i+k-1}\) with \(|B|\geq\sqrt{\varepsilon}|V^{\prime}_{i+k-1}|\), we have \(d(A,B)\geq\tau-\sqrt{\varepsilon}\). To see this, fix \(i\in T\), consider such sets \(A\) and \(B\) and note that \(|A|\geq\sqrt{\varepsilon}|V^{\prime}_{i}|\geq\varepsilon|V_{i}|\) and \(|B|\geq\sqrt{\varepsilon}|V^{\prime}_{i+k-1}|\geq\varepsilon|V_{i+k-1}|\). For the sake of contradiction, assume that \(d(A,B)<\tau-\sqrt{\varepsilon}\). By Claim 5.1(4) and the definition of \(H(i,W)\), it follows that \(d(A,V^{\prime}_{i+1},\ldots,V^{\prime}_{i+k-2},B)\leq(1+\varepsilon)(\tau-\sqrt{\varepsilon})p\leq(\tau-\sqrt{\varepsilon}/2)p\). On the other hand, since the tuple \((V_{i},\ldots,V_{i+k-1})\) is \((\varepsilon,p)\)-regular with \(d(V_{i},\ldots,V_{i+k-1})\geq\tau p\), it follows that \(d(A,V^{\prime}_{i+1},\ldots,V^{\prime}_{i+k-2},B)\geq(\tau-\varepsilon)p\), which is a contradiction. This proves that \(H(i,W)\) is indeed \(\sqrt{\varepsilon}\)-lower regular with threshold density \(\tau\).

Now we find a loose path \(P\) step by step as follows. To start, note that since \(H(1,\emptyset)\) is \(\sqrt{\varepsilon}\)-lower regular with threshold density \(\tau\), all but at most \(\sqrt{\varepsilon}|V_{1}|\) vertices of \(V_{1}\) have degree at least \((\tau-\sqrt{\varepsilon})|V_{k}|\) in \(H(1,\emptyset)\); we let \(P_{1}\) consist of one such vertex \(v_{1}\). Suppose now that we have constructed a path \(P_{\ell}\) with vertices \(v_{1},\ldots,v_{\ell}\) where \(v_{i}\in V_{i}\) for \(i\in[\ell]\) and \(\ell\in T\).
Suppose moreover that \(v_{\ell}\) has degree at least \((\tau-2\sqrt{\varepsilon})|V_{\ell+k-1}\setminus V(P_{\ell})|\) in \(H(\ell,V(P_{\ell}))\). If \(V(P_{\ell})\) has more than \((1-2\sqrt{\varepsilon})|V_{i}|\) vertices in some cluster \(V_{i}\), we stop. In this case \(P_{\ell}\) has the desired order. Otherwise, we select a neighbour \(v_{\ell+k-1}\in V_{\ell+k-1}\) of \(v_{\ell}\) in \(H(\ell,V(P_{\ell}))\) that has degree at least \((\tau-2\sqrt{\varepsilon})|V_{\ell+2k-2}\setminus V(P_{\ell})|\) in \(H(\ell+k-1,V(P_{\ell}))\). This is possible because, on the one hand, almost all vertices in \(H(\ell+k-1,V(P_{\ell}))\) have this property as \(H(\ell+k-1,V(P_{\ell}))\) is \(\sqrt{\varepsilon}\)-lower-regular with threshold density \(\tau\), and on the other hand \(v_{\ell}\) has a sufficiently large degree in \(H(\ell,V(P_{\ell}))\). By the definition of \(H(\ell,V(P_{\ell}))\), there are vertices \(v_{\ell+1}\in V_{\ell+1},\ldots,v_{\ell+k-2}\in V_{\ell+k-2}\) such that \(\{v_{\ell},\ldots,v_{\ell+k-1}\}\) is an edge in \(G^{\prime}-(V(P_{\ell})\setminus\{v_{\ell}\})\). Hence, we can extend \(P_{\ell}\) to a loose path \(P_{\ell+k-1}\) with vertices \(v_{1},\ldots,v_{\ell+k-1}\), with \(v_{\ell+k-1}\) having degree at least \((\tau-2\sqrt{\varepsilon})|V_{\ell+2k-2}\setminus V(P_{\ell})|\) in \(H(\ell+k-1,V(P_{\ell}))\).

For each \(S\in\mathcal{S}\), apply the above claim to obtain a path \(P_{S}\) covering most of \(S\). To finish the proof of Lemma 2.2, we connect up these paths to obtain the desired loose \((u,v)\)-path \(P\) using \(|\mathcal{S}|+1\) pairwise disjoint loose paths of order \(8(k-1)+1\) in \(G^{\prime}[C]\). This is possible, since by Claim 5.1(1) we may avoid (within \(C\)) up to \(\varrho n\) vertices \(R\) in addition to those of \(Q\). It follows that \(P\) covers all but \((\nu+\eta/2+2\sqrt{\varepsilon})n\leq\eta n\) vertices of \(G^{\prime}-Q\).

## 6. Absorbing vertices

This section is dedicated to the proof of Lemma 2.1. The basic idea is to combine many small absorbing structures into a larger one. We define the former as follows.

**Definition 6.1** (Absorber).: Let \(X=\{x_{1},\ldots,x_{k-1}\}\) be a set of vertices in a \(k\)-graph \(G\). A collection \(A=\{P_{1}^{j},P_{2}^{j}\}_{j\in[q]}\) is an _absorber rooted in \(X\)_ if the following hold:

* \(P_{i}^{1},\ldots,P_{i}^{q}\) are pairwise vertex-disjoint loose paths for \(i\in[2]\);
* \(V(\bigcup_{j\in[q]}P_{1}^{j})=V(\bigcup_{j\in[q]}P_{2}^{j})\cup X\) and \(X\cap V(\bigcup_{j\in[q]}P_{2}^{j})=\emptyset\);
* \(P_{1}^{j}\) has the same starting and terminal vertices as \(P_{2}^{j}\) for each \(j\in[q]\).

We refer to \(\bigcup_{j\in[q]}P_{1}^{j}\) as the absorber's _active_ state and \(\bigcup_{j\in[q]}P_{2}^{j}\) as its _passive_ state. The _vertices of \(A\)_ are the vertices in \(\bigcup_{j\in[q]}P_{2}^{j}\) and we let \(V(A)=\bigcup_{j\in[q]}V(P_{2}^{j})\). Note that \(V(A)\cap X=\emptyset\). The _order_ of \(A\) is \(|V(A)|\).

Next, we define templates, which encode the relative position of the absorbers in our construction.

**Definition 6.2** (Template).: An \(r\)-graph \(T\) is an \((r,z)\)-_template_ if there is a \(z\)-set \(Z\subseteq V(T)\) such that \(T-W\) has a perfect matching for any set \(W\subseteq Z\) of size less than \(z/2\) with \(v(T-W)\) divisible by \(r\). We call \(Z\) the _flexible_ set of \(T\).

Templates were introduced by Montgomery [37] to adjust the absorption method of Rödl, Ruciński and Szemerédi [42] to the sparse setting.
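For orientation, we spell out how Definition 6.2 will be invoked in the proof of Lemma 2.1 below (with \(r=k-1\) and \(z=\nu n\)); this is only a restatement of the definition in the form in which it is used. If \(T\) is a \((k-1,\nu n)\)-template with flexible set \(Z\) and \(P\) is a loose path, then \(T-V(P)\) has a perfect matching provided
\[V(T)\cap V(P)\subseteq Z,\qquad|V(T)\cap V(P)|<\frac{\nu n}{2},\qquad\text{and}\qquad v(T)-|V(T)\cap V(P)|\equiv 0\pmod{k-1}.\]
These are exactly the three conditions verified at the end of the proof of Lemma 2.1.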
The next lemma was derived by Ferber and Kwan [17] from the work of Montgomery and states that there exist sparse templates.

**Lemma 6.3** (Lemma 7.3 in [17]).: _For \(1/z\), \(1/L\ll 1/r\), there is an \((r,z)\)-template \(T\) with \(v(T)\), \(e(T)\leq Lz\) and \(v(T)\equiv 0\bmod r\)._

Consider a graph \(G^{\prime}\subseteq G\) as in the setting of Lemma 2.1, where \(G\) is a typical instance of \(\mathrm{H}_{k}(n,p)\). The next lemma states that a typical linear-sized set \(Z\subseteq V(G^{\prime})\) has the property that any sublinear set \(W\subseteq Z\) can be matched into \(Z\).4

Footnote 4: The statement of Lemma 6.4 slightly differs from Lemma 7.5 in [17], as we state that a random \(\nu n\)-set \(Z\) has the desired property. This version follows immediately from their proof.

**Lemma 6.4** (Lemma 7.5 in [17]).: _Let \(\eta\ll 1/k\), \(1/d\), \(\gamma\), \(\nu\) and \(p\geq n^{-(k-1)}\log^{3}n\). Then w.h.p. \(G\sim\mathrm{H}_{k}(n,p)\) has the following property. Let \(G^{\prime}\subseteq G\) be a spanning subgraph with \(\delta_{d}(G^{\prime})\geq\gamma p\binom{n-d}{k-d}\). Let \(Z\subseteq V(G)\) be chosen uniformly at random among all \(\nu n\)-sets. Then w.h.p. for any \(W\subseteq V(G)\setminus Z\) with \(|W|\leq\eta n\), there is a matching in \(G^{\prime}\) covering all vertices in \(W\), each edge of which contains one vertex of \(W\) and \(k-1\) vertices of \(Z\)._

Finally, we need the following lemma, which provides us with a single absorber avoiding a small set of fixed vertices.

**Lemma 6.5** (Sparse Absorber Lemma).: _Let \(k\geq 3\) and suppose \(1/M\ll\alpha\ll 1/k\), \(1/d\), \(\gamma\) and \(1/C\ll 1/k,\gamma\). If \(p\geq\max\{n^{-(k-1)/2+\gamma},Cn^{-(k-d)}\log n\}\), then w.h.p. \(G\sim\mathrm{H}_{k}(n,p)\) has the following property._

_For any spanning subgraph \(G^{\prime}\subseteq G\) with \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p\binom{n-d}{k-d}\), any set \(Q\) of at most \(\alpha n\) vertices and any \((k-1)\)-set \(X\) in \(V(G)\setminus Q\), there is an absorber \(A\) in \(G^{\prime}\) rooted in \(X\) that avoids \(Q\) and has order at most \(M\)._

Lemma 6.5 is proved in Section 7. Before we come to the proof of Lemma 2.1, let us give an outline of the argument.

_Sketch of the proof of Lemma 2.1 (Sparse Absorption Lemma)._ Consider \(\nu\) and \(M\) with \(\eta\ll\nu\), \(1/M\ll\alpha\). We begin by reserving a set \(Z\subseteq V(G^{\prime})\) and vertices \(u^{\prime}\in Z\) and \(v\notin Z\) such that, for any set \(W\subseteq V(G^{\prime})\setminus Z\) with \(|W|\leq\eta n\), we may cover \(W^{\prime}=W\cup\{v\}\) with a loose \((u^{\prime},v)\)-path \(P_{W^{\prime}}\) of order at most \(\sqrt{\eta}n\) in \(G^{\prime}[Z\cup W^{\prime}]\). This is possible by applying Lemma 4.1 with \(Z\) playing the role of \(C\) and Lemma 6.4.

Next, we use Lemma 6.3 to find a \((k-1,\nu n)\)-template \(T\). We assume that \(V(T)\subseteq V(G^{\prime})\) and that \(Z\) considered above is the flexible set of \(T\). In the next step, we find absorbers rooted in the edges of \(T\). More precisely, by Lemma 6.5 there is an absorber \(A_{e}\) in \(G^{\prime}\) rooted in \(e\) of order at most \(M\) for each \(e\in E(T)\). Denote by \(\mathcal{P}_{e}\) the collection of loose paths of \(A_{e}\) that do not cover \(e\), that is, the paths corresponding to the passive state of \(A_{e}\). We can assume that the collections \(\mathcal{P}_{e}\) are pairwise vertex-disjoint.
This can be guaranteed by choosing the above absorbers one after another, avoiding the already involved vertices with the help of the set \(Q\) in Lemma 6.5. Finally, fix \(u\in V(G^{\prime})\setminus\big{(}V(T)\cup\{v\}\big{)}\) arbitrarily and integrate the absorbers \(A_{e}\) (\(e\in E(T)\)) into a single loose \((u,u^{\prime})\)-path \(P_{1}\) with \(V(P_{1})\cap\big{(}V(T)\cup\{v\}\big{)}=\{u^{\prime}\}\) using Lemma 4.1 several times. Set \(A=V(P_{1})\cup V(T)\cup\{v\}\).

We claim that \(A\) has the properties detailed in Lemma 2.1. We may pick the involved constants so that \(|A|\leq\alpha n\). For the absorption property, consider an arbitrary subset \(W\subseteq V(G^{\prime})\setminus A\) with \(|W|\leq\eta n\) and \(|W|\) divisible by \(k-1\). We have to show that the induced graph \(G^{\prime}[A\cup W]\) has a loose \((u,v)\)-path covering \(A\cup W\). To that end, let \(W^{\prime}=W\cup\{v\}\). By the choice of \(Z\), we can find a loose \((u^{\prime},v)\)-path \(P_{W^{\prime}}\) of order at most \(\sqrt{\eta}n\) in \(G^{\prime}[Z\cup W^{\prime}]\) that covers \(W^{\prime}\). We now concatenate \(P_{1}\) and \(P_{W^{\prime}}\) to obtain a loose \((u,v)\)-path \(P\) that covers \(W\) and uses at most \(|Z|/2\) vertices of \(Z\). Moreover, one can check that \(|V(T)\setminus V(P)|\equiv 0\bmod(k-1)\). Recall that \(T\) is a \((k-1,\nu n)\)-template with flexible set \(Z\). It follows that \(T-V(P)\) admits a perfect matching \(\mathcal{M}\). We then 'activate' each absorber \(A_{e}\) with \(e\in\mathcal{M}\) and leave all other absorbers in their passive state. For each \(e\in\mathcal{M}\), let \(\mathcal{P}^{\prime}_{e}\) be the collection of paths of \(A_{e}\) that cover \(e\). Let \(P^{\prime}\) be obtained from \(P\) by replacing \(\mathcal{P}_{e}\) with \(\mathcal{P}^{\prime}_{e}\) for each \(e\in\mathcal{M}\). It follows that \(P^{\prime}\) is a loose \((u,v)\)-path in \(G^{\prime}[A\cup W]\) covering \(A\cup W\).

We now give a complete proof of Lemma 2.1. Most of the combinatorial components of the proof are given in the sketch above, but there are still some numerical facts to check.

Proof of Lemma 2.1.: Let \(\ell=8(k-1)+1\) and introduce further constants as follows: let \[\frac{1}{z},\,\frac{1}{L}\ll\frac{1}{k-1}\] be as in Lemma 6.3 with \(r=k-1\). Let \[\frac{1}{M}\ll\alpha\ll\frac{1}{k},\,\frac{1}{d},\,\gamma\] be as in Lemmas 4.1 and 6.5. Now let \(\nu\) be such that \[\nu\ll\alpha,\,\frac{1}{L},\,\frac{1}{M},\,\frac{1}{\ell},\] and let \[\eta\ll\varrho\ll\frac{1}{k},\,\frac{1}{d},\,\gamma,\,\nu\] with \(\varrho\) as in Lemma 4.1 and \(\eta\) as in Lemma 6.4. Finally, let \(1/K\ll 1/k,\gamma\) as in Lemma 4.1 and \(p\geq\max\{n^{-(k-1)/2+\gamma},Kn^{-(k-d)}\log n\}\), which allows us to apply Lemmas 4.1, 6.4 and 6.5. We can now prove the following claim.

**Claim 6.6**.: _W.h.p. \(G\sim\mathrm{H}_{k}(n,p)\) satisfies the following property. For any spanning subgraph \(G^{\prime}\) of \(G\) with \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p\binom{n-d}{k-d}\) and any \(Q\subseteq V(G^{\prime})\) with \(|Q|\leq\alpha n\) the following hold:_

(0) _Let \(C=V(G^{\prime})\setminus Q\). For every pair of distinct vertices \(u\), \(v\in C\), there is a loose \((u,v)\)-path \(P\) of order \(\ell=8(k-1)+1\) in \(G^{\prime}\) with \(V(P)\subseteq C\)._

(1) _Let \(C\subseteq V(G^{\prime})\setminus Q\) be a set of size \(\nu n\) taken uniformly at random. Then with probability at least \(2/3\) the following holds._
_For any \(R\subseteq C\) with \(|R|\leq\varrho n\) and distinct \(u,v\in V(G^{\prime})\setminus(Q\cup R)\), there is a loose \((u,v)\)-path \(P\) of order \(\ell=8(k-1)+1\) in \(G^{\prime}\) with \(V(P)\setminus\{u,v\}\subseteq C\setminus R\)._

(2) _Let \(Z\subseteq V(G)\) be chosen uniformly at random among all \(\nu n\)-sets. Then with probability at least \(2/3\), for any \(W\subseteq V(G)\setminus Z\) with \(|W|\leq\eta n\), there is a matching in \(G^{\prime}\) covering all vertices in \(W\), each edge of which contains one vertex of \(W\) and \(k-1\) vertices of \(Z\)._

(3) _For any \((k-1)\)-set \(X\) in \(V(G)\setminus Q\), there is an absorber \(A\) in \(G^{\prime}\) rooted in \(X\) that avoids \(Q\) and has order at most \(M\)._

Fix a graph \(G\) that satisfies the properties in Claim 6.6 and let \(G^{\prime}\) be a spanning subgraph of \(G\) with minimum \(d\)-degree \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p{n-d\choose k-d}\). Our goal is to find \(A\subseteq V(G^{\prime})\) and two vertices \(u,v\in A\) that satisfy the properties detailed in Lemma 2.1. Note that we may assume that \(n\) is large whenever necessary. We begin with the following claim.

**Claim 6.7**.: _There is a \(\nu n\)-set \(Z\subseteq V(G^{\prime})\), a vertex \(u^{\prime}\in Z\) and a vertex \(v\in V(G^{\prime})\setminus Z\) with the following property. For every subset \(W^{\prime}\subseteq V(G)\setminus Z\) with \(v\in W^{\prime}\) of size at most \(\eta n+1\), there is a loose \((u^{\prime},v)\)-path \(P_{W^{\prime}}\) of order at most \(\sqrt{\eta}n\) in \(G^{\prime}[Z\cup W^{\prime}]\) that covers \(W^{\prime}\)._

Proof.: Using the union bound, we obtain \(Z\subseteq V(G^{\prime})\) satisfying both Claim 6.6(1) (with \(Z\) playing the role of \(C\) and \(\emptyset\) playing the role of \(Q\)) and Claim 6.6(2). Fix any \(u^{\prime}\in Z\) and any \(v\in V(G^{\prime})\setminus Z\). Now consider an arbitrary subset \(W^{\prime}\subseteq V(G)\setminus Z\) with \(v\in W^{\prime}\) of size at most \(\eta n+1\). We use Claim 6.6(2) to find a matching \(\mathcal{M}\) in \(G^{\prime}[W^{\prime}\cup Z]\) covering all vertices in \(W^{\prime}\), each edge of which contains one vertex of \(W^{\prime}\) and \(k-1\) vertices of \(Z\). Next, we use Claim 6.6(1) to connect up the edges of \(\mathcal{M}\) to a loose \((u^{\prime},v)\)-path in \(G^{\prime}[Z\cup W^{\prime}]\). This can be done by involving at most \(2\ell|\mathcal{M}|\leq\sqrt{\eta}n\leq\varrho n\) vertices in total. (Note that the set \(R\) of Claim 6.6(1) can be used to keep the paths from overlapping in the wrong vertices.)

Let \(Z\), \(u^{\prime}\) and \(v\) be as in Claim 6.7. By Lemma 6.3 applied with \(r=k-1\) and \(z=\nu n\), there is a \((k-1,\nu n)\)-template \(T\) on at most \(L\nu n\leq(\alpha/3)n\) vertices and at most \(L\nu n\) edges and with \(v(T)\equiv 0\bmod(k-1)\). We inject \(V(T)\) into \(V(G^{\prime})\setminus\{v\}\), mapping the flexible set of \(T\) onto \(Z\subseteq V(G^{\prime})\). For convenience, let us identify \(V(T)\) with its image under the injection \(V(T)\to V(G^{\prime})\); thus, in what follows, we think of \(V(T)\) as a subset of \(V(G^{\prime})\) and \(Z\subseteq V(T)\) is the flexible set in the template \(T\).

Next, we find absorbers rooted in the edges of \(T\). More precisely, by Claim 6.6(3) there is an absorber \(A_{e}\) in \(G^{\prime}\) rooted in \(e\) of order at most \(M\) for each \(e\in E(T)\).
Denote by \(\mathcal{P}_{e}\) the collection of loose paths of \(A_{e}\) that do not cover \(e\). We can assume that the collections \(\mathcal{P}_{e}\) are pairwise vertex-disjoint and that they are disjoint from \(V(T)\). This can be guaranteed by choosing the above absorbers one after another, avoiding the at most \(M(e(T)-1)+v(T)\leq 2LM\nu n\leq\alpha n\) already involved vertices with the help of the set \(Q\).

Next, we integrate these absorbers into a single path. Fix an arbitrary vertex \(u\in V(G)\setminus\big{(}V(T)\cup\bigcup_{e\in E(T)}V(A_{e})\cup\{v\}\big{)}\). Now use Claim 6.6(0) several times to connect up all paths of all collections \(\mathcal{P}_{e}\) for \(e\in E(T)\) to one loose \((u,u^{\prime})\)-path \(P_{1}\) of order at most \(2\ell Me(T)\leq 2\ell ML\nu n\leq(\alpha/3)n\) with \(V(P_{1})\cap\big{(}V(T)\cup\{v\}\big{)}=\{u^{\prime}\}\). Note that this is possible by choosing the connecting paths (of order \(\ell\)) one after another, avoiding the at most \(\alpha n\) already used vertices (which play the part of \(Q\) in Claim 6.6(0)).

Set \(A=V(P_{1})\cup V(T)\cup\{v\}\). We claim that \(A\) has the properties detailed in the statement of Lemma 2.1. Clearly, \(|A|\leq\alpha n\). For the second part, consider an arbitrary subset \(W\subseteq V(G^{\prime})\setminus A\) with \(|W|\leq\eta n\) divisible by \(k-1\). We have to show that the induced graph \(G^{\prime}[A\cup W]\) has a loose \((u,v)\)-path covering \(A\cup W\). To this end, let \(W^{\prime}=W\cup\{v\}\). We now use Claim 6.7 to find a loose \((u^{\prime},v)\)-path \(P_{W^{\prime}}\) of order at most \(\sqrt{\eta}n\) in \(G^{\prime}[Z\cup W^{\prime}]\) that covers \(W^{\prime}\). We concatenate \(P_{1}\) and \(P_{W^{\prime}}\) to obtain a loose \((u,v)\)-path \(P\) that covers \(W\) and uses at most \(|Z|/2\) vertices of \(Z\). Note that \(V(T)\cap V(P)=V(P_{W^{\prime}})\setminus(W\cup\{v\})\), whose cardinality is \(\equiv 0\bmod(k-1)\) because \(|W\cup\{v\}|\equiv 1\bmod(k-1)\) and \(|V(P_{W^{\prime}})|\equiv 1\bmod(k-1)\). It follows that \(|V(T)\setminus V(P)|\) is divisible by \(k-1\), as \(v(T)\equiv 0\bmod(k-1)\). Recall that \(T\) is a \((k-1,\nu n)\)-template with flexible set \(Z\). Hence \(T-V(P)\) admits a perfect matching \(\mathcal{M}\). We then 'activate' each absorber \(A_{e}\) with \(e\in\mathcal{M}\) and leave all other absorbers in their passive state. Moreover, for each \(e\in\mathcal{M}\), let \(\mathcal{P}^{\prime}_{e}\) be the collection of paths of \(A_{e}\) that cover \(e\). Finally, we obtain a path \(P^{\prime}\) from \(P\) by replacing \(\mathcal{P}_{e}\) by \(\mathcal{P}^{\prime}_{e}\) for each \(e\in\mathcal{M}\). It follows that \(P^{\prime}\) is a loose \((u,v)\)-path in \(G^{\prime}[A\cup W]\) covering \(A\cup W\).

## 7. Absorbers in the sparse setting

Here we prove Lemma 6.5 (Sparse Absorber Lemma). The idea of the proof is to find an absorbing structure \(A\) in the dense setting and embed it in the random graph via an embedding lemma for hypergraph regularity. One issue arising in this approach is that the probability \(p\) depends on the 'densest spots' of \(A\). So to minimise \(p\), we have to find an absorbing structure \(A\) whose edges are nowhere too cluttered. This can be formalised by the following definition, which we adopt from Ferber and Kwan [17].

A _Berge cycle_ in a (possibly non-uniform) hypergraph is a sequence of distinct edges \(e_{1},\ldots,e_{\ell}\) such that there exist distinct vertices \(v_{1},\ldots,v_{\ell}\) with \(v_{i}\in e_{i}\cap e_{i+1}\) for all \(i\) (where \(e_{\ell+1}=e_{1}\)).
The _length_ of such a cycle is its number of edges \(\ell\). The _girth_ of a hypergraph is the length of the shortest Berge cycle it contains (if the hypergraph contains no Berge cycle we say it has infinite girth, or is _Berge acyclic_). We say that a \(k\)-uniform absorber rooted in a \((k-1)\)-set \(X\) is _\(K\)-sparse_ if it has girth at least \(K\), even after adding the extra edge \(X\).5

Footnote 5: Note that adding \(X\) as an edge results in a non-uniform hypergraph, but this is in accordance with the definition of the girth.

The following lemma allows us to find a sparse absorber in the dense setting. Its proof is deferred to Section 8.

**Lemma 7.1** (Dense Absorber Lemma).: _Let \(1/n\ll\eta\ll 1/M\ll\gamma\), \(1/k\), \(1/d\), \(1/K\). Let \(G\) be an \(n\)-vertex \(k\)-graph in which all but \(\eta n^{d}\) \(d\)-sets have degree at least \((\mu_{d}(k)+\gamma)\binom{n-d}{k-d}\). Let \(X\) be a \((k-1)\)-set of vertices each of degree at least \((\mu_{d}(k)+\gamma)\binom{n-1}{k-1}\). Then \(G\) contains a \(K\)-sparse absorber \(A\) rooted in \(X\) of order at most \(M\)._

The proof of Lemma 6.5 (Sparse Absorber Lemma) can be found in Section 7.1. In the following, we give an outline of the argument.

Figure 2. The absorber appearing in the proof of Lemma 6.5. Note that the path \(P^{i}_{1}\) and the paths of type \(P_{1}\) of the absorber \(A_{j}\) (dashed) cover all (depicted) vertices, whereas the path \(P^{i}_{2}\) and the paths of type \(P_{2}\) of \(A_{j}\) (dashed) leave \(x_{i}\) uncovered.

Sketch of the proof of Lemma 6.5.: Let \(X=\{x_{1},\ldots,x_{k-1}\}\), and suppose that \(Q\) is empty for simplicity. We begin by finding pairwise vertex-disjoint loose cycles \(F_{1},\ldots,F_{k-1}\) each of order \(16(k-1)\) such that \(x_{i}\) is a vertex of degree \(2\) in \(F_{i}\). This is possible by applying Lemma 4.1 twice for each vertex of \(X\). For the following argument, it is crucial that each of these cycles has the same order. We also remark that in the actual proof, we choose \(\Omega(n)\) such cycles for each vertex in \(X\), as this lets us have more flexibility when targeting certain objects with our embeddings. However, in this sketch, we omit the details of this technical argument.

For \(i\in[k-1]\), denote the vertices of \(F_{i}\) by \(v_{i}^{1},\ldots,v_{i}^{\ell},x_{i}\) along the cyclic ordering of \(F_{i}\), where \(\ell=16(k-1)-1\). We define \((k-1)\)-sets \(Y_{j}\) with \(j\in[\ell]\) by setting \(Y_{j}=\{v_{1}^{j},\ldots,v_{k-1}^{j}\}\). Next, we find a set of pairwise disjoint absorbers \(A_{1},\ldots,A_{\ell}\) of constant order where each \(A_{j}\) is rooted in \(Y_{j}\). This is possible by a contraction argument of Ferber and Kwan [17] based on weak hypergraph regularity, and the fact that we can find \(\Omega(n)\) loose cycles \(F_{i}\) containing \(x_{i}\in X\): intuitively, the fact that we have many choices for the \(F_{i}\) gives us many choices for the \(Y_{j}\), and thus one is able to find the \(A_{j}\) for a suitable choice of the \(F_{i}\). The details of this step can be found in Section 7.1. Here, we only note that during this step, we apply Lemma 7.1 to find a copy of \(A_{j}\) in a suitable reduced graph, which we then embed into \(G^{\prime}\) using a sparse embedding lemma. Since the absorbers \(A_{j}\) are suitably sparse, the hypothesis \(p\geq n^{-(k-1)/2+\gamma}\) is enough to let us find the embedding.

We now describe the absorber rooted in \(X\), which is also illustrated in Figure 2.
To begin, we define two additional loose paths for each \(i\in[k-1]\) as follows. Let \(P_{1}^{i}\) be the loose \((v_{i}^{k-1},v_{i}^{2k-2})\)-path of order \(15(k-1)+1\) with vertex ordering \[v_{i}^{k-1},\,v_{i}^{k-2},\,\ldots,\,v_{i}^{1},\,x_{i},\,v_{i}^{\ell},\,v_{i}^{\ell-1},\,\ldots,v_{i}^{2k-2}.\] Let \(P_{2}^{i}\) be the loose \((v_{i}^{k-1},v_{i}^{2k-2})\)-path consisting of a single edge with vertex ordering \[v_{i}^{k-1},\,v_{i}^{k},\ldots,\,v_{i}^{2k-2}.\] We then set \(B_{i}=\{P_{1}^{i},P_{2}^{i}\}\).

To finish, we claim that the collection \(A_{1},\ldots,A_{\ell}\), \(B_{1},\ldots,B_{k-1}\) is an absorber rooted in \(X\). Indeed, if \(X\) is to be absorbed, we take the paths \(P_{1}^{i}\) from each of the absorbers \(B_{i}\). This covers \(X\) and the vertices of each \(F_{i}\) except for \(v_{i}^{k},\ldots,v_{i}^{2k-3}\). To cover the latter, we 'activate' the absorbers \(A_{k},\ldots,A_{2k-3}\). All other absorbers \(A_{j}\) are left in their passive state. If, on the other hand, \(X\) is not to be absorbed, we take the paths \(P_{2}^{i}\) from each of the absorbers \(B_{i}\). This leaves only \(X\) and the vertices \(v_{i}^{2k-1},\ldots,v_{i}^{\ell},v_{i}^{1},\ldots,v_{i}^{k-2}\) uncovered. To include the latter, we 'activate' the absorbers \(A_{2k-1},\ldots,A_{\ell},A_{1},\ldots,A_{k-2}\). All other absorbers \(A_{j}\) are left in their passive state. Since it is clear from the construction that these absorbers consist of paths with shared starting and ending points, this finishes the proof.

### Details of the proof

Let us now prove Lemma 6.5 in detail. Our first result, Lemma 7.2, tells us that a 'book' version of the \(K\)-sparse absorber of Lemma 7.1 has low \(k\)-density. The proof of Lemma 7.2 is deferred to the end of this section.

**Lemma 7.2**.: _Suppose \(1/K\ll 1/T\), \(\gamma\), \(1/k\) and let \(H\) be the union of \(T\) copies of a \(K\)-sparse \(k\)-uniform absorber that are all rooted in the same \((k-1)\)-set but are otherwise disjoint. Then \(m_{k}\left(H\right)\leq 2/(k-1)+\gamma\)._

Note that, in Lemma 7.2, we consider \(H\) to be the \(k\)-graph with vertex set \(X\cup\bigcup_{1\leq j\leq T}V(A_{j})\), where \(X\) is the \((k-1)\)-set that is the common root of the otherwise disjoint \(K\)-sparse absorbers \(A_{j}\) (\(1\leq j\leq T\)). Furthermore, the edges of \(H\) are the edges that occur in the paths that define the absorbers \(A_{j}\).

One drawback of Lemma 3.10 (Sparse Embedding Lemma) is that we cannot force the embedded copy of the desired graph to contain a specified collection of vertices (in particular, we cannot force the embedding of the absorber of Lemma 7.1 to have a specified root set \(X\) in the embedding). However, because we are guaranteed to find canonical copies, we can force specified vertices to belong to specified sets of vertices. This fact can be exploited to find an absorber with a specified root \(X\) in the sparse setting using Lemma 3.10. We employ an idea introduced by Ferber and Kwan [17]: we apply Lemma 3.10 to embed a book version of the absorber of Lemma 7.1 (as in Lemma 7.2). The embedding is not quite realized in the original sparse hypergraph, but in the hypergraph after contracting certain sets of vertices. We then show that, by expanding these contracted vertices, we have an absorber with the desired root \(X\).
**Definition 7.3** (Contracted graph).: Let \(G\) be a \(k\)-graph, let \(\mathcal{F}\subseteq V(G)^{\ell}\) be a family of disjoint \(\ell\)-tuples, and let \(\mathcal{P}\) be a family of disjoint sets \(U_{1},\ldots,U_{\ell}\subseteq V(G)\) such that the tuples of \(\mathcal{F}\) and the sets in \(\mathcal{P}\) do not share vertices. Let \(G(\mathcal{F},\mathcal{P})\) be the \(k\)-graph obtained from \(G[U_{1}]\cup\cdots\cup G[U_{\ell}]\) by adding for each \(\ell\)-tuple \(\mathbf{v}\in\mathcal{F}\) a new vertex \(w_{\mathbf{v}}\). Moreover, for each \(\mathbf{v}=(v_{1},\ldots,v_{\ell})\in\mathcal{F}\) and each \(1\leq i\leq\ell\), for each \((k-1)\)-set \(f\subseteq U_{i}\) with \(f\cup\{v_{i}\}\in E(G)\), add the edge \(f\cup\{w_{\mathbf{v}}\}\) to \(G(\mathcal{F},\mathcal{P})\).

We remark that \(G(\mathcal{F},\mathcal{P})\) can be obtained from a _certain subgraph of_ \(G\big{[}\bigcup\mathcal{F}\cup\bigcup_{1\leq i\leq\ell}U_{i}\big{]}\) by collapsing each member \(\mathbf{v}\) of \(\mathcal{F}\) to a vertex \(w_{\mathbf{v}}\).

Proof of Lemma 6.5.: We introduce the constants required in the proof in steps. Let \(T=\ell=16(k-1)-1\) and \(\beta=1/(3\ell)\). Let \(\alpha\ll 1/k\), \(\gamma\) be as in Lemmas 3.15 and 4.1. We now suppose \(\nu\ll\alpha\), \(\beta\), \(1/k\). Suppose \(1/K\ll\gamma\), \(1/T\) as in Lemma 7.2 and \(\eta\ll 1/M\ll\gamma\), \(1/k\), \(1/d\), \(1/K\) as in Lemma 7.1. We now suppose \(1/r_{0}\), \(\tau\ll 1/k\), \(\eta\), \(\beta\) such that in particular \(r_{0}/(2\ell)\) can play the role of \(n\) when applying Lemma 7.1 with the above constants. With \(\tau\) at hand, we apply Lemma 3.10 to obtain \(\varepsilon\) and \(\zeta\) that allow us to embed any linear hypergraph with at most \(M/2+k-1\) vertices in any suitable '\(\varepsilon\)-regular system'. Finally, we apply Lemma 3.6 to obtain \[\frac{1}{n}\ll\lambda\ll\frac{1}{r_{1}}\ll\frac{1}{r_{0}},\,\varepsilon,\,\frac{1}{k},\,\frac{1}{d},\,\delta=\mu_{d}(k)\] and let \[\kappa=\frac{1}{r_{1}}\big{(}(k-1)\nu+\ell\beta\big{)}=\frac{1}{r_{1}}\left((k-1)\nu+\frac{1}{3}\right).\] We shall apply Lemma 3.10 with this value of \(\kappa\). Finally, let \(1/D\ll 1/k,\gamma\) and \(p\geq\max\{n^{-(k-1)/2+\gamma},Dn^{-(k-d)}\log n\}\) so that we can apply Lemmas 3.10, 3.12, 3.15 and 4.1 with \(k\)-density \(2/(k-1)+\gamma/2\). In the following, we also assume that \(n\) is large enough to satisfy Lemma 3.6 with the above constants.

The next claim summarises the properties that we require from the random graph.

**Claim 7.4**.: _W.h.p. \(G\sim\mathrm{H}_{k}(n,p)\) satisfies each of the following properties._

1. _For each \(\mathcal{F}\) and \(\mathcal{P}\) as in Definition 7.3 with \(|\mathcal{F}|=(k-1)\nu n\) and \(|U_{j}|=\beta n\) for every \(1\leq j\leq\ell\), the \(k\)-graph \(G(\mathcal{F},\mathcal{P})\) is \((\lambda,p,1+\lambda)\)-upper-uniform._
2. _Furthermore, \(G(\mathcal{F},\mathcal{P})\) satisfies the conclusion of Lemma 3.10 (Sparse Embedding Lemma) for embedding any linear \(k\)-graph \(H\) with \(m_{k}(H)\leq 2/(k-1)+\gamma/2\) on at most \(M/2+k-1\) vertices with parameters \(\varepsilon\), \(\zeta\), \(\tau\) and \(\kappa\)._
3. _Let \(G^{\prime}\) be a spanning subgraph of \(G\) with \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p\binom{n-d}{k-d}\) and fix \(Q\subseteq V(G^{\prime})\) with \(|Q|\leq 2\alpha n\)._
For any distinct_ \(u,v\in V(G^{\prime})\setminus Q\)_, there is a loose_ \((u,v)\)_-path_ \(P\) _of order_ \(8(k-1)+1\) _in_ \(G^{\prime}-Q\)_._ 4. _Let_ \(G^{\prime}\) _be a spanning subgraph of_ \(G\) _with_ \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p\binom{n-d}{k-d}\) _and fix_ \(Q\subseteq V(G^{\prime})\) _with_ \(|Q|\leq 2\alpha n\)_. Let_ \(U\subseteq V(G^{\prime})\setminus Q\) _with_ \(|U|=\beta n\) _be chosen uniformly at random. Then w.h.p. all but_ \(o(n^{d})\) _of the_ \(d\)_-sets of vertices in_ \(V(G^{\prime})\setminus Q\) _have_ \(d\)_-degree at least_ \((\mu_{d}(k)+\gamma/2)p\binom{\beta n-d}{k-d}\) _into_ \(U\)_._ Proof.: To begin, we remark that \(G(\mathcal{F},\mathcal{P})\) can be coupled to a subgraph of the binomial random \(k\)-graph \(G\sim\mathrm{H}_{k}(n,p)\) from which it is obtained as a contracted graph. This is because for each possible edge \(e\) of \(G(\mathcal{F},\mathcal{P})\), there is exactly one \(k\)-set \(e^{\prime}\) whose presence as an edge in \(G\) determines whether or not \(e\) is in \(G(\mathcal{F},\mathcal{P})\). Furthermore, there are \(\exp(O(n\log n))\) ways to choose \(\mathcal{F}\) and \(\mathcal{P}\). By the choice of \(p\), we can apply Lemma 3.12 and take the union bound over all possibilities for \(\mathcal{F}\) and \(\mathcal{P}\) to deduce (1). To prove (2), it suffices to combine the argument above with Lemma 3.10. Assertion (3) follows from Lemma 4.1. For (4), we apply Lemma 3.15 with \(2\alpha,\beta\) playing the role of \(\alpha,\sigma\). Consider a graph \(G\) that satisfies the properties in Claim 7.4. We regard \(G\) as deterministic graph from now on. Let \(G^{\prime}\) be a spanning subgraph of \(G\) with \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma)p\binom{n-d}{k-d}\). Let \(Q\subseteq V(G^{\prime})\) be a set of at most \(\alpha n\) vertices, and let \(X=\{x_{1},\ldots,x_{k-1}\}\subseteq V(G)\setminus Q\). We have to show that there is an absorber \(A\) rooted in \(X\) that avoids \(Q\) and has order at most \(M\). **Claim 7.5**.: _For every vertex \(x\in X\) we can fix a family \(\mathcal{C}_{x}\) of \(\nu n\) loose cycles in \(G^{\prime}-Q\) of order \(\ell+1=16(k-1)\) each containing \(x\) as a vertex of degree \(2\) but disjoint otherwise. Furthermore, we may require every cycle in \(\mathcal{C}_{x}\) and every cycle in \(\mathcal{C}_{x^{\prime}}\) to be disjoint for all distinct \(x\) and \(x^{\prime}\) in \(X\)._ Proof.: For any \(x\in X\), we can apply Claim 7.4(3) twice to obtain a cycle of order \(\ell+1\) containing \(x\) as a vertex of degree \(2\). We can repeat this \(\nu n\) times for each \(x\), avoiding all previously selected cycles using the (extended) set \(Q\). Given families \(\mathcal{C}_{x}\) as in Claim 7.5, let us define \(\mathcal{F}_{1},\ldots,\mathcal{F}_{k-1}\subseteq V(G^{\prime})^{\ell}\) with \(|\mathcal{F}_{i}|=\nu n\) for every \(i\in[k-1]\) and such that, for every \(i\in[k-1]\), \(\mathbf{v}\in\mathcal{F}_{i}\) and \(\mathbf{w}\in\mathcal{F}:=\bigcup_{i\in[k-1]}\mathcal{F}_{i}\) with \(\mathbf{w}\neq\mathbf{v}\), we have that * \(\mathbf{v}\) and \(\mathbf{w}\) are vertex-disjoint; * \(\mathbf{v}\) together with \(x_{i}\) appended forms a loose cycle \(C_{\mathbf{v}}\subseteq G^{\prime}\) along the ordering of \(\mathbf{v}\); * the \((k-1)\)st vertex and \((2k-2)\)nd vertex of \(\mathbf{v}\) both have degree \(2\) in \(C_{\mathbf{v}}\). We denote by \(V(\mathcal{F})\) the set of vertices in the tuples in \(\mathcal{F}\). 
**Claim 7.6**.: _There are disjoint \(\beta n\)-sets \(U_{1},\ldots,U_{\ell}\subseteq V(G)\setminus(Q\cup V(\mathcal{F})\cup X)\) such that all but \(o(n^{d})\)\(d\)-sets of vertices in \(G^{\prime}\) have degree at least \((\mu_{d}(k)+\gamma/2)\binom{\beta n-d}{k-d}\) into each \(U_{j}\)._ Proof.: Choose a set \(U\subseteq V(G)\setminus(Q\cup V(\mathcal{F})\cup X)\) of cardinality \(\ell\beta n\) uniformly at random and partition \(U\) randomly into \(\ell\) parts of cardinality \(\beta n\) each to obtain the \(U_{j}\) (\(1\leq j\leq\ell\)). We claim that this procedure will succeed in producing the required \(U_{j}\) with high probability. Fix \(1\leq j\leq\ell\). Clearly, \(U_{j}\) is a subset of \(V(G)\setminus(Q\cup V(\mathcal{F})\cup X)\) of cardinality \(\beta n\) chosen uniformly at random. By Claim 7.4(4), we know that \(U_{j}\) satisfies the required degree property with high probability. Since \(\ell\) is a constant, our claim follows from the union bound. Let \(\mathcal{P}\) be the family of the sets \(U_{j}\) (\(1\leq j\leq\ell\)) given by Claim 7.6. We now consider \(G^{\prime}(\mathcal{F},\mathcal{P})\) and note that \(G^{\prime}(\mathcal{F},\mathcal{P})\) is a subgraph of \(G(\mathcal{F},\mathcal{P})\). For later reference, let us call \(W_{i}\) the set of vertices of \(G^{\prime}(\mathcal{F},\mathcal{P})\) obtained by contracting the members of \(\mathcal{F}_{i}\). Note that \(|W_{i}|=\nu n\) for every \(1\leq i\leq\ell\). **Claim 7.7**.: _We can pick an \(\ell\)-tuple \(\mathbf{v}_{i}=(v_{i}^{1},\ldots,v_{i}^{\ell})\) in \(\mathcal{F}_{i}\) simultaneously for all \(i\in[k-1]\) in such a way that the following holds: for each \(j\in[\ell]\), there is an absorber \(A_{j}\) with \(V(A_{j})\subseteq U_{j}\) rooted in \(X^{j}=\{v_{1}^{j},\ldots,v_{k-1}^{j}\}\) of order at most \(M/(2\ell)\)._ Proof.: Note first that \(G^{\prime}(\mathcal{F},\mathcal{P})\) is endowed with the partition \[W_{1},\ldots,W_{k-1},U_{1},\ldots,U_{\ell} \tag{7.1}\] of its vertex set. We now invoke Lemma 3.6 to obtain an \(\varepsilon\)-regular partition \(V_{1},\ldots,V_{r}\) with \(r_{0}\leq r\leq r_{1}\), with most of the \(V_{i}\) contained in some set in the partition (7.1). Moreover, we define the corresponding reduced graph \(\mathcal{R}\) using as the 'threshold parameter' the constant \(\tau\) defined earlier. Let \(\mathcal{U}_{j}\) (\(1\leq j\leq\ell\)) and \(\mathcal{W}_{i}\) (\(1\leq i\leq k-1\)) be the sets of clusters contained in each \(U_{j}\) and \(W_{i}\) respectively. We now claim that almost all the clusters in \(\mathcal{W}_{i}\) have degree at least \((\mu_{d}(k)+\gamma/4)\binom{[\mathcal{U}_{j}]}{k-1}\) into each \(\mathcal{U}_{j}\) in \(\mathcal{R}\). To check this claim, recall first that the \(U_{j}\) are such that all but \(o(n^{d})\)\(d\)-sets of vertices of \(G^{\prime}\) have degree at least \((\mu_{d}(k)+\gamma/2)\binom{\beta n-d}{k-d}\) into \(U_{j}\). It follows that all but \(o(n)\) vertices of \(G^{\prime}\) have degree \((\mu_{d}(k)+\gamma/3)\binom{\beta n-1}{k-a}\) into \(U_{j}\). Our claim now follows from Lemma 3.6(2) applied with \(d^{\prime}=1\). Note that, because of our claim above, we can fix clusters \(Y_{i}\in\mathcal{W}_{i}\) (\(1\leq i\leq k-1\)) that satisfy the degree property above for all \(\mathcal{U}_{j}\) (\(1\leq j\leq\ell\)) simultaneously. Let \(\mathcal{Y}=\{Y_{1},\ldots,Y_{k-1}\}\). Note that \(\mathcal{R}[\mathcal{U}_{j}\cup\mathcal{Y}]\) satisfies the assumptions of Lemma 7.1 for each \(j\in[\ell]\). 
Hence we may find a \(K\)-sparse absorber \(\mathcal{A}_{j}\) rooted in \(\mathcal{Y}\) of order at most \(M/(2\ell)\) with \(V(\mathcal{A}_{j})\subseteq\mathcal{U}_{j}\). Let \(\mathcal{A}=\bigcup\mathcal{A}_{j}\). Note that \(\mathcal{A}\) is a 'book' of the absorbers of Lemma 7.1 with \(\ell\) 'pages' and'spine' \(\mathcal{Y}\). It follows by Lemma 7.2 that \(m_{k}(\mathcal{A})\leq 2/(k-1)+\gamma/2\). Note also that \(\mathcal{A}\) has at most \(M/2+k-1\) vertices. By Claim 7.4(2) we can find a canonical copy \(A\) of \(\mathcal{A}\) in \(G^{\prime}(\mathcal{F},\mathcal{P})\). For \(i\in[k-1]\), let \(w_{i}\in Y_{i}\) be the vertex of \(A\) that corresponds to the vertex \(Y_{i}\) of \(\mathcal{A}\). By definition, \(w_{i}=w_{\mathbf{v}_{i}}\) for some \(\mathbf{v}_{i}=(v_{i}^{1},\ldots,v_{i}^{\ell})\in\mathcal{F}_{i}\). We now proceed to find the required absorbers \(A_{j}\). Fix \(j\in[\ell]\) and consider \(\mathcal{A}_{j}\). For every cluster \(Z\in V(\mathcal{A}_{j})\subseteq U_{j}\), there is a vertex \(z\in Z\) contained in \(A\). Let \(T_{j}\) be the set of vertices \(v_{i}^{j}\) (\(1\leq i\leq k-1\)) and such vertices \(z\). Then, by the definition of the contracted graph \(G^{\prime}(\mathcal{F},\mathcal{P})\), one sees that the graph \(G^{\prime}[T_{j}]\) contains a copy of an absorber \(A_{j}\) rooted in \(X^{j}=\{v_{1}^{j},\ldots,v_{k-1}^{j}\}\) with \(V(A_{j})\subseteq U_{j}\). We now construct the required absorber \(A\) rooted in \(X\) of order at most \(M\) using the \(A_{j}\) and \(\mathbf{v}_{i}=(v_{i}^{1},\ldots,v_{i}^{\ell})\) from Claim 7.7. Most of \(A\) is \(\bigcup_{j\in[\ell]}A_{j}\); we need only incorporate two paths for each \(\mathbf{v}_{i}\) (\(1\leq i\leq k-1\)). The final absorber \(A\) will be such that \[V(A)=\bigcup_{j\in[\ell]}V(A_{j})\cup X^{j}=\bigcup_{j\in[\ell]}V(A_{j})\cup \bigcup_{i\in[k-1]}\{v_{i}^{1},\ldots,v_{i}^{\ell}\}.\] Next, we describe describe how \(A\) works. First note that for each \(j\in[\ell]\) we may use \(A_{j}\) to either integrate or not integrate \(X^{j}=\{v_{1}^{j},\ldots,v_{k-1}^{j}\}\). Now, if \(X\) is to be covered, then we activate \(A_{j}\) to absorb \(X^{j}\) for \(k\leq j\leq 2k-3\). We do not activate any other \(A_{j}\). To include the remaining vertices, we then take for each \(i\in[k-1]\) the loose path, called \(P_{1}^{i}\) in the sketch of the proof, that runs from the \((k-1)\)st vertex of \(\mathbf{v}_{i}\) backwards to \(x_{i}\) and then to the \((2k-2)\)nd vertex of \(\mathbf{v}_{i}\), namely \[v_{i}^{k-1},\,v_{i}^{k-2},\,\ldots,\,v_{i}^{1},\,x_{i},\,v_{i}^{\ell},\,v_{i}^ {\ell-1},\,\ldots,v_{i}^{2k-2}.\] Conversely, if \(X\) should be left uncovered, we activate \(A_{j}\) to absorb \(X^{j}\) for \(1\leq j\leq k-2\) and \(2k-1\leq j\leq\ell\). We do not activate any other \(A_{j}\).6 To include the remaining vertices, we take for each \(i\in[k-1]\) the loose path with one edge that runs from the \((k-1)\)st vertex of \(\mathbf{v}_{i}\) to the \((2k-2)\)nd vertex of \(\mathbf{v}_{i}\), namely \(v_{i}^{k-1},\ldots,v_{i}^{2k-2}\). For an illustration, see Figure 2. Footnote 6: The absorbers \(A_{k-1}\) and \(A_{2k-2}\) are not used at all and exist only for notational convenience. The discussion above shows that \(A\) is indeed an absorber rooted in \(X\). Moreover, since the order of the \(A_{j}\) is at most \(M/(2\ell)\), we see that \(A\) has order at most \(M/2+(k-1)\ell\leq M\), as required. 
### Proof of Lemma 7.2 We shall see that Lemma 7.2 holds because (_a_) a \(K\)-sparse absorber has maximum \(k\)-density not much bigger than \(2/(k-1)\) and (_b_) when we 'am-algamate' such absorbers along their root set forming a 'book', the maximum \(k\)-density increases by a controlled amount. For convenience, let us set \[d_{k}\left(H\right)=\frac{e\left(H\right)-1}{v\left(H\right)-k}\] for a graph \(H\) with \(v(H)>k\). Hence, we can write \[m_{k}\left(H\right)=\max\left\{d_{k}(H^{\prime})\colon H^{\prime}\subseteq H\text { with }v\left(H^{\prime}\right)>k\right\}.\] We need the following observation, whose easy proof we omit. **Fact 7.8**.: _The following hold._ 1. _Every Berge acyclic_ \(k\)_-graph_ \(F\) _satisfies_ \(v(F)\geq(k-1)e(F)+1\)_._ 2. _Every minimal_ \(k\)_-uniform Berge cycle_ \(C\) _of length at least_ \(3\) _is a loose cycle. In particular, we have_ \(v(C)=(k-1)\,e(C)\)_._ Fact 7.8 has the following consequence. **Proposition 7.9**.: _The following hold._ 1. _If_ \(F\) _is a Berge acyclic_ \(k\)_-graph, then_ \(m_{k}(F)\leq 1/(k-1)\)_._ 2. _If_ \(A\) _is a_ \(K\)_-sparse_ \(k\)_-uniform absorber with_ \(K\geq 3\)_, then_ \(m_{k}(A)\leq 2/(k-1)+1/(K(k-1)-k)\)_._ Proof.: We begin by showing (i). To this end, observe that if \(F^{\prime}\) is Berge acyclic \(k\)-graph, then \(v(F^{\prime})\geq(k-1)e(F^{\prime})+1\) by Fact 7.8 (a), which implies that \[d_{k}(F^{\prime})\leq\frac{e(F^{\prime})-1}{(k-1)e(F^{\prime})+1-k}=\frac{1}{k -1}.\] Thus \(m_{k}(F)\leq 1/(k-1)\) for any Berge acyclic \(F\), and (i) is proved. Now suppose \(H^{\prime}\) is a subgraph of an absorber \(A\) as in (ii) with \(v(H^{\prime})>k\). We have to show that \(d_{k}(H^{\prime})=(e(H^{\prime})-1)/(v(H^{\prime})-k)\leq 2/(k-1)+1/(K(k-1)-k)\). If \(H^{\prime}\) is acyclic, then this follows from (i). On the other hand, suppose that \(C\) is a cycle of minimum length in \(H^{\prime}\). By the \(K\)-spareness hypothesis, we have \(e(C)\geq K\geq 3\). Hence, by applying Fact 7.8 (b), it follows that \(v(H^{\prime})\geq v(C)\geq K(k-1)\). Note that an absorber is a union of two Berge acyclic graphs, say \(F_{1}\) and \(F_{2}\) (each \(F_{i}\) can be taken to be a vertex disjoint union of loose paths). We may suppose that \(V(F_{1})=V(F_{2})=V(H^{\prime})\). We then have \[d_{k}(H^{\prime})\leq \,\frac{e(F_{1})+e(F_{2})-1}{v(H^{\prime})-k}\] \[= \,\frac{e(F_{1})-1}{v(H^{\prime})-k}+\frac{e(F_{2})-1}{v(H^{ \prime})-k}+\frac{1}{v(H^{\prime})-k}\] \[\leq m_{k}(F_{1})+m_{k}(F_{2})+\frac{1}{K(k-1)-k}\] \[\leq \,\frac{2}{k-1}+\frac{1}{K(k-1)-k}\,,\] as required. We remark that the previous proof can be extended to a general bound for the \(k\)-density of the 'amalgamation' of two graphs in terms of their \(k\)-densities and the girth of the whole graph. Proof of Lemma 7.2.: Let \(A_{1},\dots,A_{T}\) be copies of a \(K\)-sparse \(k\)-uniform absorber sharing their root \((k-1)\)-set \(X\) with \(V(A_{j})\cap V(A_{j^{\prime}})=\emptyset\) for all \(j\neq j^{\prime}\). Let \(H\) be the \(k\)-graph on \[X\cup\bigcup_{1\leq j\leq T}V(A_{j})\] whose edges are the edges that occur in the paths that define the absorbers \(A_{j}\). We have to show that \(m_{k}(H)\leq 2/(k-1)+\gamma\), that is, we have to show that, for any subgraph \(H^{\prime}\) of \(H\) with \(v(H^{\prime})>k\), we have \[d_{k}(H^{\prime})=\frac{e(H^{\prime})-1}{v(H^{\prime})-k}\leq\frac{2}{k-1}+\gamma. \tag{7.2}\] By Claim 7.9(i), inequality (7.2) holds if \(H^{\prime}\) is acyclic. So we can assume that \(H^{\prime}\) contains a Berge cycle. 
Let \(C\) be a Berge cycle in \(H^{\prime}\) of minimum length. Recall that by definition of sparse absorbers (at the beginning of Section 7), each \(K\)-sparse absorber \(A_{j}\) has girth at least \(K\) even after adding the extra edge \(X\). Hence there exists an absorber \(A_{j}\) such that \(C\) has at least \(K-1\) edges in \(A_{j}\). After relabelling, we may assume that \(j=1\). In what follows, we write \(H^{\prime}[Y]\) for \(H^{\prime}[V(H^{\prime})\cap Y]\) for any \(Y\subseteq V(H)\) for simplicity. Without loss of generality, assume that \(A_{1},\ldots,A_{t}\) are the absorbers \(A_{j}\) for which \(H^{\prime}[X\cup V(A_{j})]\) contains an edge (if \(H^{\prime}[X\cup V(A_{j})]\) contains no edge, then we can consider \(H^{\prime}-V(A_{j})\) instead of \(H^{\prime}\)). Let us further assume that \(A_{1},\ldots,A_{s}\) are the absorbers \(A_{j}\) such that \(H^{\prime}[X\cup V(A_{j})]\) contains at least two edges. Note that \(s\geq 1\) since \(A_{1}\) contains at least \(K-1\) edges. Let \(H^{\prime\prime}=H^{\prime}[X\cup\bigcup_{1\leq j\leq s}V(A_{j})]\). **Claim 7.10**.: _We have \(d_{k}(H^{\prime\prime})\leq 2/(k-1)+\gamma/2\)._ Proof.: Fix \(j\) with \(1\leq j\leq s\). Let \(v_{j}=v(H^{\prime}[X\cup V(A_{j})])\) and \(e_{j}=e(H^{\prime}[X\cup V(A_{j})])\geq 2\). Note that, by definition of \(s\), we have \(v_{j}>k\). Hence \(d_{k}(H^{\prime}[X\cup V(A_{j})])=(e_{j}-1)/(v_{j}-k)\leq m_{k}(A_{j})\leq 2/(k-1)+ 1/(K(k-1)-k)\), where the last inequality follows from Proposition 7.9 (ii). Let \(e^{\prime\prime}=e(H^{\prime\prime})\) and \(v^{\prime\prime}=v(H^{\prime\prime})\). We have \(e^{\prime\prime}=\sum_{1\leq j\leq s}e_{j}\) and \(v^{\prime\prime}=\sum_{1\leq j\leq s}v_{j}-(s-1)k^{\prime}\), where \(k^{\prime}=|X\cap V(H^{\prime\prime})|<k\). By Claim 7.9 (i), we may assume that \(H^{\prime\prime}\) is not acyclic. Furthermore, since \(A_{1}\) contains at least \(K-1\) edges of \(C\), it follows by Fact 7.8 (b) that \(v_{1}\geq(K-1)(k-1)\). Putting this together, we have \[d_{k}(H^{\prime\prime}) =\frac{e^{\prime\prime}-1}{v^{\prime\prime}-k}=\frac{\sum_{1\leq j \leq s}e_{j}-1}{\sum_{1\leq j\leq s}v_{j}-(s-1)k^{\prime}-k}\] \[=\frac{\sum_{1\leq j\leq s}(e_{j}-1)+s-1}{\sum_{1\leq j\leq s}(v_ {j}-k)+ks-(s-1)k^{\prime}-k}\] \[=\frac{\sum_{1\leq j\leq s}(e_{j}-1)+s-1}{\sum_{1\leq j\leq s}(v_ {j}-k)+(k-k^{\prime})(s-1)}\] \[\leq\frac{\sum_{1\leq j\leq s}(e_{j}-1)}{\sum_{1\leq j\leq s}(v_ {j}-k)}+\frac{s-1}{\sum_{1\leq j\leq s}(v_{j}-k)}\] \[\leq\frac{2}{k-1}+\frac{1}{K(k-1)-k}+\frac{s-1}{(K-1)(k-1)-k}\] \[\leq\frac{2}{k-1}+\frac{T}{(K-2)(k-1)-1}\] \[\leq\frac{2}{k-1}+\frac{\gamma}{2},\] as long as \(s\leq T\) and \(K\) is large enough with respect to \(T,1/\gamma\) and \(k\). Hence, Claim 7.10 follows. We now deduce (7.2) from Claim 7.10. Note that \(H^{\prime}\) contains at most \(t-s\leq T\) edges that are not in \(H^{\prime\prime}\). Since \(K\) was chosen sufficiently large with respect to \(T,1/\gamma\) and \(k\), it \[d_{k}(H^{\prime})\leq\frac{e^{\prime\prime}+t-s-1}{v^{\prime\prime}-k}\leq \frac{e^{\prime\prime}-1}{v^{\prime\prime}-k}+\frac{t-s}{v^{\prime\prime}-k} \leq\frac{2}{k-1}+\gamma.\qed\] Similar to the previous remark, we note that this proof can be extended to a general bound for the \(k\)-density of the 'amalgamation' of two (same rooted) graphs in terms of their \(k\)-densities and the girth of the whole graph. ## 8. Absorbers in the dense setting To finish the proof of Theorem 1.2, we have to show Lemma 7.1. 
We begin with the following strengthening of the minimum degree threshold for loose Hamilton cycles, which ensures the existence of a loose Hamilton cycle under slightly more general conditions and with some additional properties. Let us say that two vertices in a loose cycle \(C\) are at distance \(K\) if they are \(K\) vertices apart with respect to the ordering of \(C\). So for instance, if \(C\) is \(k\)-uniform then two consecutive vertices of degree \(2\) are at distance \(k-1\). We say that a set \(X\subseteq V(C)\) is _\(K\)-spread_ if the distance between any (distinct) pair of vertices of \(X\) is at least \(K\). For \(1\leq d\leq k-1\), we define \(\mu_{d}^{*}(k)\) as the least \(\mu\in[0,1]\) such that for all \(\gamma>0\), positive integers \(t\), \(K\) and \(n\), where \(n\) is divisible by \(k-1\) and sufficiently large, the following holds: if \(G\) is an \(n\)-vertex \(k\)-graph and \(X\subseteq V(G)\) is a \(t\)-set such that \(\delta_{d}(G-X)\geq(\mu+\gamma)\binom{n-d}{k-d}\) and \(\deg(x)\geq(\mu+\gamma)\binom{n-1}{k-1}\) for every \(x\in X\), then \(G\) contains a loose Hamilton cycle \(C\) in which \(X\) is \(K\)-spread and every vertex of \(X\) has degree \(2\). Note that we trivially have \(\mu_{d}^{*}(k)\geq\mu_{d}(k)\). The following lemma, whose proof is deferred to Section 9, shows that the two thresholds actually coincide. **Lemma 8.1** (Threshold exploitation).: _We have \(\mu_{d}(k)=\mu_{d}^{*}(k)\) for all \(1\leq d\leq k-1\)._ For the proof of Lemma 7.1, we require two further lemmas. Let \(G\) and \(R\) be \(k\)-graphs and \(X\subseteq V(G)\cap V(R)\). We say that \(G\) contains a _copy of \(R\) rooted in \(X\)_, if there is an embedding of \(R\) into \(G\) which is the identity on \(X\). For any integer \(m>0\), we also denote by \(R^{*}(m,X)\) the \(k\)-graph obtained from \(R\) by blowing up each vertex outside of \(X\) by \(m\) and replacing edges with complete \(k\)-partite \(k\)-graphs. **Lemma 8.2** (Blow-up setup).: _Let \(1/n\ll 1/s\ll\gamma\), \(1/k\), \(1/d\), \(1/t\), \(\mu\geq 0\) and \(1/n\ll 1/m\). Let \(G\) be an \(n\)-vertex \(k\)-graph and \(X\subseteq V(G)\) be a \(t\)-set such that \(\delta_{d}(G-X)\geq(\mu+\gamma)\binom{n-d}{k-d}\) and \(\deg_{G}(x)\geq(\mu+\gamma)\binom{n-1}{k-1}\) for every \(x\in X\). Then there is an \(s\)-vertex \(k\)-graph \(R\) with \(X\subseteq V(R)\) such that_ 1. \(\delta_{d}(R-X)\geq(\mu+\gamma/2)\binom{s-d}{k-d}\) _and_ \(\deg_{R}(x)\geq(\mu+\gamma/2)\binom{s-1}{k-1}\) _for every_ \(x\in X\)_;_ 2. \(G\) _contains a copy of_ \(R^{*}(m,X)\) _rooted in_ \(X\)_._ **Lemma 8.3** (Absorber allocation).: _For \(r\geq 1\) and \(K>k\geq 3\), let \(K^{\prime}=K(k-1)\), \(m=2^{2K^{\prime}(2K^{\prime}-1)}\) and \(q=2^{2K^{\prime}-1}rm!\). Let \(X\) be a \((k-1)\)-set. Let \(C_{2}\) be a \(k\)-uniform loose cycle of order \(q\) with \(X\cap V(C_{2})=\emptyset\). Let \(C_{1}\) be a \(k\)-uniform loose cycle with \(V(C_{1})=V(C_{2})\cup X\). Suppose that there is a vertex that has degree \(2\) in \(C_{1}\) and also in \(C_{2}\). Let \(R=C_{1}\cup C_{2}\). Then_ 1. \(R^{*}(2m,X)\) _contains an absorber_ \(A\) _rooted in_ \(X\)_._ 2. _Moreover, if_ \(X\) _is_ \(K^{\prime}\)_-spread in_ \(C_{1}\)_, then_ \(A\) _is_ \(K\)_-sparse._ We remark that the purpose of the constant \(r\) in the above statement is to allow us to take \(q\) arbitrarily large. Moreover, \(q\) is divisible by \(k-1\) as \(m\geq k-1\). The proofs of Lemmas 8.2 and 8.3 are deferred to Sections 8.1 and 8.2. 
Now we are ready to show the main result of this section. Proof of Lemma 7.1.: We can assume that \(K>k\). Let \(K^{\prime}=K(k-1)\) and \(m=2^{2K^{\prime}(2K^{\prime}-1)}\). Introduce \(r\) with \(1/M\ll 1/r\ll\gamma\). Set \(q=2^{2K^{\prime}-1}rm!\), \(s=q+k-1\) and \(\mu=\mu_{d}(k)\). Given \(G\) and \(X\) as in the statement, we have to show that \(G\) contains a \(K\)-sparse absorber \(A\) rooted in \(X\) of order at most \(M\). We begin by selecting \(S\subseteq V(G)\) uniformly at random among all \(M\)-sets that contain \(X\). Let \(G^{\prime}=G[S]\). Then \(v(G^{\prime})=M\). Note that with probability at least \(2/3\) we have \(\deg_{G^{\prime}}(x)\geq(\mu+\gamma/2)\binom{M-1}{k-1}\) for every \(x\in X\), which follows by a standard concentration inequality (see for instance [22, Corollary 2.2]). Moreover, with probability at least \(2/3\), we have \(\delta_{d}(G^{\prime}-X)\geq(\mu+\gamma/2)\binom{M-d}{k-d}\), which follows follow by Lemma 3.16. Fix such a \(k\)-graph \(G^{\prime}\). In the remainder, we show that \(G^{\prime}\) contains a \(K\)-sparse absorber \(A\) rooted in \(X\). Apply Lemma 8.2 with \(k-1,\,2m,\,\gamma/2\) playing the role of \(t,\,m,\,\gamma\) to \(G^{\prime}\) in order to obtain an \(s\)-vertex \(k\)-graph \(T\) with \(X\subseteq V(T)\) such that 1. \(\delta_{d}(T-X)\geq(\mu+\gamma/4)\binom{s-d}{k-d}\) and \(\deg_{T}(x)\geq(\mu+\gamma/4)\binom{s-1}{k-1}\) for every \(x\in X\); 2. \(G^{\prime}\) contains a copy of \(T^{*}(2m,X)\) rooted in \(X\). By definition of \(\mu\), it follows that \(T-X\) contains a loose Hamilton cycle \(C_{2}\). Fix a vertex \(y\in V(C_{2})\) of degree \(2\). By Lemma 8.1 (applied with \(X\cup\{y\}\) playing the role of \(X\)) and property (i) above it follows that \(T\) contains a loose Hamilton cycle \(C_{1}\) such that \(X\) is \(K^{\prime}\)-spread in \(C_{1}\), and \(y\) has degree \(2\) in \(C_{1}\). Let \(R=C_{1}\cup C_{2}\). Since \(R^{*}(2m,X)\) is a subgraph of \(T^{*}(2m,X)\), it follows that \(G^{\prime}\) also contains a copy of \(R^{*}(2m,X)\) rooted in \(X\), which by Lemma 8.3, contains an absorber \(A\) rooted in \(X\). Finally, since \(X\) is \(K\)-spread in \(C_{1}\), Lemma 8.3 also tells us that the absorber \(A\) is also \(K\)-sparse. It remains to prove Lemmas 8.2 and 8.3, which is done in the following two sections. ### Proof of Lemma 8.2 (Blow-up setup) From Lemma 3.16, it is not difficult to get the following 'weaker' version of Lemma 8.2. **Lemma 8.4**.: _Let \(1/n\ll 1/s\ll\gamma,\,1/k,\,1/d,\,1/t\) and \(\mu\geq 0\). Let \(G\) be an \(n\)-vertex \(k\)-graph and \(X\subseteq V(G)\) be a \(t\)-set such that \(\delta_{d}(G-X)\geq(\mu+\gamma)\binom{n-d}{k-d}\) and \(\deg_{G}(x)\geq(\mu+\gamma)\binom{n-1}{k-1}\) for every \(x\in X\). Then there is an \(s\)-vertex \(k\)-graph \(R\) with \(X\subseteq V(R)\) such that_ 1. \(\delta_{d}(R-X)\geq(\mu+\gamma/2)\binom{s-d}{k-d}\) _and_ \(\deg_{R}(x)\geq(\mu+\gamma/2)\binom{s-1}{k-1}\) _for every_ \(x\in X\)_;_ 2. \(G\) _contains at least_ \(2^{-s^{k}}n^{s-t}\) _copies of_ \(R\) _rooted in_ \(X\)_._ Proof.: Let \(\mathcal{R}\) be the set of all \(s\)-vertex \(k\)-graphs \(R\) satisfying property (1). 
Now let \(S\subseteq V(G)\) be an \(s\)-set chosen uniformly at random among all sets that contain \(X\).7 By applying Lemma 3.16 with \(G-X\), \(0\) and \(\gamma\) playing the roles of \(G\), \(\delta\) and \(\eta\), it follows, with probability at least \(3/4\), that Footnote 7: This means that \(S\setminus X\) is an \((s-t)\)-set chosen uniformly at random in \(V(G)\setminus X\). \[\delta_{d}(G[S-X])\geq(\mu+\gamma/2)\binom{s-d}{k-d}.\] Similarly, a standard concentration inequality (see for instance [22, Corollary 2.2]) reveals that with probability at least \(3/4\), we have that \[\deg_{G[S]}(x)\geq(\mu+\gamma/2)\binom{s-1}{k-1}\text{ for every }x\in X.\] Hence \(\mathbb{P}(G[S]\in\mathcal{R})\geq 1/2\). This completes the proof of the lemma without part (2). For part (2), note that \(\mathcal{R}\) contains up to isomorphism at most \(2^{\binom{s}{k}}\) elements. So by averaging, we conclude that there must be some \(R\in\mathcal{R}\), for which \[\mathbb{P}(G[S]\text{ is a copy of }R\text{ rooted in }X)\geq 2^{-\binom{s}{k}-1}.\] Since there are \(\binom{n-t}{s-t}\) possibilities for choosing \(S\), we obtain the desired estimate. In order to deduce Lemma 8.2 from Lemma 8.4, one needs to prove that if \(G\) has positive density of copies of a graph \(R\), then \(G\) has a copy of a blow-up of \(R\). For that, we use the following result, which is a consequence of the'supersaturation' phenomenon discovered by Erdos and Simonovits [16], and the fact that the Turan density of partite graphs is zero as proved by Erdos [15]. **Theorem 8.5**.: _Let \(1/n\ll\xi,\,1/M,\,1/q\). Then every \(n\)-vertex \(q\)-graph \(G\) with \(e(G)\geq\xi n^{q}\) contains a copy of the complete \(q\)-partite \(q\)-graph with parts of size \(M\)._ Proof of Lemma 8.2.: Let \(q=s-t\) and introduce \(M,\,\xi\) with \(1/n\ll 1/M\ll 1/m,\,1/s\) and \(1/n\ll\xi,\,1/M\ll 1/q\). Let \(G\) be an \(n\)-vertex \(k\)-graph and \(X\subseteq V(G)\) be a \(t\)-set such that \(\delta_{d}(G-X)\geq(\mu+\gamma)\binom{n-d}{k-d}\) and \(\deg_{G}(x)\geq(\mu+\gamma)\binom{n-1}{k-1}\) for every \(x\in X\). We apply Lemma 8.4 to obtain an \(s\)-vertex \(k\)-graph \(R\) with \(X\subseteq V(R)\) that satisfies properties (1) and (2). Now let us define a \(q\)-uniform auxiliary \((n-t)\)-graph \(H\), with \(V(H)=V(G)\setminus X\) by adding an edge \(Q\) of size \(q\) to \(E(H)\) whenever \(G[Q\cup X]\) is a copy of \(R\) rooted in \(X\). By Lemma 8.4, the \(q\)-graph \(H\) has at least \(\xi n^{q}\) edges. It follows from Theorem 8.5, that \(H\) contains a copy of the complete \(q\)-partite \(q\)-graph \(L\) with parts of size \(M\). Given a copy of \(L\) in \(H\), one can colour each edge of \(L\) by one of \(s!\) colours, corresponding to which of the \(s!\) possible orders the vertices of \(R\) are mapped to the parts of \(L\). Using Theorem 8.5 once again, we find a complete \(q\)-partite subgraph \(L^{\prime}\subseteq L\) with parts of size \(m\), whose edges correspond to copies of \(R\) that have their respective vertices in the same parts. ### Proof of Lemma 8.3 (Absorber allocation) Consider disjoint sets \(\mathcal{B}=\{B_{1},\ldots,B_{\ell}\}\) of size \(m\). An _\((m,\ell)\)-strip_\(H\) is the union of matchings \(M_{1},\ldots,M_{\ell}\), where \(M_{i}\) matches \(B_{i}\) into \(B_{i+1}\) (index computation modulo \(\ell\)). Note that one can obtain a permutation \(\sigma_{H}\) of \(B_{1}\) by following the path starting at \(b\in B_{1}\) along the matching edges until it arrives again in a vertex \(b^{\prime}\in B_{1}\). 
We say that \(H\) is _cyclical_ if \(\sigma_{H}\) is the identity. In this case, \(H\) is the union of \(m\) vertex-disjoint cycles \(C_{1},\ldots,C_{m}\) each of order \(\ell\), whose orderings follow the indices of \(\mathcal{B}\). Finally, given \((m,\ell)\)-strips \(H_{1}\) and \(H_{2}\) on \(\mathcal{B}\), we say that \(D=H_{1}\cup H_{2}\) is an _\((m,\ell)\)-double-strip_ if both strips are cyclical and their edges are disjoint. We remark that, given any \((m,\ell)\)-strip \(H\) on \(\mathcal{B}\) and \(q=\ell rm!\) with an integer \(r\geq 1\), we can generate a cyclical \((m,q)\)-strip \(H^{\prime}\) by chaining together \(rm!\) copies of \(H\). More precisely, we obtain \(H^{\prime}\) by considering vertex-disjoint copies \(H_{1},\ldots,H_{rm!}\) of \(H\) where \(H_{i}\) has vertex set \(B_{i,1},\ldots,B_{i,\ell}\). For each \(i\in[rm!]\), we then replace each edge \(uv\) of type \(u\in B_{i,\ell}\) and \(v\in B_{i,1}\) with an edge \(uv^{\prime}\), where \(v^{\prime}\) is the copy of \(v\) in \(B_{i+1,1}\) (index computation modulo \(rm!\)). Note that \(\sigma_{H^{\prime}}\) is the \(rm!\)-times product of \(\sigma_{H}\) and hence the identity. So \(H^{\prime}\) is indeed cyclical. Moreover, the constant \(r\) allows us to scale \(q\) if necessary. The following proposition guarantees the existence of double-strips with large girth. **Proposition 8.6**.: _Let \(K^{\prime}\geq 1\), \(m=2^{2K^{\prime}(2K^{\prime}-1)}\) and \(q=2^{2K^{\prime}-1}rm!\) for \(r\in\mathbb{N}\). Then there is an \((m,q)\)-double-strip \(D\) of girth at least \(2K^{\prime}\)._ Proof.: For \(\ell=2^{2K^{\prime}-1}\), consider a \(2^{2K^{\prime}}\)-regular bipartite graph \(G\) with colour classes \(X,Y\) each of size \(m\) with girth at least \(2K^{\prime}-1+5\geq 2K^{\prime}\). The existence of such a graph follows from the work of Lazebnik and Ustimenko [35] since \(2K^{\prime}-1\) is odd and \(2^{2K^{\prime}}\) is a power of a prime. Let \(\mathcal{B}=\{B_{1},\ldots,B_{\ell}\}\) where \(B_{i}\) is copy of \(X\) if \(i\) is odd and a copy of \(Y\) if \(i\) is even. We obtain an \((m,\ell)\)-strip \(H\) by taking pairwise edge-disjoint perfect matchings \(M_{1},\ldots,M_{\ell}\subseteq G\) (using Hall's theorem) and placing \(M_{i}\) between \(B_{i}\) and \(B_{i+1}\). Since \(G\) is \(2^{2K^{\prime}}\)-regular, we find another \((m,\ell)\)-strip \(H^{\prime}\) that is edge-disjoint with \(H\). By the above remark applied to \(H\) and \(H^{\prime}\) separately, we may chain ourselves an \((m,q)\)-double-strip \(D\) whose partition we denote by \(\mathcal{B}^{\prime}\). We claim that \(D\) has the desired properties. By construction, the union of the edges between any \(\ell\) consecutive parts of \(\mathcal{B}^{\prime}\) is isomorphic to a subgraph of \(H\cup H^{\prime}\). Moreover, every cycle in \(H\cup H^{\prime}\) of length \(p\) can be associated with a cycle in \(G\) of length at most \(p\). Since \(G\) has girth at least \(\ell\geq 2K^{\prime}\), it follows that \(D\) has girth at least \(2K^{\prime}\). Hereafter, for \(r\) divisible by \(k-1\), we denote by \((u_{1},\ldots,u_{r})^{k}\) the \(k\)-uniform loose cycle \(C\) with \(V(C)=\{u_{i}\}_{i=1}^{r}\), whose edges follow the ordering \((u_{1},\ldots,u_{r},u_{1})\). Proof of Lemma 8.3.: Let us write \(C_{2}=(1,\ldots,q)^{k}\) and \(X=\{q+1,\ldots,q+k-1\}\). Furthermore, let \(\sigma\colon[q+k-1]\to[q+k-1]\) be a permutation such that \(C_{1}=(\sigma(1),\ldots,\sigma(q+k-1))^{k}\) with \(\sigma(1)=1\). 
Without loss of generality, we can assume that the vertex \(1\) has degree \(2\) in both \(C_{1}\) and \(C_{2}\). We begin the construction of the absorber \(A\subseteq R^{*}(2m,X)\) as follows. Denote the clusters of \(R^{*}(2m,X)\) by \(A_{1},\ldots,A_{q}\). Let \(\mathcal{B}=\{B_{1},\ldots,B_{q}\}\), where \(B_{i}\subseteq A_{i}\) is an arbitrary subset of \(A_{i}\) of size \(m\) for every \(i\in[q]\). Recall that \(q=2^{2K(k-1)-1}rm!\). So by Proposition 8.6 applied with \(K(k-1)\) playing the role of \(K\), there is an \((m,q)\)-double-strip \(D\) with partition \(\mathcal{B}\) and girth at least \(2K(k-1)\). Denote the cycles of the two corresponding \((m,q)\)-strips by \(\widetilde{S_{1}^{1}},S_{2}^{1},\ldots,S_{m}^{1}\) and \(S_{1}^{2},\ldots,S_{m}^{2}\), respectively. Let us write \(\widehat{S}^{1}_{1}=(v_{1},\ldots,v_{q})^{2}\) with \(v_{i}\in B_{i}\) for every \(i\in[q]\). Moreover, set \(v_{x}:=x\) for every \(x\in X\). Consider the cycle \(S^{1}_{1}=(v_{\sigma(1)},v_{\sigma(2)},\ldots,v_{\sigma(q+k-1)})^{2}\), which follows the order of \(C_{1}\). Note that \(S^{1}_{1}\) covers \(X\) and exactly one vertex (namely \(v_{i}\)) of each cluster \(B_{i}\) of \(\mathcal{B}\). Moreover \(v(S^{1}_{1})=v(\widehat{S}^{1}_{1})+(k-1)\) and \(v(S^{1}_{1})\) is divisible by \(k-1\). Let \(D^{-}\) be the \(2\)-graph obtained from \(D\) by removing the edges of \(\widehat{S}^{1}_{1}\). Define \(D^{+}=\bigcup_{i=1}^{m}(S^{1}_{i}\cup S^{2}_{i})\) as the \(2\)-graph obtained from \(D^{-}\) by adding the vertices of \(X\) and the edges of \(S^{1}_{1}\). Note that, in \(D^{+}\), every vertex of \(V(D)\) is contained in exactly two cycles, namely \(S^{1}_{j}\) and \(S^{2}_{i}\) for some \(i,j\in[m]\). Moreover, the \(2m\) cycles \(\{S^{1}_{i},S^{2}_{i}\}_{i=1}^{m}\) are pairwise edge-disjoint. Next, we turn the \(2m\) cycles \(\{S^{1}_{i},S^{2}_{i}\}_{i=1}^{m}\) of \(D^{+}\) into \(2m\) (\(2\)-uniform) paths \(\{Q^{1}_{i},Q^{2}_{i}\}_{i=1}^{m}\) by splitting each vertex of \(B_{1}\). More formally, we denote \(B_{1}=\{u_{i}\}_{i=1}^{m}\) and \(A_{1}\setminus B_{1}=\{\bar{u}_{i}\}_{i=1}^{m}\). Assume without loss of generality that, for every \(i\in[m]\), the cycles \(S^{1}_{i}\) and \(S^{2}_{i}\) meet in \(u_{i}\in B_{1}\). Hence, we can write \(S^{1}_{i}=u_{i}L^{1}_{i}u_{i}\) and \(S^{2}_{i}=u_{i}L^{2}_{i}u_{i}\) where \(L^{1}_{i},L^{2}_{i}\) are paths in \(D^{+}\). For every \(i\in[m]\), we now define the desired \((u_{i},\bar{u}_{i})\)-paths \(Q^{1}_{i}:=u_{i}L^{1}_{i}\bar{u}_{i}\) and \(Q^{2}_{i}:=u_{i}L^{2}_{i}\bar{u}_{i}\). Let \(D^{+}_{\text{split}}=\bigcup_{i=1}^{m}\{Q^{1}_{i},Q^{2}_{i}\}\) be the \(2\)-graph obtained by the union of those \(2m\) edge-disjoint paths. By construction, \(D^{+}_{\text{split}}\) satisfies the following properties: * \(V(D^{+}_{\text{split}})=X\cup A_{1}\cup B_{2}\cup\cdots\cup B_{q}\); * \(\bigcup_{i=1}^{m}V(Q^{1}_{i})=X\cup\bigcup_{i=1}^{m}V(Q^{2}_{i})\) and \(X\cap\bigcup_{i=1}^{m}V(Q^{2}_{i})=\emptyset\); * the paths \(Q^{j}_{1},\ldots,Q^{j}_{m}\) are pairwise vertex-disjoint for \(j\in[2]\); * \(Q^{1}_{i}\) and \(Q^{2}_{i}\) are both \((u_{i},\bar{u}_{i})\)-paths for every \(i\in[m]\). See also Figure 3 for an illustration. Finally, we set \(A\) to be the \(k\)-graph \(\bigcup_{i=1}^{m}\{P^{1}_{i},P^{2}_{i}\}\) where \(P^{j}_{i}\) is the \(k\)-uniform loose path with the same vertex set and vertex ordering as \(Q^{j}_{i}\). Note that this is well defined since \(v(Q^{j}_{i})\equiv 1\bmod(k-1)\). 
We proceed to show that \(A\) satisfies the statement of the lemma. First, observe that \(A\) satisfies the corresponding three properties of Definition 6.1 as a consequence of properties (a)-(d). Hence, \(A\) is indeed an absorber rooted in \(X\). Secondly, it is easy to check that \(A\subseteq R^{*}:=R^{*}(2m,X)\). Indeed, define \(f\colon V(R^{*})\to[q+k-1]\) either by setting \(f(w)=i\), if \(w\in A_{i}\) for some \(i\in[q]\); or by setting \(f(w)=w\) if \(w\in X\). By construction, every edge \(\{w_{1},\ldots,w_{k}\}\in E(P^{1}_{1})\) arises in \(R^{*}\) after blowing up the edge \(\{f(w_{1}),\ldots,f(w_{k})\}\in E(C_{1})\). Similarly, every edge \(\{w_{1},\ldots,w_{k}\}\in E(A)\setminus E(P^{1}_{1})\) arises in \(R^{*}\) after blowing up the edge \(\{f(w_{1}),\ldots,f(w_{k})\}\in E(C_{2})\). Note that at this point it Figure 3. The absorber constructed in the proof of Lemma 8.3. is crucial that the vertex \(1=\sigma(1)\) has degree \(2\) in both \(C_{1}\) and \(C_{2}\).8 This shows that \(A\) is an absorber rooted in \(X\) and \(A\subseteq R^{*}\) as desired. Footnote 8: This is a simple but sensitive point in the proof and the reason why Lemma 8.1 is required. It remains to prove that if \(X\) is \(K^{\prime}\)-spread in \(C_{1}\), then the absorber \(A\) is \(K\)-sparse. The following claim is the corresponding statement with respect to \(D^{+}_{\mathrm{split}}\). **Claim 8.7**.: _Suppose \(X\) is \(K^{\prime}\)-spread in the \((2\)-uniform) cycle \(S^{1}_{1}\). Then, the girth of \(D^{+}_{\mathrm{split}}\) is at least \(K^{\prime}\), even after adding to \(D^{+}_{\mathrm{split}}\) the edges \(E_{X}\) of any path \(P_{X}\) satisfying \(V(P_{X})=X\)._ Proof.: Let \(M\) be a cycle of \(D^{+}_{\mathrm{split}}\cup E_{X}\). We need to show that \(v(M)\geq K^{\prime}\). It is clear that \(D^{+}_{\mathrm{split}}\cup E_{X}\) has girth not smaller than \(D^{+}\cup E_{X}\) and, therefore, we can assume that \(E(M)\subseteq E(D^{+})\cup E_{X}=E(D^{-})\cup E(S^{1}_{1})\cup E_{X}\). First, assume \(E(M)\cap E(S^{1}_{1})=\emptyset\). Since the edges of \(D^{-}\) do not intersect \(X\), we get that \(E(M)\subseteq E(D^{-})\subseteq E(D)\), which has girth at least \(2K^{\prime}\geq K^{\prime}\), as required. Next, assume \(E(M)\cap E(D^{-})=\emptyset\). In that case, we have \(E(M)\subseteq E(S^{1}_{1})\cup E_{X}\). If \(E(M)\cap E_{X}=\emptyset\), then \(M=S^{1}_{1}\), which has order \(v(M)=v(S^{1}_{1})>q\geq K^{\prime}\). If \(E(M)\) contains an edge in \(E_{X}\), then \(M\) contains an \((x_{1},x_{2})\)-path \(P\subseteq S^{1}_{1}\) where \(x_{1},x_{2}\in X.\) But since \(X\) is \(K^{\prime}\)-spread in \(S^{1}_{1}\), the path \(P\) has at least \(K^{\prime}\) edges. Thus, \(v(M)\geq v(P)\geq K^{\prime}\), as required. We can now assume that \(E(M)\) contains edges from _both_\(E(S^{1}_{1})\) and \(E(D^{-})\). Let \(P\subseteq M\) be a \((u,w)\)-path of maximum order satisfying \(E(P)\subseteq E(D^{-})\). Let \(e\in E(M)\setminus E(P)\) be the (other) edge of \(M\) incident with \(u\). First, note that \(e\notin E(D^{-})\), as otherwise \(P\) would not have maximum order. Also, we have \(e\notin E_{X}\), since \(u\in V(P)\cap e\) and no vertex of \(P\) is in \(X\). Therefore, we must have \(e\in E(S^{1}_{1})\), which implies \(u=v_{i}\in B_{i}\), for some \(i\in[q]\). Analogously we have \(w=v_{j}\in B_{j}\), for some \(j\in[q]\). Without loss of generality we can assume that \(i<j\). Recall that adjacent vertices of \(P\subseteq D^{-}\) must belong to adjacent blocks of \(\mathcal{B}\). 
In particular, \(e(P)\geq\mathrm{d}(i,j)\), where \(\mathrm{d}(i,j):=\min\{j-i,q+i-j\}\). Finally, let \(P^{\prime}\subseteq\widehat{S}^{1}_{1}=(v_{1},\ldots,v_{q})^{2}\) be a \((v_{i},v_{j})\)-path satisfying \(e(P^{\prime})=\mathrm{d}(i,j)\). Note that \(P\cup P^{\prime}\) is a cycle of \(D\) and, therefore, must have order at least \(2K^{\prime}\). Using that \(e(P)\geq\mathrm{d}(i,j)=e(P^{\prime})\), it follows that the order of \(P\) must be at least \(K^{\prime}\), which implies \(v(M)\geq v(P)\geq K^{\prime}\). Let us show how Claim 8.7 helps us to finish the proof. Suppose that \(X\) is \(K\)-spread in the (\(k\)-uniform) cycle \(C_{1}\), and let \(A^{\prime}\) be the hypergraph obtained from \(A\) after adding the extra edge \(X=\{q+1,\ldots,q+k-1\}\). Since \(X\) is \(K^{\prime}\)-spread in \(C_{1}\), it follows that \(X\) is \(K^{\prime}\)-spread in the (\(2\)-uniform) cycle \(S^{1}_{1}\), as in the hypothesis ofClaim 8.7. Now let \(C_{\mathrm{Berge}}\) be a Berge cycle in \(A^{\prime}\) of minimum length. Hence we need to show that \(v(C_{\mathrm{Berge}})\geq K^{\prime}=K(k-1)\). Our strategy is to identify a \(2\)-uniform cycle \(M\) in \(D^{+}_{\mathrm{split}}\cup P_{x}\), whose vertices are contained in \(C_{\mathrm{Berge}}\) where \(P_{X}\) is a path as in Claim 8.7. If successful, this shows that \(K^{\prime}\leq v(M)\leq v(C_{\mathrm{Berge}})\) by Claim 8.7, and we are done. To find such a cycle \(M\), we proceed as follows. If \(X\in C_{\mathrm{Berge}}\), consider a path \(P_{X}\) as in Claim 8.7 with the same endpoints of \(X\) along \(C_{\mathrm{Berge}}\). Next, we construct a (\(2\)-uniform) graph \(M^{\prime}\) by replacing each edge \(e\in E(C_{\mathrm{Berge}})\) either by the corresponding \((k-1)\) edges of \(D^{+}_{\mathrm{split}}\) that originated \(e\) (if \(e\in A\)); or by the \((k-2)\) edges of \(P_{X}\) (if \(e=X\)). Since \(D^{+}_{\mathrm{split}}\) is the union of pairwise edge-disjoint paths and \(P_{X}\) is edge-disjoint from \(D^{+}_{\mathrm{split}}\), it follows that \(M^{\prime}\) must contain a cycle \(M\), as desired. ## 9. Exploiting the threshold In this section, we show Lemma 8.1. The proof uses largely the same strategy as the one of Theorem 1.2. But since we are in the dense setting, many of the building blocks are much simpler. The main conceptual difference between the two proofs is the way we construct the absorbers. We require the following two lemmas, which are dense versions of Lemmas 2.1 and 2.2. **Lemma 9.1** (Dense Absorption Lemma).: _Let \(1/n\ll\eta\ll\alpha,\,1/k,\,1/d,\,\gamma\). Let \(G\) be an \(n\)-vertex \(k\)-graph with \(\delta_{d}(G)\geq(\mu_{d}(k)+\gamma)\binom{n-d}{k-d}\). Then there is a set \(A\subseteq V(G)\) and two vertices \(u,v\in A\) such that_ * \(|A|\leq\alpha n\)_;_ * _for any subset_ \(W\subseteq V(G-A)\) _with_ \(|W|\leq\eta n\) _divisible by_ \(k-1\)_, the induced graph_ \(G[A\cup W]\) _has a loose_ \((u,v)\)_-path covering exactly_ \(A\cup W\)_._ The proof of Lemma 9.1 is deferred to the next subsection. **Lemma 9.2** (Dense Cover Lemma).: _Let \(1/n\ll\alpha\ll 1/k,1/d,\gamma\) and let \(1/n\ll\eta\). Let \(G\) be an \(n\)-vertex \(k\)-graph with \(\delta_{d}(G)\geq(\mu_{d}(k)+\gamma)\binom{n-d}{k-d}\), \(Q\subseteq V(G)\) with \(|Q|\leq\alpha n\) and \(u,v\in V(G-Q)\). 
Then there is a loose \((u,v)\)-path \(P\) in \(G-Q\) that covers all but \(\eta n\) vertices of \(G-Q\)._ Proof.: Let \(G^{\prime}=G-Q\), and note that \(G^{\prime}\) satisfies \(\delta_{d}(G^{\prime})\geq(\mu_{d}(k)+\gamma/2)\binom{n-d}{k-d}\) since \(\alpha\) is very small in comparison to \(\gamma\). For \(\eta^{\prime}=\eta/2\), we begin by selecting an \(\eta^{\prime}n\)-set \(C\subseteq V(G^{\prime})\) such that \(\delta_{d}(G^{\prime}[C])\geq(\mu_{d}(k)+\gamma/4)\binom{\eta^{\prime}n}{k-d}\) and \(\deg_{G^{\prime}[C\cup\{w\}]}(w)\geq(\mu_{d}(k)+\gamma/4)\binom{\eta^{\prime} n}{k-1}\) for every \(w\in V(G^{\prime})\). Such a set can be found using a standard concentration inequality (see for instance [22, Corollary 2.2]). Let \(G^{\prime\prime}=G^{\prime}-C-u-v\), and note that \(\delta_{d}(G^{\prime\prime})\geq(\mu_{d}(k)+\gamma/4)\binom{n-d}{k-d}\). Hence \(G^{\prime\prime}\) contains a loose cycle that covers all but \(k-2\) of its vertices. From this we obtain a loose path \((u^{\prime},v^{\prime})\)-path \(P\) of order at least \((1-\eta/2)n\) with \(u^{\prime},v^{\prime}\in V(G^{\prime\prime})\). To finish, we apply Lemma 4.2 with \(G^{\prime}[C]\) playing the role of \(G^{\prime}\) to find a loose \((u,v)\)-path that contains \(P\) as a subpath. Indeed, by choice of \(C\) any vertex \(x\in V(G^{\prime})\) is on an edge \(e\) such that \(|e\cap C|\geq k-1\). We can therefore connect \(x\) to any vertex within \(C\) using at most \(5(k-1)\) further vertices. Applying this observation twice gives the desired connections. Now we are ready to show the main result of this subsection. Proof of Lemma 8.1.: Introduce \(\gamma\), \(K\), \(t\), \(\alpha\), \(\eta\), \(n\) with \(1/n\ll\eta\ll\alpha\ll 1/t\), \(1/K\), \(1/k\), \(1/d\), \(\gamma\). Set \(\mu=\mu_{d}(k)\). Let \(G\) be an \(n\)-vertex \(k\)-graph and \(X=\{x_{1},\ldots,x_{t}\}\subseteq V(G)\) be a \(t\)-set such that \(\delta_{d}(G-X)\geq(\mu+\gamma)\binom{n-d}{k-d}\) and \(\deg_{G}(x)\geq(\mu+\gamma)\binom{n-1}{k-1}\) for every \(x\in X\). We have to show that \(G\) contains a loose Hamilton cycle \(C\) in which \(X\) is \(K\)-spread. We begin by applying Lemma 9.1 with \(G-X\) playing the role of \(G\) to obtain \(A\subseteq V(G-X)\) and two vertices \(u,v^{\prime}\in A\) such that \(|A|\leq\alpha n\) and for any subset \(W\subseteq V(G-X-A)\) with \(|W|\leq\eta n\) divisible by \(k-1\), the induced graph \(G[A\cup W]\) has a loose \((u,v^{\prime})\)-path covering exactly \(A\cup W\). Let \(v\in V(G-X-A)\). We cover \(X\) by adding its vertices to a \((v^{\prime},v)\)-path \(P\) of order at most \(3Kt\) that shares with \(A\) only the initial vertex \(v^{\prime}\). Moreover, the vertices of \(X\) shall be at distance at least \(K\) in \(P\) and each have degree \(2\). To this end, select pairwise disjoint \((v_{i},v^{\prime}_{i})\)-paths \(P_{i}\subseteq G-A\) of order \(2\) such that \(x_{i}\) has degree \(2\) in \(P_{i}\) for \(1\leq i\leq t\). (This is possible due the minimum degree assumptions on \(X\).) Set \(v^{\prime}_{0}=v^{\prime}\) and \(v_{t+1}=v\). For each \(0\leq i\leq t\), we (repeatedly) apply Lemma 4.2 to obtain a loose \((v^{\prime}_{i},v_{i+1})\)-path \(P^{\prime}_{i}\) of order at least \(K\) and at most \(K+4k\). Note that these paths can be chosen to be pairwise disjoint. (This is possible since \(\delta_{d}(G-X)\geq(\mu+\gamma)\binom{n-d}{k-d}\).) It follows that the concatenation of the paths \(P^{\prime}_{i}\) and \(P_{i}\) forms the desired path \(P\). 
Set \(A^{\prime}=(A\cup V(P))\setminus\{u,v\}\), and note that \(|A^{\prime}|\leq 2\alpha n\). Next, we use Lemma 9.2 with \(A^{\prime}\), \(2\alpha\) playing the role of \(Q\), \(\alpha\) to find a loose \((v,u)\)-path \(P^{\prime}\) in \(G-A^{\prime}\) that covers all but \(\eta n\) vertices of \(G-A^{\prime}\). By choice of \(A\), we may integrate the remaining vertices into a loose path which together with \(P\cup P^{\prime}\) forms a loose Hamilton cycle of \(G\). We remark that the same argument also gives the following result, which is of independent interest. More details on this follow in Section 10. **Theorem 9.3**.: _Let \(1/n\ll 1/k,\,\gamma\) with \(n-1\) divisible by \(k-1\). Let \(G\) be an \(n\)-vertex graph with \(\delta_{d}(G)\geq(\mu_{d}(k)+\gamma)\binom{n-d}{k-d}\). Then \(G\) contains a loose Hamilton path between any two distinct vertices._ The rest of this section is dedicated to the proofs of Lemmas 4.2 and 9.1. ### Connectivity Here we prove Lemma 4.2. We require the following fact which follows from the Kruskal-Katona theorem. (It is easiest to see using Lovasz's formulation.) **Proposition 9.4**.: _For \(1/n\leq\varepsilon,\,\delta,\,1/k\), let \(G\) be a \(k\)-graph with at least \((\delta+\varepsilon)\binom{n}{k}\) edges. Then the edges of \(G\) span at least \(\delta^{1/k}n\) vertices._ We also need the following simplified version of Lemma 8.2, whose proof we omit as it follows almost line by line the original argument. **Lemma 9.5** (Simple blow-up setup).: _Let \(1/n\ll 1/s\ll\gamma,\,1/k,\,1/d,\,1/t\), \(\mu\geq 0\) and \(1/n\ll 1/m\). Let \(G\) be an \(n\)-vertex \(k\)-graph with \(\delta_{d}(G)\geq(\mu+\gamma)\binom{n-d}{k-d}\). Let \(X\subseteq V(G)\) be a \(t\)-set. Then there is an s-vertex \(k\)-graph \(R\) with \(X\subseteq V(R)\) such that_ 1. \(\delta_{d}(R)\geq(\mu+\gamma/2)\binom{s-d}{k-d}\)_;_ 2. \(G\) _contains a copy of_ \(R^{*}(m,X)\) _rooted in_ \(X\)_._ Proof of Lemma 4.2.: First assume that \(d=1\). Introduce \(m,s\) with \(1/n\ll 1/m\), \(1/s\ll\gamma,\,1/k,\,1/d,\,\mu\). By Lemma 9.5 applied with \(\{u,v\}\) playing the role of \(X\) there is an \(s\)-vertex \(k\)-graph \(R\) with \(X\subseteq V(R)\) such that \(\delta_{d}(R)\geq(\mu_{d}(k)+\gamma/2)\binom{s-d}{k-d}\) and there is a copy \(R^{*}\) of \(R^{*}(m,\{u,v\})\) rooted in \(\{u,v\}\) in \(G\). Let \(w\in V(R-u-v)\). We claim that there are distinct edges \(e,f\in E(R)\) such that \(u\in e\setminus f\), \(w\in f\setminus e\) and \(e\cap f\neq\emptyset\). To see this, let \(L(u)\) denote the link graph of \(u\), which is the \((k-1)\)-graph on \(V(R)\setminus\{u\}\) with a \((k-1)\)-edge \(e\) whenever \(e\cup\{x\}\) is an edge in \(R\). Recall from the introduction that \(\mu_{1}(k)\geq 2^{-(k-1)}\). Thus applying Proposition 9.4 to \(L(u)\) shows that the edges of \(u\) cover more than \(n/2\) vertices. We define \(L(v)\) analogously and obtain the same conclusion. It follows that \(R\) contains the desired edges \(e\) and \(f\). By the same argument, \(R\) contains edges \(e^{\prime},f^{\prime}\) such that \(w\in e^{\prime}\setminus f^{\prime}\), \(v\in f^{\prime}\setminus e^{\prime}\) and \(e^{\prime}\cap f^{\prime}\neq\emptyset\). Hence we may easily construct the desired \((u,v)\)-path of order \(4(k-1)+1\) in \(R^{*}\). Now assume that \(d\geq 2\). In this case, we have \(\delta_{2}(G)\geq(\mu_{d}(k)+\gamma)\binom{n-2}{k-2}\). 
So we can greedily construct a loose \((u,v)\)-path of order \(4(k-1)+1\) by considering three distinct vertices \(u^{\prime},v^{\prime},w^{\prime}\in V(G)\setminus\{u,v\}\) and then selecting appropriate edges that contain \(\{u,u^{\prime}\},\{u^{\prime},w^{\prime}\},\{w^{\prime},v^{\prime}\}\) and \(\{v^{\prime},v\}\). ### Absorption For the proof of Lemma 9.1, we require the following simplified version of Lemma 7.1. **Lemma 9.6** (Simple Dense Absorber Lemma).: _Let \(1/n\ll 1/q\ll\gamma,1/k,1/d\). Let \(G\) be an \(n\)-vertex \(k\)-graph with \(\delta_{d}(G)\geq(\mu_{d}(k)+\gamma)\binom{n-d}{k-d}\) and \(X\) be a \((k-1)\)-set of vertices of \(G\). Then \(G\) contains an absorber of order \(q+2k\) rooted in \(X\)._ The proof of Lemma 9.6 is based on a similar idea as the one of Lemma 7.1. However, since we cannot force a particular vertex in a Hamilton cycle to have degree \(2\), we need to change our approach a little bit. Proof of Lemma 9.6.: Set \(t=k-1\), \(s=q+t\) and \(\mu=\mu_{d}(k)\). Apply Lemma 9.5 with \(m=2\) to \(G\) in order to obtain an \(s\)-vertex \(k\)-graph \(T\) with \(X\subseteq V(T)\) such that 1. \(\delta_{d}(T)\geq(\mu+\gamma/2)\binom{s-d}{k-d}\); 2. \(G\) contains a copy of \(T^{*}(2,X)\) rooted in \(X\). We claim that \(T\) contains a loose Hamilton cycle \(C_{1}\), and \(T-X\) contains a loose Hamilton cycle \(C_{2}\). The existence of \(C_{1}\) follows by property (1) and the definition of \(\mu=\mu_{d}(k)\). For \(C_{2}\), observe that \(\delta_{d}(T-X)\geq(\mu+\gamma/4)\binom{n-d}{k-d}\) due to the choice of constants. So we also find \(C_{2}\) by the definition of \(\mu=\mu_{d}(k)\). Let \(R=C_{1}\cup C_{2}\). For every vertex \(v\) of \(C_{2}\), we denote by \(\bar{v}\) the vertex that is in the same colour class of \(v\) in \(R^{*}(2,X)\). Our goal is now to find an absorber \(\{P_{1}^{1},P_{1}^{2},P_{2}^{1},P_{2}^{2}\}\) of order \(q+2k\) rooted in \(X\). To begin, let \(P_{1}^{1}=(v_{1},\ldots,v_{k})\) consist of the vertices corresponding to an (ordered) edge of \(C_{2}\). We define \(P_{2}^{1}\) as the path that starts with \((v_{1},\ldots,v_{k-1},\bar{v}_{k})\), follows the orientation of \(C_{2}\) via the vertices of type \(\bar{v}\) until it reaches \(\bar{v}_{1}\) and ends with \((\bar{v}_{1},\ldots,\bar{v}_{k-1},v_{k})\). Next, let \(P_{2}^{2}=(w_{1},\ldots,w_{k})\) consist of the vertices corresponding to an (ordered) edge of \(C_{1}\), which is disjoint of the vertices of \(P_{1}^{1}\) and \(X\). Finally, the path \(P_{1}^{2}\) starts with \((w_{1},\bar{w}_{2},\ldots,\bar{w}_{k})\), follows the orientation of \(C_{1}\) via the vertices \(\bar{v}\) until \(\bar{w}_{1}\) and ends with \((\bar{w}_{1},w_{2},\ldots,w_{k})\). One can easily check that these four paths form an absorber \(A\) of order \(q+2k\) rooted in \(X\). It remains to show Lemma 9.1. The proof follows line by line the one of Lemma 2.1 once a deterministic version of Claim 6.6 is established. This is done by the next lemma. We omit the remaining details of the proof of Lemma 9.1. **Lemma 9.7**.: _Let \(1/n\ll\eta\ll\nu\), \(\alpha,\,1/M\ll 1/k,\,1/d\), \(\gamma\) and \(\eta\ll\varrho\ll\gamma\). Suppose \(G\) is an \(n\)-vertex \(k\)-graph with \(\delta_{d}(G)\geq(\mu_{d}(k)+\gamma)\binom{n-d}{k-d}\), and let \(Q\subseteq V(G)\) with \(|Q|\leq\alpha n\). Then the following hold:_ 1. _Let_ \(C=V(G)\setminus Q\)_. For every pair of distinct vertices_ \(u,\,v\in C\)_, there is a loose_ \((u,v)\)_-path_ \(P\) _of order_ \(\ell=4(k-1)+1\) _in_ \(G\) _with_ \(V(P)\subseteq C\)_._ 2. 
_Let_ \(C\subseteq V(G)\setminus Q\) _be a set of size_ \(\nu n\) _taken uniformly at random. Then with probability at least_ \(2/3\) _the following holds. For any_ \(R\subseteq C\) _with_ \(|R|\leq\varrho n\) _and distinct_ \(u,\,v\in V(G)\setminus(Q\cup R)\)_, there is a loose_ \((u,v)\)_-path_ \(P\) _of order_ \(\ell=4(k-1)+1\) _in_ \(G\) _with_ \(V(P)\setminus\{u,v\}\subseteq C\setminus R\)_._ 3. _Let_ \(Z\subseteq V(G)\) _be chosen uniformly at random among all_ \(\nu n\)_-sets. Then with probability at least_ \(2/3\) _for any_ \(W\subseteq V(G)\setminus Z\) _with_ \(|W|\leq\eta n\)_, there is a matching in_ \(G\) _covering all vertices in_ \(W\)_, each edge of which contains one vertex of_ \(W\) _and_ \(k-1\) _vertices of_ \(Z\)_._ 4. _For any_ \((k-1)\)_-set_ \(X\) _in_ \(V(G)\setminus Q\)_, there is an absorber_ \(A\) _in_ \(G\) _rooted in_ \(X\) _that avoids_ \(Q\) _and has order at most_ \(M\)_._ Proof.: Part (0) simply follows from Lemma 4.2 since \(\delta_{d}(G-Q)\geq(\mu_{d}(k)+\gamma/2)\binom{n-|Q|-d}{k-d}\). Part (1) follows by first observing that a random \(\nu\)-set \(C\subseteq V(G)\setminus Q\) satisfies \(\delta_{d}(G[C\cup\{u,v\}])\geq(\mu_{d}(k)+\gamma)\binom{\nu^{\prime}n-d}{k-d}\) for every \(u,\,v\in V(G)\setminus Q\) with probability \(2/3\). (The same observation was used in the proof of Lemma 8.1.) One can then apply Lemma 4.2 to obtain the desired path. Part (2) follows in the same way. The matching can be constructed greedily. Finally, part (3) follows by Lemma 9.6. ## 10. Conclusion In this paper, we investigated for which probabilities \(p\) a subgraph of the binomial random graph \(\mathrm{H}_{k}(n,p)\) whose relative minimum \(d\)-degree is above the corresponding dense threshold contains a loose Hamilton cycle. Our main result determines the optimal value for \(p\) when \(d>(k+1)/2\). While we do provide bounds for \(p\) for \(d\leq(k+1)/2\), it is unlikely that our results are optimal in this range. Hence a natural question is whether one can improve on this. It would furthermore be very interesting to understand whether one can obtain similar results for 'tighter' cycles. A first step in this direction was undertaken by Allen, Parczyk and Pfenninger [3] for \(d=k-1\). Note that in this situation, the problem of finding a tight Hamilton cycle in a dense graph is quite well-understood. Going beyond this, one potentially challenging problem would be to prove such a result for values of \(d\) and \(k\) for which we do not yet know the precise value of the (dense) minimum degree threshold. Finally, we remark that our work is related to the corresponding 'random robustness' problem mentioned in Section 1. In this setting, we are given a (deterministic) \(n\)-vertex \(k\)-graph \(G\) with \(\delta_{d}(G)\geq(\mu_{d}(k)+\gamma)\binom{n-d}{k-d}\) for some \(\gamma>0\). Now let \(G^{\prime}\) be a random sparsification of \(G\), which is obtained by keeping every edge independently with probability \(p\). The challenge is then to determine the threshold \(p\) for which \(G^{\prime}\) typically contains a loose Hamilton cycle. A simple coupling argument shows that \(p\) can be taken as small as in Theorem 1.2. It was shown independently by Kelly, Muyesser and Pokrosvkiy [27] and Joos, Lang and Sanhueza-Matamala [25] that Theorem 9.3 can be used to improve this to \(p\geq Cn^{-k+1}\log n\) for a constant \(C=C(k)\), which is asymptotically optimal.
2305.19766
A physical noise model for quantum measurements
In this paper we introduce a novel noise model for quantum measurements motivated by an indirect measurement scheme with faulty preparation. Averaging over random dynamics governing the interaction between the quantum system and a probe, a natural, physical noise model emerges. We compare it to existing noise models (uniform and depolarizing) in the framework of incompatibility robustness. We observe that our model allows for larger compatibility regions for specific classes of measurements.
Faedi Loulidi, Ion Nechita, Clément Pellegrini
2023-05-31T11:54:00Z
http://arxiv.org/abs/2305.19766v3
# A physical noise model for quantum measurements ###### Abstract. In this paper we introduce a novel noise model for quantum measurements motivated by an indirect measurement scheme with faulty preparation. Averaging over random dynamics governing the interaction between the quantum system and a probe, a natural, physical noise model emerges. We compare it to existing noise models (uniform and depolarizing) in the framework of incompatibility robustness. We observe that our model allows for larger compatibility regions for specific classes of measurements. ###### Contents * 1 Introduction * 2 Quantum measurements * 3 Effective noise model via indirect measurements: the two outcomes case. * 3.1 The two outcomes POVM induced by a perfect indirect measurement scheme * 3.2 The noisy two outcomes POVM induced by an imperfect indirect measurement scheme * 4 The case of arbitrary POVMs * 4.1 General induced POVM * 4.2 Physical noise model * 5 Applications to compatibility ## 1. Introduction In quantum information theory, measurements are modelled by POVMs (_Positive Operator Valued Measure_). POVMs are tuples of positive semidefinite operators that sum up the identity. The POVM formalism allows one to describe the outcome probabilities of the measurement process without knowing the microscopic details of the measurement device. However, there exists another way of extracting the probability of obtaining some outcome without destroying the quantum system. This procedure is based on _indirect measurement process_. For a given quantum state \(\rho\) in a Hilbert space \(\mathcal{H}_{S}\cong\mathbb{C}^{d}\), the indirect measurement process consists in performing a measurement of the probe on the evolved quantum state coupled to a general probe in a Hilbert space \(\mathcal{H}_{P}\cong\mathbb{C}^{n}\). Initially, the quantum state is coupled to a probe, and the evolution of the total system is given by a unitary operator \(U\in\mathcal{U}(d\,n)\). More precisely the evolution of the total system is given by: \[U:\mathcal{H}_{S}\otimes\mathcal{H}_{P} \to\mathcal{H}_{S}\otimes\mathcal{H}_{P}\] \[\rho\otimes\beta \to U\big{(}\rho\otimes\beta\big{)}U^{*}.\] Now consider an observable \(A\) on the probe system with spectral decomposition \(A=\sum\lambda_{i}P_{i}\). In the indirect measurement process, one oserves the eigenvalue \(\lambda_{i}\) of A with probability \[\mathbb{P}(i):=\mathrm{Tr}\left[U(\rho\otimes\beta)U^{*}(I\otimes P_{i}) \right]\!.\] Models of indirect measurement have been studied in the physical and mathematical community. Such models are at the cornerstone of the understanding of major experiments in quantum optics ## 1. Introduction In this paper we study the problem of determining the probability of quantum measurements in a quantum measurement device. We shall consider the problem of determining the probability of obtaining the outcome \(i\in\{0,1\}\). the distribution of the interaction unitary \(U\). This procedure gives rise to the _noisy two outcome POVM_\(\mathbb{E}[\mathcal{A}^{\beta}]=(\mathbb{E}[A_{0}^{\beta}],\mathbb{E}[A_{1}^{\beta}])\), where its elements are given by: \[\mathbb{E}[A_{0}^{\beta}]=\beta_{00}\,A_{0}+\beta_{11}\,\frac{\operatorname{ Tr}[A_{1}]}{d}I_{d}\quad\text{and}\quad\mathbb{E}[A_{1}^{\beta}]=\beta_{00} \,A_{1}+\beta_{11}\,\frac{\operatorname{Tr}[A_{0}]}{d}I_{d}.\] Naturally, we obtain a noisy POVM of the form (1). 
Its particularity is that it is close to the depolarizing noise model, but with the indices of the effect operators inside the trace switched. While the usual models, such as the uniform and the depolarizing ones, are introduced as ad-hoc noisy POVMs, our model is physically motivated through indirect measurement. Interestingly, we end up with a model close to the depolarizing one, in which the usual noise parameters are switched. In the rest of the article, we give a complete generalization of our noise model to any number of outcomes \(i\in[0,N]\) for arbitrary \(N\in\mathbb{N}\). In this general context, we describe a physically motivated effective POVM \(\mathcal{A}^{\beta}\) with \(N+1\) outcomes. Again, we concentrate on the averaged POVM \(\mathbb{E}[\mathcal{A}^{\beta}]\) by considering random models for the involved unitary operator. Finally, we shall exploit our noise model by providing some applications to the compatibility of quantum measurements. In particular, we compare it with the usual noise models considered in the literature. The paper is organized as follows. In Section 2, we recall the notion of compatibility and the different types of noise models known in the literature. In Section 3, we introduce our main _physical noise model based on an indirect measurement process_ for POVMs with two outcomes. In Section 4, we generalize the results to POVMs with more than two outcomes. In Section 5, we give examples of the application of our noise model to the compatibility of quantum measurements. ## 2. Quantum measurements In this section, we will recall basic notions from quantum information theory. We shall recall the (in-)compatibility of quantum measurements and the different types of noise models established in the literature. In quantum information theory, a quantum state is described by the framework of _density matrices_ in a finite-dimensional Hilbert space \(\mathcal{H}\cong\mathbb{C}^{d}\). Formally, we shall denote the set of quantum states by \(\mathcal{M}_{d}^{1,+}\), defined as \[\mathcal{M}_{d}^{1,+}:=\{\rho\in\mathcal{M}_{d}\,:\,\rho\geq 0\text{ and }\operatorname{Tr}\rho=1\}.\] The set of density matrices encodes all the information about the physical system. One of the main differences between classical mechanics and quantum theory is that the measurement outcomes of a given experiment are intrinsically probabilistic. The celebrated _Born rule_ allows us to obtain the probability of a given outcome. In general, the measurement process is characterized by Positive Operator Valued Measures (POVMs) [10]. Formally, a _positive operator valued measure_ on \(\mathcal{M}_{d}\) with \(N\) outcomes is an \(N\)-tuple \(\mathcal{A}=(A_{1},\dots,A_{N})\) of self-adjoint operators from \(\mathcal{M}_{d}\) which are positive semidefinite and sum up to the identity: \[\forall i\in[N],\quad A_{i}\geq 0\qquad\text{ and }\qquad\sum_{i=1}^{N}A_{i}=I_{d}.\] When measuring a quantum state \(\rho\) with the apparatus described by \(\mathcal{A}\), we obtain a random outcome from the set \([N]\): \[\forall i\in[N],\qquad\mathbb{P}(\text{outcome}=i)=\operatorname{Tr}[\rho A_{i}].\] We shall write \([0,N]:=\{0,1,\dots,N\}\) for the set of the first \(N+1\) non-negative integers and \([N]:=\{1,\cdots,N\}\) for the set of the first \(N\) positive integers. In particular, the POVM framework reduces to that of _projective measurements_ (or _von Neumann measurements_) when the effects are projectors of the form \(A_{i}=|a_{i}\rangle\langle a_{i}|\).
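For concreteness, the following minimal NumPy sketch (our illustration, not taken from the paper; the example POVM and the state are arbitrary choices) builds a two-outcome qubit POVM and evaluates the Born rule:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

# A two-outcome qubit POVM: a "smeared" projective measurement along z.
p = 0.8  # illustrative sharpness parameter
A0 = p * np.diag([1.0, 0.0]) + (1 - p) / 2 * np.eye(d)
A1 = np.eye(d) - A0

# A random density matrix rho = G G* / Tr[G G*].
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real

# Born rule: P(i) = Tr[rho A_i].
probs = [np.trace(rho @ A).real for A in (A0, A1)]
assert np.allclose(A0 + A1, np.eye(d))      # effects sum to the identity
assert all(q >= -1e-12 for q in probs)      # probabilities are nonnegative
assert np.isclose(sum(probs), 1.0)          # and sum to one
print(probs)
```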
The POVM framework allows us to describe the measurement outcomes that are relevant in a given experiment without knowing the microscopic details of the measurement device; see Figure 1 for an illustration. Another main difference between classical physics and quantum mechanics is the notion of _compatibility_. In classical mechanics, the measurement does not affect the physical system. However, it has been well known since the discovery of quantum theory that there exist measurements that cannot be performed at the same time, which we call incompatible, and others that are compatible. We say that two POVMs \(\mathcal{A}=(A_{1},\cdots,A_{N})\) and \(\mathcal{B}=(B_{1},\cdots,B_{M})\) are compatible if there exists a third POVM \(\mathcal{C}=(C_{11},\cdots,C_{NM})\) of which they are the marginals; see Figure 2 for an illustration. Formally, two POVMs \(\mathcal{A}\) and \(\mathcal{B}\) are compatible if there exists a joint POVM \(\mathcal{C}=(C_{11},\cdots,C_{NM})\) such that: \[\forall i\in[N],\,A_{i}=\sum_{j=1}^{M}C_{ij},\quad\forall j\in[M],\,B_{j}=\sum_{i=1}^{N}C_{ij}.\] In particular, the definition of compatibility reduces to commutativity for projective measurements. Moreover, the definition of compatibility can be extended naturally to _tuples of POVMs_; see [11] and the references therein for more details. In the following, we shall recall different types of _noise models_ considered in the literature. A noise model is a POVM constructed from a convex combination, with a parameter \(\alpha\in[0,1]\), of an original POVM \(\mathcal{A}\) and a _trivial_ POVM \(\mathcal{T}\). We shall focus on two different noise models well studied in the literature: the uniform noise, and the depolarizing noise. * The uniform noise model for \(\alpha\in[0,1]\) is defined by: \[\mathcal{A}\rightarrow\mathcal{A}^{\alpha} :=\alpha\mathcal{A}+(1-\alpha)\mathcal{T},\] where \(\mathcal{T}:=(t_{1}I,\cdots,t_{N}I)\), with \(t_{i}:=1/N\) for all \(i\in[N]\). * The depolarizing noise for \(\alpha\in[0,1]\) is defined by: \[\mathcal{A}\rightarrow\mathcal{A}^{\alpha} :=\alpha\mathcal{A}+(1-\alpha)\mathcal{T},\] where \(\mathcal{T}:=(t_{1}I,\cdots,t_{N}I)\), with \(t_{i}:=\operatorname{Tr}A_{i}/d\) for all \(i\in[N]\). Noise models play several important roles in quantum information, and more specifically in the compatibility of quantum measurements. Let us only mention that, for given incompatible measurements, one can ask how much noise must be added to the original POVMs in order to make them compatible. This question will be addressed when we provide applications of our physical noise model to the compatibility question and compare it with the standard noise models in the last section of this paper. ## 3. Effective noise model via indirect measurements: the two outcomes case. In this section, we will describe our _effective noise model_ obtained from an _indirect measurement process_ in the case of POVMs with two outcomes. As outlined in the Introduction, POVMs can be induced by indirect measurement (see Figure 3 for a pictorial representation). Here we shall consider a probe of dimension 2 and an observable with two outcomes. This will indeed give rise to POVMs with two outcomes. Figure 1. Diagrammatic representation of a quantum measurement apparatus. The device has an input channel and a set of 5 LEDs which will turn on when the corresponding outcome is achieved. After the measurement is performed, the particle is destroyed, and the apparatus displays the classical outcome (here, 3).
Introducing noise in the probe preparation will then induce noisy POVMs that we can compare with the usual noise models in the literature. In the sequel, we shall describe the induced POVMs in terms of the unitary interaction between the system and the probe. Briefly speaking, the description of the interaction imposes the form of the POVMs. In the case of dimension 2 (for the probe), we are able to describe an equivalence between the form of the induced POVMs and the structure of the unitary interaction. This structure, combined with noise in the probe, allows us to introduce random models from which we derive effective POVMs of the form (1). To our knowledge, the noises we obtain in this natural, physically motivated way have not been studied in the literature before. Subsection 3.1 concerns the model of indirect measurement in the perfect situation, and Subsection 3.2 concerns the models where we introduce noise in the probe preparation. Figure 3. The measurement of a quantum state with a POVM is equivalent to an indirect measurement process, where one measures the probe after the joint evolution of the quantum state coupled to a probe prepared in \(|0\rangle\langle 0|\). By destroying the probe in this process, we recover the initial POVM. Figure 2. The joint measurement of \(A\) and \(B\) is simulated by a third measurement \(C\), followed by classical post-processing. ### The two outcomes POVM induced by a perfect indirect measurement scheme Here we consider \(\mathcal{H}_{P}=\mathbb{C}^{2}\). In the following, we will assume that initially the probe is perfectly prepared in the state \(|0\rangle\langle 0|\). The initial state \(\rho\) of the system, coupled to the probe prepared in \(|0\rangle\langle 0|\), evolves as: \[\rho\otimes|0\rangle\langle 0|\to U\Big{(}\rho\otimes|0\rangle\langle 0|\Big{)}\,U^{*}, \tag{2}\] where \(U\in\mathcal{U}(2d)\subseteq\mathcal{M}_{d}\otimes\mathcal{M}_{2}\) is given by: \[U=\sum_{i,j=0}^{1}U_{ij}\otimes|i\rangle\langle j|=\begin{bmatrix}U_{00}&U_{01}\\ U_{10}&U_{11}\end{bmatrix}, \tag{3}\] where \(U_{ij}\in\mathcal{M}_{d}(\mathbb{C})\) are the blocks of \(U\). We consider an observable \(A=\lambda_{0}|0\rangle\langle 0|+\lambda_{1}|1\rangle\langle 1|\) on \(\mathcal{H}_{P}\). We denote by \(\mathbb{P}(i)\) the probability of obtaining the outcome \(i\in\{0,1\}\) by measuring the probe on the evolved total system, which is given by: \[\mathbb{P}(i):=\mathrm{Tr}\,\Big{[}U\,\rho\otimes|0\rangle\langle 0|\,U^{*}\Big{(}I_{d}\otimes|i\rangle\langle i|\Big{)}\Big{]}. \tag{4}\] In the following proposition, we will show that the probe measurement on the evolved total system induces an effective POVM \(\mathcal{A}\). **Proposition 3.1**.: _The probability of obtaining the outcome \(i\in\{0,1\}\) induces a POVM with two outcomes \(\mathcal{A}=(A_{0},A_{1})\) given by:_ \[\mathcal{A}:=(A_{0},A_{1})\quad\text{with}\quad A_{0}:=U_{00}^{*}\,U_{00}\quad\text{and}\quad A_{1}:=U_{10}^{*}\,U_{10}.
\tag{5}\] Proof.: Let \(\mathbb{P}(0)\) and \(\mathbb{P}(1)\) the probability of obtaining the outcome \(0\) and \(1\), an explicit computation shows that the probability of obtaining the outcome \(i=0\) is : \[\mathbb{P}(0) =\mathrm{Tr}\,\Big{[}U\,\Big{(}\rho\otimes|0\rangle\langle 0| \Big{)}\,U^{*}\,\Big{(}I_{d}\otimes|0\rangle\langle 0|\Big{)}\Big{]}\] \[=\mathrm{Tr}\,\Big{[}\sum_{i,j=0}^{1}\,U_{ij}\otimes|i\rangle \langle j|\Big{(}\rho\otimes|0\rangle\langle 0|\Big{)}\sum_{a,b=0}^{1}U_{ab}^{*} \otimes|b\rangle\langle a|\Big{(}I_{d}\otimes|0\rangle\langle 0|\Big{)}\Big{]}\] \[=\sum_{i,j=0}^{1}\,\sum_{a,b=0}^{1}\mathrm{Tr}\,\Big{[}U_{ij}\, \rho\,U_{ab}^{*}\otimes|i\rangle\,\langle j|0\rangle\,\langle 0|b\rangle\, \langle a|0\rangle\,\langle 0|\,\Big{]}\] \[=\mathrm{Tr}\,\Big{[}U_{00}^{*}\,U_{00}\,\rho\Big{]}=\mathrm{Tr} \,\Big{[}A_{0}\rho\Big{]},\] where we have defined \(A_{0}:=U_{00}^{*}U_{00}\). A simple computation of \(\mathbb{P}(1)\) as the one before, shows that: \[\mathbb{P}(1)=\mathrm{Tr}\,\Big{[}U_{10}^{*}\,U_{10}\,\rho\Big{]}=\mathrm{Tr} \,\Big{[}A_{1}\rho\Big{]},\] where we have defined \(A_{1}:=U_{10}^{*}U_{10}\). Using that \(U\in\mathcal{U}(2d)\) is a unitary matrix, one can check easily that \(U_{00}^{*}\,U_{00}\,+\,U_{10}^{*}\,U_{10}=I_{d}\) which implies \(A_{0}+A_{1}=I_{d}\) which is the requirement for \(A:=(A_{0},A_{1})\) being a POVM. In the proposition above, we have seen that every unitary \(U\in\mathcal{U}(2d)\), induces by an indirect measurement scheme, an effective POVM \(\mathcal{A}=(A_{0},A_{1})\) on the target system \(\mathbb{C}^{d}\). We now ask the reverse question: _given a fixed POVM \(\mathcal{A}\) on \(\mathbb{C}^{d}\),what is the set of interaction unitary operators \(U\) which yield \(\mathcal{A}\) as an effective POVM?_ This question motivates the following definition, where we will fix the POVM elements and search for unitaries where their first column is given by the POVM elements. **Definition 3.2**.: _Given a fixed two outcomes POVM \(\mathcal{A}=(A_{0},A_{1})\), we define a subset of unitary matrices \(\mathbb{U}(A_{0},A_{1})\) as:_ \[\mathbb{U}(A_{0},A_{1}):=\Big{\{}U\in\mathcal{U}(2d)\Big{|}\,A_{0}=U_{00}^{*} \,U_{00},\,A_{1}=U_{10}^{*}\,U_{10}\Big{\}}\subseteq\mathcal{U}(2d). \tag{6}\] In the following proposition, we will show that the set \(\mathbb{U}(A_{0},A_{1})\) can be completely characterized with the help of three unitary matrices \(V,W,Z\in\mathcal{U}(d)\). **Proposition 3.3**.: _The set \(\mathbb{U}(A_{0},A_{1})\) is completely characterized as follows:_ \[\mathbb{U}(A_{0},A_{1})=\Big{\{}V,W,Z\in\mathcal{U}(d)\Big{|}\,U_ {00}=V\,\sqrt{A_{0}},U_{10}=W\,\sqrt{A_{1}},\] \[U_{01}=V\,\sqrt{A_{1}}\,Z^{*}\,,\,U_{11}=-W\,\sqrt{A_{0}}\,Z^{*} \Big{\}}.\] _In other words \(U\in\mathbb{U}(A_{0},A_{1})\) if and only if there exists \(V,W,Z\in\mathcal{U}(d)\) such that_ \[U=\begin{bmatrix}V\sqrt{A_{0}}&V\,\sqrt{A_{1}}\,Z^{*}\\ W\sqrt{A_{1}}&-W\,\sqrt{A_{0}}\,Z^{*}\end{bmatrix}. \tag{7}\] Proof.: Let \(\mathbb{W}(A_{0},A_{1})\) be the set from the right-hand side in the statement. Straightforward matrix block computations easily yield the first inclusion \[\mathbb{W}(A_{0},A_{1})\subseteq\mathbb{U}(A_{0},A_{1}) \tag{8}\] Let us concentrate on the reverse inclusion: \[\mathbb{U}(A_{0},A_{1})\subseteq\mathbb{W}(A_{0},A_{1}). \tag{9}\] To this end let \(U\in\mathbb{U}(A_{0},A_{1})\), that is \[A_{0}=U_{00}^{*}U_{00}\quad\text{and}\quad A_{1}=U_{10}^{*}U_{10}. 
\tag{10}\] Let the polar decomposition of \(U_{00}\) given by: \[U_{00}=V\,P_{0}, \tag{11}\] where \(V\in\mathcal{U}(d)\) and \(P_{0}\) a positive semidefinte matrix. By combining \(A_{0}\) from equations (10) and (11): \[A_{0}=U_{00}^{*}U_{00}=(V\,P_{0})^{*}V\,P_{0}=P_{0}^{2}\implies P_{0}=\sqrt{ A_{0}}, \tag{12}\] we deduce that \(U_{00}\) is given by: \[U_{00}=V\sqrt{A_{0}}. \tag{13}\] Similarly, let the polar decomposition of \(U_{10}\) given by: \[U_{10}=W\,P_{1}, \tag{14}\] where \(W\in\mathcal{U}(d)\) and \(P_{1}\) a positive semidefinite matrix. We have that: \[A_{1}=U_{10}^{*}U_{10}=(W\,P_{1})^{*}W\,P_{1}=P_{1}^{2}\implies P_{1}=\sqrt{ A_{1}}, \tag{15}\] where we can deduce the form of \(U_{10}\), which is given by: \[U_{10}=W\sqrt{A_{1}}. \tag{16}\] From the unitary property of \(U\in\mathcal{U}(2d)\), we shall determine the other blocks of \(U\). \[U^{*}U=I_{2d}\implies(\mathrm{S}_{1}):\begin{cases}U_{00}^{*}\,U_{00}\,+\,U_ {10}^{*}\,U_{10}=I_{d}\\ U_{00}^{*}\,U_{01}\,+\,U_{10}^{*}\,U_{11}=0\\ U_{01}^{*}\,U_{00}\,+\,U_{11}^{*}\,U_{10}=0\\ U_{01}^{*}\,U_{01}\,+\,U_{11}^{*}\,U_{11}=I_{d}\end{cases} \tag{17}\] and \[UU^{*}=I_{2d}\implies(\mathrm{S}_{2}):\begin{cases}U_{00}\,U_{00}^{*}\,+\,U_ {01}\,U_{01}^{*}=I_{d}\\ U_{00}\,U_{10}^{*}\,+\,U_{01}\,U_{11}^{*}=0\\ U_{10}\,U_{00}^{*}\,+\,U_{11}\,U_{01}^{*}=0\\ U_{10}\,U_{10}^{*}\,+\,U_{11}\,U_{11}^{*}=I_{d}\end{cases} \tag{18}\] By using the fourth equation from the system \((\mathrm{S}_{2})\) (18) and the equation (16): \[WA_{1}W^{*}\,+\,U_{11}\,U_{11}^{*}=I_{d}, \tag{19}\] where from the fact that \(A_{0}+A_{1}=I_{d}\), we deduce that: \[(U_{11}^{*}\,W)^{*}\,U_{11}^{*}\,W=A_{0}. \tag{20}\] Let the polar decomposition of \(U_{11}^{*}\,W\) given by: \[U_{11}^{*}W=-Z\,Q, \tag{21}\] where \(Z\in\mathcal{U}(d)\) and positive semidefinite \(Q\). By combining the equations (20) and (21), we have: \[(U_{11}^{*}\,W)^{*}\,U_{11}^{*}\,W=Q^{2}=A_{0}\implies Q=\sqrt{A_{0}}, \tag{22}\] where we can deduce the form of \(U_{11}\), which is given by : \[U_{11}=-W\,\sqrt{A_{0}}\,Z^{*}. \tag{23}\] The only remaining element we need to determine is \(U_{01}\), for that we shall use the first equation of the system (S\({}_{2}\)) (18) and from \(A_{0}+A_{1}=I_{d}\), we have: \[U_{00}\,U_{00}^{*}\,+\,U_{01}\,U_{01}^{*}=I_{d}\iff(U_{01}^{*}\,V)^{*}U_{01}^{ *}\,V=A_{1}. \tag{24}\] Let the polar decomposition of \(U_{01}^{*}\,V\) given by: \[U_{01}^{*}\,V=\Gamma\,\tilde{Q}, \tag{25}\] with \(\Gamma\in\mathcal{U}(d)\) and \(\tilde{Q}\) positive matrix. By combining the equations (25) and (24), we have: \[(U_{01}^{*}\,V)^{*}U_{01}^{*}\,V=\tilde{Q}^{2}=A_{1}\implies\tilde{Q}=\sqrt{ A_{1}}, \tag{26}\] where we can deduce the form of \(U_{01}\), which is given by: \[U_{01}=V\,\sqrt{A_{1}}\Gamma^{*}. \tag{27}\] The only remaining matrix to determine is \(\Gamma\), for that we shall use the second equation of (S\({}_{2}\)) (18) and analyze the invertibility of the effect operators \(A_{0}\) and \(A_{1}\). 
From the second equation of (S\({}_{2}\)) (18) we have: \[U_{00}\,U_{10}^{*}\,+\,U_{01}\,U_{11}^{*}=0\iff\sqrt{A_{1}}\,\sqrt{A_{0}}=\sqrt{A_{1}}\,\Gamma^{*}\,Z\,\sqrt{A_{0}}. \tag{28}\] Remark that \(\mathrm{Ran}(A_{0})+\mathrm{Ran}(A_{1})=\mathbb{C}^{d}\). To discuss the invertibility of \(A_{0}\) and \(A_{1}\), we shall distinguish two cases: * Assuming \(A_{0}\) invertible and \(A_{1}\) invertible only on its range: Let \(\mathbb{C}^{d}\ni x=x_{0}+x_{1}\) with \(x_{i}\in\mathrm{Ran}(A_{i})\), and \(y_{1}^{T}:=x_{1}^{T}(\sqrt{A_{1}})^{-1}\); we have: \[y_{1}^{T}\sqrt{A_{1}}=x_{1}^{T}\,I_{d}\,|_{\mathrm{Ran}(A_{1})}=x_{1}^{T}\,I_{d}|_{\mathrm{Ran}(A_{1})}\Gamma^{*}\,Z,\] (29) where \(I_{d}|_{\mathrm{Ran}(A_{1})}\) is the identity on the range of \(A_{1}\). Hence, we have that: \[\Gamma^{*}Z\,|_{\mathrm{Ran}(A_{1})}=I_{d}\,|_{\mathrm{Ran}(A_{1})}\implies\Gamma|_{\mathrm{Ran}(A_{1})}=Z|_{\mathrm{Ran}(A_{1})}.\] (30) * Assuming \(A_{1}\) invertible and \(A_{0}\) invertible only on its range: As before, let \(\mathbb{C}^{d}\ni x=x_{0}+x_{1}\) with \(x_{i}\in\mathrm{Ran}(A_{i})\), and \(y_{0}:=(\sqrt{A_{0}})^{-1}x_{0}\); we have: \[\sqrt{A_{0}}\,y_{0}=\,I_{d}\,|_{\mathrm{Ran}(A_{0})}x_{0}=\Gamma^{*}\,Z\,I_{d}|_{\mathrm{Ran}(A_{0})}\,x_{0},\] (31) where \(I_{d}|_{\mathrm{Ran}(A_{0})}\) is the identity on the range of \(A_{0}\). We obtain that: \[\Gamma^{*}Z\,|_{\mathrm{Ran}(A_{0})}=I_{d}\,|_{\mathrm{Ran}(A_{0})}\implies\Gamma|_{\mathrm{Ran}(A_{0})}=Z|_{\mathrm{Ran}(A_{0})}.\] (32) By combining the two consequences above we obtain: \[\Gamma=\Gamma|_{\mathrm{Ran}(A_{0})}+\Gamma|_{\mathrm{Ran}(A_{1})}=Z|_{\mathrm{Ran}(A_{0})}+Z|_{\mathrm{Ran}(A_{1})}=Z.\] (33) Therefore, in both cases (whether \(A_{0}\) and \(A_{1}\) are both invertible or only one of the two is invertible), we deduce that \(U_{01}\) is given by: \[U_{01}=V\,\sqrt{A_{1}}\,Z^{*}. \tag{34}\] This ends the proof of the second inclusion, and therefore the proof of the proposition. **Remark 3.4**.: _The set \(\mathbb{U}(A_{0},A_{1})\) is not a subgroup of \(\mathcal{U}(2d)\); one only needs to check that \(I_{2d}\notin\mathbb{U}(A_{0},A_{1})\). From the definition given above, we have \(U_{00}=V\sqrt{A_{0}}\) and \(U_{10}=W\sqrt{A_{1}}\) for \(V,W\in\mathcal{U}(d)\).
Let \(I_{2d}\in\mathcal{U}(2d)\) we have \(U_{00}=V\,\sqrt{A_{0}}=I_{d}\) and \(U_{10}=W\,\sqrt{A_{1}}=0_{d}\), hence one can check that \(A_{0}+A_{1}\neq I_{d}\), therefore \(\mathbb{U}(A_{0},A_{1})\) is not a subgroup of \(\mathcal{U}(2d)\)._ **Remark 3.5**.: _Let the action \(\circ^{\prime}:\,\mathcal{U}(d)^{\times 3}\curvearrowright\mathbb{U}(A_{0},A_{1})\) defined by:_ \[\circ^{\prime}:\,\mathcal{U}(d)^{\times 3}\times\mathbb{U}(A_{0},A_{1}) \rightarrow\mathbb{U}(A_{0},A_{1}),\] \[\Big{(}(V,W,Z),U\Big{)}\rightarrow(V,W,Z)\circ^{\prime}U.\] _For \(U\in\mathbb{U}(A_{0},A_{1})\), the action of \(\circ^{\prime}\) is defined by:_ \[(V,W,Z)\circ U: =\begin{bmatrix}V&0\\ 0&W\end{bmatrix}\begin{bmatrix}V^{\prime}\sqrt{A_{0}}&V^{\prime}\,\sqrt{A_{1}} \,Z^{\prime*}\\ W^{\prime}\sqrt{A_{1}}&-W^{\prime}\,\sqrt{A_{0}}\,Z^{\prime*}\end{bmatrix} \begin{bmatrix}I&0\\ 0&Z^{*}\end{bmatrix}\] \[=\begin{bmatrix}\tilde{V}\sqrt{A_{0}}&\tilde{V}\,\sqrt{A_{1}}\, \tilde{Z}^{\prime*}\\ \tilde{W}^{\prime}\sqrt{A_{1}}&-\tilde{W}^{\prime}\,\sqrt{A_{0}}\,\tilde{Z}^{ \prime*}\end{bmatrix}\in\mathbb{U}(A_{0},A_{1}),\] _where \(\tilde{V}=VV^{\prime}\),\(\tilde{W}=WW^{\prime}\) and \(\tilde{Z}=ZZ^{\prime}\) are in \(\mathcal{U}(d)\)._ _With this at hand, one can understand the set \(\mathbb{U}(A_{0},A_{1})\) in the proposition 3.3 as the orbit of the simple matrix \(\tilde{U}\) given by:_ \[\tilde{U}:=\begin{bmatrix}\sqrt{A_{0}}&\sqrt{A_{1}}\\ \sqrt{A_{1}}&-\sqrt{A_{0}}\end{bmatrix}.\] _We can also check that the following property holds:_ \[\Big{(}(V,W,Z)\circ^{\prime}(V^{\prime},W^{\prime},Z^{\prime})\Big{)}\circ \tilde{U}=(VV^{\prime},WW^{\prime},ZZ^{\prime})\circ\tilde{U}.\] _where \(\circ^{\prime}\) is the group action on \(\mathcal{U}(d)^{\times 3}\)._ ### The noisy two outcomes POVM induced by an imperfect indirect measurement scheme To introduce our main _physical noise model_, we shall assume that initially the quantum state \(\rho\) is coupled to a general two-level probe \(\beta\), and the unitaries are random. The imperfect measurement of the probe will induce a general POVM \(\mathcal{A}^{\beta}\). The average over the randomness of the POVM \(\mathcal{A}^{\beta}\) will induce our physical noise model. In the basis \(\{\ket{0},\ket{1}\}\) we write \(\beta\in\mathcal{M}_{2}^{1,+}\) as: \[\beta=\sum_{i,j=0}^{1}\beta_{ij}|i\rangle\langle j|\in\mathcal{M}_{2}^{1,+}.\] Again, we consider the evolution of the form \[\rho\otimes\beta\to U\Big{(}\rho\otimes\beta\Big{)}U^{*},\] with \(U\in\mathbb{U}(A_{0},A_{1})\). We shall denote by \(\mathbb{P}_{\beta}(i)\) the probability of obtaining the outcome \(i\in\{0,1\}\) on the evolved quantum state \(\rho\) coupled to the probe \(\beta\). In the following Proposition, we shall show that the outcome probability of obtaining \(i\in\{0,1\}\) will induce an effective POVM \(\mathcal{A}^{\beta}\) that will depend on the probe. **Proposition 3.6**.: _Let the evolution of two-level probe \(\beta\) coupled to a quantum state \(\rho\) governed by \(U\in\mathbb{U}(A_{0},A_{1})\) with associated unitary operators \(V,W,Z\). 
The probability of obtaining the outcome \(i\in\{0,1\}\) given by \(\mathbb{P}_{\beta}(i)\) induces an effective POVM with two outcomes \(\mathcal{A}^{\beta}\) given by:_ \[\mathcal{A}^{\beta}=(A_{0}^{\beta},A_{1}^{\beta}),\] _where \(A_{0}^{\beta}\) and \(A_{1}^{\beta}\) are explicitly given by:_ \[A_{0}^{\beta} =\beta_{00}\,A_{0}+\beta_{01}\,Z\,\sqrt{A_{1}}\,\sqrt{A_{0}}+\beta_ {10}\,\sqrt{A_{1}}\,\sqrt{A_{0}}\,Z^{*}+\beta_{11}\,Z^{*}\,A_{1}\,Z,\] \[A_{1}^{\beta} =\beta_{00}\,A_{1}-\beta_{01}\,Z\,\sqrt{A_{1}}\,\sqrt{A_{0}}-\beta _{10}\,\sqrt{A_{1}}\,\sqrt{A_{0}}\,Z^{*}+\beta_{11}\,Z^{*}\,A_{0}\,Z.\] Proof.: An explicit computation of \(\mathbb{P}_{\beta}(i)\) shows that: \[\mathbb{P}_{\beta}(i) :=\operatorname{Tr}\left[U\left(\rho\otimes\beta\right)U^{*} \left(I_{d}\otimes|i\rangle\langle i|\right)\right]\] \[=\sum_{c,d=0}^{1}\beta_{cd}\,\operatorname{Tr}\left[U_{id}^{*} \,U_{ic}\,\rho\right]\] \[=\operatorname{Tr}[A_{i}^{\beta}\rho],\] where we have defined \(A_{i}^{\beta}:=\sum_{c,d=0}^{1}\beta_{cd}\,U_{id}^{*}\,U_{ic}\). By using Proposition 3.3, the effects of the POVM \(\mathcal{A}^{\beta}\) are given explicitly by: \[A_{0}^{\beta} :=\sum_{c,d=0}^{1}\beta_{cd}\,U_{0d}^{*}\,U_{0c}=\beta_{00}\,A_{0 }+\beta_{01}\,Z\,\sqrt{A_{1}}\,\sqrt{A_{0}}+\beta_{10}\,\sqrt{A_{1}}\,\sqrt{A _{0}}\,Z^{*}+\beta_{11}\,Z^{*}\,A_{1}\,Z\] \[A_{1}^{\beta} :=\sum_{c,d=0}^{1}\beta_{cd}\,U_{1d}^{*}\,U_{1c}=\beta_{00}\,A_{1 }-\beta_{01}\,Z\,\sqrt{A_{1}}\,\sqrt{A_{0}}-\beta_{10}\,\sqrt{A_{1}}\,\sqrt{A_ {0}}\,Z^{*}+\beta_{11}\,Z^{*}\,A_{0}\,Z\] To introduce our noise model from an indirect measurement process, we shall define a _centered 1-design_ probability measure on \(\mathcal{U}(d)\). **Definition 3.7**.: _A probability distribution \(\mu\) on \(\mathcal{U}(d)\) is called a centered \(1\)-design if \(\forall X\in\mathcal{M}_{d}\)_ \[\mathbb{E}_{Z\sim\mu}[Z\,X]=\mathbb{E}_{Z\sim\mu}[Z^{*}\,X]:=0 \tag{35}\] _and_ \[\mathbb{E}_{Z\sim\mu}[Z\,X\,Z^{*}]:=\frac{\operatorname{Tr}[X]}{d}I_{d}. \tag{36}\] **Remark 3.8**.: _The Haar probability distribution is an example of a centered \(1\)-design._ As we have described in the introduction of this subsection, our physical noise model will emerge naturally by taking the average of the effective POVM \(\mathcal{A}^{\beta}\). For that, we shall assume that \(Z\in\mathcal{U}(d)\) are sampled from a centered \(1\)-design distribution. **Theorem 3.9**.: _Assume \(U\in\mathbb{U}(A_{0},A_{1})\) and assume that the corresponding \(Z\in\mathcal{U}(d)\) is sampled from a centered 1-design distribution \(\mu\). 
The expectation value of the induced noisy effective POVM \(\mathcal{A}^{\beta}\) defines a general noise model, described by the two outcomes POVM \(\mathbb{E}_{Z\sim\mu}[\mathcal{A}^{\beta}]\) given by:_ \[\mathbb{E}_{Z\sim\mu}[\mathcal{A}^{\beta}]=(\mathbb{E}_{Z\sim\mu}[A_{0}^{\beta}],\mathbb{E}_{Z\sim\mu}[A_{1}^{\beta}]),\] _where \(\mathbb{E}_{Z\sim\mu}[A_{0}^{\beta}]\) and \(\mathbb{E}_{Z\sim\mu}[A_{1}^{\beta}]\) are given by:_ \[\mathbb{E}_{Z\sim\mu}[A_{0}^{\beta}]=\beta_{00}\,A_{0}+\beta_{11}\,\frac{\operatorname{Tr}[A_{1}]}{d}I_{d}\quad\text{and}\quad\mathbb{E}_{Z\sim\mu}[A_{1}^{\beta}]=\beta_{00}\,A_{1}+\beta_{11}\,\frac{\operatorname{Tr}[A_{0}]}{d}I_{d}.\] _In other words, we have:_ \[(\mathbb{E}_{Z\sim\mu}[A_{0}^{\beta}]\,,\,\mathbb{E}_{Z\sim\mu}[A_{1}^{\beta}])=\beta_{00}(A_{0}\,,\,A_{1})+\beta_{11}\left(\frac{\operatorname{Tr}[A_{1}]}{d}I_{d}\,,\,\frac{\operatorname{Tr}[A_{0}]}{d}I_{d}\right).\] Proof.: From Proposition 3.6 we have that: \[A_{0}^{\beta} =\beta_{00}\,A_{0}+\beta_{01}\,Z\,\sqrt{A_{1}}\,\sqrt{A_{0}}+\beta_{10}\,\sqrt{A_{1}}\,\sqrt{A_{0}}\,Z^{*}+\beta_{11}\,Z^{*}\,A_{1}\,Z, \tag{37}\] \[A_{1}^{\beta} =\beta_{00}\,A_{1}-\beta_{01}\,Z\,\sqrt{A_{1}}\,\sqrt{A_{0}}-\beta_{10}\,\sqrt{A_{1}}\,\sqrt{A_{0}}\,Z^{*}+\beta_{11}\,Z^{*}\,A_{0}\,Z. \tag{38}\] By taking the average over \(Z\) and using linearity, we have: \[\mathbb{E}_{Z\sim\mu}[A_{0}^{\beta}] =\beta_{00}\,A_{0}+\beta_{01}\,\mathbb{E}_{Z\sim\mu}[\,Z\,\sqrt{A_{1}}\,\sqrt{A_{0}}]+\beta_{10}\,\mathbb{E}_{Z\sim\mu}[\sqrt{A_{1}}\,\sqrt{A_{0}}\,Z^{*}]+\beta_{11}\,\mathbb{E}_{Z\sim\mu}[Z^{*}\,A_{1}\,Z]\] \[=\beta_{00}\,A_{0}+\beta_{11}\,\frac{\operatorname{Tr}[A_{1}]}{d}I_{d}.\] The same computation yields \(\mathbb{E}_{Z\sim\mu}[A_{1}^{\beta}]\) as announced in the statement of the theorem. Having established the general description of noise models from a two-level probe \(\beta\), we shall now specify two different probes: the _probabilistic probe_ \(\sigma_{t}\) and the _cat probe_ \(\gamma_{t}\). We will show that, although these two probes are different, after averaging over \(Z\) they give the _same noisy effective POVM_, hence the same noise model. **Definition 3.10**.: _For \(t\in[0,1]\), the probabilistic probe \(\sigma_{t}\) and the cat probe \(\gamma_{t}\) are defined respectively by:_ \[\sigma_{t}:=(1-t)|0\rangle\langle 0|+t|1\rangle\langle 1|\quad\text{and}\quad\gamma_{t}:=|\lambda_{t}\rangle\langle\lambda_{t}|, \tag{39}\] _where \(|\lambda_{t}\rangle\) is defined as_ \[|\lambda_{t}\rangle:=\sqrt{1-t}\,|0\rangle+\sqrt{t}\,|1\rangle\,. \tag{40}\] As a direct consequence of Theorem 3.9, we will see in the following corollary that the resulting noise model is the same whether one uses the probabilistic or the cat probe. **Corollary 3.11**.: _Consider the evolution \(U\in\mathcal{U}(2d)\) of \(\rho\) coupled either to the probabilistic probe \(\sigma_{t}\) or to the cat probe \(\gamma_{t}\). Then the expectation values of \(\mathcal{A}^{\sigma_{t}}\) and \(\mathcal{A}^{\gamma_{t}}\) are equal: \(\mathbb{E}_{Z\sim\mu}[\mathcal{A}^{\sigma_{t}}]=\mathbb{E}_{Z\sim\mu}[\mathcal{A}^{\gamma_{t}}]\)._ Proof.: The proof relies on the fact that the probabilistic and the cat probes have the same diagonal elements; moreover, the off-diagonal elements are all canceled when we take the expectation value.
By Theorem 3.9 we obtain: \[\mathbb{E}_{Z\sim\mu}[A_{0}^{\sigma_{t}}]=(1-t)\,A_{0}+t\,\frac{\operatorname{Tr}[A_{1}]}{d}I_{d}=\mathbb{E}_{Z\sim\mu}[A_{0}^{\gamma_{t}}]\,,\quad\mathbb{E}_{Z\sim\mu}[A_{1}^{\sigma_{t}}]=(1-t)\,A_{1}+t\,\frac{\operatorname{Tr}[A_{0}]}{d}I_{d}=\mathbb{E}_{Z\sim\mu}[A_{1}^{\gamma_{t}}].\] ## 4. The case of arbitrary POVMs In this section, we will generalize our _physical noise model_ description from an _indirect measurement process_ to the case of POVMs _having more than two outcomes_. Here we shall consider a probe of dimension \(N+1\). As in the previous section, the POVMs can be induced by indirect measurement, where the resulting POVMs have \(N+1\) outcomes. The noise in the probe preparation will induce noisy POVMs with \(N+1\) outcomes. The unitary interaction between the system and the probe determines the form of the POVMs. This will allow us to introduce the set of unitaries that give rise to a given POVM. With a noisy probe and the unitaries considered as random, a natural noise model emerges by taking the average over the unitaries. In Subsection 4.1, we describe the induced POVMs in the perfect situation. In Subsection 4.2, we describe our physical noise model. ### General induced POVM In the following subsection, we will give a general model describing an emergent POVM \(\mathcal{A}\) with \(N+1\) outcomes. For that, as in Section 3, we shall assume the probe is initially prepared in \(|0\rangle\langle 0|\) and the total system is given by \(\rho\otimes|0\rangle\langle 0|\). The dynamics are given by unitary matrices \(U\in\mathcal{U}((N+1)d)\). Explicitly, the evolution of the total system is, as usual, given by: \[\rho\otimes|0\rangle\langle 0|\to U\Big{(}\rho\otimes|0\rangle\langle 0|\Big{)}\,U^{*},\] with \(U\in\mathcal{U}((N+1)d)\subseteq\mathcal{M}_{d}\otimes\mathcal{M}_{N+1}\). We denote by \(\mathbb{P}(i)\) the probability of obtaining the outcome \(i\in[0,N]\) by measuring the probe on the evolved total system. In the following proposition, we will show that the measurement of the probe induces an effective emergent POVM \(\mathcal{A}\) with \(N+1\) outcomes. **Proposition 4.1**.: _The probabilities of obtaining the outcomes \(i\in[0,N]\) induce an effective POVM \(\mathcal{A}\) with \(N+1\) outcomes given by:_ \[\mathcal{A}:=(A_{0},\cdots,A_{N})\quad\text{with}\quad A_{i}:=U_{i0}^{*}\,U_{i0},\quad\forall i\in[0,N]. \tag{41}\] Proof.: An explicit computation of \(\mathbb{P}(i)\) shows that: \[\mathbb{P}(i) =\operatorname{Tr}\Big{[}U\left(\rho\otimes|0\rangle\langle 0|\right)U^{*}\left(I_{d}\otimes|i\rangle\langle i|\right)\Big{]}\] \[=\operatorname{Tr}\Big{[}U_{i0}^{*}\,U_{i0}\,\rho\Big{]}=\operatorname{Tr}\Big{[}A_{i}\rho\Big{]},\] where we have defined \(A_{i}:=U_{i0}^{*}\,U_{i0}\geq 0\). Therefore \(A_{i}\geq 0\) for all \(i\in[0,N]\), and by using the unitarity of \(U\) one can easily check that \(\sum_{i=0}^{N}A_{i}=I_{d}\) holds; hence \(\mathcal{A}:=(A_{0},\cdots,A_{N})\) defines a POVM. The proposition above motivates the following definition, where we consider the set of unitaries \(U\in\mathcal{U}((N+1)d)\) whose first block column reproduces a fixed POVM via \(U_{i0}^{*}\,U_{i0}=A_{i}\) for all \(i\in[0,N]\). **Definition 4.2**.: _Consider a POVM \(\mathcal{A}=(A_{0},\cdots,A_{N})\).
We define a subset of unitary matrices \(\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\) as:_ \[\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}:=\Big{\{}U\in\mathcal{U}((N+1)d)\Big{|}\,U_{i0}^{*}\,U_{i0}=A_{i}\ \text{for all}\ i\in[0,N]\Big{\}}. \tag{42}\] **Remark 4.3**.: _It is a very difficult task to explicitly characterize the set of unitaries given above, in comparison with the set from Definition 3.2, which was fully characterized in Proposition 3.3._ **Remark 4.4**.: _The set \(\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\) is non-empty. To show this, we shall show that it is possible to complete a matrix \(U\) having \(U_{i0}=\sqrt{A_{i}}\) as its first block column. This is possible as long as the first \(d\) columns of \(U\) are orthonormal. For that, consider two of these columns, with indices \(c_{1},c_{2}\in[d]\). Their scalar product is given by:_ \[\sum_{i=0}^{N}\sum_{l=1}^{d}\Big{(}\sqrt{A_{i}}\,(l,c_{1})\Big{)}^{*}\,\sqrt{A_{i}}(l,c_{2})=\sum_{i=0}^{N}A_{i}(c_{1},c_{2})=\delta_{c_{1},c_{2}},\] _where in the equality above we have used that \(\sum_{i=0}^{N}A_{i}=I_{d}\). Therefore, the first \(d\) columns of \(U\) are orthonormal, and hence we can always complete them to a unitary matrix \(U\in\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\); a numerical sketch of this completion is given at the end of this subsection._ **Remark 4.5**.: _The set \(\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\) is invariant under the action of elements of \(\mathcal{U}(d)^{\times(2N+1)}\) defined as follows. Let the action \(\circ^{\prime}:\mathcal{U}(d)^{\times(2N+1)}\curvearrowright\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\) be defined by:_ \[\circ^{\prime}:\mathcal{U}(d)^{\times(2N+1)}\times\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)} \to\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)},\] \[\Big{(}(V_{0},\cdots,V_{N},Z_{1},\cdots,Z_{N}),U\Big{)} \to(V_{0},\cdots,V_{N},Z_{1},\cdots,Z_{N})\circ^{\prime}U.\] _For \(U\in\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\), the action \(\circ^{\prime}\) is given explicitly by:_ \[U^{\prime}=(V_{0},\cdots,V_{N},Z_{1},\cdots,Z_{N})\circ^{\prime}U:=\begin{bmatrix}V_{0}&&&&\\ &V_{1}&&\\ &&\ddots&\\ &&&V_{N}\end{bmatrix}U\,\begin{bmatrix}I_{d}&&&&\\ &Z_{1}^{*}&&\\ &&\ddots&&\\ &&&Z_{N}^{*}\end{bmatrix}.\] _Remark from a simple computation that:_ \[U^{\prime*}_{i0}U^{\prime}_{i0}=U^{*}_{i0}V^{*}_{i}V_{i}U_{i0}=A_{i},\] _therefore we have \(U^{\prime}\in\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\)._ Definition 4.2 will play a crucial role in the rest of this section to introduce our physical noise model.
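Following up on Remark 4.4, the completion can be carried out numerically. The sketch below is our illustration (assuming NumPy and SciPy are available; the random POVM construction is an arbitrary choice): it stacks the square roots \(\sqrt{A_{i}}\) into the first block column and completes it to an element of \(\mathbb{U}(\{A_{i}\}_{i\in[0,N]})\).

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
d, N = 3, 2  # system dimension d, probe of dimension N + 1

def psd_sqrt(X):
    """Square root of a positive semidefinite Hermitian matrix via eigh."""
    w, V = np.linalg.eigh(X)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

# A random (N+1)-outcome POVM: A_i = S^{-1/2} G_i G_i^* S^{-1/2}, S = sum_i G_i G_i^*.
G = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(N + 1)]
S = sum(g @ g.conj().T for g in G)
R = np.linalg.inv(psd_sqrt(S))
A = [R @ g @ g.conj().T @ R.conj().T for g in G]

# The first block column (sqrt(A_0); ...; sqrt(A_N)) has orthonormal columns,
# since M^* M = sum_i A_i = I_d (Remark 4.4).
M = np.vstack([psd_sqrt(Ai) for Ai in A])
assert np.allclose(M.conj().T @ M, np.eye(d))

# Complete to a unitary by appending an orthonormal basis of the orthogonal complement.
U = np.hstack([M, null_space(M.conj().T)])
assert np.allclose(U.conj().T @ U, np.eye((N + 1) * d))

# U is an element of U({A_i}): each block U_{i0} satisfies U_{i0}^* U_{i0} = A_i.
for i in range(N + 1):
    Ui0 = U[i * d:(i + 1) * d, :d]
    assert np.allclose(Ui0.conj().T @ Ui0, A[i])
```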
### Physical noise model In the following subsection, we will introduce our main general _physical noise_ model. For that, we will assume that initially the quantum state \(\rho\) is coupled to a probe given by an \((N+1)\)-level quantum state \(\beta\in\mathcal{M}^{1,+}_{N+1}\). **Definition 4.6**.: _Let \(\beta\in\mathcal{M}^{1,+}_{N+1}\) be the \((N+1)\)-level quantum state describing the probe:_ \[\beta=\sum_{i,j=0}^{N}\beta_{ij}|i\rangle\langle j|\in\mathcal{M}^{1,+}_{N+1}.\] The evolution of the total system, the quantum state \(\rho\) coupled to the probe \(\beta\), is assumed to be given by: \[\rho\otimes\beta\to U\Big{(}\rho\otimes\beta\Big{)}U^{*},\] with \(U\in\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\). We denote by \(\mathbb{P}_{\beta}(i)\) the probability of obtaining the outcome \(i\in[0,N]\) by measuring the probe on the evolved total system. In the following, we will show that these outcome probabilities induce an effective noisy POVM \(\mathcal{A}^{\beta}\). **Proposition 4.7**.: _The probability of obtaining the outcome \(i\in[0,N]\) by measuring the probe on the evolved total system induces an effective POVM \(\mathcal{A}^{\beta}\), where:_ \[\mathcal{A}^{\beta}=(A^{\beta}_{0},\cdots,A^{\beta}_{N}),\quad\text{and}\quad A^{\beta}_{i}=\sum_{c,k=0}^{N}\,\beta_{ck}\,U^{*}_{ik}\,U_{ic},\quad\forall i\in[0,N].\] Proof.: An explicit computation of \(\mathbb{P}_{\beta}(i)\) shows that: \[\mathbb{P}_{\beta}(i) =\operatorname{Tr}\Big{[}U\left(\rho\otimes\beta\right)U^{*}\left(I_{d}\otimes|i\rangle\langle i|\right)\Big{]}\] \[=\sum_{c,k=0}^{N}\beta_{ck}\,\operatorname{Tr}\Big{[}U^{*}_{ik}\,U_{ic}\,\rho\Big{]}\] \[=\operatorname{Tr}[A^{\beta}_{i}\rho],\] where we have defined \(A^{\beta}_{i}:=\sum_{c,k=0}^{N}\beta_{ck}\,U^{*}_{ik}\,U_{ic}\geq 0\). Moreover, by the unitarity of \(U\), \(\sum_{i=0}^{N}A^{\beta}_{i}=\operatorname{Tr}[\beta]\,I_{d}=I_{d}\); therefore \(\mathcal{A}^{\beta}:=(A^{\beta}_{0},\cdots,A^{\beta}_{N})\) defines a POVM. To introduce our physical noise model in this general setting, we shall assume that the unitaries \(U\in\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\) are randomly sampled from a _nice probability measure_. **Definition 4.8**.: _A probability distribution \(\mu\) on \(\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\) is a nice probability measure if it is invariant under right multiplication with a unitary \(Z\in\mathcal{U}((N+1)d)\) of the form \(Z:=\sum_{i=0}^{N}\,Z_{i}\otimes|i\rangle\langle i|\quad\text{with}\quad Z_{0}=I_{d}\quad\text{and}\quad Z_{i}\in\mathcal{U}(d)\) for all \(i\neq 0\). More precisely, for a random unitary \(U\sim\mu\) and any function \(f\) on \(\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\), we have:_ \[\mathbb{E}_{U\sim\mu}[f(U)]=\mathbb{E}_{U\sim\mu}[f(U\cdot Z)].\] **Remark 4.9**.: _To construct an example of a nice probability measure, consider a fixed element \(U_{0}\in\mathbb{U}(\{A_{i}\}_{i\in[0,N]})\). Let also \(V_{1},\ldots,V_{N}\in\mathcal{U}(d)\) be independent Haar-distributed random unitary matrices, and set \(V:=I_{d}\oplus V_{1}\oplus\cdots\oplus V_{N}\). Define the random variable_ \[U:=U_{0}V=U_{0}(I_{d}\oplus V_{1}\oplus\cdots\oplus V_{N}).\] _We claim that \(U\) has a nice distribution. Indeed, if \(Z_{1},\ldots,Z_{N}\in\mathcal{U}(d)\) are arbitrary unitary matrices and \(Z:=I_{d}\oplus Z_{1}\oplus\cdots\oplus Z_{N}\),_ \[\mathbb{E}[f(U\cdot Z)]=\mathbb{E}[f(U_{0}VZ)]=\mathbb{E}[f(U_{0}\tilde{V})]=\mathbb{E}[f(U)],\] _where we have used the fact that \(V\) and \(\tilde{V}=VZ\) have the same (block-Haar) distribution._ **Proposition 4.10**.: _Let \(\mu\) be a nice probability measure on \(\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\). Then, for all \(i\in[0,N]\) and all \(k\neq 0\), we have:_ \[\mathbb{E}_{U\sim\mu}[U_{ik}^{*}\,U_{i0}]=0. \tag{43}\] Proof.: By using the right invariance property of the nice measure \(\mu\) from Definition 4.8, we have, for any \(Z_{k}\in\mathcal{U}(d)\): \[\mathbb{E}_{U\sim\mu}[U_{ik}^{*}\,U_{i0}]=Z_{k}^{*}\,\mathbb{E}_{U\sim\mu}[U_{ik}^{*}\,U_{i0}].\] Since this last equation holds for all \(Z_{k}\in\mathcal{U}(d)\), the proposition follows. The physical motivation for introducing the definition above is that all the microscopic degrees of freedom generating the dynamics of the total system are unknown; modeling them as random is therefore natural. The reason for assuming that they are sampled from a nice probability measure will become clear in the following theorem, where we show that averaging over the unitaries yields an effective noise model. Before stating it, the sketch below illustrates the construction of Remark 4.9 and the vanishing expectation of Proposition 4.10 numerically.
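In this sketch (our illustration; the diagonal POVM is an arbitrary example chosen so that the square roots are entrywise), we sample \(U=U_{0}(I_{d}\oplus V_{1}\oplus\cdots\oplus V_{N})\) with Haar-distributed \(V_{i}\) and estimate \(\mathbb{E}_{U\sim\mu}[U_{ik}^{*}U_{i0}]\) for \(k\neq 0\) by Monte Carlo:

```python
import numpy as np
from scipy.linalg import block_diag, null_space

rng = np.random.default_rng(2)
d, N = 2, 2
n_samples = 20000

def haar(n):
    """Haar-random unitary via QR of a complex Ginibre matrix (phase-corrected)."""
    Z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

# A diagonal 3-outcome POVM on C^2 (so sqrt(A_i) is entrywise).
A = [np.diag(v) for v in ([0.5, 0.2], [0.3, 0.3], [0.2, 0.5])]
M = np.vstack([np.sqrt(Ai) for Ai in A])       # first block column
U0 = np.hstack([M, null_space(M.conj().T)])    # one fixed element of U({A_i})

# Sample U = U0 (I_d + V_1 + ... + V_N), V_i Haar: a "nice" measure (Remark 4.9).
acc = np.zeros((d, d), dtype=complex)
i, k = 1, 2                                    # any block row i, any k != 0
for _ in range(n_samples):
    U = U0 @ block_diag(np.eye(d), *[haar(d) for _ in range(N)])
    Uik = U[i * d:(i + 1) * d, k * d:(k + 1) * d]
    Ui0 = U[i * d:(i + 1) * d, :d]
    acc += Uik.conj().T @ Ui0
print(np.linalg.norm(acc / n_samples))         # small, as in Proposition 4.10
```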
**Theorem 4.11**.: _Let \(U\) be a random unitary operator sampled from a nice probability measure \(\mu\), and assume that the diagonal elements \(\beta_{cc}\) of the probe density matrix are constant for all \(c\neq 0\). The averaged POVM given by \(\mathbb{E}_{U\sim\mu}[\mathcal{A}^{\beta}]\) defines a noise model given by:_ \[\mathbb{E}_{U\sim\mu}[A_{i}^{\beta}]=\beta_{00}\,A_{i}+\frac{1-\beta_{00}}{N-1}\Big{(}1-\frac{1}{d}\operatorname{Tr}[A_{i}]\Big{)}\,I.\] _One can interpret the element \(\beta_{00}\) of the probe \(\beta\in\mathcal{M}^{1,+}_{N+1}\) as a noise parameter._ Proof.: Let \(\mu\) be a nice probability measure on \(\mathbb{U}\Big{(}\{A_{i}\}_{i\in[0,N]}\Big{)}\); we recall from Proposition 4.7 that: \[A_{i}^{\beta}=\sum_{c,k=0}^{N}\beta_{ck}\,U_{ik}^{*}\,U_{ic}.\] An explicit computation of the expectation value of \(\mathcal{A}^{\beta}\) over \(U\sim\mu\) yields: \[\mathbb{E}_{U\sim\mu}[A_{i}^{\beta}] =\sum_{c,k=0}^{N}\beta_{ck}\,\mathbb{E}_{U\sim\mu}[U_{ik}^{*}\,U_{ic}]\] \[=\beta_{00}\,A_{i}+\sum_{k\neq 0}^{N}\beta_{0k}\,\mathbb{E}_{U\sim\mu}[U_{ik}^{*}\,U_{i0}]+\sum_{c\neq 0}^{N}\beta_{c0}\,\mathbb{E}_{U\sim\mu}[U_{i0}^{*}\,U_{ic}]+\sum_{c,k\neq 0}^{N}\beta_{ck}\,\mathbb{E}_{U\sim\mu}[U_{ik}^{*}\,U_{ic}]\] \[=\beta_{00}\,A_{i}+\sum_{k\neq 0}^{N}\beta_{0k}\,Z_{k}^{*}\,\mathbb{E}_{U\sim\mu}[U_{ik}^{*}\,U_{i0}]+\sum_{c\neq 0}^{N}\beta_{c0}\,\mathbb{E}_{U\sim\mu}[U_{i0}^{*}\,U_{ic}]\,Z_{c}+\sum_{c,k\neq 0}^{N}\beta_{ck}\,Z_{k}^{*}\mathbb{E}_{U\sim\mu}[U_{ik}^{*}\,U_{ic}]\,Z_{c}\] \[=\beta_{00}\,A_{i}+\sum_{k\neq 0}^{N}\beta_{0k}\,Z_{k}^{*}\,\underbrace{\mathbb{E}_{U\sim\mu}[U_{ik}^{*}\,U_{i0}]}_{0}+\sum_{c\neq 0}^{N}\beta_{c0}\,\underbrace{\mathbb{E}_{U\sim\mu}[U_{i0}^{*}\,U_{ic}]}_{0}\,Z_{c}+\sum_{c,k\neq 0}^{N}\beta_{ck}\,Z_{k}^{*}\underbrace{\mathbb{E}_{U\sim\mu}[U_{ik}^{*}\,U_{ic}]}_{\delta_{ck}\,\Gamma_{ic}\,I}\,Z_{c}.\] In the last line, the simplification is obtained by the use of Proposition 4.10; therefore we have the final expression: \[\mathbb{E}_{U\sim\mu}[A_{i}^{\beta}]=\beta_{00}\,A_{i}+\sum_{c\neq 0}^{N}\beta_{cc}\,\Gamma_{ic}\,I, \tag{44}\] where we have defined the constant \(\Gamma_{ic}\) as: \[\Gamma_{ic}:=\frac{1}{d}\operatorname{Tr}\Big{[}\mathbb{E}_{U\sim\mu}[U_{ic}^{*}\,U_{ic}]\Big{]}. \tag{45}\] One can check from the definition of \(\Gamma_{ic}\) that \((\mathbb{E}_{U\sim\mu}[A_{i}^{\beta}])_{i\in[0,N]}\) is a POVM: positivity holds by the expression of \(\mathbb{E}_{U\sim\mu}[A_{i}^{\beta}]\), and the only remaining condition is to show that \(\sum_{i=0}^{N}\mathbb{E}_{U\sim\mu}[A_{i}^{\beta}]=I_{d}\). By using that \(U^{*}U=I_{(N+1)d}\) we have: \[U^{*}U=I_{(N+1)d}\implies\sum_{i=0}^{N}U_{ic}^{*}\,U_{ic}=I_{d}\implies\sum_{i=0}^{N}\Gamma_{ic}=1. \tag{46}\] Then we have from equation (46) that: \[\sum_{i=0}^{N}\mathbb{E}_{U\sim\mu}[A_{i}^{\beta}]=\beta_{00}\,\sum_{i=0}^{N}\,A_{i}+\sum_{c\neq 0}^{N}\beta_{cc}\sum_{i=0}^{N}\,\Gamma_{ic}\,I=\operatorname{Tr}[\beta]\,I_{d}=I_{d}.
\tag{47}\] Assuming now that the \(\beta_{cc}\), \(c\neq 0\), are constant and do not depend on \(c\), we have that: \[\mathbb{E}_{U\sim\mu}[A_{i}^{\beta}] =\beta_{00}\,A_{i}+\sum_{c\neq 0}^{N}\beta_{cc}\,\Gamma_{ic}\,I\] \[=\beta_{00}\,A_{i}+\frac{(1-\beta_{00})}{N-1}\sum_{c\neq 0}^{N}\,\Gamma_{ic}\,I\] \[=\beta_{00}\,A_{i}+\frac{(1-\beta_{00})}{N-1}\Big{(}1-\frac{1}{d}\operatorname{Tr}[A_{i}]\Big{)}I.\] To obtain the last equality, we have used the unitarity of \(U\), which gives: \[UU^{*}=I_{(N+1)d}\implies\sum_{c=0}^{N}U_{ic}U_{ic}^{*}=I_{d}\implies\sum_{c=0}^{N}\Gamma_{ic}=1, \tag{48}\] and, by combining this with the expression from equation (45), we deduce that: \[\sum_{c\neq 0}^{N}\Gamma_{ic}=1-\Gamma_{i0}=1-\frac{1}{d}\operatorname{Tr}\Big{[}\mathbb{E}_{U\sim\mu}[U_{i0}^{*}\,U_{i0}]\Big{]}=1-\frac{1}{d}\operatorname{Tr}[A_{i}]. \tag{49}\] **Remark 4.12**.: _An interesting open question is to explore the possible noise models that can be obtained by considering different probe states \(\beta\) (with non-constant diagonal) and/or different probability distributions for the interaction unitary \(U\). Could the noise models from [1] be obtained in this way?_ ## 5. Applications to compatibility In this section, we shall give an application of the physical noise model we introduced in the setting of compatibility of quantum measurements. As we have described in Section 2, one can always make incompatible measurements compatible by adding noise. A natural notion that we shall encounter in this section is the notion of _robustness_. Robustness captures the minimal amount of noise needed to make incompatible measurements compatible. We shall give the formulation of robustness in the case of three different noise models: the uniform noise model, the depolarizing noise model, and our physical noise model. We shall give the SDP and its dual formulation for the three types of noise models mentioned above. We will end this section by giving an explicit example of robustness in the case of a particular pair of measurements, using the three noise models. In general, deciding whether a set of quantum measurements is (in-)compatible is a hard task, which can be formulated as a semidefinite program (SDP). These are convex optimization programs with positive semidefinite constraints (see [1] for a general introduction) that will allow us later to give a formulation of robustness. We recall that a noise model is given by a convex combination of a given POVM \(\mathcal{A}\) with a trivial POVM \(\mathcal{T}\). We recall from Section 2 that the uniform noise and the depolarizing noise are given by: * The uniform noise model for \(\alpha\in[0,1]\) is defined by: \[\mathcal{A}\rightarrow\mathcal{A}^{\alpha} :=\alpha\mathcal{A}+(1-\alpha)\mathcal{T},\] where \(\mathcal{T}:=(t_{1}I,\cdots,t_{N}I)\), with \(t_{i}:=1/N\) for all \(i\in[N]\). * The depolarizing noise for \(\alpha\in[0,1]\) is defined by: \[\mathcal{A}\rightarrow\mathcal{A}^{\alpha} :=\alpha\mathcal{A}+(1-\alpha)\mathcal{T},\] where \(\mathcal{T}:=(t_{1}I,\cdots,t_{N}I)\), with \(t_{i}:=\operatorname{Tr}A_{i}/d\) for all \(i\in[N]\). We shall also recall that our main noise model is based on an indirect measurement process, as obtained in Theorem 4.11: * For a given probe \(\beta\), where \(\beta_{00}\) can be interpreted as a noise parameter taking values in \([0,1]\), \[\mathcal{A}\to\mathcal{A}^{\beta}=\beta_{00}\mathcal{A}+(1-\beta_{00})\mathcal{T},\] and the trivial POVM is given by \(\mathcal{T}=(t_{0}I,\cdots,t_{N}I)\), with \(t_{i}=\frac{1}{N-1}\Big{(}1-\frac{1}{d}\operatorname{Tr}[A_{i}]\Big{)}\) for all \(i\in[0,N]\). Note that in the formulas above, it is the quantity \(1-\alpha\), resp. \(1-\beta_{00}\), which measures the amount of noise present in the POVM \(\mathcal{A}^{\alpha}\).
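For illustration (our addition, not part of the original analysis), the two standard noise maps recalled above can be implemented in a few lines; the asserts check that the noisy effects again form POVMs:

```python
import numpy as np

def uniform_noise(A, alpha):
    """A_i -> alpha A_i + (1 - alpha) I / N, for an N-outcome POVM A."""
    N, d = len(A), A[0].shape[0]
    return [alpha * Ai + (1 - alpha) * np.eye(d) / N for Ai in A]

def depolarizing_noise(A, alpha):
    """A_i -> alpha A_i + (1 - alpha) Tr[A_i] I / d."""
    d = A[0].shape[0]
    return [alpha * Ai + (1 - alpha) * np.trace(Ai).real * np.eye(d) / d
            for Ai in A]

# Two-outcome qubit examples: sharp X and Z measurements.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
A = [(np.eye(2) + X) / 2, (np.eye(2) - X) / 2]
B = [(np.eye(2) + Z) / 2, (np.eye(2) - Z) / 2]

# The noisy tuples are again POVMs: PSD effects summing to the identity.
for noisy in (uniform_noise(A, 0.6), depolarizing_noise(B, 0.6)):
    assert np.allclose(sum(noisy), np.eye(2))
    assert all(np.linalg.eigvalsh(E).min() >= -1e-12 for E in noisy)
```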
In the following, we shall give the definition of _incompatibility robustness_, which computes the minimal amount of noise needed to make two POVMs compatible; see [11, 12] and the references therein. **Definition 5.1**.: _For two POVMs \(\mathcal{A}\) and \(\mathcal{B}\), and a noise model \(\mathcal{T}\), the robustness of incompatibility of \(\mathcal{A}\) and \(\mathcal{B}\) is defined by:_ \[\alpha^{*}:=\sup_{\alpha\in[0,1]}\{\alpha\,:\,\mathcal{A}^{\alpha}\text{ and }\mathcal{B}^{\alpha}\text{ are compatible}\}. \tag{50}\] For a noise model given by a trivial operator \(\mathcal{T}\), we can give the SDP formulation of robustness. Therefore, we will recall the SDP and its dual formulation for the uniform and depolarizing noise. **Definition 5.2**.: _The SDPs of the robustness of two POVMs with \(N\) outcomes \(\mathcal{A}\) and \(\mathcal{B}\), with uniform and depolarizing noise respectively, are given by:_ \[\alpha^{*}_{u}:=\begin{cases}\max_{\alpha\in[0,1],\,\{C_{ij}\geq 0\}}\alpha\quad s.t.\\ \sum_{j=1}^{N}C_{ij}=\alpha A_{i}+(1-\alpha)I/N\\ \sum_{i=1}^{N}C_{ij}=\alpha B_{j}+(1-\alpha)I/N\end{cases}\quad\text{and}\quad\alpha^{*}_{d}:=\begin{cases}\max_{\alpha\in[0,1],\,\{C_{ij}\geq 0\}}\alpha\quad s.t.\\ \sum_{j=1}^{N}C_{ij}=\alpha A_{i}+(1-\alpha)\operatorname{Tr}[A_{i}]\,I/d\\ \sum_{i=1}^{N}C_{ij}=\alpha B_{j}+(1-\alpha)\operatorname{Tr}[B_{j}]\,I/d\end{cases},\] _where \(\alpha^{*}_{u}\) and \(\alpha^{*}_{d}\) stand respectively for the case of the uniform and the depolarizing noise._ **Proposition 5.3**.: _[12] The dual SDPs of the robustness in the case of the uniform and the depolarizing noise for two POVMs with \(N\) outcomes are given respectively by:_ \[\alpha^{*}_{u}=\begin{cases}\min_{\{X_{i}\}_{i\in[N]},\{Y_{j}\}_{j\in[N]}}1+\sum_{i=1}^{N}\operatorname{Tr}[X_{i}A_{i}]+\sum_{j=1}^{N}\operatorname{Tr}[Y_{j}B_{j}]&s.t.\\ X_{i}+Y_{j}\geq 0\,\text{for all}\quad i,j\in[N]\quad\text{and}\\ 1+\sum_{i=1}^{N}\operatorname{Tr}[X_{i}A_{i}]+\sum_{j=1}^{N}\operatorname{Tr}[Y_{j}B_{j}]\geq\sum_{i=1}^{N}\operatorname{Tr}X_{i}/N+\sum_{j=1}^{N}\operatorname{Tr}Y_{j}/N\end{cases}\] _and_ \[\alpha^{*}_{d}=\begin{cases}\min_{\{X_{i}\}_{i\in[N]},\{Y_{j}\}_{j\in[N]}}1+\sum_{i=1}^{N}\operatorname{Tr}[X_{i}A_{i}]+\sum_{j=1}^{N}\operatorname{Tr}[Y_{j}B_{j}]&s.t.\\ X_{i}+Y_{j}\geq 0\,\text{for all}\quad i,j\in[N]\quad\text{and}\\ 1+\sum_{i=1}^{N}\operatorname{Tr}[X_{i}A_{i}]+\sum_{j=1}^{N}\operatorname{Tr}[Y_{j}B_{j}]\geq\sum_{i=1}^{N}\operatorname{Tr}X_{i}\,\operatorname{Tr}A_{i}/d+\sum_{j=1}^{N}\operatorname{Tr}Y_{j}\,\operatorname{Tr}B_{j}/d\end{cases}.\] Proof.: Let us consider the following Lagrangian \(\mathcal{L}_{u}\) corresponding to the optimization problem of robustness for the uniform noise given by the primal SDP in Eq.
(5.2): \[\mathcal{L}_{u}:=\alpha+\sum_{i,j=1}^{N}\Big{\langle}C_{ij},X_{ij} \Big{\rangle}-\sum_{i=1}^{N}\Big{\langle}\sum_{j=1}^{N}C_{ij}-\alpha A_{i}-(1- \alpha)\frac{I}{N},X_{i}\Big{\rangle}-\sum_{j=1}^{N}\Big{\langle}\sum_{i=1}^{N }C_{ij}-\alpha B_{j}-(1-\alpha)\frac{I}{N},Y_{j}\Big{\rangle}\] where \(X_{ij}\), \(X_{i}\) and \(Y_{j}\) are positive semidefinite matrices for all \(i,j\in[N]\) encoding the different constrains of the primal optimisation problem. Since Slater's condition is satisfied, the minimax duality holds, therefore we have: \[\max_{\alpha,\{C_{ij}\}}\quad\inf_{\{X_{ij}\},\{X_{i}\},\{Y_{j}\} }\mathcal{L}_{u}=\inf_{\{X_{ij}\},\{X_{i}\},\{Y_{j}\}}\quad\max_{\alpha,\{C_ {ij}\}}\mathcal{L}_{u}, \tag{51}\] By expanding the expression of \(\mathcal{L}_{u}\) and taking the maximum over \(\{C_{ij}\}\) and \(\alpha\) we obtain: \[\max_{\{C_{ij}\},\alpha}\mathcal{L}_{u}=\sum_{i=1}^{N}\operatorname{Tr}X_{i}/ N+\sum_{j=1}^{N}\operatorname{Tr}Y_{j}/N\] if the following constraints hold \[1+\sum_{j=1}^{N}\Big{\langle}B_{j},Y_{j}\Big{\rangle}+\sum_{i=1} ^{N}\Big{\langle}A_{i},X_{i}\Big{\rangle} \geq\sum_{i=1}^{N}\operatorname{Tr}X_{i}/N+\sum_{j=1}^{N} \operatorname{Tr}Y_{j}/N\quad\text{and}\] \[X_{ij} =X_{i}+Y_{j}\geq 0,\] and \(+\infty\) if they do not. Above we have used the Hilbert-Schmidt scalar \(\langle A,B\rangle_{\mathrm{H.S}}=\operatorname{Tr}(A^{*}B)\). By plugging the obtained value in the expression of \(\max_{\{C_{ij}\},\alpha}\mathcal{L}_{u}\) we obtain the result of the statement. The same computation can be explicitly done for the depolarising noise, where it is easy to check by duality the results of the statement in the proposition with the following Lagrangian: \[\mathcal{L}_{d}:=\alpha+\sum_{i,j=1}^{N}\Big{\langle}C_{ij},X_{ij }\Big{\rangle} -\sum_{i=1}^{N}\Big{\langle}\sum_{j=1}^{N}C_{ij}-\alpha A_{i}-(1- \alpha)\frac{\operatorname{Tr}A_{i}}{d},X_{i}\Big{\rangle}\] \[-\sum_{j=1}^{N}\Big{\langle}\sum_{i=1}^{N}C_{ij}-\alpha B_{j}-(1 -\alpha)\frac{\operatorname{Tr}B_{j}}{N},Y_{j}\Big{\rangle}.\] In the following, we shall give the SDP formulation of robustness in the case of our physical noise model and its dual formulation. **Definition 5.4**.: _The SDP of the robustness of two POVMs with \(N+1\) outcomes \(\mathcal{A}\) and \(\mathcal{B}\) in the physical noise model is given by:_ \[\alpha_{p}^{*}:=\begin{cases}\max_{\beta_{00}\in[0,1],\{C_{ij}\}}\beta_{00} \quad s.t\\ \sum_{j=0}^{N}C_{ij}=\beta_{00}A_{i}+\frac{(1-\beta_{00})}{N-1}\Big{(}1-\frac{ \operatorname{Tr}A_{i}}{d}\Big{)}I\\ \sum_{i=0}^{N}C_{ij}=\beta_{00}B_{j}+\frac{(1-\beta_{00})}{N-1}\Big{(}1-\frac{ \operatorname{Tr}B_{j}}{d}\Big{)}I\end{cases},\] _where \(\alpha_{p}^{*}\) stands for robustness in our physical model._ We give next the dual formulation. 
**Proposition 5.5**.: _The dual SDP of robustness in the case of our physical noise model is given by:_ \[\alpha_{p}^{*}=\min_{\{X_{i}\}_{i\in[0,N]},\{Y_{j}\}_{j\in[0,N]}}1 +\sum_{i=0}^{N}\operatorname{Tr}[X_{i}A_{i}]+\sum_{j=0}^{N}\operatorname{Tr}[ Y_{j}B_{j}]\] _with the following constraints:_ \[X_{i}+Y_{j} \geq 0\,\text{for all}\quad i,j\in[0,N]\,\text{and}\] \[1+\sum_{i=0}^{N}\langle X_{i}A_{i}\rangle+\sum_{j=0}^{N}\operatorname {Tr}[Y_{j}B_{j}] \geq\frac{1}{N-1}\Big{(}\sum_{i=0}^{N}\operatorname{Tr}X_{i}+\sum_{j=0}^{N }\operatorname{Tr}Y_{j}-\frac{1}{d}\sum_{j=0}^{N}\operatorname{Tr}B_{j} \,\operatorname{Tr}Y_{j}-\frac{1}{d}\sum_{i=0}^{N}\operatorname{Tr}X_{i}\, \operatorname{Tr}A_{i}\Big{)}.\] Proof.: Let us define the Lagrangian \(\mathcal{L}_{p}\) corresponding to the optimization problem of robustness in our main physical model given by: \[\mathcal{L}_{p}:=\beta_{00}+ \sum_{i,j=0}^{N}\Big{\langle}C_{ij},X_{ij}\Big{\rangle}-\sum_{i= 0}^{N}\Big{\langle}\sum_{j=0}^{N}C_{ij}-\beta_{00}A_{i}-\frac{(1-\beta_{00})}{ N-1}\Big{(}1-\frac{\operatorname{Tr}A_{i}}{d}\Big{)}I,X_{i}\Big{\rangle}\] \[-\sum_{j=0}^{N}\Big{\langle}\sum_{i=0}^{N}C_{ij}-\beta_{00}B_{j} -\frac{(1-\beta_{00})}{N-1}\Big{(}1-\frac{\operatorname{Tr}B_{j}}{d}\Big{)}I,Y_{j}\Big{\rangle}.\] Due to Slater's condition which is satisfied, the minimax duality holds, therefore we have: \[\max_{\beta_{00},\{C_{ij}\}}\quad\inf_{\{X_{ij}\},\{X_{i}\},\{Y_{j}\}}\, \mathcal{L}_{p}=\inf_{\{X_{ij}\},\{X_{i}\},\{Y_{j}\}}\,\max_{\beta_{00},\{C_{ ij}\}}\mathcal{L}_{p}.\] By expanding the expression of \(\mathcal{L}_{p}\) and taking the maximum of \(\{C_{ij}\}\) and \(\beta_{00}\): \[\max_{\{C_{ij}\},\beta_{00}}\mathcal{L}_{p}=\frac{1}{N-1}\Big{(}\sum_{i=0}^{N }\operatorname{Tr}X_{i}+\sum_{j=0}^{N}\operatorname{Tr}Y_{j}-\frac{1}{d}\sum_ {j=0}^{N}\operatorname{Tr}B_{j}\,\operatorname{Tr}Y_{j}-\frac{1}{d}\sum_{i=0 }^{N}\operatorname{Tr}X_{i}\,\operatorname{Tr}A_{i}\Big{)},\] with the following constraints: \[1+\sum_{i=0}^{N}\langle X_{i}A_{i}\rangle+\sum_{j=0}^{N}\operatorname{Tr}[Y_{j }B_{j}]\geq\frac{1}{N-1}\Big{(}\sum_{i=0}^{N}\operatorname{Tr}X_{i}+\sum_{j=0 }^{N}\operatorname{Tr}Y_{j}-\frac{1}{d}\sum_{j=0}^{N}\operatorname{Tr}B_{j}\, \operatorname{Tr}Y_{j}-\frac{1}{d}\sum_{i=0}^{N}\operatorname{Tr}X_{i}\, \operatorname{Tr}A_{i}\Big{)},\] and \[X_{ij}=X_{i}+Y_{j}\geq 0\,\text{for all}\quad i,j\in[0,N],\] and \(+\infty\) if the constraints above are not satisfied. By plugging the obtained value in the expression of \(\max_{\{C_{ij}\},\alpha}\mathcal{L}_{p}\) we obtain the result of the statement. Let us now compare the incompatibility robustness of a pair of measurements using the three different noise models: uniform, depolarizing, and our physical model. We consider an example of two POVMs on \(\mathbb{C}^{3}\) with two outcomes \(\mathcal{A}=(A_{0},A_{1})\) and \(\mathcal{B}=(B_{0},B_{1})\) given by: \[A_{0}:=\begin{bmatrix}1/3&0&0\\ 0&2/3&0\\ 0&0&0\end{bmatrix},\,A_{1}:=\begin{bmatrix}2/3&0&0\\ 0&1/3&0\\ 0&0&1\end{bmatrix},\] and \[B_{0}=|f_{1}\rangle\langle f_{1}|+|f_{2}\rangle\langle f_{2}|\quad\text{and} \quad B_{1}=|f_{3}\rangle\langle f_{3}|,\] where \(\{|f_{i}\rangle\}\) for \(i\in[3]\) are the columns of the Fourier matrix \[F_{3}=\frac{1}{\sqrt{3}}\begin{bmatrix}1&1&1\\ 1&\omega&\omega^{2}\\ 1&\omega^{2}&\omega\end{bmatrix},\] with \(\omega=\exp(2\pi\mathrm{i}/3)\). We plot the _compatibility region_ of the POVMs defined above in Figure 4. 
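As an illustration, the primal SDP of Definition 5.2 (uniform noise) can be set up for this concrete pair of POVMs; the sketch below is our addition, assuming the `cvxpy` package is available, and is not the authors' Mathematica code. The printed value approximates the robustness \(\alpha^{*}_{u}\).

```python
import numpy as np
import cvxpy as cp

d, N = 3, 2  # qutrit system, two outcomes per POVM
w = np.exp(2j * np.pi / 3)
F3 = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]]) / np.sqrt(3)

A = [np.diag([1/3, 2/3, 0.0]).astype(complex),
     np.diag([2/3, 1/3, 1.0]).astype(complex)]
f = [F3[:, k:k + 1] for k in range(3)]
B = [f[0] @ f[0].conj().T + f[1] @ f[1].conj().T, f[2] @ f[2].conj().T]

alpha = cp.Variable()
C = [[cp.Variable((d, d), hermitian=True) for _ in range(N)] for _ in range(N)]
I = np.eye(d)
cons = [alpha >= 0, alpha <= 1]
cons += [C[i][j] >> 0 for i in range(N) for j in range(N)]
for i in range(N):   # first marginal: uniform-noise version of A
    cons.append(sum(C[i][j] for j in range(N)) == alpha * A[i] + (1 - alpha) * I / N)
for j in range(N):   # second marginal: uniform-noise version of B
    cons.append(sum(C[i][j] for i in range(N)) == alpha * B[j] + (1 - alpha) * I / N)

prob = cp.Problem(cp.Maximize(alpha), cons)
prob.solve()
print(alpha.value)   # robustness alpha*_u for this pair
```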
Recall that the compatibility region [1] is a generalization of the incompatibility robustness from Eq. (50): \[\Gamma(\mathcal{A},\mathcal{B}):=\{(p,q)\in[0,1]^{2}\,:\,\mathcal{A}^{p}\text{ and }\mathcal{B}^{q}\text{ are compatible}\}.\] Notice in the formula above that the region \(\Gamma(\mathcal{A},\mathcal{B})\) depends on the noise model used to define \(\mathcal{A}^{p}\) and \(\mathcal{B}^{q}\). The compatibility regions corresponding to the different noise models are approximately the same; small differences can be observed for noise parameters in the interval \([0.7,1]\). One can notice that with our physical noise model, we require _less_ noise in order to make the POVMs \(\mathcal{A}\) and \(\mathcal{B}\) compatible than with the other noise models considered in the literature. _Acknowledgements_. F.L. would like to thank Denis Rochette for help with Mathematica code. The authors were supported by the ANR project ESQuisses, grant number ANR-20-CE47-0014-01, as well as by the PHC program _Star_ (Applications of random matrix theory and abstract harmonic analysis to quantum information theory). I.N. has also received support from the ANR project STARS, grant number ANR-20-CE40-0008. C.P. is also supported by the ANR projects Q-COAST ANR-19-CE48-0003, "Quantum Trajectories" ANR-20-CE40-0024-01, and "Investissements d'Avenir" ANR-11-LABX-0040 of the French National Research Agency.
2309.16958
PopSED: Population-Level Inference for Galaxy Properties from Broadband Photometry with Neural Density Estimation
We present PopSED, a framework for the population-level inference of galaxy properties from photometric data. Unlike the traditional approach of first analyzing individual galaxies and then combining the results to determine the physical properties of the entire galaxy population, we directly make the population distribution the inference objective. We train normalizing flows to approximate the population distribution by minimizing the Wasserstein distance between the synthetic photometry of the galaxy population and the observed data. We validate our method using mock observations and apply it to galaxies from the GAMA survey. PopSED reliably recovers the redshift and stellar mass distribution of $10^{5}$ galaxies using broadband photometry within $<1$ GPU hr, being $10^{5-6}$ times faster than the traditional spectral energy distribution modeling method. From the population posterior, we also recover the star-forming main sequence for GAMA galaxies at $z<0.1$. With the unprecedented number of galaxies in upcoming surveys, our method offers an efficient tool for studying galaxy evolution and deriving redshift distributions for cosmological analyses.
Jiaxuan Li, Peter Melchior, ChangHoon Hahn, Song Huang
2023-09-29T03:54:19Z
http://arxiv.org/abs/2309.16958v2
# PopSED: Population-Level Inference for Galaxy Properties from Broadband Photometry ###### Abstract We present PopSED, a framework for population-level inference of galaxy properties from photometric data. Unlike the traditional approach of first analyzing individual galaxies and then combining the results to determine the physical properties of the entire galaxy population, we directly make the population distribution the inference objective. We train normalizing flows to approximate the population distribution by minimizing the Wasserstein distance between the synthetic photometry of the galaxy population and the observed data. We validate our method using mock observations and apply it to galaxies from the GAMA survey. PopSED reliably recovers the redshift and stellar mass distribution of \(10^{5}\) galaxies using broadband photometry within \(<1\) GPU-hour, being \(10^{5-6}\) times faster than the traditional SED modeling method. From the population posterior, we also recover the star-forming main sequence for GAMA galaxies at \(z<0.1\). With the unprecedented number of galaxies in upcoming surveys, our method offers an efficient tool for studying galaxy evolution and deriving redshift distributions for cosmological analyses. Stellar populations (1622) -- Galaxy photometry (611) -- Galaxy evolution (594) -- Neural networks (1933) -- Astrostatistics (1882) -- Sky surveys (1464) + Footnote †: journal: ApJ ## 1 Introduction Galaxies are the building blocks of the Universe. The history of their formation and evolution is encoded in their spectral energy distributions (SEDs). Therefore, one of the most important tasks in extragalactic astronomy is to decode the physical properties of galaxies, including the redshift, stellar mass, star formation history (SFH), and chemical enrichment history, from the observed SEDs. The state-of-the-art SED modeling methods (e.g., Noll et al., 2009; Carnall et al., 2018; Johnson et al., 2021) that utilize stellar population synthesis (SPS) models (Conroy, 2013) have been an indispensable component in many studies ranging from individual high-redshift galaxies (Labbe et al., 2023) to large galaxy surveys (e.g., SDSS, Gunn et al., 2006). Hundreds of millions of galaxies will be characterized with the upcoming observations from the Rubin Observatory Legacy Survey of Space and Time (LSST; Ivezic et al., 2019), _Euclid_ (Racca et al., 2016), and the _Roman_ telescope (Spergel et al., 2015), enabling a huge discovery space for the origin and evolution of galaxies as a _population_. SED modeling is typically applied to individual galaxies. The point estimates or the posterior distributions of each galaxy need to be combined to study the galaxy population, such as measuring the stellar mass function (e.g., Wright et al., 2017; Hahn et al., 2023), the star-forming main sequence (e.g., Speagle et al., 2014; Thorne et al., 2021), and the stellar mass-metallicity relation (e.g., Tremonti et al., 2004; Curti et al., 2020). However, modeling SEDs of individual galaxies is very expensive. SED fitting is a high-dimensional problem (typically \(>10\) dimensions) that requires evaluating the SPS model and sampling the posterior distribution several million times for a single galaxy.
A Bayesian SED fitting with traditional SPS models (e.g., FSPS, Conroy et al., 2009) takes \(\sim 20\) CPU-hours per galaxy (Leja et al., 2019), making it computationally formidable to analyze millions of galaxies, not to mention the entire galaxy population. This problem is partially mitigated by recent developments in accelerating SED modeling by emulating the SPS models with neural networks (Alsing et al., 2020), building differentiable SPS models with high-performance libraries (Hearin et al., 2021), and speeding up the sampling by using amortized simulation-based inference (SBI; Hahn and Melchior, 2022; Khullar et al., 2022; Wang et al., 2023). However, some of these methods need sophisticated training and are costly to retrain when adapting to different SPS models or noise properties. Furthermore, to constrain the galaxy population distribution, one cannot simply combine point estimates, e.g., the Maximum A Posteriori (MAP) estimate, or directly average the individual posteriors. One needs thousands of samples from the posterior of each galaxy to construct the population posterior by running Markov Chain Monte Carlo (MCMC) for a hierarchical model (e.g., Malz and Hogg, 2020; Alsing et al., 2022; Hahn et al., 2023), which poses additional modeling and computational challenges.

In this paper, we introduce PopSED, an efficient and robust method to infer the properties of the galaxy population from broadband photometric data. We directly constrain the population-level distribution of galaxy properties without fitting individual galaxies and circumvent the need to combine individual posteriors. We use normalizing flows to flexibly model the population distribution, and train the flows by minimizing the Wasserstein distance between the observed photometric data and the synthetic data generated by the flow. PopSED can reliably recover the population-level properties of \(10^{5}\) galaxies within \(\sim 1\) GPU-hour. The code popsed1 is publicly available online.

Footnote 1: [https://github.com/AstroJacobLi/popsed](https://github.com/AstroJacobLi/popsed)

The paper is organized as follows. We describe our method in §2 and Figure 1, introduce the data we used in §3, validate our method using mock observations and real data in §4, and discuss the strengths and limitations of this work in §5. We adopt a Chabrier (2003) initial mass function and a flat \(\Lambda\)CDM cosmology from Planck Collaboration et al. (2016) with \(\Omega_{\rm m}=0.307\) and \(H_{0}=67.7\) km s\({}^{-1}\) Mpc\({}^{-1}\). The photometry used in this work is in the AB system (Oke and Gunn, 1983).

## 2 Method

The goal of this paper is to infer the distribution of physical properties \(\mathbf{\theta}\) of a large number of galaxies from photometric data \(\{\mathbf{X}_{i}\}\). We approximate the underlying galaxy population distribution \(p(\mathbf{\theta}|\{\mathbf{X}_{i}\})\) with a flexible neural density estimator, a normalizing flow \(q_{\phi}(\mathbf{\theta})\) with parameters \(\phi\). In order to train the normalizing flow to approximate the population distribution, we compare the synthetic photometric data generated by the flow with the observed data. Specifically, after sampling from the flow \(\mathbf{\theta}_{j}^{\phi}\sim q_{\phi}(\mathbf{\theta})\), we predict the corresponding broadband photometry \(\hat{\mathbf{X}}_{j}^{\phi}=F(\mathbf{\theta}_{j}^{\phi})\) using the SPS model.
We then compare the observed photometry \(\{\mathbf{X}_{i}\}\) and the synthetic photometry \(\{\hat{\mathbf{X}}_{j}^{\phi}\}\) by calculating the Wasserstein distance \(\mathcal{W}_{2}(\{\hat{\mathbf{X}}_{j}^{\phi}\},\{\mathbf{X}_{i}\})\) between the two distributions. The normalizing flow is trained to minimize \(\mathcal{W}_{2}\) until the synthetic photometry from the normalizing flow agrees with the observed photometry, at which point \(q_{\phi}(\mathbf{\theta})\) serves as a MAP estimate to the galaxy population distribution \(p(\mathbf{\theta}|\{\mathbf{X}_{i}\})\). We train an ensemble of flows \(\{q_{\phi}(\mathbf{\theta})\}\) to further approximate the posterior of the population distribution. Figure 1 provides a high-level overview of our method. We describe the forward model \(F\) used to model the photometry in §2.1 and §2.2, introduce the normalizing flow in §2.3 and the Wasserstein distance in §2.4, and present the training strategy in §2.5.

### Stellar Population Synthesis Modeling

A stellar population synthesis (SPS) model is needed to translate the physical parameters of a galaxy \(\mathbf{\theta}_{i}\) to its SED \(\mathbf{X}_{i}\) (see Conroy, 2013 for a review). Different SPS models have different choices for modeling the star formation history (SFH), chemical enrichment history, and dust attenuation. In the following, we discuss each of these aspects of the SPS model we use.

In this work, the SFHs of galaxies are described using the PROVABGS model (Alsing et al., 2020; Hahn et al., 2023), which is trained on the galaxies in the Illustris hydrodynamical simulation (Vogelsberger et al., 2014; Genel et al., 2014; Nelson et al., 2015). The SFH is modeled by a linear combination of four SFH bases \(\{s_{i}^{\rm SFH}\}\) and one burst component \(\delta(t-t_{\rm burst})\): \[{\rm SFH}(t)\propto(1-f_{\rm burst})\sum_{i=1}^{4}\beta_{i}s_{i}^{\rm SFH}(t)+ f_{\rm burst}\,\delta(t-t_{\rm burst}), \tag{1}\] where \(\beta_{i}\) is the coefficient of each SFH basis, \(t_{\rm burst}\) is the lookback time when the starburst happens, and \(f_{\rm burst}\) is the fraction of the total stellar mass that is formed during the burst. The SFH bases \(\{s_{i}^{\rm SFH}\}\) are generated from the SFHs of Illustris galaxies using non-negative matrix factorization (NMF, Lee and Seung, 1999; Cichocki and Phan, 2009). As shown in Hahn et al. (2023b), the four bases are sufficient to capture the SFHs of Illustris galaxies. Among the SFH bases, \(s_{1}^{\rm SFH}\) corresponds to the most recent star formation, whereas \(s_{4}^{\rm SFH}\) corresponds to the oldest star formation (see Fig. 5 and Appendix A in Hahn et al., 2023b). Such SFH bases are non-negative by construction and are physically intuitive to interpret. We require \(\sum_{i}\beta_{i}=1\) and we define the total-formed stellar mass to be the integral of the SFH: \(M_{\star,{\rm formed}}=\int_{t=0}^{t_{\rm age}}{\rm SFH}(t,t_{\rm age})\,{\rm d}t\). Unless otherwise noted, in this work we refer to the stellar mass \(M_{\star}\) as the total-formed stellar mass \(M_{\star,{\rm formed}}\)2. In total, the SFH of a galaxy is described by 7 parameters, namely \(\beta_{1},\beta_{2},\beta_{3},\beta_{4}\), \(f_{\rm burst}\), \(t_{\rm burst}\), and \(M_{\star}\).

Footnote 2: We note that some surveys report the stellar mass as the total mass of all luminous material at the time of observation (also called “surviving stellar mass”). The difference in the definition of stellar mass should be noticed when comparing the results. Hearin et al.
(2021) provided a fitting function to calculate the surviving stellar mass fraction.

Another important piece in the SPS model is the chemical enrichment history, which is usually simplified to the metallicity history (ZH) by assuming a fixed element abundance ratio for all metals. In the original PROVABGS, the ZH is described as a linear combination of two ZH bases extracted from Illustris. Because photometric data alone are quite uninformative in inferring the metallicity history (e.g., Hahn et al., 2023b), we simplify the ZH to a constant stellar metallicity \(\log(Z_{\star}/Z_{\odot})\) over time, as many other works assume (e.g., Carnall et al., 2019; Leja et al., 2019).

In order to synthesize the stellar populations, we discretize the lookback time \(t\) into time bins. The first time bin corresponds to \(\log\left(t/{\rm yr}\right)<6.05\), and each time bin has a width of 0.1 dex until the lookback time reaches the age of the galaxy \(t_{\rm age}\), which is determined by its redshift \(z\). We treat the stellar population in each time bin as a simple stellar population (SSP) and we evaluate its SFR according to the SFH. In the end, we add the spectra of the SSPs in each time bin together, weighted by the stellar mass formed in each bin. We assume a Chabrier (2003) IMF and use the MIST isochrones (Dotter 2016; Choi et al. 2016) and the empirical spectral library MILES (Sanchez-Blazquez et al. 2006) for 3800-7100Å and the BaSeL library (Lejeune et al. 1997, 1998; Westera et al. 2002) for wavelengths outside of the range of MILES. We use FSPS (Conroy et al. 2009; Conroy and Gunn 2010; Johnson et al. 2021) to generate SSP spectra. We do not add nebular emission to the spectra because broadband photometry mostly reflects the continuum of the galaxy spectrum.

Figure 1: A schematic diagram of PopSED (details in §2). The galaxy population distribution \(p(\mathbf{\theta}|\{\mathbf{X}_{i}\})\) is approximated by a normalizing flow \(q_{\phi}(\mathbf{\theta})\). We sample from the normalizing flow and forward model the synthetic photometry \(\{\hat{\mathbf{X}}_{j}^{\phi}\}\) using the galaxy SED emulator \(F(\mathbf{\theta}_{j}^{\phi})\). Then we compare the distributions of observed photometry and the synthetic photometry by calculating the Wasserstein distance \(\mathcal{W}_{2}(\{\hat{\mathbf{X}}_{j}^{\phi}\},\{\mathbf{X}_{i}\})\), which is used as a loss to train the normalizing flow until the synthetic photometry from the normalizing flow agrees with the observed photometry.

We add dust attenuation to the galaxy spectra following the recipe in Charlot and Fall (2000), which includes the birth-cloud attenuation and the diffuse dust screening. The birth-cloud attenuation only acts for stars younger than \(10^{7}\) yrs (Conroy et al. 2009), whereas the diffuse dust component affects all stars. We refer interested readers to Hahn et al. (2023) for details. There are three parameters in our dust model: \(\tau_{1}\) is the birth-cloud optical depth at 5500Å, \(\tau_{2}\) is the diffuse dust optical depth at 5500Å, and \(n_{\rm dust}\) is the slope of the Calzetti et al. (2000) attenuation curve.

To summarize, our SPS model contains 12 parameters, as listed in Table 1. We refer the interested readers to Hahn et al. (2023) for a more complete description of PROVABGS. We emphasize that our method for population-level inference is not limited to a specific SPS model: one can use a different SPS model to perform the population-level inference on galaxy populations. A minimal code sketch of evaluating the SFH model of Eq. (1) is given below.
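To make Eq. (1) concrete, the following is a minimal sketch of evaluating such an SFH on a discrete lookback-time grid. The NMF bases below are random placeholders, since the actual Illustris-derived bases are not reproduced in this paper; everything here is for illustration only.

```python
import numpy as np

def sfh_model(t, betas, f_burst, t_burst, bases):
    """Evaluate Eq. (1) on a lookback-time grid t (Gyr).

    betas : four coefficients with sum(betas) == 1
    bases : array of shape (4, len(t)); each row is a unit-normalized SFH basis.
    The delta-function burst is deposited into the grid bin nearest t_burst.
    """
    smooth = (1.0 - f_burst) * (betas[:, None] * bases).sum(axis=0)
    burst = np.zeros_like(t)
    i = np.argmin(np.abs(t - t_burst))
    burst[i] = f_burst / np.gradient(t)[i]   # unit mass spread over one bin width
    return smooth + burst

# Placeholder bases (the real ones come from NMF applied to Illustris SFHs).
t = np.linspace(0.0, 13.27, 500)
rng = np.random.default_rng(0)
bases = np.abs(rng.normal(size=(4, t.size)))
bases /= np.trapz(bases, t, axis=1)[:, None]  # normalize each basis to unit mass

sfh = sfh_model(t, betas=np.array([0.4, 0.3, 0.2, 0.1]),
                f_burst=0.1, t_burst=1.0, bases=bases)
m_formed = np.trapz(sfh, t)                   # integrates to ~1 by construction
```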
If the SPS model is relatively slow to evaluate, one needs to train a neural emulator to accelerate the evaluation and bring differentiability to the inference, as we introduce below.

### Galaxy Spectrum Emulator

Although we do not use MCMC to construct the population distribution in this work, we still need a fast and differentiable SPS model such that the information contained in photometric data (i.e., the Wasserstein distance, see §2.4) can efficiently flow back to the parameter space. Therefore, we train an emulator for the SPS model described in §2.1 following the approach in Alsing et al. (2020) and Hahn et al. (2023). After training, the emulator takes the physical parameters \(\mathbf{\theta}_{i}\) of a galaxy (listed in Table 1) and predicts its synthetic photometry \(\hat{\mathbf{X}}_{i}=F(\mathbf{\theta}_{i})\) in SDSS \(ugriz\) bands with realistic noise added. We choose SDSS bands to better match the data used in §3; one can certainly include more filters if needed. The details of training the spectrum emulator are presented in Appendix A.

Noise must be added to the noiseless SEDs in order to meaningfully forward model the observed data. For simplicity, we assume the noise in each filter is independent and construct a noise model in each band \(k\). To be specific, we add Gaussian noise to the noiseless fluxes \(f_{k}\) following \(\hat{f}_{k}=f_{k}+n_{k}\), where \(n_{k}\sim\mathcal{N}(0,\sigma_{k})\). The noise level \(\sigma_{k}\) should depend on the flux \(f_{k}\) because, given a survey depth, fainter sources have a lower signal-to-noise ratio (SNR\({}_{k}=f_{k}/\sigma_{k}\)). We assume that the SNR for a given flux \(f_{k}\) follows a Gaussian distribution: \(p(\mathrm{SNR}_{k}|f_{k})=\mathcal{N}(\mu_{\mathrm{SNR}}(f_{k}),\sigma_{ \mathrm{SNR}}(f_{k}))\). For a given data set, we empirically estimate the median SNR (\(\mu_{\mathrm{SNR}}\)) and the standard deviation (\(\sigma_{\mathrm{SNR}}\)) as a function of \(f_{k}\) in each band \(k\) by evaluating them in magnitude bins and interpolating over the bins. During the forward modeling, we sample the distribution \(\mathrm{SNR}_{k}\sim p(\mathrm{SNR}_{k}|f_{k})\) for a given \(f_{k}\), convert \(\mathrm{SNR}_{k}\) to \(\sigma_{k}=f_{k}/\mathrm{SNR}_{k}\), then add Gaussian noise \(n_{k}\sim\mathcal{N}(0,\sigma_{k})\) to the noiseless flux \(f_{k}\). If the resulting flux \(\hat{f}_{k}<0\), we repeat the above procedure until \(\hat{f}_{k}\geq 0\). Unlike in Hahn and Melchior (2022), who model \(p(\sigma_{k}|f_{k})\), we find that modeling \(p(\mathrm{SNR}_{k}|f_{k})\) is more robust in the low-SNR regime. In the end, we convert noisy fluxes \(\hat{\mathbf{f}}_{i}\) to magnitudes \(\hat{\mathbf{X}}_{i}\); a short code sketch of this procedure is given below. Our emulator is \(\sim 10^{3-4}\) times faster than the direct SPS computation.

### Neural Density Estimator

We approximate the population distribution \(p(\mathbf{\theta}|\{\mathbf{X}_{i}\})\) using density estimators. Although traditional density estimation techniques, such as Gaussian mixture models (e.g., Bovy et al. 2011), are easy to optimize and interpret, they are less flexible and less scalable to large data sets in high dimensions than neural density estimation (NDE) techniques. In this work, we use "normalizing flows" (e.g., Tabak and Vanden-Eijnden 2010; Tabak and Turner 2013; Kobyzev et al. 2019) to approximate the population distribution of galaxy properties.
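As a brief aside, the SNR-based noise injection of §2.2 can be sketched as follows. The callables `mu_snr` and `sigma_snr` are hypothetical stand-ins for the empirical per-band interpolators described above; this is an illustration, not the actual implementation.

```python
import numpy as np

def add_noise(f, mu_snr, sigma_snr, rng):
    """Add SNR-based Gaussian noise to noiseless fluxes in one band.

    mu_snr, sigma_snr : callables returning the median SNR and its scatter
                        as a function of flux (hypothetical interpolators).
    Fluxes that scatter below zero are redrawn, as described in the text.
    """
    out = np.empty_like(f)
    for i, fk in enumerate(f):
        while True:
            snr = rng.normal(mu_snr(fk), sigma_snr(fk))
            if snr <= 0:                      # guard against nonpositive SNR draws
                continue
            fhat = fk + rng.normal(0.0, fk / snr)
            if fhat >= 0:                     # accept only nonnegative fluxes
                out[i] = fhat
                break
    return out
```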
Being a neural density estimator, normalizing flow has been successfully applied to many fields in astronomy (e.g., Alsing et al. 2019; Zhang et al. 2021; Hahn and Melchior 2022; Ciuca and Ting 2022; Dai and Seljak 2022; Green et al. 2023) to approximate density distributions and generate new data. Normalizing flow maps a complex distribution \(q_{\phi}(\mathbf{\theta})\) to a simple base distribution \(\pi(\mathbf{z})\) using an invertible bijective transformation \(f:\mathbf{\theta}\to\mathbf{z}\), which is described by a neural network with parameters \(\phi\). The base distribution is often chosen to be easy to evaluate and sample such that we can evaluate the target distribution following \[q_{\phi}(\mathbf{\theta})=\pi(f^{-1}(\mathbf{\theta}))\left|\det\left(\frac{\partial f ^{-1}}{\partial\mathbf{\theta}}\right)\right|.\] From many different flow models (e.g., Papamakarios et al. 2017), we use the Neural Spline Flow (NSF, Durkan et al., 2019), where the base function is a multivariate Gaussian distribution and the transformations are described by monotonic rational-quadratic splines. The flexibility of the NSF model makes it well-suited to model the population distribution. We use the NSF implementation in the sbi3 package (Greenberg et al., 2019; Tejero-Cantero et al., 2020). Our NSF has 20 blocks, 60 bins for the splines, and 100 latent features in each coupling layer. Footnote 3: [https://github.com/mackelab/sbi](https://github.com/mackelab/sbi) The original NSF model is initialized such that \(q_{\mathbf{\phi}}(\mathbf{\theta})\) is a standard multivariate Gaussian distribution. However, such an initialization poses significant challenges for training the flow. First, nonphysical parameters (e.g., negative redshift and stellar mass) or parameters that lie outside the prior of the emulator (Table 1) will be drawn from this initial distribution. These nonphysical values are either meaningless or incompatible with our forward model. Second, the standard Gaussian initialization imposes a quite strong prior, which could bias the population distribution. To mitigate these issues, we opt for uniform distributions over the standard Gaussian to initialize the normalizing flow. We append an additional layer to the NSF flow that performs cumulative distribution function (CDF) transformation. The CDF transformation converts a standard Gaussian distribution into a uniform distribution within a user-specified range. We set the ranges of the uniform distributions following the priors listed in Table 1. In particular, for the results shown in SS4, we narrow the redshift range to \(0<z<0.8\) and the stellar mass range to \(7.5<\log(M_{*}/M_{\odot})<13.0\) to remove irrelevant parameter space and make the inference more efficient. After applying this CDF transformation, the initial normalizing flow describes a uniform distribution in all dimensions and ensures that all parameters drawn from the initial normalizing flow are physical. The uniform distribution serves as the prior of our inference for the population distribution. ### Wasserstein Distance The job now is to train the normalizing flow \(q_{\phi}(\mathbf{\theta})\) to best approximate the population distribution. We use optimization to find a flow that is most probable to produce the photometric data that is consistent with the observations. This is equivalent to finding the MAP estimate to the posterior of the population distribution. 
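To illustrate the CDF layer described above, the following minimal sketch (an illustration, not the sbi implementation) maps standard-normal latents onto a uniform distribution over a chosen prior box:

```python
import torch

def gaussian_cdf_to_uniform(z, lo, hi):
    # Phi(z) sends z ~ N(0, 1) to u ~ Uniform(0, 1); an affine map then
    # stretches u onto the prior range [lo, hi] for each parameter.
    u = 0.5 * (1.0 + torch.erf(z / 2.0 ** 0.5))
    return lo + (hi - lo) * u

z = torch.randn(10_000, 2)                 # standard-normal latents
lo = torch.tensor([0.0, 7.5])              # e.g., lower bounds on z and log M*
hi = torch.tensor([0.8, 13.0])             # upper bounds used in Section 4
theta = gaussian_cdf_to_uniform(z, lo, hi) # ~ uniform over the prior box
```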
In order to do so, we sample from the normalizing flow \(\mathbf{\theta}_{j}^{\phi}\sim q_{\phi}(\mathbf{\theta})\) and generate the corresponding synthetic photometry using the forward model \(\hat{\mathbf{X}}_{j}^{\phi}=F(\mathbf{\theta}_{j}^{\phi})\). The _distance_ (or divergence) between the distribution of observed data \(\{\mathbf{X}_{i}\}\) and the distribution of synthetic data \(\{\hat{\mathbf{X}}_{j}^{\phi}\}\) can be a proxy for the dissimilarity between the normalizing flow and the underlying population distribution. By minimizing this distance metric, we train the normalizing flow \(q_{\phi}(\mathbf{\theta})\) so that it accurately approximates the underlying population distribution.

Choosing an appropriate distance metric for probability distributions is critical. However, traditional distance metrics are challenging to apply to high-dimensional and discrete data sets. Because we compare two discrete samples with different sizes (\(\{\hat{\mathbf{X}}_{j}^{\phi}\}\) and \(\{\mathbf{X}_{i}\}\)), the commonly used Kullback-Leibler (KL) divergence cannot be employed without modeling the two distributions separately.

\begin{table} \begin{tabular}{l l l} \hline \hline \multicolumn{1}{c}{ Parameter} & \multicolumn{1}{c}{Description} & \multicolumn{1}{c}{Prior for training the emulator} \\ \hline \(z\) & Redshift & Uniform (0, 1.55) \\ \(\log(M_{*}/M_{\odot})\) & log10 stellar mass & Fixed to \(M_{*}=1\ M_{\odot}\) when training the emulator \\ \(\beta_{1},\beta_{2},\beta_{3},\beta_{4}\) & Coefficients of SFH bases (Eq. 1) & Flat Dirichlet prior with \(0\leqslant\beta_{i}\leqslant 1,\ \sum_{i}\beta_{i}=1\) \\ \(t_{\text{burst}}\) [Gyr] & The lookback time when the star formation burst happens (Eq. 1) & Uniform (\(10^{-2}\), 13.27) \\ \(f_{\text{burst}}\) & The fraction of total stellar mass formed in the star formation burst (Eq. 1) & Uniform (0, 1) \\ \(\log(Z_{*}/Z_{\odot})\) & Stellar metallicity (\(Z_{\odot}=0.019\)) & Uniform (\(-2.6\), 0.3) \\ \(n_{\text{dust}}\) & The power-law index of the Calzetti et al. (2000) attenuation curve & Uniform (\(-3.0\), 1.0) \\ \(\tau_{1}\) & Birth-cloud dust optical depth & Uniform (0, 3.0) \\ \(\tau_{2}\) & Diffuse dust optical depth & Uniform (0, 3.0) \\ \hline \end{tabular} \end{table} Table 1: Parameters in the SPS model and their priors for training the galaxy spectrum emulator

On the other hand, divergences based on optimal transport theory, such as the Wasserstein distance (also known as the earth mover's distance), have been shown to be well-behaved when the KL divergence is not applicable. The Wasserstein distance quantifies the minimal "total cost" required to move a distribution (which can be viewed as a volume of soil) to the target distribution (the other volume of soil). This distance metric allows one to compare discrete distributions whose supports do not overlap and to quantify the spatial shift between them. Importantly, the Wasserstein distance is differentiable by construction, symmetric under exchange of the two distributions, and satisfies the triangle inequality, making it well-suited for comparing two discrete photometric distributions. The Wasserstein distance has been successfully applied to various topics in statistics, including variational inference (Ambrogioni et al., 2018), approximate Bayesian computation (Bernton et al., 2019), and most famously, Generative Adversarial Networks (Arjovsky et al., 2017).
It is also used in astronomy to characterize galaxy images (Holzschuh et al., 2022) and the topology of large-scale structures (Tsizh et al., 2023). We refer interested readers to Peyre and Cuturi (2018) and Feydy et al. (2019) for more details on Wasserstein distance and its application. Practically, the optimal transport solution is approximated using the Sinkhorn algorithm (Sinkhorn, 1964; Cuturi, 2013), which provides an efficient and scalable approximation to the Wasserstein distance. The Sinkhorn algorithm tries to minimize the earth moving cost together with an entropic regularization term with a coefficient \(\varepsilon\) (also denoted as "temperature"). Such a regularization makes the optimal transport problem solvable using iterations that only involve linear algebra and can run in parallel on GPUs. Physically speaking, the particles in the base distribution are mapped to a fuzzy collection of the target particles whose diameters are proportional to a blurring scale \(\sigma=\varepsilon^{1/p}\)(Feydy et al., 2019, where \(p\) is the index of the \(L^{p}\) norm used to calculate the cost). Thus, the Sinkhorn distance converges to the Wasserstein distance as \(\sigma\to 0\). A larger \(\sigma\) will produce a fuzzier match between the two distributions but a faster convergence. In this work, we use the Wasserstein-2 distance (\(\mathcal{W}_{2}\), where \(L^{2}\) norm is used in the cost function) to characterize the distance between the observed photometry \(\{\mathbf{X}_{i}\}\) and the synthetic photometry \(\{\mathbf{\hat{X}}_{j}^{\phi}\}\). \(\mathcal{W}_{2}\) is calculated using the implementation of Sinkhorn iteration in the Python package GeomLoss4(Ramdas et al., 2017; Feydy et al., 2019; Charlier et al., 2021). In our case, the two photometric data sets are often not of the same size. GeomLoss could nicely handle this unbalanced optimal transport problem. The Wasserstein distance \(\mathcal{W}_{2}(\{\mathbf{\hat{X}}_{j}^{\phi}\},\{\mathbf{X}_{i}\})\) is then used as a loss to train the normalizing flow \(q_{\phi}(\mathbf{\theta})\). Footnote 4: [https://www.kernel-operations.io/geomloss/api/install.html](https://www.kernel-operations.io/geomloss/api/install.html) ### Training During each training iteration, synthetic photometry is generated by sampling from the normalizing flow, and the Wasserstein distance \(\mathcal{W}_{2}(\{\mathbf{\hat{X}}_{j}^{\phi}\},\{\mathbf{X}_{i}\})\) is computed. The gradient of this distance metric is then backpropagated to train the normalizing flow. Because we initialize the normalizing flow to be a uniform distribution, the distribution of synthetic photometry is quite far from the observed one in the first few steps. Therefore, we calculate the Wasserstein distance using a relatively large blurring scale \(\sigma\) at the beginning of training to capture the global structure of the population distribution. Then we take an annealing strategy to gradually reduce \(\sigma\) such that the focus of the normalizing flow shifts from matching the mean values to matching increasingly subtle details as training proceeds (Chui and Rangarajan, 2000). Compared with using a small \(\sigma\) for all iterations, training using annealing \(\sigma\) is faster to converge. In practice, we initialize \(\sigma\) to be 0.3 (\(\sigma\) corresponds to the blur parameter in GeomLoss) and decrease \(\sigma\) by 0.05 every 60 steps until reaching \(\sigma=0.05\), after which we fix \(\sigma=0.002\) for the remaining iterations. 
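A minimal sketch of one training iteration with this blur annealing, using the `SamplesLoss` interface of GeomLoss, follows. Here `flow`, `forward_model`, and `optimizer` are schematic stand-ins, and the exact handling of the \(\sigma=0.05\) stage is one reading of the schedule described above.

```python
from geomloss import SamplesLoss  # Sinkhorn-based, differentiable approximation to W2

def blur_schedule(step):
    block = step // 60
    if block < 5:
        return 0.3 - 0.05 * block   # 0.3, 0.25, 0.20, 0.15, 0.10
    if block == 5:
        return 0.05                 # last annealed value before the fine stage
    return 0.002                    # fine-matching stage for the rest of training

def train_step(flow, forward_model, x_obs, optimizer, step, n_synth=10_000):
    # flow.sample and forward_model stand in for the normalizing flow and the
    # emulator plus noise injection of Section 2.2 (schematic interfaces).
    w2 = SamplesLoss(loss="sinkhorn", p=2, blur=blur_schedule(step))
    theta = flow.sample(n_synth)          # theta_j^phi ~ q_phi(theta)
    loss = w2(forward_model(theta), x_obs)
    optimizer.zero_grad()
    loss.backward()                       # gradients flow through the Sinkhorn loss
    optimizer.step()
    return loss.item()
```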
We find that the specific annealing schedule does not significantly affect the outcomes.

The realistic noise added to the synthetic photometry \(\mathbf{\hat{X}}_{j}^{\phi}\) can make the training much harder at the beginning, when the normalizing flow has not yet learned the global landscape of the population distribution. Therefore, we take an "anti-annealing" strategy where the noise (described by an effective SNR) is added gradually as the training goes on. In this way, the normalizing flow is guided by the bulk of the data in the beginning without paying much attention to the noise. The flow is then exposed to more realistic noise later on and adjusts its shape to match the details in the observed data. To do so, we reduce the noise level \(\sigma_{k}\) in the forward model (see §2.2) by a factor of \(R\), which follows an exponential decay with time: \(R(t)=1+R_{0}\cdot\exp(-\tau\cdot t/T)\), where \(\tau\) is the decay rate and \(T\) is the total number of training epochs. For the case studies in §4, we use \(R_{0}=30,\ \tau=12,\ T=800\). We determined these values by trial and error based on the mock test (see §4.1).

A trained normalizing flow is one MAP solution to the population distribution. It is also critical to understand the _posterior_ of the population distribution, i.e., the variation of the MAP solutions. The deep ensemble method is a popular approach to characterizing the Bayesian posterior by training the same deep learning model multiple times with different initializations and averaging the resulting models. The deep ensemble method has been shown to better approximate the Bayesian posterior than methods such as variational inference (e.g., Wilson and Izmailov, 2020). We take this ensemble learning approach and train a number of normalizing flows with different random seeds, then combine these models by drawing the same number of samples from each of them and aggregating the samples. The ensemble of flows roughly approximates the posterior of the population distribution.

PopSED is implemented in PyTorch (Paszke et al., 2019) and trained with the stochastic gradient-based optimizer Adam (Kingma and Ba, 2014). We use a 1-cycle learning rate policy (Smith and Topin, 2017) with an initial learning rate of \(3\times 10^{-5}\), a maximum learning rate of \(3\times 10^{-4}\), and a minimum learning rate of \(3\times 10^{-6}\): the learning rate first increases from the initial to the maximum value and then slowly drops to the minimum value. We train each normalizing flow for 800 epochs. The training loss and the validation loss are comparable, indicating that the flows do not overfit.

## 3 Data

To test the performance of PopSED on inferring galaxy properties and redshifts, we use the photometric data from the Galaxy And Mass Assembly (GAMA) survey (Driver et al., 2011). The GAMA survey is a spectroscopic survey targeting galaxies selected from a photometric survey down to \(r<19.8\) mag. It is therefore an ideal data set to test how well we can recover the redshift distribution by comparing our result with the spectroscopic redshifts. We use the aperture-matched photometry (Driver et al., 2016) from GAMA Data Release 3 (DR3\({}^{5}\), Baldry et al., 2018) in this work. The photometry in the \(ugriz\) bands comes from SDSS DR7 (Abazajian et al., 2009). We remove objects lacking the AUTO photometry (i.e., Kron photometry) and objects with no flux uncertainty.
We also apply a color cut \((J-K_{s})>0.025\) using photometry from the VIKING survey (Edge et al., 2013) for star-galaxy separation and an additional signal-to-noise ratio cut \(\mathrm{SNR}>1\) in all SDSS \(ugriz\) bands to remove marginally detected sources with poor photometric quality. There are 83,692 objects in our GAMA sample. Their spectroscopic redshift distribution peaks around \(z=0.15\) and extends from \(z=0\) to \(z\sim 0.60\). In this work, we only use the AUTO photometry in the SDSS \(ugriz\) bands to infer the population distribution. We also build the noise models for each SDSS band as described in §2.2.

Footnote 5: [http://www.gama-survey.org/dr3/](http://www.gama-survey.org/dr3/)

## 4 Results

With the PopSED framework presented above, we first test how well it works by applying it to mock observations (§4.1), where the ground truth is known. Then we apply our method to the GAMA data and compare the inferred properties with spectroscopic results and the literature in §4.2.

### Mock Observation

We construct a mock observation to test our method and understand its strengths and limitations. We first design a parameter distribution \(p(\mathbf{\theta})_{\mathrm{mock}}\) that roughly imitates a real galaxy population in the Universe, then sample from it and generate realistic synthetic photometry as the mock data set. Acknowledging the fact that the redshift and stellar mass should be correlated due to the depth limit of the survey, we use the GAMA DR3 spectroscopic results as a basis and resample the joint distribution of stellar masses and spectroscopic redshifts from GAMA DR3. The distributions of the other parameters are listed in Table 2. For simplicity, parameters other than \(\log(M_{\star}/M_{\odot})\) and \(z\) are assumed to be independent.

\begin{table} \begin{tabular}{l l} \hline \hline \multicolumn{1}{c}{ Parameter} & \multicolumn{1}{c}{Distribution} \\ \hline \(z\) and \(\log(M_{*}/M_{\odot})\) & Follow the joint distribution from GAMA DR3 data \\ \(\kappa_{1}\) & Truncated normal: \(\min=0\), \(\max=1\), \(\mu=0.5\), \(\sigma=0.3\) \\ \(\kappa_{2},\kappa_{3}\) & Uniform \((0,\ 1)\) \\ \(t_{\mathrm{burst}}\) [Gyr] & Truncated normal: \(\min=10^{-2}\), \(\max=13.27\), \(\mu=12\), \(\sigma=7\) \\ \(f_{\mathrm{burst}}\) & Truncated normal: \(\min=0\), \(\max=1\), \(\mu=0.1\), \(\sigma=0.7\) \\ \(\log(Z_{*}/Z_{\odot})\) & Truncated normal: \(\min=-2.6\), \(\max=0.3\), \(\mu=-1.2\), \(\sigma=0.9\) \\ \(n_{\mathrm{dust}}\) & Truncated normal: \(\min=-3.0\), \(\max=1.0\), \(\mu=2\), \(\sigma=2\) \\ \(\tau_{1}\) & Truncated normal: \(\min=0\), \(\max=3.0\), \(\mu=1\), \(\sigma=0.8\) \\ \(\tau_{2}\) & Truncated normal: \(\min=0\), \(\max=3.0\), \(\mu=0.6\), \(\sigma=0.8\) \\ \hline \end{tabular} \end{table} Table 2: The distribution of SPS parameters for the mock galaxy population. \(\kappa_{j}\) is used to generate \(\beta_{i}\) of the SFH, see §2.1.

Subsequently, we generate mock observations \(\mathbf{X}_{i}=F(\mathbf{\theta}_{i})\) in SDSS \(ugriz\) bands for \(\mathbf{\theta}_{i}\sim p(\mathbf{\theta})_{\rm mock}\). Realistic noise is added according to the noise model of GAMA DR3 (see §3). In total, our mock galaxy sample comprises 100,863 galaxies with \(\mathrm{SNR}>1\) across all SDSS bands. A short code sketch of this sampling step is given below.
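As referenced above, a minimal sketch of drawing the mock SPS parameters of Table 2 with truncated normals follows; `resample_joint` is a hypothetical placeholder for resampling the joint \(z\)-\(\log M_{\star}\) distribution from the GAMA DR3 catalog.

```python
import numpy as np
from scipy.stats import truncnorm

def tnorm(lo, hi, mu, sigma, size, rng):
    # scipy's truncnorm takes bounds standardized by (x - mu) / sigma.
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, size=size, random_state=rng)

rng = np.random.default_rng(42)
n = 100_863
kappa1 = tnorm(0.0, 1.0, 0.5, 0.3, n, rng)
kappa2, kappa3 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
t_burst = tnorm(1e-2, 13.27, 12.0, 7.0, n, rng)
f_burst = tnorm(0.0, 1.0, 0.1, 0.7, n, rng)
log_z = tnorm(-2.6, 0.3, -1.2, 0.9, n, rng)
n_dust = tnorm(-3.0, 1.0, 2.0, 2.0, n, rng)   # (mu, sigma) as printed in Table 2
tau1 = tnorm(0.0, 3.0, 1.0, 0.8, n, rng)
tau2 = tnorm(0.0, 3.0, 0.6, 0.8, n, rng)
# z and log M* are jointly resampled from the GAMA DR3 catalog
# (resample_joint is a stand-in for that step, not a real function):
# z, log_mstar = resample_joint(gama_catalog, n)
```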
To better visualize the mock galaxy population, we calculate several characteristic quantities of the galaxies, including the average SFR in the past 0.1 Gyr (\(\log\mathrm{SFR}_{0.1\mathrm{Gyr}}\)) and the mass-weighted age (\(t_{\rm age,MW}\)), from the parameter distribution \(p(\mathbf{\theta})_{\rm mock}\). The gray contours in Figure 2 represent the mock galaxy population by showing the joint distribution of stellar mass, redshift, SFR, metallicity, and age. The corresponding photometric distributions are also displayed as gray contours in the right panel of Figure 2. The distribution of the mock galaxy population in the full parameter space is shown in Figure 6 in Appendix B.

We run PopSED on the mock photometric data of 100,863 galaxies. Each normalizing flow takes \(\sim 40\) minutes to train using an NVIDIA A100 GPU. We train 10 independent normalizing flows with different random seeds and aggregate the samples drawn from each flow. The inferred posterior of the population distribution is shown as blue contours in the left panel of Figure 2 and in Figure 6 in Appendix B. The light blue histograms indicate the variations among different flows, whereas the dark blue histograms show the combined results. We find that photometric data can constrain the stellar mass, redshift, and SFR of the galaxy population very accurately. The inferred galaxy population agrees with the ground truth (gray contours) remarkably well in all dimensions. Moreover, the ensemble of flows encompasses the true distribution, suggesting that PopSED is able to approximate the posterior of the population distribution using the ensemble method. The stellar metallicity and mass-weighted age, however, show relatively large scatter among different flows; we attribute this to photometric data not being informative enough to constrain the distributions of these two parameters.

Figure 2: _Left panel_: The mock galaxy population (gray contours) and the inferred galaxy population (blue contours) using our method. We calculate the average SFR within the past 0.1 Gyr (\(\log\mathrm{SFR}_{0.1\mathrm{Gyr}}\)) and the mass-weighted age (\(t_{\rm age,MW}\)) using the inferred SPS parameters. The lighter blue histograms show the individual normalizing flows, and the dark blue histogram is the result after averaging 10 flows. The inferred galaxy population agrees with the truth very accurately. _Right panel_: The mock photometric data in SDSS \(ugriz\) bands (gray contours) is practically indistinguishable from the photometry of the inferred galaxy population using PopSED (blue contours).

### GAMA Sample

Motivated by the success of the mock test in §4.1, we apply PopSED to the 83,692 galaxies in the GAMA sample (see §3). We run 30 independent normalizing flows and combine their samples together. The left panel in Figure 3 shows the inferred population distribution among the stellar mass, redshift, average SFR within the past 0.1 Gyr, stellar metallicity, and the mass-weighted age. Similar to Figure 2, the light red histograms correspond to individual flows. For completeness, we also show the full population distribution in Figure 7 in Appendix B. Thanks to the GAMA spectroscopy, we are able to compare our inferred redshift distribution with the spectroscopic redshifts. The gray contours in Figure 3 show the distributions of the spectroscopic redshifts and the stellar masses from the GAMA DR3 catalog. Our redshift estimates align well with the spectroscopic results, despite photometric data being intrinsically less informative of redshift.
In terms of stellar mass, we compare our stellar mass distribution to the distribution of the total formed stellar mass (logmintsfh) in the GAMA catalog. We simply combine the individual stellar masses in the GAMA catalog without considering the reported uncertainties. We find that our stellar mass distribution is on average 0.35 dex higher than that of GAMA. We note that the stellar mass in GAMA is derived for individual galaxies following the SPS model in Taylor et al. (2011), where a \(\tau\)-model for the SFH and the stellar evolution models in Bruzual and Charlot (2003) are used. Given these differences in SPS models between GAMA and our method, it is not surprising to see an offset between GAMA's stellar mass and ours. To account for this systematic effect, we add a constant 0.35 dex offset to all GAMA stellar masses and then compare with our results in Figure 7. We find that the GAMA stellar mass distribution agrees well with ours, but is slightly skewed toward the lower-mass end. This might be because the \(\tau\)-model SFH tends to produce underestimated stellar masses (e.g., Carnall et al., 2019). As with the mock test, the constraints on metallicity and age are relatively less stringent.

Additionally, the right panel in Figure 3 shows the distributions of the GAMA data and the synthetic photometric data from our inferred galaxy population. While a minor difference is observed in the \(u\)-band data, the two distributions show excellent agreement in the other bands.

Figure 3: _Left panel_: The inferred galaxy population from GAMA photometric data (red contours) and the distribution of spectroscopic redshift and stellar mass from the GAMA catalog (gray contours). The inferred redshift and stellar mass distributions agree well with spectroscopic results. _Right panel_: The GAMA photometric data (gray contours) and the synthetic photometry of the inferred galaxy population (red contours). Despite the small difference in the \(u\)-band photometry, the two photometric distributions agree with each other quite well.

Leveraging the inferred population distribution, we can take slices and marginalize over irrelevant parameters to study correlations among key physical parameters. To showcase the capabilities of PopSED for such population-level analysis, we focus on constraining the star-forming main sequence (SFMS) for galaxies at \(z<0.1\). With the samples drawn from the inferred population distribution in hand, we select samples with \(z<0.1\) and calculate the average SFR within the past 0.1 Gyr and the surviving stellar mass using the fitting function in Hearin et al. (2021)6. Here we choose the surviving stellar mass rather than the total formed mass to better compare with literature results. The distribution of the inferred galaxy population on the \(M_{\star}\)-SFR plane is shown as the gray hexagons in Figure 4, colored on a logarithmic scale. Figure 4 clearly reveals the main sequence of star-forming galaxies as well as a continuous transition from star-forming to quiescent at \(10<\log M_{\star}/M_{\odot}<11.5\).

Footnote 6: [https://github.com/ArgonneCPAC/dsps/blob/main/dsps/imf/surviving_mstar.py](https://github.com/ArgonneCPAC/dsps/blob/main/dsps/imf/surviving_mstar.py)

To contextualize our findings, we further compare the derived galaxy distribution on the \(M_{\star}\)-SFR plane with literature results. Renzini & Peng (2015, orange line in Figure 4) derived a linear SFMS using the SDSS galaxies at \(0.02<z<0.085\) whose SFRs are estimated using H\(\alpha\) following Brinchmann et al. (2004).
Sanchez et al. (2019, blue line) studied the SFMS for galaxies in the MaNGA survey (Bundy et al., 2015) using the average SFR within the past 0.1 Gyr from SED fitting. Our results are qualitatively consistent with these results at \(9.5<\log M_{\star}/M_{\odot}<11.5\). At the lower-mass end, McGaugh et al. (2017, purple line) explored the SFMS of low surface brightness galaxies at \(D<100\) Mpc using H\(\alpha\)-based SFRs. Our results agree well with McGaugh et al. (2017) at \(\log M_{\star}/M_{\odot}<9.5\) and even capture the trend of the SFMS slope flattening with increasing stellar mass. Nonetheless, it should be noted that proper weighting is needed to account for observational completeness when deriving the SFMS from the population distribution. We defer such analysis to future studies.

Figure 4: The distribution of the inferred galaxy population at \(z<0.1\) on the \(M_{\star}\)-SFR plane. The star-forming main sequence and the quiescent population are clearly shown. The inferred galaxy population agrees well with the SFMSs from Renzini & Peng (2015); McGaugh et al. (2017); Sánchez et al. (2019).

## 5 Discussion

### Advantage of Population-Level Inference using PopSED

Population-level analyses of galaxies provide important insights into key questions of astrophysics. However, the traditional ways of SED modeling on an individual galaxy basis are very expensive. Using traditional SED fitting methods, an analysis of \(10^{5}\) galaxies will take up to \(2\times 10^{6}\) CPU-hours. Even with the development of accelerated SED fitting (e.g., Alsing et al., 2020; Hearin et al., 2021; Hahn & Melchior, 2022; Khullar et al., 2022; Wang et al., 2023), an analysis of \(10^{5}\) galaxies will still take up to \(\sim 10^{3}\) GPU-hours. PopSED is able to recover the posterior of the population distribution for \(\sim 10^{5}\) galaxies within \(\sim 10\) GPU-hours, 100 times faster than the SBI-based methods. In contrast to PopSED, it is challenging to change the SPS and noise models in SBI-based methods because of the cost of generating training data and retraining the SBI model.

Moreover, it is non-trivial to combine the posteriors of individual galaxies from SED fitting to construct a population-level distribution in a statistically rigorous way. Individual posteriors should not be multiplied directly because the prior distribution will be multiplied many times and dominate the resulting posterior. Hierarchical Bayesian models are often used to tackle this problem, and such population-level inference has been successfully applied to the studies of galaxy redshifts (Leistedt et al., 2016; Malz & Hogg, 2020; Alsing et al., 2022) and gravitational wave sources (e.g., Wong et al., 2020). Taking the formulation in Malz & Hogg (2020), the population posterior is often described by simple statistical models (e.g., a Gaussian mixture model) parameterized by hyper-parameters \(\mathbf{\varphi}\). The posterior of the population distribution can be written as \[p(\mathbf{\varphi}|\{\mathbf{X}_{i}\})=p(\mathbf{\varphi})\cdot\prod_{i=1}^{N}\int\frac{p(\mathbf{\theta}_{i}|\mathbf{X}_{i})\,p(\mathbf{\theta}_{i}|\mathbf{\varphi})}{p(\mathbf{\theta}_{i})}\,\mathrm{d}\mathbf{\theta}_{i}.\] Evaluating this posterior requires the calculation of \(N\) integrals, where the integrals are evaluated using Monte Carlo samples from the individual posteriors \(p(\mathbf{\theta}_{i}|\mathbf{X}_{i})\). For each galaxy, one needs to save thousands of samples from its posterior to compute the integral.
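A schematic Monte-Carlo evaluation of this population posterior illustrates why thousands of posterior samples per galaxy must be stored; the function arguments below are illustrative callables, not part of any specific package.

```python
import numpy as np

def log_marginal_term(theta_samples, log_q_phi, log_prior):
    # One factor of the product above: E_{theta ~ p(theta|X_i)}[p(theta|phi)/p(theta)],
    # estimated from stored samples of a single galaxy's posterior.
    log_w = log_q_phi(theta_samples) - log_prior(theta_samples)
    m = log_w.max()                     # log-mean-exp for numerical stability
    return m + np.log(np.mean(np.exp(log_w - m)))

def log_population_posterior(log_p_phi, per_galaxy_samples, log_q_phi, log_prior):
    # Sum over all N galaxies; per_galaxy_samples holds one sample array per galaxy.
    return log_p_phi + sum(log_marginal_term(s, log_q_phi, log_prior)
                           for s in per_galaxy_samples)
```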
It is very expensive to store all the individual posteriors and computationally heavy to run MCMC chains to construct the population distribution in this fashion. It is also difficult to ensure that the individual posteriors are accurate enough such that the combined population posterior is not biased when combining a large number of them.

PopSED bypasses the problem of deriving and combining individual posteriors because we directly find the optimal solution for the population distribution by minimizing the distance between the observed data and the synthetic data. PopSED is much more efficient and robust at inferring the underlying population distribution. The ensemble of flows also provides an effective estimate of the posterior of the population distribution, shown as the light histograms in Figures 2 and 3.

### Potential applications of PopSED

Many science cases would benefit from having the population distribution in hand. As we demonstrate in §4.2, the population distribution can be marginalized to learn about key relationships such as the star-forming main sequence and the stellar mass function. When applied to galaxy samples that are selected differently (e.g., by colors), PopSED will be able to reveal the underlying physical differences between the galaxy populations.

Accurately determining the redshift distribution of galaxies is crucial for extracting cosmological parameters from weak lensing surveys (e.g., Mandelbaum, 2018; Newman and Gruen, 2022; Dalal et al., 2023). In this work, we demonstrate that PopSED is capable of recovering the redshift distribution of GAMA galaxies very well. Such population-level analyses of galaxies (e.g., Alsing et al., 2022) could greatly help the upcoming weak lensing surveys including LSST (Ivezic et al., 2019), _Euclid_ (Racca et al., 2016), and _Roman_ (Spergel et al., 2015).

The PopSED framework can also be employed to generate synthetic observations in specified filters with different noise levels. These simulated observations can serve as a good reference for designing new surveys and assessing survey completeness (e.g., Luo et al., 2023). The target selection for dedicated studies (such as spectroscopic surveys) can now be done by selecting regions in the population distribution and then translating them into the photometry (color) space. The framework also enables the exploration of outliers by sampling from low-probability regions in the population distribution (e.g., Liang et al., 2023) and subsequently identifying objects in the photometric space that resemble these outliers. Furthermore, the idea of population-level inference presented in this paper could be generalized to understand the population behind galaxy spectra (e.g., DESI, Hahn et al., 2022; PFS, Greene et al., 2022), quasar spectra (e.g., Sun et al., 2022), and stellar spectra (e.g., Gaia XP spectra, Zhang et al., 2023).

### Limitations and Future Work

**Runtime:** As the number of galaxies and bandpasses in photometric surveys increases, the bottleneck of running PopSED will be the cost of computing the Wasserstein distance. In this work, we use the Sinkhorn iteration to approximate the Wasserstein distance. However, the time complexity of the Sinkhorn algorithm is \(O(MN)\), where \(M\) and \(N\) are the numbers of samples in the two discrete distributions (Altschuler et al., 2017). Thus, the cost of PopSED will grow faster with the number of galaxies than that of traditional methods.
One possible solution is to use "sliced Wasserstein distance" (e.g., Bonneel et al., 2015; Kolouri et al., 2018). Instead of solving optimal transport in high dimensions, one can slice the high-dimensional probability distributions into a number of one-dimensional distributions and calculate their Wasserstein distances. Therefore, the sliced Wasserstein distance is much faster to compute. We can also accelerate PopSED by calculating the Wasserstein distance using small batches, which also increases the stochasticity when training the normalizing flows. **Accuracy:** The SPS model in this work is emulated using a neural network, which is trained using synthetic spectra by sampling the parameter space. We notice that generating training samples and training such an emulator can be costly. The emulator might also perform poorly near the boundary of the prior used to train it. Furthermore, our SPS model does not include emission lines, making it inapplicable to narrow-band photometric data. Hearin et al. (2021) presented DSPS, a fast and differentiable SPS model implemented in jax with no emulation. DSPS does not need training and can be directly used in PopSED once it is more developed and tested, as an alternative to the SPS emulator. **Robustness:** Because of the low constraining power of broadband photometry on the physical parameters, we take the annealing strategies (SS2.5) to make sure the normalizing flow smoothly converges from the uniform prior to the solution. A more informative prior distribution would likely make the training more stable. Such a prior might be learned from running PopSED on a small subset of the data. **Selection effects:** The survey completeness and selection effect are not encoded in PopSED. One solution is to include a differentiable selection function in the forward model (e.g., Alsing et al., 2022). Nevertheless, if one has a well-defined survey completeness as a function of photometry, one can sample from the population distribution, calculate the corresponding completeness for each sample, and then reweight the samples to construct a new population distribution. We defer such studies to future works. ## 6 Summary In this work, we propose PopSED, a novel framework to efficiently infer the distribution of physical parameters for an entire galaxy population using broadband photometric data. We focus on the _population-level analysis_ of galaxies rather than inferring physical properties for individual objects. We successfully applied PopSED to GAMA DR3 data and obtained the posterior of the population distribution that is consistent with spectroscopic results. Our main findings and prospects are as follows: * The overall structure of PopSED is highlighted in Figure 1. We approximate the population-level distribution of physical parameters using normalizing flow (SS2.3), a highly flexible neural density estimator. We then forward model the synthetic photometry of galaxies using the samples generated from the normalizing flow and the emulator for the stellar population synthesis model (SS2.1 and SS2.2). The normalizing flow is trained to minimize the Wasserstein distance (SS2.4) between this synthetic photometric data and the observation. After training, the flow is able to approximate the population posterior and imitate the observed photometric data distribution. * PopSED is able to analyze \(10^{5}\) galaxies within 1 GPU hour, being \(\sim 10^{5}\) times faster than traditional MCMC methods and \(\sim 100\) times faster than SBI-based methods. 
It also circumvents the problem of combining individual posteriors to derive the population distribution. * We validate PopSED through a carefully designed mock observation (§4.1). The results show that PopSED can accurately recover the underlying population posterior, especially for key parameters such as stellar mass, redshift, and SFR (Figure 2). * We then apply our method to 83,692 galaxies from the GAMA survey (§4.2). Our inferred population distribution shows a remarkable agreement with the distribution of spectroscopic redshift and stellar mass, despite photometric data being less informative for constraining redshift (Figure 3). We further demonstrate the versatility of PopSED by studying the star-forming main sequence for galaxies at \(z<0.1\). Our results are in good agreement with the literature results (Figure 4). * PopSED holds promise for application to future photometric surveys to derive redshift distributions for weak lensing studies and to understand galaxy formation and evolution. The idea of population-level inference can also be applied to analyze stellar spectra and galaxy spectra.

## Acknowledgment

J.L. is grateful for discussions with Kaixuan Huang, Jenny Greene, Sihao Cheng, Yuan-Sen Ting, He Jia, Alexie Leauthaud, Rachel Mandelbaum, Meng Gu, Yifei Luo, Andy Goulding, Alexa Villaume, and Runquan Guan. We are grateful for the comments from anonymous reviewers of the 2023 ICML ML4astro workshop. This work was supported by the AI Accelerator program of the Schmidt Futures Foundation. The authors are pleased to acknowledge that the work reported on in this paper was substantially performed using the Princeton Research Computing resources at Princeton University, which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's Research Computing.

GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalog is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programs including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT, and ASKAP, providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is [http://www.gama-survey.org/](http://www.gama-survey.org/).

NumPy (Harris et al., 2020), Astropy (Astropy Collaboration et al., 2013), SciPy (Jones et al., 2001), Matplotlib (Hunter, 2007), PyTorch (Paszke et al., 2019), Speculator (Alsing et al., 2020), FSPS (Conroy et al., 2009; Conroy and Gunn, 2010), python-fsps (Johnson et al., 2021), sedpy (Johnson, 2021), speclite, sbi (Tejero-Cantero et al., 2020), geomloss (Feydy et al., 2019), pyKeOps (Charlier et al., 2021).

## Appendix A Details on training the spectrum emulator

Our spectrum emulator is very similar to the emulators in Alsing et al. (2020) and Hahn and Melchior (2022). To train the emulator, we generate _restframe_ spectra \(L_{\lambda}(\mathbf{\theta})\) from 1000Å to 60,000Å for galaxies with a fixed stellar mass \(M_{\star}=1~{}M_{\odot}\). We sample the parameter space according to the prior distributions listed in Table 1 to make the training data more representative. Uninformative priors are employed to avoid introducing any bias when training the emulator.
The SFH coefficients \(\beta_{i}\) are sampled from a flat Dirichlet prior, which is equivalent to a uniform distribution over the open standard three-dimensional simplex. Following Betancourt (2012), we first sample \(\kappa_{j}\sim\text{Uniform}\left(0,1\right)\), \(j=1,2,3\), then transform \(\kappa_{j}\) to \(\beta_{i}\). Uniform priors are used for other SPS parameters. When generating the training spectra, the redshift \(z\) is only used to calculate the galaxy age \(t_{\text{age}}\) which marks the endpoint of the SFH. In the end, we generate \(3\times 10^{6}\) restframe spectra to train the emulator and \(10^{4}\) spectra for validation. We split the full wavelength range into five bins: 1000-2000A, 2000-3600A, 3600-5500A, 5500-7410A, 7410-60000A, and we train one emulator for each wavelength bin separately. In order to reduce the dimensionality, the restframe spectra are first compressed using the principal component analysis (PCA) technique (Alsing et al., 2020). We use \(N_{\text{bases}}=80,50,50,50,50\) PCA bases for each wavelength bin, and we find these PCA bases are sufficient to recover the training spectra to a high accuracy (\(1\sigma\) error \(<0.2\%\)). Then we train a 5-layer neural network to predict the PCA coefficients given the SPS parameters \(\mathbf{\theta}\). We adopt the customized activation function that is used in Alsing et al. (2020): \(a(x)=\gamma x+(1-\gamma)\,x\cdot\text{sigmoid}(\beta x)\), where \(\beta,~{}\gamma\) are trainable parameters. This activation function combines the Swish function (also known as SiLU, Elfwing et al., 2017; Ramachandran et al., 2017) with a linear component and works better than the commonly used ones such as Sigmoid or ReLU on emulating galaxy spectra. The neural network is implemented in PyTorch(Paszke et al., 2019) and trained using the Adam optimizer (Kingma and Ba, 2014) similar to Alsing et al. (2020). Finally, we calculate the observed spectrum of a galaxy with a stellar mass \(M_{\star}\) at redshift \(z\) by redshifting and scaling the predicted restframe spectrum \(L_{\lambda}(\mathbf{\theta})\) following \[l_{\lambda}(\mathbf{\theta})=\frac{L_{\lambda/(1+z)}(\mathbf{\theta})\cdot(M_{\star}/ M_{\odot})}{4\pi d_{L}^{2}(z)(1+z)},\] (A1) where \(d_{L}(z)\) is the luminosity distance. We further convolve \(l_{\lambda}(\mathbf{\theta})\) with the transmission curves \(R_{k}(\lambda)\) of SDSS filters in Doi et al. (2010) to obtain the noiseless flux \(f_{k}\) in SDSS bands7: Footnote 7: The transmission curves are calculated as the reference response of the filters multiplied by the atmospheric transmission at an airmass 1.3. We note that this set of transmission curves is different from the one measured by James E. Gunn in 2001 ([https://www.sdss4.org/instruments/camera/](https://www.sdss4.org/instruments/camera/)). The \(u\)-band transmission curve has changed most significantly from 2001 to 2010. \[f_{k}(\mathbf{\theta})=\int l_{\lambda}(\mathbf{\theta})R_{k}(\lambda)\mathrm{d} \lambda,\quad k=\{u,g,r,i,z\}.\] (A2) We note that these operations are not embedded in the neural networks but are rather post-processing. We show the validation accuracy of the spectrum emulator \(l_{\lambda}(\mathbf{\theta})\) at \(z=0\) in Figure 5, where we limit our wavelength range to 1000-12000A for brevity. 
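As a minimal sketch of Eqs. (A1)-(A2), assuming the cosmology adopted in §1 and leaving detailed unit bookkeeping aside:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.7, Om0=0.307)   # Planck 2016 values adopted in Section 1

def band_flux(wave_rest, L_rest, z, mstar, resp_wave, resp):
    """Redshift and scale the restframe spectrum (Eq. A1), then integrate it
    against a filter response curve (Eq. A2)."""
    d_l = cosmo.luminosity_distance(z).to(u.cm).value
    wave_obs = wave_rest * (1.0 + z)
    l_obs = L_rest * mstar / (4.0 * np.pi * d_l**2 * (1.0 + z))   # Eq. (A1)
    R = np.interp(wave_obs, resp_wave, resp, left=0.0, right=0.0)
    return np.trapz(l_obs * R, wave_obs)                          # Eq. (A2)
```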
The emulator achieves \(<1\%\) accuracy at \(3000\,\mathrm{\SIUnitSymbolAngstrom}<\lambda<12000\,\mathrm{\SIUnitSymbolAngstrom}\) and \(\sim 2\%\) accuracy at \(1000\,\mathrm{\SIUnitSymbolAngstrom}<\lambda<3000\,\mathrm{\SIUnitSymbolAngstrom}\). The errors on the predicted broadband SEDs for galaxies at \(z=0\) are shown in the right panel of Figure 5. The emulator is able to predict the broadband photometry to an accuracy of \(\sim 0.01\) mag. This accuracy is more than enough for population-level analysis of galaxy SEDs in photometric surveys. Since we are mostly interested in low- to intermediate-redshift galaxies (\(z<1\)) in this work, the emulation error at the very blue end does not affect the inference since the observation noise almost always dominates the photometry error. However, if one wants to study higher-redshift galaxies, a better emulator at the blue end is required. ## Appendix B Full population posteriors Here we present the full population posterior of galaxies from the mock galaxy population (Figure 6) and the GAMA sample (Figure 7).
2309.16780
Isospectral Potentials and Quantum Mechanical Functions for Neutron-Neutron Scattering
In this paper we have constructed inverse isospectral potentials for 1S0-nn state by fitting the experimental SPS using Variational Monte-Carlo technique in tandem with PFM technique. The isospectral potentials are obtained such that the cost measure i.e., mean absolute error (MAE) between the obtained and experimental SPS are less than 1 and the parameters give the low energy scattering parameters (`a' and `re') very close to the experimental values. The S-channel SPS for 1S0-nn have been obtained, with a MAE with respect to experimental data for lab energies up to 350 MeV, to less than 1.
Anil Khachi
2023-09-28T18:21:25Z
http://arxiv.org/abs/2309.16780v1
# Isospectral Potentials and Quantum Mechanical Functions for Neutron-Neutron Scattering ###### Abstract In this paper we have constructed inverse isospectral potentials for the \({}^{1}S_{0}-nn\) state by fitting the experimental SPS using the Variational Monte-Carlo technique in tandem with the PFM technique. The isospectral potentials are obtained such that the cost measure, i.e., the mean absolute error (MAE) between the obtained and experimental SPS, is less than 1, and the parameters give low energy scattering parameters ('\(a\)' and '\(r_{e}\)') very close to the experimental values. The S-channel _SPS_ for \({}^{1}S_{0}-nn\) have been obtained with a MAE of less than 1 with respect to experimental data for lab energies up to 350 MeV. **keywords:** nn-scattering, Inverse potential, Phase function method, Variational Monte-Carlo, Morse potential, Amplitude, wavefunction ## 1 Introduction The experimental setup in neutron-neutron scattering consists of a neutron beam generated by the \({}^{3}H(d,n)^{4}He\) reaction. The target consists of heavy water enriched in \(D_{2}O\). The nd-breakup reaction, \(n+d\to p+n+n\), leads to two interacting neutrons in the final state. In addition to nd breakup, the reaction \(\pi^{-}+d\rightarrow\gamma+n+n\) helps in determining the scattering length for the _nn_ interaction. The neutron-neutron interaction is important because it allows the investigation of charge dependence, which is of fundamental interest to the nuclear community. It is well known that to investigate charge independence in nuclear forces it is sufficient to compare the _np_ and _pp_ interactions. It is also known that more accurate information about these forces requires precise low energy scattering data. Under such circumstances only S-waves are important. The investigation of low energy scattering parameters is directly related to testing the hypotheses of charge independence and charge symmetry. Both theoreticians and experimentalists are keen to precisely determine the low energy parameters for the _np_, _pp_ and _nn_ interactions. The nucleon-nucleon interactions at low energies are usually expressed in terms of the effective range (\(r_{e}\)) and scattering length (\(a\)). These parameters provide information on the charge dependence of the nuclear forces. There have been various arguments regarding the possibility of violation of charge independence and charge symmetry; some of them are discussed here: * Darewych _et al._[1] found a substantial difference between the _np_ and _nn_ potential curves, with \(V_{0}=40.38\) MeV for \({}^{1}S_{0}-nn\) and \(V_{0}=61.99\) MeV for \({}^{1}S_{0}-np\), and hence observed a "violation of charge independence". The violation was attributed to the unavailability of high energy _nn_ data; they used high energy _np_ and _pp_ data for fitting the _nn_ SPS curve. * Henley and others [2] have discussed the small departures from charge symmetry and charge independence. Such departures can originate from _em_ forces, are related to the basic understanding of the nuclear interaction, and are ultimately connected with the fundamental principles of elementary particle physics. Since the departures have been observed to be very small, it is imperative to acquire precise data on the nucleon-nucleon interaction.
* Babenko and Petrov [3] observed, from a comparison of the low-energy parameters for the _np_ system and their counterparts for the _pp_ and _nn_ systems, that the charge dependence of nuclear forces is violated, which is associated with the mass difference between the charged and neutral pions. * The values of the scattering length and effective range are sensitive to any small change in the _nn_ potential. As stated by E.S. Konobeevski _et al._[4], a change of 7% in V(r) may lead to a change of 20-30% in the calculated scattering length. The above mentioned points are related either to the precise knowledge of the data or to the fundamental knowledge of elementary physics. Our main aim in this paper is to use the adapted nn data of Wiringa _et al._[5] and obtain the interaction potentials by the GOA and CDA techniques developed by our group [6][7][8][9]. * In GOA we optimize 3 parameters (\(V_{0},r_{m}\)&\(a_{m}\)) against all 11 SPS data points at once and obtain one set of globally optimized model parameters. This is shown by the solid blue line in Figure 1. * In CDA we optimize the 3 parameters against only 3 SPS data points at a time. The CDA results create a set of isospectral potentials, shown in Figure 1 as a yellow ribbon. The details of the CDA procedure are provided in the Appendix (Section 6). ## 2 Methodology: The Morse function [10] is given by: \[V_{\rm Morse}(r)=V_{0}\left(e^{-2(r-r_{m})/a_{m}}-2e^{-(r-r_{m})/a_{m}}\right) \tag{1}\] where the model parameters \(V_{0}\), \(r_{m}\) and \(a_{m}\) reflect the strength of the interaction, the equilibrium distance at which maximum attraction is felt (equilibrium separation), and the range parameter, respectively. It has all the features observed in a typical scattering experiment, such as strong repulsion at short distances, maximum attraction at an equilibrium distance \(r_{m}\), followed by a quickly decaying tail at large distances. It can also be observed that realistic N-N interaction potentials for S-states, such as Argonne v18 and Reid93 [8], resemble a Morse function. Also, the phenomenological Malfliet-Tjon potential [11] has a similar shape. Further, the TISE with a Morse interaction potential has already been solved analytically for bound state energies (\(E<0\)) [12]. Hence, it can be considered a good choice for modeling the interaction between any two scattering particles. ### Phase Function Method: The Schrödinger wave equation for a spinless particle with energy E and orbital angular momentum \(\ell\) undergoing scattering is given by \[\frac{\hbar^{2}}{2\mu}\bigg{[}\frac{d^{2}}{dr^{2}}+\big{(}k^{2}-\ell(\ell+1)/r ^{2}\big{)}\bigg{]}u_{\ell}(k,r)=V(r)u_{\ell}(k,r) \tag{2}\] where \(k=\sqrt{E/(\hbar^{2}/2\mu)}\). The second-order differential equation Eq. 2 can be transformed into a first-order non-homogeneous differential equation of Riccati type [13, 14]: \[\delta^{\prime}_{\ell}(k,r)=-\frac{V(r)}{k}\bigg{[}\cos(\delta_{\ell}(k,r)) \hat{j}_{\ell}(kr)-\sin(\delta_{\ell}(k,r))\hat{\eta}_{\ell}(kr)\bigg{]}^{2} \tag{3}\] Here the prime in Eq. 3 denotes differentiation of the phase shift with respect to distance, and the Riccati-Hankel function of the first kind is related to \(\hat{j}_{\ell}(kr)\) and \(\hat{\eta}_{\ell}(kr)\) by \(\hat{h}_{\ell}(r)=-\hat{\eta}_{\ell}(r)+i\,\hat{j}_{\ell}(r)\).
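As an illustration of Eqs. (1) and (3), the following sketch evaluates the Morse potential and integrates the \(\ell=0\) phase equation (the reduction given in Eq. 5 below) outward from the origin. The value of \(\hbar^{2}/2\mu\), and the identification of \(E\) with the c.m. energy \(E_{lab}/2\), are our assumptions; SciPy's adaptive RK45 solver stands in for the fixed-step RK-5 scheme used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

HBAR2_2MU = 41.47  # MeV fm^2; hbar^2/(2 mu) with mu = m_n/2 for nn (assumed value)

def v_morse(r, V0=67.523, rm=0.925, am=0.376):
    """Morse potential of Eq. (1); the defaults are the GOA parameters of Table 1."""
    x = np.exp(-(r - rm) / am)
    return V0 * (x * x - 2.0 * x)

def s_wave_phase_shift(E_cm, V0=67.523, rm=0.925, am=0.376, r_max=20.0):
    """Integrate the l=0 phase equation delta' = -(U(r)/k) sin^2(kr + delta)
    (Eq. 5 below) with U = 2*mu*V/hbar^2 and delta(0) = 0; returns delta(r_max)
    in degrees."""
    k = np.sqrt(E_cm / HBAR2_2MU)

    def rhs(r, delta):
        return -(v_morse(r, V0, rm, am) / (HBAR2_2MU * k)) * np.sin(k * r + delta) ** 2

    sol = solve_ivp(rhs, (1e-6, r_max), [0.0], rtol=1e-8, atol=1e-10)
    return np.degrees(sol.y[0, -1])

# Example: S-wave phase shift at E_lab = 50 MeV (E_cm = E_lab / 2 for equal masses)
print(s_wave_phase_shift(E_cm=25.0))
```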
In integral form, Eq. 3 can be written as: \[\delta(k,r)=\frac{-1}{k}\int_{0}^{r}V(r)\bigg{[}\cos(\delta_{\ell}(k,r))\hat{ j}_{\ell}(kr)-\sin(\delta_{\ell}(k,r))\hat{\eta_{\ell}}(kr)\bigg{]}^{2}dr \tag{4}\] Eq. 3 is numerically solved using the Runge-Kutta \(5^{th}\) order (RK-5) method with the initial condition \(\delta_{\ell}(0)=0\). For \(\ell=0\), the Riccati-Bessel and Riccati-Neumann functions \(\hat{j}_{0}\) and \(\hat{\eta}_{0}\) simplify to \(\sin(kr)\) and \(-\cos(kr)\), so Eq. 3 for \(\ell=0\) becomes \[\delta^{\prime}_{0}(k,r)=-\frac{V(r)}{k}\sin^{2}[kr+\delta_{0}(k,r)] \tag{5}\] The equation for the amplitude function [15] is obtained in the form \[\begin{split} A^{\prime}_{\ell}(r)=&-\frac{A_{ \ell}V(r)}{k}\left[\cos(\delta_{\ell}(k,r))\hat{j}_{\ell}(kr)-\sin(\delta_{ \ell}(k,r))\hat{\eta}_{\ell}(kr)\right]\\ &\times\left[\sin(\delta_{\ell}(k,r))\hat{j}_{\ell}(kr)+\cos( \delta_{\ell}(k,r))\hat{\eta}_{\ell}(kr)\right]\end{split} \tag{6}\] and the wavefunction [15] is obtained from \[u_{\ell}(r)=A_{\ell}(r)\left[\cos(\delta_{\ell}(k,r))\hat{j}_{\ell}(kr)-\sin( \delta_{\ell}(k,r))\hat{\eta}_{\ell}(kr)\right] \tag{7}\] In the above equations, the function \(\delta_{0}(k,r)\) was termed the "phase function" by Morse and Allis [16]. A significant advantage of the _PFM_ is that the phase shifts are expressed directly in terms of the potential, with no reference to the wavefunction. This has been utilised in this paper to obtain inverse potentials in an innovative way, by implementing a modified _VMC_ in tandem with the _PFM_. The technique optimizes the model parameters of the potential to obtain the best match with respect to the experimental SPS values. Also, rather than solving the second-order Schrödinger equation, we only need to solve a first-order non-homogeneous differential equation whose asymptotic value directly gives the _SPS_. ### Optimization of Morse function model parameters using VMC: Typically, _VMC_ is utilized for obtaining the ground state energy for a given potential. The method starts with a trial wavefunction, which is varied at a random location by a random amount in the Monte-Carlo sense. Then the energy is determined using the newly obtained wavefunction and the variational principle is applied. This process is repeated iteratively until one converges to the ground state. Here, we instead vary the potential and minimise the error with respect to experimental data, as follows: **Initialisation step:** To begin the optimisation procedure, the Morse parameters \(V_{0}\), \(r_{m}\) and \(a_{m}\) are given initial values. The phase equation is integrated using the RK-5 method for different values of \(k\), a function of the lab energies E, to obtain the simulated _SPS_, say \(\delta_{k}^{sim}\). The mean percentage error (\(MPE\)) is determined with respect to the SPS analysis data of Wiringa _et al._[5], say \(\delta_{k}^{exp}\), as \[MPE=\frac{1}{N}\sum_{i=1}^{N}\frac{|\delta_{k}^{exp}-\delta_{k}^{sim}|}{|\delta _{k}^{exp}|}\times 100 \tag{8}\] This is named \(MPE_{old}\) and is also assigned to \(MPE_{min}\). **Monte-Carlo step:** A random number \(r\), generated in an interval [-I, I], is added to one of the parameters, say \(V_{0new}=V_{0}+r\).
**PFM step:** The phase equation is integrated again with the new set of parameters \(V_{0new}\), \(r_{m}\) and \(a_{m}\) to obtain a new set of simulated scattering phase shifts (_SPS_), say \(\delta_{k}^{sim-new}\), from which \(MPE_{new}\) is determined. **Variational step:** If \(MPE_{new}<MPE_{old}\), then \(V_{0}=V_{0new}\) and \(MPE_{min}=MPE_{new}\); otherwise the old values are retained. The final three steps are repeated for each of the parameters to complete one iteration. The size of the interval is reduced after a certain number of iterations if there is no significant reduction in \(MPE_{min}\). The process is complete when \(MPE_{min}\) does not change any further, that is, when convergence is reached. ## 3 Singlet Pseudo Bound S-wave Energy for _nn_ We know that \[k\cot(\delta)=-\frac{1}{a}+\frac{1}{2}r_{e}k^{2} \tag{9}\] where \(a\) and \(r_{e}\) are the scattering length and effective range. If use is made of the approximation specified by Eq. 9, the S matrix can be written in the form [3]: \[S(k)=\left(\frac{k+i\alpha}{k-i\alpha}\right)\left(\frac{k+i\beta}{k-i\beta}\right) \tag{10}\] where \(S(k)\) is related to the scattering length \(a\) and effective range \(r_{e}\) by the following relations: \[\alpha=\frac{1}{r_{e}}\bigg{[}1-\big{(}1-\frac{2r_{e}}{a}\big{)}^{1/2}\bigg{]} \tag{11}\] \[\beta=\frac{1}{r_{e}}\bigg{[}1+\big{(}1-\frac{2r_{e}}{a}\big{)}^{1/2}\bigg{]} \tag{12}\] The S matrix in Eq. 10 has two poles in the complex plane of the wave number k. The first pole \(i\beta\) (\(\beta>0\)), situated in the upper half-plane of k, is the well-known redundant pole of the S matrix. The second pole \(i\alpha\), situated in the lower half-plane of k (\(\alpha<0\)), corresponds to a virtual state of the two-nucleon system at the energy \[\epsilon=\frac{\hbar^{2}\alpha^{2}}{2m} \tag{13}\] Using the obtained values \(a=-16.49(1.56)\) fm and \(r_{e}=2.56(0.02)\) fm in Eq. 11, and then substituting the result into Eq. 13, the energy of the virtual \({}^{1}S_{0}-nn\) state comes out to be \(0.135(22)\) MeV, in comparison with \(0.1293(158)\) MeV given in [3]. ## 4 Results and Discussion At laboratory energies of nn scattering larger than \(E_{lab}=250\) MeV, the measured scattering phase-shift in the \({}^{1}S_{0}\)-wave interaction channel becomes negative, i.e., the interaction becomes repulsive. The fits to the SPS are refined to reproduce the experimentally observed scattering length and effective range. To reproduce the observed negative S-wave phase shifts at higher energies with a static potential, it is necessary to incorporate repulsion into the potential. Here the Morse potential serves as a far more realistic potential in comparison to the square well, exponential, Gaussian, or Hulthen potentials used in earlier studies. In addition to the SPS and pseudo bound state energy for the _nn_ system, we have also obtained important quantum mechanical functions: (i) SPS vs. r(fm), (ii) amplitude A(r) vs. r(fm), and (iii) wavefunction \(u_{0}(r)\), for energies E = [5, 20, 50, 150, 250, 350] MeV. These functions are shown in Figure 3.
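A compact sketch of the optimization loop of Section 2.3, together with the virtual-state estimate of Section 3, is given below. It reuses `s_wave_phase_shift` from the sketch above; the perturbation interval and shrinking schedule are our own choices, and Eq. (13) is evaluated reading \(m\) as the reduced mass \(m_{n}/2\), which reproduces the quoted value.

```python
import numpy as np

def mpe(params, energies_lab, sps_exp):
    """Eq. (8): mean percentage error of simulated vs. experimental SPS
    (uses s_wave_phase_shift from the phase-function sketch above)."""
    sim = np.array([s_wave_phase_shift(E / 2.0, *params) for E in energies_lab])
    return 100.0 * np.mean(np.abs(sps_exp - sim) / np.abs(sps_exp))

def vmc_optimize(params0, energies_lab, sps_exp, interval=1.0,
                 n_iter=500, shrink_every=100, rng=None):
    """VMC-style random search: perturb one Morse parameter at a time by
    r ~ U(-I, I), keep the move only if the MPE decreases, and periodically
    shrink I (the schedule here is our own illustrative choice)."""
    if rng is None:
        rng = np.random.default_rng(1)
    params = np.asarray(params0, dtype=float)
    best = mpe(params, energies_lab, sps_exp)
    for it in range(n_iter):
        for j in range(params.size):
            trial = params.copy()
            trial[j] += rng.uniform(-interval, interval)
            err = mpe(trial, energies_lab, sps_exp)
            if err < best:
                params, best = trial, err
        if (it + 1) % shrink_every == 0:
            interval *= 0.5
    return params, best

# Virtual-state energy from Eqs. (11) and (13) with a = -16.49 fm, re = 2.56 fm:
a, re = -16.49, 2.56
alpha = (1.0 - np.sqrt(1.0 - 2.0 * re / a)) / re  # fm^-1; negative: virtual state
eps = 41.47 * alpha ** 2                          # MeV, with hbar^2/(2 mu) = 41.47
print(eps)                                        # ~0.13 MeV, cf. 0.135(22) MeV
```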
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Analysis & [\(V_{0}\), \(r_{m}\), \(a_{m}\)] & MAE & \(a(fm)\) & \(r_{e}(fm)\) & \(E_{nn}\) \\ \hline GOA & [67.523, 0.925, 0.376] & 0.5 & -17.64\{**-18.5(0.4)**\} & 2.58\{**2.8(0.11)**\} & 0.117\{**0.1293(158)**\} \\ \hline \multirow{2}{*}{CDA} & [74.427,0.915,0.359] & 0.7 & \multirow{2}{*}{-16.49(1.56)} & \multirow{2}{*}{2.56(0.02)} & \multirow{2}{*}{0.135(22)\{**0.1293(158)**\} } \\ & [65.191,0.915,0.381] & 0.9 & & & \\ \hline \end{tabular} \end{table} Table 1: Optimized parameters for \({}^{1}S_{0}-nn\) states using GOA and CDA. In the latter case, the parameter values with extreme depths are shown. The scattering length (\(a\) in fm) and effective range (\(r_{e}\) in fm) obtained using the SPS determined from these optimized parameters are shown with experimental values (bold) [5] in curly brackets. The virtual \({}^{1}S_{0}\) state energy is taken from Petrov _et al._ Figure 1: \({}^{1}S_{0}-nn\) potentials obtained through GOA (solid blue line) and CDA (yellow ribbon) analysis. Figure 2: \({}^{1}S_{0}-nn\) SPS obtained through GOA (solid blue line) and CDA (yellow ribbon) analysis. Figure 3: SPS, amplitude and wavefunction vs. distance r(fm) for the \({}^{1}S_{0}-nn\) state. ## 5 Conclusion The outcomes of the paper are as follows: the _nn_ data from Wiringa _et al._ have been fitted to obtain a class of interaction potentials, which in turn produce the scattering parameters (\(r_{e}\) and \(a\)) and the pseudo bound state energy for the _nn_ system. For this purpose we used VMC as an optimization procedure, run in tandem with the phase function method. CDA (a computationally expensive procedure, \(\approx\) 220,000 iterations) was performed to overcome the overfitting nature of GOA (\(\approx\) 500 iterations), such that the class of potentials generated via CDA is capable of reproducing the low energy scattering parameters and energy. In addition, we also obtained quantum mechanical functions, namely the amplitude and wavefunctions, for the nn interaction. We will be communicating GOA plus CDA results for higher states of _nn_ scattering in the near future. ## 6 Appendix ### Singlet State Analysis * Since we have three parameters to be determined for the \({}^{1}S_{0}-nn\) state, a total of 165 combinations need to be considered (a schematic of this enumeration is sketched below). All of these are shown in Table 2, where the data have been presented in ascending order of overall MAE. * Out of these 165 combinations, only 112 have \(MAE<2\) (Table 2). The average value of \(r_{m}\) from these 112 combinations is determined to be 0.915 fm. * Keeping \(r_{m}=0.915\) fm fixed, one needs to vary only the two parameters \(V_{0}\) and \(a_{m}\). So only \({}^{11}C_{2}\), that is, 55 combinations need to be worked out. These are given in Table 3. * A total of 21 combinations have \(MAE<1\). These have been considered for determining the low energy scattering parameters (\(a\) and \(r_{e}\)) and are shown in Table 4. * The values of depth \(V_{0}\) and width \(a_{m}\) given in bold (Table 4) are utilized for obtaining the possible range of values of the scattering parameters in our calculations.
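A minimal sketch of the CDA enumeration described above follows. The 11-point energy grid is our guess (the visible table entries are consistent with it but do not fix it uniquely), and `fit_triple` / `overall_mae` are hypothetical placeholders for the VMC fitting and scoring machinery.

```python
from itertools import combinations

# Assumed 11-point lab-energy grid (MeV); C(11, 3) = 165 and C(11, 2) = 55,
# matching the combination counts quoted in the appendix.
ENERGIES = [1, 5, 10, 25, 50, 100, 150, 200, 250, 300, 350]

def cda_scan(fit_triple, overall_mae):
    """fit_triple(E1, E2, E3) -> (V0, rm, am) and overall_mae(params) -> MAE
    on the remaining data points; both are placeholders for the VMC machinery."""
    results = []
    for triple in combinations(ENERGIES, 3):      # 165 optimization runs
        params = fit_triple(*triple)
        results.append((overall_mae(params), triple, params))
    return sorted(results)                        # ascending MAE, as in Table 2
```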
\begin{table} \begin{tabular}{c c c c c c c c} \hline **Sr. No.** & \(E_{1}\) & \(E_{2}\) & \(E_{3}\) & \(V_{0}\) & \(r_{m}\) & \(a_{m}\) & **Overall** \\ & (MeV) & (MeV) & (MeV) & (MeV) & (fm) & (fm) & **MAE** \\ \hline 1 & 5 & 100 & 300 & 77.115479 & 0.903579 & 0.350886 & 0.662538 \\ 2 & 5 & 100 & 250 & 74.794074 & 0.905138 & 0.356135 & 0.664066 \\ 3 & 5 & 50 & 250 & 66.555057 & 0.923181 & 0.377144 & 0.667124 \\ 4 & 5 & 50 & 300 & 68.714993 & 0.923036 & 0.371343 & 0.667529 \\ 5 & 1 & 50 & 250 & 63.920909 & 0.932580 & 0.386325 & 0.683027 \\ 6 & 1 & 50 & 300 & 66.096318 & 0.932593 & 0.380109 & 0.685002 \\ 7 & 1 & 100 & 250 & 72.609854 & 0.912592 & 0.363168 & 0.689700 \\ 8 & 1 & 100 & 300 & 74.974928 & 0.911009 & 0.357580 & 0.689736 \\ \hline \end{tabular} \end{table} Table 2: \({}^{1}S_{0}\) **state:** Model parameters for 165 combinations, each with three lab energies, obtained by minimising MSE. The overall MAE is determined by obtaining the SPS for the remaining experimental data points. The data are sorted with respect to MAE in ascending order. \begin{table} \begin{tabular}{c c c c c c} \hline **Sr. No.** & \(E_{1}\) & \(E_{2}\) & \(V_{0}\) & \(a_{m}\) & **Overall** \\ & (MeV) & (MeV) & (MeV) & (fm) & **MAE** \\ \hline 22 & 100 & 300 & 73.759048 & 0.361589 & 1.044861 \\ 23 & 25 & 250 & 68.110429 & 0.370526 & 1.104946 \\ \hline \end{tabular} \end{table} Table 3: \({}^{1}S_{0}\) **state:** Model parameters for 55 combinations for the \({}^{1}S_{0}-nn\) state. After fixing \(r_{m}=0.915\) fm (the average from the 112 combinations of Table 2), the two remaining parameters produce \({}^{11}C_{2}\), i.e., 55 combinations. The data are sorted with respect to MAE in ascending order. \begin{table} \begin{tabular}{c c c c c c c c} \hline **Sr. No.** & \(E_{1}\) & \(E_{2}\) & \(V_{0}\) & \(a_{m}\) & **Overall** & \(a\) & \(r_{e}\) \\ & (MeV) & (MeV) & (MeV) & (fm) & **MAE** & (fm) & (fm) \\ \hline 1 & 1 & 250 & 71.193511 & 0.366651 & 0.689214 & -18.380 & 2.539 \\ 2 & 1 & 300 & 72.930120 & 0.362403 & 0.689361 & -18.373 & 2.534 \\ 3 & 1 & 350 & **74.427456** & **0.358861** & 0.754361 & -18.368 & 2.530 \\ 4 & 1 & 200 & 69.374385 & 0.371273 & 0.758771 & -18.387 & 2.544 \\ 5 & 5 & 300 & 71.730802 & 0.363578 & 0.666437 & -16.173 & 2.556 \\ \hline \end{tabular} \end{table} Table 4: \({}^{1}S_{0}\) **state:** Model parameters for 21 combinations, each with two lab energies, obtained by minimising MSE. The overall MAE is determined by obtaining the SPS for the remaining experimental data points. The data are sorted with respect to \(E_{1}\) in ascending order.
2305.00551
Specific features of g $\approx$ 4.3 EPR line behavior in magnetic nanogranular composites
Films of metal-insulator nanogranular composites M$_x$D$_{100-x}$ with different composition and percentage of metal and dielectric phases (M = Fe, Co, CoFeB; D = Al$_2$O$_3$, SiO$_2$, LiNbO$_3$; x $\approx$ 15-70 at.%) are investigated by magnetic resonance in a wide range of frequencies (f = 7-37 GHz) and temperatures (T = 4.2-360 K). In addition to the usual ferromagnetic resonance signal from an array of nanogranules, the experimental spectra contain an additional absorption peak, which we associate with the electron paramagnetic resonance (EPR) of Fe and Co ions dispersed in the insulating space between the granules. In contrast to the traditional EPR of Fe and Co ions in weakly doped non-magnetic matrices, the observed peak demonstrates a number of unusual properties, which we explain by the presence of magnetic interactions between ions and granules.
A. B. Drovosekov, N. M. Kreines, D. A. Ziganurov, A. V. Sitnikov, S. N. Nikolaev, V. V. Rylkov
2023-04-30T18:42:06Z
http://arxiv.org/abs/2305.00551v1
# Specific Features of \(g\approx 4.3\) EPR Line Behavior in Magnetic Nanogranular Composites ###### Abstract Films of metal-insulator nanogranular composites M\({}_{x}\)D\({}_{100-x}\) with different composition and percentage of metal and dielectric phases (M = Fe, Co, CoFeB; D = Al\({}_{2}\)O\({}_{3}\), SiO\({}_{2}\), LiNbO\({}_{3}\); \(x\approx 15-\)70 at. %) are investigated by magnetic resonance in a wide range of frequencies (\(f=7-\)37 GHz) and temperatures (\(T=4.2-\)360 K). In addition to the usual ferromagnetic resonance signal from an array of nanogranules, the experimental spectra contain an additional absorption peak, which we associate with the electron paramagnetic resonance (EPR) of Fe and Co ions dispersed in the insulating space between the granules. In contrast to the traditional EPR of Fe and Co ions in weakly doped non-magnetic matrices, the observed peak demonstrates a number of unusual properties, which we explain by the presence of magnetic interactions between ions and granules. A. B. Drovosekov, N. M. Kreines, D. A. Ziganurov, A. V. Sitnikov, S. N. Nikolaev, V. V. Rylkov * the dependence \(f(H)\) demonstrates the presence of a finite frequency in the zero field, which increases with the growth of the FM phase content; * the position of the peak depends on the orientation of the
magnetic field with respect to the film plane; * as temperature decreases, the peak shifts towards weaker fields and decreases in intensity, disappearing at \(T\lesssim 60\) K. Nanogranular films have previously been studied by magnetic resonance in a number of works [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. Often, besides the main FMR line, the authors observed additional absorption peaks, which were associated either with the inhomogeneity of the samples [30, 31, 32], or with the excitation of inhomogeneous oscillations in the films [33, 34], in particular spin-wave and surface modes [35, 36, 37, 38, 39, 40, 41]. Observation of several resonance peaks caused by the excitation of inhomogeneous modes is also possible in ordered arrays of magnetic nanoparticles [42, 43, 44, 45]. In our case, the behavior of the additional peak does not agree with the described scenarios. It proves to be more productive to assume that this peak is associated with EPR of Fe\({}^{3+}\) (\(g\approx 4.3\)) ions dispersed in the insulating space between FM granules. As shown in [7], the frequency and orientational dependencies of the resonance field for the additional peak are well described taking into account the shift of the EPR frequency due to dipole-dipole and exchange interactions of the Fe\({}^{3+}\) ions with the ensemble of FM granules. However, the proposed model does not entirely explain the non-standard excitation conditions of this peak and its anomalous behavior with temperature [8]. In order to further clarify the reasons for the anomalous behavior of the EPR peak, in this work, we study a wider set of nanocomposites with various compositions M\({}_{x}\)D\({}_{100-x}\). Besides the systems based on the CoFeB alloy, we investigate films in which FM granules are formed of pure iron or cobalt in different insulating matrices D = Al\({}_{2}\)O\({}_{3}\), SiO\({}_{2}\), LiNbO\({}_{3}\). Furthermore, the range of the FM phase content (\(x\approx 15-70\) at. %) is significantly expanded as compared with previous works. ## 2 Samples, their preliminary characterization, methods Nanocomposite films M\({}_{x}\)D\({}_{100-x}\) with a thickness of \(\approx 1-3\)\(\mu\)m were synthesized by ion beam sputtering on glass-ceramic substrates using composite targets [46, 47, 48]. The target represents a plate of FM metal Fe, Co or an alloy Co\({}_{40}\)Fe\({}_{40}\)B\({}_{20}\) (CoFeB), on which a number of rectangular strips of oxides Al\({}_{2}\)O\({}_{3}\), SiO\({}_{2}\) or LiNbO\({}_{3}\) are placed. The uneven arrangement of dielectric strips on the target surface allows the formation of a nanocomposite film M\({}_{x}\)D\({}_{100-x}\) with a smooth controlled change in the concentration \(x\) along the substrate in a wide range \(\Delta x\approx 30-40\) at. %. Further studies are carried out on individual pieces of the grown film with a size of \(5\times 5\) mm\({}^{2}\), so that the change of \(x\) within one sample is less than 1 at. %. The content of the metal phase in the films was determined by energy dispersive X-ray microanalysis. The following series of samples were studied: (CoFeB)\({}_{x}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{100-x}\), \(x\approx 15-56\) at. %; Fe\({}_{x}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{100-x}\), \(x\approx 31-58\) at. %; (CoFeB)\({}_{x}\)(SiO\({}_{2}\))\({}_{100-x}\), \(x\approx 20-67\) at. %; Co\({}_{x}\)(SiO\({}_{2}\))\({}_{100-x}\), \(x\approx 24-67\) at. %; (CoFeB)\({}_{x}\)(LiNbO\({}_{3}\))\({}_{100-x}\), \(x\approx 30-48\) at. 
%; Co\({}_{x}\)(LiNbO\({}_{3}\))\({}_{100-x}\), \(x\approx 33-41\) at. %. According to transmission electron microscopy data, the obtained composites consist of crystalline FM nanogranules randomly distributed inside the amorphous oxide matrix [12, 48, 49, 50]. The granules usually have an approximately spherical shape with a diameter of \(\approx 2-8\) nm, depending on the composition and concentration \(x\). However, in the case of the LiNbO\({}_{3}\) matrix, the granules tend to stretch in the direction of film growth, up to \(\approx 20\) nm [13, 14]. The structures Co\({}_{x}\)(LiNbO\({}_{3}\))\({}_{100-x}\) are characterized by a strongly inhomogeneous distribution of the shape and size of granules over the film thickness [14]. Depending on the composition, the percolation threshold \(x_{p}\) of the nanocomposites varies within the range \(x_{p}\approx 45-60\) at. % [12, 13, 14, 28, 47, 48]. Slightly below this threshold, at \(x_{c}<x<x_{p}\) (\(x_{p}-x_{c}\approx 5-10\) at. %), the films demonstrate an interesting logarithmic temperature dependence of the conductivity [11, 12, 13, 14], typical of granular systems with "strong tunnel coupling" between the granules [51]. According to magnetic data, approximately in this concentration range the samples demonstrate a transition from superparamagnetic to ferromagnetic behavior [5, 26, 27, 28, 29, 38, 15]. At the same time, a sharp increase in the magnetization of the films is observed in the low temperature region, which is explained by a large number of magnetic Fe and Co ions dispersed in the insulating matrix [9, 10, 11, 12]. In this work, the nanocomposite samples are studied by magnetic resonance in a wide range of frequencies (\(f=7-37\) GHz) and temperatures (\(T=4.2-360\) K) using a laboratory transmission-type spectrometer based on rectangular and tunable cylindrical resonators [7]. The experiments are carried out at different orientations of the external field \({\bf H}\) (up to 17 kOe) with respect to the film plane. Note that in the case of an in-plane field, it is possible to realize both transverse (\({\bf h}\perp{\bf H}\)) and longitudinal (\({\bf h}\parallel{\bf H}\)) geometries of resonance excitation by the microwave field \({\bf h}\)[7]. ## 3 Magnetic resonance spectra and their discussion ### The case of in-plane field (room temperature) For almost all samples, regardless of composition, the qualitative behavior of the magnetic resonance spectra looks identical (Fig. 1). In the usual transverse resonance excitation geometry (\({\bf h}\perp{\bf H}\)), one intense FMR peak is typically observed. The width of this peak varies for different compositions. When the FMR peak is sufficiently narrow, as for the system (CoFeB)\({}_{x}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{100-x}\), a second, less intense absorption peak can be resolved in lower fields, which we associate with EPR of magnetic ions dispersed in the insulating matrix [7]. When the resonance excitation geometry changes to the longitudinal one (\({\bf h}\parallel{\bf H}\)), the intensity of the FMR peak drops significantly, as expected, but the amplitude of the EPR peak remains approximately the same. As a result, the EPR peak is much better manifested in the geometry \({\bf h}\parallel{\bf H}\) and is reproduced for almost all compositions of the films, with the exception of Co\({}_{x}\)(LiNbO\({}_{3}\))\({}_{100-x}\)[7]. The position and intensity of the EPR peak depend on the content of the metal FM phase \(x\) in the films.
Interestingly, it is best manifested at concentrations well below the percolation threshold \(x_{p}\). When approaching this threshold, the intensity of the EPR peak decreases, and it completely vanishes beyond \(x_{p}\). The detailed concentration dependence of the magnetic resonance spectra was obtained for the system (CoFeB)\({}_{x}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{100-x}\) (Fig. 2). In the limit of low concentrations \(x<25\) at. %, the FMR line approaches the nominal position corresponding to the \(g\)-factor \(g\approx 2.1\), which is characteristic of the bulk FM metals Fe, Co and their alloys. This peak is obviously associated with the resonance from the array of weakly interacting FM nanoparticles. Note that instead of the term "FMR", the term "superparamagnetic resonance" can be used in this case [21]. The position of the second peak at \(x<25\) at. % corresponds to the effective \(g\)-factor \(g\approx 4.3\), characteristic of EPR for Fe\({}^{3+}\) ions in amorphous solids. Note that the obtained type of magnetic resonance spectra for nanocomposites in the limit of low FM phase concentrations is quite consistent with the results of other authors for iron-based nanoparticles in various media [19, 20, 21, 22, 23]. In our case, however, there is an interesting trend that the EPR peak with \(g\approx 4.3\) is better manifested in the unusual longitudinal geometry of resonance excitation (\({\bf h}\parallel{\bf H}\)). Another unexpected result is that in our case the EPR peak (\(g\approx 4.3\)) is observed not only for systems based on the iron-containing granules Fe and CoFeB, but also for the cobalt-based nanocomposite Co\({}_{x}\)(SiO\({}_{2}\))\({}_{100-x}\) (Fig. 1). The theory of EPR predicts the possibility of an effective \(g\)-factor \(g\approx 4.3\) for Co\({}^{2+}\) ions in the case of an octahedral ligand field [52]. Experimentally, such a line is also sometimes observed in some cubic crystals or nanocrystallites with Co ions [53, 54, 55]. Perhaps a similar scenario is realized in our case. It is also possible that the observed lines with \(g\approx 4.3\) are caused by the excitation of "forbidden" transitions between spin states of PM centers with a change of the spin projection \(\Delta m_{S}=\pm 2\). Note that transitions of this type can be excited by both transverse and longitudinal microwave magnetic fields [52]. Figure 1: Room temperature spectra for nanocomposite films M\({}_{x}\)D\({}_{100-x}\) (\(x\approx 30-\)40 at.%) with different compositions of metal and dielectric phase (M and D). Spectra are obtained in magnetic field applied in the film plane at frequency \(f\approx 25\) GHz in transverse (\({\bf h}\perp{\bf H}\)) and longitudinal (\({\bf h}\parallel{\bf H}\)) geometries of resonance excitation. An increase of the FM phase content in the films leads to the growth of magnetodipole interactions in the system and the appearance of significant demagnetizing fields. In this situation, the FMR line shifts towards weaker fields (Fig. 2). At the same time, a similar shift of the resonance field is also observed for the EPR peak. Figure 3 shows the frequency-field dependencies \(f(H)\) for both absorption peaks in the films (CoFeB)\({}_{x}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{100-x}\). At low FM phase content \(x<25\) at. %, these dependencies are close to linear, with effective \(g\)-factors \(g\approx 2.1\) and \(g\approx 4.3\) for the FMR and EPR peaks, respectively.
At higher concentrations \(x\), the dependence \(f(H)\) for the FMR peak is described by the Kittel formula \[f=\gamma_{\rm FMR}\sqrt{H(H+4\pi M)}, \tag{1}\] where the gyromagnetic ratio \(\gamma_{\rm FMR}\approx 2.92\) GHz/kOe corresponds to \(g\)-factor \(g\approx 2.1\), and the effective demagnetizing field of the film \(4\pi M\) increases with the growth of \(x\). For the EPR peak, the dependence \(f(H)\) in the high frequency region is described by a linear function \[f=\gamma(H+\delta H), \tag{2}\] where the gyromagnetic ratio \(\gamma\approx 6.0\) GHz/kOe corresponds to \(g\)-factor \(g\approx 4.3\), and the line shift \(\delta H\) increases with the growth of \(x\). According to a simple model proposed in [7], the line shift \(\delta H\) in formula (2) arises due to the interaction of PM ions with FM granules and is determined by the effective field \[\delta H=JM, \tag{3}\] which acts on the PM ion from the ensemble of FM granules (\(M\) is average magnetization of the array of FM granules, \(J\) is the effective field constant). Figure 3: Frequency-field dependencies \(f(H)\) for FMR and EPR peaks in films (CoFeB)\({}_{x}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{100-x}\) with different content of the FM phase \(x\). Magnetic field is applied in the film plane (\(T=296\) K). Points are the experimental data, solid lines correspond to Kittel formula (1), dashed lines are linear dependencies (2). Figure 2: Spectra for films (CoFeB)\({}_{x}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{100-x}\) with different content of the FM phase \(x\). Spectra are obtained in magnetic field applied in the film plane in longitudinal geometry of resonance excitation (h \(\parallel\)\({\bf H}\)) at frequency \(f\approx 21\) GHz (\(T=296\) K). Spectra are normalized on the FMR peak amplitude. In [7, 8] this field was associated with the exchange interaction between ions and granules. In this case, the dimensionless constant \(J\) has the meaning of the effective exchange field parameter. As will be shown below, an alternative explanation of the effective field \(JM\) of a dipole-dipole nature is possible. ### The diagram \(\delta H-4\pi M\). **Manifestation of the "Lorentz field"?** Equations (1)\(-\)(3) qualitatively explain the correlated shift of FMR and EPR lines to low fields with an increase of the concentration \(x\). Indeed, in both cases, the shift of the absorption peak is determined by the average magnetization of the film \(M\). The values \(4\pi M\) and \(\delta H\) can be determined experimentally from the positions of the FMR and EPR peaks, respectively. In this case, the effective constant \(J\) is determined by the ratio \(J/4\pi=\delta H/4\pi M\). It should be noted that the effective field \(4\pi M\) in formula (1), generally speaking, may depend on the shape and anisotropy of the FM granules and differ from the static value \(4\pi M\) of the film. The equivalence of these values can be expected in the case of spherical granules in the absence of any preferred anisotropy axis [56, 57]. In our case, this condition seems to be fulfilled for most structures, with the exception of nanocomposites based on the LiNbO\({}_{3}\) insulating matrix, where the granules have a shape elongated in the direction of film growth [4, 9, 14]. Assuming the exchange nature of the effective field \(\delta H\), one would expect that the constant \(J\) in equation (3) should depend on many factors: the chemical composition of the granules and the dielectric matrix, the content of the FM phase in the nanocomposite, temperature. 
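A short sketch of how Eqs. (1)-(3) are used in practice is given below: the measured FMR and EPR peak positions at a fixed frequency are inverted for \(4\pi M\) and \(\delta H\), whose ratio then yields \(J/4\pi\). The peak positions in the example are invented for illustration only.

```python
import numpy as np

GAMMA_FMR = 2.92  # GHz/kOe, g ~ 2.1
GAMMA_EPR = 6.0   # GHz/kOe, g ~ 4.3

def four_pi_m_from_fmr(f, H_res):
    """Invert the in-plane Kittel formula (1), f = gamma*sqrt(H*(H + 4piM)),
    for the effective demagnetizing field 4piM (f in GHz, H in kOe)."""
    return (f / GAMMA_FMR) ** 2 / H_res - H_res

def shift_from_epr(f, H_res):
    """Invert the linear law (2), f = gamma*(H + dH), for the EPR shift dH."""
    return f / GAMMA_EPR - H_res

# Hypothetical peak positions at f = 25 GHz, chosen for illustration:
four_pi_m = four_pi_m_from_fmr(25.0, 6.0)   # ~6.2 kOe
dH = shift_from_epr(25.0, 2.1)              # ~2.1 kOe
print(dH / four_pi_m)                       # ~1/3, the Lorentz-field value (Sec. 3.2)
```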
However, this constant proves to be quite universal. Figure 4 shows a summary diagram \(\delta H-4\pi M\) for all the films studied. It can be seen that, regardless of the specific composition of the films, the experimental points lie near a universal linear dependence. This universality means that the shift of the EPR field \(\delta H\) is mainly determined by the average magnetization of the film, and therefore has a dipole-dipole origin, similar to the demagnetization field \(4\pi M\). Figure 4: Summary diagram \(\delta H-4\pi M\) for all investigated samples. Points are the experimental data obtained in the present work and in previous works [7, 8]. The dashed line corresponds to the dependence \(\delta H=4\pi M/3\). The ratio \(\delta H/4\pi M\) is found to be about 1/3, that is, \(\delta H\approx 4\pi M/3\), and the constant \(J\) approximately corresponds to the demagnetizing factor of a sphere, \(J\approx 4\pi/3\). This result suggests that the EPR peak shift \(\delta H\) is associated with the so-called "Lorentz field" of dipole nature, which acts on PM ions from the ensemble of FM granules. The concept of the Lorentz field arises in the problem of calculating local fields inside a (quasi-)continuous medium, taking into account its real inhomogeneity at the microscopic level. This concept is better known from general physics courses, where it is applied to the calculation of local electric fields in dielectric crystals [58, 59], as well as to the determination of local magnetic fields in magnetically ordered media by NMR [60] and muon spectroscopy [61]. Its application to the magnetism of nanostructures is sometimes discussed in theoretical works [56, 57, 62]. A possible experimental manifestation of the Lorentz field in such systems was considered in [63, 64] while studying the temperature dependence of the susceptibility of nanogranular films. The Lorentz method of calculating the local field \({\bf H}_{loc}\) at some selected point inside an inhomogeneous medium suggests the decomposition of this field into several components: \[{\bf H}_{loc}={\bf H}+{\bf H}_{dem}+{\bf H}_{L}+\sum{\bf H}_{dip}. \tag{4}\] Here \({\bf H}\) is the external magnetic field and \({\bf H}_{dem}\) is the demagnetization field associated with the shape of the sample. For a thin film, this term has the form \[{\bf H}_{dem}=-4\pi{\bf M}_{\perp},\] where \({\bf M}_{\perp}\) is the vector component of magnetization normal to the film plane. The third term in (4), \({\bf H}_{L}\), is
In this situation, the last term in (4) leads only to a broadening of the resulting EPR line, while the shift of the absorption peak is determined by the first three terms: \[f=\gamma|{\bf H}-4\pi{\bf M}_{\perp}+4\pi{\bf M}/3|. \tag{5}\] In the case of in-plane orientation of the magnetic field, the demagnetizing field is absent (\({\bf M}_{\perp}=0\)). At the same time the situation \({\bf H}\parallel{\bf M}\) is realized, and equation (5) transforms to \[f=\gamma(H+4\pi M/3). \tag{6}\] Thus, the presence of a Lorentz field in a granular medium can explain the shift of the EPR peak \(\delta H=4\pi M/3\). Experimentally observed deviations from this value, which are seen in Fig. 4 as a scatter of points relative to the theoretical straight line, can be caused by various reasons: not quite accurate account for the contribution of the fourth term in equation (4), the non-spherical shape of the granules, the presence of additional exchange interactions. Note that the best agreement with the model is achieved for nanocomposite films (CoFeB)\({}_{x}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{100-x}\), which are characterized by a close to spherical shape of granules with small diameters \(2-4\) nm [9, 12]. Films of this composition are also distinguished by the narrowest resonance peaks (Fig. 1), indicating a higher degree of homogeneity of the system. Note that the presence of random magnetic interactions in the system leads to deviations of local fields at PM centers from the direction of the external magnetic field. In this situation, EPR can be excited not only by transverse, but also by longitudinal microwave field, which behavior is observed experimentally. ### The case of out-of-plane field (room temperature) According to equation (5), in the case of deviation of the magnetic field from the film plane by an arbitrary angle \(\theta_{H}\), it is necessary to take into account the additional contribution to the EPR frequency associated with the appearance of the demagnetizing field \(4\pi{\bf M}_{\perp}\). In the particular case of normal orientation of the field, when \({\bf H}\parallel{\bf M}={\bf M}_{\perp}\), the frequency-field dependence takes the form \[f=\gamma(H-8\pi M/3). \tag{7}\] Thus, comparing with the in-plane geometry, the EPR peak is shifted towards stronger fields. This effect is observed experimentally (Fig. 5). Note that when the magnetic field deviates from the film plane, the FMR line also shifts towards stronger fields. In a normal field, the position of the FMR peak is determined by the well-known Kittel formula: \[f=\gamma_{\rm FMR}(H-4\pi M). \tag{8}\] As it was shown in [7], the angular dependence of the EPR field \(H_{res}(\theta_{H})\) can be calculated analytically, neglecting the dependence of the film magnetization on the magnetic field. Such approximation can be considered adequate for films with sufficiently high content of the FM phase in not too weak magnetic fields, when the effects of superparamagnetism can be disregarded. In the work [7], a good agreement between the calculated and experimental angular dependencies \(H_{res}(\theta_{H})\) was demonstrated. Let us show that taking into account the superparamagnetism of films and the presence of the Lorentz field, the frequency-field dependencies for the EPR peak can be well described in cases of in-plane and normal field, using formulas (6), (7). 
Figure 6a shows the experimental dependencies \(f(H)\) for the FMR and EPR peaks obtained in the two geometries for one of the samples (CoFeB)\({}_{x}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{100-x}\). The dashed lines correspond to the calculation according to the Kittel equations (1), (8) for the FMR peak and (6), (7) for the EPR peak under the assumption of a constant value \(4\pi M=4\pi M_{S}\) ("ideal" FM film). It can be seen that in this case the formulas work only in the region of high frequencies (strong fields). Deviations in weak fields are explained by a decrease of \(4\pi M\) as compared to the saturation value \(4\pi M_{S}\). In Fig. 6b, the field dependencies \(4\pi M(H)\) for the in-plane and normal geometries are recalculated from the experimental FMR data using the Kittel formulas (1), (8). In the case of in-plane geometry, the resulting dependence \(4\pi M(H)\) can be well described by the usual Langevin function \(L(x)\) for superparamagnets \[4\pi M=4\pi M_{S}\cdot L\left(\frac{\mu H}{k_{B}T}\right),\] where \(\mu\) is the magnetic moment of the FM granules and \(k_{B}\) is the Boltzmann constant (for the considered nanocomposite, \(\mu\approx 10^{4}\)\(\mu_{B}\), with \(\mu_{B}\) the Bohr magneton). In normal geometry, the experimental dependence \(4\pi M(H)\) can be well approximated by a similar equation with the replacement of \(H\) by the "internal" field \(H_{in}=H-4\pi M\): \[4\pi M=4\pi M_{S}\cdot L\left(\frac{\mu(H-4\pi M)}{k_{B}T}\right)\] with the same parameters \(\mu\) and \(4\pi M_{S}\). In this case, the function \(4\pi M(H)\) is not expressed explicitly, but can be defined parametrically, considering the value \(H_{in}\) as a parameter. Taking into account the calculated field dependencies of \(4\pi M\) in the two geometries, the resulting dependencies \(f(H)\) obtained by the Kittel formulas (1), (8) for the FMR peak and (6), (7) for the EPR peak are plotted in Fig. 6a by solid lines. It can be seen that taking into account the Lorentz field in equations (6), (7) provides a close to perfect approximation of the experimental behavior for the EPR peak. Figure 5: Magnetic resonance spectra in the nanocomposite (CoFeB)\({}_{47}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{53}\) for different orientations of the magnetic field with respect to the film plane (\(\theta_{H}\)). Spectra are obtained at room temperature at frequency \(f\approx 25\) GHz in transverse geometry of resonance excitation (\(\mathbf{h\perp H}\)). Figure 6: (a) Frequency-field dependencies \(f(H)\) for FMR and EPR peaks in the film (CoFeB)\({}_{48}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{52}\) in the cases of in-plane and normal field (\(T=296\) K). (b) Field dependencies of the \(4\pi M\) value. Points are the experimental data, lines are calculations. The dashed lines correspond to the case of an "ideal" FM film with \(4\pi M=4\pi M_{S}\); the solid lines are obtained taking into account the field dependence of the \(4\pi M\) value (see text).
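A sketch of the parametric construction just described is given below: the internal field \(H_{in}=H-4\pi M\) is swept, the magnetization follows the Langevin law, and the external field and resonance frequencies are then recovered. The saturation value \(4\pi M_{S}\) is an assumed number chosen only for illustration; \(\mu\approx 10^{4}\,\mu_{B}\) and \(T=296\) K follow the text.

```python
import numpy as np

MU_B, K_B = 9.274e-21, 1.381e-16           # erg/G and erg/K (CGS)
MU = 1.0e4 * MU_B                          # granule moment ~1e4 Bohr magnetons
T = 296.0                                  # K
FOURPI_MS = 4.0                            # kOe; assumed saturation 4piM_S

def langevin(x):
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    xs = np.where(small, 1.0, x)           # placeholder to avoid 0/0 at x = 0
    return np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)

# Sweep the internal field H_in = H - 4piM, get 4piM from the Langevin law,
# then recover the external field H and the resonance frequencies.
H_in = np.linspace(0.0, 15.0, 400)                             # kOe
four_pi_m = FOURPI_MS * langevin(MU * H_in * 1e3 / (K_B * T))  # H_in in Oe inside L
H = H_in + four_pi_m                                           # external field, kOe

gamma_epr, gamma_fmr = 6.0, 2.92                               # GHz/kOe
f_epr = gamma_epr * (H - 8.0 * four_pi_m / 3.0)                # Eq. (7), normal field
f_fmr = gamma_fmr * (H - four_pi_m)                            # Eq. (8) = gamma_fmr * H_in
```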
### Temperature evolution of the spectra In the work [8], we investigated the temperature evolution of the magnetic resonance spectra in nanocomposite films (CoFeB)\({}_{x}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{100-x}\) with FM phase content \(x\approx 47-51\) at. % and (CoFeB)\({}_{x}\)(LiNbO\({}_{3}\))\({}_{100-x}\) with \(x\approx 30-40\) at. %. Qualitatively, the samples showed identical behavior. In the case of a magnetic field applied in the film plane, a temperature decrease from 360 to 4.2 K initiates a monotonic shift of the FMR peak towards weaker fields, a behavior explained by an increase of the film magnetization and, as a consequence, of the demagnetizing field \(4\pi M\). At the same time, the EPR peak (\(g\approx 4.3\)) also shifts to weaker fields, in accordance with the formula \(\delta H\approx JM\), where \(J\approx 4\pi/3\) (see Fig. 4). An unusual feature in the behavior of the EPR peak was that its intensity decreased with decreasing temperature, and below \(T\approx 60\) K it disappeared. In the present work, it is found that similar behavior of the EPR line (\(g\approx 4.3\)) is reproduced for films with different compositions M\({}_{x}\)D\({}_{100-x}\) in the case of a sufficiently high content of the FM phase, \(x\gtrsim 30\) at. %. The observed reduction of the EPR peak intensity with decreasing temperature contradicts the typical situation for systems weakly doped with Fe and Co ions, where the weakening of the EPR line occurs, on the contrary, with increasing temperature [19, 20, 21, 53, 54, 55]. It turned out that for the studied nanocomposites, the transition to the limit of low FM phase contents \(x\lesssim 30\) at. % leads to a more traditional temperature behavior of the EPR peak. Figure 7 shows the magnetic resonance spectra obtained at different temperatures for the film (CoFeB)\({}_{25}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{75}\) in a magnetic field applied in the film plane. As can be seen from the figure, with temperature decreasing from 296 to 60 K, the intensity of the EPR peak increases significantly. A similar, but less pronounced, increase in intensity is observed for the FMR peak (the "superparamagnetic" resonance peak in other terms). In this regard, the evolution of the spectra in the temperature range 60\(-\)296 K looks quite natural. However, with a further decrease of temperature from 60 to 4.2 K, the form of the spectra changes significantly. In the case of the transverse resonance excitation geometry (\(\mathbf{h\perp H}\)), it is seen that the FMR peak strongly broadens and shifts to weak fields. The EPR peak observed in the longitudinal geometry (\(\mathbf{h\parallel H}\)) vanishes. At the same time, a strong absorption appears in the vicinity of \(H=0\). A small peak, manifested in weak fields at \(T=4.2\) K, is associated with PM impurities in the substrate [8]. The observed transformation of the spectra in the low temperature region is probably explained by a decrease of thermal fluctuations in the system of PM ions and FM nanogranules and the formation of larger magnetically ordered clusters coupled by exchange and magnetodipole interactions. This is accompanied by a suppression of the EPR of individual ions and the formation of collective oscillation modes with a wide frequency spectrum, due to the strong inhomogeneity of the system and fluctuations of the local anisotropy. This scenario explains the observed disappearance of the EPR peak and the strong broadening of the FMR line in the low temperature region. Further, it can be assumed that with an increase of the FM phase content in the nanocomposite, the formation of macroscopic magnetically ordered clusters begins at higher temperatures. This explains the expansion of the temperature range with "anomalous" behavior of the EPR peak, as well as the disappearance of this peak in samples with the FM phase content above the percolation threshold.
Figure 7: Spectra of the film (CoFeB)\({}_{25}\)(Al\({}_{2}\)O\({}_{3}\))\({}_{75}\) at different temperatures \(T=4.2-296\) K in the case of in-plane field at frequency \(f\approx 25\) GHz for (a) transverse and (b) longitudinal geometries of resonance excitation. In the case of longitudinal geometry, the vertical scale of the spectra is enlarged 20 times with respect to the transverse geometry. It can also be expected that due to the high degree of disorder, the low-temperature state of the system has the features of spin (cluster) glass. This state is characterized by a high density of local energy minima corresponding to various magnetic configurations of the system. The density of these minima (quasi-equilibrium states) decreases with increasing magnetic field when the system approaches the saturation. Thus, the absorption in the vicinity of \(H=0\) observed in low-temperature spectra at \({\bf h}\parallel{\bf H}\) can be associated with the excitation of transitions between various quasi-equilibrium states of the system. ## 4 Conclusion Nanogranular metal-insulator films M\({}_{x}\)D\({}_{100-x}\) with different compositions (M = Fe, Co, CoFeB; D = Al\({}_{2}\)O\({}_{3}\), SiO\({}_{2}\), LiNbO\({}_{3}\)) and various FM metal contents \(x\) were studied by magnetic resonance. The experimental spectra of the films contain the FMR line from the ensemble of FM granules, as well as an additional absorption peak with effective \(g\)-factor \(g\approx 4.3\), which we associate with resonance at PM centers present in the insulating space between FM granules. Fe\({}^{3+}\) and Co\({}^{2+}\) ions dispersed in the insulating matrix during the deposition of the films can serve as such centers. With an increase of the FM phase content, the observed EPR line (\(g\approx 4.3\)) demonstrates an additional shift depending on the orientation of the magnetic field with respect to the film plane. The correlation of this shift with the demagnetization field of the film \(4\pi M\) has been experimentally established. When magnetic field is applied in the film plane, the EPR line is shifted to weaker fields by an amount of \(\approx 4\pi M/3\) relative to its position in the limit of low FM phase contents. On the contrary, for the normal orientation of the field, the EPR peak is shifted to stronger fields by \(\approx 8\pi M/3\). This behavior can be explained by magnetodipole fields acting on PM centers from the ensemble of FM granules: the demagnetizing field \(-4\pi M\), which arises in a normally magnetized film, and the Lorentz field \(4\pi M/3\), which is independent of the external field orientation. Due to fluctuations of the local dipole and exchange fields at PM centers, the EPR peak is manifested not only in the usual transverse, but also in the longitudinal geometry of the resonance excitation. The presence of magnetic interactions in the system of PM ions and FM granules also leads to a peculiar temperature dependence of the EPR peak amplitude. When cooling from the high temperature region, the intensity of the EPR line first rises due to an increase in the susceptibility of PM ions. However, at low temperatures, the weakening of thermal fluctuations leads to the formation of macroscopic coupled clusters in the system of PM ions and FM granules. In this situation, the EPR peak from individual ions decreases until its complete disappearance, when the magnetically ordered state extends to the entire film. 
Thus, in this paper we have shown that PM ions dispersed in an insulating matrix in metal-insulator nanogranular composites can serve as markers of magnetic interactions present in the system. These interactions are manifested while studying the behavior of the EPR line of the dispersed ions. ## Funding The work was carried out within the framework of a state assignment and was financially supported by the Russian Science Foundation (project no. 22-29-00392). ## Conflict of interest The authors declare that they have no conflicts of interest.
2309.11789
Infalling of Nano-dust Because of Air Drag on Uranus
Uranus and Saturn share similarities in terms of their atmospheric composition, which is primarily made up of hydrogen and helium, as well as their ring systems. Uranus has 13 known rings, which are divided into narrow main rings, dusty rings, and outer rings. Unlike Saturn's broad ring system, Uranus' inner main rings are relatively narrow, and likely consist of dark, radiation-processed organics that range from centimeters to meters in size. We assume that Uranus may have a mechanism similar to Saturn's, whereby tiny particles fall onto the planet due to its gravity and the dragging force of the upper atmosphere. The uncharged nano-dust particles in Uranus' inner narrow rings will collide with neutral gas molecules in the exosphere and fall onto the planet. This work presents a Monte Carlo simulation of the orbital behavior of nano-dust particles in the inner narrow rings of Uranus. The model shows that the braking of the dust grain motion takes place at altitudes between 6000 km and 8000 km, and the dust particles are gradually captured into corotation with the planetary atmosphere below 4000 km altitude. The larger the dust particles are, the lower the altitude at which they will be assimilated into co-rotation. The lifetime of 1-nm dust particles down to 1000 km altitude is estimated to be about 32.5 $\pm$ 18.8 hours, and that of 30 nm is about 2770.0 $\pm$ 213.9 hours.
Hua-Shan Shih, Wing-Huen Ip
2023-09-21T05:33:27Z
http://arxiv.org/abs/2309.11789v1
# Infalling of Nano-dust Because of Air Drag on Uranus ###### Abstract Uranus and Saturn share similarities in terms of their atmospheric composition, which is primarily made up of hydrogen and helium, as well as their ring systems. Uranus has 13 known rings, which are divided into narrow main rings, dusty rings, and outer rings. Unlike Saturn's broad ring system, Uranus' inner main rings are relatively narrow, and likely consist of dark, radiation-processed organics that range from centimeters to meters in size. We assume that Uranus may have a mechanism similar to Saturn's, whereby tiny particles fall onto the planet due to its gravity and the dragging force of the upper atmosphere. The uncharged nano-dust particles in Uranus' inner narrow rings will collide with neutral gas molecules in the exosphere and fall onto the planet. This work presents a Monte Carlo simulation of the orbital behavior of nano-dust particles in the inner narrow rings of Uranus. The model shows that the braking of the dust grain motion takes place at altitudes between 6000 km and 8000 km, and the dust particles are gradually captured into corotation with the planetary atmosphere below 4000 km altitude. The larger the dust particles are, the lower the altitude at which they will be assimilated into co-rotation. The lifetime of 1-nm dust particles down to 1000 km altitude is estimated to be about \(32.5\pm 18.8\) hours, and that of 30 nm is about \(2770.0\pm 213.9\) hours. keywords: Uranus, Planetary science, Planetary rings, Circumplanetary dust + Footnote †: journal: Planetary and Space Science ## 1 Introduction The Uranian ring system is composed of nine narrow rings discovered by stellar occultation measurements in 1977, designated 6, 5, 4, \(\alpha\), \(\beta\), \(\eta\), \(\gamma\), \(\delta\), and \(\epsilon\) from inside to outside (Elliot et al., 1977; Millis et al., 1977; Bhattacharyya and Bappu, 1977). The close Uranus flyby of the Voyager 2 spacecraft added two more rings to the system (Millis et al., 1977), to be followed by the detection of another two rings by the Hubble Space Telescope (Smith et al., 1986; Showalter and Lissauer, 2006). Overall, the rings are very dark (Smith et al., 1986), with some being depleted in dust while others like the lambda ring are quite dusty. From a reanalysis of Voyager 2 images, Hedman and Chancia (2021) detected the presence of more narrow rings. The size range of the ring particles in the optically thick, narrow rings is between centimeters and meters according to Nicholson et al. (2018). The origin of the Uranian rings has been explained in terms of meteoroid impacts on small satellites and/or mutual collision of ring particles in a system of ringlet belts (Colwell and Esposito, 1990). The narrowness of the rings, which can sometimes be as small as a few tens of kilometers, might be the result of the shepherding effect of small satellites, as presumably in the case of the epsilon ring located between the two 40-km-sized moons, Cordelia and Ophelia (Elliot and Nicholson, 1984; French et al., 1991). The existence of other moonlets near the alpha and beta rings has been investigated by Chancia and Hedman (2016). 
More recently, attention has turned to the loss of ring material into Saturn's atmosphere, whether through the transport of charged nanodust (… and Ip, 2014; Ip et al., 2016) or the so-called ring rain mechanism (O'Donoghue et al., 2013) confirmed by the dust experiment on the Cassini mission during its Grand Finale (Hsu et al., 2018). What is interesting about the Saturnian rings is that the off-equator ring mass loss rate associated with the charged nanodust amounts to approximately 1800 kg s\({}^{-1}\) to 6800 kg s\({}^{-1}\) (Hsu et al., 2018), while the corresponding equatorial mass loss rate because of uncharged nano-dust as a result of atmospheric drag is unexpectedly large (about \(10^{4}\) kg s\({}^{-1}\)) according to Mitchell et al. (2018), Waite et al. (2018) and Perry et al. (2018). It has also been suggested that the entry of such a large number of particles into Saturn's atmosphere may have been transient or anomalous (Moses et al., 2023). If the dust influx observed by the Cassini spacecraft is a steady effect, such rapid equatorward ring rain would imply a young age of the Saturnian ring system (Crida et al., 2019). 
The new discovery of the nanodust transport mechanism in the equatorial plane of Saturn suggests that a similar effect might also take place in the rings of Uranus (and Neptune). The detection of H\({}_{2}\)O, CO and CO\({}_{2}\) in the upper atmosphere of Uranus has been interpreted in terms of the injection of exogenic oxygen-bearing material via meteoroid bombardment, comet impact, and ring dust (Feuchtgruber et al., 1997; Cavalie et al., 2014; Moses and Poppe, 2017; Lara et al., 2019). The mass injection rate, if steady, is on the order of 1-2 kg s\({}^{-1}\) according to the model calculations of Moses and Poppe (2017) and Lara et al. (2019). This means that the ring rain process could play an important role at Uranus, since the rings must be subject to constant micro-meteoroid bombardment as well. In this work, we apply the Saturnian ring scenario to the case of Uranian rings. We hypothesize a sparse plasma environment within the Uranus ring, where nanometer-sized dust particles typically exhibit a charge-to-mass ratio (\(q/m\)) between \(10^{-4}\) and \(10^{-9}\) e/amu. At an altitude of 10000 km with the Uranus magnetic field around 0.06 to 0.2 Gauss in the equatorial region, the ratio of the Lorentz force to gravity (\(F_{\rm Lorentz}/F_{\rm gravity}\)) is estimated to be between \(10^{3}\) and \(10^{-2}\). This suggests a significant influence of the Lorentz force, particularly on tiny dust grains. However, considering the methodologies proposed by Horanyi (1996) and Szego et al. (2014), the estimated charging timescale of the photoemission effect, which represents the duration for a dust grain to charge from 0 to +1e, is too long for a tiny dust grain to acquire a charge, so the influence of the Lorentz force is considered negligible in this study. The organization of the paper is as follows. In Section 2, the basic elements of the dust-air drag orbital calculation are described. The numerical results are analyzed in Section 3. A summary and discussion are given in Section 4. ## 2 Method Stellar occultation measurements from the Earth revealed that the light transmission of Uranus' atmosphere with a minor amount of hydrocarbon molecules was dominated by photo-absorption and Rayleigh scattering of H\({}_{2}\) and H atoms (Hudson, 1971; Mount et al., 1977; Mount and Moos, 1978; Smith and Hunten, 1990). Voyager UV occultation measurements of the upper atmosphere of Uranus provided first-hand information on the exospheric temperature (\(\sim\)800 \(\pm\)100 K) of the H\({}_{2}\) and H gas (Broadfoot et al., 1986; Herbert et al., 1987; Stevens et al., 1993). The nominal model of the H\({}_{2}\) and H number density profiles above the 1-bar pressure level together with the extrapolated curves to be used in the present study is shown in Figure 1. It is assumed that the exospheric gas particles are in corotation with the central planet with a corotation speed in the azimuthal direction given by \(v_{c}(\rho)\), where \(\rho\) is the distance perpendicular to the spin axis. If the particle motion is confined to the ring plane, we have \(\rho=r\). Between two successive collisions, the motion of a dust particle is determined by the gravitational force of the central planet. The average distance traveled between two collisions, the mean-free-path, can be estimated by knowing the corresponding mean-free-paths of collisions with the H\({}_{2}\) molecules and H atoms in the exosphere, namely, \(\lambda_{\rm H_{2}}\) and \(\lambda_{\rm H}\), respectively. 
Figure 1: The modeling number density profile of hydrogen compounds. The solid lines are models of Voyager 2 UV from Broadfoot et al. (1986), where atomic hydrogen is an extrapolation of the data measured near 2600 km altitude based on the temperature of 750 K from H\({}_{2}\). The dashed lines are the estimated number density of the gas with the same slope as the measurements. If \(n_{\rm H_{2}}\) is the number density of H\({}_{2}\) and \(n_{\rm H}\) that of the H atom, and \(\sigma\) is the geometrical cross section of the dust particles, then \(\lambda_{\rm H_{2}}=\frac{v_{\rm th}}{v_{\rm r}\,\sigma\,n_{\rm H_{2}}}\) and \(\lambda_{\rm H}=\frac{v_{\rm th}}{v_{\rm r}\,\sigma\,n_{\rm H}}\), respectively, where \(v_{\rm th}\) is the thermal velocity of the chosen gas particle, and \(v_{\rm r}=v_{\rm c}-v_{\rm d}\) is the difference between the local co-rotation velocity (\(v_{\rm c}\)) and the dust grain velocity (\(v_{\rm d}\)). The combined collisional mean-free-path is therefore given by \[\frac{1}{\lambda}=\frac{1}{\lambda_{\rm H}}+\frac{1}{\lambda_{\rm H_{2}}}.\] The distance (\(S\)) travelled by the dust grain before the next collision can then be estimated by generating a random number (\(P_{1}\)) between 0 and 1 and finding the value of \(S\) from the following equation, \[P_{1}=1-\exp\left(-\frac{S}{\lambda}\right).\] Deciding which gas species the dust grain will hit can be done by computing the ratios \(f_{1}=\frac{\lambda}{\lambda_{\rm H_{2}}}\) and \(f_{2}=\frac{\lambda}{\lambda_{\rm H}}\). We can generate another random number (\(P_{2}\)) and check whether it is smaller than \(f_{1}\). If yes, the dust grain will collide with an H\({}_{2}\) molecule. Otherwise, it will collide with an H atom. The Uranian ring system is quite extended, with its outer-most (\(\epsilon\)) ring at about 51,000 km and inner-most (U2R) ring at about 40,000 km. It is likely that dust particles generated by micrometeoroid bombardment or mutual inter-particle collisions will gradually move inward because of ballistic transport (e.g., Ip, 1983) or the exospheric gas drag effect. This dusty material will finally land at the inner boundary of the ring system defined by the 6 Ring at 41,500 km and be injected into the exosphere subsequently. After being released on the ring plane with an initial velocity equal to the Keplerian velocity, the motion of the dust grain will be subject to the planetary gravitational field and the collisional momentum transfer with the exospheric gas. In our Monte Carlo model of the nano-dust motion, the thermal motion of the exospheric H\({}_{2}\) and H is taken into consideration by including the 3D Maxwellian distribution of the thermal velocity in the momentum transfer calculation. As a result, small dust grains of the size of 1-3 nm will experience some amount of random scattering as they spiral downward. 
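The two sampling steps just described can be sketched in a few lines of Python (our illustration, not the authors' code; the mean-free-path values passed in are made up for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)

def free_path_and_partner(lam_H2, lam_H):
    """One Monte Carlo step: distance S to the next collision and the
    species (H2 or H) that the dust grain hits."""
    lam = 1.0 / (1.0 / lam_H2 + 1.0 / lam_H)   # combined mean free path
    P1 = rng.random()
    S = -lam * np.log(1.0 - P1)                # inverts P1 = 1 - exp(-S/lam)
    f1 = lam / lam_H2                          # probability of hitting H2
    species = 'H2' if rng.random() < f1 else 'H'   # note f1 + f2 = 1
    return S, species

# Illustrative (assumed) mean free paths in km:
for _ in range(3):
    print(free_path_and_partner(lam_H2=120.0, lam_H=300.0))
```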
The post-collision velocities of the dust grain (\({\bf v}_{\rm d}^{\prime}\)) and of the gas molecule (\({\bf v}_{\rm g}^{\prime}\)) can be written as: \[{\bf v}_{\rm d}^{\prime}=\frac{m_{g}({\bf v}_{g}-{\bf v}_{d})}{m_{d}+m_{g}}+\frac{m_{d}{\bf v}_{d}+m_{g}{\bf v}_{g}}{m_{d}+m_{g}}\] \[{\bf v}_{\rm g}^{\prime}=\frac{m_{d}({\bf v}_{d}-{\bf v}_{g})}{m_{d}+m_{g}}+\frac{m_{d}{\bf v}_{d}+m_{g}{\bf v}_{g}}{m_{d}+m_{g}}\] where \(m_{d}\) is the dust mass, \(m_{g}\) can be either the mass of (1) H\({}_{2}\) or of (2) H, \({\bf v}_{\rm g}\) is the corresponding gas velocity, and \({\bf v}_{\rm d}\) is the dust velocity before the collision. ## 3 Results Figure 2. (a) shows the profiles of H and H\({}_{2}\) collision frequency distributions, respectively, of a 1-nm grain after being released from a large ring particle. Despite the number density of hydrogen (H) being higher than that of hydrogen molecules (H\({}_{2}\)) at altitudes above 8000 km, the collision frequency for both with the single dust particle is low due to the sparse nature of the atmosphere at such high altitudes, resulting in less than 10 collisions per 50 km. However, as the dust particle descends to altitudes around 4000 km, the collision frequency increases significantly, with more than 100 collisions per 50 km occurring because of the increased density of the atmosphere. A significant part of the braking of the dust grain motion takes place between these two altitudes. It is also interesting to note that because of collisional scattering by the exospheric H\({}_{2}\) molecules with thermal motion, the 1-nm nano-dust grain could have a small random speed (\(<\) 75 m s\({}^{-1}\)) at the end of its inward spiral. As in analyses of the Cassini dust measurements during its Grand Finale (Mitchell et al., 2018), the orbital evolution of the nano-dust particles can be viewed either in the Sun-fixed frame or the rotating frame of the planet. Figure 3 shows the trajectory of an uncharged dust grain with the radius of 1 nm in (a) the Sun-fixed frame and (b) Uranus' co-rotating frame. The dust grain slowly lowers its altitude due to random collisions with the exospheric gas particles. Figure 2. (b) shows how, at first, the total speed of a 1-nm dust grain increases because of gravitational acceleration. But the trend reverses and the inward orbital spiraling speeds up as collisions become more frequent, until the grain is finally captured into full co-rotation with the planetary atmosphere at about 4000 km altitude. While the initial speed of 11.81 km s\({}^{-1}\) from circular Keplerian motion is higher than the co-rotation speed of 4.21 km s\({}^{-1}\), the azimuthal velocity component will gradually increase before being reduced as a consequence of the gas drag effect. As the dust falls below the exobase at about 6500 km-7000 km altitude, the number density of the atmospheric neutral gas is relatively high. The probability of particle collision increases greatly, causing the azimuthal velocity component to be strongly decelerated. The relative velocity of the dust and Uranus' atmosphere will eventually become close to zero at about 3000 km altitude. The radial velocity component of the dust particle also increases to a maximum value of 2.3 km s\({}^{-1}\) before being slowed down to nearly zero subsequently (Figure 2. (b), dashed red line). Figure 4 (a)-(c) compare the dynamical behaviors of dust grains of three other radii, i.e., 3 nm, 10 nm and 30 nm. It can be seen that they generally follow the same pattern. 
The only difference is that the larger the dust grains, the lower the altitude at which they are totally assimilated into co-rotation with the planetary atmosphere. Finally, the time dependence of the dust grains is summarized in Figure 5. Examples of the four cases with different grain sizes are shown. In our Monte Carlo model calculations, hundreds of simulation runs have been performed. The results show that the orbital histories of small grains with size \(\lesssim\) 3 nm are highly dispersed because their motion can be significantly influenced by the thermal motion of the gas particles. On the other hand, larger dust grains experienced lesser changes in their motion. Further analysis of the simulation results revealed that the infalling time of 1-nm dust particles was very short, with a duration on the order of 32.5 \(\pm\) 18.8 hours. During this time, the dust particles mostly rotate at an altitude of 12,000 km for 10 to 35 hours, before spiraling below 2000 km for 7-8 hours. In comparison, 30-nm dust particles will take 2770.0 \(\pm\) 213.9 hours to reach a lower altitude. If dust grains with a radius of 30 nm reach altitudes below 10,000 km, they would fall to an altitude of 1,000 km above Uranus in only 20 hours. The test of dust with other sizes showed that the lifetime of 3 nm was 97.0 \(\pm\) 213.9 hours, and that of 10 nm was 525.2 \(\pm\) 94.7 hours. Figure 2: (a) Number distribution of the hydrogen collided by the 1 nm-sized dust particles from the altitude of 16,000 km down to the Uranian lower atmosphere. (b) The resulting velocities of the dust grains with the radius of 1 nm (mass of about 3800 u) colliding with atmospheric atoms. ## 4 Summary and Discussion Studying the behavior of nano-dust particles in the exosphere of Uranus can provide valuable insights into the inward transport of dusty material from the Uranian ring system. This study suggests that as uncharged dust grains with radii ranging from 1 nm to 30 nm spiral inward, the collision frequency of the dust particles with hydrogen atoms and hydrogen molecules increases. A 1-nm dust grain slowly lowers its altitude because of random collisions with the exospheric gas particles. The gravitational acceleration initially increases the dust grain's total velocity, but the trend reverses as collisions become more frequent. The inward orbital spiraling also speeds up until the grain is captured into co-rotation with the Uranian atmosphere below the altitude of 4000 km. The collisional scattering by the exospheric molecules with thermal motion gives the nano-dust grain a small random speed of about 0.075 km s\({}^{-1}\) at lower altitudes of Uranus. The dynamical behaviors of dust grains with radii of 3 nm, 10 nm, and 30 nm were also analyzed. The results revealed that the trajectories of small dust grains with a radius \(\lesssim\) 3 nm are characterized by a high degree of dispersion due to the significant influence of the thermal motion of exospheric gas particles on their dynamics, while larger dust grains experience lesser changes in their motion. The infalling time of 1-nm dust particles is very short, with a duration of about 32.5 \(\pm\) 18.8 hours, and larger dust grains take longer to reach lower altitudes. The study also highlights the effect of gas drag on the azimuthal velocity component of the dust particles, which slows them down as they fall below the exobase. 
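As a quick sanity check (our addition, not from the paper) of the post-collision formulas of Section 2, the update can be verified numerically to conserve momentum and kinetic energy, as an elastic collision must; the velocities below are illustrative values based on the speeds quoted in the text:

```python
import numpy as np

def collide(m_d, v_d, m_g, v_g):
    """Post-collision velocities v_d', v_g' from Section 2 (vector form)."""
    M = m_d + m_g
    v_cm = (m_d * v_d + m_g * v_g) / M        # center-of-mass velocity
    v_d_new = m_g * (v_g - v_d) / M + v_cm
    v_g_new = m_d * (v_d - v_g) / M + v_cm
    return v_d_new, v_g_new

m_d, m_g = 3800.0, 2.0                        # ~1 nm grain vs H2, in amu
v_d = np.array([11.81, 0.0, 0.0])             # km/s, Keplerian start
v_g = np.array([4.21, 0.5, -0.3])             # km/s, corotation + thermal kick
v_d2, v_g2 = collide(m_d, v_d, m_g, v_g)

print(np.allclose(m_d * v_d + m_g * v_g, m_d * v_d2 + m_g * v_g2))   # momentum conserved
print(np.isclose(m_d * v_d @ v_d + m_g * v_g @ v_g,
                 m_d * v_d2 @ v_d2 + m_g * v_g2 @ v_g2))             # kinetic energy conserved
```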
Our toy-model calculation described here is meant to show the possible dynamical evolution of the nano-size dust grains ejected from the inner ring region. It shows that the extended exosphere of Uranus can be very effective in drawing in the oxygen-bearing material, thus providing a source of the H\({}_{2}\)O, CO and CO\({}_{2}\) detected in the upper atmosphere of Uranus (Moses and Poppe, 2017; Lara et al., 2019; Feuchtgruber et al., 1999). As in the case of the Saturnian rings, the corresponding dust infalling rates, hopefully to be measured by a Uranus Orbiter (Cohen et al., 2022), would provide constraints on the lifetime and hence the origin of the ring system. An interesting issue which is outside the scope of the present work concerns the dynamics of charged dust particles. Because of the highly offset and tilted magnetic field configuration, sub-micron size particles emitted from the Uranian ring system could be transported to a wide region of the planetary magnetosphere. Altogether, the Uranian ring system could have interesting connections to the composition of the upper atmosphere and to magnetospheric processes, in spite of its tenuous nature. Overall, the study provides a framework for future research in this area. It demonstrates the interaction between exospheric gas particles and dust particles, and highlights the need for further investigation into the processes driving the inward transport of dusty material from the Uranian ring system. A similar dust infall process is also likely to take place in the vicinity of the Neptunian ring system. Figure 3: The modelled trajectory profile of a dust particle with the radius of 1 nm colliding with the exospheric gas particles and falling toward the center of Uranus from the 6 ring, shown in two frames of reference. ## Acknowledgements We are thankful for the comments and suggestions from the anonymous referees to help improve the quality of this paper. This work was supported by the National Science and Technology Council (NSTC, Taiwan), grant NO. 111-2112-M-008-014-.
2309.10497
Sagnac effect in a rotating ring with Dirac fermions
The observation of the Sagnac effect for massive material particles offers a significant enhancement in sensitivity when compared to optical interferometers with equal area and angular rotation velocity. For this reason, there have been suggestions to employ solid-state interferometers that rely on semiconductors and graphene. We investigate the Sagnac effect in Dirac materials governed by the relativisticlike quasiparticle dispersion law and show that the fringe shift is still determined by the mass of a free electron. This confirms that graphene is indeed a promising material for creating solid-state Sagnac interferometers. Considering monolayer graphene with its linear dispersion law and comparing it with light provides a deeper understanding of the Sagnac effect.
A. Yu. Fesh, Yu. V. Shtanov, S. G. Sharapov
2023-09-19T10:18:54Z
http://arxiv.org/abs/2309.10497v3
# The Sagnac effect in a rotating ring with Dirac fermions ###### Abstract The observation of the Sagnac effect for massive material particles offers a significant enhancement in sensitivity when compared to optical interferometers with equal area and angular rotation velocity. As a result, there have been suggestions to employ solid-state interferometers that rely on semiconductors and graphene. However, in the case of monolayer graphene, its quasiparticles exhibit a linear dispersion, thus making the Sagnac effect in graphene comparable to that for light. We investigate the Sagnac effect in the Dirac materials governed by the relativistic dispersion law and find the value of the fringe shift. The analysis reveals that optimal sensitivity is achieved in materials featuring a reduced value of Fermi velocity. Notably, the sign of the fringe shift depends on the nature of the charge carriers - whether they are electrons or holes. _Introduction._ Physical phenomena associated with rotation possess a captivating allure that spans multiple levels. One of the fundamental illustrations of this allure lies in the impossibility of establishing a standard clock synchronization procedure along a closed curve when the metric is non-static, as is the case of a rotating frame of reference (see the textbooks [1; 2] and [3]). The Sagnac effect refers to the phenomenon where a phase shift is observed between two coherent beams that travel along opposite paths within an interferometer situated on a rotating disk (see Refs. [4; 5; 6; 7] for the reviews). This phase shift, which was first demonstrated for light by Sagnac [8] in 1913, is an intrinsically relativistic effect. Thus, it can be essentially viewed as a consequence of the impossibility of synchronization of clocks along the circumference of the rotating disk. It is not restricted to light waves, but is observed for electron waves in vacuum [9], neutrons [10] and atoms [11] (see also the latest work [12]). Moreover, the observation of the Sagnac phase shift for massive particles in solids, specifically superconducting Cooper pairs, dates back to as early as 1965 [13]. While practical applications of the Sagnac effect currently rely on light waves, there is a compelling physical explanation as to why massive particles are significantly more advantageous for its realization. The Sagnac fringe shift, denoted as \(\Theta_{\rm S}\), with respect to the fringe position for the stationary interferometer, reads [6; 9; 14] \[\Theta_{\rm S}=\frac{4EA\Omega}{\hbar c^{2}}. \tag{1}\] This formula is applicable to waves comprising both massless and massive particles, and \(E\) represents the total energy of a corresponding particle. The value \(A\) denotes the area enclosed by the light or particle beams in the interferometer, \(\Omega\) is the angular velocity of the interferometer's rotation within an inertial frame, \(\hbar\) is the reduced Planck constant, and \(c\) is the free-space velocity of light. This equation is written neglecting a small relativistic correction and under the assumption that the plane of the interferometer is perpendicular to the axis of rotation. Substituting in Eq. (1) the energy \(E=\hbar\omega\), where \(\omega\) is either the frequency of the light or the frequency of the de Broglie wave of a material particle, we recover the standard formula for the Sagnac phase shift [4; 5; 6; 7], \(\Theta_{\rm S}=4\omega A\Omega/c^{2}\). 
When considering light and using the dispersion relationship \(\omega=2\pi c/\lambda\), where \(\lambda\) represents the vacuum wavelength of light, we arrive at another commonly used form for the Sagnac phase shift [4; 5; 6], \[\Theta_{\rm S}=\frac{8\pi A\Omega}{c\lambda}. \tag{2}\] In the case of slow massive particles (nonrelativistic case), the energy \(E=mc^{2}\) is associated with their rest mass \(m\), and the phase fringe acquires the following form \[\Theta_{\rm S}=\frac{4mA\Omega}{\hbar}. \tag{3}\] Comparing the phase shift for the matter-wave and optical interferometers, one finds that for equal area and angular velocity the phase shift is enhanced by a factor \(mc^{2}/(\hbar\omega)\) [15]. For atoms, the matter-wave interferometer is significantly more sensitive to rotation, with a factor reaching the value of \(10^{10}\). This constitutes the primary reason why the existing optical gyroscopes necessitate the utilization of either several kilometers of optical fiber or a substantial area to achieve the necessary sensitivity. Conversely, the sensitivity enhancement for matter-waves has led to proposals to utilize cold atom interferometers in the search for smaller signals beyond Earth rotation [12]. For free electrons, this factor also reaches the value \(10^{6}\) and, as mentioned earlier, the Sagnac effect was observed using an electron interferometer in vacuum [9]. There is a possibility of realizing the Sagnac effect in the solid state by employing a serial array of mesoscopic ring-shaped electron interferometers, which was discussed in Refs. [16; 17; 18; 19]. It has to be noted that the simulations for ring arrays discussed in [16; 17; 18] are conducted with the assumption that electrons in solids have an effective mass, \(m^{*}\). Consequently, the enhancement factor \(m^{*}c^{2}/(\hbar\omega)\) is slightly reduced, estimated to be in the range of \(10^{5}\) to \(10^{6}\). Moreover, as highlighted in [18; 19], graphene emerges as a promising material for electron interferometry, attributed to its extraordinary electronic properties. Indeed, a recent experiment [20] on Aharonov-Bohm oscillations in ring-shaped gated bilayer graphene provides further confirmation of this potential. However, at least monolayer graphene belongs to the new class of Dirac materials with a zero effective carrier mass \(m^{*}\) and linear dispersion relation as for light. At first glance, due to the assertion discussed e.g. in Ref. [6] that the phase shift remains independent of the phase velocity of the wave, the Sagnac effect in graphene seems to be analogous to the case of light. The objective of the present work is to examine this issue, also taking into account that the density of carriers in graphene is finite. More broadly, the goal is to investigate the Sagnac effect in relativistic Dirac materials and to clarify the distinctions from the previously known cases. _Model and formalism_. We consider the Dirac materials characterized by the following dispersion relation: \[\epsilon(\mathbf{k})=\pm\sqrt{\hbar^{2}v^{2}k^{2}+\Delta^{2}}-\mu, \tag{4}\] where \(\mathbf{k}\) represents the wave vector counted from the Dirac points, which can be either 2D or 3D. However, since we are examining a ring in the planar geometry, its \(z\)-component, denoted as \(k_{z}\), can be effectively disregarded by setting it to zero. Also in Eq. (4) \(v\) is the Fermi velocity, \(\mu\) is the chemical potential and \(\Delta\) is the gap in the quasiparticle spectrum. 
In the case of graphene, the value of \(\mu\) (including the change of the character of carriers, either electrons or holes) is tunable by applying the gate voltage to the devices, and \(\mu>0\) corresponds to electrons. The gap term \(\Delta\) is present, in particular, in the Hamiltonian derived by Wolf (see Ref. [21] for a review) for Bi and similar effective Hamiltonians describing other 3D Dirac materials. It can also be induced in a graphene monolayer by placing it on top of hexagonal boron nitride (G/hBN). As already mentioned, the quasi-relativistic spectrum (4) follows, for example, from the Wolf Hamiltonian as well as other effective low-energy 2D or 3D Dirac Hamiltonians used to describe Dirac materials (for graphene see, e.g. the review [22]). To focus on the Sagnac effect for the quasiparticles with the relativistic-like dispersion, we restrict ourselves to considering the squared Dirac Hamiltonians, neglecting the coupling of the pseudospin degree of freedom with the rotation of the frame [23]. Thus we assume that a free quasiparticle in the electron subsystem of the Dirac material in the inertial frame of reference obeys the following wave equation \[\left(\square^{\prime}+\frac{\Delta^{2}}{\hbar^{2}v^{2}}\right)\psi(t^{\prime},\mathbf{r}^{\prime})=0,\qquad\square^{\prime}\equiv\frac{1}{v^{2}}\frac{\partial^{2}}{\partial t^{\prime 2}}-\triangle^{\prime}. \tag{5}\] Here \(\square^{\prime}\) and \(\triangle^{\prime}\) are the d'Alembertian and Laplace operators, respectively, and \(\psi(t^{\prime},\mathbf{r}^{\prime})\) is the electron wave function in the inertial frame of reference denoted as primed. The chemical potential should also be included in Eq. (5) by the standard prescription for relativistic systems, \(i\hbar\partial_{t}\to i\hbar\partial_{t}+\mu\). Seeking a solution of Eq. (5) in the form \(\psi\sim\exp(-i\epsilon t^{\prime}/\hbar+i\mathbf{kr}^{\prime})\), one reproduces the spectrum (4). Clearly, relating the energy gap \(\Delta\) to the mass, \(m=\Delta/v^{2}\), and setting \(v=c\) and \(\mu=0\), the equation (5) reduces to the usual Klein-Gordon-Fock (KGF) equation. We shall consider the Sagnac effect for the quasiparticles characterized by the wave equation (5) from the point of view of a co-rotating observer employing the approach of Ref. [14]. It can be traced back to the explanation of the Sagnac effect proposed by Langevin in the framework of general relativity in 1921 [24] (see also Refs. [4; 7] for the reviews and the textbook [1]). The invariant interval in polar coordinates \((t^{\prime},r^{\prime},\phi^{\prime})\) in the inertial rest frame is \[ds^{\prime 2}=c^{2}dt^{\prime 2}-dr^{\prime 2}-r^{\prime 2}d\phi^{\prime 2}, \tag{6}\] where, as mentioned above, we restricted ourselves to the planar geometry. The transformation to a new non-primed frame of reference rotating about the \(z\)-axis with angular velocity \(\Omega\) is done by \(t^{\prime}=t\), \(r^{\prime}=r\) and \(\phi^{\prime}=\phi+\Omega t\), so the invariant interval reads \[ds^{2}=c^{2}\left(1-\frac{\Omega^{2}r^{2}}{c^{2}}\right)dt^{2}-\frac{2r^{2}\Omega}{c}\,c\,dt\,d\phi-dr^{2}-r^{2}d\phi^{2}. \tag{7}\] The corresponding contravariant metric tensor (\(\mu,\nu=0,1,2=t,r,\phi\)) is \[g^{\mu\nu}=\left(\begin{array}{ccc}1&0&-\frac{\Omega}{c}\\ 0&-1&0\\ -\frac{\Omega}{c}&0&-\frac{1}{r^{2}}+\frac{\Omega^{2}}{c^{2}}\end{array} \right). \tag{8}\] Note that \(g=\det[g_{\mu\nu}]=r^{2}>0\) because \(2+1\) dimensional space is considered. 
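A quick symbolic cross-check (our addition, not part of the paper): inverting the covariant metric read off from Eq. (7), written in the coordinates \(x^{\mu}=(ct,r,\phi)\), reproduces the contravariant tensor (8) and the determinant \(g=r^{2}\). A minimal sympy sketch:

```python
import sympy as sp

c, r, Omega = sp.symbols('c r Omega', positive=True)

# Covariant metric from Eq. (7), with x^0 = c t so that g_00 is dimensionless:
g = sp.Matrix([
    [1 - Omega**2 * r**2 / c**2, 0, -Omega * r**2 / c],
    [0,                         -1,  0                ],
    [-Omega * r**2 / c,          0, -r**2             ],
])

expected = sp.Matrix([
    [1,          0, -Omega / c                  ],
    [0,         -1,  0                          ],
    [-Omega / c, 0, -1 / r**2 + Omega**2 / c**2 ],
])
print(sp.simplify(g.inv() - expected) == sp.zeros(3, 3))  # True: matches Eq. (8)
print(sp.simplify(g.det()))                               # r**2, i.e. g = r^2 > 0
```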
To elucidate the propagation of electron waves within a rotating coordinate frame, characterized by the metric (8), it becomes necessary to employ the following equation \[\left(\square+\frac{\Delta^{2}}{\hbar^{2}v^{2}}\right)\psi(t,\mathbf{r})=0, \tag{9}\] with the generalized d'Alembertian operator [25] \[\square\psi=\nabla_{\mu}\nabla^{\mu}\psi=\frac{1}{\sqrt{g}}\partial_{\mu}( \sqrt{g}g^{\mu\nu}\partial_{\nu}\psi). \tag{10}\] Recall that, by definition, the covariant derivative acting on a scalar is \(\nabla_{\mu}\equiv\partial_{\mu}=\frac{\partial}{\partial x^{\mu}}\), where \(x^{\mu}=(vt,r,\phi)\). It is easy to obtain that the operator \(\square=\square^{0}+\square^{\Omega}\) with \[\square^{0} =\frac{1}{v^{2}}\frac{\partial^{2}}{\partial t^{2}}-\frac{1}{r} \frac{\partial}{\partial r}\left(r\frac{\partial}{\partial r}\right)-\frac{1}{r^ {2}}\frac{\partial^{2}}{\partial\phi^{2}}, \tag{11}\] \[\square^{\Omega} =-\frac{2\Omega}{vc}\frac{\partial^{2}}{\partial t\partial\phi}+ \frac{\Omega^{2}}{c^{2}}\frac{\partial^{2}}{\partial\phi^{2}}.\] Obviously, for \(\Omega=0\) Eq. (9) reduces to Eq. (5) written in the rest frame in polar coordinates. Also for \(v=c\) and \(\mu=0\) Eq. (9) reduces to the KGF equation in the rotating frame considered in Ref. [14] to investigate the Sagnac effect and in Ref. [26] to study the relativistic superfluidity. There is, however, an essential difference in the approaches used in [14] and [26]. The KGF equation within the rotating frame was obtained in [14] by replacing \(\partial t\to\partial t-{\bf V}\cdot\nabla_{\bf r}\), where \({\bf V}={\bf\Omega}\times{\bf r}\) is the local rotating velocity. Consequently, its extension to the case \(v\neq c\) is obscure because it is not enough to replace \(c\) by \(v\) in the KGF equation in the rest frame. On the other hand, the derivation in Ref. [26] utilizes the operator (11) as done in this work. The first term of \(\square^{\Omega}\) with the mixed derivative, which is sensitive to the rotational direction, corresponds to the Coriolis force. Meanwhile, the second one is related to the centrifugal force [26]. _The \(v=c\) case._ Before going ahead with the analysis of Eq. (9) for the general \(v\neq c\) and finite \(\mu\) case, it is instructive to recapitulate the derivation of Eqs. (1) and (3) made in Ref. [14]. The signals designated as \(\pm\), which propagate in the clockwise and counterclockwise directions around a circle of radius \(R\), are considered. These signals are described by the solutions of Eq. (9) which depend on \(t\) and the angular variable \(\phi\) with \(\Delta=mc^{2}\). The solutions which have the frequency \(\omega\) in the local Lorentz frame at the source are \[\psi_{\pm}(t,\phi)=\exp\left[i\gamma\left(\pm k+\frac{\Omega\omega R}{c^{2}} \right)R\phi-i\omega t/\gamma\right] \tag{12}\] with \[\gamma=\left(1-\frac{\Omega^{2}R^{2}}{c^{2}}\right)^{-1/2}. \tag{13}\] The wave number in Eq. (12) is determined by the KGF dispersion relationship \(\omega^{2}=c^{2}k^{2}+m^{2}c^{4}/\hbar^{2}\). Let's assume that the two counter-propagating signals originate in phase from the same source at \(\phi=0\). Subsequently, these signals are detected after completing a full round-trip: the counterclockwise signal is detected at \(\phi=2\pi\), and the clockwise one at \(\phi=-2\pi\). The phase difference between the two detected signals is therefore \(\Theta_{\rm S}=4\pi\gamma\omega\Omega R^{2}/c^{2}\). 
Considering that the circular interferometer has an area of \(A=\pi R^{2}\), one finds that the last expression reproduces Eq. (1) rewritten via the frequency \(\omega\) up to the relativistic factor \(\gamma\). This can be further elaborated upon by examining whether Eq. (1) is written for the rest or rotating frame (as discussed in the review by Post [4]). Nevertheless, this distinction is not crucial to our discussion, as \(\Omega R\ll c\) by several orders of magnitude, rendering the corrections to the fringe shift arising from \(\gamma\) (which are of the order \(\Omega^{2}R^{2}/c^{2}\)) practically indiscernible in experimental observations. In the nonrelativistic limit, the phase fringe shift characterized by Eq. (3) emerges as the KGF equation reduces to the Schrodinger equation. This can be obtained by transforming away the quickly oscillating rest-energy dependence, \(\psi=\chi\exp(-imc^{2}t/\hbar)\), which results in the Schrodinger equation in the rotating frame: \[i\hbar\frac{\partial\chi}{\partial t}=\frac{1}{2m}\left(-i\hbar\nabla-m{\bf V} \right)^{2}\chi-\frac{1}{2}m{\bf V}^{2}\chi. \tag{14}\] Its solution for the circularly propagating waves [14] results in the phase fringe (3). It follows from Eq. (14) that a profound analogy exists between Aharonov-Bohm (AB) oscillations in mesoscopic rings and the Sagnac effect in the nonrelativistic case, as described by Eq. (3). The rotation of a thin ring, and thus the Sagnac effect, can be associated with the AB effect occurring in a uniform (Larmor) magnetic field, \({\bf H}=2m{\bf\Omega}/e\), or expressed in terms of the effective AB flux penetrating the ring as \(\Phi_{\Omega}=2mc\Omega A/e\) [14; 27] (for additional references and discussion see the reviews [5; 6]). Here \(-e<0\) is the electron charge. One can also observe that Eq. (3) can be derived directly by substituting the nonrelativistic limit of the KGF dispersion relation, given by \(\omega\approx mc^{2}/\hbar+\hbar k^{2}/(2m)\), into the solution (12). It is worth recalling that de Broglie's original consideration of matter waves relied on special relativity and contained the de Broglie frequency, \(\omega=mc^{2}/\hbar\), associated with a particle at rest. Although the Schrodinger theory does not contain such a quantity explicitly, it appears when one calculates the phase difference between the two counter-propagating wave packets [28]. Since the de Broglie frequency is not a commonly used quantity, it is convenient to express the phase fringe for massive particles in the form (2). For example, one can reformulate Eq. (3) for electrons in the same manner as Eq. (2), but with the photon wavelength \(\lambda\) replaced by the Compton wavelength, \(\lambda_{C}=2\pi\hbar/(m_{e}c)\approx 0.0243\,\mbox{\AA}\), where \(m_{e}\) represents the electron mass. This representation simplifies the comparison between interferometers using light and matter, both having the same area and angular velocity. For example, when comparing blue light with a frequency \(\omega_{B}\) and a wavelength of \(\lambda_{B}=450\,\mbox{nm}\) to electrons, one can estimate the enhancement factor as \(m_{e}c^{2}/\hbar\omega_{B}=\lambda_{B}/\lambda_{C}\approx 1.85\times 10^{5}\), in agreement with the aforementioned estimation. _The case of the Dirac materials, \(v<c\)._ Now we may return to the consideration of a rotating ring made of the Dirac material. The Dirac electron subsystem in the rotating ring is described by Eq. 
(9) with a finite chemical potential introduced by the prescription given below Eq. (5). We neglect the effect of deformation of the ring due to rotation and consider a one-dimensional rigid ring of radius \(R\), so the wave function \(\psi(t,\phi)\) is independent of the radial coordinate as before. Then the solutions of Eq. (9) for the two counter-propagating electron waves [cp. Eq. (12)] read \[\psi_{\pm}(t,\phi)=\exp\left[i\gamma\left(\pm k+\frac{\Omega(\epsilon+\mu)R}{ \hbar cv}\right)R\phi-i\frac{\epsilon}{\hbar}\frac{t}{\gamma}\right] \tag{15}\] with the relativistic factor \(\gamma\) given by Eq. (13) and the energy \(\epsilon=\epsilon({\bf k})\) and wave number \(k\) obeying the dispersion law (4) for the Dirac quasiparticles. The phase difference between the two counter-propagating electron waves is therefore \(\Theta_{\rm S}=4\gamma(\epsilon+\mu)\Omega R^{2}/(\hbar cv)\). Let's begin by examining the case where \(\mu=\Delta=0\), which corresponds to a quasiparticle exhibiting a linear dispersion relationship given by \(\epsilon=\pm\hbar vk\). Expressing the wave vector \(k\) via the wavelength \(\lambda_{\rm gr}\) by the relation \(k=2\pi/\lambda_{\rm gr}\), one again arrives at the expression (2) with \(\lambda\) replaced by \(\lambda_{\rm gr}\). As was anticipated, the phase shift remains independent of the phase velocity \(v\) of the wave. As pointed out in Ref. [6], the Sagnac effect holds not only for quantum-mechanical particles (photons, electrons, etc.) but also for ordinary acoustic waves. The physical meaning of Eq. (2) considered for light and material particles is further elucidated when one rewrites it as follows [5], \(\Theta_{\rm S}=4\pi(V/c)(P/\lambda)\), where \(V=\Omega R\) and \(P=2\pi R\). The fringe shift is proportional to the number of the corresponding wavelengths along the wave path. This interpretation of the phase shift \(\Theta_{\rm S}\) is always valid, although the meaning of the wavelength \(\lambda\) has to be clarified for each case. Let us proceed with the cases of a finite \(\mu\) and/or \(\Delta\). If the temperature \(T\) is much smaller than \(|\mu|\), only the quasiparticle excitations near the Fermi surface contribute to the transport. For the spectrum given by Eq. (4) the Fermi surface is determined by the condition \(\epsilon(k_{F})=0\), where \(k_{F}\) represents the Fermi wave vector. Then, neglecting the relativistic factor (\(\gamma\approx 1\)), the Sagnac fringe shift for the Dirac fermions reads \[\Theta_{\rm S}=\frac{4\mu A\Omega}{\hbar vc}=\frac{4\,{\rm sgn}\,(\mu)m_{c}A \Omega}{\hbar}\frac{v}{c},\qquad m_{c}=\frac{|\mu|}{v^{2}}. \tag{16}\] Here in the second equality \(\Theta_{\rm S}\) is written in terms of a fictitious "relativistic" mass, \(m_{c}\), which plays the role of the cyclotron mass in the Lifshitz-Kosevich formula [29] and also allows one to rewrite Eq. (16) in the form resembling the nonrelativistic expression (3). As was already mentioned, in graphene \(\mu\) and thus \(m_{c}\) are easily tunable by the gate voltage. Interestingly, the value of \(\Theta_{\rm S}\) turns out to be sensitive to the sign of \(\mu\), or the character of the carriers. At first glance, this seemingly paradoxical result is actually sensible, as it arises from the fact that the motion of hole carriers corresponds to the rotation of electrons in the opposite direction. The representation of \(\Theta_{\rm S}\) in terms of \(m_{c}\) also turns out to be very convenient for an estimate of the phase fringe. Indeed, one finds in Ref. 
[29] that for the carrier density \(n\approx 7\times 10^{12}\,{\rm cm}^{-2}\) the mass \(m_{c}\approx 0.06m_{e}\). Also considering that in graphene, \(c/v\approx 300\), one can estimate that \(\Theta_{\rm S}^{e}/\Theta_{\rm S}^{\rm gr}=cm_{e}/(vm_{c})\approx 5\times 10^{3}\). Alternatively, when comparing with blue light, \(\Theta_{\rm S}^{\rm gr}/\Theta_{\rm S}^{B}\approx 37\). This enhancement factor is not significant enough to make graphene attractive for applications. However, a more substantial value could be potentially achieved by further increasing the carrier density. In contrast, there is a 3D topological insulator, specifically Bi\({}_{2}\)Te\({}_{3}\), exhibiting a low Fermi velocity [30]. While the Fermi energy in Bi\({}_{2}\)Te\({}_{3}\) is approximately 10 times lower than in graphene, the Fermi velocity is \(v\approx 3260\,{\rm m/s}\), which is over \(10^{2}\) times smaller than in graphene. This significant difference makes Bi\({}_{2}\)Te\({}_{3}\) and similar materials attractive for electron interferometry. Equation (16) is valid for both gapless and gapped cases. In the former, \(\Delta=0\) case, the Fermi wave vector, \(k_{F}\), is determined by the relationship \(\hbar vk_{F}=|\mu|\). Once again, it is evident that Eq. (16) can be reformulated in the manner of Eq. (2), utilizing the Fermi wavelength, \(\lambda_{F}=2\pi/k_{F}\). Using the relationship between the carrier imbalance \(n\) and chemical potential for graphene, \(|n|=(\mu^{2}-\Delta^{2})/(\pi\hbar^{2}v^{2})\) (see e.g. Ref. [22]), one obtains for \(\Delta=0\) that \(\lambda_{F}=2\sqrt{\pi/|n|}\). Taking \(n=7\times 10^{12}\,{\rm cm}^{-2}\) one gets \(\lambda_{F}\approx 13.5\,{\rm nm}\). This corresponds to the enhancement factor \(\lambda_{B}/\lambda_{F}\approx 33\), which is quite close to the previous estimate made in terms of \(m_{c}\). The opening of the gap \(\Delta\) in graphene can be taken into account by expressing the chemical potential via the carrier imbalance, \(\mu^{2}=\Delta^{2}+\pi\hbar^{2}v^{2}|n|\). This relation proved to be useful for analyzing the dependence \(\mu(n)\) in G/hBN structures [31]. In particular, for \(n\approx 0\) one obtains that the fringe shift \(\Theta_{\rm S}=4\,{\rm sgn}\,(\mu)\Delta A\Omega/(\hbar vc)\). To produce the same phase fringe as light with a photon energy \(\hbar\omega\), the value of the gap is \(\Delta=\hbar\omega(v/c)\). Accordingly, for blue light with \(\hbar\omega_{B}\approx 2.76\,{\rm eV}\), to reach the same sensitivity in graphene the gap has to be \(\Delta\approx 9\,{\rm meV}\), while in the experiment [31] on G/hBN \(\Delta\approx 3.4\,{\rm meV}\). However, the inclusion of the gap, along with a finite carrier density, and more notably, the reduction in the Fermi velocity, \(v\), definitely enhances the sensitivity. _Conclusion._ To conclude, we have obtained analytic expressions for the Sagnac fringe shift in the Dirac materials. The direction of the shift depends on the nature of the charge carriers - whether they are electrons or holes. When considering graphene, the enhancement factor is not as substantial as that achievable in conventional semiconducting materials with finite effective carrier masses. Our analysis illustrates that the most significant enhancement factor values are attainable in materials characterized by a reduced Fermi velocity. We would like to thank the Armed Forces of Ukraine for providing security to perform this work. 
The authors acknowledge support by the National Research Foundation of Ukraine grant (2020.02/0051) "Topological phases of matter and excitations in Dirac materials, Josephson junctions and magnets". We would like to thank Iu.A. Chernii, E.V. Gorbar, V.P. Gusynin, A.A. Kordyuk, E.G. Len, V.M. Loktev, and A.A. Varlamov for the numerous stimulating and enlightening discussions.
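As a numerical footnote (our addition, not part of the paper), the order-of-magnitude estimates quoted in the text for free electrons, graphene, and blue light can be reproduced in a few lines; all parameter values below are taken from the text:

```python
import numpy as np

hbar = 1.054571817e-34    # J s
c    = 2.99792458e8       # m/s
m_e  = 9.1093837015e-31   # kg

lam_B   = 450e-9                        # blue-light wavelength, m
lam_C   = 2 * np.pi * hbar / (m_e * c)  # Compton wavelength
print(lam_B / lam_C)                    # ~1.85e5: free electrons vs blue light

v = c / 300                             # graphene Fermi velocity
n = 7e12 * 1e4                          # carrier density, m^-2
lam_F = 2 * np.sqrt(np.pi / n)          # Fermi wavelength for Delta = 0
print(lam_F * 1e9)                      # ~13.4 nm (text quotes 13.5 nm)
print(lam_B / lam_F)                    # ~33: graphene vs blue light

m_c = hbar * np.sqrt(np.pi * n) / v     # m_c = |mu|/v^2 = hbar k_F / v
print(m_c / m_e)                        # ~0.054, close to 0.06 from Ref. [29]
print((c * m_e) / (v * m_c))            # ~5e3: free electrons vs graphene
```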
2309.12280
Characterizing the topological properties of one-dimensional non-hermitian systems without the Berry-Zak phase
A new method is proposed to predict the topological properties of one-dimensional periodic structures in wave physics, including quantum mechanics. From Bloch waves, a unique complex valued function is constructed, exhibiting poles and zeros. The sequence of poles and zeros of this function is a topological invariant that can be linked to the Berry-Zak phase. Since the characterization of the topological properties is done in the complex plane, it can easily be extended to the case of non-hermitian systems. The sequence of poles and zeros allows to predict topological phase transitions.
Didier Felbacq, Emmanuel Rousseau
2023-09-21T17:43:19Z
http://arxiv.org/abs/2309.12280v1
Characterizing the topological properties of one-dimensional non-hermitian systems without the Berry-Zak phase ###### Abstract A new method is proposed to predict the topological properties of one-dimensional periodic structures in wave physics, including quantum mechanics. From Bloch waves, a unique complex valued function is constructed, exhibiting poles and zeros. The sequence of poles and zeros of this function is a topological invariant that can be linked to the Berry-Zak phase. Since the characterization of the topological properties is done in the complex plane, it can easily be extended to the case of non-hermitian systems. The sequence of poles and zeros allows one to predict topological phase transitions. topological properties, photonic crystals, wave physics ## 1 Introduction A considerable amount of work has been devoted to the study of the topological properties of photonic structures [1]. The word topological means that what is at stake are the properties of a structure that are stable under a continuous variation of the parameters defining it. For instance, the existence of a band gap is a topological property since a not too large variation of the properties (e.g. the size of the basic cell, the values of the electromagnetic parameters) does not close the gap. In some cases, the topological properties can be characterized by an integer number, a quantity that obviously remains constant over small continuous variations [2]. Of course, for larger variations it may happen that, e.g., the gap closes, which can lead to a change in the integer number. This will be called a topological transition. First attempts to find topological properties in photonic structures were made by mimicking the field of topological insulators [10]: the time-reversal invariance was broken by the use of gyromagnetic materials controlled by a magnetic field. The devices considered there are quite complicated and specific. A major breakthrough was made when it was realized that topological effects could be obtained in purely dielectric structures [3]. As a matter of fact, topological effects can be obtained for very simple structures: one-dimensional stratified media exhibit boundary modes that are topologically protected [14]. These can be analyzed using theoretical tools that were developed a long time ago [15] and which have been given a second look in the context of geometric phases: in [16], it was shown that the properties exhibited by Kohn could be interpreted using the mathematical apparatus developed by [4] in view of the results obtained by Berry in [5]. Mathematically speaking, this comes under the domain of vector bundles endowed with a connection [9, 8]. Simon introduced in [4] the now celebrated connection 1-form \(A(k)\,=\,\langle u_{k},\nabla_{k}u_{k}\rangle\), where \(u_{k}\) is a Bloch mode. This connection is often called the Berry connection, while it is in fact a specific case of the Levi-Civita connection [6]. Zak's article was probably the first work applying the concept of geometric phase to Bloch waves. Recently, there has been an interest in the possibility of extending these results to non-hermitian Hamiltonians [7, 18, 19, 20] and, in the context of photonic crystals, to media with losses [17]. All the preceding results were obtained by generalizing the Levi-Civita connection for a non-hermitian bundle. In the present work, we propose a new approach to the topological properties of 1D periodic structures. 
We show that the topological properties can be analyzed without reference to the Levi-Civita-Simon-Berry connection. We introduce a function of the wavenumber that presents poles and zeros. The arrangement of the poles and zeros characterizes the topological properties of the medium. It turns out that this pole-zero structure extends naturally to the situation when losses are present in the materials out of which the structure is made, i.e. to the non-hermitian situation. In the first section, we recall the elements of the theory of wave propagation in 1D structures, including Bloch waves. In the second section, we develop our approach and introduce the function that will prove to be a clue to the understanding of the topological properties. In the third section, we make the link with the usual approach using the geometrical phase when the media under consideration are lossless. Finally, we extend the approach to the situation where loss is added. Throughout the sections, numerical illustrations are provided in order to clarify the somewhat abstract statements. ## 2 Wave propagation in a one dimensional structure An example of a 1D medium is depicted in fig. (1). We have chosen to represent a 1D stratified photonic crystal, but the results that we obtain apply to continuously varying structures as well as to acoustic structures and to quantum waves in a 1D potential. For definiteness, we proceed by using the vocabulary of electromagnetism in the following. However, we will use occasionally the word "potential" generically to designate the permittivity (permeability) or the quantum confining potential. With an abuse of notation, we will talk of the "potential \(V(x)\)" to denote generically these quantities. We will further make the hypothesis that there is an inversion symmetry in the medium, that is, an origin can be chosen in such a way that \(V(x)=V(-x)\). Let us briefly recall the theory of 1D media [11]. We consider time-harmonic fields (time-dependence of \(e^{-i\omega t}\)) that are invariant along the \(y\) and \(z\) axes and depend only on the variable \(x\) (see fig. (1) ). Since the medium under consideration is invariant along two directions of space, the electromagnetic field can be decomposed as a sum of two linearly polarized fields. Either the electric field is linearly polarized along \(z\) (\(E_{\parallel}\) case) or the magnetic field is linearly polarized along \(z\) (\(H_{\parallel}\) case). In both cases, we denote by \(u(x)\) the function representing the field, i.e. \(E_{z}(x)=u(x)e_{z}\) (\(E_{\parallel}\) case) or \(H_{z}(x)=u(x)e_{z}\) (\(H_{\parallel}\) case). The medium is described by a periodic relative permittivity \(\varepsilon(x)\) and a periodic relative permeability \(\mu(x)\). The system of units is chosen in such a way that \(\varepsilon(x+1)=\varepsilon(x),\,\mu(x+1)=\mu(x)\). From the Maxwell system, the following equation is obtained, valid in the sense of Schwartz distributions: \[Hu=k_{0}^{2}u, \tag{1}\] where we denote by \(k_{0}=\omega/c\) the wavenumber, and \[H=-p^{-1}(x)\frac{d}{dx}\left(q^{-1}(x)\frac{d}{dx}\right), \tag{2}\] where, according to the polarization: \[E_{\parallel}:q(x)=\mu(x),\,p(x)=\varepsilon(x),\,H_{\parallel}:q(x)= \varepsilon(x),\,p(x)=\mu(x). 
\tag{3}\] This equation is more conveniently rewritten as an order one differential system: \[\frac{d}{dx}U=\left(\begin{array}{cc}0&q(x)\\ -k_{0}^{2}p(x)&0\end{array}\right)U, \tag{4}\] where \[U=\left(\begin{array}{c}u\\ \frac{1}{q(x)}\frac{du}{dx}\end{array}\right).\] In the following, we denote \[u^{\prime}=\frac{1}{q(x)}\frac{du}{dx}. \tag{5}\] From the general theory of ordinary differential equations, there exists a so-called resolvent matrix \(\mathcal{R}(x,y)\) such that \(U(x)=\mathcal{R}(x,y)U(y)\). Over one period, the values of \(U(1)\) and \(U(0)\) are related by the so-called monodromy matrix: \[\mathcal{M}(k_{0})=\mathcal{R}(1,0). \tag{6}\] This matrix is unimodular (i.e. \(\det{\cal M}\;=\;1\)) and it characterizes the band structure in the hermitian case. The monodromy matrix depends on the norm of the wavevector in vacuum (or on the energy of the system in the case of quantum physics). The characteristic polynomial of \({\cal M}\) reads \(X^{2}-{\rm Tr}({\cal M})X+1\); therefore three sets can be defined according to the nature of the eigenvalues of \({\cal M}\) [12, 13]: * \(G\;=\;\{k_{0}\in\mathbb{R},\,|\,{\rm Tr}({\cal M}(k_{0}))|>2\}\), for which the eigenvalues are real and inverse of each other. This corresponds to non-propagative modes, i.e. band gaps. * \(B\,=\,\{k_{0}\in\mathbb{R},\,|\,{\rm Tr}({\cal M}(k_{0}))|<2\}\), for which the eigenvalues are complex of modulus one and conjugated. This corresponds to propagative modes, i.e. conduction bands. The eigenvalues can be written \(e^{\pm i\theta}\), with \(\theta\,\in\,[-\pi,+\pi]\) the Bloch number; the interval \([-\pi,+\pi]\) is the so-called Brillouin zone. * \(\Delta=\{k_{0}\in\mathbb{R},\,|\,{\rm Tr}({\cal M}(k_{0}))|=2\}\), for which the eigenvalues are \(\pm 1\). The eigenvalues are of multiplicity 2. We denote by \(\Delta_{0}\) the subset of \(\Delta\) for which \({\cal M}(k_{0})=\pm I_{d}\). Conventionally, Bloch waves are associated with eigenvalues of modulus one and therefore with propagative modes in the structure. This corresponds to the set \(B\). Looking at the various sets, we see that the distinction between the various domains is purely qualitative: eigenmodes always exist in the system but they are unbounded for \(k_{0}\,\in\,G\). That is why we call "generalized Bloch modes" the modes corresponding to the sets \(G\) and \(\Delta\). In the band gaps, the solutions to the wave equation are not bounded over the infinite structure, but they play a crucial role in the case of a finite or semi-infinite medium. We therefore take as parameter the wavenumber \(k_{0}\) that will vary in \(\mathbb{R}^{+}\) and study the evolution of eigenvalues and eigenvectors with respect to \(k_{0}\). For \(k_{0}\,\in\,B\), the fields that can exist in the structure are superpositions of Bloch waves \(\psi(x;\theta,k_{0})\). The Bloch waves are quasi-periodic in the \(x\) variable, that is, they are of the form: \(\psi(x;\theta,k_{0})=e^{i\theta x}u(x;\theta,k_{0})\), where \(u(x;\theta,k_{0})\) is 1-periodic in the \(x\) variable: \(u(x+1;\theta,k_{0})=u(x;\theta,k_{0})\). It is convenient to transform the Brillouin zone into the circle \(S^{1}\,=\,\{z\,\in\,\mathbb{C},\,|z|\,=1\}\), that is, we associate to \(\theta\,\in\,[-\pi,\pi]\) the complex number \(z\,=\,e^{i\theta}\,\in\,S^{1}\). Any function defined on the interval \([-\pi,\pi]\) can be considered as a function on \(S^{1}\). From now on the Bloch modes are thus denoted \(u(x;z,k_{0})\). 
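The classification just described is easy to reproduce numerically; the following Python sketch is our own illustration (distinct from the authors' Matlab code mentioned below). It builds the monodromy matrix (6) by integrating the first-order system (4) over one period with the two canonical initial conditions, here for the \(E_{\parallel}\) case with \(\mu=1\) (so \(q=1\), \(p=\varepsilon\)) and a piecewise-constant permittivity anticipating the parameter values of fig. (2):

```python
import numpy as np
from scipy.integrate import solve_ivp

def eps(x):
    """Periodic permittivity, period 1 (parameter values of fig. (2))."""
    return 3.8 if (x % 1.0) < 0.42 else 1.0

def monodromy(k0):
    """M(k0) = R(1,0) for dU/dx = [[0, q],[-k0^2 p, 0]] U with q = 1."""
    def rhs(x, U):
        return [U[1], -k0**2 * eps(x) * U[0]]
    cols = [solve_ivp(rhs, (0.0, 1.0), U0, rtol=1e-10, atol=1e-12).y[:, -1]
            for U0 in ([1.0, 0.0], [0.0, 1.0])]
    return np.column_stack(cols)

for k0 in np.linspace(0.5, 10.0, 20):
    t = np.trace(monodromy(k0))       # cos(theta) = Tr(M)/2 inside a band
    if abs(t) < 2:
        label = "B (conduction band)"
    elif abs(t) > 2:
        label = "G (band gap)"
    else:                             # |Tr M| = 2 exactly only at band edges
        label = "Delta (band edge)"
    print(f"k0 = {k0:5.2f}   Tr M = {t:+8.3f}   {label}")
```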
For each value of \(z\), there corresponds a set of wavenumbers: \[k_{0,j}(z),\quad j=1,2,\ldots, \tag{7}\] where the \(k_{0,j}^{2}\) are the eigenvalues of \(H\), and for each of these wavenumbers a complex vector space of dimension 1. The collection of these vector spaces constitutes the Bloch bundle associated with the branch \(k_{0,j}(z)\). As a function of \(\theta\), it is not obvious that \(u(x;\theta,k_{0})\) satisfies the condition that \(u(x;\pi,k_{0})=u(x;-\pi,k_{0})\). However, it is a general result that for any one dimensional complex bundle over \(S^{1}\), there exists a _continuous_ function \(z\in S^{1}\rightarrow u(x;z,k_{0})\) [9] (such a function is called a "section" of the bundle). This does not mean that no topological effect is to be expected, as we will see below, because we have the additional hypothesis that the potential is symmetric. Due to the fact that the Bloch waves depend on both the direct and indirect variables \(x\) and \(\theta\), this representation of the Bloch bundle is not easy to handle. We present another, simpler, representation of the bundle. In order to make the discussion less arid, we shall give a numerical illustration of the concepts that we deal with (the code written in Matlab is available in the supplementary material). We consider the case of a binary medium, where the period is made of two homogeneous layers of relative permittivities \(\varepsilon_{1}\) and \(\varepsilon_{2}\) and widths \(d_{1}\) and \(d_{2}\) (see fig. (1)). Let us denote \(\nu_{j}=\sqrt{\varepsilon_{j}}\), \(j=1,2\). For each layer, the monodromy matrix has the form \[{\cal M}_{j}=\left(\begin{array}{cc}\cos(k_{0}\nu_{j}d_{j})&\frac{1}{k_{0}\nu_{j}}\sin(k_{0}\nu_{j}d_{j})\\ -k_{0}\nu_{j}\sin(k_{0}\nu_{j}d_{j})&\cos(k_{0}\nu_{j}d_{j})\end{array}\right), \tag{8}\] and the complete monodromy matrix is simply \({\cal M}={\cal M}_{2}{\cal M}_{1}\). The dispersion relation is obtained by computing the trace of \(\mathcal{M}\), which leads to the equation \[\cos(\theta)=\cos(k_{0}\nu_{1}d_{1})\cos(k_{0}\nu_{2}d_{2})-\frac{1}{2}\left(\frac{\nu_{2}}{\nu_{1}}+\frac{\nu_{1}}{\nu_{2}}\right)\sin(k_{0}\nu_{1}d_{1})\sin(k_{0}\nu_{2}d_{2}) \tag{9}\] and the eigenvectors \(U^{\pm}\) are obtained by diagonalizing the monodromy matrix \(\mathcal{M}\). An example of a band structure is given in fig. (2), where the conduction bands are labeled as in (7). Let us now extend this setting to complex values of the energies. Consider the Bloch variety: \[F=\left\{(z,k_{0})\in\mathbb{C}^{2},\,\exists\,U\neq 0:\mathcal{M}(k_{0})U=zU\right\}.\] The Bloch variety is obtained explicitly by solving the characteristic equation of \(\mathcal{M}\): \[\det(\mathcal{M}(k_{0})-zI_{2})=0.\] This allows us to consider the situation when the potential is not necessarily real (i.e. the case of a non-hermitian quantum system or of media with losses). The Bloch variety is the zero set of an analytic function of the two variables \((k_{0},z)\). In the community of integrable systems, it is called the spectral curve of the system [21]. The vector bundle envisioned previously extends to a vector bundle over this curve.
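For the binary medium, the layer matrices (8) give the monodromy matrix in closed form, and the dispersion relation (9) follows from its trace. Below is a hedged Python sketch of this computation; the parameter values are those of fig. (2), and the helper names are our own.

```python
import numpy as np

def layer_matrix(k0, nu, d):
    """Monodromy matrix of one homogeneous layer, eq. (8)."""
    c, s = np.cos(k0 * nu * d), np.sin(k0 * nu * d)
    return np.array([[c, s / (k0 * nu)],
                     [-k0 * nu * s, c]])

def monodromy(k0, eps1=3.8, eps2=1.0, d1=0.42, d2=0.58):
    """Full monodromy matrix M = M2 M1 of the binary cell (parameters of fig. (2))."""
    nu1, nu2 = np.sqrt(eps1), np.sqrt(eps2)
    return layer_matrix(k0, nu2, d2) @ layer_matrix(k0, nu1, d1)

# Dispersion relation (9): inside a conduction band, cos(theta) = Tr(M)/2
k0 = 2 * np.pi * 0.3
t = np.trace(monodromy(k0))
if abs(t) < 2:                   # k0 belongs to the set B
    theta = np.arccos(t / 2)     # Bloch number
```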
The situation of a real potential and real energies corresponds to the set \(F_{0}\subset F\) defined by \[F_{0}=\left\{(z,k_{0})\in\mathbb{C}\times\mathbb{R},\,\exists\,U\neq 0:\mathcal{M}(k_{0})U=zU\right\}.\] Generically, the monodromy matrix can be put in diagonal form; however, for \(k_{0}\in\Delta\setminus\Delta_{0}\), a particular situation happens: the eigenvalue \(\pm 1\) has a multiplicity of \(2\) but the eigenspace is of dimension \(1\). The points where this happens are called ramification points. The set \(\Delta_{0}\), where \(\mathcal{M}=\pm I_{2}\), corresponds to a non-generic situation that is a topological phase transition, as will be shown in the following. Indeed, this corresponds to two bands touching at one point \(z=\pm 1\). This singularity can be removed by an infinitesimal variation of the parameters. It separates two topological phases stable under small variations of the parameters far enough from this singularity.

## 3 Topological characterization using poles and zeros

Let us consider for now the usual case of a real potential, or lossless media. The eigenvectors \(U^{\pm}(x_{0})\) of \(\mathcal{M}\) are the boundary values of the generalized Bloch waves at an origin \(x_{0}\) that can be chosen at will. Changing the value of \(x_{0}\) amounts to changing the basic cell to \([x_{0},x_{0}+1]\). It is equivalent to a change of gauge in real space whenever an infinite medium is considered. When finite structures are considered, different choices of the origin will correspond to different physical properties, as shall be seen later on.

Figure 1: An example of 1D structure. It is a stratified photonic crystal with two homogeneous slabs in the basic cell. The vertical dashed lines correspond to the two origins for which the potential is symmetric.

Figure 2: Band structure for a photonic crystal with parameters \(\varepsilon_{1}=3.8\), \(\varepsilon_{2}=1\), \(h_{1}=0.42\), \(h_{2}=0.58\); the permeabilities are equal to 1. These are the values used in [14]. The labeling of the branches is as in (7).

The eigenvectors \(U^{\pm}\) are complex conjugates of each other when the potential is real and \(k_{0}\in B\). Using the resolvent matrix, the value of a Bloch mode at any point \(x\) in the period is obtained by using the relation \(U^{\pm}(x)={\cal R}(x,x_{0})U^{\pm}(x_{0})\). From these considerations, we conclude that the Bloch eigenspace at a point \((k_{0},z)\) is entirely determined by the eigenvectors \(U^{\pm}(x_{0})\). Therefore the Bloch bundle, i.e. the collection of all the eigenspaces as \(z\) describes \(S^{1}\), is isomorphic to the vector bundle of eigenvectors of the monodromy matrix \({\cal M}\). As we have already said, it is a complex vector bundle over \(S^{1}\), and therefore it is trivial, which means that there exists a non-vanishing section, that is, a continuous parametrization of a Bloch mode all around the Brillouin zone. We recover here one of the conclusions of [16]. Still, there can be topological properties provided that we take into account the hypothesis that the potential has an inversion symmetry \(\sigma\): \(V(x)=V(-x)\). Two different origins \(x_{0}\) and \(x_{1}\) can be chosen such that \(V(x)=V(-x)\). These two points are such that \(x_{1}-x_{0}=1/2\mod(1)\) (see fig. (1), where the two origins are indicated as dashed lines). Let us assume that the boundary values are chosen at one of the two points such that \(V(x)=V(-x)\), that is, we fix the gauge in real space.
This point is now the new origin \(x=0\). For the eigenvectors \(U^{\pm}\) of the monodromy matrix \({\cal M}\), the inversion symmetry \(\sigma\) acts as \(\sigma(U^{\pm})=\sigma_{z}U^{\pm}\), where \(\sigma_{z}\) is the Pauli matrix \(\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\). This is so because under the change \(x\to-x\), the derivative changes sign and the wave propagates backwards: the corresponding monodromy matrix is \({\cal M}^{-1}\). The inversion symmetry implies that if \(U\) is an eigenvector of the monodromy matrix with the eigenvalue \(z\), then \(\sigma_{z}U\) is an eigenvector of the monodromy matrix but with the eigenvalue \(1/z\). As a consequence, we have the following result: _When \(V(x)=V(-x)\), it holds:_ \[\sigma_{z}{\cal M}\sigma_{z}={\cal M}^{-1},\mbox{ and the basis of eigenvectors is of the form }(U,\sigma_{z}U). \tag{10}\] For \(k_{0}\in B\), it holds, projectively, \(U^{*}=\sigma_{z}U\) (\(*\) denotes complex conjugation). When we say that two vectors are equal "projectively", we mean that they are collinear. Given a vector \(U=(u(0;k_{0},z),u^{\prime}(0;k_{0},z))^{t}\), the vector space generated by \(U\) is denoted \(\tilde{\chi}=[u(0;k_{0},z):u^{\prime}(0;k_{0},z)]\). This is the standard notation for an element of the projective space \({\mathbb{C}}P_{1}\), which is the set of all complex lines going through the origin in the complex plane \({\mathbb{C}}^{2}\). This space is equivalent to the Riemann sphere, that is, \({\mathbb{C}}\) together with a point \(\infty\) at infinity. This suggests finding a function that characterizes an eigenspace. Given an eigenvector \(U=(u(0;k_{0},z),u^{\prime}(0;k_{0},z))^{t}\), we define \[\chi(k_{0},z)=u(0;k_{0},z)/u^{\prime}(0;k_{0},z). \tag{11}\] This quantity does not depend upon the specific eigenvector that is chosen to represent the eigenspace, i.e. \(U\) and \(\gamma U\), \(\gamma\in{\mathbb{C}}\setminus\{0\}\), define the same function \(\chi\). For \(k_{0}\in B\), the ratio \(\chi(k_{0},z)=u(0;k_{0},z)/u^{\prime}(0;k_{0},z)\) satisfies the relation \(\chi^{*}(k_{0},z)=-\chi(k_{0},z)\) and therefore, thanks to (10), it is purely imaginary in the conduction bands for a real potential. For a given wavenumber \(k_{0}\), there are two eigenvectors \((U^{+}=U,U^{-}=\sigma_{z}U)\) with eigenvalues \(z\) and \(1/z\) respectively. There are therefore two functions \(\chi^{\pm}\), obtained from the components of the eigenvector \(U^{+}\) or \(U^{-}\), respectively. These functions satisfy the relation: \[\chi^{+}(k_{0},z)=-\chi^{-}(k_{0},1/z). \tag{12}\] Let us show that these can be combined to provide a single function. For \(k_{0}\in G\), since one of the numbers \((|z|,1/|z|)\) is lower than \(1\), we can define a single function \(\chi\) such that: \[\chi(k_{0})=\left\{\begin{array}{ll}\chi^{+}(k_{0},z),&|z|<1\\ -\chi^{+}(k_{0},z),&|z|>1\end{array}\right.. \tag{13}\] In order to extend this definition to the conduction bands, a criterion is needed to distinguish between \(z\) and \(1/z\). Let us consider the curves \(k_{0}\to(z(k_{0}),1/z(k_{0}))\), which represent the evolution of the eigenvalues as functions of \(k_{0}\). These are curves in the complex plane. This is represented in figs. (3) and (4). The curve in fig. (3) represents the eigenvalues of the monodromy matrix in the complex plane when varying the wavenumber \(k_{0}/2\pi\) between \(0\) and \(4\). This curve shows all the possible eigenvalues for a given geometry. The parameters used for the calculations are indicated in the caption of fig. (2).
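As a minimal numerical sketch, \(\chi\) of eqs. (11) and (13) can be obtained from the eigenvector of \({\cal M}\) associated with the eigenvalue of modulus smaller than one; adding a small imaginary part to \(k_{0}\) (the limiting absorption argument developed in the next paragraphs) makes the selection unambiguous inside the conduction bands as well. The function `monodromy` is the one from the previous sketch, and the value of `delta` is illustrative.

```python
import numpy as np

def chi(k0, delta=1e-2):
    """chi(k0): ratio u/u' for the eigenvector of M associated with the
    eigenvalue of modulus < 1, cf. eqs. (11) and (13). The small imaginary
    part delta lifts the degeneracy |z| = 1 inside the conduction bands."""
    vals, vecs = np.linalg.eig(monodromy(k0 + 1j * delta))
    i = np.argmin(np.abs(vals))       # eigenvalue inside the unit disk
    return vecs[0, i] / vecs[1, i]    # a pole of chi shows up as a large value

# chi is (nearly) purely imaginary in a conduction band for a real potential
print(chi(2 * np.pi * 0.3))
```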
The eigenvalues corresponding to propagating waves, i.e. corresponding to the set \(B\), lie on the unit circle. The eigenvalues corresponding to the band gaps \(G\) have a null imaginary part. They give rise to the line segments around \(z=\pm 1\). When the potential is real, the ramification points are \(1\) and \(-1\). The curve in fig. (4) is the "blow-up" of the preceding curve, obtained by plotting directly the skew curve \(k_{0}\rightarrow(z(k_{0}),1/z(k_{0}))\). At the ramification points, the curves cross at a right angle, which makes it seemingly impossible to follow one eigenvalue by continuity from \(B\) to \(G\). However, this degeneracy is in fact directly linked to the stringent condition that the potential be real, since it is a consequence of the fact that the monodromy matrix has eigenvalues \(\pm 1\) at the boundaries of the conduction bands. This degeneracy can be lifted by using a limiting absorption principle, that is, by adding a small imaginary part \(\delta\) either to the potential or to the frequency. Indeed, if we replace \(k_{0}\) by \(k_{0}+i\delta\), the equation \({\rm Tr}({\cal M}(k_{0}))=\pm 2\) is replaced by the equation \({\rm Tr}({\cal M}(k_{0}+i\delta))=\pm 2\). Therefore the crossing points are no longer real (generically). This is what is done in fig. (5), where a small imaginary value of \(10^{-2}\) was added to \(k_{0}\). The real part of \(k_{0}\) is used as a blow-up parameter to plot the eigenvalues as curves in \(\mathbb{R}^{3}\). It is seen that the curves no longer cross and therefore the eigenvalues can be followed individually. Since the eigenvalues can be distinguished, it is also possible to follow the eigenvectors by continuity, i.e. to resolve the ramification points. Therefore the same holds for the functions \(\chi^{\pm}\). For these functions, the extension problem appears when there is a pole. When a small imaginary part is added to \(k_{0}\), the poles of \(\chi^{\pm}\) gain a small imaginary part and the restriction of these functions to the real axis of \(k_{0}\) is now continuous. It is illustrated in fig. (6), where the values of \(\chi^{\pm}\) are plotted on the Riemann sphere by stereographic projection. We conclude the following: _There is a unique function \(\chi(k_{0},z)\) corresponding to eigenspaces associated with eigenvalues lower than \(1\) in modulus for \(k_{0}\in G\)._

Figure 3: Eigenvalues of the monodromy matrix in the complex plane. The circle corresponds to \(B\) and the real intervals to \(G\). The transitions between \(G\) and \(B\), i.e. the set \(\Delta\), correspond to \(z=\pm 1\).

Figure 5: Evolution of the eigenvalues as functions of \(k_{0}\) when small losses are added. The curves can now be distinguished (except at the bottom when \(k_{0}\to 0\)).

Figure 6: On the right, the graph of the function \(\chi(k_{0})\) on the Riemann sphere. The north pole corresponds to 0 and the south pole to infinity. On the left, the same but blown up by using the parameter \(k_{0}\).

At the heart of the topological properties captured by the function \(\chi\) are the poles and zeros that it possesses. A zero is associated with the eigenvector \((0,1)^{t}\) and a pole with the eigenvector \((1,0)^{t}\). We have the following result: _For \(k_{0}\in B\), since \(U^{*}(k_{0},z)=U(k_{0},1/z)\), we have that either \(\chi(k_{0},z)\) is null at \(z=\pm 1\) or it has a pole._ The values of \(k_{0}\) for which there can be a zero or a pole are characterized as follows. Assume that there is a zero \(k_{Z}\) in the interior of a band.
Then \(\chi^{\pm}\) are both null, and therefore the eigenvectors of \({\cal M}(k_{Z})\) are linearly dependent and the eigenvalues \(z\) and \(1/z\) are equal. Consequently, \({\rm Tr}({\cal M})=\pm 2\). If \({\rm Tr}({\cal M})\) is not transversal there, i.e. if the derivative of \({\rm Tr}({\cal M})\) is null, then it is a non-generic point that can be removed by an infinitesimal variation of the parameters. Similarly, if there is a pole of \(\chi\), then \(U=(1,0)^{t}\) with eigenvalue \(z\). The other eigenvector is \(\sigma_{z}U=(1,0)^{t}\). Therefore there is a degeneracy, so \(z^{2}=1\) and therefore \({\rm Tr}({\cal M})=\pm 2\). We can then conclude the following: _The only poles and zeros of the function \(\chi\) are generically situated at band edges, that is, at points where \({\rm Tr}({\cal M})=\pm 2\) and \({\cal M}\neq\pm I_{2}\). They are real when the potential is real._ The position of the poles and zeros is the clue to understanding the topological properties of the structures. Indeed, a topological transition is characterized by the closing and re-opening of a gap through a continuous variation of the parameters defining the structure. The value of \(k_{0}\) for which the lower and upper bands touch is a critical point of \(f(k_{0})={\rm Tr}({\cal M}(k_{0}))^{2}-4\); indeed, there it holds \(f(k_{0})=0\) and \(f^{\prime}(k_{0})=0\). Therefore \({\cal M}=\pm I_{2}\). Since at this point the eigenvectors are \((1,0)^{t}\) and \((0,1)^{t}\), this means that a pole and a zero of \(\chi\) merge. As a consequence, there is always a pole and a zero at the boundaries of a band gap. In other words, when the topological transition takes place, a pole and a zero change places and therefore they have to merge. We conclude that: _When the potential is real, the topological transitions take place at \({\cal M}=\pm I_{2}\)._ Furthermore, the poles and zeros are continuous functions of the parameters and disappear only when they merge. As a consequence: _The sequence of poles and zeros is a topological invariant. It determines the positions of the forbidden and conduction bands._ This sequence is called the Poles-Zeros pattern. When losses are added, the situation is more complicated, because in that case the Poles-Zeros pattern is contained in the lower part of the complex plane. When the potential is complex, the function \(\chi\) is no longer purely imaginary in the conduction bands nor real in the band gaps. It turns out that the function \(\chi\) is defined over \(F\) and takes values in the projective space, identified with the Riemann sphere. This result is immediate whenever the eigenspaces are not degenerate, since the entries of the monodromy matrix are holomorphic.

## 4 The Berry-Zak phase and the triviality of the bundle

Let us relate these results to the Berry-Zak phase. This phase is the one that is acquired by a Bloch wave as the Bloch number evolves around the Brillouin zone. It is defined explicitly as follows. Consider the periodic part of a Bloch mode \(\psi(k;x)\). The Bloch mode infinitely close to \(\psi(k;x)\) is obtained by making a small variation \(dk\) and by imposing that \(\psi\) is transported without variation at order 1. To do so, we write \(\psi(k+dk;x)=\psi(k;x)+\partial_{k}\psi(x;k)dk+O(dk^{2})\), and we impose that the variation \(\delta\psi\) of \(\psi\) belongs to the eigenspace of \(\psi\): \[\delta\psi=\Pi_{\psi}\left(\psi(k+dk;x)-\psi(k;x)\right), \tag{14}\] where \(\Pi_{\psi}=|\psi\rangle\langle\psi|\).
This gives: \(\delta\psi=\langle\psi,\partial_{k}\psi\rangle\psi\,dk\). This defines the so-called "connection form": \(A(k)=\langle\psi,\partial_{k}\psi\rangle dk\). From this definition, it is seen that \(A(k)\) is purely imaginary for a normalized Bloch mode. Indeed, if \(\langle\psi,\psi\rangle=1\), then \(\langle\partial_{k}\psi,\psi\rangle+\langle\psi,\partial_{k}\psi\rangle=0\). For a generic Bloch mode \(\phi(k;x)=w(k)\psi(k;x)\), the parallel transport of \(\phi\) around the Brillouin zone amounts to letting \(\phi\) evolve in such a way that its variation is orthogonal to the eigenspace generated by \(\psi\) (this is what is done in first-order time-independent perturbation theory): \(\langle\delta\phi,\psi\rangle=\partial_{k}w+A(k)w=0\) [6]. This gives the differential equation \(\partial_{k}w=-A(k)w\) and therefore \(w(k)=e^{-\int_{-\pi}^{k}A(k^{\prime})}w(-\pi)\). Going around the Brillouin zone defines the so-called Berry-Zak phase \(\int_{-\pi}^{\pi}A(k^{\prime})\) through the integration of the differential equation: \(w(\pi)=e^{-\int_{-\pi}^{\pi}A(k^{\prime})}w(-\pi)\). A major result obtained in [15] and re-expressed in terms of geometrical phases in [16] is that, provided the potential has the inversion symmetry, it holds: \[e^{-\int_{-\pi}^{\pi}A(k^{\prime})}=\pm 1.\] The Bloch bundle is trivial when the value is \(1\) and non-trivial when it is \(-1\). This is equivalent to the existence of an _equivariant section_. Here, equivariant means that a section, i.e. a continuous parametrization of a Bloch mode over the Brillouin zone, satisfies a compatibility condition with the inversion symmetry. This concept translates easily into our formulation. Indeed, the Bloch modes are represented by the eigenvectors of \({\cal M}(k_{0})\), and a conduction band corresponds to an interval of wavenumbers \([k_{1},k_{2}]\). At the boundaries \(k_{1},k_{2}\), the monodromy matrix is of one of the following forms: \[\begin{pmatrix}\pm 1&*\\ 0&\pm 1\end{pmatrix}\,\,\mbox{or}\,\,\begin{pmatrix}\pm 1&0\\ *&\pm 1\end{pmatrix} \tag{15}\] where \(*\) is a non-zero element. The corresponding eigenvectors are: \[\begin{pmatrix}1\\ 0\end{pmatrix},\,\begin{pmatrix}0\\ 1\end{pmatrix} \tag{16}\] The point, however, is the possibility of following an eigenvector by continuity around the Brillouin zone. As already said, since the bundle is complex and over \(S^{1}\), such a section necessarily exists [9]. Here, we request further that it be equivariant, that is, that it be compatible with the group action induced by the inversion symmetry. Specifically, an equivariant section \(U(k_{0},z)\) should satisfy \[U(k_{0},1/z)=\tilde{\sigma}U(k_{0},z), \tag{17}\] where \[\tilde{\sigma}=\sigma_{z}\,\,\mbox{or}\,\,\tilde{\sigma}=-\sigma_{z}. \tag{18}\] * Assume that we start with the eigenvector \((1,0)^{t}\) (that is, a pole of \(\chi\)) for \(z=1\). It satisfies the relation \((1,0)^{t}=\sigma_{z}(1,0)^{t}\). Let us say that we follow the upper part of \(S^{1}\); then we arrive at an eigenvector \((a,b)^{t}\) for \(z=-1\) that corresponds either to a zero or to a pole. Going around the lower part, we arrive at the eigenvector \((a^{\prime},b^{\prime})^{t}\) for \(z=-1\). Taking into account the equivariance of the section, we impose that \((a^{\prime},b^{\prime})^{t}=\sigma_{z}(a,b)^{t}\), that is: \(a=a^{\prime}\), \(b=-b^{\prime}\). For the section to be continuous, we have of course to impose \(a=a^{\prime}\), \(b=b^{\prime}\), but then, of course, \(b=0\).
Therefore we end up with the eigenvector \((1,0)^{t}\), and the conditions for the existence of an equivariant section are fulfilled. * Let us start now with a zero \((0,1)^{t}\). This does fulfill the requirement of equivariance provided that we write \((0,1)^{t}=-\sigma_{z}(0,1)^{t}\). If there is also a zero at \(z=-1\), the same gluing works to provide a global section. * The situation is different if we start with a zero \((0,1)^{t}\) and end with a pole \((1,0)^{t}\). This time we start with the condition \((0,1)^{t}=-\sigma_{z}(0,1)^{t}\) and end with the condition \((1,0)^{t}=\sigma_{z}(1,0)^{t}\). Therefore the section is not globally equivariant and it has to be twisted. We end up with the following conclusions: 1. The pole-pole or zero-zero cases correspond to a Berry phase equal to \(0\); 2. the pole-zero or zero-pole cases correspond to a Berry phase equal to \(\pi\). We note that, following our approach, the Berry-Zak phase is very easy to compute, as it suffices to consider the form of the monodromy matrix at the boundaries of the conduction bands.

## 5 The bulk-boundary correspondence

Let us consider a structure made of two semi-infinite photonic crystals put side by side, as depicted in fig. (7), characterized by the monodromy matrices \({\cal M}_{1}\) and \({\cal M}_{2}\). Our point is to investigate under what conditions a boundary mode can exist and how it can be characterized topologically by means of the properties of the \(\chi\) function. The edge states are ruled by the following result. Assume the photonic crystal defined by \({\cal M}_{1}\) extends over \(x>0\) and that defined by \({\cal M}_{2}\) over \(x<0\). At \(k_{0}\), the eigenvalues of \({\cal M}_{1}\) and \({\cal M}_{2}\) are real and of the form \((z_{1},1/z_{1})\) and \((z_{2},1/z_{2})\). Let us assume that \(|z_{1,2}|<1\). A boundary mode is characterized by its initial value \(U\) at the junction between the media. For the mode to be bounded, the vector \(U\) should be damped along \(x>0\) and \(x<0\); therefore it should hold \[{\cal M}_{1}U=z_{1}U\mbox{ and }{\cal M}_{2}U=\frac{1}{z_{2}}U. \tag{19}\] This means first that, generically, \({\cal M}_{1}\) and \({\cal M}_{2}\) have a common set of eigenvectors, hence these matrices commute. Second, because of the symmetry of the potential, the second eigenvector is \(V=\sigma_{z}U\). _Let \({\cal M}_{1}\) and \({\cal M}_{2}\) be the monodromy matrices of each photonic crystal. For a Bloch wavevector \(k_{0}\), there exists an eigenvector \(U(k_{0})\) defining an edge state provided the following conditions are fulfilled:_ * _The matrices_ \({\cal M}_{1}(k_{0})\) _and_ \({\cal M}_{2}(k_{0})\) _have a common gap at the Bloch wavevector_ \(k_{0}\)_, i.e._ \(|{\rm Tr}({\cal M}_{1})|>2\) _and_ \(|{\rm Tr}({\cal M}_{2})|>2\)_;_ * _the matrices_ \({\cal M}_{1}(k_{0})\) _and_ \({\cal M}_{2}(k_{0})\) _commute:_ \([{\cal M}_{1}(k_{0}),{\cal M}_{2}(k_{0})]=0\)_;_ * _the associated Bloch functions_ \(\chi_{1}(k_{0})\) _and_ \(\chi_{2}(k_{0})\) _have opposite signs._ The last two conditions are equivalent to the single following one: _The Bloch functions_ \(\chi_{1}(k_{0})\) _and_ \(\chi_{2}(k_{0})\) _are opposite:_ \(\chi_{1}(k_{0})=-\chi_{2}(k_{0})\)_._ The link with the Poles-Zeros pattern can now be deduced. Recall that at the boundaries of the gaps the functions \(\chi\) necessarily have a zero or a pole, and that they have a constant sign within a band gap.
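Numerically, the edge-state criterion \(\chi_{1}(k_{0})=-\chi_{2}(k_{0})\) can be tested by scanning a common band gap for a sign change of \(\chi_{1}+\chi_{2}\). The sketch below reuses `layer_matrix` from the earlier code and assumes the symmetric basic cell \((h_{1}/2,h_{2},h_{1}/2)\) with the parameter pairs of fig. (8); the grid and tolerances are illustrative, and the exact crossing location depends on these cell conventions.

```python
import numpy as np

def symmetric_monodromy(k0, eps1, eps2, d1, d2):
    """Monodromy matrix of the symmetric cell (d1/2, d2, d1/2), so that
    V(x) = V(-x) holds at the chosen origin (cf. fig. (8))."""
    nu1, nu2 = np.sqrt(eps1), np.sqrt(eps2)
    half = layer_matrix(k0, nu1, d1 / 2)
    return half @ layer_matrix(k0, nu2, d2) @ half

def chi_of(M):
    """chi = u/u' for the eigenvector with the eigenvalue of smaller modulus."""
    vals, vecs = np.linalg.eig(M)
    i = np.argmin(np.abs(vals))
    return vecs[0, i] / vecs[1, i]

prev = None
for k0 in np.linspace(2.35, 2.65, 2001) * 2 * np.pi:
    M1 = symmetric_monodromy(k0, 3.8, 1.0, 0.42, 0.58)
    M2 = symmetric_monodromy(k0, 4.2, 1.0, 0.38, 0.62)
    if abs(np.trace(M1)) > 2 and abs(np.trace(M2)) > 2:   # common band gap
        s = np.real(chi_of(M1) + chi_of(M2))              # chi is real in a gap
        if prev is not None and np.sign(s) != np.sign(prev):
            print("edge state near k0/2pi =", k0 / (2 * np.pi))
        prev = s
    else:
        prev = None
```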
This means that, provided \(\chi_{1}\) and \(\chi_{2}\) have opposite signs in a band gap, when the pattern inside a gap is Pole-Zero for one structure and Zero-Pole for the other, the functions \(\chi_{1}\) and \(-\chi_{2}\) necessarily cross, and there necessarily is an edge mode. A few words are in order as to the fact that, as far as the theoretical analysis is concerned, the existence of a mode is linked in a very strict way to the symmetry property of the potential. However, an edge mode inside a gap corresponds to a pole of the scattering matrix (i.e. of the reflection and the transmission coefficients). Since we expect the pole to be a continuous function of the parameters, it seems paradoxical that it could disappear suddenly when the symmetry is broken, since the symmetry can be broken in a continuous fashion by moving continuously the origin of the basic cell. To resolve this apparent paradox, one should recall that the reflection and transmission coefficients are defined for two finite structures put side by side (see [13] for the definition of the reflection coefficient for a semi-infinite structure).

Figure 7: The structure made with two 1D photonic crystals with different topological properties.

When two semi-infinite structures are considered, the mode should be evanescent in both structures away from the edge, and this is a strict condition. When finite structures are considered (containing N periods each), with a plane-wave incident field, there necessarily are anti-evanescent waves in addition to evanescent waves in order to fulfill the boundary conditions. Therefore, breaking the symmetry condition does not suddenly kill the edge mode; rather, its proper wavenumber is shifted towards the edges of the band gap and, as the number of periods N tends to infinity, the edge mode disappears continuously by being absorbed at the edges of the band gap. Let us illustrate these results numerically. The two band structures corresponding to each photonic crystal are given in fig. (8). A finite structure made of 10 periods of each photonic crystal is considered. The transmission spectrum for an incident plane wave is plotted in fig. (8). The existence of an edge state is detected as a peak inside the band gap for the forbidden band situated in the interval \(k_{0}/2\pi\in[2.4,2.6]\). In figs. (10) and (11), we have plotted \(\chi_{1}\) (in red) and \(-\chi_{2}\) (in blue), as well as the commutator of \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) in green. The position of the band gaps is indicated in black. As can be seen in figs. (10) and (11), the edge state corresponds indeed to a value of \(k_{0}\) for which the functions \(\chi_{1}\) and \(-\chi_{2}\) cross and the commutator of the monodromy matrices is null. Let us now see what happens when complex values of \(k_{0}\) are used. In fig. (12), we plot the absolute value of \(\chi\) in the complex strip around the real axis. The poles are indicated by bright spots and the zeros by blue spots. We see clearly that the structures have the same Poles-Zeros pattern, except for the gap around \(k_{0}/2\pi=2.5\), where one of the structures has the pattern Zero-Pole and the other one the pattern Pole-Zero. In this band gap, there is a boundary mode. In fig. (9), we have plotted the band structure and transmission spectrum when one of the photonic crystals does not fulfill the symmetry condition \(V(-x)=V(x)\). As explained in the discussion above, the edge mode is still present but is shifted towards the boundary of the band gap.
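The transmission spectrum of the finite structure can be sketched from the monodromy matrices alone. The following hedged Python fragment matches the field \(U=(u,u^{\prime})^{t}\) of the stacked structure to incoming and outgoing plane waves in vacuum; the matching formula is a standard transfer-matrix computation, not taken from the paper, and it assumes \(q=p=1\) outside the structure (`symmetric_monodromy` is from the previous sketch).

```python
import numpy as np

def transmission(k0, T):
    """|t|^2 through a structure with total transfer matrix T, in vacuum.
    Matches u = e^{ik0 x} + r e^{-ik0 x} on the left to u = t e^{ik0 x} on
    the right, using u(L), u'(L) = T @ (u(0), u'(0))."""
    a, b, c, d = T.ravel()
    A = np.array([[a - 1j * k0 * b, -1.0],
                  [c - 1j * k0 * d, -1j * k0]])
    rhs = -np.array([a + 1j * k0 * b, c + 1j * k0 * d])
    r, t = np.linalg.solve(A, rhs)
    return abs(t) ** 2

# 10 periods of each crystal, as in fig. (8); the edge mode shows up as a
# transmission peak inside the common band gap around k0/2pi = 2.5
k0 = 2 * np.pi * 2.5
M1 = symmetric_monodromy(k0, 3.8, 1.0, 0.42, 0.58)
M2 = symmetric_monodromy(k0, 4.2, 1.0, 0.38, 0.62)
T = np.linalg.matrix_power(M2, 10) @ np.linalg.matrix_power(M1, 10)
print(transmission(k0, T))
```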
By varying the values of the permittivities and the widths of the layers, it is possible to obtain a phase diagram for the topological properties, i.e. for the Poles-Zeros pattern (cf. fig. (13)). It can be seen that the regions of interest are separated by lines corresponding to the closing of the band gap. Our point is now to study the situation where the potential is complex, that is, an imaginary part is added to the permittivities. The new Poles-Zeros pattern is given in fig. (14). There, it can be seen that the poles and zeros have moved towards the lower part of the complex plane of wavevectors. Still, the Poles-Zeros structure is preserved. There is a continuous transformation, mathematically speaking a homotopy, between the poles and zeros structures of both photonic crystals.

Figure 8: Band structures for two photonic crystals with parameters \(\varepsilon_{1}=3.8\), \(\varepsilon_{2}=1\), \(h_{1}=0.42\), \(h_{2}=0.58\), on the left, and \(\varepsilon_{1}=4.2\), \(\varepsilon_{2}=1\), \(h_{1}=0.38\), \(h_{2}=0.62\), on the right. The origin is chosen in such a way that the basic cell contains 3 layers of widths \(h_{1}/2,h_{2},h_{1}/2\) and permittivities \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{1}\). In both cases the permeabilities are equal to 1. In the middle, the transmission spectrum is given. It is obtained with a finite structure comprising 10 periods of each photonic crystal.

Figure 9: Band structures for the same photonic crystals as in fig. (8), but the photonic crystal on the left no longer satisfies the symmetry condition. The widths of the layers of the basic cell are now \(3h_{1}/5,h_{2},2h_{1}/5\) and the permittivities are \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{1}\). On the right, the transmission spectrum is given. An edge mode is still present but it has moved towards the upper edge of the band gap.

Figure 10: In blue the function \(\chi_{1}\), in red the function \(-\chi_{2}\). The commutator of \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) is plotted in green. The positions of the band gaps are indicated by the black indicator (value 1 in the band gaps). The edge mode appears when \(\chi_{1}\) and \(-\chi_{2}\) cross and the commutator is null.

Figure 11: Same as fig. (10) but zoomed in.

Figure 12: Poles-Zeros patterns for the two structures described in fig. (8). The bright spots are the poles and the blue ones the zeros. Around the value \(k_{0}/2\pi=2.5\), one of the structures has a Zero-Pole pattern while the other one has a Pole-Zero pattern.

Figure 13: Phase diagram for a stratified medium with two slabs. The parameters are the permittivities \(\varepsilon_{1,2}\) and the widths \(h_{1,2}\) of the slabs. Since the period is \(1\), there is only one width parameter, since \(h_{1}+h_{2}=1\). The diagram corresponds to a single band gap in the interval \([2.4,2.6]\). The index \(1\) is attributed to the pattern Pole-Zero and the index \(-1\) to the pattern Zero-Pole. The index \(0\) is for structures which do not have a band gap in the considered interval.

Figure 14: Same as fig. (12) but losses have been added to the structure corresponding to the upper Poles-Zeros pattern. The other structure remains lossless. The Poles-Zeros pattern is still there, except that the poles and zeros have moved towards the lower part of the complex plane of wavevectors. It can still be seen that the band gap around \(k_{0}/2\pi=2.5\) corresponds to a Pole-Zero/Zero-Pole pattern.

When
a pole and a zero exchange places by continuously varying a parameter, there is a configuration for which the pole and the zero have the same real part. This amounts to saying that the gap closes there. Consequently, as long as the Poles-Zeros pattern remains close to the real axis, these patterns continue to characterize topologically the structures. Indeed, when plotting the transmission spectrum for the finite structure with losses (cf. fig. (15)), a mode can be seen to remain (as an enlarged peak) in the same band gap. Of course, the lifetime of the mode is now much shorter, due to the material losses adding to the radiative losses.

## 6 Conclusion

It is customary to analyze the topological properties of one dimensional structures by using the familiar concept of Zak phase, directly linked to Berry's connection. We have shown here that another approach can be put forward, by using the poles and zeros of a function defined for all energies, and not only for those corresponding to propagating modes. By using this tool, the extension to the classification of media with losses, or non-hermitian problems, is straightforward and avoids the difficulties encountered when trying to extend the Berry connection approach to non-hermitian systems. More generally, the proposed approach gives possible theoretical insights for handling the case of complex energies and for analysing the Bloch variety [22].
2307.16562
SAKSHI: Decentralized AI Platforms
Large AI models (e.g., Dall-E, GPT4) have electrified the scientific, technological and societal landscape through their superhuman capabilities. These services are offered largely in a traditional web2.0 format (e.g., OpenAI's GPT4 service). As more large AI models proliferate (personalizing and specializing to a variety of domains), there is a tremendous need to have a neutral trust-free platform that allows the hosting of AI models, clients receiving AI services efficiently, yet in a trust-free, incentive compatible, Byzantine behavior resistant manner. In this paper we propose SAKSHI, a trust-free decentralized platform specifically suited for AI services. The key design principles of SAKSHI are the separation of the data path (where AI query and service is managed) and the control path (where routers and compute and storage hosts are managed) from the transaction path (where the metering and billing of services are managed over a blockchain). This separation is enabled by a "proof of inference" layer which provides cryptographic resistance against a variety of misbehaviors, including poor AI service, nonpayment for service, copying of AI models. This is joint work between multiple universities (Princeton University, University of Illinois at Urbana-Champaign, Tsinghua University, HKUST) and two startup companies (Witness Chain and Eigen Layer).
Suma Bhat, Canhui Chen, Zerui Cheng, Zhixuan Fang, Ashwin Hebbar, Sreeram Kannan, Ranvir Rana, Peiyao Sheng, Himanshu Tyagi, Pramod Viswanath, Xuechao Wang
2023-07-31T10:48:56Z
http://arxiv.org/abs/2307.16562v1
# SAKSHI: Decentralized AI Platforms

###### Abstract

Large AI models (e.g., Dall-E, GPT4) have electrified the scientific, technological and societal landscape through their superhuman capabilities. These services are offered largely in a traditional web2.0 format (e.g., OpenAI's GPT4 service). As more large AI models proliferate (personalizing and specializing to a variety of domains), there is a tremendous need to have a neutral trust-free platform that allows the hosting of AI models, clients receiving AI services efficiently, yet in a trust-free, incentive compatible, Byzantine behavior resistant manner. In this paper we propose SAKSHI, a trust-free decentralized platform specifically suited for AI services. The key design principles of SAKSHI are the separation of the data path (where AI query and service is managed) and the control path (where routers and compute and storage hosts are managed) from the transaction path (where the metering and billing of services are managed over a blockchain). This separation is enabled by a "proof of inference" layer which provides cryptographic resistance against a variety of misbehaviors, including poor AI service, nonpayment for service, copying of AI models. This is joint work between multiple universities (Princeton University, University of Illinois at Urbana-Champaign, Tsinghua University, HKUST) and two startup companies (Witness Chain and Eigen Layer).

## 1 Introduction

**Era of AI**. Artificial Intelligence (AI) has been steadily making progress on a variety of tasks (household tasks by vacuuming robots [1, 2], playing games - Chess, Go [3, 4, 5] - at superhuman levels, scientific discovery via protein folding predictions [6, 7], medical progress by drug discoveries [8, 9, 10]), but has broken through the barrier of _general intelligence_ in recent months with the emergence of a new family of _generative_ deep learning models - GPT4 [11, 12] is the prototypical application capturing the world's attention, at a tremendous energy price. GPT4 has super-human mastery over natural language, and can comprehend complex ideas, exhibiting proficiency in a myriad of domains such as medicine, law, accounting, computer programming, music, and more. Moreover, GPT4 is capable of effectively leveraging external tools such as search engines, calculators, and APIs to complete tasks with minimal instructions and no demonstrations, showcasing its remarkable ability to adapt and learn from external resources. Such progress portends AI's forthcoming dominance in mediating (and, in several situations, replacing) human interactions, and promises AI to be the dominant energy-consuming activity for years to come. **Large Generative AI Models**. An AI model class that is largely representative is _generative AI_, which creates content that resembles human-generated content. These models have attracted considerable interest and popularity due to their impressive capabilities in generating high-quality, realistic images, text, video and music. For instance, large language models (LLMs) like ChatGPT [13], Bard [14], and LLaMA [15] attain impressive performance on a wide array of tasks and are being integrated in products such as search engines [16], coding assistants [17] and productivity tools in Google Docs [18]. Further, text-to-image models like StableDiffusion [19], MidJourney [20], Flamingo [21], text-to-music models like MusicLM [22], and text-to-video models like Make-a-Video [23] have shown the immense potential of large multimodal generative AI models.
As large generative AI models continue to evolve, we will witness the emergence of numerous fine-tuned and instruction-tuned models catering to specific use cases (e.g., healthcare, finance, law). While models grow rapidly, Amazon and Nvidia report that AI inference accounts for up to 90% of the computational resources in AI systems, being demanded much more frequently than AI model training [24]. In this white paper, we mainly focus on AI inference tasks, but the flexibility of our layered architecture design allows for a market for model training as well. **Current model: Centralized inference**. The dominant mode of serving these large models is through public inference APIs [25, 26, 27], offered by the dominant platform companies of today's economy. For example, the OpenAI API allows users to query models like ChatGPT and DALL-E over a web interface. Although this is a relatively user-friendly option, it is susceptible to the deleterious side-effect of centralization: monopolization. Apart from the rent-seeking aspect of the centralized nature of the service offering, privacy implications loom large: the human interactions mediated by generative AI models are vastly more personal and intrusive than web browsing and search queries. Addressing the grand challenge of AI computation via the design of decentralized and programmable platforms is the goal of this paper. **Proposed model: Decentralized Inference**. In this paper, we propose to decentralize AI inference across servers provided by consumer devices at the grid edge. Decentralized inference can reduce communication and energy costs by leveraging local computation capabilities. This is made possible by utilizing energy-efficient devices located at the edge, which could potentially be powered by renewable energy sources. Crucially, the energy overhead of running large data centers is largely reduced, simultaneously opening an opportunity to democratize AI whilst limiting its ecological footprint. Such a decentralized platform would also enable the deployment of a library of large customized models in a scalable manner - users can host in-demand customized models on this decentralized cloud, and earn appropriate rewards. Our decentralized AI platform, SAKSHI, is populated by a host of different agents: AI service providers, AI clients, storage and compute hosting nodes. A carefully designed incentive fabric stitches the different agents together into an efficient, trustworthy, and economically fruitful AI platform. Our design of SAKSHI is best visualized in terms of a layered architecture (analogous to network stacks). The layers are enumerated below and visualized in Figure 1. 1. **Service layer.** This is the path where the query and response (AI inference) are managed. The goal is high throughput and low latency, enabling a user journey similar to a standard web2 service, with the underlying resources (storage, computation) and economic transactions managed in a decentralized and trustless manner. 2. **Control layer.** This is the path where networking and compute/storage load balancing actions are managed. The decentralized AI models are hosted at multiple locations connected via a (potentially peer-to-peer) network, and our decentralized design borrows from classical web2 content delivery network designs (e.g., Akamai) while managing the economic transactions, again, in a decentralized and trustless manner. 3.
**Transaction layer.** This is the path where billing and metering are conducted. The key is to have this outside the data path and visible to a broader audience (e.g., via commitments on blockchains). Importantly, this is trust-free, crucially enabled via Witness Chain's transaction layer service (originally designed for decentralized 5G wireless networks [28], but now naturally repurposed for decentralized AI services). 4. **Proof layer.** Any disputes over metering and billing are handled here. These proofs also provide resistance to unauthorized usage (e.g., outright copying) of AI models. This layer is definitely outside the data path, but also outside the transaction path. It allows the formulation of novel research questions (at the intersection of large AI models, cryptography and security).

Figure 1: The six layer architecture for Web3.0 services

We highlight three such key questions: (i) Proof of Inference - where the proof of computation of a specific (deep learning) AI model can be verified; (ii) Proof of ownership, fine-tuning and watermarking - where the proof of downstream modification to an AI model can be verified; (iii) Proof of service delivery - where the proof of the delivery of an AI service can be verified at customizable granularities. These dispute resolutions naturally feed into a reputation system (leading to positive incentives for salutary behavior) or cryptoeconomic security via slashing (negative incentives; see the next layer). This new research, outlined in detail in this paper, is joint work between multiple universities (Princeton University, University of Illinois at Urbana-Champaign, Tsinghua University, HKUST) and two blockchain startups, Witness Chain and Eigen Layer. 5. **Economic layer.** So far, the transactions can be handled purely via fiat without the need for a token. This layer explores the benefits of having a token to incentivize participants, in both the transient and long-term stages, and the corresponding economic benefits therein; it builds on the Eigen Layer integration described below. 6. **Marketplace.** Compositional AI services, in a single atomic transaction, are naturally enabled. The common data shared on the blockchain leads to the creation of a decentralized marketplace for AI services. Supply and demand allow the efficient discovery of prices. This layer is optional in the first version.

## 2 Architecture of Decentralized AI Service

### Requirements

We now describe a specific architecture based on the general six layer architecture outlined in the last section, making SAKSHI concrete. Our decentralized AI service is designed to enable an open marketplace for AI models where any user can access inference service offered by multiple, untrusted AI service suppliers. Our goal is to ensure that the user is guaranteed a good quality of service and the suppliers get a fair payment for their service. There are several challenges that can hinder bootstrapping and growth of such a decentralized service: 1. Individual suppliers may not be able to attract enough clients; 2. The supplier may not apply a good model and may return low quality results; 3. The client may not pay after getting the service. Each of these challenges is addressed by our decentralized AI service model: 1. We allow an aggregator to collectively offer service on behalf of multiple suppliers. The aggregator and suppliers engage in an SLA implemented as a smart contract to ensure that each gets a fair share of the revenue. 2.
We have a proof system for the quality of AI services to ensure that suppliers provide the promised quality of service. The proof is implemented through a challenge-response setup executed using a decentralized pool of challenger nodes. 3. We have smart contracts and payment channels to implement a scalable and reliable payment service for the suppliers. This is supported by an objective dispute resolution mechanism to ensure that suppliers can get paid if they deliver service.

### The six layer architecture with Witness Chain

These functionalities of SAKSHI are enabled using the architecture in Figure 2. At the top is the marketplace, a decentralized two-sided platform for buying and selling AI services. A client (user) comes to our marketplace and places an order to access inference service from an aggregator. Both agree on an SLA which contains terms for quality of service and payments. Next comes the service layer, which provides the APIs for clients to make inference requests to the aggregators. This request is appropriately passed to a matching supplier server using a router deployed as a part of the control layer. Both the service and control layers are reminiscent of standard web 2.0 services with multiple servers, with the caveat that the supplier servers can now be hosted by different entities with their own business incentives and without any pre-existing reputation.

Figure 2: SAKSHI - Decentralized AI service architecture

These servers are bound to an SLA between them and the aggregator. All the SLAs that govern the service-payment rules between different parties are deployed as smart contracts as a part of the transaction layer, a decentralization middleware provided by Witness Chain [29]. The Witness Chain transaction layer not only hosts and provides interfaces for the SLA smart contracts, but also provides state channels to maintain the payment and service state for the interacting client, aggregator and supplier. Furthermore, it provides a dispute resolution framework to ensure that the client completes the payment after availing the service. Finally, a proof layer deploys an appropriate Proof of Inference to ensure that the suppliers are using the models agreed upon in the SLA. The challenge and verification for this proof are executed by a pool of challengers, Witnesses, provided by Witness Chain. These proofs interact with the transaction layer to ensure the service quality promised in the SLA. The Witness Chain challenger nodes executing these proofs are incentivized by Witness Chain using a part of the service payment. Witness Chain, in turn, provides a programmable layer for choosing the challenger nodes, which can be used to specify how decentralized the challenger pool should be and how well-provisioned each challenger node needs to be. A detailed description of each layer is provided in Section 3; the interactions discussed above are depicted at a high level in Figure 3 below.

### The economic layer with Eigen Layer

All entities in the above ecosystem are incentivized to do their job fairly because of the economics underlying the SLA and the incentive system for the challengers. Often, each new blockchain ecosystem launches its own token to provide this cryptoeconomic security. However, this new token may not gain the necessary volume and spread to enforce reasonable security in the early stages, resulting in a failure to bootstrap the ecosystem.
This problem was recently solved by Eigen Layer [30], which provides a framework for using Ethereum cryptoeconomic security by engaging Ethereum validators. Witness Chain integrates with Eigen Layer and uses Eigen Layer operators as challengers to extend Ethereum security to the decentralized AI marketplace. The challengers running the Proof of Inference, the ultimate root of trust in service quality, would have staked/restaked ETH using Eigen Layer. Witness Chain deploys an additional proof of custody [29] to ensure that these challengers are being diligent in their job, lest their stake be slashed. Putting the restaking framework of Eigen Layer together with the proof of diligence/custody by Witness Chain provides a comprehensive economic security layer for SAKSHI.

Figure 3: Various steps in using SAKSHI

## 3 Detailed Description of Each Layer

### Service layer

The service layer enables the infrastructure for ML inference queries and is responsible for committing service information to the proof layer. This layer is equivalent to a Web2 server-client architecture with some modifications to support the proof framework. An instantiation of this layer creates a connection between a client and a server to exchange data and makes the server's compute available through agreed-upon Inference APIs. The service layer works in conjunction with the other layers in the infrastructure as depicted in Figure 4 and described below.

Figure 4: Service Layer overview

**Server Assignment:** The client requests the control layer to assign a server for an AI model, and the control layer notifies the client of the server's ID and address. It also notifies the server of an incoming connection from the client. **Service exchange:** The client establishes a connection with the server using the address provided by the control layer. Both server and client verify through the transaction layer whether an SLA path exists between them through the common aggregator; if such a path exists, both parties implicitly agree on the trade. The client sends inference requests using the server's API endpoint; the client signs the request for use in dispute resolution if the need arises. The server processes the requests and sends the output data back to the client as the response; the server might submit a commitment to the delivered response on a DA layer at a later stage if the need arises for dispute resolution. For each unit of inference served - a single API request - the server expects a micropayment as dictated by its SLA. A request is made to the transaction layer, which then sends payments from the client to the aggregator and from the aggregator to the server. The server proceeds to serve the subsequent request from the client only if the payment for the previous request is processed. **Service dispute witnesses:** The data exchanged in the service layer is used as a witness in case a payment dispute arises, such as a client not paying for the AI inference service delivered. The signed inference requests, the output data committed to a DA layer, and the previously exchanged micropayment will be used for dispute resolution, as discussed in detail in the following sections on the Transaction and Proof layers.

### Control Layer

The control layer is responsible for matching clients and servers. This layer consists of a set of routers; each router maintains the state of all servers subscribed to it.
A router performs load balancing by allocating client requests to servers so as to optimize cost, measured in latency, compute cost, and compliance with SLAs. Servers can subscribe to a router of their choice, and clients can select a router of their choice. The control layer works in conjunction with the other layers as depicted in Figure 5 and described below.

Figure 5: Control layer overview

**Server state maintenance:** The router maintains a server network state consisting of the following non-exhaustive set of variables:
* Server model capacity: The set of AI models that the server can compute inference on
* Server hardware capacity: The compute capacity of each server
* Server request load: The number of clients the server is currently connected to at the service layer
* Server location: Verified server location from the proof layer

Some of these variables require the router to trust the server's claims - these will be used as soft constraints in routing - whereas other variables, such as location, are verified through the proof layer and can be used for hard constraints such as geo-restricting the inference compute. **SLA state maintenance:** The router maintains the state of SLAs signed at the transaction layer between clients and aggregators and between aggregators and servers, so that it can match clients to servers that share a common aggregator. The router watches the transaction layer contracts for events to register or de-register SLAs. **Client-server matching:** The client submits a request specifying the type of server it would like to be matched to - this request consists of parameters such as model id, location boundary, server uptime, etc. The router runs a matching logic to select a server best suited for that model at that time by utilizing the server state and the SLA state. The router then notifies the service layer to establish a connection between the client and the server, and the transaction layer to anticipate payments through their common aggregator. Note on fairness: a malicious router can unfairly route requests, leading to a loss in revenue for some servers; if a server sees such behavior, it will migrate to another router that provides better revenue by providing fair routing. This market dynamic facilitates fairness in routing.

### Transaction Layer

The transaction layer is responsible for payment to servers and intermediaries for delivering their service.

#### 3.3.1 Necessity of an integrated transaction layer

Decentralized platforms generate supply by incentivizing and compensating an extensive network of parties - termed suppliers. The platform can be considered a marketplace for the service supply chain, with service flowing from suppliers (servers) to intermediaries and finally to consumers, and compensation flowing the other way. A compensation system is, therefore, a critical part of a decentralized service-oriented platform. Compensation for providing services is already an integral part of existing centralized platforms such as Uber, AirBnB, and Amazon; however, the billing systems used for their decentralized counterparts need to be composable with the trustless and programmable service framework that decentralized platforms enable. Decentralized platforms need the billing system to support automated smart contract-initiated dispute resolution and high-speed dispersion of funds, as we will see next. The transaction layer incorporates the web3 equivalent of a billing system.
The transaction layer ties the billing of a service to a Service Level Agreement (SLA) that codifies the terms of service and payment, and ensures that metering for the SLA is consistent with the service delivered.

#### 3.3.2 Scalability solutions

Decentralized AI platforms cannot rely on the assumption of trust between a server and a client, since either party may be too small to be bound by the principles of reputation maintenance or legal agreements. Thus, they need to be constantly in consensus about the amount of inference service delivered and the payment for that service. A requirement for achieving this consensus is that it must be achieved per delivery of an inference service unit - a query. All parties involved in service delivery must agree on the service delivered and settle payment for that service at frequent intervals. This requirement necessitates a high-throughput, low-latency payment system. The consensus literature is rich in solutions to scale payments, ranging from sharding, rollups, and sidechains to payment channels. Our payment system should ideally satisfy the following properties: * High throughput of payments * Low latency between payment initiation and confirmation * Throughput that scales with the number of supply- or demand-side participants * Payment per service delivery is not public information and may only be shared between the supplier, consumer, and the chosen intermediaries. State channels and payment channels satisfy all the above requirements. Modeling a decentralized AI platform, we observe that a single client will interact with multiple servers to query different models, and will use different suppliers for inter-session privacy. Maintaining a state channel with each of multiple servers is not scalable; hence we choose a payment channel approach to build the transaction layer's payment system. We will have a payment channel between a client and an aggregator intermediary, and another between the aggregator intermediary and a server, enabled by SLA chaining. Figure 6 depicts the interaction of transaction layer components with other layers, with details on the architecture below.

#### 3.3.3 Architecture overview

The transaction layer encompasses SLAs that any two parties agree on, an SLA manager that converts service measurements to payments using the SLA, SLA clients running on the machines of both parties fetching data from the measurement gateway, and a blockchain wrapper for posting transactions. These components are described in detail below: **Service contracts:** Service contracts consist of two components: an SLA that both transacting parties agree on and a unidirectional payment channel with funds flowing from the service consumer to the supplier. For the AI platform there exist two consumer-supplier pairs: (i) Client - Aggregator and (ii) Aggregator - Server. The SLA is codified based on an SLA4OpenAPI standard [31] and maps service usage to a payment. SLAs for AI applications map (model type, input size, output size) to a token payment amount. The unidirectional payment channel is set up with an escrow from the consuming party to the supplying party and sets the terms of delegation of payment keys to an intermediary SLA manager. **SLA manager:** End clients are given a codebase to run that signs micropayments, or they can delegate this to an application running on the cloud: the SLA manager.
**SLA manager:** End clients either run the codebase that signs micropayments themselves or delegate this task to an application running in the cloud: the SLA manager. The SLA manager receives signed measurements from the consumer's and supplier's SLA clients and converts them to an appropriate payment amount by signing a micropayment and sending funds on the payment channel on behalf of the consumer.

**SLA client and measurement gateway:** The SLA client and measurement gateway are components that run on the end devices of the consumer and supplier. The measurement gateway interprets the service messages and converts them into service units; for AI applications, these are the model requested, input size, and output size. The SLA client fetches this information from the measurement gateway, signs it with the key codified in the service contract, and sends it to the SLA manager; optionally, the SLA client (on the consumer end) can convert the measurement to a micropayment itself and forward it to the supplier.

Figure 6: Transaction layer overview

**Blockchain wrapper:** This component runs on the SLA manager and the SLA client. It is responsible for broadcasting and listening to on-chain transactions such as payment channel opening, termination, and dispute messages. The blockchain wrapper is compatible with multiple blockchains such as Ethereum, Polygon, Solana, and all EVM-compatible rollups.

#### 3.3.4 Dispute-compatibility

SAKSHI utilizes a post-service payment model: payment disputes can emerge when a supplier claims non-receipt of payment for a service unit (a single AI inference). The associated micropayment can serve as a proof of payment to resolve such disputes. Micropayments in unidirectional payment channels typically consist of a signed commitment of the total payable amount. To render these payment channels dispute-compatible, we augment them with additional parameters. First, the micropayment should include a unique 'requestID' that corresponds to the disputed inference. Second, it should contain the hash of the preceding micropayment, which can be validated using a nonce - a counter incremented with each successive micropayment. To resolve a payment dispute raised by the server, the payer can commit the associated micropayment. Additionally, the preceding micropayment must also be committed in order to calculate the amount payable for the disputed service unit. Depending on who is deemed to be correct, the dispute can be settled on-chain from the existing balance in the payment channel. Our dispute resolution protocol also addresses other scenarios, such as disputes raised by a malicious server that never provided the service, and inconsistent micropayment commitments. Figure 7 depicts an example flow of utilizing payment channel commitments for service dispute resolution; a sketch of the augmented micropayment structure follows below.

Figure 7: Utilizing transaction layer payments for service dispute resolution
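The following minimal sketch illustrates the dispute-compatible micropayment structure described above: a per-request cumulative commitment chained by hashes and a nonce. Signatures and on-chain settlement are omitted; the field names and helper functions are hypothetical:

```python
import hashlib
import json
from typing import Optional

def _digest(mp: dict) -> str:
    """Canonical hash of a micropayment (stand-in for a signed commitment)."""
    return hashlib.sha256(json.dumps(mp, sort_keys=True).encode()).hexdigest()

def micropayment(prev: Optional[dict], request_id: str, cumulative: int) -> dict:
    """Next micropayment in the chain: unique requestID, incrementing nonce,
    hash of the predecessor, and the cumulative amount payable so far."""
    return {
        "request_id": request_id,
        "nonce": prev["nonce"] + 1 if prev else 0,
        "prev_hash": _digest(prev) if prev else "0" * 64,
        "cumulative": cumulative,
        # A real channel update would also carry the payer's signature.
    }

def disputed_amount(mp: dict, prev: dict) -> int:
    """On-chain dispute check: validate the chain link, then price the
    disputed service unit as the difference of cumulative amounts."""
    assert mp["prev_hash"] == _digest(prev)
    assert mp["nonce"] == prev["nonce"] + 1
    return mp["cumulative"] - prev["cumulative"]

m0 = micropayment(None, "req-001", 778)
m1 = micropayment(m0, "req-002", 1556)
assert disputed_amount(m1, m0) == 778  # tokens owed for the disputed query
```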
### Proof Layer

The proof layer, operating outside the data and transaction paths, provides a way to resolve various disputes in SAKSHI, utilizing blockchains as an immutable and trusted medium to read and write service states. A variety of disputes can arise in the AI service, and "proof" systems provide cryptographic resolution mechanisms to address the corresponding issues. In this paper, we focus on two categories of proofs, each responding to a different type of dispute:

* Proof of Inference, a proof of correct computation on a prescribed (and open) AI model, mediates disputes about correct inference;
* Proof of Model-ownership, a proof of how closely two AI models are related to each other and whether one is a clone or a fine-tuned version of the other, mediates potential disputes related to the intellectual property held by the owner of an AI model.

Figure 8 depicts the interaction of the dispute resolution contract in the proof layer with the rest of the platform layers. A detailed description of the individual proofs follows.

Figure 8: Proof layer overview

#### 3.4.1 Proof of Inference

A crucial aspect of decentralized inference platforms is the presence of incentives that encourage honest participation in the protocol while discouraging malicious actors. An essential component of this incentive design is addressing the problem of provably verifying computations executed by untrusted servers. Various design choices are available to enable such a proof of inference, with several emerging research directions. One line of research involves the application of zero-knowledge proofs (ZKPs) to verify AI model execution [32]. However, this approach is extremely computationally intensive, necessitating concessions such as quantization, which leads to lower accuracy. Furthermore, generating ZKPs for modern, large-scale generative AI models is currently impractical.

An alternative strategy is to adopt an optimistic approach. In this scheme, the server commits the hash of the generated output, and the system assumes the off-chain inference to be accurate. If a participant (a "challenger") doubts the inference's correctness, they can contest its validity by submitting a fraud proof. This proof can be generated using a verification oracle that can re-run the model and determine the accuracy of the server's or challenger's claim. However, since these oracle nodes may have limited computational capabilities, recomputing the entire neural network forward pass is prohibitively expensive and inefficient. To address this issue, we propose a method inspired by the bisection scheme employed in the optimistic rollup Arbitrum [33]. A key observation is that AI models can be viewed as a sequence of functions, such as layers in a neural network:

\[f(x)=y\quad\rightarrow\quad f_{n}(f_{n-1}(f_{n-2}(\dots f_{2}(f_{1}(x))\dots)))=y\]

When there is a discrepancy between the outputs of a server and a challenger, we can employ an interactive bisection scheme to identify a single function: the first layer in the AI model where the outputs of the two parties differ. With this system, oracle nodes only need to compute and verify a single layer of the network, significantly reducing costs and making the verification of extremely large models feasible. Deterministic AI inference is a prerequisite for such schemes; it is attainable by fixing the random state.

We illustrate our ModelBisection algorithm in Figure 9; it identifies the earliest layer of the AI model where the inputs align for both parties but the resulting outputs diverge, while minimizing the number of interactive steps involved. In the case of a sequential model (left), one can use a form of binary search: if the output of a queried layer (typically the midpoint) is inconsistent between the parties, we recursively bisect the first half of the node sequence; otherwise, we eliminate the first half and recursively bisect the second half. Each bisection step eliminates half of the remaining candidates for the faulty layer. After a logarithmic number of iterations, we locate a layer whose input is consistent, yet the parties produce differing outputs; a sketch of this sequential case is given below.

Figure 9: Model bisection
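A minimal sketch of the sequential case, assuming divergence persists once introduced (which deterministic inference with a fixed random state guarantees); `outputs_agree` is a hypothetical oracle that queries both parties for the committed output of a layer:

```python
def model_bisect_sequential(num_layers, outputs_agree):
    """Binary search for the first layer whose outputs diverge between the
    server and the challenger, using O(log n) interactive queries.

    outputs_agree(i) asks both parties for the output of layer i and
    returns True iff their committed outputs match.
    """
    lo, hi = 0, num_layers - 1  # invariant: first divergent layer is in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if outputs_agree(mid):
            lo = mid + 1  # divergence starts strictly after layer mid
        else:
            hi = mid      # divergence starts at layer mid or earlier
    return lo  # the oracle now recomputes only this single layer

# Toy run: the parties agree on layers 0-4 and diverge from layer 5 onward.
assert model_bisect_sequential(10, lambda i: i < 5) == 5
```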
However, the computations within an AI model are not simply sequential but rather form a Directed Acyclic Graph (DAG) structure. Consequently, the bisection mechanism used for sequential networks cannot be directly applied to such models. We demonstrate our approach, _ModelBisection_, on an Inception block of GoogLeNet [34], as depicted in Figure 9 (right). Suppose we select the node \(n_{1}=L_{2.2}\) in the DAG for output verification. Both parties compute and share the intermediate output of layer \(L_{2.2}\). If the outputs are equal, we prune all ancestor nodes of this node in the DAG from consideration (as their outputs would have to be consistent). If, however, the outputs differ, we eliminate all non-ancestor nodes of this node in the DAG (since one of the outputs among its ancestors must be inconsistent). We keep track of the identified consistent and inconsistent nodes, and continue this process until we reach a single layer where the inputs are consistent between the parties but the outputs differ. We employ a greedy strategy to select the node in the digraph such that it is split in the most balanced way: we choose the node \(x\) that maximizes \(\min\{|x|,n-|x|\}\), where \(|x|\) is the number of ancestors of node \(x\) and \(n\) is the total number of nodes in the current digraph. This score can be interpreted as the least number of nodes that would be eliminated as potential candidates for the first point of divergence when \(x\) is queried, thus minimizing the number of ModelBisection rounds. It is noteworthy that even in large foundation models, the ModelBisection approach can pinpoint a single layer of divergence in a very small number of iterations; for example, in the case of the 13 billion parameter LLaMA model [15], fewer than ten iterations suffice. Finally, we observe that the bisection subroutine bears similarity to _git bisect_, which aids in identifying the first faulty entry in the DAG of commits and merges. A sketch of the DAG case is given below.
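A minimal sketch of the DAG case under the same determinism assumption. The DAG is given as a hypothetical parent map; as one concrete design choice (not prescribed by the paper), ancestor counts are restricted to the surviving candidate set and query results are cached so that no node is queried twice:

```python
def ancestors(parents, x):
    """All strict ancestors of node x in a DAG given as {node: [parents]}."""
    seen, stack = set(), list(parents[x])
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents[p])
    return seen

def model_bisect_dag(parents, outputs_agree):
    """Locate the first divergent node: query the fresh candidate that splits
    the remaining set most evenly, i.e. maximizes min(|anc(x)|, n - |anc(x)|)."""
    candidates = set(parents)
    queried = {}  # cache: node -> whether both parties' outputs agreed there
    while len(candidates) > 1:
        n = len(candidates)
        anc = {x: ancestors(parents, x) & candidates for x in candidates}
        fresh = [x for x in sorted(candidates) if x not in queried]
        x = max(fresh, key=lambda v: min(len(anc[v]), n - len(anc[v])))
        queried[x] = outputs_agree(x)
        if queried[x]:
            candidates -= anc[x] | {x}  # x and all its ancestors are consistent
        else:
            candidates &= anc[x] | {x}  # divergence is at x or among ancestors
    return candidates.pop()  # inputs agree here, but outputs differ

# Toy DAG: a feeds b and c; b and c feed d. Divergence first appears at c.
parents = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
assert model_bisect_dag(parents, lambda x: x in {"a", "b"}) == "c"
```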
#### 3.4.2 Proof of Model ownership

A decentralized AI marketplace comprises three main entities: model owners who collect datasets and train or finetune AI models, compute-rich servers, and end-users. As opposed to current open-source model hosting solutions, decentralized marketplaces can incentivize model creators by rewarding them with a percentage of the inference fee when their models are utilized. However, such an incentive design is susceptible to model copying attacks, where a malicious actor can copy, slightly modify, and profit from the hosted models at the cost of the model creators. Therefore, a robust mechanism for model ownership resolution is a crucial prerequisite for decentralized AI marketplaces.

One promising approach to a proof of model ownership is to embed a watermark in the neural network during the training phase. To be effective, a DNN watermarking scheme must fulfill several criteria. It should be functionality-preserving, meaning the watermark embedding must not impact model performance. The watermark must be robust, i.e., extractable from any transformed model (e.g., after weight scaling or finetuning). A watermarked model should remain indistinguishable from a non-watermarked model to potential adversaries. Moreover, a watermark must be resistant to ambiguity attacks - false claims of the existence of a different watermark.

Various watermarking schemes have been proposed in the research literature. Parameter encoding methods [35, 36, 37] integrate a watermark directly into the model's parameters. For classification models, an alternative method is backdooring: assigning incorrect labels to examples in a trigger set, which can then serve as a watermark [38, 39]. Additionally, task-specific and model-specific watermarking methods have been proposed [40, 41, 42, 43]. Nonetheless, the robustness of existing methods against model copying has been questioned by recent attacks [44, 45, 46], highlighting an unresolved research challenge. Notably, in most watermark extraction algorithms, information about the watermark location or the trigger examples is revealed during the verification process. This knowledge facilitates watermark removal and ambiguity attacks. Therefore, in our system a trusted judge is required to resolve model ownership disputes. Model creators must embed watermarks in their models and post a commitment to the watermark on the blockchain. The judge must be able to verify the existence of the watermark using the extraction algorithm, which may be task- and model-specific. Such a proof of model ownership can make it infeasible to profit from stolen models within the decentralized marketplace. However, it does not prevent an adversary from copying a model and using it outside this system (e.g., via a black-box API). Such acts can be deterred by licensing the model's use only in this marketplace and resorting to legal means if necessary.

### Summary

Proofs of inference and ownership are two examples of a broader family of protocols providing Byzantine resistance in SAKSHI. Even here, we have worked more to describe the problems than the solutions - a call to arms to the scientific community. As the platform evolves and participation rises, the attack space could also expand, opening the door for new and different kinds of proof systems (e.g., proof of custody; proof of infrastructure hosting the AI models).